repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 23,530 | closed | Incorrect handling of EOS tokens in DataCollatorForLanguageModeling when pad_token is set to eos_token | ### System Info
Doesn't seem to be version specific, but:
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger I think?
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using `DataCollatorForLanguageModeling` for CLM with `pad_token` set to `eos_token`, as shown [here](https://huggingface.co/docs/transformers/tasks/language_modeling), all EOS tokens in the labels are overwritten with -100, instead of just the ones used for padding.
In colab [here](https://colab.research.google.com/drive/13JsslKctbc9JEWbsJEJ6xru4zm6QxmLt?usp=sharing).
1. Prepare sentences that should be included in a single batch with explicit EOS tokens. (here for GPT-2)
```python
sentences = ["Short sentence<|endoftext|>", "Now that's a longer sentence<|endoftext|>"]
```
2. Tokenize them.
```python
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenized_sentences = [tokenizer(sentence) for sentence in sentences]
```
3. Collate them with `DataCollatorForLanguageModeling`.
```python
tokenizer.pad_token = tokenizer.eos_token # 50256
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
batch = data_collator(tokenized_sentences)
```
Batch is:
```python
{'input_ids': tensor([[16438, 6827, 50256, 50256, 50256, 50256, 50256],
[3844, 326, 338, 257, 2392, 6827, 50256]]),
'attention_mask': tensor([[1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1]]),
'labels': tensor([[16438, 6827, -100, -100, -100, -100, -100],
[3844, 326, 338, 257, 2392, 6827, -100]])}
```
Notice how even though `attention_mask` properly has `1` for the EOS tokens, they still got set to `-100`.
### Expected behavior
I'd expect that only the EOS tokens added as padding should be set to -100 in the labels, resulting in the following batch:
```python
{'input_ids': tensor([[16438, 6827, 50256, 50256, 50256, 50256, 50256],
[3844, 326, 338, 257, 2392, 6827, 50256]]),
'attention_mask': tensor([[1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1]]),
'labels': tensor([[16438, 6827, 50256, -100, -100, -100, -100],
[3844, 326, 338, 257, 2392, 6827, 50256]])}
```
(notice the `50256` in labels)
I wanted to fine-tune GPT-2 for a rather specific use case with very short texts, so I added EOS tokens to my samples, and it took me quite a bit of time to spot the issue that was causing the model to fail at generating short texts. After I spotted it, as a workaround I just set the labels manually and used the standard collator instead, which does what I wanted.
I feel like this is a simple change [here](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/data/data_collator.py#L738), to use the attention mask instead of `[labels == self.tokenizer.pad_token_id]`.
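To make the intent concrete, here is a rough user-side sketch of the behaviour I'm after (not the actual collator code; it assumes the tokenizer returns an attention mask):
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token


class DataCollatorKeepRealEos(DataCollatorForLanguageModeling):
    def torch_call(self, examples):
        batch = super().torch_call(examples)
        # restore label positions that were masked only because they share an id
        # with the pad token, but are real (attended) tokens according to the mask
        restore = batch["attention_mask"].bool() & (batch["labels"] == -100)
        batch["labels"][restore] = batch["input_ids"][restore]
        return batch


data_collator = DataCollatorKeepRealEos(tokenizer=tokenizer, mlm=False)
```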
I'd like to make a PR with that, but I just want to make sure that this is indeed a bug, and not expected behaviour. | 05-20-2023 14:50:29 | 05-20-2023 14:50:29 | You can do the change on your local fork for your example (or you could use a different token ID for the EOS and padding), but this is the right behavior in `DataCollatorForLanguageModeling`: labels corresponding to the pad token ID should be ignored in the loss computation and are thus set to -100.<|||||>Makes sense, but then perhaps that part of the course - setting `pad_token ` to `eos_token` - might be misleading? <|||||>Ran into a similar issue as @PatrykNeubauer. Although it might be the "right behavior", it is pretty easy to make a mistake if you want to have `<|endoftext|>` be a valid predicted token at the end of your text and also follow the suggestion of setting `pad_token = eos_token`.
I think there might be a few fairly simple, valid approaches to solve this problem:
1. Do what @PatrykNeubauer suggested about using the attention mask. Presumably, if someone passes a pad token with a non-zero attention mask they did this themselves on purpose. I believe this wouldn't change any behavior when following the standard workflow.
2. If `labels` is already defined in the `examples` passed to `torch_call` use these labels for the non-padded `input_ids` in `examples`. Then for each padding token added to `input_ids` add -100 to the labels to pad in the same way.
3. Add an `add_eos_token` argument to `DataCollatorForLanguageModeling` which will add an `eos_token` to the end of final padded output with the correct eos token label. This way we can keep padding the same way but still allow `eos_token` to be added easily and treated correctly. I think outside of this current issue it would also be a nice addition to have an easy way to automatically add the eos token.
@sgugger as far as I can tell there isn't a great way to (1) do dynamic padding and (2) use a causal language model with an end-of-sequence token. Generally causal models don't have a padding token so we need to use another token. As pretty much every model has an end-of-sequence token it is a natural choice, but then we run into the above issue. I also tried using `DataCollatorWithPadding` but it seems to have an issue with including `labels`, which results in an error (I can give more details if you like).
I think the best approach that currently works is what you suggested, setting pad to a token other than eos, but this seems sort of hacky as we can't assume some piece of text will be a single token for all causal language model tokenizers. This method is also not mentioned in any tutorial or documentation as far as I can tell which makes it seem like this isn't often used and perhaps is thought to be inelegant. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,525 | closed | In MlflowCallback, cannot reattach to an existing run | # In Documentation
The [doc](https://huggingface.co/docs/transformers/main_classes/callback#transformers.integrations.MLflowCallback) says we can reattach to an existing run by setting the **MLFLOW_RUN_ID** environment variable.
# In Code
But the [code](https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/integrations.py#L990) seems to start a new MLflow run with **args.run_name** instead.
# ASIS
- code at https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/integrations.py#L990
```
self._ml_flow.start_run(run_name=args.run_name, nested=self._nested_run)
```
# TOBE
```
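# self._run_id is assumed to be read from the MLFLOW_RUN_ID environment variable during the callback's setup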
self._ml_flow.start_run(run_id=self._run_id, nested=self._nested_run)
``` | 05-20-2023 13:51:47 | 05-20-2023 13:51:47 | Hi @ykihong0, thanks for reporting this issue.
The integrations are community added and maintained by their authors - Pinging @orieg who I believe added the env var logic in #17130 :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,516 | closed | KeyphraseExtractionPipeline - Postprocessor - TypeError: postprocess() got an unexpected keyword argument 'all_outputs' | ### System Info
transformers version: 4.20.1
Platform: (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64
Python version: 3.9.16
PyTorch version (GPU?): 2.0.0+cu118
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: No
Using distributed or parallel set-up in script?: No
### Who can help?
I am trying to use a keyphrase extractor as described here - https://huggingface.co/ml6team/keyphrase-extraction-distilbert-inspec
and I am getting the following error. I am assuming the "all_outputs" parameter is deprecated; the question is, what should I replace it with?
@sgugger @luccailliau
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Copy and paste the code from [model](https://huggingface.co/ml6team/keyphrase-extraction-distilbert-inspec) into a Python file
Here is the code.
```
from transformers import (
    TokenClassificationPipeline,
    AutoModelForTokenClassification,
    AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np


# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
    def __init__(self, model, *args, **kwargs):
        super().__init__(
            model=AutoModelForTokenClassification.from_pretrained(model),
            tokenizer=AutoTokenizer.from_pretrained(model),
            *args,
            **kwargs
        )

    def postprocess(self, all_outputs):
        results = super().postprocess(
            all_outputs=all_outputs,
            aggregation_strategy=AggregationStrategy.FIRST,
        )
        return np.unique([result.get("word").strip() for result in results])
# Load pipeline
model_name = "ml6team/keyphrase-extraction-distilbert-inspec"
extractor = KeyphraseExtractionPipeline(model=model_name)
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = extractor(text)
print(keyphrases)
```
When running it, I am getting the following error
```
(hugface) eboraks@mittenwood:~/Projects/studynotes/notebooks$ python keyphrase.py
Traceback (most recent call last):
File "/home/eboraks/Projects/studynotes/notebooks/keyphrase.py", line 53, in <module>
keyphrases = extractor(text)
File "/home/eboraks/anaconda3/envs/hugface/lib/python3.9/site-packages/transformers/pipelines/token_classification.py", line 191, in __call__
return super().__call__(inputs, **kwargs)
File "/home/eboraks/anaconda3/envs/hugface/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1043, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/eboraks/anaconda3/envs/hugface/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1051, in run_single
outputs = self.postprocess(model_outputs, **postprocess_params)
File "/home/eboraks/Projects/studynotes/notebooks/keyphrase.py", line 22, in postprocess
results = super().postprocess(
TypeError: postprocess() got an unexpected keyword argument 'all_outputs'
(hugface) eboraks@mittenwood:~/Projects/studynotes/notebooks$
```
### Expected behavior
# Output
['artificial intelligence' 'classical machine learning' 'deep learning'
'keyphrase extraction' 'linguistic features' 'statistical'
'text analysis'] | 05-20-2023 13:23:58 | 05-20-2023 13:23:58 | Hello @eboraks,
Since the release of transformers 4.28.0, the `postprocess()` method has changed to handle text of unlimited length. Therefore, `all_outputs` can only be used if `transformers>=4.28.0`. I recommend updating transformers since you are using `transformers==4.20.1`.
If you can't upgrade to at least that specific version, renaming `all_outputs` to `model_outputs` should solve your problem:
```
def postprocess(self, model_outputs):
    results = super().postprocess(
        model_outputs=model_outputs,
        aggregation_strategy=AggregationStrategy.FIRST,
    )
    return np.unique([result.get("word").strip() for result in results])
```
Have a good day<|||||>Thank you @luccailliau upgrading to transformers 4.28 solved the issue. |
transformers | 23,485 | closed | Fix `tests/repo_utils/test_get_test_info.py` | # What does this PR do?
3 tests break on `main` after #23153 (see [here](https://app.circleci.com/pipelines/github/huggingface/transformers/64922/workflows/d53f3351-a1d1-4df5-9c8b-28c45fff4f09/jobs/805142)), but can't blame @younesbelkada as the tests are not triggered on PR CI, neither after being merged. But on nightly CircleCI run, failures are detected. | 05-20-2023 04:47:24 | 05-20-2023 04:47:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23485). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks so much for taking care of this @ydshieh ๐ฅ ! |
transformers | 23,484 | closed | Add LlamaIndex to awesome-transformers.md | # What does this PR do?
This PR adds `LlamaIndex` to `awesome-transformers.md`. `LlamaIndex` is a project that provides a central interface to connect your LLMs with external data.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-20-2023 01:38:13 | 05-20-2023 01:38:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Interestingly, you have `transformers` as an optional dependency when the Python version is above 3.9. It seems to be used quite a bit for tokenizing/streaming/image captioning etc.
If you'd rather not have it as a main dependency, I'd love to hear why so that we may improve! We've been trying to keep the package on the lighter side of dependencies so that it wasn't too heavy to add.
Thank you! |
transformers | 23,483 | closed | changing the requirements to a cpu torch version that works | requirements.txt is changed because the pinned versions would have produced errors; more info in the issue below.
Fixes https://github.com/huggingface/transformers/issues/23418
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
| 05-19-2023 23:12:16 | 05-19-2023 23:12:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,482 | closed | [Bart] Bart model families' embedding shape? | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm implementing a finetuned Bart model for summarization, so I'm deciding between using "facebook/bart-large" and "facebook/bart-large-cnn". But when I looked into the layers of both models, I found that the shapes of their embedding layers are different. Is this a special trick?
Code to reproduce:
```python
from transformers import BartTokenizer, BartModel, BartForConditionalGeneration
BARTmodel = BartModel.from_pretrained('facebook/bart-large')
CGmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
BARTmodel.shared
-----------
Embedding(50265, 1024, padding_idx=1)
```
```python
CGmodel.model.shared
-----------
Embedding(50264, 1024, padding_idx=1)
```
### Expected behavior
I expect the embedding layer's dimension are equal. | 05-19-2023 20:59:36 | 05-19-2023 20:59:36 | Hi @com3dian, thanks for raising this issue.
The reason the values are different is because the vocab size in the respective model configurations is different:
* [50265 for `facebook/bart-large`](https://huggingface.co/facebook/bart-large/blob/cb48c1365bd826bd521f650dc2e0940aee54720c/config.json#L71)
* [50264 for `facebook/bart-large-cnn`](https://huggingface.co/facebook/bart-large-cnn/blob/3d224934c6541b2b9147e023c2f6f6fe49bd27e1/config.json#L67)
I'll let @ArthurZucker and @younesbelkada handle whether this is expected :) <|||||>Thanks @amyeroberts !
I found this interesting difference when I was fine-tuning the Bart model for both summarization and other language generation tasks, which will be combined into a complete pipeline. My intention is to utilize the [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn/) model specifically for summarization and the [facebook/bart-large](https://huggingface.co/facebook/bart-large/) model for the other tasks. It would be helpful to identify the missing token in the embedding layer of the [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn/) model to ensure alignment of the embedding sizes.<|||||>Hey!
Both tokenizers have the same length, 50265. The last token is `{"id":50264,"special":true,"content":"<mask>","single_word":false,"lstrip":true,"rstrip":false,"normalized":true}`. The missing token is the `mask`.
Since the model was finetuned for text-generation (summarization), I think this is expected.
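If you really need the two embedding matrices to line up, a quick sketch of one way to do it (resizing adds a randomly initialised row for the missing `<mask>` slot, so only do this if that is acceptable for your use case):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# grows the embedding from 50264 to 50265 (len(tokenizer) includes <mask>)
model.resize_token_embeddings(len(tokenizer))
print(model.model.shared)  # Embedding(50265, 1024, padding_idx=1)
```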
Tell me if this does not answer your question!<|||||>Thanks @ArthurZucker, your reply helps a lot! |
transformers | 23,481 | closed | add use_orig_params to fsdp with torch compile | # What does this PR do?
The FSDP wrapper inside `trainer.py` needs to be initialized with `use_orig_params=True` for FSDP + `torch.compile` to work well together. Therefore, I added some code to check inside the relevant FSDP section if `torch_compile` is set to `True` to then add `use_orig_params` with the corresponding value to FSDP.
Fixes #23341
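The gist of the change, as a standalone sketch (illustrative names only, not the actual `trainer.py` code path):
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def wrap_model(model: torch.nn.Module, torch_compile: bool) -> FSDP:
    kwargs = {}
    if torch_compile:
        # torch.compile traces the original (unflattened) parameters,
        # so FSDP has to keep them around
        kwargs["use_orig_params"] = True
    return FSDP(model, **kwargs)
```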
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
@pacman100 | 05-19-2023 20:08:22 | 05-19-2023 20:08:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23481). All of your documentation changes will be reflected on that endpoint.<|||||>Hello, as the Accelerate now Powers Trainer, please use the accelerate launcher with Trainer for FSDP. That provides support for `use_orig_params` and hence this PR is no longer required.
Please do pip install git+https://github.com/huggingface/transformers and pip install git+https://github.com/huggingface/accelerate
Use Accelerate launcher with Trainer. More info here: [Using Accelerate Launcher with Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer).
Therefore, closing this PR. Thank you for all the effort! |
transformers | 23,480 | open | SpeechT5 cannot read numbers | ### System Info
transformers == 4.29.0
environment = Colab
Python == 3.10.11
tensorflow == 2.12.0
torch == 2.0.1+cu118
torchaudio == 2.0.2+cu118
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Init a Transformer agent
2. Init a text which contains numbers. For example text = "More than 10 people have been killed by Covid."
3. Call the agent for a text-to-speech (SpeechT5). For example, audio_translated = agent.run("Read out loud the text", text=text)
4. Play the generated audio
The audio blanks all the numbers/digits.
I suspect SpeechT5 is behaving wrongly, as the code generated by the agent seems to be correct.
Good luck :)
### Expected behavior
The audio file should contain numbers/digits indicated in the text. | 05-19-2023 18:42:39 | 05-19-2023 18:42:39 | The SpeechT5 tokenizer does not understand numerals. For this to work, the input text should be normalized to "More than ten people ...", with the number spelled out. My guess is that the agent doesn't do this.<|||||>Thanks for the reply. It should not be too difficult to ask the LLM to process the text in order to replace all numbers by their literal equivalents. I will look at the agent code to propose a fix.<|||||>Since the mapping from numbers -> words is deterministic, we could add this as a pre-processing step in the SpeechT5 tokenizer? E.g. the number "150" always gets mapped to "one-hundred and fifty". IMO this is a pretty easy way of guaranteeing that we have our text formatted as the model expects
We do the opposite in Whisper, where we normalise all spoken words to numbers ("one-hundred and fifty" -> "150"), see
https://github.com/huggingface/transformers/blob/fe34486f129d47abc0dddb39a22b24cdbe52dec8/src/transformers/models/whisper/english_normalizer.py#L110<|||||>May be just asking in the prompt of the agent that a pre-processing of input text is needed and should be done by the LLM itself before to be sent to Speech T5. The OpenAI LLM used by the agent by default can do the job, pretty sure about it.<|||||>That would fix the issue of SpeechT5 in the context of using transformers agents, but not as a standalone model! E.g. if we wanted to use SpeechT5 according to the example docs or blog post: https://huggingface.co/blog/speecht5#text-to-speech, then passing a numeric value (e.g. "150") still wouldn't work
My thinking is that we can add a normalisation argument to the `processor`/`tokenizer` that handles this for us:
```python
inputs = processor(text="This is a number, 150", return_tensors="pt", normalize=True)
```
Which under the hood converts the text from:
```
This is a number, 150
```
to
```
This is a number, one-hundred and fifty
```
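For the conversion itself, a minimal sketch (this uses the third-party `num2words` package purely for illustration; the eventual tokenizer code could just as well hand-roll the mapping, i.e. the Whisper normaliser in reverse):
```python
import re

from num2words import num2words  # third-party helper, used here only to illustrate


def normalize_numbers(text: str) -> str:
    # spell out every run of digits, e.g. "150" -> "one hundred and fifty"
    return re.sub(r"\d+", lambda match: num2words(int(match.group())), text)


print(normalize_numbers("This is a number, 150"))
# This is a number, one hundred and fifty
```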
Once we have this, it's trivial to update the transformers agents pipeline to switch on normalisation by default<|||||>Yeap I agree that the agent-only solution is not the optimal one. The processor is certainly a better way to fix it.<|||||>Keeping this open since I think it would be a valuable addition (especially since SpeechT5 is being used in transformers agents) - would you like to have a go at adding such a pre-processing function @jeromemassot? Happy to help you with the integration!<|||||>@sanchit-gandhi you predicted well. I think that it is important to include this pre-processing to bypass the issue of numbers recognition. ๐<|||||>Perfect - do you want to open a PR for this? We can reverse the logic that we use in the Whisper normaliser, where we go from written to numeric (see https://github.com/huggingface/transformers/issues/23480#issuecomment-1557526685)<|||||>I sadly won't have time to undertake this PR myself, but maintain that it's a worthwhile update to the library. If anyone in the community would like to pick-up this issue and submit a PR I'd be more than happy to guide you through the integration process and answer any questions. Much of what's needed to be done is outlined in this comment thread already!<|||||>Hi @sanchit-gandhi, if no one else has taken this up, I would love to fix this issue in a new PR!<|||||>Hey @heytanay - thanks for jumping on here, it's all yours! Feel free to open a PR and tag me - happy to assist with the integration! Think the details of how we can do this are more or less detailed in this thread, but let me know if you have any questions |
transformers | 23,479 | closed | 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) | # What does this PR do?
This PR introduces 4-bit QLoRA to transformers. The main changes are for the bitsandbytes config. Additionally, we added one change to the LLaMA implementation to fix a bug where the data type changes if the layer norms are in 32-bit and the rest is in bf16.
More information about QLoRA from our abstract:
>We develop QLoRA tuning, a method that finetunes by backpropagating gradients through a frozen 4-bit base model into low rank adapters (LoRA). With QLoRA tuning we can finetune 30B/65B parameter models on 24/48GB GPUs while preserving regular 16-bit full finetuning runtime and task performance. We achieve the memory efficiency and quantization precision through a combination of new methods: nested quantization to reduce the average memory footprint from 4.5 to 4.1 bits per parameter, paged optimizers to manage gradient checkpointing memory spikes, and a new data type, 4-bit NormalFloat (NF4), which is information theoretically and empirically optimal for normally distributed weights. To demonstrate the effectiveness and ease of use of QLoRA tuning we finetune more than 1,000 models to create a detailed dissection of instruction following performance across datasets (FLAN, Alpaca, Chip2, SuperNatural Instructions, Chip2, AnthropicHH), models types (LLaMA, T5), and model scales (125M to 65B). A discussion of the results is forthcoming in our paper.
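For reference, loading a base model in 4-bit with the config changes in this PR looks roughly like this (a sketch of the intended usage; the checkpoint name is just an example):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # 4-bit NormalFloat data type
    bnb_4bit_use_double_quant=True,       # nested quantization, ~4.1 bits/param on average
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```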
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger @younesbelkada @sourabh112
| 05-19-2023 18:29:33 | 05-19-2023 18:29:33 | Amazing!
cc @SunMarc for visibility as well!
Will review asap ๐ช <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the review! I created a separate PR for the LLaMA bug: https://github.com/huggingface/transformers/pull/23535
The optimizers are also needed for the 4-bit fine-tuning of LLaMA 30B/65B on a single 24/48 GB GPU: https://github.com/huggingface/transformers/pull/23217
I reverted the names and removed code into other PRs. I added the missing test files. These, however are also renamed. Is the filename important for these?
Let me know if you see any other issues. Thank you, Sylvain!<|||||>@TimDettmers Can the bitsandbytes team provide a corresponding bitsandbytes branch that corresponds to this PR? The "closed_beta" on bitsandbytes is the closest I found but it doesn't appear to be final/rc-quality and contains debug print logs such as
https://github.com/TimDettmers/bitsandbytes/compare/main...closed_beta#diff-4d235c7e595546c6656c229dfa139298ce6602b356c2d0bafcb2352eb2cfae79R222
Without the proper branch/link to bitsandbytes changes, it is very hard to test this. Since this PR is public, the bnb branch should no longer be in closed beta.
Thank you.
<|||||>I think the changes for QLoRA were recently merged in the main branch of bitsandbytes. |
transformers | 23,478 | closed | Fix DeepSpeed stuff in the nightly CI | # What does this PR do?
Similar to #23463, but for the nightly CI (i.e. the nightly version of torch + deepspeed instead of the stable release).
This should be the last piece to make all the CI workflow to run (๐ค) | 05-19-2023 16:56:40 | 05-19-2023 16:56:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,477 | closed | Better TF docstring types | cc @sgugger - this is the PR based on the conversation we had on Slack! I just scanned our TF files and replaced some `Optional[]` and `Union[]` patterns with `|` instead. The doc builder now writes much cleaner docstrings, instead of the `tensorflow.python.framework.ops.Tensor` stuff it was writing before.
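Concretely, the pattern is just the following (a minimal illustration, not an excerpt from a specific model file):
```python
from __future__ import annotations  # annotations become strings, so `X | None` works on older Pythons

import tensorflow as tf
from tensorflow import keras


class ExampleLayer(keras.layers.Layer):
    # previously: input_ids: Optional[tf.Tensor] = None, training: Optional[bool] = False
    def call(self, input_ids: tf.Tensor | None = None, training: bool | None = False) -> tf.Tensor:
        return tf.identity(input_ids)
```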
Some tests are failing because the test files also need `from __future__ import annotations` I think - will make sure everything passes before merging this. | 05-19-2023 16:29:09 | 05-19-2023 16:29:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The tests seem to indicate we use those type annotations for something (auto-detecting the labels would be my guess).<|||||>We use them in one of the TF tests, in a way that causes issues with the `from __future__` import. I think there's an easy workaround for the affected test, though - working on it!<|||||>@sgugger issues have been resolved. What happened was two of the tests were trying to figure out which models were trainable using the type annotations if they were there, which was kind of flaky anyway and broke when we did this. I refactored the relevant code, which enabled some tests that were previously being skipped, and that surfaced a couple of small issues in the tests. The models are all fine, though, and everything passes now! |
transformers | 23,476 | closed | Flyte Callback | ### Feature request
Hi! I am an OSS contributor to [Flyte](https://flyte.org/), a workflow orchestration tool. I am working on a HuggingFace plugin to integrate better with Hugging Face.
I am working on a `FlyteCallback` that will automatically integrate with Flyte's checkpointing, visualizations, and eventually logging!
I was hoping that we could add it to the `integrations.py` similar to other ML Tools, but I wanted to check with you all before making a PR.
### Motivation
A good way to help Flyte users working with Hugging Face and Hugging Face users who end up using Flyte for orchestration.
### Your contribution
I would clean up and extend the callback in this [gist](https://gist.github.com/peridotml/68f376f0f4fd1926fb0746daaeea09f8) and create a PR. | 05-19-2023 16:11:19 | 05-19-2023 16:11:19 | Hi there! You can definitely open a PR. `integrations.py` is open to the community and callbacks defined there are maintained by their authors and not by us. So as long as you're fine getting pinged by users who get a problem with this callback in the future, please share your new callback :-) |
transformers | 23,475 | closed | Fix: Change tensors to integers for torch.dynamo and torch.compile compatibility | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes errors when trying to use PyTorch 2.0 torch.compile().
In the two fixes I suggested, we addressed the following issue:
torch.split() was expecting a list of integers specifying how to split the tensor along a given dimension. However, it received a list of scalar tensors instead. This mismatch was causing the TorchRuntimeError.
In the first instance, the tensor value_spatial_shapes was passed to the function as a list of scalar tensors, which resulted in the error. By converting them to integers using the .item() method, the error was resolved.
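In isolation, the shape of the fix looks like this (illustrative tensors, not the actual model code):
```python
import torch

value = torch.randn(2, 6, 8)                           # (batch, sum of H*W over levels, dim)
value_spatial_shapes = torch.tensor([[1, 2], [2, 2]])

# before: [h * w for h, w in value_spatial_shapes] yields scalar tensors,
# which torch.dynamo/torch.compile rejects when they are passed to split()
split_sizes = [h.item() * w.item() for h, w in value_spatial_shapes]
value_list = value.split(split_sizes, dim=1)
```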
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@amyeroberts @sgugger
| 05-19-2023 15:48:38 | 05-19-2023 15:48:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>You undid your changes, you will need to make them in the file that mas2former copies from ๐
<|||||>Oh bummer! Thank you for your patience. I will try to fix it. |
transformers | 23,474 | closed | graphormer.collating_graphormer.preprocess_item TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe' | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`transformers.models.graphormer.collating_graphormer.preprocess_item` causes `TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'`
when calling `data_processed = dataset.map(preprocess_item, batched=False)` on the ogb-molhiv dataset.
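A minimal reproduction of that call (the hub id for ogb-molhiv is my assumption):
```python
from datasets import load_dataset
from transformers.models.graphormer.collating_graphormer import preprocess_item

# "OGB/ogbg-molhiv" is my guess at the hub id for the ogb-molhiv graphs
dataset = load_dataset("OGB/ogbg-molhiv")
data_processed = dataset.map(preprocess_item, batched=False)
```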
This happens because the data type used in this function is np.int64, while transformers\models\graphormer\algos_graphormer.pyx expects np.int32 or long.
I fixed it by replacing "long" on lines 88 and 89 of algos_graphormer.pyx with np.int64, and replacing every 32 in algos_graphormer.pyx with 64.
But I don't think this is an appropriate approach.
### Expected behavior
how to fix this issue without modifying the library code | 05-19-2023 14:35:14 | 05-19-2023 14:35:14 | cc @clefourrier <|||||>Hi @YueZhengMeng, thank you for reporting this!
I'll check it this week. Btw, using int64s everywhere will make the memory explode considerably faster for large graphs, if you need to change it manually in the mean time, it's likely you should use int32s instead.<|||||>I have not been able to reproduce your bug, but I'm not on a Windows machine.
Could you provide me with a snippet of code and the complete trace of your error?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,473 | closed | Use config to set name and description if not present | # What does this PR do?
This PR makes sure we read the name and description of a downloaded tool in the config to set them in the tool class if they were not set by the user. We also perform a consistency check and override values by using the tool config when they are different, as the tool config should be the source of truth.
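In pseudocode, the consistency logic amounts to the following (illustrative only, not the exact code added to the tools module):
```python
import logging

logger = logging.getLogger(__name__)


def reconcile_with_config(tool, tool_config: dict) -> None:
    """Fill in, or override, the tool's name and description from its downloaded config."""
    for attr in ("name", "description"):
        config_value = tool_config.get(attr)
        current_value = getattr(tool, attr, None)
        if current_value is None:
            setattr(tool, attr, config_value)
        elif config_value is not None and current_value != config_value:
            logger.warning(f"The tool {attr} does not match its config; using the config value.")
            setattr(tool, attr, config_value)
```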
Fixes #23469 | 05-19-2023 14:15:43 | 05-19-2023 14:15:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,472 | closed | TypeError: 'type' object is not subscriptable | ### System Info
**Pre-training wav2vec demo**
Running the demo from https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/README.md gives an error:
File "./run_wav2vec2_pretraining_no_trainer.py", line 783, in <module>
main()
File "./run_wav2vec2_pretraining_no_trainer.py", line 510, in main
vectorized_datasets = raw_datasets.map(
TypeError: 'type' object is not subscriptable
### Who can help?
@sanchit-gandhi
@pacman100
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
just reproducing demo example with provided script and dataset:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/README.md#demo
### Expected behavior
The output should be a wav2vec model pre-trained on the LibriSpeech dataset. | 05-19-2023 14:11:45 | 05-19-2023 14:11:45 | Hi @flckv, thanks for raising this error.
I'm unable to reproduce this error when I run locally on the main branch. Could you share the running environment being used: run `transformers-cli env` in the terminal and copy-paste the output?<|||||>Hi @amyeroberts, thanks for the quick reply.
### The output of transformers-cli env:
```
- `transformers` version: 4.26.1
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
I am running on a cluster with resources:
```
#SBATCH --job-name=ol # Job name
#SBATCH --output=/home/flck/output_.%A.txt # Standard output and error log
#SBATCH --nodes=1 # Run all processes on a single node
#SBATCH --ntasks=1 # Run on a single CPU
#SBATCH --mem=64G # Total RAM to be used
#SBATCH --cpus-per-task=8 # Number of CPU cores
#SBATCH --gres=gpu:3 # Number of GPUs (per node)
#SBATCH -p gpu # Use the gpu partition
#SBATCH --time=12:00:00 # Specify the time needed for your experiment
#SBATCH --qos=gpu-8 # To enable the use of up to 8 GPUs
```
in my .sh file that I run on this cluster has these commands to reproduce the demo :
transformers-cli env
`accelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir="/dev/shm/" --dataset_name="librispeech_asr" --dataset_config_names clean clean --dataset_split_names validation test --model_name_or_path="patrickvonplaten/wav2vec2-base-v2" --output_dir="./wav2vec2-pretrained-demo" --max_train_steps="20000" --num_warmup_steps="32000" --gradient_accumulation_steps="8" --learning_rate="0.005" --weight_decay="0.01" --max_duration_in_seconds="20.0" --min_duration_in_seconds="2.0" --logging_steps="1" --saving_steps="10000" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adam_beta1="0.9" --adam_beta2="0.98" --adam_epsilon="1e-06" --gradient_checkpointing --mask_time_prob="0.65" --mask_time_length="10"`
transformers-cli env
Is this what you are asking for?
-------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------
### My guess
I think the error is coming from the fact that the dataset preprocessing [(line 473)](https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L473) requires argument `"args.audio_column_name" `that is not specified in the demo command https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/README.md#demo.
### 1. I tried specifying --audio_column_name= []
```
I got this error:
_-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 163
to
{'id': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None), 'duration_ms': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), '[]': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}
because column names don't match_
```
### 2. I tried specifying --audio_column_name=["audio", "duration_ms", "text"]
`error: "run_wav2vec2_pretraining_no_trainer.py: error: unrecognized arguments: duration_ms, text]"
`
### 3. I tried specifying --audio_column_name=["audio"], which is the default setting
same issue as in 1.
```
line 478, in main
raw_datasets = raw_datasets.cast_column(
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
id: string
audio: struct<bytes: binary, path: string>
child 0, bytes: binary
child 1, path: string
duration_ms: int32
text: string
-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 163
to
{'id': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None), 'duration_ms': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), '[audio]': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}
because column names don't match
```
Any ideas? @amyeroberts @sanchit-gandhi @pacman100 @sgugger
<|||||>here is a more detailed output log content:
```
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set to a value of `3`
More than one GPU was found, enabling multi-GPU training.
If this was unintended please pass in `--num_processes=1`.
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
wandb: Currently logged in as: flckv. Use `wandb login --relogin` to force relogin
wandb: wandb version 0.15.3 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.15.2
wandb: Run data is saved locally in /home/flckv/wandb/run-20230519_175326-yuyk0qvn
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run rural-morning-6
wandb: โญ๏ธ View project at https://wandb.ai/flckv/wav2vec2-pretrained-demo
wandb: ๐ View run at https://wandb.ai/flckv/wav2vec2-pretrained-demo/runs/yuyk0qvn
Downloading and preparing dataset librispeech_asr/clean to /dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7...
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]
Downloading data files: 100%|โโโโโโโโโโ| 4/4 [00:00<00:00, 9828.48it/s]
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]
Extracting data files: 100%|โโโโโโโโโโ| 4/4 [00:00<00:00, 2225.39it/s]
Generating train.100 split: 100%|โโโโโโโโโโ| 28539/28539 [00:17<00:00, 1834.73 examples/s]
Generating train.360 split: 100%|โโโโโโโโโโ| 104014/104014 [01:00<00:00, 1634.61 examples/s]
Generating validation split: 100%|โโโโโโโโโโ| 2703/2703 [00:01<00:00, 2341.79 examples/s]
Generating test split: 93%|โโโโโโโโโโ| 2434/2620 [00:01<00:00, 2338.24 examples/s]
Dataset librispeech_asr downloaded and prepared to /dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7. Subsequent calls will reuse this data.
Found cached dataset librispeech_asr (/dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7)
Found cached dataset librispeech_asr (/dev/shm/librispeech_asr/clean/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7)
Downloading: 0%| | 0.00/214 [00:00<?, ?B/s]
Downloading: 100%|โโโโโโโโโโ| 214/214 [00:00<00:00, 171kB/s]
loading configuration file preprocessor_config.json from cache at /home/flckv/.cache/huggingface/hub/models--patrickvonplaten--wav2vec2-base-v2/snapshots/9371f1849947b4613f451680a8e96d907617ce86/preprocessor_config.json
Feature extractor Wav2Vec2FeatureExtractor {
"do_normalize": true,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": true,
"sampling_rate": 16000
}
Map: 0%| | 0/5270 [00:00<?, ? examples/s]
Map: 0%| | 0/5270 [00:01<?, ? examples/s]
>
> Traceback (most recent call last):
> File "/home/flckv/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 783, in <module>
> main()
> File "/home/flckv/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 510, in main
> vectorized_datasets = raw_datasets.map(
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py", line 852, in map
> {
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py", line 853, in <dictcomp>
> k: dataset.map(
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 563, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2953, in map
> for rank, done, content in Dataset._map_single(**dataset_kwargs):
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3307, in _map_single
> example = apply_function_on_filtered_inputs(example, i, offset=offset)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3210, in apply_function_on_filtered_inputs
> processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
> File "/home/flckv/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 493, in prepare_dataset
> sample = batch[args.audio_column_name]
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 282, in __getitem__
> value = self.format(key)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 380, in format
> return self.formatter.format_column(self.pa_table.select([key]))[0]
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 447, in format_column
> column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 228, in decode_column
> return self.features.decode_column(column, column_name) if self.features else column
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/features.py", line 1866, in decode_column
> [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/features.py", line 1866, in <listcomp>
> [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/features.py", line 1308, in decode_nested_example
> return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/audio.py", line 164, in decode_example
> array, sampling_rate = self._decode_non_mp3_file_like(file)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/features/audio.py", line 290, in _decode_non_mp3_file_like
> array = librosa.to_mono(array)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/lazy_loader/__init__.py", line 77, in __getattr__
> attr = getattr(submod, name)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/lazy_loader/__init__.py", line 76, in __getattr__
> submod = importlib.import_module(submod_path)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/importlib/__init__.py", line 127, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
> File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
> File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
> File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
> File "<frozen importlib._bootstrap_external>", line 850, in exec_module
> File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/audio.py", line 19, in <module>
> from .convert import frames_to_samples, time_to_samples
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/convert.py", line 7, in <module>
> from . import notation
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/notation.py", line 8, in <module>
> from .intervals import INTERVALS
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/librosa/core/intervals.py", line 10, in <module>
> from numpy.typing import ArrayLike
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/typing/__init__.py", line 158, in <module>
> from numpy._typing import (
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/_typing/__init__.py", line 164, in <module>
> from ._dtype_like import (
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/_typing/_dtype_like.py", line 17, in <module>
> from ._generic_alias import _DType as DType
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/numpy/_typing/_generic_alias.py", line 241, in <module>
> _DType = np.dtype[ScalarType]
> TypeError: 'type' object is not subscriptable
> wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
> wandb: ๐ View run rural-morning-6 at: https://wandb.ai/flckv/wav2vec2-pretrained-demo/runs/yuyk0qvn
> wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
> wandb: Find logs at: ./wandb/run-20230519_175326-yuyk0qvn/logs
> WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 3914267 closing signal SIGTERM
> WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 3914268 closing signal SIGTERM
> ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3914266) of binary: /home/flckv/.conda/envs/vcheckworthy/bin/python
> Traceback (most recent call last):
> File "/home/flckv/.conda/envs/vcheckworthy/bin/accelerate", line 8, in <module>
> sys.exit(main())
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
> args.func(args)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py", line 909, in launch_command
> multi_gpu_launcher(args)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py", line 604, in multi_gpu_launcher
> distrib_run.run(args)
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/torch/distributed/run.py", line 715, in run
> elastic_launch(
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/home/flckv/.conda/envs/vcheckworthy/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
>
> wav2vec/run_wav2vec2_pretraining_no_trainer.py FAILED
> ------------------------------------------------------------
> Failures:
> <NO_OTHER_FAILURES>
> ------------------------------------------------------------
> Root Cause (first observed failure):
> [0]:
> time : 2023-05-19_17:55:10
> host : gpu-08
> rank : 0 (local_rank: 0)
> exitcode : 1 (pid: 3914266)
> error_file: <N/A>
> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
>
> /var/lib/slurm-llnl/slurmd/job151161/slurm_script: line 42: EOL: command not found
>
```<|||||>Hey @flckv! Could you try first updating all your packages to the latest versions?
```
pip install --upgrade pip
pip install --upgrade soundfile librosa datasets accelerate numpy transformers
```
The error looks like it's happening when we decode the soundfile (i.e. as we read the soundfile with librosa) - there was recently a big change to how we load audio samples with datasets that might fix this for you https://github.com/huggingface/datasets/pull/5573<|||||>@sanchit-gandhi Thanks, but now the command is not working:
`accelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir="/dev/shm/" --dataset_name="librispeech_asr" --dataset_config_names test --dataset_split_names test --model_name_or_path="patrickvonplaten/wav2vec2-base-v2" --output_dir="./wav2vec2-pretrained-demo" --max_train_steps="20000" --num_warmup_steps="32000" --gradient_accumulation_steps="8" --learning_rate="0.005" --weight_decay="0.01" --max_duration_in_seconds="20.0" --min_duration_in_seconds="2.0" --logging_steps="1" --saving_steps="10000" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adam_beta1="0.9" --adam_beta2="0.98" --adam_epsilon="1e-06" --gradient_checkpointing --mask_time_prob="0.65" --mask_time_length="10"
`
ERROR:
Traceback (most recent call last):
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 785, in <module>
main()
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 513, in main
prepare_dataset(raw_datasets["train"]), # loading the audio `raise KeyError(f"Column {key} not in the dataset. Current columns in the dataset: {columns}") KeyError: "Column args.audio_column_name not in the dataset. Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']"`
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 493, in prepare_dataset
`sample = batch['args.audio_column_name']`
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2778, in __getitem__
return self._getitem(key)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2762, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 575, in query_table
_check_valid_column_key(key, table.column_names)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 515, in _check_valid_column_key
```
raise KeyError(f"Column {key} not in the dataset. Current columns in the dataset: {columns}")
KeyError: "Column args.audio_column_name not in the dataset. Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']"
```
Traceback (most recent call last):
File "/home/flck/.conda/envs/vcheckworthy/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
args.func(args)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py", line 918, in launch_command
simple_launcher(args)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py", line 580, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/flck/.conda/envs/vcheckworthy/bin/python', 'wav2vec/run_wav2vec2_pretraining_no_trainer.py', '--cache_dir=/dev/shm/', '--dataset_name=librispeech_asr', '--dataset_config_names', 'test', '--dataset_split_names', 'test', '--model_name_or_path=patrickvonplaten/wav2vec2-base-v2', '--output_dir=./wav2vec2-pretrained-demo', '--max_train_steps=20000', '--num_warmup_steps=32000', '--gradient_accumulation_steps=8', '--learning_rate=0.005', '--weight_decay=0.01', '--max_duration_in_seconds=20.0', '--min_duration_in_seconds=2.0', '--logging_steps=1', '--saving_steps=10000', '--per_device_train_batch_size=8', '--per_device_eval_batch_size=8', '--adam_beta1=0.9', '--adam_beta2=0.98', '--adam_epsilon=1e-06', '--gradient_checkpointing', '--mask_time_prob=0.65', '--mask_time_length=10']' returned non-zero exit status 1.
/var/lib/slurm-llnl/slurmd/job153086/slurm_script: line 45: EOL: command not found
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
WHEN I specify this in the command args
accelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir="/dev/shm/" --dataset_name="librispeech_asr" --dataset_config_names test --dataset_split_names test --model_name_or_path="patrickvonplaten/wav2vec2-base-v2" --output_dir="./wav2vec2-pretrained-demo"` --audio_column_name=["id", "audio", "duration_ms", "text"] `--max_train_steps="20000" --num_warmup_steps="32000" --gradient_accumulation_steps="8" --learning_rate="0.005" --weight_decay="0.01" --max_duration_in_seconds="20.0" --min_duration_in_seconds="2.0" --logging_steps="1" --saving_steps="10000" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adam_beta1="0.9" --adam_beta2="0.98" --adam_epsilon="1e-06" --gradient_checkpointing --mask_time_prob="0.65" --mask_time_length="10"
then the error is:
`run_wav2vec2_pretraining_no_trainer.py: error: unrecognized arguments: audio, duration_ms, text]`
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
WHEN I only add "id":
accelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir="/dev/shm/" --dataset_name="librispeech_asr" --dataset_config_names test --dataset_split_names test --model_name_or_path="patrickvonplaten/wav2vec2-base-v2" --output_dir="./wav2vec2-pretrained-demo"` --audio_column_name=["id"] `--max_train_steps="20000" --num_warmup_steps="32000" --gradient_accumulation_steps="8" --learning_rate="0.005" --weight_decay="0.01" --max_duration_in_seconds="20.0" --min_duration_in_seconds="2.0" --logging_steps="1" --saving_steps="10000" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adam_beta1="0.9" --adam_beta2="0.98" --adam_epsilon="1e-06" --gradient_checkpointing --mask_time_prob="0.65" --mask_time_length="10"
Traceback (most recent call last):
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 785, in <module>
main()
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 478, in main
raw_datasets = raw_datasets.cast_column(
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py", line 310, in cast_column
return DatasetDict({k: dataset.cast_column(column=column, feature=feature) for k, dataset in self.items()})
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/dataset_dict.py", line 310, in <dictcomp>
return DatasetDict({k: dataset.cast_column(column=column, feature=feature) for k, dataset in self.items()})
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/flck.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2082, in cast_column
dataset._data = dataset._data.cast(dataset.features.arrow_schema)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/table.py", line 1152, in cast
return MemoryMappedTable(table_cast(self.table, *args, **kwargs), self.path, replays)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/table.py", line 2290, in table_cast
return cast_table_to_schema(table, schema)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/table.py", line 2248, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
```
ValueError: Couldn't cast
id: string
audio: struct<bytes: binary, path: string>
child 0, bytes: binary
child 1, path: string
duration_ms: int32
text: string
```
-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 163
to
`{'id': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None), 'duration_ms': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None), '[id]': Audio(sampling_rate=16000, mono=True, decode=True, id=None)}`
because column names don't match
wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
wandb: - 0.028 MB of 0.028 MB uploaded (0.000 MB deduped)
wandb: \ 0.028 MB of 0.032 MB uploaded (0.000 MB deduped)
wandb: | 0.035 MB of 0.035 MB uploaded (0.000 MB deduped)
wandb: ๐ View run helpful-voice-29 at: https://wandb.ai/flck/wav2vec2-pretrained-demo/runs/tnmnebg6
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20230523_133136-tnmnebg6/logs
Traceback (most recent call last):
File "/home/flck/.conda/envs/vcheckworthy/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/flck.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
args.func(args)
File "/homeflck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py", line 918, in launch_command
simple_launcher(args)
File "/home/flck.conda/envs/vcheckworthy/lib/python3.9/site-packages/accelerate/commands/launch.py", line 580, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/flck/.conda/envs/vcheckworthy/bin/python', 'wav2vec/run_wav2vec2_pretraining_no_trainer.py', '--cache_dir=/dev/shm/', '--dataset_name=librispeech_asr', '--dataset_config_names', 'test', '--dataset_split_names', 'test', '--model_name_or_path=patrickvonplaten/wav2vec2-base-v2', '--output_dir=./wav2vec2-pretrained-demo', '--audio_column_name=[id]', '--max_train_steps=20000', '--num_warmup_steps=32000', '--gradient_accumulation_steps=8', '--learning_rate=0.005', '--weight_decay=0.01', '--max_duration_in_seconds=20.0', '--min_duration_in_seconds=2.0', '--logging_steps=1', '--saving_steps=10000', '--per_device_train_batch_size=8', '--per_device_eval_batch_size=8', '--adam_beta1=0.9', '--adam_beta2=0.98', '--adam_epsilon=1e-06', '--gradient_checkpointing', '--mask_time_prob=0.65', '--mask_time_length=10']' returned non-zero exit status 1.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.2
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
/var/lib/slurm-llnl/slurmd/job153092/slurm_script: line 50: EOL: `command not found`
/var/lib/slurm-llnl/slurmd/job153092/slurm_script: line 53: /home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py: `Permission denied`
<|||||>Hey @flckv - great! Glad updating to the latest packages fixed the previous error. Can you try setting:
```
--audio_column_name="audio"
```
Here we just need to pick out the correct column name for the audio inputs (which in this case is `"audio"`)<|||||>hey @sanchit-gandhi yes, that's great! But the column name is still not interpreted:
I added what you said:
accelerate launch wav2vec/run_wav2vec2_pretraining_no_trainer.py --cache_dir="/dev/shm/" --dataset_name="librispeech_asr" --dataset_config_names test --dataset_split_names test --model_name_or_path="patrickvonplaten/wav2vec2-base-v2" --output_dir="./wav2vec2-pretrained-demo" `--audio_column_name="audio" `--max_train_steps="20000" --num_warmup_steps="32000" --gradient_accumulation_steps="8" --learning_rate="0.005" --weight_decay="0.01" --max_duration_in_seconds="20.0" --min_duration_in_seconds="2.0" --logging_steps="1" --saving_steps="10000" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adam_beta1="0.9" --adam_beta2="0.98" --adam_epsilon="1e-06" --gradient_checkpointing --mask_time_prob="0.65" --mask_time_length="10"
tried also:
--audio_column_name='audio'
--audio_column_name=['audio']
--audio_column_name=["audio"]
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BUT :
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 785, in <module>
main()
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 513, in main
prepare_dataset(raw_datasets["train"]), ` raise KeyError(f"Column {key} not in the dataset. Current columns in the dataset: {columns}") KeyError: "Column args.audio_column_name not in the dataset. Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']"`
File "/home/flck/wav2vec/run_wav2vec2_pretraining_no_trainer.py", line 493, in prepare_dataset
` sample = batch['args.audio_column_name'] `
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2778, in __getitem__
return self._getitem(key)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2762, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/home/flcks/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 575, in query_table
_check_valid_column_key(key, table.column_names)
File "/home/flck/.conda/envs/vcheckworthy/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 515, in _check_valid_column_key
raise KeyError(f"Column {key} not in the dataset. Current columns in the dataset: {columns}")
`KeyError: "Column args.audio_column_name not in the dataset. Current columns in the dataset: ['id', 'audio', 'duration_ms', 'text']"`
<|||||>Can you double check you haven't changed the parser args for `audio_column_name`?
https://github.com/huggingface/transformers/blob/50a56bedb6ec8a4f9ba455c184d187cfee2e9c81/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L112-L117
I can't see the check that is erroring out for you on the example script. Your error is occurring on line 513. If I check line 513 in the example, I get something completely different to the audio column name check: https://github.com/huggingface/transformers/blob/3d7baef1141e22520901310593c106b15493e6a9/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L513
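For reference, the `KeyError: "Column args.audio_column_name not in the dataset"` in your traceback suggests the batch is being indexed with a literal string instead of the parsed argument — a minimal sketch of the difference (assuming the script's usual `args` namespace and the `batch` dict inside `prepare_dataset`):
```python
# buggy: looks for a column literally named "args.audio_column_name"
sample = batch['args.audio_column_name']

# intended: uses the value of the parsed argument (e.g. "audio") as the column name
sample = batch[args.audio_column_name]
```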
Could you make sure you are using the latest version of the script? You can just copy it from main.<|||||>@sanchit-gandhi thanks, you were right. It works now. |
transformers | 23,471 | closed | Fix PretrainedConfig `min_length` docstring | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently, the default value for the `min_length` parameter in `PretrainedConfig` is set to 0. However, the docstring says that the default value is 10. This PR fixes the docstring to reflect the correct default value.
Default value:
https://github.com/huggingface/transformers/blob/8aa8513f715faaa84cef1abd57ea4ded96c80e44/src/transformers/configuration_utils.py#L286
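For reference, the corrected entry would read along these lines (a sketch; the description sentence is assumed to stay as in the current docstring):
```
min_length (`int`, *optional*, defaults to 0):
    Minimum length that will be used by default in the `generate` method of the model.
```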
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-19-2023 13:12:49 | 05-19-2023 13:12:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,470 | closed | An ability to pass a function to tokenizer to transform prompt | ### Feature request
Models often work best with a specific prompt template. And sometimes [prefix](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/configuration#transformers.PretrainedConfig.prefix) is not enough.
An ability to pass a function that transforms the prompt would be excellent.
Example
```
def transform_prompt(prompt: str) -> str:
    return f"""
### Instruction: {prompt}
### Assistant:
"""
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer, max_new_tokens=256, temperature=0.1, transform=transform_prompt)
```
Providing a text template is also an option, but in my view passing a function would be the more holistic approach.
### Motivation
I am a little frustrated with the need to define a CustomLLM in langchain in order to accommodate the model's prompt template. And I believe that's not the only use case for it. Many models require a special prompt template.
### Your contribution
I am sorry, but no. I am not a Python programmer. | 05-19-2023 11:59:01 | 05-19-2023 11:59:01 | Hi @nikitastaf1996, thanks for raising this issue.
So that we can best understand the proposed feature - could you explain a bit more about the utility this would unlock? At the moment, it seems that a user could transform their prompts as needed before passing to the pipeline e.g.:
```python
def transform(prompt: str) -> str:
    ...
prompt = transform(original_prompt_string)
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer, max_new_tokens=256, temperature=0.1)
outputs = pipeline(prompt)
```
cc @Narsil @gante <|||||>I will give you the code, and you can decide whether it is necessary to implement. After some time has passed, I am not so sure it is necessary at all. It might be a niche nuisance exclusive to me.
Current code I use:
```
#@title Custom llm
from langchain.llms.base import LLM
from typing import Optional, List, Mapping, Any
from transformers import TextGenerationPipeline
class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer, max_new_tokens=256, temperature=0.1)
        prompt = f"""User:{prompt}
Assistant:"""
        prompt_len = len(prompt)
        result = pipeline(prompt)[0]["generated_text"][prompt_len:]
        if stop is not None:
            for stop_string in stop:
                index = result.find(stop_string)  # find the index of the substring
                if index != -1:
                    result = result[:index + len(stop_string)]
                    break
        return result
```
```
#@title Langchain Agent
from langchain.agents import load_tools,initialize_agent,AgentType
llm = CustomLLM()
tools = load_tools(["terminal","python_repl"])
agent = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,verbose=True,handle_parsing_errors=True)
agent.run("Please install ffmpeg")
```
What I would like to see:
```
from langchain.agents import load_tools,initialize_agent,AgentType
from langchain.llms import HuggingFacePipeline
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,max_new_tokens=256,temperature=0.1,transform=TRANSFORMFUNCTION)
llm = HuggingFacePipeline(pipeline=pipe)
tools = load_tools(["terminal","python_repl"])
agent = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,verbose=True,handle_parsing_errors=True)
agent.run("Please install ffmpeg")
```<|||||>I think your custom LLM is perfectly fine imo.
You have other ways to define it, actually. Introducing a `transform` function would bloat everything up IMHO; what you really need to do is just send pre-prompted text to the pipeline, like you are already doing. The `stop` part can also be implemented with `generate` arguments, I think (`stop_sequence`).
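Something along these lines should work, reusing your `model` and `tokenizer` (a sketch — it relies on the `stop_sequence` handling in the pipeline source linked below, which maps the sequence to an EOS token id and only keeps the first token if it tokenizes to several):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
# return_full_text=False drops the prompt from the output, so no manual slicing is needed
out = pipe("User: hello\nAssistant:", return_full_text=False, stop_sequence="\nUser:")
```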
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L138<|||||>Then it's finished. |
transformers | 23,469 | closed | [Agents and Tools] Custom tool not showing up in the toolbox list | I have coded an inpainter tool:
https://huggingface.co/spaces/sayakpaul/inpainting-tool/
Then I load the tool as follows:
```py
from transformers import load_tool
inpainter = load_tool("sayakpaul/inpainting-tool")
```
When running `print(f"Description: '{inpainter.description}'")`, it shows the output as expected:
> Description: 'This is a tool that inpaints some parts of an image StableDiffusionInpaintPipeline according to a prompt. It takes three inputs: `image`, which should be the original image which will be inpainted, `mask_image`, which should be used to determine which parts of the original image (stored in the `image` variable) should be inpainted, and `prompt`, which should be the prompt to use to guide the inpainting process. It returns the inpainted image.'
Then, I try to add this tool to the list of existing tools:
```py
from transformers import HfAgent
tools = [inpainter]
agent = HfAgent(
"https://api-inference.huggingface.co/models/bigcode/starcoder",
additional_tools=tools
)
```
However, the tool is not added to the toolkit (it just leaves a bullet point):
```py
print("\n".join([f"- {a}" for a in agent.toolbox.keys()]))
```
```
- document_qa
- image_captioner
- image_qa
- image_segmenter
- transcriber
- summarizer
- text_classifier
- text_qa
- text_reader
- translator
- image_transformer
- text_downloader
- image_generator
- video_generator
-
```
As a result, when I run:
```py
image = agent.run(
"Inpaint the image: 'a cute dinosaur'",
image=orig_image,
mask_image=mask_image,
return_code=True
)
```
this is the code that gets generated:
```
==Explanation from the agent==
I will use the following tools: `image_transformer` to transform the image, then `image_segmenter` to create a mask, then `image_transformer` to inpaint the image.
==Code generated by the agent==
prompt = "a cute dinosaur"
image = image_transformer(image, prompt)
mask = image_segmenter(image, prompt)
inpainted_image = image_transformer(image, prompt, mask)
```
As we can see, there's no mention of the `image_inpainter` here.
Is there anything I'm missing out on?
Here's my Colab Notebook for reproduction: https://colab.research.google.com/drive/1BuNz2-7ePeaRaeI7yNc3kqDOzfdXOsUE?usp=sharing
I followed this guide during the process:
https://huggingface.co/docs/transformers/custom_tools#using-custom-tools
`transformers-cli env` gives:
```bash
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
| 05-19-2023 11:38:00 | 05-19-2023 11:38:00 | cc @sgugger @LysandreJik <|||||>Thanks for the report! I can confirm that `load_tool` does not properly set the name as described in the `tool_config.json`. One workaround is to implement it properly in the class of your tool (setting the `name` attribute like you did for the description) but the tool config should probably override the non-defined attribute. Will work on a fix this morning!<|||||>Thank you!
For the custom tool, I referred to https://huggingface.co/spaces/huggingface-tools/text-to-image/blob/main/text_to_image.py#L14 and saw that it doesn't assign the `name` member either. <|||||>Yes, but that one is in the default tools, so it is loaded differently. The bug should be fixed soon in any case.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,468 | closed | [`RWKV`] Rwkv fix for 8bit inference | # What does this PR do?
Fixes #23467
The RWKV architecture scales down some linear layer weights by a certain factor at various stages for inference and training.
In the case of 8-bit models, this leads to an error because the weight matrix is now an `int8` matrix. Therefore, to apply the scaling, one needs to scale the quantization statistics that are stored inside the `SCB` attribute instead.
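Roughly, the change amounts to something like this (a minimal sketch, not the exact diff; it assumes bitsandbytes' `Linear8bitLt` layout, where the int8 data lives in `weight` and the per-row quantization scales in `weight.SCB`):
```python
def rescale_weight(weight, factor):
    # int8 weights cannot be divided in place, so rescale the quantization
    # statistics (per-row scales) instead of the quantized data itself
    if hasattr(weight, "SCB") and weight.SCB is not None:
        weight.SCB.div_(factor)
    else:
        weight.div_(factor)
```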
cc @amyeroberts
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-4-1b5-pile"
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(model_id)
generation_config = GenerationConfig(max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
question = "Hello my name is"
inputs = tokenizer(question, return_tensors="pt").to(0)
output_int8 = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(tokenizer.decode(output_int8[0], skip_special_tokens=True))
>>> Hello my name is John and I am a student at the University of Texas at Austin. I am a member of the
model_fp16 = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map={"":1})
output_fp16 = model_fp16.generate((inputs["input_ids"]), generation_config=generation_config)
print(tokenizer.decode(output_fp16[0], skip_special_tokens=True))
>>> Hello my name is John and I am a student at the University of South Florida. I am a member of the Alpha
```
| 05-19-2023 11:03:17 | 05-19-2023 11:03:17 | ^ for the snippet in the description, could you add what's printed out please? :) <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Waiting For This PR to Merge...
Merge this PR ASAP!<|||||>@amyeroberts Please Review this PR Fast!<|||||>Thanks! Just updated the comment! <|||||>Thanks You So Much @younesbelkada @amyeroberts For Your Work...
Hope it Works without getting into another problem now :) |
transformers | 23,467 | closed | RuntimeError: result type Float can't be cast to the desired output type Char | ### System Info
Colab Configuration:
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @gante @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I Ran The Official Code Example:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Write me a Poem About NLP"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
It Works Fine!
I Ran the same code with some additional args in from_pretrained() func when initialising the model:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Tell me How RWKV RNNs are Parallelizable"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
But When I Ran This Code, I Got The Following Error:
```
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1448: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
warnings.warn(
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <cell line: 7>:7 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115 in decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1518 in generate โ
โ โ
โ 1515 โ โ โ โ ) โ
โ 1516 โ โ โ โ
โ 1517 โ โ โ # 11. run greedy search โ
โ โฑ 1518 โ โ โ return self.greedy_search( โ
โ 1519 โ โ โ โ input_ids, โ
โ 1520 โ โ โ โ logits_processor=logits_processor, โ
โ 1521 โ โ โ โ stopping_criteria=stopping_criteria, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2335 in greedy_search โ
โ โ
โ 2332 โ โ โ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) โ
โ 2333 โ โ โ โ
โ 2334 โ โ โ # forward pass to get next token โ
โ โฑ 2335 โ โ โ outputs = self( โ
โ 2336 โ โ โ โ **model_inputs, โ
โ 2337 โ โ โ โ return_dict=True, โ
โ 2338 โ โ โ โ output_attentions=output_attentions, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward โ
โ โ
โ 162 โ โ โ with torch.no_grad(): โ
โ 163 โ โ โ โ output = old_forward(*args, **kwargs) โ
โ 164 โ โ else: โ
โ โฑ 165 โ โ โ output = old_forward(*args, **kwargs) โ
โ 166 โ โ return module._hf_hook.post_forward(module, output) โ
โ 167 โ โ
โ 168 โ module.forward = new_forward โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:780 in forward โ
โ โ
โ 777 โ โ """ โ
โ 778 โ โ return_dict = return_dict if return_dict is not None else self.config.use_return โ
โ 779 โ โ โ
โ โฑ 780 โ โ rwkv_outputs = self.rwkv( โ
โ 781 โ โ โ input_ids, โ
โ 782 โ โ โ inputs_embeds=inputs_embeds, โ
โ 783 โ โ โ state=state, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward โ
โ โ
โ 162 โ โ โ with torch.no_grad(): โ
โ 163 โ โ โ โ output = old_forward(*args, **kwargs) โ
โ 164 โ โ else: โ
โ โฑ 165 โ โ โ output = old_forward(*args, **kwargs) โ
โ 166 โ โ return module._hf_hook.post_forward(module, output) โ
โ 167 โ โ
โ 168 โ module.forward = new_forward โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:645 in forward โ
โ โ
โ 642 โ โ return_dict = return_dict if return_dict is not None else self.config.use_return โ
โ 643 โ โ โ
โ 644 โ โ if self.training == self.layers_are_rescaled: โ
โ โฑ 645 โ โ โ self._rescale_layers() โ
โ 646 โ โ โ
โ 647 โ โ if input_ids is not None and inputs_embeds is not None: โ
โ 648 โ โ โ raise ValueError("You cannot specify both input_ids and inputs_embeds at the โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:712 in โ
โ _rescale_layers โ
โ โ
โ 709 โ โ โ โ โ โ block.attention.output.weight.mul_(2 ** int(block_id // self.con โ
โ 710 โ โ โ โ โ โ block.feed_forward.value.weight.mul_(2 ** int(block_id // self.c โ
โ 711 โ โ โ โ โ else: โ
โ โฑ 712 โ โ โ โ โ โ block.attention.output.weight.div_(2 ** int(block_id // self.con โ
โ 713 โ โ โ โ โ โ block.feed_forward.value.weight.div_(2 ** int(block_id // self.c โ
โ 714 โ โ โ
โ 715 โ โ self.layers_are_rescaled = not self.training โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: result type Float can't be cast to the desired output type Char
```
I Tried So Many Ways to Address This, But Nothing Works.
But When I Run This Model Initializing code:
```model = AutoModelForCausalLM.from_pretrained(model_id)```
...without loading it in 8-bit and without the other args, it works fine.
So I guess there should be a bug in the RWKV modelling code which prevents generating output when loaded in 8-bit with some args (you can see it in the above code snippets).
Correct me if I am wrong, or please fix it ASAP.
Who Can Help?
@ArthurZucker @gante @sgugger
### Expected behavior
I expected it to generate text as it did before! | 05-19-2023 10:02:56 | 05-19-2023 10:02:56 | cc @younesbelkada regarding 8bit loading <|||||>Hi @TheFaheem
Thanks for the issue, it should be fixed in #23468<|||||>> Hi @TheFaheem Thanks for the issue, it should be fixed in #23468
Yo! Thanks Man. I Thought This Issue Takes Days To Get to Your Eyes.
Thanks For Your Lightning Speed Fix.
Waiting For That PR to Get Merged...<|||||>> cc @younesbelkada regarding 8bit loading
Review That PR ASAP!<|||||>@TheFaheem We understand that you want the issue resolved as soon as possible, but many of us working on the repo are busy and have many pieces of work to attend to. Spamming messages here and on the PR won't get the PR merged quicker, and isn't sustainable behaviour: if everyone does this then we're unable to meaningfully address notifications. <|||||>I Apologise for my Impatience and for interrupting you. Thanks for Your Work For the Community! |
transformers | 23,466 | closed | RuntimeError: result type Float can't be cast to the desired output type Char | ### System Info
My System Info:
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I Ran The Official Code Example:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Write me a Poem About NLP"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
It Works Fine!
I Ran the same code with some additional args in from_pretrained() func when initialising the model:
```
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
model_id = "RWKV/rwkv-raven-1b5"
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()
if torch.__version__ >= "2":
torch.compile(model)
generation_config = GenerationConfig(max_new_tokens=1000, temperature=0.7, top_k=35, top_p=0.90, pad_token_id= tokenizer.eos_token_id)
question = "Tell me How RWKV RNNs are Parallelizable"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate((inputs["input_ids"]), generation_config=generation_config)
print(output)
```
But When I Ran This Code, I Got The Following Error:
```
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1448: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
warnings.warn(
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <cell line: 7>:7 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115 in decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1518 in generate โ
โ โ
โ 1515 โ โ โ โ ) โ
โ 1516 โ โ โ โ
โ 1517 โ โ โ # 11. run greedy search โ
โ โฑ 1518 โ โ โ return self.greedy_search( โ
โ 1519 โ โ โ โ input_ids, โ
โ 1520 โ โ โ โ logits_processor=logits_processor, โ
โ 1521 โ โ โ โ stopping_criteria=stopping_criteria, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2335 in greedy_search โ
โ โ
โ 2332 โ โ โ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) โ
โ 2333 โ โ โ โ
โ 2334 โ โ โ # forward pass to get next token โ
โ โฑ 2335 โ โ โ outputs = self( โ
โ 2336 โ โ โ โ **model_inputs, โ
โ 2337 โ โ โ โ return_dict=True, โ
โ 2338 โ โ โ โ output_attentions=output_attentions, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward โ
โ โ
โ 162 โ โ โ with torch.no_grad(): โ
โ 163 โ โ โ โ output = old_forward(*args, **kwargs) โ
โ 164 โ โ else: โ
โ โฑ 165 โ โ โ output = old_forward(*args, **kwargs) โ
โ 166 โ โ return module._hf_hook.post_forward(module, output) โ
โ 167 โ โ
โ 168 โ module.forward = new_forward โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:780 in forward โ
โ โ
โ 777 โ โ """ โ
โ 778 โ โ return_dict = return_dict if return_dict is not None else self.config.use_return โ
โ 779 โ โ โ
โ โฑ 780 โ โ rwkv_outputs = self.rwkv( โ
โ 781 โ โ โ input_ids, โ
โ 782 โ โ โ inputs_embeds=inputs_embeds, โ
โ 783 โ โ โ state=state, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl โ
โ โ
โ 1498 โ โ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks โ
โ 1499 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hooks โ
โ 1500 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks): โ
โ โฑ 1501 โ โ โ return forward_call(*args, **kwargs) โ
โ 1502 โ โ # Do not call functions when jit is used โ
โ 1503 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1504 โ โ backward_pre_hooks = [] โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward โ
โ โ
โ 162 โ โ โ with torch.no_grad(): โ
โ 163 โ โ โ โ output = old_forward(*args, **kwargs) โ
โ 164 โ โ else: โ
โ โฑ 165 โ โ โ output = old_forward(*args, **kwargs) โ
โ 166 โ โ return module._hf_hook.post_forward(module, output) โ
โ 167 โ โ
โ 168 โ module.forward = new_forward โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:645 in forward โ
โ โ
โ 642 โ โ return_dict = return_dict if return_dict is not None else self.config.use_return โ
โ 643 โ โ โ
โ 644 โ โ if self.training == self.layers_are_rescaled: โ
โ โฑ 645 โ โ โ self._rescale_layers() โ
โ 646 โ โ โ
โ 647 โ โ if input_ids is not None and inputs_embeds is not None: โ
โ 648 โ โ โ raise ValueError("You cannot specify both input_ids and inputs_embeds at the โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/models/rwkv/modeling_rwkv.py:712 in โ
โ _rescale_layers โ
โ โ
โ 709 โ โ โ โ โ โ block.attention.output.weight.mul_(2 ** int(block_id // self.con โ
โ 710 โ โ โ โ โ โ block.feed_forward.value.weight.mul_(2 ** int(block_id // self.c โ
โ 711 โ โ โ โ โ else: โ
โ โฑ 712 โ โ โ โ โ โ block.attention.output.weight.div_(2 ** int(block_id // self.con โ
โ 713 โ โ โ โ โ โ block.feed_forward.value.weight.div_(2 ** int(block_id // self.c โ
โ 714 โ โ โ
โ 715 โ โ self.layers_are_rescaled = not self.training โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: result type Float can't be cast to the desired output type Char
```
I Tried So Many Ways to Address This, But Nothing Works.
But When I Run This Model Initializing code:
```model = AutoModelForCausalLM.from_pretrained(model_id)```
...without loading it in 8-bit and without the other args, it works fine.
So I guess there should be a bug in the RWKV modelling code which prevents generating output when loaded in 8-bit with some args (you can see it in the above code snippets).
Correct me if I am wrong, or please fix it ASAP.
Who Can Help?
@ArthurZucker @gante @sgugger
### Expected behavior
I expected it to generate text as it did before! | 05-19-2023 09:59:51 | 05-19-2023 09:59:51 | Closing as it's an exact copy of #23467 <|||||>Sorry, happened by mistake |
transformers | 23,465 | closed | Fix confusing `transformers` installation in CI | # What does this PR do?
As mentioned in [this comment](https://github.com/huggingface/transformers/pull/23277#issuecomment-1544080975), the `transformers` used in CI runs is not the same one installed during the docker image build.
Furthermore, for past CI, we didn't update the docker image daily (there is no need to update the fixed environment of 3rd-party packages), but the `transformers` code under test is the latest one. We get
```bash
E ImportError: cannot import name 'HfDoctestModule' from 'transformers.testing_utils'
```
as the installed `transformers` is the one from the docker image build (months ago), which had no `HfDoctestModule` at the time.
To avoid such super confusing failures in the future, it's better to install `transformers` again in editable mode in the workflow files.
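Concretely, this amounts to a step along these lines in each affected workflow (a sketch, not the literal diff):
```bash
# re-install the transformers checkout under test in editable mode
python3 -m pip install -e .
```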
I don't change the daily CI workflow file in this PR yet - I would rather wait for this PR to show that the change actually fixes issues. | 05-19-2023 09:45:47 | 05-19-2023 09:45:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23465). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,464 | closed | Module image_processing_videomae can't be found | ### System Info
- `transformers` version: 4.22.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- Huggingface_hub version: 0.14.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
When following the tutorial for the TimeSFormer, I run into an issue importing the relevant modules. This is the tutorial:
https://huggingface.co/docs/transformers/main/tasks/video_classification
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
model_ckpt = "MCG-NJU/videomae-base"
image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
model = VideoMAEForVideoClassification.from_pretrained(
model_ckpt,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
```
The error:
```
ImportError Traceback (most recent call last)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in _get_module(self, module_name)
1171 try:
-> 1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\__init__.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/__init__.py) in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _gcd_import(name, package, level)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _find_and_load(name, import_)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _find_and_load_unlocked(name, import_)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _load_unlocked(spec)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap_external.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap_external.py) in exec_module(self, module)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _call_with_frames_removed(f, *args, **kwds)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\models\videomae\image_processing_videomae.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/models/videomae/image_processing_videomae.py) in
21 from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict
---> 22 from ...image_transforms import (
23 center_crop,
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\image_transforms.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/image_transforms.py) in
20
---> 21 from .image_utils import (
22 ChannelDimension,
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\image_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/image_utils.py) in
43 if is_vision_available():
---> 44 import PIL.Image
45 import PIL.ImageOps
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\PIL\Image.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/PIL/Image.py) in
99 # and should be considered private and subject to change.
--> 100 from . import _imaging as core
101
ImportError: DLL load failed: The specified module could not be found.
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
[~\AppData\Local\Temp\ipykernel_10824\1632751044.py](https://file+.vscode-resource.vscode-cdn.net/d%3A/Fall_Detection/BAP_fall_detection/~/AppData/Local/Temp/ipykernel_10824/1632751044.py) in
----> 1 from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
2
3 model_ckpt = "MCG-NJU/videomae-base"
4 image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
5 model = VideoMAEForVideoClassification.from_pretrained(
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\importlib\_bootstrap.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/importlib/_bootstrap.py) in _handle_fromlist(module, fromlist, import_, recursive)
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in __getattr__(self, name)
1161 elif name in self._class_to_module.keys():
1162 module = self._get_module(self._class_to_module[name])
-> 1163 value = getattr(module, name)
1164 else:
1165 raise AttributeError(f"module {self.__name__} has no attribute {name}")
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in __getattr__(self, name)
1160 value = self._get_module(name)
1161 elif name in self._class_to_module.keys():
-> 1162 module = self._get_module(self._class_to_module[name])
1163 value = getattr(module, name)
1164 else:
[d:\Fall_Detection\BAP_fall_detection\.conda\lib\site-packages\transformers\utils\import_utils.py](file:///D:/Fall_Detection/BAP_fall_detection/.conda/lib/site-packages/transformers/utils/import_utils.py) in _get_module(self, module_name)
1175 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1176 f" traceback):\n{e}"
-> 1177 ) from e
1178
1179 def __reduce__(self):
RuntimeError: Failed to import transformers.models.videomae.image_processing_videomae because of the following error (look up to see its traceback):
DLL load failed: The specified module could not be found.
```
When I run transformers-cli env:
```
WARNING:tensorflow:From D:\Program Files\Anaconda\lib\site-packages\transformers\commands\env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-05-19 10:53:04.292349: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
```
And the version list I gave above.
### Expected behavior
According to the tutorial, the output should be:
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
- This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. | 05-19-2023 09:05:12 | 05-19-2023 09:05:12 | Hi @TheOrange-cmd, thanks for reporting this issue.
It seems the error is arising with an import of the PIL library. Could you run `pip list | grep Pillow` and share which version of `Pillow` is installed? <|||||>Thank you for the quick response. I get some errors with grep; I see it's meant for Linux? I tried installing it but it still doesn't work. If I just run pip list or pip freeze or conda list Pillow, I get version 9.4.0 for each command. Conda specifies I have build py38hd77b12b_0. <|||||>Hi @TheOrange-cmd,
Yes, sorry, `grep` is a linux command - sometimes I'm so used to writing it out I forget to check about the OS.
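For reference, a cross-platform way to check the installed version without `grep` would be something like:
```python
# reads the installed Pillow version from package metadata, without importing PIL itself
from importlib.metadata import version
print(version("Pillow"))
```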
Thanks for giving the Pillow version info. It's the same version as I'm running locally, so 9.4.0 should work. Could you try upgrading the running version of `transformers` to the most recent release - 4.29.2? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,463 | closed | Fix `transformers`' DeepSpeed CI job | # What does this PR do?
The new (cu118) base docker image comes with `transformer-engine` pre-installed (which wasn't the case in the previous base image). This causes the DeepSpeed CI job to fail from the beginning with
```bash
E ImportError: /usr/local/lib/python3.8/dist-packages/transformer_engine_extensions.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
```
This PR uninstalls `transformer-engine` so @ydshieh won't be the breaking bad. | 05-19-2023 08:37:37 | 05-19-2023 08:37:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Going to merge. cc @stas00 so you know we are now using CUDA 118 on CI (as you mentioned/requested once before)<|||||>super! thank you for the heads up, @ydshieh! |
transformers | 23,462 | closed | CLIPTextModel gives different results for batched vs unbatched | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help?
```
num = 10
prompts = ['hey']*num
inputs = pipe.tokenizer(prompts, return_tensors='pt', padding='max_length', max_length=77).input_ids.to('cuda') # [10,77]
outputs = pipe.text_encoder(inputs).last_hidden_state
batch_outputs = torch.cat([pipe.text_encoder(inputs[i:i+1]).last_hidden_state for i in range(num)],dim=0)
print(torch.all(torch.isclose(outputs, batch_outputs)).item())
```
>> False
I can't figure out the reason for this. Setting num=1 yields True, but any value >1 returns False. I've reviewed some of the functions around the CLIP attention and MLP, but nothing stood out at a quick glance.
<img width="935" alt="Screen Shot 2023-05-19 at 4 08 34 AM" src="https://github.com/huggingface/transformers/assets/98723285/a7f717f9-83c3-4c22-a8b9-5cc5afa4b1d7">
<img width="672" alt="Screen Shot 2023-05-19 at 4 09 03 AM" src="https://github.com/huggingface/transformers/assets/98723285/f0762b27-0739-4399-bdee-96335bbe7d58">
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
ran the following code in a notebook
```
import torch
import diffusers
pipe = diffusers.StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda",torch.float16)
num = 10
prompts = ['hey']*num
inputs = pipe.tokenizer(prompts, return_tensors='pt', padding='max_length', max_length=77).input_ids.to('cuda') # [10,77]
outputs = pipe.text_encoder(inputs).last_hidden_state
batch_outputs = torch.cat([pipe.text_encoder(inputs[i:i+1]).last_hidden_state for i in range(num)],dim=0)
print(torch.all(torch.isclose(outputs, batch_outputs)).item())
```
### Expected behavior
The batched and unbatched values should not differ, I believe. | 05-19-2023 08:13:23 | 05-19-2023 08:13:23 | hi @ethansmith2000
Hmm, I have also experienced a few issues like that in the past with text models in half-precision (mainly float16). I think it is expected to have a few numerical differences between batched and unbatched runs. Can you try different values for `atol` and `rtol`?
```python
print(torch.allclose(outputs, batch_outputs, atol=1e-3, rtol=1e-3))
```
I think the highest "acceptable" threshold is something around 1e-3 or slightly above (4e-3)<|||||>Thanks for getting back to me @younesbelkada, if this is expected behavior, then no worries! Would you know if there are any resources that may explain why this is the case?<|||||>No worries!
For now my observations are purely empirical and based on my personal experience - would you mind trying your experiment with `float32` as well as `bfloat16`? The `bfloat16` data format was introduced so that half-precision models can keep training dynamics close to full precision and avoid overflow issues, so maybe the results will be slightly better.
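Something along these lines would compare the three dtypes (an untested sketch reusing the snippet from the issue; the tolerances are just the ones mentioned above):
```python
import torch
import diffusers

for dtype in (torch.float32, torch.float16, torch.bfloat16):
    pipe = diffusers.StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda", dtype)
    inputs = pipe.tokenizer(["hey"] * 10, return_tensors="pt", padding="max_length", max_length=77).input_ids.to("cuda")
    outputs = pipe.text_encoder(inputs).last_hidden_state
    batch_outputs = torch.cat([pipe.text_encoder(inputs[i : i + 1]).last_hidden_state for i in range(10)], dim=0)
    # check batched vs unbatched agreement for this dtype
    print(dtype, torch.allclose(outputs, batch_outputs, atol=1e-3, rtol=1e-3))
```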
If you want to read more about float16/bfloat16 and you are not familiar with it, I would recommend reading [this datatype section](https://huggingface.co/blog/hf-bitsandbytes-integration#common-data-types-used-in-machine-learning) written by @stas00 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,461 | open | [BUG] `current_segment_id` should increment from largest segment id. | The code below increments the segment id from the last iteration's segment id, which is wrong. The correct segment id should increment from the latest (i.e., the largest so far) segment id.
https://github.com/huggingface/transformers/blob/a7920065f2cfd2549b838f9a30afd7c265fcdd88/src/transformers/models/mask2former/image_processing_mask2former.py#L234-L237 | 05-19-2023 08:03:51 | 05-19-2023 08:03:51 | Hi @jimmysue, thanks for raising this issue!
For the segment ids, they increment from 0 -> total number of segments for each image. This is expected for segmentation like panoptic, as we want each individual instance to have its own segment ID assigned e.g. segment 0 -> car 0, segment 1 -> car 1, segment 3 -> sky etc.
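As a toy illustration of the incrementing behaviour being discussed (just a sketch, not the library code):
```python
# each detected instance gets its own id; the counter always moves past the largest id used so far
segments = []
current_segment_id = 0
for label in ["car", "car", "sky"]:
    segments.append({"id": current_segment_id, "label": label})
    current_segment_id = max(seg["id"] for seg in segments) + 1
print(segments)  # ids 0, 1, 2
```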
|
transformers | 23,460 | closed | Add InstructBLIP | # What does this PR do?
This PR adds [InstructBLIP](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip), a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2).
It's a bit like an open-source multimodal GPT-4, leveraging Flan-T5 and Vicuna pre-trained checkpoints.
Basic usage is as follows:
```
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
import requests
model = InstructBlipForConditionalGeneration.from_pretrained("...")
processor = InstructBlipProcessor.from_pretrained("...")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt, return_tensors="pt")
outputs = model.generate(
**inputs,
do_sample=False,
num_beams=1,
max_length=256,
min_length=1,
top_p=0.9,
repetition_penalty=1.5,
length_penalty=1.0,
temperature=1,
)
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
```
To do:
- [x] discuss whether to integrate the `QFormerTokenizer` into the processor
- [x] integration tests
- [x] figure out the best way to handle the various dtypes of the vision encoder and language model
Nice to haves:
- [ ] doc tests
- [ ] int8 support (cc @younesbelkada) | 05-19-2023 07:02:29 | 05-19-2023 07:02:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for your contribution! I noticed a potential problem with this open PR. It seems that the InstructBLIP processor is missing the QformerTokenizer compared to the BLIP2Processor. <|||||>Thanks for your review. Updates:
- all `autocast` logic was removed, turns out the implementation returns the same exact logits as the original implementation when also using `float32` for the original implementation. However, we may need to think about supporting various dtypes of building blocks of a model, cause if you'd do `from_pretrained("...", dtype=torch.float16")`, that would break for the Flan-T5 checkpoints, which require `bfloat16`. It would be nice to provide the possibility to load the vision encoder in `float16` and the language model in `bfloat16`.
- The `InstructBlipProcessor` is a bit different than other processors in the sense that it consists of 1 image processor and 2 tokenizers (one for the language model, one for the Q-Former). I've included logic to save the Q-Former tokenizer files in a separate folder on the hub as can be seen [here](https://huggingface.co/nielsr/instructblip-vicuna-7b/tree/main), and had to overwrite the `from_pretrained` and `save_pretrained` methods to make this work. I know that this logic may need to be addressed in a separate PR.<|||||>Will the converted weights be hosted on the model hub like blip-2?<|||||>All checkpoints are transferred: https://huggingface.co/models?other=instructblip.
Feel free to merge the PR.
The only thing left is uploading fast tokenizer files for the Vicuna-based checkpoints, but that can only be done once https://github.com/huggingface/transformers/issues/23889 is fixed. Currently the fast tokenizer is created on-the-fly based on the slow tokenizer files when loading from the hub.
Update: that's now also done, so it's entirely ready<|||||>@amyeroberts Could you have a final look and merge if you are happy?<|||||>> There's InstructBlipTextModelTester but no InstructBlipTextModelTest
In general, I would say yes to have 1-1 correspondence. But I don't want to make it strict if it doesn't really bring anything valuable.
The pipeline testing script would be easier if we have such correspondence, but since I was able to manage BLIP2 already, and this test file here is similar to BLIP2, I think it's fine.
> and some tests for InstructBlipModel are skipped because they're run in individual model tests.
It's same as CLIP test file, so it's OK :-)
<|||||>@ydshieh Thanks for reviewing & info about the tests!
> >and some tests for InstructBlipModel are skipped because they're run in individual model tests.
> It's same as CLIP test file, so it's OK :-)
Ah, sorry, I wasn't clear. What I meant was: if tests are skipped with the reason of being already tested in individual model tests, don't we need the modular tests classes implemented i.e. `InstructBlipTextModelTest`? <|||||>> Ah, sorry, I wasn't clear. What I meant was: if tests are skipped with the reason of being already tested in individual model tests, don't we need the modular tests classes implemented i.e. InstructBlipTextModelTest?
I agree (was thinking the same but my mind is lost in my reply).
@NielsRogge I will let you explain why there is no text model test class :-), which is the same as in BLIP2.
Well, after looking a bit, the text part is not a fixed model class
```
if config.use_decoder_only_language_model:
    language_model = AutoModelForCausalLM.from_config(config.text_config)
else:
    language_model = AutoModelForSeq2SeqLM.from_config(config.text_config)
```
I think that's the main reason why we don't have the test for that part.
<|||||>Hi, will this land soon? I would love to try out this model. Thanks!<|||||>Thanks @amyeroberts for your review, there was a bug with `LlamaTokenizerFast` that has now been fixed, now the absolute tolerance is much lower (1e-4 and 1e-5).
I've removed `InstructBlipModel` from this PR as that was copied from `Blip2Model` using the CookieCutter template. The latter was added in this PR: #21817. However I'm not sure why the latter got approved, cause it's not really in lign with the design of the library, meaning that `xxxModel` are models not including any head on top and not accepting a `labels` argument. However `Blip2Model` seems like an entire copy of `Blip2ForConditionalGeneration`, which seems odd to me.<|||||>Do the prompt need further packaging when inference? For example, BLIP2 use "Question: {prompt}? Answer: " as prompt. And which type of prompt be used in InstructBLIP? Or we only use question to ask the model?<|||||>@NielsRogge It appears in the current diff that there a some changes unrelated to this PR? Could you rebase to sync up with `main`? Could you also respond to the questions in the PR review instead of just marking as resolved? <|||||>Well ๐ <|||||>Merge it now as ๐ข is approved.<|||||>Hi @zdxff there's no specific prompt being used for InstructBLIP. You can just ask it questions like "What is unusual about this image?"<|||||>Will work on the 8bit / 4bit integration ASAP !
EDIT: here you go https://github.com/huggingface/transformers/pull/24488 |
transformers | 23,459 | closed | `never_split` not working on BertTokenizer | ### System Info
transformers 4.28.1
python 3.8.13
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
- I load BertTokenizer using my own vocab.txt, and add _'[outline]'_ to _never_split_; the token is included in my vocab.txt. However, _'[outline]'_ got split. The following is my code:
```python
tokenizer = BertTokenizer.from_pretrained(pretrained_path, never_split=['[outline]'])
input = "ใ[outline]"
print(tokenizer.tokenize(input))  # ['ใ', '[', 'out', '##line', ']']
```
- I also do:
`print(tokenizer.basic_tokenizer.tokenize(input)) #['ใ', '[', 'outline', ']']`
### Expected behavior
When I do:
`tokenizer.tokenize("ใ[outline]")`
Get the result as `['ใ', '[outline]']`; the tokens in `never_split` should not be split.
| 05-19-2023 06:50:54 | 05-19-2023 06:50:54 | cc @ArthurZucker @younesbelkada <|||||>The '[' or ']' in BertTokenizer is punctuation, so it will be split first. And since `outline` or `[outline]` is not in the vocab, it will be set to UNK. It doesn't seem to make sense anymore.
Look at the code: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py#L446

<|||||>> The '[' or ']' in BertTokenizer is punctuation, it will be split at first. And the `outline` or `[outline]` is not in vocab, its will be set UNK. It doesn't seem to make sense anymore. Look the code: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py#L446 
Thanks for replying. As stated before, I am using my own vocab, and '[outline]' is in it:
tokenizer = BertTokenizer.from_pretrained(my_vocab_path, never_split='[outline]')
print(tokenizer.convert_tokens_to_ids('[outline]'))
print(tokenizer.convert_tokens_to_ids('ใ'))
print(tokenizer.tokenize('ใ[outline]'))

<|||||>Hey, reading the doc for the `BertTokenizer`, you should be using the `do_basic_tokenize=True` argument, as mentioned [here](https://github.com/ArthurZucker/transformers/blob/f732a643ab47a324405dc583532bbbfc45e2d8dc/src/transformers/models/bert/tokenization_bert.py#L153). <|||||>> Hey, reading the doc for the `BertTokenizer`, you should be using the `do_basic_tokenize=True` argument, as mentioned [here](https://github.com/ArthurZucker/transformers/blob/f732a643ab47a324405dc583532bbbfc45e2d8dc/src/transformers/models/bert/tokenization_bert.py#L153).
Your link is broken, it says '404 - page not found'?
Plus, `do_basic_tokenize=True` is the default setting. Even if I add it explicitly, the result stays the same.
tokenizer = BertTokenizer.from_pretrained(my_vocab_path, never_split=['[outline]'], do_basic_tokenize=True)
print(tokenizer.tokenize('ใ[outline]')) # ['ใ', '[', 'out', '##line', ']']
Correct me if I am doing anything wrong.<|||||>Sorry - anyway, the argument was set to `True` by default, so that's not the problem.
Let me investigate. In the meantime, doing `tokenizer.add_token("[outline]", special_token = True)` should (I think) prevent it from being split<|||||>( the doc mentions :
```python
never_split (`List[str]`, *optional*)
Kept for backward compatibility purposes. Now implemented directly at the base class level (see
[`PreTrainedTokenizer.tokenize`]) List of token not to split.
```<|||||>The best solution is to add the token to the list of special tokens using the `add_token` method<|||||>Yeah, add it as special_token does take care of the splitting problem. But in the latter process, I will decode with argument `skip_special_tokens=True`. Then the token will be skipped, while I don't want it be. For now, I add it to the special token list, but I still suggest fixing the `never_split` argument.
<|||||>Then that means that the token that you want to add is not `special`. I think that if you add it without `special_token` set to `True` it should not be split, no? <|||||>Without `special_token` set to True, it will be split.<|||||>No it won't :
```python
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
tokenizer.add_tokens("[outline]")
tokenizer.added_tokens_encoder
>>> {'[outline]': 30522}
tokenizer.encode("[outline]")
>>> [101, 30522, 102]
tokenizer.decode(tokenizer.encode("[outline]"))
>>> '[CLS] [outline] [SEP]'
print(tokenizer.tokenize(". [outline]"))
>>> ['.', '[outline]']
tokenizer.decode(tokenizer.encode(". [outline]"), skip_special_tokens=True)
>>> '. [outline]'<|||||>In your case, it won't. But I am using a different vocab.txt, it splits.<|||||>Seems like `'[outline]'` will not be added anyway, since it's already in the vocab.<|||||>I don't understand. You have a very specific usage, where you don't want to split `[outline]` that is already in your vocab.
The basic tokenizer works as expected: `tokenizer.basic_tokenizer.tokenize("[outline]")` will not split it.
When you are calling `tokenize` on the `BertTokenizerClass` the `_tokenize` function is then called, which relies on the `all_special_ids`. That means that the token should be added to both lists.
```python
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False, never_split= ["outline"])
tokenizer.add_tokens("[outline]")
```
I am guessing that this should work
<|||||>Edit: I manually added "[outline]" to my vocab and it worked for both of the solutions I gave you<|||||>Unfortunately, it still doesn't work on my vocab. I think it is strictly related to the vocab. So far, only adding it to the special tokens works for me.
Also, what I posted before shows that the basic tokenizer splits it: `tokenizer.basic_tokenizer.tokenize("[outline]")` splits it into `'[', 'outline', ']'`. The tokenizer then sends the split tokens to WordPiece instead of keeping the original `'[outline]'`. I think that may be the reason. <|||||>[vocab.txt](https://github.com/huggingface/transformers/files/11564244/vocab.txt)
Here is my vocab, you can try on it.<|||||>I tried loading a tokenizer using your vocabulary and I cannot reproduce your issue.
Try downloading the latest `transformers` version!<|||||>Why......
I've updated transformers to 4.29.2, still the same result....
here is my code
```python
tokenizer = BertTokenizer.from_pretrained('../base_model/vocab.txt', never_split= ["[outline]"])
tokenizer.add_tokens("[outline]")
print(tokenizer.tokenize("ใ[outline]"))
# ['ใ', '[', 'out', '##line', ']']
```
<|||||>Can you try `tokenizer = BertTokenizer.from_pretrained('../base_model', never_split= ["[outline]"])`
Also, I would suggest you create a Colab; this will make sure that your cache is not messing with this. <|||||>Here is the Colab result:

<|||||>Can you share a link to the Colab? I'll try to reproduce and modify a copy.
<|||||>Also you did not add the token using `add_token(..., special_token = False)`<|||||>Another solution is to initialise the tokenizer using `...from_pretrained( path, additional_special_tokens = ["[outline]"]) ` <|||||>https://colab.research.google.com/drive/1EStD5K_lQM0-PgMUQ8z273TzAUcgY2IV?usp=sharing
You need to scroll down to the bottom to see the code; `add_token(...)` is already added.
`additional_special_tokens` adds `[outline]` to the special tokens too, so it works fine. But it still hits the `skip_special_tokens` problem. Anyway, this issue is about the `never_split` argument not working, so let's focus on this.<|||||>Thanks a lot.
Indeed, the token is not added, because in `_add_token` a check prevents it from being added if it is already in the vocab.
Workaround:
```python
tokenizer.added_tokens_encoder.update({"[outline]":85})
tokenizer.added_tokens_decoder.update({85:"[outline]"})
tokenizer.unique_no_split_tokens = sorted(set(tokenizer.unique_no_split_tokens).union({"[outline]"}))
tokenizer._create_trie(tokenizer.unique_no_split_tokens)
```
It is not really elegant indeed. Also, adding a token means that whether or not it is in the vocab, we want it to be in the added tokens, so I think it makes sense to add it even if it already exists. WDYT @Narsil
edit: I think it comes down to a choice, and both could have pros and cons.<|||||>About never_split, the last commit is 4 years old, it has never been touched, and I'd rather we find a way to work around your problem using new code rather than changing legacy code! <|||||>Glad we are on the same page in the end. <|||||>I am not entirely sure yet whether or not we will support this, as the fast tokenizers don't, and my few tests appear to show that it might not be optimal<|||||>For now closing as `wontfix`; if more people require such usage, we will make it available.
TL;DR: adding a token as an `AddedToken` when it is already in the vocab.
Will not fix because:
- fast does not support this and we want to keep fast stable
- it's a breaking change
- seems to be a specific usage, not the one intended for `add_tokens` |
transformers | 23,458 | closed | Update streamers.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-19-2023 05:11:13 | 05-19-2023 05:11:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23458). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,457 | closed | TF: standardize `test_model_common_attributes` for language models | # What does this PR do?
`test_model_common_attributes` was overridden in most TF LMs because:
1. it was not able to handle legacy classes with LM heads (there was not a single set of autoclasses that would catch them)
2. many modern decoder-only LMs do not have a bias in the LM head
This PR adapts the test to account for these 2 cases, and removes a large number of overridden tests.
| 05-18-2023 20:30:41 | 05-18-2023 20:30:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts now with deprecation messages (as opposed to deletions) and pytest asserts ๐ค
@Rocketknight1 ping ๐ |
transformers | 23,456 | closed | TF: CTRL with native embedding layers | # What does this PR do?
Follows up on #23436, migrating the embedding layer of CTRL to native Keras embeddings
CTRL needed several related changes, so it deserves a stand-alone PR. This PR:
1. Replaces `TFSharedEmbeddings` by the native Keras layers
2. Fixes resized bias serialization, just like https://github.com/huggingface/transformers/pull/19013 does for BART -- in the process, gets rid of the separate LMHead class, which is outdated and tied to code scheduled for deprecation, and moves functions like `set_bias` to the right place
3. Fixes XLA issues (`prepare_inputs_for_generation` was incomplete)
| 05-18-2023 19:13:54 | 05-18-2023 19:13:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts comments addressed ๐
I've double-checked: slow tests are passing for CTRL |
transformers | 23,455 | closed | Clean up CUDA kernels | # What does this PR do?
In the PR adding RWKV, a new `kernels` folder was created. This PR does some additional cleanup by moving the CUDA kernels of existing models into this new folder. | 05-18-2023 17:54:36 | 05-18-2023 17:54:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,454 | closed | Add an option to log result from the Agent | # What does this PR do?
This PR makes it possible to customize how results of the agent are displayed (default is using `print`) by adding a `set_stream` method.
Should address #23354 | 05-18-2023 17:31:41 | 05-18-2023 17:31:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,453 | closed | Minor awesome-transformers.md fixes | # What does this PR do?
Fixes minor typos and updates link to Nebuly (renamed from nebullvm)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-18-2023 15:26:18 | 05-18-2023 15:26:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik sorry for the ping, but can you check the PR? |
transformers | 23,452 | closed | Properly guard PyTorch stuff | # What does this PR do?
#23438 accidentally broke main since the objects in `.generation` imported are only available when PyTorch is installed. This PR fixes that. | 05-18-2023 15:26:04 | 05-18-2023 15:26:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,451 | closed | Less flaky `test_assisted_decoding_matches_greedy_search` | # What does this PR do?
Less flaky `test_assisted_decoding_matches_greedy_search`: fail only if more than 1 failure among 10. | 05-18-2023 13:43:59 | 05-18-2023 13:43:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,450 | closed | Fix DecisionTransformerConfig doctring | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently, the default values for the `n_layer` and `n_head` parameters in the `DecisionTransformerConfig` are set to 3 and 1, respectively. However, the docstring says that the default value is 12 for both. This PR fixes the docstring to reflect the correct default values.
Default values:
https://github.com/huggingface/transformers/blob/f2d2880bbbd7769e12c37471af0b067b379dfc43/src/transformers/models/decision_transformer/configuration_decision_transformer.py#L120-L121
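A quick, illustrative way to confirm the defaults mentioned above:
```python
from transformers import DecisionTransformerConfig

config = DecisionTransformerConfig()
print(config.n_layer, config.n_head)  # 3 1, not 12 as the docstring claimed
```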
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
CC @edbeeching
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-18-2023 12:21:43 | 05-18-2023 12:21:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,449 | closed | Leverage Langchain Prompt Templates | By leveraging the Prompt Templates and the Conversation Memory we can induce ReAct ( Thought, Observation ) approach from Langchain | 05-18-2023 11:53:36 | 05-18-2023 11:53:36 | cc @LysandreJik @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,448 | closed | Generate: increase left-padding test atol | # What does this PR do?
As stated in #23437, `GPTBigCode` was failing the left padding test. Upon further investigation:
1. `GPTBigCode` is left padding compatible, it accepts `position_ids` and uses them correctly
2. The test is only failing on CPU
3. The diff is extremely small...
...so I've raised `atol` to `1e-7` (instead of the default `1e-8`) ๐ With `1e-7`, we can still detect failing cases, like the ones skipped in #23437 | 05-18-2023 11:23:19 | 05-18-2023 11:23:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,447 | closed | Error while running agent for Image Question Answering | ### System Info
Code :
import requests
from PIL import Image
import torch
image_url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(image_url, stream=True).raw)
#document = '/content/child_death.png'
agent.run(
question = "Which Country has the highest number of child deaths ?",
image=image,
task = 'image_qa'
)
Error:
==Explanation from the agent==
I will use the following tool: `image_qa` to answer a question about an image.
==Code generated by the agent==
answer = image_qa(image=image, question=question)
print(f"The answer is {answer}.")
==Result==
Traceback (most recent call last):
  in <cell line: 8>:8
  /usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py:323 in run
      return evaluate(code, self.cached_tools, state=kwargs.copy())
  /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:61 in evaluate
      line_result = evaluate_ast(node, state, tools)
  /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:98 in evaluate_ast
      return evaluate_assign(expression, state, tools)
  /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:139 in evaluate_assign
      result = evaluate_ast(assign.value, state, tools)
  /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:101 in evaluate_ast
      return evaluate_call(expression, state, tools)
  /usr/local/lib/python3.10/dist-packages/transformers/tools/python_interpreter.py:167 in evaluate_call
      return func(*args, **kwargs)
  /usr/local/lib/python3.10/dist-packages/transformers/tools/base.py:534 in __call__
      encoded_inputs = self.encode(*args, **kwargs)
  /usr/local/lib/python3.10/dist-packages/transformers/tools/image_question_answering.py:49 in encode
      return self.pre_processor(image, question, return_tensors="pt")
  /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/processing_vilt.py:107 in __call__
      encoding_image_processor = self.image_processor(images, return_tensors=return_te
  /usr/local/lib/python3.10/dist-packages/transformers/image_processing_utils.py:464 in __call__
      return self.preprocess(images, **kwargs)
  /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:462 in preprocess
      images = [
  /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:463 in <listcomp>
      self.resize(image=image, size=size, size_divisor=size_divisor, resample=
  /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:230 in resize
      output_size = get_resize_output_image_size(image, shorter=shorter, longer=longer
  /usr/local/lib/python3.10/dist-packages/transformers/models/vilt/image_processing_vilt.py:87 in get_resize_output_image_size
      input_height, input_width = get_image_size(input_image)
  /usr/local/lib/python3.10/dist-packages/transformers/image_utils.py:205 in get_image_size
      channel_dim = infer_channel_dimension_format(image)
  /usr/local/lib/python3.10/dist-packages/transformers/image_utils.py:169 in infer_channel_dimension_format
      raise ValueError("Unable to infer channel dimension format")
ValueError: Unable to infer channel dimension format
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code :
import requests
from PIL import Image
import torch
image_url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(image_url, stream=True).raw)
#document = '/content/child_death.png'
agent.run(
question = "Which Country has the highest number of child deaths ?",
image=image,
task = 'image_qa'
)
### Expected behavior
Answer from the graph | 05-18-2023 10:59:55 | 05-18-2023 10:59:55 | Hi @pratikkotian04, thanks for raising this issue and for providing so much detail.
I'm able to reproduce the issue. It seems the error is coming from our image processing library, I'll dig into it.<|||||>@pratikkotian04 The error is arising because the image has a depth channel i.e. is in RGBA format, and the image processors expect images to have 1 or 3 image channels. This is a know brittleness with the image processing library and a priority for me to address.
A robust solution is a bit involved, so it won't be merged in the next day or two. A quick solution is to convert the image to RGB format before passing it to the agent:
```python
import requests
from PIL import Image
from transformers import HfAgent
image_url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(image_url, stream=True).raw)
image = image.convert("RGB")
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.run(
question = "Which Country has the highest number of child deaths ?",
image=image,
task = 'image_qa'
)
```
I was able to run the above snippet (although the response was: `The answer is england.` ๐ )
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,446 | closed | Update tiny models and pipeline tests | # What does this PR do?
Update tiny models and pipeline tests.
The two failing pipeline tests for `rwkv` are addressed in #23442 and #23444 (we need to merge them before this one) | 05-18-2023 09:30:27 | 05-18-2023 09:30:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23446). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,445 | closed | Allow dict input for audio classification pipeline | # What does this PR do?
Allow dictionary inputs for the audio classification pipeline
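A hedged usage sketch of the kind of input this enables (the checkpoint name here is just an example):
```python
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline("audio-classification", model="superb/wav2vec2-base-superb-ks")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

# pass the datasets-style dict (array + sampling_rate) straight to the pipeline
sample = {"array": ds[0]["audio"]["array"], "sampling_rate": ds[0]["audio"]["sampling_rate"]}
print(pipe(sample))
```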
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-18-2023 08:44:17 | 05-18-2023 08:44:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Apologies @sgugger! To clarify, the changes in this PR are one-for-one copied from the input arguments in https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/automatic_speech_recognition.py
Essentially, the PR allows users to input a dictionary of inputs to the pipeline. This aligns the pipeline with `datasets`, where the `audio` column returns a dict with `array` (the 1-d audio array) and `sampling_rate` (the sampling rate of the audio):
```python
from datasets import load_dataset
librispeech = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
librispeech[0]["audio"]
```
**Output:**
```
{'path': '/Users/sanchitgandhi/.cache/huggingface/datasets/downloads/extracted/aad76e6f21870761d7a8b9b34436f6f8db846546c68cb2d9388598d7a164fa4b/dev_clean/1272/128104/1272-128104-0000.flac',
'array': array([0.00238037, 0.0020752 , 0.00198364, ..., 0.00042725, 0.00057983,
0.0010376 ]),
'sampling_rate': 16000}
```
(the `path` column is deprecated and no longer required, but retained for backwards compatibility. This is what removing `path` refers to in the PR)
This PR enables the dict to be passed directly to the pipeline, in the same way that we do for the ASR pipeline and the `transformers` feature extractors:
```python
pred_labels = pipe(librispeech[0]["audio"])
```
If there are any API decisions you feel require changing, I'd be happy to update these in the original code before propagating to this file.<|||||>I think what you're trying to do is already supported, but the sampling rate needs to be in the same dict as the array (both are needed to represent a single audio).
That being said, the errors raised when misusing this feature could probably be largely improved (to guide users towards the correct form). |
transformers | 23,444 | closed | Fix (skip) a pipeline test for `RwkvModel` | # What does this PR do?
Fix (skip) a pipeline test for `RwkvModel`. | 05-18-2023 08:31:20 | 05-18-2023 08:31:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,443 | closed | fix: load_best_model_at_end error when load_in_8bit is True | Ref: https://github.com/huggingface/peft/issues/394
Loading a quantized checkpoint into non-quantized Linear8bitLt is not supported.
call module.cuda() before module.load_state_dict()
# What does this PR do?
fix: load_best_model_at_end error when load_in_8bit is True
Ref: https://github.com/huggingface/peft/issues/394
Loading a quantized checkpoint into non-quantized Linear8bitLt is not supported.
call module.cuda() before module.load_state_dict()
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-18-2023 08:05:00 | 05-18-2023 08:05:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger @younesbelkada <|||||>>
using load_adapter seems to be a smarter solution, I'll try it.<|||||>Awesome, let us know how it goes!<|||||>is this is addressing where the bug came from... looking at an older working version https://github.com/huggingface/transformers/blob/d941f07a4e3bc7b61b7afbd25d6e2e8427fccc6d/src/transformers/trainer.py#L2170r version, the code being edited is functionally the same... I think we need to test more to see which repo the issue is from, PEFT, Transformers or bitsandbytes, and find what introduced it. Unless someone knows the change which would have caused this? <|||||>according to https://github.com/huggingface/peft/issues/286#issuecomment-1512611968
the correct way to save the intermediate checkpoints for PEFT when using Trainer would be to use Callbacks.
so we can assume that the adapter has been saved properly, and `load_adapter` from `self.state.best_model_checkpoint`.
|
transformers | 23,442 | closed | Make `RwkvModel` accept `attention_mask` but discard it internally | # What does this PR do?
`RwkvModel` doesn't accept `attention_mask` but the tokenizer (it is `GPTNeoXTokenizer` from the checkpoints) prepares this input. When I try to run the pipeline tests for this model, I get
```bash
TypeError: forward() got an unexpected keyword argument 'attention_mask'
```
I see it would be quite annoying for people using this model with the default tokenizer. So it might be good to accept it then discard it internally with a warning.
(If `RwkvModel` had its own tokenizer class `RwkvTokenizer`, we could do this inside the tokenizer class, but that is not the case here.)
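For illustration, a hypothetical user-side sketch of the "accept then discard with a warning" idea (the PR does this inside the model itself; the helper name here is made up):
```python
import warnings

def filter_unused_kwargs(model_inputs: dict) -> dict:
    """Drop `attention_mask` (with a warning) before calling a model that does not use it."""
    if "attention_mask" in model_inputs:
        warnings.warn("`attention_mask` is not used by RWKV; dropping it.")
        model_inputs = {k: v for k, v in model_inputs.items() if k != "attention_mask"}
    return model_inputs

# usage: model(**filter_unused_kwargs(tokenizer("hello", return_tensors="pt")))
```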
| 05-18-2023 07:52:29 | 05-18-2023 07:52:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,441 | closed | Same capabilities for exporting using torchscript as with ONNX | ### Feature request
There are two supported ways to serialize the model: ONNX or TorchScript. Looking at the documentation, in [ONNX](https://huggingface.co/docs/transformers/main/serialization), the ONNX exporter seems to save both the processor and the model to a file, while the TorchScript exporter just exports the model itself.
How do I do the same for the torchscript exporter?
Using ONNX
```
from pathlib import Path
from transformers.onnx import export
from transformers import AutoProcessor, AutoModel
from transformers import AutoConfig
from transformers.models.owlvit.configuration_owlvit import OwlViTOnnxConfig
onnx_path = Path("model.onnx")
model_ckpt = "google/owlvit-base-patch32"
config = AutoConfig.from_pretrained(model_ckpt)
onnx_config = OwlViTOnnxConfig(config)
base_model = AutoModel.from_pretrained(model_ckpt)
processor = AutoProcessor.from_pretrained(model_ckpt)
onnx_inputs, onnx_outputs = export(processor, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
What is the equivalent thing in torchscript?
```
from pathlib import Path
from transformers import AutoProcessor, AutoModel
base_model = AutoModel.from_pretrained(model_ckpt, torchscript=True)
processor = AutoProcessor.from_pretrained(model_ckpt, torchscript=True)
# what should these two lines be?
traced_model = torch.jit.trace(base_model, ??) #
torch.jit.save(traced_model, "traced_bert.pt")
```
### Motivation
Being able to use torchscript on processor
### Your contribution
Anything that I can do | 05-18-2023 07:43:08 | 05-18-2023 07:43:08 | @darwinharianto There's a guide on exporting to torchscript in the docs: https://huggingface.co/docs/transformers/torchscript
Let us know if there's any information missing there. <|||||>I can safely export the model to torchscript using this code, but I get this `TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect.`. I assume I can ignore this for now(?).
this is the code to export the model
```
import torch
import numpy as np
from transformers import AutoModel, AutoProcessor, OwlViTModel, OwlViTProcessor
from PIL import Image
import requests

class MyOpenDetector(torch.nn.Module):
    def __init__(self, model=None):
        super(MyOpenDetector, self).__init__()
        self.model = model

    def forward(self, input_ids, pixel_values, attention_mask):
        # inputs = {"input_ids":x[0], "attention_mask":x[1], "pixel_values":x[2]}
        outputs = self.model(input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask)
        # print(type(outputs))
        logits_per_image = outputs[0]  # this is the image-text similarity score
        probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
        return probs

def save_owlvitmodel(inputs, modelname):
    openModel = AutoModel.from_pretrained(modelname, torchscript=True).eval()
    x = tuple([inputs['input_ids'], inputs['pixel_values'], inputs['attention_mask']])
    model = MyOpenDetector(model=openModel)
    traced_model = torch.jit.trace(model, x)
    torch.jit.save(traced_model, 'traced_owlvit.pt')
    return

modelname = "google/owlvit-base-patch32"
processor = AutoProcessor.from_pretrained(modelname, torchscript=True)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=torch.Tensor(np.asarray(image)), return_tensors="pt")
save_owlvitmodel(inputs, modelname)

loaded_model = torch.jit.load("traced_owlvit.pt")
loaded_model.eval()
x = tuple([inputs['input_ids'], inputs['pixel_values'], inputs['attention_mask']])
probs = loaded_model(*x)
print(probs)
```
how can I do the same for processor?
```
import torch
import numpy as np
from transformers import AutoModel, AutoProcessor, OwlViTModel, OwlViTProcessor
from PIL import Image
import requests

class MyOpenProcessor(torch.nn.Module):
    def __init__(self, processor=None):
        super(MyOpenProcessor, self).__init__()
        self.processor = processor

    def forward(self, text, images):
        outputs = self.processor(text=text, images=images, return_tensors="pt")
        return outputs

modelname = "google/owlvit-base-patch32"
processor = AutoProcessor.from_pretrained(modelname, torchscript=True)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=torch.Tensor(np.asarray(image)), return_tensors="pt")

x = tuple([[["a photo of a cat", "a photo of a dog"]], torch.Tensor(np.asarray(image))])
newProcessor = MyOpenProcessor(processor=processor)
traced_processor = torch.jit.trace(newProcessor, x)
```
this throws an error because the example inputs are a tuple containing a list of lists of strings, which `torch.jit.trace` cannot handle (it only accepts tensors and containers of tensors). |
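For what it's worth, a hedged sketch of the usual workaround: keep the processor call in plain Python and trace only the tensor-consuming module. This reuses `processor`, `image`, `np`, `torch`, `modelname`, `MyOpenDetector` and `AutoModel` from the snippets above:
```python
inputs = processor(
    text=[["a photo of a cat", "a photo of a dog"]],
    images=torch.Tensor(np.asarray(image)),
    return_tensors="pt",
)
example = (inputs["input_ids"], inputs["pixel_values"], inputs["attention_mask"])
wrapped = MyOpenDetector(model=AutoModel.from_pretrained(modelname, torchscript=True).eval())
traced = torch.jit.trace(wrapped, example)  # tensors only, so tracing works
```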
transformers | 23,440 | closed | add cleanlab to awesome-transformers tools list | This PR adds the cleanlab package to the awesome-transformers list.
cleanlab uses **transformers** & Hugging Face in all sorts of ways. Here's a couple of them:
- [tutorial for text data with **transformers**](https://docs.cleanlab.ai/stable/tutorials/datalab/text.html)
- [active learning with **transformers**](https://github.com/cleanlab/examples/blob/master/active_learning_transformers/active_learning.ipynb)
- [auditing data in the **datasets** format](https://github.com/cleanlab/examples/blob/master/datalab_image_classification/datalab.ipynb), which is now the [primary data format](https://docs.cleanlab.ai/stable/cleanlab/datalab/datalab.html) supported by cleanlab
- [how to wrap a **transformers** model to be sklearn compatible](https://github.com/cleanlab/examples/blob/master/transformer_sklearn/transformer_sklearn.ipynb)
I'm not sure which reviewer is most appropriate, but perhaps one of the listed documentation reviewers:
@sgugger, @stevhliu or @MKhalusova
| 05-17-2023 22:05:57 | 05-17-2023 22:05:57 | Also ccing @LysandreJik who compiled the list :)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,439 | open | Add FastSpeech2Conformer | # What does this PR do?
Adds the TTS (text-to-speech) conformer version of the FastSpeech2 model. Closest related issue is [#15166](https://github.com/huggingface/transformers/issues/15166) though this implements ESPnet's conformer version instead of Fairseq's version as suggested in https://github.com/huggingface/transformers/pull/15773#issuecomment-1529558164.
[FastSpeech2 paper (Microsoft)](https://arxiv.org/pdf/2006.04558.pdf)
[Conformer version paper (ESPnet)](https://arxiv.org/pdf/2010.13956.pdf)
Conformer version code implementation: https://github.com/espnet/espnet/tree/master/espnet2/tts/fastspeech2
Additional conformer version code code implementation: https://github.com/DigitalPhonetics/IMS-Toucan/blob/ToucanTTS/TrainingInterfaces/Text_to_Spectrogram/FastSpeech2
The paper abstracts say most of this, but the main points of what makes this model an interesting addition are:
- It's non auto-regressive, leading to faster inference since it doesn't have to make predictions sequentially (hence the name `FastSpeech`)
- Uses a variance predictor in between the encoder and decoder to explicitly predict duration, pitch, and energy, leading to more accurate results
- Conformer architectures have been shown to improve performance in text to speech tasks, with the convolutions learning close range speech patterns and transformer attention helping with understanding longer range contexts
- There is currently only one other text-to-speech model in `transformers` (`SpeechT5ForTextToSpeech`)
## To do
- [x] Prepared 🤗 Transformers dev environment
- [x] Set up debugging environment of the original repository
- [x] Created script that successfully runs the forward() pass using the original repository and checkpoint
- [x] Successfully added the model skeleton to 🤗 Transformers (+ vocoder)
- [x] Successfully converted original checkpoint to 🤗 Transformers checkpoint (+ vocoder)
- [x] Successfully ran forward() pass in 🤗 Transformers that gives identical output to original checkpoint (+ vocoder)
- [x] Finished model tests in 🤗 Transformers
- [x] Successfully added tokenizer in 🤗 Transformers
- [x] Run end-to-end integration tests
- [x] Finished docs
- [x] Uploaded model weights to the Hub (will ask they're moved to just `fastspeech2_conformer` when ready)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@hollance @sanchit-gandhi | 05-17-2023 22:02:44 | 05-17-2023 22:02:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23439). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for the review @hollance! Addressed the comments above, the only part that might need follow-up discussion is making the `labels` compatible with the `Trainer`
>Re labels, FastSpeech2 is somewhat unique in that it takes in many labels (spectrograms, pitch, energy, and duration) for training. I'm not sure exactly what this means for compatibility with Trainer since I haven't had time to do a deeper dive, but for now I've changed the "targets" to include _labels in their name, left the training test as skipped, and planning to look into it more when I do the demo notebook.<|||||>Appreciate the comments @hollance, @ArthurZucker @sanchit-gandhi this should be ready for review for you now<|||||>Thank you for the review @sanchit-gandhi, comments should be addressed now.
Centralizing a note on passing the config instead of args here since there were a few comments on that - the other modules mentioned are all instantiated twice with different arg values so they can't solely be passed the config. Lmk if you think there's something I missed or if you'd prefer something else like duplicating the modules in order to pass just the config.
transformers | 23,438 | closed | Add local agent | # What does this PR do?
This PR adds support for a local agent running the model to generate code on the user's machine. | 05-17-2023 20:25:27 | 05-17-2023 20:25:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,437 | closed | Generate: skip left-padding tests on old models | # What does this PR do?
We have a few left-padding tests polluting our daily CI -- these are from old models, where it is not worth committing >1hr per model to add support for left padding.
This PR skips the test in those models, so we can focus our energy where it matters :) | 05-17-2023 18:29:51 | 05-17-2023 18:29:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>It seems like `GPTBigCode` needs to be fixed! Adding back the `@slow` to defer fixing.<|||||>@gante it all passes now
|
transformers | 23,436 | closed | TF: GPT2 with native embedding layers | # What does this PR do?
This PR continues the (paused) goal of deprecating our custom TF embedding layers and related code. Previously, we have converted encoder-decoder models (e.g. [here](https://github.com/huggingface/transformers/pull/19263)), removing `TFSharedEmbeddings` there and making the necessary adaptations.
In this PR, I make the necessary adaptations for GPT2. The goal is for you, the reviewers, to raise objections in this PR :D All slow tests for TF GPT2 are passing.
Then, the following sequence of PRs will be opened:
1. Remove `TFSharedEmbeddings` from the other decoder-only models
2. Remove other uses of `TFSharedEmbeddings` in the codebase (e.g. in tests)
3. Remove `resize_token_embeddings` and all related functions (it is only used to resize our models' embeddings instantiated with `TFSharedEmbeddings`)
4. Remove the slow decorator from `test_save_load_after_resize_token_embeddings`, which will be fixed as a consequence of these changes | 05-17-2023 17:56:54 | 05-17-2023 17:56:54 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 23,435 | closed | Fix device issue in `SwiftFormerModelIntegrationTest::test_inference_image_classification_head` | # What does this PR do?
The title says everything. | 05-17-2023 17:06:34 | 05-17-2023 17:06:34 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 23,434 | closed | Export to ONNX doc refocused on using optimum, added tflite | Currently, despite the notes saying that the recommended way is to use Optimum, the main focus of the doc is on using `transformers.onnx`, which is no longer maintained (according to optimum team).
This PR restructures the doc to put the optimum examples forward as the primary way for exporting models to ONNX.
The example of using `transformers.onnx` is kept for potential compatibility reasons, however, I have removed the examples of adding new architectures to transformers.onnx, as I believe this should be done in Optimum instead (links provided).
As suggested by the team, I have also added an example for TFLite, and for that reason, renamed the page to "Export to ONNX, TFLite"
UPD: since we now link to Optimum docs for the list of supported architectures, I have also removed the ONNX list check & update from the check_table script. | 05-17-2023 16:38:27 | 05-17-2023 16:38:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,433 | closed | remove unnecessary print in gpt neox sequence classifier | # What does this PR do?
Removes an unnecessary print from GPT-NeoX's sequence classifier output, muffling debug output that was likely left in during development. Before this change, training GPT-NeoX as a sequence classifier would `print` the logit/label sizes at every training step, which is hard to suppress and generally not useful.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). **NA, original behaviour not documented**
- [ ] Did you write any new necessary tests? **NA, change too simple to require test**
## Who can review?
- text models: @ArthurZucker and @younesbelkada
| 05-17-2023 16:35:55 | 05-17-2023 16:35:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,432 | closed | Remove hardcoded prints in Trainer | There are some print calls in the Trainer when using 8-bit optimizers that might be annoying when they can't be disabled. This PR replaces them with `logger.info` calls, but please let me know if there's a better way of handling them.
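For illustration, a hypothetical before/after of this kind of change (the message text and variable names below are made up, not the actual Trainer strings):
```python
import logging

logger = logging.getLogger(__name__)

module_name, param_count = "lm_head", 65_000_000  # illustrative values

# before: print(f"skipped {module_name}: {param_count / 2**20:.2f}M params")
# after: route the same information through the logger so callers can silence it
logger.info("skipped %s: %.2fM params", module_name, param_count / 2**20)
```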
@sgugger | 05-17-2023 16:24:31 | 05-17-2023 16:24:31 | |
transformers | 23,431 | closed | Update Bigbird Pegasus tests | # What does this PR do?
Need to update some values in the tests after #23056 (which itself updated some values too).
(Otherwise tests fail at this moment).
| 05-17-2023 15:54:53 | 05-17-2023 15:54:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,430 | closed | 🌐 [i18n-KO] Translated `tasks/zero_shot_object_detection.mdx` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/zero_shot_object_detection.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- A record is kept on the main issue! If you are practicing with the PseudoLab repo, please remove this section. Thank you! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Please reveal the review-request comment below to the PseudoLab team members only after all the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the review-request comment below to the Hugging Face staff only after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 05-17-2023 15:30:31 | 05-17-2023 15:30:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,429 | closed | fix bug in group_texts function, that was inserting short batches | # What does this PR do?
<!-- Remove if not applicable -->
Fixes #23424
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @ArthurZucker | 05-17-2023 15:12:08 | 05-17-2023 15:12:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>You are right, my bad!
What about if we made it like this:
```python
k: [t[i : i + block_size] for i in range(0, total_length - block_size + 1, block_size)]
```
this way, we will not drop an entire batch when `total_length >= block_size`, and at the same time we will return an empty batch in the case where `total_length < block_size`.
the issue with the code now is that it allows for some samples to have `sequence_length < block_size`, which throws an error when the `data_collator` tries to convert the batches into tensors.
```
batch[k] = torch.tensor([f[k] for f in features])
ValueError: expected sequence of length 128 at dim 1 (got 94)
```
<|||||>Won't the empty block cause you the same error though?<|||||>The empty entries are removed and aren't there anymore after the mapping with `group_texts` happens. Even though I don't know why the `map` function removes empty entries.
I tried to replicate it here:
```python
import datasets

def gen():
    yield {"id": [1, 2, 3]}
    yield {"id": [1]}

ds = datasets.Dataset.from_generator(gen)

def remove_short(example):
    if len(example['id']) < 2:
        example['id'] = []
    return example

c = ds.map(remove_short)
for cc in c:
    print(cc)
```
but got the output:
```
{'id': [1, 2, 3]}
{'id': []}
```
I don't know why it removes the empty batch in the training script, and not here.
<|||||>We can also just remove the test which is how the script was originally written. The problem is that it then caused issues with users having a small dataset (see #12438).
So all in all there is always going to be one group of users not happy since there is no good solution. This script is primarily intended for training, so it just shouldn't be executed on a small dataset :man_shrugging: <|||||>This will only fail when the dataset has only one batch, and the total_length < block_size. But I don't think that's a valid case. So, I think we should just adjust for the normal case where we have many batches, but sometimes one batch would be too short that we need to exclude it, instead of returning a batch with shorter length, that will cause an error. <|||||>True. So let's go with removing the test at line 496 (and unindent `total_length = (total_length // block_size) * block_size`).<|||||>Great!
Should I create a new PR, or add a new commit to this one?
Should I do one for the rest of the Language modeling examples? <|||||>This PR is fine, and yes please treat other examples the same way.<|||||>yes, I did it, pushing now
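For reference, a minimal sketch of what `group_texts` looks like with the change agreed in the discussion above, i.e. dropping the remainder unconditionally (based on the `run_clm.py` example; `block_size` is assumed to be defined elsewhere in the script):
```python
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # Drop the small remainder so every block has exactly `block_size` tokens.
    total_length = (total_length // block_size) * block_size
    # Split by chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```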
|
transformers | 23,428 | closed | Small fixes and link in the README | Small fixes to the 100k Markdown file and link in the README | 05-17-2023 15:01:27 | 05-17-2023 15:01:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23428). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,427 | closed | TF: embeddings out of bounds check factored into function | # What does this PR do?
Cosmetic PR: the check to confirm that `input_ids` is within bounds, added by me, is annoyingly verbose. Since it is not part of the model code, this PR factors it out into a separate function -- the model code becomes more readable 🤗
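A hypothetical sketch of what such a factored-out helper could look like (the name, signature and message are illustrative assumptions, not necessarily the exact function added by the PR):
```python
import tensorflow as tf

def check_embeddings_within_bounds(input_ids: tf.Tensor, embed_dim: int, tensor_name: str = "input_ids") -> None:
    # Assert that every id can index into an embedding matrix of size `embed_dim`.
    tf.debugging.assert_less(
        input_ids,
        tf.cast(embed_dim, dtype=input_ids.dtype),
        message=(
            f"The maximum value of {tensor_name} must be smaller than the embedding "
            f"layer's input dimension ({embed_dim})."
        ),
    )
```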
The corresponding test passes after these changes (`RUN_SLOW=1 py.test tests/models/ -k test_embeddings_out_of_bounds_raise_exception -vv`) | 05-17-2023 13:42:40 | 05-17-2023 13:42:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,426 | closed | Encoder-Decoder: add informative exception when the decoder is not compatible | # What does this PR do?
Related to https://github.com/huggingface/transformers/issues/23350
Many decoder models are not compatible with our `EncoderDecoder` structure (mostly because using the encoded hidden states appropriately requires extra code in the decoder model architecture itself). This PR adds an informative message in the presence of incompatible decoder models. | 05-17-2023 12:33:01 | 05-17-2023 12:33:01 | _The documentation is not available anymore as the PR was closed or merged._ |
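A hedged sketch of the kind of compatibility check described (the condition and message here are illustrative assumptions, not the exact code added by the PR):
```python
import inspect

def raise_if_incompatible_decoder(decoder) -> None:
    forward_params = inspect.signature(decoder.forward).parameters
    if "encoder_hidden_states" not in forward_params:
        raise ValueError(
            f"{decoder.__class__.__name__} cannot be used as a decoder here: its forward() "
            "does not accept `encoder_hidden_states`, so it cannot attend to the encoder output."
        )
```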
transformers | 23,425 | closed | Loading LLaMA hf format from local folder is not using GPU in Google Colab | ### System Info
Using Google colab:
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
GPU = T4
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
RAM 12.6 GB
GPU RAM 15GB
Disk 78.2 GB
Packages: (From running !pip list)
torch == 2.0.0+cu118
transformers == 4.29.2
### Who can help?
@ArthurZucker
@younesbelka
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I followed the steps provided here: https://huggingface.co/docs/transformers/main/en/model_doc/llama#overview to get an hf format of the LLaMA model that I can use. When I load the model from the output path on my local computer with the CPU it works fine (VERY SLOW, but fine), so I moved to Google Colab in order to use a GPU, because I need to fine-tune the model after loading it. However, when I monitor the resources while loading the model, I can see that the GPU is not being used.
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
import torch

# Check if GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device:", device)
```
Output:
> Device: cuda
```python
base_model_path = "/content/drive/MyDrive/Qbot-gpt/LLaMA-HF"
model = LlamaForCausalLM.from_pretrained(base_model_path).to(device)
print_gpu_utilization()
tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
```
The above code crashes in Colab because it is not using the GPU, only the RAM.
This is the content of the folder LLaMA-HF:

The function being used to print the GPU usage:
```python
from pynvml import *

def print_gpu_utilization():
    nvmlInit()
    handle = nvmlDeviceGetHandleByIndex(0)
    info = nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU memory occupied: {info.used//1024**2} MB.")

def print_summary(result):
    print(f"Time: {result.metrics['train_runtime']:.2f}")
    print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}")
    print_gpu_utilization()
```
### Expected behavior
It should output a loading bar with all the shards loaded:
Loading checkpoint shards: 100%|██████████| 2/2 [00:32<00:00, 14.09s/it]
And more importantly it should print the amount of GPU that is being used when loading the model:
"GPU memory occupied: 1343 MB." | 05-17-2023 12:24:40 | 05-17-2023 12:24:40 | Hi @algiraldohe, thanks for raising this issue.
To load the model directly onto the available GPUs, you should pass `device_map='auto'` when loading the model:
```python
model = LlamaForCausalLM.from_pretrained(base_model_path, device_map='auto')
```<|||||>Thank you for your prompt response I did as you mentioned and I received the following error:
```
Traceback (most recent call last)
  <cell line: 5>:5
  /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2777 in from_pretrained
      ) = cls._load_pretrained_model(
  /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2871 in _load_pretrained_model
      if offload_folder is None and not is_safetensors:
          raise ValueError(
ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.
```
Not quite sure what it is requesting. I tried to set the offload_folder = "/content/drive/MyDrive/Qbot-gpt/LLaMA-HF" (same path as the HF model) like this:
```python
model = LlamaForCausalLM.from_pretrained(base_model_path, device_map='auto', offload_folder=offload_folder)
```
But no luck yet, still no using the GPU.<|||||>@algiraldohe Passing the `device_map` argument is actually using the accelerate library to smartly load the model weights to maximise GPU usage. To understand more about how it works, there's a [great doc page here](https://huggingface.co/docs/accelerate/usage_guides/big_modeling) describing how to load large models. There's also [this blog](https://huggingface.co/blog/accelerate-large-models).
Following these, specifying an offload folder should work. Could you try specifying a different folder than the one the model weights are stored in e.g.
```python
model = LlamaForCausalLM.from_pretrained(base_model_path, device_map='auto', offload_folder="offload")
```
The GPU RAM is 15 GB, but for e.g. 7B Llama, the weights alone are ~13.5 GB, so offload might be necessary. Let us know if there's still an issue.
Once the model can be loaded without error, then we can try and diagnose if the GPU is being properly utilized or not.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,424 | closed | group_texts function produces batches with length shorter than block_size | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.0
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
There's a bug in the PyTorch language_modeling examples:
for example: [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py)
The `group_texts` function is supposed to group the data into batches whose length equals `block_size`, and to ignore the small remainder chunk. There's a bug: instead, it emits the last batch with a smaller size, which causes an error, since the model expects inputs of equal sequence length.
## Error: ValueError: expected sequence of length 128 at dim 1 (got 94)
### Expected behavior
The expected behavior is for the function to return data with equal sequence lengths for all batches. | 05-17-2023 12:22:53 | 05-17-2023 12:22:53 | |
transformers | 23,423 | closed | Fire | # What does this PR do?
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-17-2023 12:00:59 | 05-17-2023 12:00:59 | ๐ฅ <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23423). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,422 | closed | Fix gradient checkpointing bugs in freezing part of models (requires_grad=False) | # What does this PR do?
Same as the [PR I opened](https://github.com/huggingface/diffusers/pull/3404) in diffusers.
Using ```torch.utils.checkpoint.checkpoint``` directly will cause the parameters in the checkpointed section not to be learned when part of the model parameters are frozen. As these discussions state:
https://discuss.pytorch.org/t/use-of-torch-utils-checkpoint-checkpoint-causes-simple-model-to-diverge/116271
https://discuss.pytorch.org/t/checkpoint-with-no-grad-requiring-inputs-problem/19117/19
In PyTorch versions 1.11.0 and newer, ```use_reentrant=False``` can be added to fix this bug.
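A minimal runnable sketch of the fix being described, assuming PyTorch >= 1.11 (a toy frozen layer stands in for the frozen part of a real model):
```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(8, 8)
for p in layer.parameters():
    p.requires_grad_(False)  # simulate a partially frozen model

x = torch.randn(2, 8, requires_grad=True)
# use_reentrant=False keeps the backward pass working even though the
# checkpointed segment contains only frozen parameters
y = checkpoint(layer, x, use_reentrant=False)
y.sum().backward()
print(x.grad is not None)  # True
```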
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 05-17-2023 03:49:06 | 05-17-2023 03:49:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23422). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,421 | closed | Return early once stop token is found. | # What does this PR do?
Previously even after finding a stop token, other stop tokens were considered, which is unnecessary and slows down processing.
Currently, this unnecessary overhead is negligible since there are usually 2 stop tokens considered and they are fairly short, but in future it may become more expensive.
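A minimal hypothetical illustration of the early return (not the exact code path this PR touches):
```python
def found_stop_token(generated_text: str, stop_sequences: list) -> bool:
    for stop in stop_sequences:
        if stop in generated_text:
            return True  # stop scanning as soon as one match is found
    return False

print(found_stop_token("Hello<|end|>", ["<|end|>", "</s>"]))  # True
```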
| 05-17-2023 02:46:57 | 05-17-2023 02:46:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,420 | closed | Fix a typo in HfAgent docstring. | # What does this PR do?
Fixes a typo in HfAgent docstring. | 05-17-2023 02:33:14 | 05-17-2023 02:33:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,419 | closed | Save checkpoint asynchronously on cpu to keep GPU training going | ### Feature request
I would like to save checkpoints during training asynchronously.
There is no need for training to wait for checkpoint saving to finish.
### Motivation
As model sizes grow, saving a checkpoint during training takes more and more time.
The main problem is that the two jobs, checkpoint saving and model training, are synchronous: training waits until checkpoint saving finishes.
This leaves very expensive GPUs idle while the checkpoint is being saved.
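A hypothetical sketch of the idea (not an existing `transformers` API): copy the state dict to CPU, then write it to disk from a background thread so the GPU training loop is not blocked by the (slow) serialization.
```python
import threading
import torch

def save_checkpoint_async(model: torch.nn.Module, path: str) -> threading.Thread:
    # The CPU copy is the only synchronous part; the disk write happens in the background.
    cpu_state = {k: v.detach().cpu().clone() for k, v in model.state_dict().items()}
    thread = threading.Thread(target=torch.save, args=(cpu_state, path), daemon=True)
    thread.start()
    return thread  # call .join() before exiting if the file must be fully written

# usage: handle = save_checkpoint_async(model, "checkpoint.pt")
```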
### Your contribution
I would be happy to submit a PR, might require some help.
Thanks! | 05-17-2023 02:14:29 | 05-17-2023 02:14:29 | Hi @ykihong0, thanks for raising this issue.
So that we can best understand the feature being discussed, is this checkpoint saving when using the `Trainer` class? <|||||>Hello @amyeroberts, Thanks for reply.
> So that we can best understand the feature being discussed, is this checkpoint saving when using the Trainer class?
-> Yes. I hope the above feature will be supported when the save_model method in Trainer class is called ( https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2718)
<|||||>cc @sgugger <|||||>That's not something we have on our roadmap at the moment. Happy to look at a PR if someone wants to integrate this.<|||||>Thaks @amyeroberts @sgugger for reply
I will give a PR to integrate this feature later.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,418 | closed | Request to upgrade torch version in vision model | ### System Info
Running vision [model](https://github.com/huggingface/transformers/tree/main/examples/flax/vision) on Cloud TPU with JAX version 4.10.0.
### Who can help?
@amyeroberts @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
pip install --upgrade pip
pip install jax[tpu]==0.4.10 -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
git clone https://github.com/huggingface/transformers.git
cd transformers && pip install .
pip install -r examples/flax/_tests_requirements.txt
pip install --upgrade huggingface-hub urllib3 zipp
pip install tensorflow
pip install -r examples/flax/vision/requirements.txt
```
The following error occurs:
```
ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cpu (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2)
ERROR: No matching distribution found for torch==1.9.0+cpu
```
### Expected behavior
After upgrading the torch version to `1.11.0+cpu` and torchvision version to `0.12.0+cpu`, it works as expected.
```
pip3 install torch==1.11.0+cpu torchvision==0.12.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
``` | 05-17-2023 00:37:33 | 05-17-2023 00:37:33 | Good catch @RissyRan! Torch cpu didn't exist until PT 1.10 ๐
, so fine for me to bump this! We can just amend this two lines in the requirements file: https://github.com/huggingface/transformers/blob/a574de302f8538d73c342ee946c2cbf8c64e7a6f/examples/flax/vision/requirements.txt#L5-L8
Would you like to open a PR to fix this @RissyRan?<|||||>I made a pull request for this change! thanks! |
transformers | 23,417 | closed | Remove .data usages in optimizations.py | # What does this PR do?
.data usages is deprecated in recent releases of PyTorch. See https://github.com/pytorch/pytorch/issues/91093#issuecomment-1397317273
This change replaces all `.data` usages in optimizations.py with modern alternatives.
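A hypothetical illustration of the kind of replacement this refers to (not the exact lines changed in the PR):
```python
import torch

p = torch.nn.Parameter(torch.randn(3))
update = torch.randn(3)

# old style, via the deprecated `.data` attribute:
# p.data.add_(update, alpha=-0.01)

# modern alternative: do the in-place update under torch.no_grad()
with torch.no_grad():
    p.add_(update, alpha=-0.01)
```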
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@connor-henderson @stas00 | 05-16-2023 22:45:53 | 05-16-2023 22:45:53 | @muellerzr the usage of the `.data` will greatly slow down pytorch/xla on nightly, we were hoping we can fix this issue before the next release.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This is a very old and deprecated implementation since it doesn't even follow the AdamW algorithm exactly. One should use `torch.optim.AdamW` instead, which also has a fused version since pt-2.0.0 which is almost as fast as apex's fused AdamW. So really you shouldn't be using this version anyway.
The only reason it was kept is for BC for those who rely on exact results remaining exact after new `transformers` versions are released, otherwise we would have just replaced it with `torch.optim.AdamW` in the first place.
p.s. no objections though to making it better...<|||||>@stas00 Thanks for the reply. How about the adafactor then?<|||||>oh, sorry, I didn't see it was Adafactor too. It's hard to see from the diff as it doesn't show the class names.
This Adafactor is being used for sure, but its implementation is super old as well. So certainly it'd be a blessing to bring it up to more modern code standard.<|||||>@stas00 Do you mind give this pr a review? Thanks.<|||||>Thanks for your PR. Just to be sure though, is this all going to work with PyTorch 1.8+? 1.8 is the minimum version we offically support at the moment (for a couple more weeks at least, then 1.9 starting mid-June).<|||||>I'm almost 100% sure it is the case. the whole direct `.data` usage deprecation is a few years old at least.
Let me quickly test it with pt-1.8<|||||>```
$ pytest tests/optimization/test_optimization.py -k test_adafactor
========================================================== test session starts ===========================================================
platform linux -- Python 3.8.8, pytest-7.3.1, pluggy-0.13.1
rootdir: /mnt/nvme0/code/huggingface/transformers-master
configfile: setup.cfg
plugins: timeout-1.4.2, typeguard-2.12.1, flakefinder-1.0.0, forked-1.3.0, monitor-1.6.0, hypothesis-6.47.0, instafail-0.4.2, xdist-2.2.1
collected 3 items / 2 deselected / 1 selected
tests/optimization/test_optimization.py . [100%]
================================================================= PASSES =================================================================
======================================================== short test summary info =========================================================
PASSED tests/optimization/test_optimization.py::OptimizationTest::test_adafactor
============================================== 1 passed, 2 deselected, 4 warnings in 0.16s ===============================================
$ pt-ver
pt=1.8.2, cuda=11.1, nccl=2708
```
At least the Adafactor test that we have is passing.<|||||>Thanks @sgugger and @stas00 for reviewing the changes. |
transformers | 23,416 | closed | Loading LLM LoRA locally does not update weights | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.0
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
After LoRA training and saving the model with the following snippet:
```py
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM,GenerationConfig
tokenizer = LlamaTokenizer.from_pretrained('decapoda-research/llama-7b-hf')
model = LlamaForCausalLM.from_pretrained('decapoda-research/llama-7b-hf', device_map="auto", torch_dtype=torch.float16)
from peft import LoraConfig, get_peft_model,TaskType
lora_config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
# After trainer.train() is complete
model.save_pretrained('./lora_pretrained')
```
Loading LoRA from save directory:
```py
# Loading base model is same as the snippet above
lora_config = LoraConfig.from_pretrained('./lora_pretrained')
model = get_peft_model(model, lora_config)
#The model generates outputs that are the same as the base model.
```
Trying to load the `adapter_model.bin` directly via this snippet results in errors about incompatible weights:
```py
model.load_state_dict(torch.load('./lora_pretrained/adapter_model.bin'))
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
Missing key(s) in state_dict: "base_model.model.model.embed_tokens.weight", "base_model.model.model.layers.0.self_attn.q_proj.weight", "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight", "base_model.model.model.layers.0.self_attn.q_proj.lora_B.weight", "base_model.model.model.layers.0.self_attn.k_proj.weight", "base_model.model.model.layers.0.self_attn.v_proj.weight", "base_model.model.model.layers.0.self_attn.v_proj.lora_A.weight", "base_model.model.model.layers.0.self_attn.v_proj.lora_B.weight", "base_model.model.model.layers.0.self_attn.o_proj.weight", "base_model.model.model.layers.0.self_attn.rotary_emb.inv_freq", "base_model.model.model.layers.0.mlp.gate_proj.weight", "base_model.model.model.layers.0.mlp.down_proj.weight", "base_model.model.model.layers.0.mlp.up_proj.weight", "base_model.model.model.layers.0.input_layernorm.weight",
```
### Expected behavior
`LoraConfig.from_pretrained` should load the updated model weights. | 05-16-2023 21:47:05 | 05-16-2023 21:47:05 | cc @pacman100 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>you need to load both the base model weights `pytorch_model.bin` and the adapter model weights `adapter_model.bin`.<|||||>```
from peft import PeftModel
model = PeftModel.from_pretrained(model, save_path) #save_path to contain both adapter_config.json and adapter_model.bin
```
this worked for me |
transformers | 23,415 | closed | Use dict.items to avoid unnecessary lookups. | # What does this PR do?
It's more efficient to iterate over key, value dict pairs instead of iterating over keys and performing value lookups on each iteration. It's also more idiomatic. | 05-16-2023 21:45:42 | 05-16-2023 21:45:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,414 | closed | Fix smdistributed check | # What does this PR do?
It turns out the `smdistributed` package does not have metadata so #23163 made this package always seem unavailable. This fixes it. | 05-16-2023 18:57:01 | 05-16-2023 18:57:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,413 | closed | Generation issues with seq2seq LMs | ### System Info
- `transformers` version: 4.27.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, parallel (accelerate auto-mapping)
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This has most recently arisen in using `trlX` to do reinforcement learning on `flan-T5`. I wrote an [issue](https://github.com/CarperAI/trlx/issues/468) on their own repo, but there seems to be no response, and it is somewhat more suited to be an issue in this repo since it has to do with `transformers` code at its core.
The main issue is that `generate` with a seq2seq model, namely `flan-t5`, sometimes generates the following error: ```RuntimeError: probability tensor contains either `inf`, `nan` or element < 0```. This has been well documented in other issues like [this one](https://github.com/huggingface/transformers/issues/15169), but the behavior in that issue is more custom than calling `generate` in its standard configuration.
Here is a code example to reproduce:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
m = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large", device_map="auto")
t = AutoTokenizer.from_pretrained("google/flan-t5-large")
in_text = """You are a highly intelligent and accurate HVAC domain Resource Description Framework (RDF) data model. You take Passage as input and convert it into HVAC domain RDF triples. A triple is a set of three entities that codifies a statement about semantic data in the form of subjectโpredicateโobject expressions.
Your output format is only [[ subject, predicate, object ], ...] nothing else
Examples:
Input: The HV123 heating unit can supply 50W of power
Output: [[HV123, powerSupply, 50W]]
Input: Unit: ft. (m)
Model | Cooling Mode | Heating Mode
ABC123 | 28.8 (8.8) | 19.0 (5.8)
ABC456 | 28.8 (8.8) | 19.0 (5.8)
ABC789 | 28.8 (8.8) | 21.3 (6.5)
ABC987 | 29.0 (8.9) | 22.9 (7.0)
Output:"""
ins = t(in_text, return_tensors="pt").input_ids.to("cuda")
outs = m.generate(ins, do_sample=True, max_length=512, top_k=0, temperature=0.7, num_beams=2)
```
NB:
`temperature` seems to be one of the main causes of this issue, as removing this kwarg from the generate call does not produce the error in the above case. However, that is not true of all cases. I have seen the error in my `trlX` training loops with kwargs as simple as: `{"max_new_tokens": 512, "do_sample": True, "top_k": 0, "top_p": 1}`. Thus it seems this error is not always related to temperature.
### Expected behavior
The expected behavior in this case would be for the sampling to work every time instead of having strange edge cases where tokens are unreachable. | 05-16-2023 18:29:27 | 05-16-2023 18:29:27 | Hey @abarbet ๐
This issue may arise when beam search, sampling, and long outputs are used together. A potential bug on PyTorch itself compounds it. You can read the full story in [this issue](https://github.com/huggingface/transformers/issues/22914).
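As a concrete illustration of the suggestion below (a hedged sketch based on the reproduction call above, not output from the thread):
```python
# Same call as in the reproduction, minus `num_beams`,
# so sampling is no longer combined with beam search.
outs = m.generate(ins, do_sample=True, max_length=512, top_k=0, temperature=0.7)
```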
TL;DR -- my immediate suggestion would be to avoid using `num_beams` and `do_sample` together. If you want to use them both, you'll have to read the issue linked above, which describes the problem and solutions :)<|||||>Ah thank you, that issue is very helpful! Do you have any idea why we would see a similar error in `trlX` training despite not using beam sampling? I know you don't have access to my training script and also are most likely not familiar with their codebase, so this is a complete longshot.
The only thing I can think of if it's not caused by a sampling bug is some kind of destructive learning in the PPO step that causes token distributions to get completely out of whack.<|||||>@abarbet It may be due to [this PyTorch issue](https://github.com/pytorch/pytorch/issues/48841), where the sampling step may pick very low probability tokens that it shouldn't and, in turn, cause computations to derail.
Try running your script with PT 1.x instead of 2.0! <|||||>> @abarbet It may be due to [this PyTorch issue](https://github.com/pytorch/pytorch/issues/48841), where the sampling step may pick very low probability tokens that it shouldn't and, in turn, cause computations to derail.
>
> Try running your script with PT 1.x instead of 2.0!
For me, this issue also occurs with pytorch 1.13.1
https://github.com/huggingface/transformers/issues/22914#issuecomment-1562034753<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,412 | closed | Load model from cloud storage | ### Feature request
I would like to load models directly from GCS/AWS
For example, if your model is on Google Cloud:
```python
model = AutoModelForSequenceClassification.from_pretrained('gs://my_bucket/my_model')
tokenizer = AutoTokenizer.from_pretrained('gs://my_bucket/my_model')
pipe = pipeline('text-classification', model='gs://my_bucket/my_model')
```
The `datasets` library supports this use case through `fsspec`. I propose to also use this library.
This could also simplify the code of `PretrainedConfig` as everything would use `fsspec` except if it's on the Hub.
### Motivation
I train my models in the cloud and then I load them into another tool such as Azimuth or a local Jupyter notebook for error analysis.
I could simply upload them to the Hub, but I don't want to upload all my models to the Hub or manually upload them.
### Your contribution
I would be happy to submit a PR, might require some help.
Thanks! | 05-16-2023 17:26:42 | 05-16-2023 17:26:42 | Thanks for opening an issue. There is no plan to support any other platform than the Hugging Face Hub for remote models.<|||||>That's fair.
Just saw the notice around the deprecation of remote model using HTTP as well.
Closing :) |
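For readers with the same need, a hedged client-side workaround sketch using `fsspec` (it assumes `gcsfs` is installed; the bucket path is the hypothetical one from the issue, and this is not a supported `transformers` feature):
```python
import fsspec
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Copy the model files from GCS to a local directory, then load as usual.
fs = fsspec.filesystem("gs")
fs.get("gs://my_bucket/my_model/", "my_model_local/", recursive=True)

model = AutoModelForSequenceClassification.from_pretrained("my_model_local")
tokenizer = AutoTokenizer.from_pretrained("my_model_local")
```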
transformers | 23,411 | closed | Generative models return the same responses to all questions | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.16
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Problem: Different questions to a conversational pipeline result in the same answers for a given model. This problem occurs across multiple models, and occurs when a new Python session is initiated between runs.
Code to reproduce:
```python
from transformers import pipeline, Conversation
for model in ['facebook/opt-1.3b', 'bigscience/bloom-560m', 'gpt2']:
generator = pipeline(task='conversational', model=model)
convo = Conversation('Should I see a movie tonight?')
generator(convo)
for model in ['facebook/opt-1.3b', 'bigscience/bloom-560m', 'gpt2']:
generator = pipeline(task='conversational', model=model)
convo = Conversation('What do you know about biology?')
generator(convo)
```
Outputs:
* From the first for loop:
```
Conversation id: 9335b8bb-d73e-4fb0-91e3-bb0dbf62dd76
user >> Should I go see a movie tonight?
bot >> I'm not sure if this is a good idea.
Conversation id: 03c41e56-35b1-4b02-9757-4bf1c90a6f32
user >> Should I go see a movie tonight?
bot >> The first thing you need to do is to get a
The attention mask and the pad token id were not set. As a consequence, you may observe unexpecte
d behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
A decoder-only architecture is being used, but right-padding was detected! For correct generation
results, please set `padding_side='left'` when initializing the tokenizer.
Conversation id: 62e5f6f0-7e6e-4a5c-baf6-93ea40e31b85
user >> Should I see a movie tonight?
bot >> The first time I saw the new Star Wars movie, I
```
* From the second for loop:
```
Conversation id: f14a10d8-3661-482e-8b95-bb0a417a0afd
user >> What do you know about biology?
bot >> I'm not sure if this is a good idea.
Conversation id: 24866d8e-bfc8-4ebf-825e-b90965ab60b7
user >> What do you know about biology?
bot >> The first thing you need to do is to get a good
The attention mask and the pad token id were not set. As a consequence, you may observe unexpecte
d behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
A decoder-only architecture is being used, but right-padding was detected! For correct generation
results, please set `padding_side='left'` when initializing the tokenizer.
Conversation id: 40d35c22-cf89-4750-931e-f75a5d80431b
user >> What do you know about biology?
bot >> The first time I saw the new Star Wars movie, I
```
### Expected behavior
Sensical answers that are in response to the question, rather than an out-of-the-box response that doesn't make sense in context. | 05-16-2023 17:17:15 | 05-16-2023 17:17:15 | cc @Narsil @gante <|||||>Hey @serenalotreck ๐
The models you're trying to use are not compatible with the conversational pipeline. That's why you see the same output on a given model, regardless of the input.
Check [these docs](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/pipelines#transformers.ConversationalPipeline): "The models that this pipeline can use are models that have been fine-tuned on a multi-turn conversational task, currently: โmicrosoft/DialoGPT-smallโ, โmicrosoft/DialoGPT-mediumโ, โmicrosoft/DialoGPT-largeโ. See the up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?filter=conversational)."
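As an illustration, a hedged variant of the reproduction code with one of the listed models swapped in (not output from the thread):
```python
from transformers import pipeline, Conversation

# DialoGPT is fine-tuned for multi-turn dialogue, so the conversational pipeline can use it.
generator = pipeline(task="conversational", model="microsoft/DialoGPT-medium")
print(generator(Conversation("Should I see a movie tonight?")))
```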
P.S.: you might be able to get conversational-like behavior from a standard text generation pipeline, using models like [open assistant](https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b), but we don't have step-by-step docs for that at the moment. Check the model card for high-level instructions.<|||||>@gante that makes sense, thank you!
I'm currently looking for open source alternatives to GPT-3.5 that I can use with an API for relation extraction through a series of prompts (e.g. "Rewrite this sentence into multiple sentences, each containing only one relation", or "Extract an SPO triple from the following sentence").
Do you happen to know if models other than Open Assistant can be used in the same manner? The models in the list in the code example above are all from the search results for Text Generation models, and claim to be open source alternatives to research LLMs, but even using `text-generation` type pipelines, I haven't been able to get responses that mimic what ChatGPT can do, even using GPT-2 (for example, in the Rewrite the Sentence prompt, it just adds to the sentence instead of rewriting), so I suspect I may just be doing something wrong with how I'm building my pipelines. I'll give Open Assistant a shot in the meantime!
Any thoughts are appreciated, thanks!<|||||>@serenalotreck you can check [this leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see the highest scoring open-source LLMs.
The catch is that they need a carefully crafted input prompt (also known as system prompt) before they turn into helpful assistants like ChatGPT. ChatGPT also has it, but it is hidden to you. Here's a [simple example](https://github.com/LAION-AI/Open-Assistant/blob/d1da7db6e9e4c198b6b66a68291e5886db80c7f6/model/model_training/configs/config.yaml#L77), for the case of open assistant -- you may be able to find more online :)
___________________________________
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค <|||||>And even more than the system prompt, there is usually a specific token sequence used during the model finetuning, which is critical to get a good output.
For instance OpenAssistant biggest model is using "<|prompt_begin|><|prompter|>somethign something</s><|assistant|>".
And different models use different prompting. Unfortunately at this time there are too many different models released at the same time, and it's impossible to include all of these specific parts everywhere.
https://huggingface.co/chat/ should give you an idea of what OpenAssistant model is capable of.
OpenAssistant has their own front to their models https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjCpZe2iPz-AhX6hv0HHdo1CsQQjBB6BAgdEAE&url=https%3A%2F%2Fopen-assistant.io%2Fchat&usg=AOvVaw2BLJ_sUF4zgiHZMHNcFVnd<|||||>Thank you all so much, that's super helpful!!<|||||>@serenalotreck this link might also be relevant to you: https://github.com/oobabooga/text-generation-webui/tree/main/characters/instruction-following
It contains the templates to manipulate specific models |
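Tying the two points together (a system prompt plus the model-specific prompt tokens), a hedged sketch; the token sequence below is an assumption based on what is commonly documented for the Pythia-based Open Assistant checkpoint linked above, so check the model card for the exact format:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="OpenAssistant/oasst-sft-1-pythia-12b")

# The "system prompt" is just text prepended inside the model's expected token sequence.
system_prompt = "You are a helpful assistant that answers questions about biology."
user_message = "What do you know about biology?"
prompt = f"<|prompter|>{system_prompt}\n{user_message}<|endoftext|><|assistant|>"

print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```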
transformers | 23,410 | closed | TrainingArguments initializer changes `torch.current_device()` | ### System Info
```
transformers 4.28.1
python 3.9.13
torch 1.12.1
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run
```python
import torch
from transformers import TrainingArguments
torch.cuda.set_device(1)
print(torch.cuda.current_device())
training_args = TrainingArguments(output_dir="output/")
print(torch.cuda.current_device(), training_args.device)
```
Observe
```
1
0 cuda:0
```
### Expected behavior
I would expect that the `torch.current_device()` would not change and therefore to observe
```
1
1 cuda:1
``` | 05-16-2023 16:46:44 | 05-16-2023 16:46:44 | Your expectation is not correct as `TrainingArguments` will take all GPUs available (in this case it will use DataParallel on your model for the training afterward). You need to set the `CUDA_VISIBLE_DEVICES` environment variable to limit the GPUs seen.<|||||>Oh, I see. Thanks for the answer! |
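A hedged sketch of the suggested `CUDA_VISIBLE_DEVICES` workaround (not from the thread); the variable must be set before anything initializes CUDA:
```python
import os

# Only physical GPU 1 is visible to this process; inside the process it becomes cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

from transformers import TrainingArguments

training_args = TrainingArguments(output_dir="output/")
print(training_args.device)  # cuda:0, which maps to physical GPU 1
```
Setting the variable in the shell instead (`CUDA_VISIBLE_DEVICES=1 python script.py`) works the same way and avoids import-order concerns.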
transformers | 23,409 | closed | Fix parallel mode check | # What does this PR do?
Fixes a check that relies on `distributed_state` when it might not be there
Fixes # (issue)
Should fix https://github.com/huggingface/transformers/issues/23390
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 05-16-2023 16:42:50 | 05-16-2023 16:42:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,408 | closed | small fix to remove unused eos in processor when it's not used. | # What does this PR do?
small fix to remove unused eos in processor when it's not used.
Fix https://github.com/huggingface/transformers/issues/23400
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 05-16-2023 15:50:02 | 05-16-2023 15:50:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,407 | closed | Fix translation no_trainer | # What does this PR do?
This PR fixes the reason translation has been failing, by adding in the same `num_beams` that were found to be used in the test.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, cc @ydshieh | 05-16-2023 15:45:43 | 05-16-2023 15:45:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger it hasn't and has been failing for some time, due to the fact `num_beams` was `None` (the default in the CLI), and we need to pass it in. When checking the diff/blame there was not anything explicitly that changed in this file to have this happen, however this is the fix that is needed for the test to pass. <|||||>Ah ok, got confused because it's slow. |
transformers | 23,406 | closed | Update 3 docker files to use cu118 | # What does this PR do?
A follow up for #23339.
I will try to build the images and run a small subset of tests to make sure this doesn't break (too many) things before merge. | 05-16-2023 15:44:54 | 05-16-2023 15:44:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23406). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,405 | closed | Build with non Python files | # What does this PR do?
It appears the non-Python files (such as CUDA kernels) have disappeared from the built package once again. For some reason they were found before with just `*.extension`, but now need `**/*.extension` (although once found again I can remove the **).
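For context, a hedged sketch of what the glob difference can look like in a `setup.py` (illustrative only, not this repository's actual packaging code; it assumes a setuptools version that accepts recursive `**` globs in `package_data`):
```python
from setuptools import find_packages, setup

setup(
    name="example",
    packages=find_packages("src"),
    package_dir={"": "src"},
    # "*.pyx" only matches files at the top level of each package;
    # "**/*.pyx" also matches files nested in subdirectories.
    package_data={"": ["**/*.pyx", "**/*.cu", "**/*.cpp"]},
)
```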
This is all super brittle, so this PR also adds:
- a check that the build package contains the non-Python files before we upload it on testpypi
- a check that the library installed does contain the non-Python files before we upload it on pypi
Will make a patch after this is merged. | 05-16-2023 15:37:23 | 05-16-2023 15:37:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,404 | closed | Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. | ### System Info
irrespective of version
AutoTokenizer raises connection error
AutoTokenizer.from_pretrained('bert-base-uncased') takes forever and raises the connection error
@ArthurZucker
thx
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
AutoTokenizer.from_pretrained('bert-base-uncased')
### Expected behavior
I expect it to return the tokenizer, but got a connection error:
'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0e814eacd0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json
'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0e91bc2d00>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/bert-base-uncased/resolve/main/config.json
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1211, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on. | 05-16-2023 15:33:54 | 05-16-2023 15:33:54 | Hi @tanhanzhuo, thanks for raising this issue!
Is seems like there's a connection issue when trying to download from the hub. I've just tested locally by clearing my cache and forcing a download with `AutoTokenizer.from_pretrained('bert-base-uncased')`, which ran successfully. Have you tested the internet connection in the running environment? If so, is this only seen when downloading from the hugging face hub? <|||||>> Hi @tanhanzhuo, thanks for raising this issue!
>
> Is seems like there's a connection issue when trying to download from the hub. I've just tested locally by clearing my cache and forcing a download with `AutoTokenizer.from_pretrained('bert-base-uncased')`, which ran successfully. Have you tested the internet connection in the running environment? If so, is this only seen when downloading from the hugging face hub?
Thank you for reply! The error exists for around 2 hours, but now everything works well again.
During the error, I could load the model by setting local_files_only=True, but cannot download from the hub. Guess some strange bug occurred<|||||>Sorry to ask a little more. I used to meet the same problem as mentioned above. But I found that this error is not always in that way.
For example, if I want to initialize a pretrained stable diffusion model according to the demo code. The first several trials (usually around 4-5 times) will encounter this error. But if you keep on trying, it runs without any errors.
I am not sure if it is possible to enhance the pretrained weights loading code, to support try downloading the weights several times, so that the user with this error can set a relatively large times to try on downloading the weights. In that way, someone with this, maybe so-called **random** `ConnectionError`, can avoid it. :pray:<|||||>> Sorry to ask a little more. I used to meet the same problem as mentioned above. But I found that this error is not always in that way.
>
> For example, if I want to initialize a pretrained stable diffusion model according to the demo code. The first several trials (usually around 4-5 times) will encounter this error. But if you keep on trying, it runs without any errors.
>
> I am not sure if it is possible to enhance the pretrained weights loading code, to support try downloading the weights several times, so that the user with this error can set a relatively large times to try on downloading the weights. In that way, someone with this, maybe so-called **random** `ConnectionError`, can avoid it. ๐
Same here, just kept trying and everything went well.
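For readers hitting the same transient failure, a hedged user-side retry sketch along the lines proposed above (the exception class that surfaces can vary across `transformers`/`huggingface_hub` versions, so the `except OSError` clause is an assumption):
```python
import time
from transformers import AutoTokenizer

def load_tokenizer_with_retries(name, retries=5, wait_seconds=2.0):
    # Retry a few times before giving up, since the connection error is often transient.
    for attempt in range(retries):
        try:
            return AutoTokenizer.from_pretrained(name)
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(wait_seconds)

tokenizer = load_tokenizer_with_retries("bert-base-uncased")
```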
transformers | 23,403 | closed | Generate: add test to check KV format | # What does this PR do?
This PR adds a test to ensure our `.generate()`-compatible models have a standard KV cache format. Advanced generation methods (e.g. contrastive search or assisted generation) rely on cache manipulation, so it quickly becomes unmanageable if we don't stick to a conventional format (or a set of conventional formats).
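For readers unfamiliar with the convention being tested, a hedged sketch of the standard cache layout (an illustration with assumed shapes, not the test code itself):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

out = model(**tok("hello world", return_tensors="pt"), use_cache=True)

# Conventional format: one (key, value) pair per layer,
# each tensor shaped (batch_size, num_heads, seq_len, head_dim).
key, value = out.past_key_values[0]
print(len(out.past_key_values), key.shape, value.shape)
```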
I expect that future non-standard KV formats will have to be well justified in PRs, since it will now imply skipping this test. | 05-16-2023 15:05:02 | 05-16-2023 15:05:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,402 | closed | Update `ConvNextV2ModelIntegrationTest::test_inference_image_classification_head` | # What does this PR do?
Required as this test is currently failing after PR #23122 | 05-16-2023 14:09:30 | 05-16-2023 14:09:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,401 | closed | batch generation with Llama: IndexError: index out of range in self | ### System Info
I am using CUDA 11.6 and the latest version of transformers, which is 4.29.1, to the best of my knowledge.
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am facing the below error with batch generation:
```
from transformers import AutoTokenizer, LlamaForCausalLM
model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
prompt = "Hey, are you consciours? Can you talk to me?"
inputs = tokenizer([prompt, prompt + " blah blah"], return_tensors="pt", padding=True, truncation=True)
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
### Expected behavior
The above code works if the lengths of the sentences in the batch are equal. E.g., if you initialize the inputs variable with the command below, then everything works perfectly:
`inputs = tokenizer([prompt, prompt], return_tensors="pt", padding=True, truncation=True)`
Here is the error:
```
Traceback (most recent call last):
  in <module>:1
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/utils/_contextlib.py:115 in decorate_context
      return func(*args, **kwargs)
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/generation/utils.py:1515 in generate
      return self.greedy_search(
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/generation/utils.py:2332 in greedy_search
      outputs = self(
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
      return forward_call(*args, **kwargs)
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:688 in forward
      outputs = self.model(
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
      return forward_call(*args, **kwargs)
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:531 in forward
      inputs_embeds = self.embed_tokens(input_ids)
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/module.py:1501 in _call_impl
      return forward_call(*args, **kwargs)
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/modules/sparse.py:162 in forward
      return F.embedding(
  /data/arianaskariaa/conda_envs/envs/transformer_4_29_1_efficientLlaMa_int8_python_env/lib/python3.8/site-packages/torch/nn/functional.py:2210 in embedding
      return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
Thanks in advance! | 05-16-2023 13:58:27 | 05-16-2023 13:58:27 | Hey @arian-askari ๐
The exception pops up because you are defining a new token (`[PAD]`), which causes the exception in the embedding layer (it doesn't know the embeddings for the new token until you define them).
Most decoder-only models have the same "issue" where the padding token is not defined. The standard workaround is as follows:
```py
from transformers import AutoTokenizer, LlamaForCausalLM
model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
model = LlamaForCausalLM.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model.generation_config.pad_token_id = model.generation_config.eos_token_id
prompt = "Hey, are you consciours? Can you talk to me?"
inputs = tokenizer([prompt, prompt + " blah blah"], return_tensors="pt", padding=True, truncation=True)
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
Please note that I added `padding_side="left"` on the tokenizer -- it is critical for decoder-only models like Llama!<|||||>Hey @gante,
Thanks a lot! It got fixed with the suggested modification. <|||||>@gante, why need set padding_side="left" for decoder-only models?<|||||>@akk-123 these models predict the next token at any given point of the sequence, using the embedding of the latest token as a critical input. If your latest token is a pad token and/or if it is masked by the attention mask, your next token will be unrelated to the sequence -- mostly because the models are not trained to handle this case.
Left-padding ensures the phenomenon above doesn't occur.<|||||>@gante thanks a lot! but it seems origin llama model padding side is 'right' ?<|||||>Yes -- at train time, you want right-padded sequences. <|||||>I am confuse about it, you mean at train time, we need set right-padding, at inference time, we should set left-padding? what's more, we will set attention mask when padding, maybe attention mask will avoid the problem you mentioned?
```
these models predict the next token at any given point of the sequence, using the embedding of the latest token as a critical input. If your latest token is a pad token and/or if it is masked by the attention mask, your next token will be unrelated to the sequence -- mostly because the models are not trained to handle this case.
```
<|||||>The attention mask will not solve it, you need left-padding at generation time.
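A small hedged illustration of the difference (tokenizer-only, so it is cheap to try; the checkpoint name is the one used earlier in the thread):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tok.pad_token = tok.eos_token

batch = ["Short prompt", "A noticeably longer prompt that needs no padding"]

tok.padding_side = "right"
print(tok(batch, padding=True, return_tensors="pt").input_ids)  # pad ids trail the short prompt

tok.padding_side = "left"
print(tok(batch, padding=True, return_tensors="pt").input_ids)  # pad ids lead; the last position is always a real token
```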
There's nothing like playing with the model to understand what would happen :)<|||||>> Hey @arian-askari ๐
>
> The exception pops up because you are defining a new token (`[PAD]`), which causes the exception in the embedding layer (it doesn't know the embeddings for the new token until you define them).
>
> Most decoder-only models have the same "issue" where the padding token is not defined. The standard workaround is as follows:
>
> ```python
> from transformers import AutoTokenizer, LlamaForCausalLM
> model_name = "huggyllama/llama-7b"
>
> tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
> model = LlamaForCausalLM.from_pretrained(model_name)
> tokenizer.pad_token = tokenizer.eos_token
> model.generation_config.pad_token_id = model.generation_config.eos_token_id
>
> prompt = "Hey, are you consciours? Can you talk to me?"
> inputs = tokenizer([prompt, prompt + " blah blah"], return_tensors="pt", padding=True, truncation=True)
>
> # Generate
> generate_ids = model.generate(inputs.input_ids, max_length=30)
> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
> ```
>
> Please note that I added `padding_side="left"` on the tokenizer -- it is critical for decoder-only models like Llama!
When using left padding, do we need to set the mask matrix for the left padding, or is there an automatic mask mechanism inside ?
<|||||>@renmengjie7 the masking mechanism is the same, the only difference is the mask that comes out of the tokenizer (`inputs.attention_mask` in the snippet above) :) <|||||>@gante got it ! Thank you very much. |
transformers | 23,400 | closed | The test `LlamaIntegrationTest::test_conversion` test is failing | The following command
```bash
RUN_SLOW=1 python3 -m pytest -v tests/models/llama/test_tokenization_llama.py::LlamaIntegrationTest::test_conversion
```
gives
```bash
> self.assertEqual(old_serialized, new_serialized)
E AssertionError: '{\n [1465 chars] "Sequence": {\n "id": "B",\n [1794589 chars]}\n}' != '{\n [1465 chars] "SpecialToken": {\n "id": "<s>",\n[1794837 chars]}\n}'
tests/models/llama/test_tokenization_llama.py:337: AssertionError
```
### Who can help?
@ArthurZucker | 05-16-2023 13:12:06 | 05-16-2023 13:12:06 | ~I looked into it.~
~The difference is that the newly converted tokenizer has ids 32000-32004 as special ids which correspond if I'm not mistaken to OpenAssistant llama fork.~
~Those do not seem to be declared here: https://huggingface.co/hf-internal-testing/llama-tokenizer/tree/main~
~I'm not sure which part of the code adds them to the slow tokenizer, but this seems indeed like a bug.~
Looked at the wrong file. Everything works it's only a different `type_id` in the post processor.
We simply need to update the tokenizer.json on the hub with the correct value (1)<|||||>(There's also a slight issue with the EOS token being added into the processor for no reason.
<|||||>https://huggingface.co/hf-internal-testing/llama-tokenizer/discussions/3
Goes along with
https://github.com/huggingface/transformers/issues/23400<|||||>Confirmed it works! |
transformers | 23,399 | closed | [`Pix2Struct`] Add conditional generation on docstring example | # What does this PR do?
As discussed in https://github.com/huggingface/transformers/pull/23391#issuecomment-1549555750 - this PR adds an example for users to run conditional generation using pix2struct. In fact, users shouldn't add special tokens when pre-pending the text - therefore it should be explicitly mentioned in the docs (done on the aformentioned PR) but also on the example snippets.
cc @amyeroberts
| 05-16-2023 13:09:44 | 05-16-2023 13:09:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,398 | closed | Generate: faster `can_generate` check on TF and Flax | # What does this PR do?
Same as https://github.com/huggingface/transformers/pull/22643, but on TF and Flax.
[This comment](https://github.com/huggingface/transformers/pull/22643#issuecomment-1501033074) shows that it reduces the exec time of this line from 1-500 ms to <0.01 ms | 05-16-2023 13:07:56 | 05-16-2023 13:07:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,397 | closed | Docs: add link to assisted generation blog post | # What does this PR do?
(see PR title) | 05-16-2023 12:55:49 | 05-16-2023 12:55:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,396 | closed | trainer.train(resume_from_checkpoint=True) failed | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.28.1
- Platform: Linux-4.15.0-209-generic-x86_64-with-glibc2.27
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
When I call trainer.train() to continue training a llama-7B model from a checkpoint, I encounter the following issue:



And I'm not sure why this problem is occurring. Here is the code I'm running:

### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
RuntimeError: Trying to resize storage that is not resizable
Here is the code I'm running:
```
def train():
global local_rank
parser = transformers.HfArgumentParser(
(ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
local_rank = training_args.local_rank
model = transformers.AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
).half()
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
padding_side="right",
use_fast=False,
)
tokenizer.pad_token = tokenizer.unk_token
data_module = make_supervised_data_module(tokenizer=tokenizer,
data_args=data_args)
trainer = Trainer(model=model,
tokenizer=tokenizer,
args=training_args,
**data_module)
if list(pathlib.Path(training_args.output_dir).glob("checkpoint-*")):
trainer.train(resume_from_checkpoint=True)
else:
trainer.train()
trainer.save_state()
trainer.save_model(training_args.output_dir)
```
### Expected behavior
I don't encounter any checkpoint import error when I continue training from a checkpoint. | 05-16-2023 12:29:04 | 05-16-2023 12:29:04 | The saved checkpoint is corrupted somehow. I don't know what could be the reason for it since I don't know how it was saved in the first place.<|||||>sorry for not showing the training parameters earlier. In fact, I used the trainer's automatic checkpoint saving method based on the number of steps. Here, it is set to save every 1200 steps๏ผ

Here is the directory where I save my checkpoints:

<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,395 | closed | Unable to import graphormer from transformers | ### System Info
transformers : 4.29.1
Python : 3.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator
In the above import command the system is unable to import algos_graphormer from cython (pyx) file.
The below error message is popping up.
ImportError: cannot import name 'algos_graphormer' from 'transformers.models.graphormer' (/usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/__init__.py)
### Expected behavior
It needs to be imported without any errors. | 05-16-2023 11:33:43 | 05-16-2023 11:33:43 | By importing cython file manually, fixed the issue. Thanks a lot.<|||||>>
I got the same question, but only
```
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/miniconda3/envs/g/lib/python3.8/site-packages/transformers/models/graphormer/collating_graphormer.py", line 16, in <module>
from . import algos_graphormer # noqa E402
ImportError: cannot import name 'algos_graphormer' from 'transformers.models.graphormer' (/root/miniconda3/envs/g/lib/python3.8/site-packages/transformers/models/graphormer/__init__.py)
```
could u please tell me about the details of your solution. where is the cython file? <|||||>Kindly copy the below cython file inside your transformer installed whl folder.(Under src/transformers/models/graphormer)
https://github.com/huggingface/transformers/blob/main/src/transformers/models/graphormer/algos_graphormer.pyx
<|||||>> Kindly copy the below cython file inside your transformer installed whl folder.(Under src/transformers/models/graphormer)
>
> https://github.com/huggingface/transformers/blob/main/src/transformers/models/graphormer/algos_graphormer.pyx
Thanks for your reply.
i found this file in my
`/miniconda3/envs/payldet/lib/python3.9/site-packages/transformers/models/graphormer` folder:
```
/miniconda3/envs/payldet/lib/python3.9/site-packages/transformers/models/graphormer# ls
__init__.py __pycache__ algos_graphormer.c algos_graphormer.pyx collating_graphormer.py configuration_graphormer.py modeling_graphormer.py
```
do i need replace this file?
i try to compile this pyx file manually,but meet fatal error now:
`fatal error: numpy/arrayobject.h: No such file or directory`
<|||||>> i
I would request you to stage this .pyx file inside graphormer folder. ( The place where transformer gets installed) - Most likely under /usr/local/python<version>/.....<|||||>>
Thanks for your reply.
I installed Transformer in a Conda environment, so I don't have the path you replied. However, I finally resolved this question in another way.
By compiling this Pyx file manually, I got a .so file. To import it manually, I changed `transformers/models/graphormer/configuration_graphormer.py` file and added the specific path.
```
# line 15
# before:
if is_cython_available():
import pyximport
pyximport.install(setup_args={"include_dirs": np.get_include()})
from . import algos_graphormer # noqa E402
# after:
if is_cython_available():
import pyximport
pyximport.install(setup_args={"include_dirs": np.get_include()})
import sys
sys.path.append('/path/to/.so file')
import algos_graphormer
```
Successfully ran model.py. But I'm not sure this way will not have a bad influence in the future.
```
# just test
from datasets import load_dataset
from datasets import load_metric
import evaluate
import cython
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator
dataset = load_dataset("OGB/ogbg-molhiv")
metric = evaluate.load("accuracy")
# print(dataset["train"].features)
dataset_processed = dataset.map(preprocess_item, batched=False)
# split up training into training + validation
train_ds = dataset_processed['train']
val_ds = dataset_processed['validation']
print(train_ds[0].keys())
```
and the result is :
```
python model.py
Found cached dataset json (/root/.cache/huggingface/datasets/OGB___json/OGB--ogbg-molhiv-8591baabc5d95f2f/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4)
100%|██████████| 3/3 [00:00<00:00, 557.48it/s]
dict_keys(['edge_index', 'edge_attr', 'y', 'num_nodes', 'node_feat', 'input_nodes', 'attn_bias', 'attn_edge_type', 'spatial_pos', 'in_degree', 'out_degree', 'input_edges', 'labels'])
```<|||||>> mport sys
If you are using colab, Kindly stage your cython file at /usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/algos_graphormer.pyx
Thanks for sharing the solution. After completing the training, would you able to write prediction code? ( without the influence of trainer.predict()).
By compiling this Pyx file manually, --> Did you create this file manually executing? May I know how you created this file? I just exported this pyx file and just used same as it's.<|||||>> Thanks for sharing the solution. After completing the training, would you able to write prediction code? ( without the influence of trainer.predict()).
Hi,
I do not think I should put this pyx file in the `/usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/` folder because I am running this code in a Conda environment, not on Colab.
Regarding the prediction, I have to tell you that I am a novice in Graphormer (I downloaded it yesterday actually) and have not written any code about it. However, my main task is to get graph embeddings, so I'll try to finish it. Maybe we can do it together and learn Graphormer together.
How do I compile it? This is the code,named setup.py:
```
from setuptools import Extension, setup
import numpy
ext_modules = [
Extension(
name='example',
sources=['example.pyx'],
include_dirs=[numpy.get_include()]
)
]
setup(
name='example',
ext_modules=ext_modules,
)
```
after execute `python setup.py build_ext --inplace` ,you will get the .so file.<|||||>> > Thanks for sharing the solution. After completing the training, would you able to write prediction code? ( without the influence of trainer.predict()).
>
> Hi,
>
> I do not think I should put this pyx file in the `/usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/` folder because I am running this code in a Conda environment, not on Colab.
>
> Regarding the prediction, I have to tell you that I am a novice in Graphormer (I downloaded it yesterday actually) and have not written any code about it. However, my main task is to get graph embeddings, so I'll try to finish it. Maybe we can do it together and learn Graphormer together.
>
> How do I compile it? This is the code,named setup.py:
>
> ```
> from setuptools import Extension, setup
> import numpy
>
> ext_modules = [
> Extension(
> name='example',
> sources=['example.pyx'],
> include_dirs=[numpy.get_include()]
> )
> ]
>
> setup(
> name='example',
> ext_modules=ext_modules,
> )
> ```
>
> after execute `python setup.py build_ext --inplace` ,you will get the .so file.
Please check this one for prediction code : https://github.com/huggingface/transformers/issues/23642<|||||>@techthiyanes Hi,Thank you for sharing. I'm going to check it out now. |
transformers | 23,394 | closed | README: Fix affiliation for MEGA | [discussed in this thread](https://twitter.com/gneubig/status/1658199635457101825) | 05-16-2023 10:55:40 | 05-16-2023 10:55:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,393 | closed | is it possible to add `system prompt` to Blenderbot ? | This is a simple `BlenderBot` app:
```python
from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration
import os
class BlenderBot:
def __init__(
self,
model_name: str ='facebook/blenderbot_small-90M',
):
if not os.path.exists('./models/blenderbot'):
BlenderbotSmallForConditionalGeneration.from_pretrained(model_name).save_pretrained('./models/blenderbot')
BlenderbotSmallTokenizer.from_pretrained(model_name).save_pretrained('./models/blenderbot')
self.model = BlenderbotSmallForConditionalGeneration.from_pretrained('./models/blenderbot')
self.tokenizer = BlenderbotSmallTokenizer.from_pretrained('./models/blenderbot')
def __call__(self, inputs: str) -> str:
inputs_tokenized = self.tokenizer(inputs, return_tensors='pt')
reply_ids = self.model.generate(**inputs_tokenized)
reply = self.tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
return reply
def run(self):
while True:
user_input = input("User: ")
print("Bot:", self(user_input))
```
The problem is I don't know how to add any system prompts to manage the outputs of the chatbot. Any help with that? | 05-16-2023 09:53:08 | 05-16-2023 09:53:08 | Hi @SKbarbon, thanks for raising an issue!
This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.<|||||>Ah I am sorry @amyeroberts ! |
transformers | 23,392 | closed | Fix `RwkvModel` | # What does this PR do?
The convention is to filter out the `None` value from the output tuple.
And without this, torchscript tests fail as it doesn't like `None` value.
| 05-16-2023 09:52:24 | 05-16-2023 09:52:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,391 | closed | Update `test_batched_inference_image_captioning_conditioned` | # What does this PR do?
The test `tests/models/pix2struct/test_modeling_pix2struct.py::Pix2StructIntegrationTest::test_batched_inference_image_captioning_conditioned` starts to fail on CI run of `April 27` which includes the merged PR #23023.
@younesbelkada Could you double check if the changes in this PR are reasonable? Thank you.
| 05-16-2023 09:31:34 | 05-16-2023 09:31:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada If you can take over this PR to avoid `generate weird output right after`, it would be really nice :-)<|||||>The test should be now fixed, the generated text produces different output than before, probably due to https://github.com/huggingface/transformers/pull/23051 that now made the model using a causal attention mask on the text decoder (which was not the case before)<|||||>Thanks a lot!<|||||>> I think that we should educate users that for text-conditioned generation we should never add special tokens to the tokenizer - as introduced in https://github.com/huggingface/transformers/pull/23004
@younesbelkada The best place to do this I think is in the example docstring for the model, as this is what a lot of users will reference, and it currently doesn't do that. Could you open a PR to update this? <|||||>> @younesbelkada The best place to do this I think is in the example docstring for the model, as this is what a lot of users will reference, and it currently doesn't do that. Could you open a PR to update this?
Sure yes will do!
> Changes look fine - my only concern is that the generations appear to have become worse. @younesbelkada @ydshieh do we have any other generation samples to make sure the model is behaving as expected?
Yes! I was relieved since we do have the tests `test_batched_inference_image_captioning` & `test_inference_image_captioning` that still pass --> meaning that the un-conditional text generation seem to be unaffected!<|||||>> > @younesbelkada The best place to do this I think is in the example docstring for the model, as this is what a lot of users will reference, and it currently doesn't do that. Could you open a PR to update this?
>
> Sure yes will do!
I am going to merge this PR and leave @amyeroberts 's suggestion for @younesbelkada in a separate PR. Thank you for the review and the refine of this PR. |
transformers | 23,390 | closed | [Sagemaker] sagemaker distributed features in Trainer broken since Transformers 4.29 | ### System Info
* `transformers: 4.29.1`
* `datasets: 2.12.0`
* `evaluate: 0.4.0`
* `accelerate: 0.19.0`
* `torch: 2.0.0`
* `diffusers: 0.16.1`
### Who can help?
@philschmid @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The issue was found while I was updating the AWS SageMaker deep learning container for Transformers-PyTorch. SageMaker's distributed data parallel and model parallel features have been broken since Transformers 4.29.*.
* Test scripts: [`test_smmp.py`](https://github.com/aws/deep-learning-containers/blob/master/test/sagemaker_tests/huggingface_pytorch/training/integration/sagemaker/test_smmp.py) and [`test_smdp.py`](https://github.com/aws/deep-learning-containers/blob/master/test/sagemaker_tests/huggingface_pytorch/training/integration/sagemaker/test_smdp.py).
* Related PR in the DLC repo: https://github.com/aws/deep-learning-containers/pull/2993
* Error log:
```
AttributeError: 'TrainingArguments' object has no attribute 'distributed_state'
```
<details close>
<summary>More detailed error log</summary>
<br>
```
_____________________________ test_smmp_gpu[gloo] ______________________________
ecr_image = '669063966089.dkr.ecr.us-west-2.amazonaws.com/pr-huggingface-pytorch-training:2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48'
sagemaker_regions = ['us-west-2', 'us-east-1', 'eu-west-1']
instance_type = 'ml.p3.8xlarge', framework_version = '2.0.0', py_version = 'py3'
dist_gpu_backend = 'gloo'
@pytest.mark.processor("gpu")
@pytest.mark.integration("smmp")
@pytest.mark.model("hf_qa_smmp")
@pytest.mark.skip_cpu
@pytest.mark.skip_py2_containers
@pytest.mark.skip_trcomp_containers
def test_smmp_gpu(
ecr_image, sagemaker_regions, instance_type, framework_version, py_version, dist_gpu_backend
):
> invoke_sm_helper_function(ecr_image, sagemaker_regions, _test_smmp_gpu_function, py_version, 1)
integration/sagemaker/test_smmp.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../__init__.py:119: in invoke_sm_helper_function
raise e
../../__init__.py:113: in invoke_sm_helper_function
test_function(tested_ecr_image, sagemaker_session, *test_function_args)
integration/sagemaker/test_smmp.py:117: in _test_smmp_gpu_function
huggingface_estimator.fit(job_name=sagemaker.utils.unique_name_from_base("test-hf-pt-qa-smmp"))
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/workflow/pipeline_context.py:272: in wrapper
return run_func(*args, **kwargs)
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/estimator.py:1156: in fit
self.latest_training_job.wait(logs=logs)
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/estimator.py:2297: in wait
self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/session.py:4216: in logs_for_job
self._check_job_status(job_name, description, "TrainingJobStatus")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sagemaker.session.Session object at 0x7f466ef3fc40>
job = 'test-hf-pt-qa-smmp-1684001017-f81e'
desc = {'AlgorithmSpecification': {'EnableSageMakerMetricsTimeSeries': True, 'TrainingImage': '669063966089.dkr.ecr.us-west-2...)), 'DebugHookConfig': {'CollectionConfigurations': [], 'S3OutputPath': 's3://sagemaker-us-west-2-669063966089/'}, ...}
status_key_name = 'TrainingJobStatus'
```
```
def _check_job_status(self, job, desc, status_key_name):
"""Check to see if the job completed successfully.
If not, construct and raise a exceptions. (UnexpectedStatusException).
Args:
job (str): The name of the job to check.
desc (dict[str, str]): The result of ``describe_training_job()``.
status_key_name (str): Status key name to check for.
Raises:
exceptions.CapacityError: If the training job fails with CapacityError.
exceptions.UnexpectedStatusException: If the training job fails.
"""
status = desc[status_key_name]
# If the status is capital case, then convert it to Camel case
status = _STATUS_CODE_TABLE.get(status, status)
if status == "Stopped":
LOGGER.warning(
"Job ended with status 'Stopped' rather than 'Completed'. "
"This could mean the job timed out or stopped early for some other reason: "
"Consider checking whether it completed as you expect."
)
elif status != "Completed":
reason = desc.get("FailureReason", "(No reason provided)")
job_type = status_key_name.replace("JobStatus", " job")
message = "Error for {job_type} {job_name}: {status}. Reason: {reason}".format(
job_type=job_type, job_name=job, status=status, reason=reason
)
if "CapacityError" in str(reason):
raise exceptions.CapacityError(
message=message,
allowed_statuses=["Completed", "Stopped"],
actual_status=status,
)
> raise exceptions.UnexpectedStatusException(
message=message,
allowed_statuses=["Completed", "Stopped"],
actual_status=status,
)
E sagemaker.exceptions.UnexpectedStatusException: Error for Training job test-hf-pt-qa-smmp-1684001017-f81e: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
E ExitCode 1
E ErrorMessage "AttributeError: 'TrainingArguments' object has no attribute 'distributed_state'
E โญโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโฎ
E โ /opt/conda/lib/python3.10/runpy.py:196 in _run_module_as_main โ
E โ โ
E โ 193 โ main_globals = sys.modules["__main__"].__dict__ โ
E โ 194 โ if alter_argv: โ
E โ 195 โ โ sys.argv[0] = mod_spec.origin โ
E โ โฑ 196 โ return _run_code(code, main_globals, None, โ
E โ 197 โ โ โ โ โ "__main__", mod_spec) โ
E โ 198 โ
E โ 199 de, exit code: 1
2.0.0-transformers4.29.1-gpu-py310-cu118-ubuntu20.04-pr-2993-2023-05-13-17-25-48/lib/python3.8/site-packages/sagemaker/session.py:3749: UnexpectedStatusException
```
</details>
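For context, here is a rough, hedged sketch of how these DLC tests launch the `Trainer`-based example scripts through the SageMaker Python SDK; the instance types, script names, hyperparameters, role and image URI below are placeholders rather than the exact values used in `test_smdp.py` / `test_smmp.py`.

```python
# Rough sketch of the launch path exercised by the failing tests; values are placeholders.
from sagemaker.huggingface import HuggingFace

# smdistributed data parallel; the model parallel test instead enables
# {"smdistributed": {"modelparallel": {...}}} together with an "mpi" configuration.
distribution = {"smdistributed": {"dataparallel": {"enabled": True}}}

estimator = HuggingFace(
    entry_point="run_glue.py",  # Trainer-based example script
    source_dir="./examples/pytorch/text-classification",
    instance_type="ml.p3.16xlarge",
    instance_count=2,
    role="<sagemaker-execution-role>",  # placeholder
    image_uri="<dlc-image-uri-under-test>",  # placeholder
    distribution=distribution,
    hyperparameters={"model_name_or_path": "distilbert-base-uncased", "max_steps": 100},
)
estimator.fit()
```

Inside the training container, the script then builds `TrainingArguments`, which is where the `distributed_state` attribute error above comes from.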
### Expected behavior
Find either what to adapt from the sagemaker side or from our side to make sure distributed features work, so that we would be update transformers to higher versions the next time. | 05-16-2023 09:21:14 | 05-16-2023 09:21:14 | cc @pacman100 <|||||>cc @muellerzr as he has full insights on the related PR.<|||||>@JingyaHuang could you show the full stack trace so I can see where this error is specifically being raised from/its context?<|||||>I need any context of a trace that relates to something in transformers to be able to know where/how this is stemming from. SageMaker should not be handled by the current accelerate implementation, so it's critical to know just where that logic fault is :) <|||||>@muellerzr Sure, here is the complete log: https://gist.github.com/JingyaHuang/9327adb701d9989da8cf4c33bfeb043e
(sorry if it looks messy, it's populated by sagemaker. Related tests are test_smmp and test_smdp.py)
<|||||>@JingyaHuang could you try installing transformers via `pip install git+https://github.com/huggingface/transformers@muellerzr-fix-sagemaker` and verify this fixes it? And if not, what other errors arise? Thanks!<|||||>Trying! It will take a while for the image build, will update asap.<|||||>@philschmid I thought SageMaker MP was broken since many releases ago?<|||||>> @philschmid I thought SageMaker MP was broken since many releases ago?
It was not broken, just not using all the features by default, e.g. Tensor Parallelism. <|||||>Hey @muellerzr, here is the log that I got with your patch: https://gist.github.com/JingyaHuang/3b60725d0a6f22f377b27694d22c18ca
There seems to be another issue now:
```
TypeError: init_process_group() got multiple values for argument 'backend'
``` <|||||>@JingyaHuang what version of `accelerate` are you running with? That should be fixed in v0.19.0<|||||>@JingyaHuang there was a fix on main for accelerate that may be related to this, are we trying to spawn on cpu/do distributed CPU?<|||||>Thanks @muellerzr, yes the tests were running in a docker container with accelerate 0.19.0 installed
```
Name: accelerate
Version: 0.19.0
Summary: Accelerate
Home-page: https://github.com/huggingface/accelerate
Author: The HuggingFace team
Author-email: [email protected]
License: Apache
Location: /opt/conda/lib/python3.10/site-packages
Requires: numpy, packaging, psutil, pyyaml, torch
Required-by:
```
The smmp test ran sagemaker distributed on GPU, I am not familiar with how sagemaker's distributed model parallel works. @philschmid might have a better answer, does it spawns CPU processes? <|||||>@JingyaHuang looked at the trace again, yes it does make sense that the fix to main should have changed it actually based on what's happening here. Can you try one more time with `pip install git+https://github.com/huggingface/accelerate` to verify? Thanks ๐ <|||||>(Auto-closed due to PR merging, will keep open until we know for sure w/ Accelerate fix :) )<|||||>Hey @muellerzr, here is the log that I got by the test with accelerate from source last week: https://gist.github.com/JingyaHuang/0026e8801e99d0df522fb2bcb2b2334c
(I did not configure accelerate, not sure if I should do that?)<|||||>Thanks @JingyaHuang (and appreciate your patience).
Let's try via the following:
```bash
pip install git+https://github.com/huggingface/transformers@muellerzr-sagemaker-dp git+https://github.com/huggingface/accelerate@sagemakerdp
```
Thanks!<|||||>@muellerzr No worries!
Another error occurs, so no luck yet :( . Here is the log: https://gist.github.com/JingyaHuang/01de393f9da716ff094c248f12ec1465 <|||||>@JingyaHuang I'd actually say we're making progress! New errors, with easier solutions :) Let's try again, same branches etc. for everything<|||||>Boom!
๐

<|||||>@muellerzr I just ran the sagemaker data parallel test but with question answering task(`run_qa.py` โ ) instead of text generation task(`run_glue.py` โ
) this time, and it failed during the evaluation (while doing post-processing of the predictions).
```
ValueError: Got 676 predictions and 10784 features.
```
Full tracing log here: https://gist.github.com/JingyaHuang/b824d3abd17c6db23e68968dec0cee13
Do you think it is related to the issue, or just an update need to be done for the examples? <|||||>@JingyaHuang to know for sure, try running it with `transformers==4.28.1`<|||||>@muellerzr I just did two tests:
* Run __qa__ example on __smmp__ test with __patched transformers & accelerate__ -> tests passed ✅

(so with the patch, smmp test passed for both text classification and qa)
p.s. `run_qa` and `run_glue` are fetched from the main branch
* Run __qa__ example on __smdp__ test with __trfrs 4.28.1 & accelerate 0.19.0__ -> tests passed ✅

p.s. `run_qa` and `run_glue` are fetched from the v4.28.1 branch
And the previous error log on the qa task was for the __smdp__ test, so it seems smmp is good but smdp is still broken for 4.29.*. Is there anything that needs to be done for smdp during the prediction step, maybe?<|||||>@muellerzr The number of features for evaluation is 10784. Since the smdp test was run on two `ml.p3.16xlarge` instances (8 GPUs each, i.e. 16 workers in total) and 10784 / 676 = 15.9526627, intuitively I suspect that when using smdp only the predictions from one worker are kept (676).
Ref smdp test: https://github.com/aws/deep-learning-containers/blob/master/test/sagemaker_tests/huggingface_pytorch/training/integration/sagemaker/test_smdp.py#L108
just a thought (ใแดใ)<|||||>Closing now via https://github.com/huggingface/transformers/pull/23681, as all tests pass |
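As an illustration of the hypothesis above (per-worker predictions not being gathered before post-processing), here is a generic, hedged sketch of the kind of cross-process gather an evaluation loop needs; this is not the actual `Trainer` code path and assumes equal-sized shards per rank.

```python
# Generic sketch: gather per-worker predictions before post-processing.
import torch
import torch.distributed as dist


def gather_predictions(local_preds: torch.Tensor) -> torch.Tensor:
    """Concatenate every rank's predictions so each worker sees all of them."""
    if not (dist.is_available() and dist.is_initialized()):
        return local_preds
    # Assumes all ranks hold tensors of the same shape.
    gathered = [torch.zeros_like(local_preds) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, local_preds)
    return torch.cat(gathered, dim=0)


# With 16 workers holding ~676 examples each, the gathered tensor covers all 10784 features;
# keeping only the local shard reproduces "Got 676 predictions and 10784 features".
```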
transformers | 23,389 | closed | Fix RoBERTa vocab size | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23388
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-16-2023 09:16:39 | 05-16-2023 09:16:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23389). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |