repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 20,969 | closed | Support turning off the model uploading in ClearML | # What does this PR do?
Support turning off the model uploading in ClearML using an environment variable called `CLEARML_LOG_MODEL`
Fixes #20889
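A minimal usage sketch of the new switch (the variable name is the one this PR introduces; the accepted values shown here, `TRUE`/`FALSE`, and the `report_to` wiring are illustrative assumptions rather than something this PR documents):
```python
import os

# Assumed usage: set the flag before the Trainer is created so the ClearML
# callback can pick it up; "FALSE" is an assumed value meaning "do not upload".
os.environ["CLEARML_LOG_MODEL"] = "FALSE"

from transformers import TrainingArguments

# Training keeps reporting metrics to ClearML, but checkpoint uploads stay off.
args = TrainingArguments(output_dir="out", report_to=["clearml"])
# trainer = Trainer(model=model, args=args, train_dataset=train_ds)  # model/data omitted
```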
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-02-2023 21:47:06 | 01-02-2023 21:47:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger :) I modified the docstring, as you suggested. Can you please have a look? 🙏<|||||>Code looks good but it seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>@sgugger I re-authenticated to CircleCI, and it seems that the CI passed :)
Can you have a look and approve? :)<|||||>@sgugger Thanks for the feedback. I accepted your change :)<|||||>Thanks for your contribution! |
transformers | 20,968 | closed | Graphormer model for Graph Classification | # What does this PR do?
Adds the Graphormer model for graph classification in Transformers.
Done:
- [x] Architecture ported
- [x] Collator (the model has no tokenizer) and preprocessing
- [x] Test results against the original implementation, to make sure they are within precision range. Edit: exactly the same results :fire:
- [x] Add checkpoints and make sure they load properly
- [x] Update test suite
- [x] Add model card for the checkpoints (https://huggingface.co/clefourrier/pcqm4mv2_graphormer_base, https://huggingface.co/clefourrier/pcqm4mv1_graphormer_base); a minimal loading sketch follows this list
- [x] Update doc
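A minimal loading sketch for the checkpoint items above (illustrative only; the repo id is the one listed in this checklist, which the comments further down note was later moved to `clefourrier/graphormer-base-pcqm4mv2`):
```python
from transformers import AutoModel

# Illustrative sanity check that a released checkpoint loads through the Auto classes.
# Assumes the Graphormer model type is registered once this PR is merged.
model = AutoModel.from_pretrained("clefourrier/pcqm4mv2_graphormer_base")
print(model.config.model_type, sum(p.numel() for p in model.parameters()))
```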
## Dependencies
Cython - this could be ported to Python, but preprocessing will be considerably slower, as well as collation if preprocessing is done on the fly.
Linked to #20962
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Discussed with Thom on Slack)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
| 01-02-2023 14:12:20 | 01-02-2023 14:12:20 | @sgugger Sorry for not having put my PR in draft and thank you so much for your comments!
I'll take them into account, keep working on this, and ping you back when I have cleaner code?<|||||>No worries at all, and yes!<|||||>@sgugger I think the code is better now, if you want to take a look.
I have several questions:
- ~should I add more documentation?~
- how can I express the fact that this model does not use a tokenizer, but a collator, so that it appears in the doc ?
- I'm having trouble with the common test suites: for the inputs, for example, we embed nodes and edges of graphs using two different embedding layers > what should "get_input_embeddings" return then? A concatenation of both? Similarly, `input_ids` has no equivalent in our case, as our inputs are both `input_nodes` and `input_edges`, so I'm a bit stuck on what to do here
Edit: talked to @LysandreJik, will edit tests (though they'll be model specific atm) and doc will stay minimal for now<|||||>@clefourrier
It seems there is an issue with your CircleCI permissions, the tests are not run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?
<|||||>Just did, nothing changed, and I don't have the permission to manually trigger the pipeline on the [CI webpage](https://app.circleci.com/pipelines/github/huggingface/transformers?branch=pull%2F20968).
Do you have other ideas of things I could try? :hugs:<|||||>> Just did, nothing changed
you can do
```bash
git commit --allow-empty -m "Empty commit to trigger CI"
git push
```
> I don't have the permission to manually trigger the pipeline
This usually means you opened that job run page without login.<|||||>Cool, a commit seems to have solved it! :pray: @ydshieh !
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh All tests for my model are passing - thank you for your pointers!
<|||||>Thank you so much for all the time spent reviewing @sgugger @ydshieh ! :pray:
I hope this time it's finally up to par ^^
Quick question: once merged, how long till it is in a release of transformers? I have a blog post on Graph Classification with this model, when should I plan on publishing it/communicating about it?<|||||>There should be a release of Transformers next week by the way.<|||||>@clefourrier
There are a few tests failed on daily CI. See [here](https://github.com/huggingface/transformers/actions/runs/3964181931/jobs/6792864795).
I can also help, but I have one question: Does `GraphormerModel` require all the following inputs to be specified?
```python
def forward(
...
input_nodes,
input_edges,
attn_bias,
in_degree,
out_degree,
spatial_pos,
attn_edge_type,
```<|||||>Hi @clefourrier!
When you get some time, could you take a look on the following failed tests:
```
tests/models/graphormer/test_modeling_graphormer.py::GraphormerModelTest::test_model_from_pretrained
tests/models/graphormer/test_modeling_graphormer.py::GraphormerModelIntegrationTest::test_inference_graph_classification
```
which seems related to the missing or wrong checkpoint link/path.
Regarding the other 3 failed tests (`GraphormerModelTest::test_torchscript_xxx`), I can work on them, but I need a bit of context regarding the inputs for this model 🙏 , see my comment above. Thank you :-)<|||||>@ydshieh Hi! Back from vacations! :wave:
- Edit: The checkpoint is now here: https://huggingface.co/clefourrier/graphormer-base-pcqm4mv2 (The problem likely came from the wrong dash type, changed the path)
- The model needs as inputs all the inputs you mentioned, which are generated during data preprocessing. If you need more information on the model, I wrote a blog post (still a PR atm https://github.com/huggingface/blog/pull/781) which describes inputs and use and feel free to ask any questions which could help!<|||||>@ydshieh Opened a new PR #21367 to manage the checkpoint path problems. |
transformers | 20,967 | closed | Fix past CI | # What does this PR do?
I tried to launch Past CI (the 2nd round) after #20861, but there are some more fixes required: Past CI images don't install other dependencies, and we need more decorators to skip some tests if they are not installed. | 01-02-2023 14:07:12 | 01-02-2023 14:07:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,966 | closed | TF: serializable hubert | # What does this PR do?
Fixes #20954 -- some handling for dynamic shapes was missing on Wav2Vec2/Hubert | 01-02-2023 11:42:05 | 01-02-2023 11:42:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,964 | closed | Generate: delete unused TF `_reorder_cache` | # What does this PR do?
Starts off the year with my favorite task: deleting unused code 🥳 The deleted private function (`_reorder_cache`) is no longer used due to the removal of non-XLA-compatible generate functions (#20927) | 01-02-2023 10:36:25 | 01-02-2023 10:36:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,963 | closed | [i18n-<languageCode>] Translating docs to <languageName> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
--> | 01-02-2023 10:20:59 | 01-02-2023 10:20:59 | |
transformers | 20,962 | closed | Graphormer | ### Model description
Graph Transformer model developed by Microsoft.
https://graphormer.readthedocs.io/en/latest/
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Full implementation: https://github.com/microsoft/Graphormer/
Weights:
- pcqm4mv1_graphormer_base: https://ml2md.blob.core.windows.net/graphormer-ckpts/checkpoint_best_pcqm4mv1.pt
- pcqm4mv2_graphormer_base: https://ml2md.blob.core.windows.net/graphormer-ckpts/checkpoint_best_pcqm4mv2.pt
- pcqm4mv1_graphormer_base_for_molhiv: https://ml2md.blob.core.windows.net/graphormer-ckpts/checkpoint_base_preln_pcqm4mv1_for_hiv.pt | 01-02-2023 09:58:07 | 01-02-2023 09:58:07 | Hi, @clefourrier
can I contribute to this? also open to any other issue
I have previously contributed to PyTorch (350+ lines of C++, Objective C, and Python)<|||||>Hi @Raman-Kumar, I'd be delighted to have some help!
I'll try to wrap up what I have and clean up a bit by Friday, and either I'll need your help on this to finish up, or it will be good and we can work together on integrating TokenGT, another graph transformer (for which I have draft code), or if you are feeling very confident, you can integrate another graph transformer model of your choice, wdyt?<|||||>😊 Replying a little late, I was going through other issues and reading some PRs.
@clefourrier
I would like to work on integrating TokenGT
and am also ready to offer any other help if you have a delegable workload.
Meanwhile, I am reading more blogs and your work.<|||||>Solved by the merge of Graphormer |
transformers | 20,961 | closed | Add Transformer-Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss | ### Model description
paper: [Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss](https://arxiv.org/abs/2002.02562)
- Transformer-Transducer is an end-to-end ASR streaming model that converts spoken speech into text in real time.
- It implements the RNN-T architecture with Transformer encoders and is trained with the RNN-T loss.
- It consists of a Label Encoder in charge of text, an Audio Encoder in charge of audio, and a Joint Network that combines the outputs of the two encoders (a minimal sketch of this joint step follows this list).
- To avoid exceeding the Transformer's max_length, the audio is converted into a log-Mel spectrogram and consecutive Mel frames are stacked so that the sequence fits within max_length.
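A minimal sketch of the joint step described in the list above (illustrative shapes and names only, not code from the linked repository):
```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    """Toy RNN-T joint: combine audio-encoder and label-encoder states into
    logits over the vocabulary plus a blank symbol."""

    def __init__(self, audio_dim: int, label_dim: int, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.label_proj = nn.Linear(label_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size + 1)  # +1 for the blank token

    def forward(self, audio_states: torch.Tensor, label_states: torch.Tensor) -> torch.Tensor:
        # audio_states: (batch, T, audio_dim); label_states: (batch, U, label_dim)
        joint = self.audio_proj(audio_states).unsqueeze(2) + self.label_proj(label_states).unsqueeze(1)
        return self.out(torch.tanh(joint))  # (batch, T, U, vocab_size + 1), fed to the RNN-T loss
```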
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
jp1924: [Transformer-Transducer](https://github.com/jp1924/transformer-transducer) | 01-02-2023 08:08:36 | 01-02-2023 08:08:36 | Hey @jp1924! Thanks for opening this new model request. The Transformer-Transducer is indeed an exciting architecture in speech recognition - one that achieves both strong performance and low latency.
My hesitations in adding this model stem from the fact that the weights are not open-sourced. It's very rare that we add a model to transformers when the weights are not open-sourced; having no weights means that the model can only be used with randomly initialised parameters, which is not much good for downstream ASR!
Models with randomly initialised weights require extensive training with generous GPU/TPU compute to produce trained checkpoints. In many cases, it is very difficult to reproduce the exact results from the paper due to differences in data, training set-up and compute resources.
On the other hand, pre-trained models (weights) are extremely valuable to the community as they can be used directly without training (or with a small amount of fine-tuning) for downstream inference tasks. Consequently, we tend to focus our efforts in transformers on adding models where pre-trained weights are available.
This is not as to discourage you from contributing a Transformer-Transducer model to transformers. Such a contribution would be most welcome! However, taking into account the above points, I would advise that you focus the implementation on a Transformer-Transducer codebase where strong pre-trained weights are available and open-sourced. I'm more than happy to help find a suitable codebase + weights to port! This would be a valuable addition to the transformers library.<|||||>Thanks to project like hivemind and other community members like @fxtentacle it is possible to organize the computing capacity.
If Hajo is still interested in that use case we could try to pretrain an German model, what do you think?<|||||>Thank you advise! @sanchit-gandhi!
In addition to implementing the code, I will find a way to upload weight!
*The contents below have nothing to do with the exact implementation of the model!*
Actually, as you said, it takes a lot of resources to train Transformer-Transducers: it needs to run for 730 epochs when I use the hyper-parameters described in the paper.
The problem with this is that, in a way, we have to train Encoders from scratch So what I'm thinking experimentally is that I'm thinking about changing Audio and Lable Encoder to a PreTraining model(like Wav2Vec2 or BERT).
---
You can always do that if the model can help with your project! @flozi00!
But this model hasn't been validated yet. I don't know when you start Pretrain, but I need to stabilize the algorithm of generate or tokenizer, so can you wait a little bit? I'm making this at the same time as the company's work, so I think it'll take some time!
And I have a question about using German data to proceed with training.
1. What dataset are you going to use?
2. Is it possible to verify the model when training the model using the data?
3. Do you have a comparator to use for verification?
There's this much question. I'd appreciate an answer!
*From here on, it's about verification! You don't have to read it if you don't need it.*
This is an empirical story. When I measured the performance of my native language data, KsponSpeech, using Test-Clean, the performance of Wav2Vec2 was around 20%(WER), and RNN-T was around 30%. I think the range of performance will be 5-10% if German is also taught.<|||||>> So what I'm thinking experimentally is that I'm thinking about changing Audio and Lable Encoder to a PreTraining model(like Wav2Vec2 or BERT)
This is a good idea! The only constraint then is that our encoder network must take the same architecture as Wav2Vec2 in order for all of the pre-trained weights to be compatible with the network.
Since Wav2Vec2 is a different architecture to the Transformer network used in the Transformer-Transducer model, we'll likely only be able to load a _subset_ of the pre-trained weights into the T-T model this way.<|||||>Thank you for answering even though it's an experimental idea! @sanchit-gandhi
There's something I didn't understand while reading. I didn't understand the "*subset*" well, but the structures of Wav2Vec2 and T-T models are different, so do you want to bring only the encoder part of the pre-trained Wav2Vec2?<|||||>Hey @jp1924 - we'll only be able to load the weights of the Wav2Vec2 model into the Transformer-Transducer **if** the T-T has the same encoder architecture as Wav2Vec2. If it doesn't then this won't be possible (or we'll only be able to load the weights that do match, which still leaves some randomly initialised)<|||||>I'd like to reiterate that adding a T-T model to Transformers would be amazing and think it's great you're excited by this too!
We should be selective though in adding a model where weights are already available, preferably 'official' ones as it's very hard to emulate these strong pre-trained checkpoints without the data/compute.
If this isn't the case, it's very difficult to justify adding the model to transformers (the torch model is not much use without the trained params to go with it!)<|||||>Sorry for the late reply! @sanchit-gandhi
I'll experiment right away, I think it'll be possible if I just modify the Encoder part of the Transformer Transducer Model!
But there's one thing I'm worried about. It's the CNN layer of wav2vec2, and the Streaming Model will have at least 25ms of voice. But I don't know how the CNN class will react to this. Maybe we need more experiments on this.
---
*This is the next best way to try if the above method doesn't work. You don't have to read it.*
The second best way is to pretrain the AudioEncoder using Gumbel-softmax.
In fact, the difference between Wav2Vec2 and T-T's Audio Encoder is the difference in how raw-audio is put into the Transformer Encoder. Wav2Vec2 compresses the voice using CNN, and T-T converts audio to Mel, compresses the voice through windowing, and puts it in the encoder layer.
Then my idea is to convert audio into a windowed mel to pretrain the AudioEncoder of T-T.
It's just that I don't like it either. Because the way to pretrain the T-T model is obviously going to take a lot of resources and time... if possible, this is the last way I want to use it.....<|||||>Hey @jp1924, my feelings are that pre-training the model are going to be difficult from two perspectives:
1. Hacky architecture: we won't be able to pre-train the correct T-T architecture, but some modified Wav2Vec2-T version
2. Pre-training is expensive, both in terms of time and compute
Also, I'd like to re-iterate that it's very unlikely that we'd add such a model to Transformers - we can only really add 'official' implementations (i.e. the 'official' code and the 'official' weights), see https://github.com/huggingface/transformers/issues/20961#issuecomment-1382245091.
My recommendation would be to find an official T-T implementation where both the code and weights are available and see whether we can add this to Transformers!
Feel free to post any findings here - we can discuss them and pick the right one for the T-T integration!
Re-iterating my excitement for the T-T integration! We need to find 'official' code + checkpoints before we commit to integrating<|||||>Thank you for leaving a comment @sanchit-gandhi
The code and weight of the T-T model have not been officially released... and even the code of the T-T model that the users made personally has no weight. The code is not an formula, but is it possible to use that code to learn the model and upload it to the hub? Of course, when I heard that the model I learned had similar performance as the paper.<|||||>@flozi00 I personally probably have to pass on releasing an open-source T-T model due to a non-compete covering a closed-source T-T which I built. That said, the last time I talked to them, the University of Lübeck still had a few NVIDIA DGX available for research projects. The main requirement for such research GPU use is to write a 2-3 page paper about what worked and what didn't afterwards, so it's not a very big hurdle.
@sanchit-gandhi In my experience, a T-T can be trained quite cheaply with transfer learning. For the label encoder, you force it to produce the same logits as a pre-trained T5 (out of which there are plenty on HF). For the acoustic encoder, you force it to imitate the logits from a pre-trained wav2vec2. You can even pre-compute the label and acoustic logit I/O pairs as a temporary dataset. Because you're now training the T-T components against fixed I/O pairs, as opposed to doing alignment while training, they will converge really quickly, like a few days on a A100 each. For the join/merge network, you can pre-generate forced alignment data (e.g. from wav2vec2) and then train against those. <|||||>@fxtentacle
Hi! In my experience, pre-trained wav2vec2 is a full-attention model, so I don't think it is directly useful for a streaming T-T.
When I printed 1) the output for a full 10-second clip and 2) the output for a 1-second chunk of that clip,
and compared 1)'s vectors for that 1-second region against 2)'s vectors, the values were different from each other.
So if I say only "I'm so hun-" versus the full "I'm so hungry",
the acoustic vectors for the partial "hun" sound are not guaranteed to match the full-utterance ones!
maybe, did you want to say about pre-trained wav2vec2 model on trained streaming-like dataset?<|||||>@YooSungHyun when you have a dataset of audio, you can use a pre-trained wav2vec2 to generate logits for every timestep. Normally, you would then resolve those logits into text using the language model, but instead you can also just save them as a new dataset. So then you have the raw audio one the one side and the time-aligned logits from wav2vec2 on the other side. And that data can be used to train the acoustic encoder of a T-T. You feed a chunk of the raw audio into the encoder and then use the difference to your "known good" logits from wav2vec2 as the loss signal. Doing so removes the uncertainty w.r.t. the time alignment, because you already know where in time each logit was emitted by wav2vec2. And that greatly speeds up training the acoustic encoder, because you can use an absolute error loss instead of using a CTC loss. And that produces a much cleaner gradient to learn from.<|||||>Thank you for your idea! @fxtentacle I tested it based on your idea!
I understand what @YooSungHyun said
```
when the full voice "i'm so hungry" was input.
in streaming case, the corresponding voice of "l'm" -> "so" -> "hugry" is come in order,
but Wav2Vec2 has a difference in the value of the vector
when a full voice like "i'm so hungry" is received and when "l'm" -> "so" -> "hugry" is partially received.
```
So the solution to this problem is
```
If there is a difference, let's make the split_vector similar or equal to each part of the full_vector through
the loss calculation between the full_vector from "i'm so hungry" and the split_vector from
split_audio (e.g., when separated per second).
```
So based on the above understanding, I made the code below, but there was a problem.
```
from transformers import Wav2Vec2Model, Wav2Vec2Config
import torch
def main() -> None:
model_name = r"patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist"
cache_dir = r""
config = Wav2Vec2Config.from_pretrained(
model_name,
cache_dir=cache_dir,
apply_spec_augment=False,
)
model = Wav2Vec2Model.from_pretrained(model_name, cache_dir=cache_dir, config=config)
sampling_rate = 16000
batch_size = 2
audio_size = [254080, 101600, 293600, 82880]
# sec = 15.88, 6.35, 18.35, 5.18
dummy_datas = [torch.rand((batch_size, audio_len)) for audio_len in audio_size]
for full_audio in dummy_datas:
outputs = model(full_audio)
labels = outputs[0]
input_values = torch.zeros(labels.size())
full_size = full_audio.size(1)
stack_size = 0
check_list = list() # it's for test
# [NOTE]: Cut the voice in 1 seconds.
# If a 15.88 second voice is cut per second, 16 split_audios are generated.
for idx, split_idx in enumerate(range(0, full_size, sampling_rate), start=1):
split_audio = full_audio[:, split_idx : (split_idx + sampling_rate)]
outputs = model(split_audio)
hidden_states = outputs[0]
check_list.append(hidden_states)
hidden_size = hidden_states.shape[1]
input_values[:, stack_size : stack_size + hidden_size] = hidden_states
stack_size += hidden_size
state_size = sum([state.shape[1] for state in check_list])
print("\n---------- result ----------")
print(f"audio_length: {full_audio.shape[1] / sampling_rate}")
print(f"labels_length: {labels.shape[1]}")
print(f"actual_length: {state_size}")
print(f"differece: {labels.shape[1] - state_size}")
print(f"repeat_num: {idx}")
if "__main__" in __name__:
main()
```
For example, if you put a `full_audio` of size n into Wav2Vec2, you get `labels`, say a sequence of 7 hidden-state vectors.
Then, cut the same full_audio of size n into per-second `split_audio` chunks, put each one into Wav2Vec2 to get a `split_vector`, and stack the results to build `input_values`.
In my opinion, the lengths of `labels` and `input_values` should be the same when the audio is processed this way. However, there is a difference in length when I run the code above.
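A small, hedged way to see where that difference comes from (a sketch that only mirrors the strided convolutions of the feature encoder via `conv_kernel`/`conv_stride`, assuming the standard base settings; it is not a fix):
```python
from transformers import Wav2Vec2Config

def conv_output_length(num_samples: int, config: Wav2Vec2Config) -> int:
    """Frames produced by the convolutional feature encoder for `num_samples` samples.
    Each strided conv floors the length, so every extra chunk loses a few frames."""
    length = num_samples
    for kernel, stride in zip(config.conv_kernel, config.conv_stride):
        length = (length - kernel) // stride + 1
    return length

config = Wav2Vec2Config()  # default base-style conv stack
full = conv_output_length(254080, config)  # the 15.88 s example in one pass
chunked = sum(conv_output_length(16000, config) for _ in range(15)) + conv_output_length(14080, config)
print(full, chunked, full - chunked)  # with these defaults the difference is 15, as observed
```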
The picture below is a brief description of the problem.

When you actually rewind the code above, a difference of 15 occurs when you extract labels and input_values from a voice of 15.88 seconds. The reason why the difference is 15 instead of 16 is because if length - 1 is applied to all the audio input, even the actual label would have been length -1, so the difference would be 15 instead of 16.
The serious point of this problem is that the difference in length between input_values and labels increases in proportion to the length of the voice.
When I looked up the cause, I think the length-1 problem occurs while going through the Wav2Vec2FeatureEncoder (CNN).
The solution I think is to add 0 pad to split_vector and make 0 pad xavier, kaming initialize, etc., but I'm worried because it's not a fundamental solution.
Is there any way to fundamentally solve the problem other than attaching a pad?
|
transformers | 20,960 | closed | There should partial forward in pretrained BERT and RoBERTa models | ### Feature request
There should be a way to send inputs to specific encoder layers and finish the forward partially.
For example, there should be a way to send input hidden states to the fourth encoder layer, and get all hidden states after that through forward computation.
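A rough sketch of what this can look like today by iterating the layer modules directly (simplified extended-mask handling, and not an officially supported code path; names follow the current BERT implementation):
```python
import torch
from transformers import AutoTokenizer, BertModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()
inputs = tokenizer("a small example", return_tensors="pt")

with torch.no_grad():
    # `hidden` stands in for externally supplied hidden states (e.g. after an intervention)
    hidden = model.embeddings(inputs["input_ids"])
    ext_mask = model.get_extended_attention_mask(inputs["attention_mask"], inputs["input_ids"].shape)
    all_hidden = []
    for layer in model.encoder.layer[3:]:  # start the forward pass at the fourth layer (index 3)
        hidden = layer(hidden, attention_mask=ext_mask)[0]
        all_hidden.append(hidden)
print(len(all_hidden), all_hidden[-1].shape)
```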
### Motivation
This kind of thing is required in causal intervention.
### Your contribution
I can submit a PR to this effect. | 01-02-2023 06:12:49 | 01-02-2023 06:12:49 | Each of those models is defined in its own modeling file that you can modify to suit your needs. This is why we have a one file per model policy in Transformers :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,959 | closed | auxiliary_loss works for Deformable Detr | # What does this PR do?
DeformableDetr does not work when auxiliary_loss=True.
Since Deformable DETR stores `class_embed` and `bbox_embed` as lists of per-layer heads (an `nn.ModuleList`), this code will raise `NotImplementedError`:
```python
intermediate = outputs.intermediate_hidden_states if return_dict else outputs[4]
outputs_class = self.class_embed(intermediate)
outputs_coord = self.bbox_embed(intermediate).sigmoid()
```
```python
outputs_class = self.class_embed(intermediate)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented
raise NotImplementedError
NotImplementedError
```
To fix this, we can simply use predefined `outputs_class` and `outputs_coord` in this [line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L1942-L1943).
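A minimal, self-contained illustration of why the call fails and of the direction of the fix (generic modules, not the actual Deformable DETR code):
```python
import torch
import torch.nn as nn

# `class_embed` / `bbox_embed` are ModuleLists holding one prediction head per
# decoder layer; a ModuleList has no forward(), so calling it directly raises,
# exactly as in the traceback above.
heads = nn.ModuleList([nn.Linear(4, 2) for _ in range(3)])
intermediate = torch.rand(3, 1, 4)  # (num_decoder_layers, batch, hidden)

try:
    heads(intermediate)
except (NotImplementedError, TypeError) as err:
    print(type(err).__name__)

# Direction of the fix: reuse the per-layer predictions instead of calling the list,
# i.e. apply each head to the intermediate output of its own decoder layer.
per_layer_logits = [head(intermediate[i]) for i, head in enumerate(heads)]
print(per_layer_logits[0].shape)
```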
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-01-2023 12:06:08 | 01-01-2023 12:06:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,958 | closed | Fix valid ratio for Deformable Detr | # What does this PR do?
I encountered unexpected behavior where a single-image batch and a multi-image batch return significantly different outputs.
I found that the reason is the function `get_valid_ratio`, which returns the ratio of the valid image size for each example in the batch (each batch is padded to the longest width and height it contains). Since the mask has the opposite meaning from the original repo (True for real pixels, False for padding), it should be used in the opposite way from the [original repo](https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/deformable_transformer.py#L117-L124) inside `get_valid_ratio`. Otherwise it returns the image ratio for the **pad width and height**, which is obviously erroneous.
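An illustrative version of the intended computation (a sketch with a boolean mask where `True` marks real pixels, not the exact code in `modeling_deformable_detr.py`):
```python
import torch

def get_valid_ratio(mask: torch.Tensor) -> torch.Tensor:
    """mask: (batch, height, width), True for real pixels, False for padding.
    The valid ratio must be computed from the True entries, not their inverse."""
    _, height, width = mask.shape
    valid_height = mask[:, :, 0].sum(dim=1).float()
    valid_width = mask[:, 0, :].sum(dim=1).float()
    return torch.stack([valid_width / width, valid_height / height], dim=-1)

# Example: a 60x80 image padded into a 100x100 canvas.
mask = torch.zeros(1, 100, 100, dtype=torch.bool)
mask[:, :60, :80] = True
print(get_valid_ratio(mask))  # tensor([[0.8000, 0.6000]]) -> real image ratio, not the padding's
```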
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
| 01-01-2023 11:55:20 | 01-01-2023 11:55:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @NielsRogge and @amyeroberts |
transformers | 20,957 | closed | Fix T5 docstring | # What does this PR do?
Fix docstring for `T5Stack` 's `deparallelize` method :
`PARALLELIZE_DOCSTRING` -> `DEPARALLELIZE_DOCSTRING`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-01-2023 08:07:00 | 01-01-2023 08:07:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,956 | closed | [docs] improve issue template for i18n | # What does this PR do?
The initial issue template (https://github.com/huggingface/transformers/pull/20199) includes minor typos and a closed PR link.
It makes it seem that the index page of a new language is already translated when it isn't.
Also some comment regarding what `langCode` or `langName` is would be helpful.
[x] Fixed typos.
[x] Removed irrelevant PR link.
[x] Explained what `langCode` or `langName` is more easily. (Replaced it rather)
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/20955
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
May you please review this PR, @sgugger? | 12-31-2022 12:00:53 | 12-31-2022 12:00:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20956). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,955 | closed | Typos and minor changes needed for i18n issue template | The initial issue template (https://github.com/huggingface/transformers/pull/20199) includes minor typos and a closed PR link.
It makes it seem that the index page of a new language is already translated when it isn't.
Also some comment regarding what `langCode` or `langName` is would be helpful.
[ ] Fix typos.
[ ] Remove irrelevant PR link.
[ ] Explain what `langCode` or `langName` is more easily. | 12-31-2022 11:55:34 | 12-31-2022 11:55:34 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,954 | closed | Can't Save TFHubertForCTC as Saved_model | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: I am running on colab
### Who can help?
@Rocketknight1 @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import Wav2Vec2Processor, TFHubertForCTC
model = TFHubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
model.save("test")
```
```
Downloading: 100%
1.38k/1.38k [00:00<00:00, 53.4kB/s]
Downloading: 100%
1.26G/1.26G [00:32<00:00, 72.2MB/s]
TFHubertForCTC has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tine this model, you need a GPU or a TPU
All model checkpoint layers were used when initializing TFHubertForCTC.
All the layers of TFHubertForCTC were initialized from the model checkpoint at facebook/hubert-large-ls960-ft.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFHubertForCTC for predictions without further training.
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
[<ipython-input-2-d87cdca07c35>](https://localhost:8080/#) in <module>
2
3 model = TFHubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
----> 4 model.save("test")
4 frames
[/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
[/usr/lib/python3.8/contextlib.py](https://localhost:8080/#) in __exit__(self, type, value, traceback)
118 if type is None:
119 try:
--> 120 next(self.gen)
121 except StopIteration:
122 return False
[/usr/local/lib/python3.8/dist-packages/transformers/models/hubert/modeling_tf_hubert.py](https://localhost:8080/#) in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1260 mask_time_indices = kwargs.get("mask_time_indices", None)
1261 if inputs["training"]:
-> 1262 hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices)
1263
1264 encoder_outputs = self.encoder(
[/usr/local/lib/python3.8/dist-packages/transformers/models/hubert/modeling_tf_hubert.py](https://localhost:8080/#) in _mask_hidden_states(self, hidden_states, mask_time_indices)
1191 elif self.config.mask_time_prob > 0:
1192 # generate indices & apply SpecAugment along time axis
-> 1193 mask_time_indices = _compute_mask_indices(
1194 (batch_size, sequence_length),
1195 mask_prob=self.config.mask_time_prob,
[/usr/local/lib/python3.8/dist-packages/transformers/models/hubert/modeling_tf_hubert.py](https://localhost:8080/#) in _compute_mask_indices(shape, mask_prob, mask_length, min_masks)
222 raise ValueError("`mask_length` has to be bigger than 0.")
223
--> 224 if mask_length > sequence_length:
225 raise ValueError(
226 f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and"
OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
### Expected behavior
This code should save the model as a tensorflow Saved_Model, this code works for version 4.22.2. Also by chanigng the value of sequence_length to some random value such as 100 in the soure code , it started working. | 12-31-2022 01:48:45 | 12-31-2022 01:48:45 | Hey @ahmedlone127 👋 I was able to reproduce locally on the latest version, will look into its causes!<|||||>@ahmedlone127: #20966 should fix it after it is merged 🤗
Two notes:
1 - You would have to install `transformers` from git (`pip install https://github.com/huggingface/transformers`)
2 - Your example script has some warnings after including the fix. There is a chance that the loaded model does not have the functionality you wish, you'd have to try it out :) |
transformers | 20,953 | closed | Add whisper converter for hf -> openai | ### Feature request
The inference pipeline in openai whisper has a couple of heuristics that aren't all covered in https://github.com/huggingface/transformers. Therefore, some users would like to fine-tune in huggingface and convert the model back to its original configuration.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/convert_openai_to_hf.py provides a script to convert from openai to hf, so we should also have a script to go the other way.
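A rough, heavily hedged starting point for such a script (this is not the linked `match_layers.py`; the `{"dims": ..., "model_state_dict": ...}` container and the `ModelDimensions` field names are assumptions to verify against the openai-whisper version in use, and the per-tensor key renaming, the actual hard part, is deliberately left out):
```python
import torch
from transformers import WhisperForConditionalGeneration

hf_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
cfg = hf_model.config

dims = {  # field names assumed to match openai/whisper's ModelDimensions
    "n_mels": cfg.num_mel_bins, "n_vocab": cfg.vocab_size,
    "n_audio_ctx": cfg.max_source_positions, "n_audio_state": cfg.d_model,
    "n_audio_head": cfg.encoder_attention_heads, "n_audio_layer": cfg.encoder_layers,
    "n_text_ctx": cfg.max_target_positions, "n_text_state": cfg.d_model,
    "n_text_head": cfg.decoder_attention_heads, "n_text_layer": cfg.decoder_layers,
}

hf_state_dict = hf_model.model.state_dict()
print(sorted(hf_state_dict)[:3])  # inspect HF key names in order to pair them with openai ones
# NOTE: the keys below are still HF-style; they must be renamed (see match_layers.py above)
# before openai-whisper can load the file.
torch.save({"dims": dims, "model_state_dict": hf_state_dict}, "whisper-hf-keys.pt")
```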
### Motivation
It is difficult to get the same transcription quality in the current transformers library compared to the openai transcribe function https://github.com/openai/whisper/blob/main/whisper/transcribe.py.
### Your contribution
there is an existing approach in https://github.com/luigisaetta/whisper-app/blob/main/match_layers.py by @luigisaetta | 12-30-2022 21:37:02 | 12-30-2022 21:37:02 | 👋 @ArthurZucker, who created the conversion script
<|||||>Hey! I am not entirely sure if we should have this in our repo or just link it in the readme, but I think a contributor already wrote a script so pinging @bayartsogt-ya here.<|||||>For visibility, whisper.cpp provides conversion from HF models into their format.
It's much high performing that OpenAI implementation. See: https://github.com/ggerganov/whisper.cpp/tree/master/models<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I've seen a lot of converter codes flying around. Also given that tranformers is catching up with respect to timestamp outputs there is not much need to use whisper itself for inference anymore<|||||>@faroit could you add a link to a conversion script? Just so if people end up here, they find what they are looking for. |
transformers | 20,952 | closed | Add generate kwargs to `AutomaticSpeechRecognitionPipeline` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi @Narsil 👋,
This is the new PR of https://github.com/huggingface/transformers/pull/20935, as the commit history in the old one is messed up
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-30-2022 19:31:24 | 12-30-2022 19:31:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for this modificaiton. @sgugger for final review |
transformers | 20,951 | closed | [trainer: `distributed_concat`] ensure `all_gather`'s inputs are contiguous | This PR fixes https://github.com/huggingface/transformers/issues/20942 where a user's code results in a non-contiguous tensor being passed to `all_gather` which fails with:
```
Traceback (most recent call last):
File "contiguous.py", line 83, in <module>
preds = torch.tensor(trainer.predict(eval_dataset)[0])
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 2894, in predict
output = eval_loop(
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 3024, in evaluation_loop
logits = self._nested_gather(logits)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 3140, in _nested_gather
tensors = distributed_concat(tensors)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 194, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2275, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be contiguous
```
the fix adds `.contiguous()` which will do nothing if the tensor is already contiguous and will make it contiguous if it is not.
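A minimal sketch of the idea (simplified relative to the real `distributed_concat` in `trainer_pt_utils.py`, which also handles nested structures and truncation):
```python
import torch
import torch.distributed as dist

def distributed_concat_sketch(tensor: torch.Tensor) -> torch.Tensor:
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    # .contiguous() is a no-op for tensors that are already contiguous, and fixes
    # the "Tensors must be contiguous" failure for sliced/transposed inputs.
    dist.all_gather(output_tensors, tensor.contiguous())
    return torch.cat(output_tensors, dim=0)
```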
Fixes: https://github.com/huggingface/transformers/issues/20942 | 12-30-2022 18:57:21 | 12-30-2022 18:57:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,950 | closed | Mask dimension expansion might be wrong | https://github.com/huggingface/transformers/blob/17292440c069118fbdb992b9a17da2098fab5b87/src/transformers/models/reformer/modeling_reformer.py#L845
I feel that the way the mask is expanded here might be wrong. More specifically, I think the mask needs to be repeated by the number of hashes and split into chunks before running the gather.
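Purely as a shape illustration of that suggestion (generic tensors, not Reformer code):
```python
import torch

batch, seq_len, num_hashes, chunk_len = 2, 16, 4, 4

mask = torch.rand(batch, seq_len) > 0.2          # (batch, seq_len), True = attend
mask = mask.repeat(1, num_hashes)                # (batch, num_hashes * seq_len)
# The sorted indices run over num_hashes * seq_len positions, so the mask must be
# repeated per hash round before it can be gathered alongside the key vectors.
sorted_idx = torch.argsort(torch.rand(batch, num_hashes * seq_len), dim=-1)
gathered = torch.gather(mask, 1, sorted_idx)
chunked = gathered.reshape(batch, -1, chunk_len)  # (batch, n_chunks, chunk_len)
print(chunked.shape)
```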
Please look into it and let me know if I misunderstood. Thank you so much! | 12-30-2022 18:32:45 | 12-30-2022 18:32:45 | cc @ArthurZucker <|||||>Hey! Do you happen to have found an error or any discrepancy ? I am not very familiar with this model but if you have a reproduction script or something to help me work on this, would be appreciated! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,949 | closed | Remove T5 dependency from mT5 model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes part of #19303
This PR removes T5 dependency from the mT5 model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-30-2022 18:26:06 | 12-30-2022 18:26:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger, I have made your suggested changes, please take a look. Thanks,<|||||>Thanks! You'll just need to add `MT5Stack` in [this list](https://github.com/huggingface/transformers/blob/56397471b454e8707b7865cfba0130f04a889592/utils/check_repo.py#L37) (along with T5Stack) to make all checks pass.<|||||>Failure is flaky so merging. Thanks for the work on this! |
transformers | 20,948 | closed | 🌐 [i18n-KO] Translated `installation.mdx` to Korean | # What does this PR do?
Translated the `installation.mdx` file of the documentation to Korean.
Thank you in advance for your review.
<!-- Remove if not applicable -->
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @ArthurZucker, @eunseojo may you please review this PR? | 12-30-2022 15:31:27 | 12-30-2022 15:31:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,947 | closed | `tf2` BERT checkpoint to `pytorch_model.bin` (with MLM head) | ### System Info
- `transformers` version: 4.25.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to convert a `tf2` BERT checkpoint to the `pytorch_model.bin` format in order to upload it on the Huggingface hub. I know that there are 2 scripts for doing this type of conversion (one for [tf1](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py) checkpoints and one for [tf2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py) checkpoints).
The problem is that my BERT checkpoint is in `tf2` format and I want to convert it to a pytorch model (encoder + heads).
I mention that I've followed this [guide](https://github.com/tensorflow/models/blob/master/official/nlp/docs/train.md#pre-train-a-bert-from-scratch) and used the [official ](https://github.com/tensorflow/models/tree/master/official/nlp) tensorflow scripts for obtaining the checkpoints.
### Expected behavior
I would like to export a `tf2` BERT checkpoint to a `pytorch` model, exporting the MLM head alongside the encoder. | 12-30-2022 12:43:26 | 12-30-2022 12:43:26 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.
Note that we do not provide conversion scripts to convert checkpoints obtained with other libraries, the ones exposed are to convert the checkpoints from their original implementation to the Hugging Face format. If you train a Hugging Face Tensorflow model, you'll then seamlessly be able to convert it to PyTorch/Flax. |
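For completeness, a rough sketch of that path (model and directory names here are placeholders):
```python
from transformers import BertForMaskedLM, TFBertForMaskedLM

# Train / fine-tune a TensorFlow model with transformers, then save it ...
tf_model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
tf_model.save_pretrained("my-tf-bert")

# ... and reload the same checkpoint as a PyTorch model (MLM head included).
pt_model = BertForMaskedLM.from_pretrained("my-tf-bert", from_tf=True)
pt_model.save_pretrained("my-pt-bert")  # writes pytorch_model.bin
```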
transformers | 20,946 | closed | [i18n-KO] Translated quicktour page to Korean | # What does this PR do?
Translated the `Quicktour.mdx` file of the documentation to Korean.
Thank you in advance for your review.
<!-- Remove if not applicable -->
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @ArthurZucker, @eunseojo may you please review this PR? | 12-30-2022 12:15:09 | 12-30-2022 12:15:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you so much for your review @sgugger and especially @ArthurZucker for your awesome proof-read! I have commited your suggestion. Although it is a bit late, happy lunar new year everyone! 🐇 🌕 <|||||>Happy new year! 🐰 |
transformers | 20,945 | closed | Fixing DistilBert error message | # What does this PR do?
Distilbert fix from [this thread](https://github.com/huggingface/transformers/pull/20933/commits/6f0282dd13d646b0f58d99a0c19377646efc2d55#r1059242097)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 12-30-2022 07:44:06 | 12-30-2022 07:44:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,944 | closed | Replace `past` with `past_key_values` | # What does this PR do?
The argument `past` was completely replaced with `past_key_values` thus this PR should fix any problem with `kwargs` being swallowed for old models in generation.
Related to #20347 | 12-30-2022 07:23:12 | 12-30-2022 07:23:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ok the failing tests were because I did not pull from main, were the `tf_utils` now uses the `generate_config`. LGTM the failing test seems to be unrelated<|||||>Yes, good to merge for me! |
transformers | 20,943 | closed | Incorrect type for TrainerArgs#report_to | ### System Info
A minor thing...
The type of the `TrainingArguments` `report_to` argument is defined here: https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L901 as `Optional[List[str]]`, but the [docstring](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L430) describes it as `Optional[str | List[str]]`. The declared default does not match the documented default either.
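A hedged sketch of what a consistent declaration could look like (illustrative only; the real field lives inside `TrainingArguments` in `training_args.py`):
```python
from dataclasses import dataclass, field
from typing import List, Optional, Union


@dataclass
class TrainingArguments:
    # Accept either a single integration name or a list of names, matching the docstring.
    report_to: Optional[Union[str, List[str]]] = field(
        default=None,
        metadata={"help": 'Integrations to report results and logs to ("all", "none", or a list of names).'},
    )
```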
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
NA | 12-30-2022 03:30:58 | 12-30-2022 03:30:58 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I would like to politely question the logic of auto-closing issues. Do you not want GitHub issues to be a register of things that should be fixed? Auto-closing just means this bug will live on to annoy another user at some point in the future, and maybe they'll report it too. |
transformers | 20,942 | closed | `RuntimeError: tensors must be contiguous` when predicting GPTJForClassification trainer | ### System Info
- `transformers` version: 4.21.2
- Platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 1
- Using distributed or parallel set-up in script?: huggingface transformers deepspeed
### Who can help?
@sgugger @stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import os
import torch
from torch.utils.data import Dataset, random_split
from transformers import AutoTokenizer, TrainingArguments, Trainer, AutoModelForCausalLM, IntervalStrategy, AutoModel, AutoConfig, PreTrainedModel, AutoModelForSequenceClassification
import json
import deepspeed
import argparse
from datasets import load_dataset
import wandb
from tqdm import tqdm
class PairwiseEvalDataset(Dataset):
def __init__(self, pairs, tokenizer, max_length):
self.input_ids = []
self.attn_masks = []
for pair in tqdm(pairs):
prompt = pair["prompt"]
chosen, rejected = pair["chosen"], pair["rejected"]
tok_chosen = tokenizer(prompt + chosen + "<|endoftext|>", return_tensors="pt")["input_ids"]
tok_rejected = tokenizer(prompt + rejected + "<|endoftext|>", return_tensors="pt")["input_ids"]
# Reject data with num tokens > max_length
if tok_chosen.shape[-1] <= max_length and tok_rejected.shape[-1] <= max_length:
chosen_encodings_dict = tokenizer(prompt + chosen + '<|endoftext|>', truncation=True,
max_length=max_length, padding="max_length", return_tensors="pt")
rejected_encodings_dict = tokenizer(prompt + rejected + '<|endoftext|>', truncation=True,
max_length=max_length, padding="max_length", return_tensors="pt")
# First append chosen then rejected
self.input_ids.append(chosen_encodings_dict['input_ids'])
self.attn_masks.append(chosen_encodings_dict['attention_mask'])
self.input_ids.append(rejected_encodings_dict['input_ids'])
self.attn_masks.append(rejected_encodings_dict['attention_mask'])
def __len__(self):
return len(self.input_ids)
def __getitem__(self, idx):
return self.input_ids[idx], self.attn_masks[idx]
def pairwise_data_collator(data):
if len(data[0]) == 4:
return {'input_ids': torch.cat([f[0] for f in data] + [f[2] for f in data]),
'attention_mask': torch.cat([f[1] for f in data] + [f[3] for f in data])}
elif len(data[0]) == 2:
return {'input_ids': torch.cat([f[0] for f in data]),
'attention_mask': torch.cat([f[1] for f in data])}
else:
raise ValueError("Invalid data format")
class PairwiseTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
# forward pass
PAD_ID = model.PAD_ID
assert len(inputs["input_ids"].shape) == 2
bs = inputs["input_ids"].shape[0] // 2
chosen = inputs["input_ids"][:bs]
rejected = inputs["input_ids"][bs:]
        outputs = model(**inputs)
        rewards = outputs.logits  # keep `outputs` defined so the `return_outputs=True` branch below works
chosen_rewards = rewards[:bs]
rejected_rewards = rewards[bs:]
loss = -torch.log(torch.sigmoid(chosen_rewards - rejected_rewards)).mean()
return (loss, outputs) if return_outputs else loss
def make_rm(model_name):
config = AutoConfig.from_pretrained(model_name)
config.num_labels = 1
reward_model = AutoModelForSequenceClassification.from_config(config)
return reward_model
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer.pad_token = tokenizer.eos_token
model = make_rm("Dahoas/gptj-sft-static")
data = load_dataset("Dahoas/rm-static")
max_length = 1024
eval_dataset = PairwiseEvalDataset(data["test"], tokenizer, max_length=max_length)
train_args = TrainingArguments(output_dir=".", per_device_eval_batch_size=1)
trainer = PairwiseTrainer(model=model, args=train_args, train_dataset=eval_dataset, data_collator=pairwise_data_collator)
# TODO(dahoas): Unsure how to compute metrics in trainer for non-classification task
preds = torch.tensor(trainer.predict(eval_dataset)[0])
```
with ds_config
```yaml
{
"train_batch_size": "auto",
"fp16": {
"enabled": "auto",
"min_loss_scale": 1,
"loss_scale_window": 1000,
"hysteresis": 2,
"initial_scale_power": 32
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "none"
},
"offload_optimizer": {
"device": "none"
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"contiguous_gradients": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": [
0.9,
0.999
],
"eps": 1e-08
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": "auto",
"warmup_num_steps": 100
}
}
}
```
Launch with `deepspeed --num_gpus 1 test.py --deepspeed ../configs/ds_configs/ds_config_gpt_j_z3.json`
I get the error `RuntimeError: Tensors must be contiguous`. The script runs as expected when replacing `gptj` with `gpt2`. I am using 1 A100 40gb gpu. Thank you for any insight.
### Expected behavior
trainer.predict should infer without error | 12-29-2022 22:47:51 | 12-29-2022 22:47:51 | Trying to run your example, @Dahoas, I get:
```
Traceback (most recent call last):
File "contiguous.py", line 8, in <module>
from rm_datasets import PairwiseDataset, PairwiseEvalDataset, pairwise_data_collator
ModuleNotFoundError: No module named 'rm_datasets'
```
fixed that by removing:
```
from rm_datasets import PairwiseDataset, PairwiseEvalDataset, pairwise_data_collator
```
and now it fails on:
```
Traceback (most recent call last):
File "contiguous.py", line 10, in <module>
from utils import freeze_bottom_causal_layers, load_yaml, make_rm
ModuleNotFoundError: No module named 'utils'
```<|||||>Sorry about that, I forgot to remove some extraneous imports. I've done so now. <|||||>Thank you for the corrections, @Dahoas - I'm able to reproduce the issue. Thank you for that.
Will follow up again once I get a chance to investigate the issue.<|||||>The full traceback missing from the report is:
```
***** Running Prediction *****
Num examples = 10206
Batch size = 1
Traceback (most recent call last):
File "contiguous.py", line 83, in <module>
preds = torch.tensor(trainer.predict(eval_dataset)[0])
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 2894, in predict
output = eval_loop(
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 3024, in evaluation_loop
logits = self._nested_gather(logits)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 3140, in _nested_gather
tensors = distributed_concat(tensors)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 194, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2275, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be contiguous
```<|||||>@Dahoas, this should fix the problem: https://github.com/huggingface/transformers/pull/20951
Thank you for making it super easy for us to identify the problem!<|||||>Excellent thank you very much! |
transformers | 20,941 | closed | Add document token classification pipeline | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-29-2022 19:40:12 | 12-29-2022 19:40:12 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20941). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,940 | closed | Update run_wav2vec2_pretraining_no_trainer.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18436 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a [link](https://github.com/huggingface/transformers/issues/18436) if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-29-2022 17:08:38 | 12-29-2022 17:08:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20940). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,939 | closed | Add X-MOD | Add the X-MOD models released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255).
## Implementation notes
- There are nine pre-trained models released in the fairseq repo: https://github.com/facebookresearch/fairseq/tree/main/examples/xmod. I will upload them under my own name and they can be moved to the [facebook](https://huggingface.co/facebook) organization after merging.
- The model code can be adapted from XLM-RoBERTa. Separate code is required due to the language adapters and the pre-norm.
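A hedged usage sketch of how the per-language adapters could be selected (the checkpoint name is taken from the list later in this thread; the `set_default_language` method and the `en_XX` language code are assumptions to verify against the merged docs):
```python
from transformers import AutoTokenizer, XmodModel

# X-MOD reuses the XLM-R vocabulary, so the XLM-R tokenizer is assumed here.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XmodModel.from_pretrained("jvamvas/xmod-base")

# Route the forward pass through the adapters of one language.
model.set_default_language("en_XX")
inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)
```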
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- text models: @ArthurZucker and @younesbelkada | 12-29-2022 16:24:31 | 12-29-2022 16:24:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This PR is now ready for review.
Uploaded models:
- https://huggingface.co/jvamvas/xmod-base
- https://huggingface.co/jvamvas/xmod-large-prenorm
- https://huggingface.co/jvamvas/xmod-base-13-125k
- https://huggingface.co/jvamvas/xmod-base-30-125k
- https://huggingface.co/jvamvas/xmod-base-30-195k
- https://huggingface.co/jvamvas/xmod-base-60-125k
- https://huggingface.co/jvamvas/xmod-base-60-265k
- https://huggingface.co/jvamvas/xmod-base-75-125k
- https://huggingface.co/jvamvas/xmod-base-75-269k<|||||>@younesbelkada Thank you for the swift code review, much appreciated!
I have now implemented your comments.<|||||>@sgugger Thanks for the review. Your suggestions have now been implemented<|||||>Can you also add the model to the `documentation_tests.txt` file to and run the doctests to be sure that they are valid?<|||||>@ArthurZucker Thanks for the code review. I have now implemented the changes you requested.
I agree that the models should be moved to the [facebook](https://huggingface.co/facebook) organization but do not have the permissions to do so.
<|||||>About moving the weights, I think I am in the org, and can help with that / ask to add you to transfer them 😉
Looks very good, almost there! 🚀 <|||||>Hi @ArthurZucker, thanks for pointing out that there are missing tests in this PR.
Unfortunately, I have not been able to figure out which tests are missing, exactly.
As of now, there are the following tests:
- `tests.models.xmod.test_modeling_xmod.XmodModelTest` – checks that there are no errors when calling the methods of `XmodFor...`, including `model.generate()`
- `tests.models.xmod.test_modeling_xmod.XmodModelIntegrationTest` – checks that the output of the pre-trained models [jvamvas/xmod-base](https://huggingface.co/jvamvas/xmod-base) and [jvamvas/xmod-large-prenorm](https://huggingface.co/jvamvas/xmod-large-prenorm) is identical to the corresponding Fairseq models.
Could you please clarify which tests need to be added still?<|||||>Hey! Thanks for bearing with me.
- What is there but should not: a pipeline test inside the `test_modeling` file
- The missing tests :
Something like what we have in opt , which will be part of the tests.models.xmod.test_modeling_xmod.XmodModelIntegrationTest. You can also have a `class XmodGenerationTest(unittest.TestCase):`
A sample test is the following.
```python
def test_batch_generation(self):
model_id = "facebook/opt-350m"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = OPTForCausalLM.from_pretrained(model_id)
model.to(torch_device)
tokenizer.padding_side = "left"
# use different length sentences to test batching
sentences = [
"Hello, my dog is a little",
"Today, I",
]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)
input_ids = inputs["input_ids"].to(torch_device)
outputs = model.generate(
input_ids=input_ids,
attention_mask=inputs["attention_mask"].to(torch_device),
)
inputs_non_padded = tokenizer(sentences[0], return_tensors="pt").input_ids.to(torch_device)
output_non_padded = model.generate(input_ids=inputs_non_padded)
num_paddings = inputs_non_padded.shape[-1] - inputs["attention_mask"][-1].long().sum().cpu().item()
inputs_padded = tokenizer(sentences[1], return_tensors="pt").input_ids.to(torch_device)
output_padded = model.generate(input_ids=inputs_padded, max_length=model.config.max_length - num_paddings)
batch_out_sentence = tokenizer.batch_decode(outputs, skip_special_tokens=True)
non_padded_sentence = tokenizer.decode(output_non_padded[0], skip_special_tokens=True)
padded_sentence = tokenizer.decode(output_padded[0], skip_special_tokens=True)
expected_output_sentence = [
"Hello, my dog is a little bit of a dork.\nI'm a little bit",
"Today, I was in the middle of a conversation with a friend about the",
]
self.assertListEqual(expected_output_sentence, batch_out_sentence)
self.assertListEqual(batch_out_sentence, [non_padded_sentence, padded_sentence])
```
Does that make sense? 😉
<|||||>The CI tests are broken but it is not your fault ! We are going to have to wait until the basic docker properly runs, but the added test looks good 😉 <|||||>hi @jvamvas !
For the code quality tests, you just need to rebase onto `main` and run:
```
pip install --upgrade -e .["quality"]
```
Then run the usual `make style` or `make fixup`<|||||>@younesbelkada Sorry about the bad rebase. On the plus side, the tests are now passing again :tada: <|||||>Yeah hahah. Do you think you can reset, then rebase instead of merge? 😉
<|||||>@ArthurZucker Done. The failing test is not related to this PR<|||||>Great work! Thanks for working on this model! 🥳 |
transformers | 20,938 | closed | fix levit timm conversion file | # What does this PR do?
Fixes conversion file `convert_levit_timm_to_pytorch.py` for levit
Fixes # (issue)
https://github.com/huggingface/transformers/issues/20937
## Who can review?
@NielsRogge
| 12-29-2022 12:24:17 | 12-29-2022 12:24:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for your contribution! |
transformers | 20,937 | closed | Levit conversion file argparser bug | ### Who can help?
@NielsRogge
### Reproduction
In the conversion file for the `levit` model (`convert_levit_timm_to_pytorch.py`) there is a wrong usage of `argparse.ArgumentParser()` for the `--push_to_hub` argument. Currently it is handled with the type `bool`, but the `bool` type does not work as expected with argparse, so the current code always produces `push_to_hub=True` no matter whether we pass `--push_to_hub=False` or `--push_to_hub=True` on the CLI. A more conventional way of handling boolean values with argparse is to use two arguments. As an example: `--no-push_to_hub` to store `False` values and `--push_to_hub` to store `True` values.
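A minimal sketch of the more conventional pattern (the flag name follows the script; everything else is illustrative):
```python
import argparse

parser = argparse.ArgumentParser()
# A bare flag: defaults to False, becomes True only when --push_to_hub is passed.
parser.add_argument("--push_to_hub", action="store_true", help="Push the converted model to the hub.")
# Alternatively, on Python 3.9+, paired --push_to_hub / --no-push_to_hub flags:
# parser.add_argument("--push_to_hub", action=argparse.BooleanOptionalAction, default=True)
args = parser.parse_args()
```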
### Expected behavior
Correct usage of the `argparser`. | 12-29-2022 12:24:01 | 12-29-2022 12:24:01 | Closing since fix was merged. |
transformers | 20,936 | closed | Fix error message in `WhisperFeatureExtractor` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
cc @ArthurZucker :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-29-2022 12:22:56 | 12-29-2022 12:22:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,935 | closed | Add generate kwargs to `AutomaticSpeechRecognitionPipeline` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi @Narsil 👋,
In this PR, I tried to add generate arguments to `AutomaticSpeechRecognitionPipeline` in order to run pipeline with seq2seq models using beam search, contrastive search, etc. I followed the style in [`TextGenerationPipeline`](https://github.com/huggingface/transformers/blob/8637316e5e94ba0a2493e5df7846f2f23f46eaef/src/transformers/pipelines/text2text_generation.py#L73).
```python
import torch
from transformers import pipeline
pipe = pipeline(model="openai/whisper-base", device=0, torch_dtype=torch.float16)
pipe("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac", max_new_tokens=5)
# {'text': ' He hoped'}
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-29-2022 11:02:12 | 12-29-2022 11:02:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil it's indeed better this way. Thanks for the explanation!<|||||>Hi @Narsil,
Some tests of `ctc_with_lm` models failed. I think we could
1. Lift `decoder` in `__init__` as an individual argument
2. Add `**kwargs` into `_sanitize_parameters`
Personally I prefer the 1st one since the other one may introduce some silent errors. What's your opinion? <|||||>> Personally I prefer the 1st one since the other one may introduce some silent errors. What's your opinion?
In general I would agree with you. Pipelines accepting so many parameters I would tend to keep it simple, and maybe just change line 183
```diff
- self.decoder = kwargs["decoder"]
+ self.decoder = kwargs.pop("decoder")
```
This would be just so the signature is kept at a minimum (the docstring should be good) and avoiding accepting `decoder` as a positioned arguments instead of a keyword one. (I know we can do that within the signature, but it does complexify the docs, notably this part: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline
<|||||>This is the sort of function complexity that I think is more detrimental than helping unfortunately: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/text_generation#transformers.GenerationMixin.generate
<|||||>> In general I would agree with you. Pipelines accepting so many parameters I would tend to keep it simple, and maybe just change line 183
>
> ```diff
> - self.decoder = kwargs["decoder"]
> + self.decoder = kwargs.pop("decoder")
> ```
The error occurs in the line 173 where `_sanitize_parameters` is called in parent :(<|||||>> The error occurs in the line 173 where `_sanitize_parameters` is called in parent :(
Ah so it happens before then, let's do it you way then
does
```
__init__(self, ....,, *args, *, decoder, **kwargs)
```
work ?
(Try and force to disable positional argument for `decoder` ?
<|||||>> does
>
> ```
> __init__(self, ....,, *args, *, decoder, **kwargs)
> ```
>
> work ? (Try and force to disable positional argument for `decoder` ?
No it's a syntax error :(
Can we do this ?
```diff
- def __init__(self, feature_extractor: Union["SequenceFeatureExtractor", str], *args, **kwargs):
+ def __init__(self, feature_extractor: Union["SequenceFeatureExtractor", str], decoder: Optional[Union["BeamSearchDecoderCTC", str]] = None, *args, **kwargs):
```<|||||>This will interpret `AutomaticSpeecRecognitionPipeline(feature_extractor, model)` and interpret `model` as `decoder` which will lead to confusing errors.
Can you try :
```python
+ def __init__(self, feature_extractor: Union["SequenceFeatureExtractor", str], *, decoder: Optional[Union["BeamSearchDecoderCTC", str]] = None, **kwargs):
```
Maybe ?<|||||>> Can you try :
>
> ```python
> + def __init__(self, feature_extractor: Union["SequenceFeatureExtractor", str], *, decoder: Optional[Union["BeamSearchDecoderCTC", str]] = None, **kwargs):
> ```
>
> Maybe ?
No we need `*args` for the line 173<|||||>> > Can you try :
> > ```python
> > + def __init__(self, feature_extractor: Union["SequenceFeatureExtractor", str], *, decoder: Optional[Union["BeamSearchDecoderCTC", str]] = None, **kwargs):
> > ```
> >
> >
> >
> >
> >
> > Maybe ?
>
> No we need `*args` for the line 173
Remove it there too. <|||||>@Narsil Oups, the commit history seems to be messed up. Let me create a new one!<|||||>Closed as the other one is cleaner https://github.com/huggingface/transformers/pull/20952 |
transformers | 20,934 | closed | Mismatched outputs from encoders of `transformers` and `whisper` | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.27
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
import whisper
import transformers as ppb
x = torch.randn(1, 80, 3000) # random input feature
enc1 = ppb.models.whisper.modeling_whisper.WhisperEncoder.from_pretrained('openai/whisper-small')
enc2 = whisper.load_model('small').encoder
y1 = enc1(x)
y2 = enc2(x)
print(torch.sum(abs(y1.last_hidden_state - y2))) # expected 0, but got > 1e6
```
### Expected behavior
It's expected the outputs from encoders of `transformers` and `whisper` are same, but they're different. It seems that there're some weights in `transformers` `from_pretrained` are randomized, may I ask how to solve this problem? | 12-29-2022 08:32:18 | 12-29-2022 08:32:18 | I think it's mismatched because I used `WhisperEncoder`, but the checkpoints' keys are with prefix of `encoder.`. When I revised the above code as
```python
enc1 = ppb.WhisperModel.from_pretrained('openai/whisper-small').encoder
```
the error drops to ~1e3, but the difference between each element is still about `1e-4`. May I ask why?<|||||>I've checked the weights from `huggingface` and `openai` and they're the same. I think there may be some difference between the architectures; could anyone help check?<|||||>Hey @JinchaoLove! Thanks for opening this issue! I've answered on the HF Hub where the same question was posted: https://huggingface.co/openai/whisper-small/discussions/9#63b6ed4438471ff4c0818f19
Copying the response here for reference:
The proposed method of loading the WhisperEncoder `from_pretrained` is resulting in none of the pre-trained weights being loaded:
```python
import transformers as ppb
enc1 = ppb.models.whisper.modeling_whisper.WhisperEncoder.from_pretrained('openai/whisper-small')
```
<details>
<summary> Warning message: </summary>
```
Some weights of WhisperEncoder were not initialized from the model checkpoint at openai/whisper-small and are newly initialized: ['model.layers.3.self_attn.v_proj.weight', 'model.layers.6.self_attn_layer_norm.weight', 'model.layers.0.self_attn_layer_norm.bias', 'model.layers.3.final_layer_norm.bias', 'model.layers.2.fc2.weight', 'model.layers.9.fc2.bias', 'model.layers.6.self_attn_layer_norm.bias', 'model.layers.6.self_attn.v_proj.bias', 'model.layers.10.self_attn.q_proj.bias', 'model.layers.5.self_attn.k_proj.weight', 'model.layers.5.self_attn.q_proj.weight', 'model.layers.9.fc1.weight', 'model.layers.1.final_layer_norm.weight', 'model.layers.1.self_attn.q_proj.bias', 'model.layers.9.fc1.bias', 'model.layers.1.self_attn.q_proj.weight', 'model.conv2.weight', 'model.layers.3.self_attn.q_proj.weight', 'model.layers.11.self_attn.v_proj.bias', 'model.layers.3.final_layer_norm.weight', 'model.layers.2.self_attn.q_proj.weight', 'model.layers.3.self_attn.k_proj.weight', 'model.layers.4.self_attn.out_proj.weight', 'model.layers.11.final_layer_norm.bias', 'model.layers.8.self_attn.k_proj.weight', 'model.layers.8.final_layer_norm.bias', 'model.layers.4.self_attn.k_proj.weight', 'model.layers.1.fc1.weight', 'model.layers.5.fc2.bias', 'model.layers.5.self_attn.v_proj.weight', 'model.layers.8.self_attn.out_proj.bias', 'model.layers.8.self_attn.q_proj.weight', 'model.layers.6.final_layer_norm.bias', 'model.layers.10.fc1.weight', 'model.layers.11.self_attn_layer_norm.bias', 'model.layers.6.fc1.weight', 'model.layers.11.self_attn.v_proj.weight', 'model.layers.10.final_layer_norm.weight', 'model.layers.7.self_attn.v_proj.bias', 'model.layers.1.self_attn_layer_norm.weight', 'model.layers.3.fc2.weight', 'model.layers.2.self_attn.k_proj.weight', 'model.conv2.bias', 'model.layers.11.self_attn.out_proj.bias', 'model.layers.11.fc2.weight', 'model.layers.0.fc1.bias', 'model.layer_norm.bias', 'model.layers.10.self_attn_layer_norm.weight', 'model.layers.5.fc1.weight', 'model.layers.10.self_attn.k_proj.weight', 'model.layers.1.self_attn.v_proj.weight', 'model.layers.5.self_attn.out_proj.weight', 'model.layers.3.self_attn_layer_norm.bias', 'model.layers.3.fc1.weight', 'model.layers.1.self_attn.out_proj.weight', 'model.layers.4.final_layer_norm.bias', 'model.conv1.bias', 'model.layers.5.self_attn.out_proj.bias', 'model.layers.4.self_attn.out_proj.bias', 'model.layers.5.fc2.weight', 'model.layers.6.self_attn.out_proj.bias', 'model.layers.4.final_layer_norm.weight', 'model.layers.10.fc2.weight', 'model.layers.4.self_attn.q_proj.weight', 'model.layers.4.fc2.weight', 'model.layers.2.self_attn.q_proj.bias', 'model.layers.4.fc1.weight', 'model.layers.6.self_attn.q_proj.weight', 'model.layers.6.final_layer_norm.weight', 'model.layers.9.self_attn.q_proj.bias', 'model.layers.8.self_attn.v_proj.weight', 'model.layers.0.fc1.weight', 'model.layers.2.self_attn.v_proj.weight', 'model.layers.7.self_attn.k_proj.weight', 'model.layers.9.self_attn.q_proj.weight', 'model.layers.4.fc1.bias', 'model.layers.7.self_attn.out_proj.weight', 'model.layers.11.fc2.bias', 'model.layers.2.self_attn_layer_norm.bias', 'model.layers.5.fc1.bias', 'model.layers.9.self_attn_layer_norm.bias', 'model.layers.6.fc1.bias', 'model.layers.9.self_attn.v_proj.bias', 'model.layers.6.fc2.weight', 'model.layers.11.final_layer_norm.weight', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.0.fc2.weight', 'model.layers.7.final_layer_norm.weight', 'model.layers.10.self_attn.out_proj.weight', 'model.layers.5.self_attn.q_proj.bias', 
'model.layers.10.self_attn.out_proj.bias', 'model.layers.11.fc1.bias', 'model.layers.2.fc1.weight', 'model.layers.2.final_layer_norm.weight', 'model.layers.7.final_layer_norm.bias', 'model.layers.3.self_attn.v_proj.bias', 'model.layers.4.self_attn.q_proj.bias', 'model.layers.1.self_attn.k_proj.weight', 'model.layers.8.fc2.weight', 'model.layers.11.self_attn.k_proj.weight', 'model.layers.1.final_layer_norm.bias', 'model.layers.2.self_attn_layer_norm.weight', 'model.layers.5.final_layer_norm.weight', 'model.layers.8.self_attn_layer_norm.bias', 'model.layers.7.self_attn.q_proj.bias', 'model.layers.10.self_attn_layer_norm.bias', 'model.layers.5.self_attn.v_proj.bias', 'model.layers.10.self_attn.v_proj.weight', 'model.layers.3.self_attn.out_proj.bias', 'model.layers.9.final_layer_norm.bias', 'model.conv1.weight', 'model.layers.10.fc1.bias', 'model.layers.9.self_attn.k_proj.weight', 'model.layers.1.fc2.weight', 'model.layers.6.self_attn.k_proj.weight', 'model.layers.3.self_attn.out_proj.weight', 'model.layers.8.self_attn.out_proj.weight', 'model.layers.3.fc2.bias', 'model.layers.6.self_attn.q_proj.bias', 'model.layers.7.self_attn.out_proj.bias', 'model.layers.3.fc1.bias', 'model.layers.10.final_layer_norm.bias', 'model.layers.9.final_layer_norm.weight', 'model.layers.1.fc2.bias', 'model.layers.1.fc1.bias', 'model.layers.9.fc2.weight', 'model.layers.7.fc2.bias', 'model.layers.6.self_attn.v_proj.weight', 'model.layer_norm.weight', 'model.layers.8.fc1.bias', 'model.layers.8.self_attn_layer_norm.weight', 'model.layers.7.fc1.weight', 'model.layers.2.self_attn.out_proj.bias', 'model.layers.8.self_attn.v_proj.bias', 'model.layers.6.fc2.bias', 'model.layers.0.fc2.bias', 'model.layers.9.self_attn.v_proj.weight', 'model.layers.8.final_layer_norm.weight', 'model.layers.11.self_attn.q_proj.bias', 'model.layers.11.self_attn.q_proj.weight', 'model.layers.0.self_attn.q_proj.bias', 'model.layers.0.final_layer_norm.weight', 'model.layers.0.self_attn.v_proj.weight', 'model.layers.8.fc2.bias', 'model.layers.0.self_attn_layer_norm.weight', 'model.layers.10.self_attn.q_proj.weight', 'model.layers.7.fc2.weight', 'model.layers.4.self_attn_layer_norm.weight', 'model.layers.6.self_attn.out_proj.weight', 'model.layers.11.self_attn_layer_norm.weight', 'model.layers.5.self_attn_layer_norm.weight', 'model.layers.4.self_attn.v_proj.bias', 'model.layers.5.final_layer_norm.bias', 'model.layers.4.fc2.bias', 'model.layers.9.self_attn.out_proj.weight', 'model.layers.0.self_attn.q_proj.weight', 'model.layers.4.self_attn_layer_norm.bias', 'model.layers.10.fc2.bias', 'model.layers.7.self_attn.q_proj.weight', 'model.layers.0.self_attn.out_proj.bias', 'model.layers.2.self_attn.out_proj.weight', 'model.layers.1.self_attn.out_proj.bias', 'model.layers.7.fc1.bias', 'model.layers.2.fc1.bias', 'model.layers.8.self_attn.q_proj.bias', 'model.layers.10.self_attn.v_proj.bias', 'model.layers.2.fc2.bias', 'model.layers.7.self_attn_layer_norm.bias', 'model.layers.11.self_attn.out_proj.weight', 'model.layers.4.self_attn.v_proj.weight', 'model.layers.2.final_layer_norm.bias', 'model.layers.11.fc1.weight', 'model.layers.3.self_attn_layer_norm.weight', 'model.layers.0.self_attn.out_proj.weight', 'model.layers.7.self_attn.v_proj.weight', 'model.layers.9.self_attn.out_proj.bias', 'model.layers.2.self_attn.v_proj.bias', 'model.layers.9.self_attn_layer_norm.weight', 'model.layers.3.self_attn.q_proj.bias', 'model.layers.1.self_attn_layer_norm.bias', 'model.layers.0.final_layer_norm.bias', 'model.layers.1.self_attn.v_proj.bias', 
'model.layers.0.self_attn.v_proj.bias', 'model.layers.7.self_attn_layer_norm.weight', 'model.embed_positions.weight', 'model.layers.5.self_attn_layer_norm.bias', 'model.layers.8.fc1.weight']
```
</details>
Instead, we should load all of the encoder-decoder weights using `WhisperForConditionalGeneration` and then extract the encoder module. This is the same logic we are using for the OpenAI implementation. When we do so, the maximum element-wise difference between the HF implementation and the OpenAI implementation is `8.5e-5` (to within numerical precision):
```python
import torch
from transformers import WhisperForConditionalGeneration
import whisper
x = torch.randn(1, 80, 3000) # random input feature
enc1 = WhisperForConditionalGeneration.from_pretrained('openai/whisper-small').model.encoder
enc2 = whisper.load_model('small').encoder
with torch.no_grad():
y1 = enc1(x)
y2 = enc2(x)
print(torch.max(abs(y1.last_hidden_state - y2)))
```
**Print Output:**
```
tensor(8.5831e-05)
```<|||||>@sanchit-gandhi That works for me, many thanks! |
transformers | 20,933 | closed | Remove Bert tokenizer dependency from DistillBert (slow/fast) tokenizers | Hi @sgugger,
Fixes https://github.com/huggingface/transformers/issues/19303
- The `BertTokenizer` dependency has been removed from `DistillBerTokenizer`
- The `BertTokenizerFast` dependency has been removed from `DistillBerTokenizerFast`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-29-2022 07:18:02 | 12-29-2022 07:18:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,932 | closed | OpenAI/Whisper-large-v2 - Transcription & ONNX inference | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.10.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
[OpenAI-Whisper_ONNX_Implementation.zip](https://github.com/huggingface/transformers/files/10317927/OpenAI-Whisper_ONNX_Implementation.zip)
### Expected behavior
Audio is not transcribed/translated past the 30-second mark. If the method mentioned in https://huggingface.co/openai/whisper-large-v2/discussions/7#6398809b11095028d87b16a2 is followed, there is no issue, but if the method mentioned in the model card is followed, the above error arises. I need this to be solved because the ONNX version requires the inputs ('input_features', 'decoder_input_ids') in the form of arrays.
Also, if I use the model.onnx (as shown in the attached zip file) for prediction, it returns an array of float values. Can you help with decoding those values into the transcribed/translated text?
| 12-29-2022 06:35:16 | 12-29-2022 06:35:16 | Hey @Kirankumar2609! Thanks for linking the reproducible code snippet!
The fact that audios are not transcribed beyond the 30s mark is not a bug with the Whisper model. Rather, it's a pre-defined characteristic of the system. OpenAI designed Whisper such that all input audio sequences are padded / truncated to 30s prior to being passed to the model. This way, the model is only ever required to deal with inputs of fixed length (30s).
<details>
<summary>
Excerpt from [blog post](https://huggingface.co/blog/fine-tune-whisper#load-whisperfeatureextractor):
</summary>
> Samples shorter than 30s are padded to 30s by appending zeros to the end of the sequence (zeros in an audio signal corresponding to no signal or silence). Samples longer than 30s are truncated to 30s. Since all elements in the batch are padded/truncated to a maximum length in the input space, we don't require an attention mask when forwarding the audio inputs to the Whisper model. Whisper is unique in this regard - with most audio models, you can expect to provide an attention mask that details where sequences have been padded, and thus where they should be ignored in the self-attention mechanism. Whisper is trained to operate without an attention mask and infer directly from the speech signals where to ignore the inputs.
</details>
So, when audio samples longer than 30s are not transcribed, it's due to the fact that the audio inputs are being truncated to 30s. This is somewhat suboptimal for a generalisable ASR system: ideally, we want a system that can handle audio inputs of arbitrary length! This is where `pipeline` comes in. `pipeline` chunks the audio samples into 30s blocks, generates the transcriptions for each chunk, and uses a novel 'stitching' algorithm to piece the transcriptions together. This way, we can transcribe audios of arbitrary length!
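In practice, that looks something like the snippet below (illustrative settings; the audio path is a placeholder for any file, including ones longer than 30s):
```python
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,  # chunk long audio into 30s windows and stitch the transcriptions together
)
result = pipe("path/to/long_audio.wav")  # placeholder path
print(result["text"])
```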
The code snippet you've provided performs **one forward pass** of the ONNX Whisper model. That is why you're required to pass the `decoder_input_ids` as well as the `input_features`. For **auto-regressive generation**, we only require the `input_features`. We perform one forward pass of the encoder and auto-regressively generate using the decoder. Could you ask in the [optimum](https://github.com/huggingface/optimum) repository if you require help getting this to work with the exported ONNX model please? The corresponding transformers code can be found here: https://github.com/openai/whisper/discussions/654<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,931 | closed | Problem with MBartForConditionalGeneration and MBart50TokenizerFast | ### System Info
sys info:
- `transformers` version: 4.25.1
- Platform: Linux-5.4.17-2136.308.9.el8uek.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
sentence_kz = "сәлем досым"
tokenizer.src_lang = "kk_KZ"
encoded_kk = tokenizer(sentence_kz, return_tensors="pt")
generated_tokens = model.generate(
    **encoded_kk,
    forced_bos_token_id=tokenizer.lang_code_to_id["ru_RU"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
### Expected behavior
I used the official code example from https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt
I just changed the language codes (hi_IN -> kk_KZ, fr_XX -> ru_RU), expecting Russian output, but I get English instead of Russian.

| 12-29-2022 05:56:28 | 12-29-2022 05:56:28 | cc @ArthurZucker <|||||>Maybe due to this ? https://github.com/huggingface/transformers/issues/20610#issuecomment-1407704129<|||||>The model should be predicting in Russian, and the only reason it is not is probably because `kk` language has a very small dataset size and was thus not trained a lot. The model and the generation process work as they function as expected for lanugages that have a bigger dataset.
For example if you use `ja_XX` you will get `['世界の友達']` which means friend of the world (Sekai no tomodachi).
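For completeness, a sketch of the full call with the Japanese target, using the same checkpoint and input as in the reproduction above (only the forced BOS token changes):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

tokenizer.src_lang = "kk_KZ"
encoded_kk = tokenizer("сәлем досым", return_tensors="pt")
generated_tokens = model.generate(
    **encoded_kk,
    forced_bos_token_id=tokenizer.lang_code_to_id["ja_XX"],  # target Japanese instead of Russian
)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```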
It comes down to the fine-tuning and which language pairs were trained together. |
transformers | 20,930 | closed | opt-13b checkpoint missing final_layer_norm weights in pretrained checkpoint | ### System Info
Hello. Apologies if this has been brought up before, but it seems that at least the Flax version of opt-13b is missing weights for the following when using `from_pretrained`:
- model/decoder/final_layer_norm/bias
- model/decoder/final_layer_norm/scale
Every other version of the model I've tested didn't have this issue. You can see the message below.

### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using a modified version of `run_clm_flax.py` under `examples/flax/language-modelling`, although my modification below should not make an impact and the default script should have the same issue.
`config = AutoConfig.from_pretrained('facebook/opt-13b')`
`model, params = FlaxAutoModelForCausalLM.from_pretrained('facebook/opt-13b', config=config, seed=42, _do_init=False)`
`params = model.init_weights(model.key, model.input_shape, params).unfreeze()` # inside a jit function
### Expected behavior
You will notice the message above which shows missing weights for only the last layer of this specific model. | 12-29-2022 04:44:48 | 12-29-2022 04:44:48 | Hey, there is indeed an issue with the `FlaxWeights`, if you try using `from_pt = True` you should have the correct layers loaded. There are actually 2 different checkpoints online, one is sharded and the other is not. This is pretty strange 😓 <|||||>Hi Arthur, thanks for the help! Managed to convert PyTorch weights and all of them are there!
Although, for anyone else using `_do_init = False` in `from_pretrained`, I had to convert and save to Flax weights first to a local dir as the `from_pt = True` flag will attempt to internally call `model.params`. |
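For anyone landing here later, a minimal sketch of that workaround (the local directory name is just an example, and the conversion is memory-heavy for a 13B model):
```python
from transformers import FlaxAutoModelForCausalLM

# load from the PyTorch checkpoint, which does contain the final_layer_norm weights
model = FlaxAutoModelForCausalLM.from_pretrained("facebook/opt-13b", from_pt=True)

# save Flax weights locally once, so that later loads with `_do_init=False` work
model.save_pretrained("./opt-13b-flax")
model, params = FlaxAutoModelForCausalLM.from_pretrained("./opt-13b-flax", _do_init=False)
```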
transformers | 20,929 | closed | Remove non-breaking spaces | # What does this PR do?
This PR removes non-breaking spaces in various places in the codebase. The first commit was from when I first found the problem over a year ago, and the second commit fixes all other non-breaking spaces in the repository as of now.
I'm not sure of a good check to prevent this going forward, but it at least fixes the problem as it exists now.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Library:
- trainer: @sgugger
Documentation: @sgugger and @stevhliu | 12-28-2022 19:54:29 | 12-28-2022 19:54:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,928 | closed | Convert assertions to exceptions in some examples | # What does this PR do?
For #12789.
This PR converts assertions to exceptions in some example files in `/examples/pytorch/language-modeling/`. I found this commit locally from over a year ago, so new scripts have been added since it was created.
## Who can review?
Maintained examples (not research project or legacy):
- PyTorch: @sgugger | 12-28-2022 19:49:52 | 12-28-2022 19:49:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The issue was aimed at the modules in the library, not the examples. I'd keep the examples as they are for now.<|||||>@sgugger, I was not aware of that! Can you mention that prominently in the issue (#12789) so others know to avoid them?
Also, I modified the initial comment because I had forgotten to link the issue in the first place, even though I had made a note to before I created this PR.<|||||>Edited my comments on the issue to reflect this.<|||||>Thanks! I'm closing PR now, then. |
transformers | 20,927 | closed | Generate: TF XLA beam sample | # What does this PR do?
## Context
This is a 2-in-1 PR. While working on adding `generation_config` to TF's `generate`, I noticed that I would have twice the work. This is because on `main` we have `generate()` and `_generate()`, where the former is the legacy version that calls the latter except for beam sample (which is not XLA compatible before this PR). As such, this PR completes the transition to XLA and removes most legacy code, simplifying the transition to the generation config.
## Changes
1. Replaces `generate()` by `_generate()`, which was the original goal of the XLA refactor (and will make my life easier);
2. Deletes many private functions that are no longer reached;
3. Updates RAG accordingly (from the old beam search to the XLA-compatible beam search), slow tests are passing;
4. ⚠️ Adds beam sample on the existing `beam_search` function. Unlike PT implementation, this is NOT a stand-alone function. This was a deliberate decision to decrease maintenance costs, as I don't think it would be wise to add ~500 lines of code for a functionality that is infrequently used, and can be solved with a few extra lines. | 12-28-2022 18:28:53 | 12-28-2022 18:28:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,926 | closed | Adds type checking to PreTrainedConfig. | # What does this PR do?
Fixes [#20915](https://github.com/huggingface/transformers/issues/20915)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-28-2022 16:24:52 | 12-28-2022 16:24:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Mmm, the tests are all failing for weird reasons. It seems the branch you are using for the PR is pretty outdated compared to main. Could you do a quick rebase?<|||||>Huh... sorry, I thought I corrected that before pushing. I'll do a rebase and get things squared away. <|||||>Thanks a lot! |
transformers | 20,925 | closed | Add: doc page for the object detection task | This is a PR for the [#20805](https://github.com/huggingface/transformers/issues/20805) issue.
The guide has content and working code examples for:
* Introduction
* Loading CPPE-5 dataset from Hub
* Preprocessing both images and annotations. Images are augmented, and annotations are reformatted to be in the format DETR expects
* Training with Trainer
* Evaluation
*
Inference | 12-28-2022 15:53:28 | 12-28-2022 15:53:28 | This PR replaces the https://github.com/huggingface/transformers/pull/20874 <|||||>To preserve the discussion, here's @sayakpaul 's comment relevant to the CI issue: https://github.com/huggingface/transformers/pull/20874#issuecomment-1366717321<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> * Make sure to split long code samples in several smaller ones with text introducing each step. The evaluation in particular is too long and may be too technical for this guide.
I agree that the evaluation part is a bit too technical, but unfortunately at the moment, there's no simpler way (hopefully soon there will be an easier way to have coco evaluation metrics). But I can certainly split it up somewhat.
<|||||>Thank you for the feedback @sayakpaul !
> Do you have a Colab Notebook where this code has been tested (preferably with outputs)?
Yes, I do. Here's my playground notebook with outputs. All the code examples are working. The only issue is that I didn't really pay too much attention to the hyperparameters, so the resulting model isn't very good. It would probably improve with more epochs and better learning rate decay. But I ran out of free GPU in Colab today :D
https://colab.research.google.com/drive/1wPTZJajGRhhh00Lnz7-8E5qE1x_qL1Of#scrollTo=5w2lsRRYPXDN
<|||||>@NielsRogge note that finetune/fine-tune has no decided standard in the doc/transformers and both are used equally. |
transformers | 20,924 | closed | Getting different result with different batch size and sequence length | Here is the code:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
from transformers import BertTokenizer, BertModel
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
model = model.to(device)
model.eval()
text1 = [
    "Replace me by any text you'd like.",
    # "The weather is great!"
]
encoded_input1 = tokenizer(text1, return_tensors='pt', padding=True)
encoded_input1 = encoded_input1.to(device)
with torch.no_grad():
    output1 = model(**encoded_input1).last_hidden_state
cls1 = output1[:, 0, :]
text2 = [
    "Replace me by any text you'd like.",
    "The weather is great!"
    # "The result changed with different batch size and sequence length"
]
encoded_input2 = tokenizer(text2, return_tensors='pt', padding=True)
encoded_input2 = encoded_input2.to(device)
with torch.no_grad():
    output2 = model(**encoded_input2).last_hidden_state
cls2 = output2[:, 0, :]
text3 = [
    "Replace me by any text you'd like.",
    # "The weather is great!"
    "The result is changed with different batch size or sequence length."
]
encoded_input3 = tokenizer(text3, return_tensors='pt', padding=True)
encoded_input3 = encoded_input3.to(device)
with torch.no_grad():
    output3 = model(**encoded_input3).last_hidden_state
cls3 = output3[:, 0, :]
print(torch.equal(cls1[0], cls2[0]))
print(torch.equal(cls1[0], cls3[0]))
print(torch.equal(cls2[0], cls3[0]))
```
All of these results are False. Is it as expected? | 12-28-2022 12:49:23 | 12-28-2022 12:49:23 | Hey @JaheimLee 👋
Yes, minor fluctuations are to be expected. Their causes include (but are not limited to) the numerical masking from the attention mask and the order of operations in fp32 computations.
Because of these fluctuations, we typically consider results correct if they are within 1e-5 of each other, in examples like yours :)<|||||>Ok, thanks!
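For reference, a small sketch of that tolerance check on synthetic tensors; the same `torch.allclose(..., atol=1e-5)` call can be applied to `cls1[0]` and `cls2[0]` from the script above:
```python
import torch

a = torch.randn(768)
b = a + 1e-6 * torch.randn(768)  # simulate tiny numerical noise between two runs

print(torch.equal(a, b))                # False: bit-exact equality is too strict
print(torch.allclose(a, b, atol=1e-5))  # True: equal within the usual fp32 tolerance
```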
transformers | 20,923 | closed | Encoding parameter for AutoModel.from_pretrained() module. | ### Feature request
`transformers.AutoModel.from_pretrained()` allows loading pretrained models from local directories as well. The path to the local weight files is an argument of that function, and these files can have different encodings.
A parameter called 'encoding' could be added to the parameter list, similar to
`pandas.read_csv('path/to/csv/file', encoding='utf-8')`,
which takes the encoding type as a parameter. This proposed encoding parameter should only apply when loading local files and should be ignored when loading models from other sources (like the HF Hub).
### Motivation
I face Unicode-Decode error while loading pickled model files from local directory. To overcome this issue, the above feature would help to avoid the unicode-decode error.
### Your contribution
Unfortunately, I can't contribute now. | 12-28-2022 12:24:08 | 12-28-2022 12:24:08 | Thanks for the report.
Please provide us with a reproducing example showing how a model saved with `save_pretrained` can't be reloaded with `from_pretrained`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
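For reference, the kind of round trip being asked about looks like this (paths are illustrative):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
model.save_pretrained("./local-model")                 # writes config + weights locally
reloaded = AutoModel.from_pretrained("./local-model")  # should load back without any encoding argument
```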
transformers | 20,922 | closed | Encoding parameter | ### Feature request
`transformers.AutoModel.from_pretrained()` allows loading pretrained models from local directories as well. The path to the local weight files is an argument of that function, and these files can have different encodings.
A parameter called 'encoding' can be added in the parameter list similar to
pandas.read_csv('path/to/csv/file',encoding='utf-8')
which takes encoding type as parameter. But this proposed encoding feature must be enabled only while loading local files and should not take parameters while loading models from other sources (like HF-Hub etc).
### Motivation
I face Unicode-Decode error while loading pickled model files from local directory. To overcome this issue, the above feature would help to avoid the unicode-decode error.
### Your contribution
Unfortunately, I can't contribute now. | 12-28-2022 12:21:43 | 12-28-2022 12:21:43 | Duplicate of #20923 |
transformers | 20,921 | closed | add AltCLIP | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-28-2022 09:28:34 | 12-28-2022 09:28:34 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20921). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,920 | closed | Load the state dict on CPU to prevent unnecessary GPU memory surge | # What does this PR do?
When loading the best checkpoint after training is finished, the code loads the weights into a `state_dict` on a GPU _before_ applying them to the model. This means the weights use 2X the GPU memory actually required: 1X for the model object and 1X for the `state_dict`. This PR fixes it by using `map_location="cpu"` when loading the weights into the `state_dict`.
Without this fix, one can get an OOM even after the full training is done, as I did. I've encountered [this issue](https://github.com/allenai/allennlp/pull/5518) before on AllenNLP as well, and it was fixed in the same fashion. It's also mentioned in the PyTorch docs [here](https://pytorch.org/docs/stable/generated/torch.load.html) (see the note about the GPU RAM surge).
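The gist of the change, as a stand-alone sketch (a toy module stands in for the trained model, a GPU is assumed, and the checkpoint path is just an example):
```python
import torch

model = torch.nn.Linear(8, 8).cuda()             # stand-in for a trained model sitting on the GPU
torch.save(model.state_dict(), "best_ckpt.bin")  # stand-in for the saved best checkpoint

state_dict = torch.load("best_ckpt.bin", map_location="cpu")  # weights land in CPU RAM first
model.load_state_dict(state_dict)                # copied onto the existing GPU parameters in place
```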
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-28-2022 08:52:25 | 12-28-2022 08:52:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,919 | closed | ModuleNotFoundError: No module named 'evaluate' | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.25.1
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker, @younesbelkada, and @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling
```
python run_mlm.py \
--model_name_or_path roberta-base \
--train_file path_to_train_file \
--validation_file path_to_validation_file \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--do_train \
--do_eval \
--output_dir /tmp/test-mlm
line 35, in <module>
import evaluate
ModuleNotFoundError: No module named 'evaluate'
myhugBert.sh: line 4: --train_file: command not found
myhugBert.sh: line 11: --output_dir: command not found
```
### Expected behavior
overcome the bug | 12-28-2022 01:54:05 | 12-28-2022 01:54:05 | You need to `pip install evaluate`, as the error message tells you. This is also in the [requirements](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/requirements.txt) for this example.<|||||>First run the pip install-r requirements.txt command then still if you won't find out the module then install individual module pip install evaluate @ucas010 <|||||>Closing this issue as it seems resolved. Feel free to reopen if needed. |
transformers | 20,918 | closed | Unable to save t5 model locally after training t5 using run_t5_mlm_flax.py | ### System Info
I want to develop a POC to train a t5 model on a domain dataset (txt file where each line is a sentence). I came across the run_t5_mlm_flax.py file and followed the steps mentioned in the README file.
After a lot of trial and error, I finally was able to get it running in Colab using a GPU (with a batch size of 8). After running successfully, I am unable to find the saved model anywhere locally within Colab (I checked the provided output directory). Can anyone help me overcome this issue?
This is the command I am using to run the file (t5-trained is a folder I created during runtime):
**python run_t5_mlm_flax.py --output_dir="./t5-trained" --model_type="t5-small" --config_name="./t5-trained" --tokenizer_name="./t5-trained" --train_file="Input_Sent.txt" --max_seq_length="512" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --overwrite_output_dir --logging_steps="500" --save_steps="10000" --eval_steps="2500"**
<img width="513" alt="Screen Shot 2022-12-27 at 4 15 47 PM" src="https://user-images.githubusercontent.com/31246787/209727198-44a9b280-3a2b-43e7-893f-48e672095a90.png">
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Create a small dataset of sentences and save it in txt file. Each line in txt file is a sentence.
2. Create a folder named "t5-trained"
3. Run code to generate tokenizer file using t5_tokenizer_model.py and code mentioned in README.
4. Generate config file using code mentioned in README file.
5. Run this command to train the t5 model -> python run_t5_mlm_flax.py --output_dir="./t5-trained" --model_type="t5-small" --config_name="./t5-trained" --tokenizer_name="./t5-trained" --train_file="Input_Sent.txt" --max_seq_length="512" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --overwrite_output_dir --logging_steps="500" --save_steps="10000" --eval_steps="2500"
### Expected behavior
Once these steps are completed, I am expecting a saved model somewhere locally which I can import and utilize for text generation or embedding generation from the encoder. | 12-27-2022 22:21:04 | 12-27-2022 22:21:04 | cc @sanchit-gandhi <|||||>Hey @patelvishwa112! Sorry for the late reply here! The fine-tuned checkpoint is saved periodically every `save_steps` training steps:
https://github.com/huggingface/transformers/blob/12313838d33373d06d35b48c3c501fa832f16443/examples/flax/language-modeling/run_t5_mlm_flax.py#L950
It looks as though you're setting `save_steps=10000`, but you're only training for 156 train steps. Since your maximum number of train steps is less than your `save_steps`, we never hit the minimum number of steps required to save the model!
If you set:
```
--save_steps="50"
```
You should see that the model is saved every 50 steps. Since this is less than our total number of train steps, we should see that the model is saved during training for a total of 3 times: at 50, 100 and 150 train steps respectively.
It would indeed be nice to update the examples to save the model at the end of training (irrespective of the value for `save_steps`). Feel free to open a PR for this change if you're interested! I'd be more than happy to help guide you through the process and help with the integration!<|||||>Thank you @sanchit-gandhi for the help. I was able to run the code successfully and it generated flax_model.msgpack (~800MB) file amoung others.
Can you tell me how I can load this model using either transformers or TensorFlow, to either get embeddings from the encoder or use it for text generation?
And for creating PR request, I will create one just to update the example so that everyone can follow it without error and I will reach out to you if I need any assistance :). <|||||>Hey @patelvishwa112!
You should be able to load the Flax model using:
```python
from transformers import FlaxT5ForConditionalGeneration
model = FlaxT5ForConditionalGeneration.from_pretrained(<path to your checkpoint>)
```
Looking at your training args, the model weights should be saved under `"./t5-trained"`, so this is the path to your checkpoint.
Here's an example of how you can get the encoder embeddings: https://huggingface.co/docs/transformers/model_doc/t5#transformers.FlaxT5ForConditionalGeneration.encode.example
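As a minimal sketch of the encoder-embedding case (assuming the checkpoint and tokenizer were saved under `./t5-trained`, as in your training args):
```python
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("./t5-trained")
model = FlaxT5ForConditionalGeneration.from_pretrained("./t5-trained")

inputs = tokenizer("A sample sentence.", return_tensors="np")
encoder_outputs = model.encode(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
embeddings = encoder_outputs.last_hidden_state  # shape: (batch, seq_len, d_model)
```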
And an example of how you can generate a sequence of text outputs using the Flax T5 model: https://huggingface.co/docs/transformers/model_doc/t5#transformers.FlaxT5ForConditionalGeneration.__call__.example
Hope that helps! Let me know if you have any other questions regarding how to use the trained Flax T5 model for inference 🤗
That sounds good regarding the PR - feel free to open one with the changes required to save the model at the end of training. You can tag me in the PR for a review! Feel free to reach out if you have any questions on the PR - I'm more than happy to help if you have any questions regarding the changes!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,917 | open | [i18n-<ao>] Translating docs to <am> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
| 12-27-2022 20:51:17 | 12-27-2022 20:51:17 | Hi @arabaman. Could you please fill the template with your language? |
transformers | 20,916 | closed | Learning rate is set to zero for the entirety of the first epoch | https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/optimization.py#L210
Both in practice and based on my understanding of the code, this will produce a `LambdaLR` that returns a multiplicative factor of 0 for the entirety of the first epoch (as `current_step` will be 0), which means the model will not train at all during the first epoch.
The names given to variables in the code imply this might be intended to be set with training steps, not epochs; is that the desire? If not, should this be modified to account for the first epoch having `current_step` equal to 0? Or is something wrong in my specific use-case?
My use-case is in using the `get_polynomial_decay_schedule_with_warmup` as a scheduler in a pytorch lightning module. Note I also mentioned this on the forum, here: https://discuss.huggingface.co/t/huggingface-lr-decay-schedulers-spend-the-first-epoch-w-an-lr-of-0/28195 | 12-27-2022 20:36:34 | 12-27-2022 20:36:34 | Diving deeper into this, I think this is due to a mismatch between pytorch lightning's design and huggingface's. Huggingface trainer. [Huggingface's trainer calls the lr scheduler every step](https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/trainer.py#L1845), and [pytorch lightning can be configured to either call it once per step or once per epoch](https://github.com/Lightning-AI/lightning/blob/612d43e5bf38ba73b4f372d64594c2f9a32e6d6a/src/pytorch_lightning/loops/epoch/training_epoch_loop.py#L407), so I likely have something configured wrong. |
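A minimal sketch of the per-step configuration on the Lightning side (hyperparameters and the module body are placeholders, not my actual setup):
```python
import torch
import pytorch_lightning as pl
from transformers import get_polynomial_decay_schedule_with_warmup


class MyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 4)

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(), lr=5e-5)
        scheduler = get_polynomial_decay_schedule_with_warmup(
            optimizer, num_warmup_steps=100, num_training_steps=1000
        )
        # "interval": "step" makes Lightning call scheduler.step() after every optimizer
        # step, matching how the HF Trainer drives these schedulers
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
        }
```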
transformers | 20,915 | closed | Comparing a huggingface config with a dictionary raises an error as the `__eq__` method relies on `other` having a `__dict__` method | `PreTrainedConfig`s inherit their `__eq__` methods from this: https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/configuration_utils.py#L736
This works fine for comparing two configs, but if you compare a config to something of a different type, (in particular a type without a `__dict__` attribute, like a pure `dict`), it throws an error. It would be easy to add a type check into the equals comparison to ensure that off-type comparisons return False instead. | 12-27-2022 19:29:55 | 12-27-2022 19:29:55 | Would you like to make a PR with such a change? |
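A minimal sketch of the kind of guard being proposed, shown on a toy class rather than the real config class in `configuration_utils.py`:
```python
# illustrative only; the actual class lives in src/transformers/configuration_utils.py
class ToyConfig:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __eq__(self, other):
        # return False for objects of another type instead of touching other.__dict__
        if not isinstance(other, ToyConfig):
            return False
        return self.__dict__ == other.__dict__


assert ToyConfig(hidden_size=4) == ToyConfig(hidden_size=4)
assert ToyConfig(hidden_size=4) != {"hidden_size": 4}  # no AttributeError on a plain dict
```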
transformers | 20,914 | closed | AttributeError: 'DummyVecEnv' object has no attribute 'render_mode' | ### System Info
I'm doing the Deep RL course, I don't know what is happening, something is wrong in the notebook on unit 1: https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit1/unit1.ipynb#scrollTo=xMkkkukIBQJM
when I try to push my Agent to HF Hub I received this error message: AttributeError: 'DummyVecEnv' object has no attribute 'render_mode'
my code is very similar to the notebook example.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code example:
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.env_util import make_vec_env
from huggingface_sb3 import package_to_hub
env_id = "LunarLander-v2"
model_architecture = "PPO"
repo_id = "Felipe474/ppo-LunarLander-v2" # Change with your repo id, you can't push with mine 😄
commit_message = "Upload PPO LunarLander-v2 trained agent"
eval_env = DummyVecEnv([lambda: gym.make(env_id)])
package_to_hub(model=model, # Our trained model
model_name=model_name, # The name of our trained model
model_architecture=model_architecture, # The model architecture we used: in our case PPO
env_id=env_id, # Name of the environment
eval_env=eval_env, # Evaluation Environment
repo_id=repo_id, # id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name} for instance ThomasSimonini/ppo-LunarLander-v2
commit_message=commit_message)
### Expected behavior
Successively push agent to HF hub. | 12-27-2022 19:14:03 | 12-27-2022 19:14:03 | |
transformers | 20,913 | closed | Fix FP16 inference in TextGenerationPipeline | # What does this PR do?
Hi @Narsil,
I tried to fix https://github.com/huggingface/transformers/issues/20912 here by setting `torch_dtype` as a regular attribute to help with the `preprocess` function in `AutomaticSpeechRecognitionPipeline`. This way we keep it out of `kwargs`, so we don't need to modify the `_sanitize_parameters` function in other pipelines. Looking forward to hearing your opinion :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-27-2022 18:18:54 | 12-27-2022 18:18:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>In general its better for parameters to be in `_sanitize_parameters` but this doesn't apply here, since the models already uses `torch_dtype` and so using `pipe(..., torch_dtype=torch.float16)` cannot work anyway.
I think the proposed fix is elegant.
Do you mind adding a test for `text-generation` and `float16` too ?
For the quality you should be able to do
```
pip install -e .[dev] # To get dependencies
make fixup
```<|||||>Thanks for the review @Narsil !
Do you think it should be better to change the name `torch_dtype` to `dtype`, since `Pipeline` can also be used for Tensorflow? I don't really use Tensorflow so I'm not sure about it.
I just added a test and fixed the quality. Thanks for the tips!
<|||||>> Do you think it should be better to change the name torch_dtype to dtype, since Pipeline can also be used for Tensorflow? I don't really use Tensorflow so I'm not sure about it.
Later, if we do it. Better to stick to the name used elsewhere in the lib, which is indeed `torch_dtype`. I'm also unfamiliar with fp16 computation in Tensorflow, but I'm guessing it could work differently.
The good thing is that we could always alias later if needed. |
transformers | 20,912 | closed | Run TextGenerationPipeline in FP16 | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
Hi @Narsil,
I just found some other pipelines (e.g., `TextGenerationPipeline`, `Text2TextGenerationPipeline`) can't run fp16 inference any more due to the change in this PR https://github.com/huggingface/transformers/pull/20864.
In fact, the added `torch_dtype` attribute will be unexpectedly thrown into `forward_params` by `_sanitize_parameters()`, and then raises an error in the `generate()` function.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Below is a code snippet to reproduce the behavior.
```python
import torch
from transformers import pipeline
generator = pipeline(model="gpt2", device=0, torch_dtype=torch.float16)
generator("I can't believe you did such a ")
```
When running this we see the following stack trace:
```
╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮
│ <ipython-input-1-f80bcf5e17e1>:6 in <module> │
│ /home/bhuang/transformers/src/transformers/pipelines/text_generation.py:210 in __call__ │
│ │
│ 207 │ │ │ - **generated_token_ids** (`torch.Tensor` or `tf.Tensor`, present when │
│ 208 │ │ │ ids of the generated text. │
│ 209 │ │ """ │
│ ❱ 210 │ │ return super().__call__(text_inputs, **kwargs) │
│ 211 │ │
│ 212 │ def preprocess(self, prompt_text, prefix="", handle_long_generation=None, **gen │
│ 213 │ │ inputs = self.tokenizer( │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/base.py:1074 in __call__ │
│ │
│ 1071 │ │ elif is_iterable: │
│ 1072 │ │ │ return self.iterate(inputs, preprocess_params, forward_params, postpro │
│ 1073 │ │ else: │
│ ❱ 1074 │ │ │ return self.run_single(inputs, preprocess_params, forward_params, post │
│ 1075 │ │
│ 1076 │ def run_multi(self, inputs, preprocess_params, forward_params, postprocess_par │
│ 1077 │ │ return [self.run_single(item, preprocess_params, forward_params, postproce │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/base.py:1081 in run_single │
│ │
│ 1078 │ │
│ 1079 │ def run_single(self, inputs, preprocess_params, forward_params, postprocess_pa │
│ 1080 │ │ model_inputs = self.preprocess(inputs, **preprocess_params) │
│ ❱ 1081 │ │ model_outputs = self.forward(model_inputs, **forward_params) │
│ 1082 │ │ outputs = self.postprocess(model_outputs, **postprocess_params) │
│ 1083 │ │ return outputs │
│ 1084 │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/base.py:990 in forward │
│ │
│ 987 │ │ │ │ inference_context = self.get_inference_context() │
│ 988 │ │ │ │ with inference_context(): │
│ 989 │ │ │ │ │ model_inputs = self._ensure_tensor_on_device(model_inputs, dev │
│ ❱ 990 │ │ │ │ │ model_outputs = self._forward(model_inputs, **forward_params) │
│ 991 │ │ │ │ │ model_outputs = self._ensure_tensor_on_device(model_outputs, d │
│ 992 │ │ │ else: │
│ 993 │ │ │ │ raise ValueError(f"Framework {self.framework} is not supported") │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/text_generation.py:252 in _forward │
│ │
│ 249 │ │ │ in_b = input_ids.shape[0] │
│ 250 │ │ prompt_text = model_inputs.pop("prompt_text") │
│ 251 │ │ # BS x SL │
│ ❱ 252 │ │ generated_sequence = self.model.generate(input_ids=input_ids, attention_mas │
│ 253 │ │ out_b = generated_sequence.shape[0] │
│ 254 │ │ if self.framework == "pt": │
│ 255 │ │ │ generated_sequence = generated_sequence.reshape(in_b, out_b // in_b, *g │
│ │
│ /home/bhuang/anaconda3/envs/asr/lib/python3.8/site-packages/torch/autograd/grad_mode.py:2 │
│ 7 in decorate_context │
│ │
│ 24 │ │ @functools.wraps(func) │
│ 25 │ │ def decorate_context(*args, **kwargs): │
│ 26 │ │ │ with self.clone(): │
│ ❱ 27 │ │ │ │ return func(*args, **kwargs) │
│ 28 │ │ return cast(F, decorate_context) │
│ 29 │ │
│ 30 │ def _wrap_generator(self, func): │
│ │
│ /home/bhuang/transformers/src/transformers/generation/utils.py:1145 in generate │
│ │
│ 1142 │ │ │
│ 1143 │ │ generation_config = copy.deepcopy(generation_config) │
│ 1144 │ │ model_kwargs = generation_config.update(**kwargs) # All unused kwargs mus │
│ ❱ 1145 │ │ self._validate_model_kwargs(model_kwargs.copy()) │
│ 1146 │ │ │
│ 1147 │ │ # 2. Set generation parameters if not already defined │
│ 1148 │ │ logits_processor = logits_processor if logits_processor is not None else L │
│ │
│ /home/bhuang/transformers/src/transformers/generation/utils.py:973 in │
│ _validate_model_kwargs │
│ │
│ 970 │ │ │ │ unused_model_args.append(key) │
│ 971 │ │ │
│ 972 │ │ if unused_model_args: │
│ ❱ 973 │ │ │ raise ValueError( │
│ 974 │ │ │ │ f"The following `model_kwargs` are not used by the model: {unused_ │
│ 975 │ │ │ │ " generate arguments will also show up in this list)" │
│ 976 │ │ │ ) │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: The following `model_kwargs` are not used by the model: ['torch_dtype'] (note:
typos in the generate arguments will also show up in this list)
```
| 12-27-2022 18:12:42 | 12-27-2022 18:12:42 | |
transformers | 20,911 | closed | Generate: correctly detect default max length | # What does this PR do?
Fixes #20894.
Now that we are using the generation config, we can detect the use of a default `max_length` and a potential clash with `max_new_tokens` with values other than `max_length=20` :)
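For illustration, a call of this shape (a sketch, not the exact snippet from #20894) no longer reports a spurious clash:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello world", return_tensors="pt")

# only max_new_tokens is set; the default max_length is now recognised as a default
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```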
After this change, the example in the issue linked above works correctly. | 12-27-2022 14:33:32 | 12-27-2022 14:33:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,910 | closed | Request for scripts/helper function to create custom jsonl files for translation | ### Feature request
In the examples for machine translation there's a section that states that current scripts can only consume data in a custom jsonl format as follows
```json
{ "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } }
{ "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } }
```
It would be great if there are helper scripts that could convert pandas data frames into this particular format
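Until then, a minimal sketch of such a helper (the column names, example rows, and output file are just an illustration):
```python
import json

import pandas as pd

df = pd.DataFrame({
    "en": ["Others have dismissed him as a joke."],
    "ro": ["Alții l-au numit o glumă."],
})

with open("train.json", "w", encoding="utf-8") as f:
    for en, ro in zip(df["en"], df["ro"]):
        f.write(json.dumps({"translation": {"en": en, "ro": ro}}, ensure_ascii=False) + "\n")
```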
### Motivation
It's frustrating that data has to be converted into this particular format for it to be consumed by the training scripts; it would be nice if the scripts consumed CSVs with one column being language1 and the other column being language2.
### Your contribution
I could help testing and validating the code | 12-27-2022 13:54:54 | 12-27-2022 13:54:54 | Examples are just that examples. You should adapt the data processing part to your specific data format. |
transformers | 20,909 | closed | Add distributed training example with Accelerate for run_clm_no_trainer.py | # What does this PR do?
Adds additional documentation for running distributed training using accelerate for training causal language models without the HuggingFace Trainer. The example that uses Trainer API defaults to using multi GPU for training while the no trainer example defaults to single GPU training. The added documentation clears the confusion up and provides an example for distributed training when not using the Trainer API
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 12-27-2022 07:29:43 | 12-27-2022 07:29:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20909). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,908 | closed | OSError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens. | ### System Info
I copied the [notebook](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) and got the error shown in the title.
Could you please help me?
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
same as official script
### Expected behavior
overcome the error | 12-27-2022 07:21:13 | 12-27-2022 07:21:13 | You need to make sure to execute the cell `notebook_login()` at the beginning and pass it your token (it provides a direct link to your token pages on hf.co)<|||||>pass it your token ??? I have token but how to use it ? @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,907 | closed | Extend Script to enable conversion of Encoder Only T5x Models to Pytorch | # What does this PR do?
This PR extends the [script that converts T5x models to PyTorch](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py). This is particularly useful for converting [T5x Retrieval dual encoder](https://github.com/google-research/t5x_retrieval) models to PyTorch.
To Use:
- In case you don't have gsutil, install according to https://cloud.google.com/storage/docs/gsutil_install
- Pretrained T5X_Retrieval checkpoints are at https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5x/retrieval/. Example:
`gsutil -m cp -r gs://t5-data/pretrained_models/t5x/retrieval/gtr_base $HOME/` (browse the bucket at https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5x/retrieval/gtr_base/)
- Create a corresponding config.json for the downloaded checkpoint. Often one already exists, e.g. here we can use https://huggingface.co/google/t5-v1_1-base/blob/main/config.json
I tested this on the released GTR-base checkpoint and compared the JAX and PyTorch models; the outputs are similar.
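Once converted, the resulting encoder-only checkpoint can be loaded with the standard encoder-only T5 class. A usage sketch (the local path is a placeholder, and the tokenizer choice follows the config suggestion above as an assumption):
```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

model = T5EncoderModel.from_pretrained("/path/to/converted_gtr_base")  # placeholder path
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")

inputs = tokenizer("a query to embed", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, seq_len, d_model)
```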
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
@patil-suraj
@bastings
| 12-27-2022 06:36:50 | 12-27-2022 06:36:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ArthurZucker <|||||>Once the tests are good we can merge!<|||||>Hi @ArthurZucker , `make style` seems to be changing a lot of files, is there any fix for this?<|||||>Yes, you probably have the wrong version of `black`. Something like `pip install --upgrade black` should fix this. <|||||>Fixed🤓 |
transformers | 20,906 | closed | add model resources for CPMAnt (new) | # What does this PR do?
Since the previous submission (#20711) had problems here and there, we have now resubmitted a new one.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-27-2022 02:48:11 | 12-27-2022 02:48:11 | > Thanks very much @pioliverse for iterating! I left a couple of comments, I think that some refactoring needs to be considered, after that we should be close to merge this! My main comments are:
>
> * I think that you can wrap `CPMAntEmbedding` around a `nn.Embedding` layer even though scaling is needed. You can scale down after each call to the embedding module and make sure the input is scaled down before the `projection` call.
> * Make sure to inherit `CPMAntForCausalLM` from `CPMAntPreTrainedModel`, also make sure to follow the convention / good practices by checking what is done in OPT for instance: https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/src/transformers/models/opt/modeling_opt.py#L808
> - this includes defining correctly a `lm_head` module, functions such as `get_input_embeddings`, `set_input_embeddings`, etc.
> * A lot of arguments from module's init seems to be unused, e.g. `init_std`. Try also to take the `config` object as a single argument from the init whenever possible (e.g. `CPMAntEncoder`)
> * Please make sure to follow the correct styling for docstrings (check my comments about that below)
> * If you have to initialize some weights with a specific distribution, try to initialize all the submodules weights inside `_init_weights` function from `CPMAntPreTrainedModel`
> * It's unclear to me why `forward` function is not defined in `CPMAntForCausalLM`
> * The code can be optimized here and there, I left some comments below on how you can achieve that
> * Please do not raise `RuntimeErrors` outside `if torch_is_available()`, otherwise `flax` & `tf` tests will fail
> Again thanks a lot for your efforts!
@younesbelkada Thanks for your patience in reviewing, I followed OPT convention and made the following changes:
> * `CPMAntEmbedding` and `CPMAntLinear` has been replaced by `nn.Embedding` and `nn.Linear` respectively.
> * `CPMAntForCausalLM` has been inherited from `CPMAntPreTrainedModel`, and `lm_head` and some functions have been added.
> * Useless initial arguments have been removed.
> * `forward` has been defined in `CPMAntForCausalLM`<|||||>@younesbelkada Thanks again for your patience in reviewing.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks so much for your patience ! Looks pretty clean thank you! We should be close merging this once most of the comments are addressed. My comments being:
>
> ### Docstring and comments:
> * please harmonize the function docstrings to match the convention `transformers` model follow
> * please make sure to clean up some comments
> * Also would be nice to add a small explanation on the code on why `generate` needs to be overriden
>
> ### `dtype`:
> * I don't think the argument `dtype` is needed. The dtype of the whole model is managed by the kwarg `torch_dtype` so you can load your model using `model = xxxForCausalLM.from_pretrained(xxx, torch_dtype=torch.float16)` or `torch_dtype="auto"` (if the weights are pushed in fp16) and the model will be loaded in the desired precision.
>
> ### tests
> * I think that a test is failing, please double check that
>
> ### general comments
> * For classes that are public (i.e. that are ported in `__init__.py`, basically `CPMAntModel` & `CPMAntForCausalLM`) it is preferable to adopt this logic: https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py#L693-L703 --> output a tuple if not `return_dict`, otherwise return a dataclass. Please check other modeling files as reference
> * you can wrap `attention_mask` creation process inside class methods, e.g. `_prepare_attention_mask(`
>
> Thanks!
Hi @younesbelkada, we have made some changes as follows:
1. add some docstrings.
2. modified `forward` following the style of transformers.
3. rewrote some functions to adapt the `generate` function
>* in `modeling_cpmant.py`, we rewrote some functions like `prepare_inputs_for_generation`, `_expand_inputs_for_generation`
>* in `tokenization_cpmant.py`, rewrote some functions like `prepare_for_model`, `_pad`, `_encode_plus`, `_batch_encode_plus`
4. cleaned some comments.
<|||||>I am a bit surprised that when I use `make style`, some other files are also reformatted, which causes `check_code_quality` to fail.<|||||>Hi @pioliverse
You need to rebase with `main` branch as the styling has been updated for most of the files in `transformers` , and update your black version as follows:
```
pip install --upgrade -e .["quality"]
```
Then `make style` or `make fixup`<|||||>> Hi @pioliverse You need to rebase with `main` branch as the styling has been updated for most of the files in `transformers` , and update your black version as follows:
>
> ```
> pip install --upgrade -e .["quality"]
> ```
>
> Then `make style` or `make fixup`
Thanks @younesbelkada , this has been solved.<|||||>> Thanks a lot for addressing most of the comments of the previous review! And thank you for your huge work on refactoring the modeling script I left some comments, mostly nits that can be solved easily. Note that for arguments such as `use_cache` etc, we prefer to pass them through the forward pass rather than setting them as a class attribute. Also, please consider passing a `CPMAntConfig` for the classes that have several attributes such as `CPMAntEncoder` Make sure also to correctly pass the required keyword arguments such as `past_key_values`, `output_attentions` etc, that are crucial for caching mechanism. You can check how this is done in OPT for example Finally, the naming convention in `transformers` has changed a bit, we prefer to name models with a single capital letter (i.e. here `CPMAnt -> Cpmant`) Again thanks for your efforts on this! Once the comments being solved, we should be very close merging this!
Thanks for your review @younesbelkada , we have modified some code.
>* We now pass `use_cache` through the `forward` function instead of reading it from a class attribute.
>* We simplified the code for the class attribute assignment and replaced it with `CPMAntConfig`.
>* We added `past_key_values` and `output_attentions` in `forward` of CPMAntModel.
>* I kind of wonder if all files that contain the name `CPMAnt` should be changed to `Cpmant`?<|||||>Hi @younesbelkada , I am a member of OpenBMB, and I will help @pioliverse finish this PR.
All the issues mentioned above have been resolved. Please kindly have a look.
For the unit tests, I rebased `pioliverse:cpmantmodel` with `huggingface:main`, but it cannot pass the tests. It seems some other models cause the failure?
For instance, in tests_onnx I get this error:
```
ERROR tests/models/altclip/test_modeling_altclip.py
============ 72 passed, 551 skipped, 29 warnings, 1 error in 28.26s ============
```
How can I avoid such an error?<|||||>Hi @gongbaitao
Thanks for jumping in! And sorry for the delay
Rebasing with `main` should probably solve this issue; I will look into the PR asap. Let me know once you think this is ready for review!<|||||>Hi @younesbelkada, thanks for your reply and advice!
All the problems in the unit tests have been solved. In the latest commit, we largely refactored the code to make it clear and simple. Hope you can give a review soon! <|||||>Thanks a lot for the heads-up @pioliverse @gongbaitao !
Quickly looking at the README_ja file it seems that some unnecessary changes were made, I suggest you merge this branch with the upstream `transformers` `main` branch:
```
git remote add upstream https://github.com/huggingface/transformers.git
git fetch upstream
git merge upstream/main
git push
```
I'll have a closer look on the other files asap! <|||||>> Thanks a lot for the heads-up @pioliverse @gongbaitao ! Quickly looking at the README_ja file it seems that a some unnecessarly changes were made, I suggest you merge this branch with the upstream `transformers` `main` branch:
>
> ```
> git remote add upstream https://github.com/huggingface/transformers.git
> git fetch upstream
> git merge upstream/main
> git push
> ```
>
> I'll have a closer look on the other files asap!
Thanks for the tips @younesbelkada!
I have merged with `huggingface main` branch and update the README_ja, and it's up-to-date with the `main` now. We hope to merge this PR in this week, please kindly have a look. Thanks!<|||||>> Regarding slow integration tests, we might need to totally skip them as the weights are 40GB in fp32 (and 20GB in fp16), I think our daily CI runners have 16GB GPU VRAM, so friendly pinging here @ydshieh to see what could be the alternative.
Yeah, just skip it/them.<|||||>Hi @gongbaitao
Great work on refactoring the code and making the CI tests pass! 🎉
Let me know once this is ready for review!<|||||>Hi @younesbelkada, thanks for your comment yesterday! This helps me a lot to find hidden errors in unit test part. I have fixed these bugs and make the new commit.
But I don't know why ci/circleci: test_tf always fails with a `tests/models/opt/test_modeling_tf_opt.py::TFOPTModelTest::test_pipeline_text_generation` timeout, even though I have tried 3 times.
Is there anything I missed?<|||||>Hello @gongbaitao
Don't worry I think this is fine. If you give me the green light, I can review the PR now<|||||>> Hello @gongbaitao Don't worry I think this is fine. If you give me the green light, I can review the PR now
Yeah, it's solved by luck, I guess.
I think It's ready for review, thanks for your help again! @younesbelkada <|||||>Hi @younesbelkada, thanks for your meaningful comments!
1. The link for checkpoint, and some other comments problems, have been corrected.
2. As for the name, because so far CPMAnt has only a Chinese variant, I think there is no need to call it something like CPMAntChinese.
3. Besides, I have made some local tests on tokenizers, for example, the `CPMAntTokenizationTest.test_pre_tokenization()`. But some methods in `TokenizerTesterMixin` use different logic to load the vocab as a `dict`, while CPMAnt has its own `load_vocab`. Refactoring is not that convenient or necessary I think, so I just set it as `custom`. I can make a change if there is a better solution.
It's ok for the new review : )<|||||>Hi @younesbelkada . I have fixed the problems mentioned in comments. I think it's ready for new review:)
Thanks for your detailed review and comments!<|||||>@younesbelkada @sgugger Thanks for the valued comments!
Following the new comments, I have dropped some redundant code and renamed the model class in a camel-cased way
: )<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @sgugger @younesbelkada , sorry for the delay!
In the last few weeks, I have fixed the problems mentioned above and refactored the CPMAnt tokenizer. Please kindly have a look again, thanks for your help!<|||||>Thanks for your quick review! @sgugger
It seems this problem https://github.com/huggingface/transformers/pull/20906#discussion_r1161687724 is because the changed file didn't show all commits. Maybe check this page https://github.com/huggingface/transformers/pull/20906/files will be helpful:)
As the https://github.com/huggingface/transformers/pull/20906#discussion_r1161688147, it cannot pass the code quality check, so shall I keep it unchanged?<|||||>@sgugger Thanks for your meaningful comments!
Sorry, I forgot to drop the trailing comma flagged by the styling check. Now I have fixed the trailing comma problem and added the `tooslow` decorator. Please kindly have a review:) |
transformers | 20,905 | closed | Issues attempting to implement P-TuningV2 with huggingface's BART | @patrickvonplaten
Hello, I am trying to implement P-Tuningv2 with BART using huggingface's transformers v4.25.1 ([P-TuningV2 official repo](https://github.com/THUDM/P-tuning-v2)). However, when I try to train the model I get the following error:
```
[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
238 if attention_mask is not None:
239 if attention_mask.size() != (bsz, 1, tgt_len, src_len):
--> 240 raise ValueError(
241 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
242 )
ValueError: Attention mask should be of size (4, 1, 648, 648), but is torch.Size([4, 1, 652, 652])
```
Any ideas where the issue is coming from or how to resolve this? I am a little unfamiliar with the codebase so any help will be greatly appreciated.
Thanks,
Here's the code I'm using to run the model:
```
import copy
import math
import random
import warnings

import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import CrossEntropyLoss

from transformers import BartConfig, BartModel, BartPretrainedModel
from transformers.modeling_outputs import Seq2SeqLMOutput
from transformers.utils import logging

logger = logging.get_logger(__name__)  # `logger` is used inside forward() below
def shift_tokens_right(
input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int
):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_token_id
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
return shifted_input_ids
class PrefixEncoder(torch.nn.Module):
r"""
The torch.nn model to encode the prefix
Input shape: (batch-size, prefix-length)
Output shape: (batch-size, prefix-length, 2*layers*hidden)
"""
def __init__(self, config):
super().__init__()
self.prefix_projection = config.prefix_projection
if self.prefix_projection:
# Use a two-layer MLP to encode the prefix
self.embedding = torch.nn.Embedding(config.pre_seq_len, config.hidden_size)
self.trans = torch.nn.Sequential(
torch.nn.Linear(config.hidden_size, config.prefix_hidden_size),
torch.nn.Tanh(),
torch.nn.Linear(
config.prefix_hidden_size,
config.num_hidden_layers * 2 * config.hidden_size,
),
)
else:
self.embedding = torch.nn.Embedding(
config.pre_seq_len, config.num_hidden_layers * 2 * config.hidden_size
)
def forward(self, prefix: torch.Tensor):
if self.prefix_projection:
prefix_tokens = self.embedding(prefix)
past_key_values = self.trans(prefix_tokens)
else:
past_key_values = self.embedding(prefix)
return past_key_values
class PrefixBartForConditionalGeneration(BartPretrainedModel):
base_model_prefix = "model"
_keys_to_ignore_on_load_missing = [
r"final_logits_bias",
r"lm_head.weight",
"encoder.embed_tokens.weight",
"decoder.embed_tokens.weight",
]
def __init__(self, config: BartConfig):
# MAX - testing the config default values from (https://github.com/THUDM/P-tuning-v2/blob/main/arguments.py)
config.pre_seq_len = 4
config.hidden_dropout_prob = 0.1
config.prefix_hidden_size = 512
config.prefix_projection = False
super().__init__(config)
# MAX :: get the layer, embedding and heads to generate the prefix
self.pre_seq_len = config.pre_seq_len
self.n_layer = config.num_hidden_layers
self.n_head = config.num_attention_heads
self.n_embd = (
config.hidden_size // config.num_attention_heads
) # MAX - here we change the embed dims..
self.model = BartModel(config)
self.register_buffer(
"final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings))
)
self.lm_head = nn.Linear(
config.d_model, self.model.shared.num_embeddings, bias=False
)
# MAX :: add the prefix encoder/tokens and dropout for the prefixes
self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)
self.prefix_encoder = PrefixEncoder(config)
self.prefix_tokens = torch.arange(self.pre_seq_len).long()
# MAX :: freeze the model parameters
for param in self.model.parameters():
param.requires_grad = False
# Initialize weights and apply final processing
self.post_init()
# MAX :: modify and adapt for bart
def get_prompt(self, batch_size):
prefix_tokens = (
self.prefix_tokens.unsqueeze(0).expand(batch_size, -1).to(self.model.device)
)
past_key_values = self.prefix_encoder(prefix_tokens)
bsz, seqlen, _ = past_key_values.shape
past_key_values = past_key_values.view(
bsz, seqlen, self.n_layer * 2, self.n_head, self.n_embd
)
past_key_values = self.dropout(past_key_values)
past_key_values = past_key_values.permute([2, 0, 3, 1, 4]).split(2)
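# each element returned by split(2) has shape (2, batch, n_head, pre_seq_len, n_embd),
# i.e. a stacked (key, value)-style pair per transformer layer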
return past_key_values
def get_encoder(self):
return self.model.get_encoder()
def get_decoder(self):
return self.model.get_decoder()
def resize_token_embeddings(self, new_num_tokens: int) -> nn.Embedding:
new_embeddings = super().resize_token_embeddings(new_num_tokens)
self._resize_final_logits_bias(new_num_tokens)
return new_embeddings
def _resize_final_logits_bias(self, new_num_tokens: int) -> None:
old_num_tokens = self.final_logits_bias.shape[-1]
if new_num_tokens <= old_num_tokens:
new_bias = self.final_logits_bias[:, :new_num_tokens]
else:
extra_bias = torch.zeros(
(1, new_num_tokens - old_num_tokens),
device=self.final_logits_bias.device,
)
new_bias = torch.cat([self.final_logits_bias, extra_bias], dim=1)
self.register_buffer("final_logits_bias", new_bias)
def get_output_embeddings(self):
return self.lm_head
def set_output_embeddings(self, new_embeddings):
self.lm_head = new_embeddings
def forward(
self,
input_ids=None,
attention_mask=None,
decoder_input_ids=None,
decoder_attention_mask=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
encoder_outputs=None,
past_key_values=None,
inputs_embeds=None,
decoder_inputs_embeds=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
(masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
Returns:
"""
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
# MAX-NOTE :: run the prefix layer
batch_size = input_ids.shape[0]
past_key_values = self.get_prompt(batch_size=batch_size)
prefix_attention_mask = torch.ones(batch_size, self.pre_seq_len).to(
self.model.device
)
attention_mask = torch.cat((prefix_attention_mask, attention_mask), dim=1)
print("encoder mask: {}".format(attention_mask.size()))
# BUG: attention_mask is extended here, but not the size of the hidden_states or the key_states (past_key_value[0])?
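# NOTE (likely cause, not verified): BartModel only feeds `past_key_values` to the decoder's
# self-attention, so the encoder never receives the prefix key/values. Extending the *encoder*
# attention_mask by `pre_seq_len` therefore makes it 4 positions wider than the encoder hidden
# states, which matches the (648 vs 652) size mismatch reported in the traceback above.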
if labels is not None:
if use_cache:
logger.warning(
"The `use_cache` argument is changed to `False` since `labels` is provided."
)
use_cache = False
if decoder_input_ids is None and decoder_inputs_embeds is None:
decoder_input_ids = shift_tokens_right(
labels, self.config.pad_token_id, self.config.decoder_start_token_id
)
outputs = self.model(
input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
encoder_outputs=encoder_outputs,
decoder_attention_mask=decoder_attention_mask,
head_mask=head_mask,
decoder_head_mask=decoder_head_mask,
cross_attn_head_mask=cross_attn_head_mask,
past_key_values=past_key_values, # MAX-NOTE :: unlike bert this did not need to be added here?
inputs_embeds=inputs_embeds,
decoder_inputs_embeds=decoder_inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
lm_logits = self.lm_head(outputs[0])
lm_logits = lm_logits + self.final_logits_bias.to(lm_logits.device)
masked_lm_loss = None
if labels is not None:
loss_fct = CrossEntropyLoss()
masked_lm_loss = loss_fct(
lm_logits.view(-1, self.config.vocab_size), labels.view(-1)
)
if not return_dict:
output = (lm_logits,) + outputs[1:]
return (
((masked_lm_loss,) + output) if masked_lm_loss is not None else output
)
return Seq2SeqLMOutput(
loss=masked_lm_loss,
logits=lm_logits,
past_key_values=outputs.past_key_values,
decoder_hidden_states=outputs.decoder_hidden_states,
decoder_attentions=outputs.decoder_attentions,
cross_attentions=outputs.cross_attentions,
encoder_last_hidden_state=outputs.encoder_last_hidden_state,
encoder_hidden_states=outputs.encoder_hidden_states,
encoder_attentions=outputs.encoder_attentions,
)
def prepare_inputs_for_generation(
self,
decoder_input_ids,
past=None,
attention_mask=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
use_cache=None,
encoder_outputs=None,
**kwargs,
):
# cut decoder_input_ids if past is used
if past is not None:
decoder_input_ids = decoder_input_ids[:, -1:]
return {
"input_ids": None, # encoder_outputs is defined. input_ids not needed
"encoder_outputs": encoder_outputs,
"past_key_values": past,
"decoder_input_ids": decoder_input_ids,
"attention_mask": attention_mask,
"head_mask": head_mask,
"decoder_head_mask": decoder_head_mask,
"cross_attn_head_mask": cross_attn_head_mask,
"use_cache": use_cache, # change this to avoid caching (presumably for debugging)
}
def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
return shift_tokens_right(
labels, self.config.pad_token_id, self.config.decoder_start_token_id
)
@staticmethod
def _reorder_cache(past, beam_idx):
reordered_past = ()
for layer_past in past:
# cached cross_attention states don't have to be reordered -> they are always the same
reordered_past += (
tuple(
past_state.index_select(0, beam_idx)
for past_state in layer_past[:2]
)
+ layer_past[2:],
)
return reordered_past
```
| 12-26-2022 21:47:11 | 12-26-2022 21:47:11 | cc @ArthurZucker <|||||>Hello @patrickvonplaten @ArthurZucker,
I wrote a simple test case to reproduce the error I am getting for the model I am trying to implement using a few examples from SQuAD.
### 1. Loading the dataset
```
from datasets import Dataset
def formatToMI(dataset):
"""take a squad-like qa dataset and transform into MLM format"""
masked_strings = []
full_strings = []
qa_strings = []
answer_strings = []
for i in range(len(dataset["question"])):
question = dataset["question"][i]
answer = dataset["answers"][i]["text"][0]
context = dataset["context"][i]
masked_strings.append(
"Question: {} Answer: <mask>. Context: {}".format(question, context)
)
full_strings.append(
"Question: {} Answer: {}. Context: {}".format(question, answer, context)
)
qa_strings.append("Question: {} Answer: {}.".format(question, answer))
answer_strings.append(answer)
return {
"masked_strings": masked_strings,
"full_strings": full_strings,
"qa_strings": qa_strings,
"answer_strings": answer_strings,
"id": dataset["id"],
}
def loadSquadMI(n=None):
"""create a dataloader for SQuAD"""
from datasets import load_dataset
raw_datasets = load_dataset("squad")
if n is not None:
squad_subset = formatToMI(raw_datasets["train"][:n])
return squad_subset
else:
return 0
samples = loadSquadMI(n=100)
tiny_squad = Dataset.from_dict(samples)
```
### 2. Creating the dataloader
```
from transformers import AutoTokenizer, BartForConditionalGeneration, DataCollatorForSeq2Seq
import torch
from torch.utils.data import DataLoader
# initialize BART and PrefixBART for MI
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
examples = tiny_squad
prefixbart_model = PrefixBartForConditionalGeneration.from_pretrained("facebook/bart-base")
bart_model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=prefixbart_model,
label_pad_token_id=-100,
pad_to_multiple_of=8,
)
# preprocessing
def training_preprocessing(examples):
"""examples have all three types of string"""
padding = "max_length"
model_inputs = tokenizer(
examples["masked_strings"],
max_length=384,
padding=padding,
truncation=False,
)
labels = tokenizer(
text_target=examples["qa_strings"],
max_length=128,
padding=padding,
truncation=True,
)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length":
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label]
for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
proc_train_dataset = examples.map(
training_preprocessing,
batched=True,
remove_columns=examples.column_names,
)
train_tensor = proc_train_dataset
train_tensor.set_format("torch")
train_dataloader = DataLoader(
train_tensor,
shuffle=True,
collate_fn=data_collator,
batch_size=4,
num_workers=0,
)
```
### 3. Test: a single forward pass
#### With BART : successful
```
bart_model.train()
batch = next(iter(train_dataloader))
outputs = bart_model(**batch)
loss = outputs.loss
print(loss)
```
**Output:**
`tensor(0.8271, grad_fn=<NllLossBackward0>)`
#### With PrefixBART : failure (same error as above)
```
prefixbart_model.train()
batch = next(iter(train_dataloader))
outputs = prefixbart_model(**batch)
loss = outputs.loss
print(loss)
```
**Output**
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-26-ebc93e8e099a>](https://localhost:8080/#) in <module>
3 prefixbart_model.train()
4 batch = next(iter(train_dataloader))
----> 5 outputs = prefixbart_model(**batch)
6 loss = outputs.loss
7 print(loss)
9 frames
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
[<ipython-input-5-71e56dfc61a6>](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
211 )
212
--> 213 outputs = self.model(
214 input_ids,
215 attention_mask=attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1231
1232 if encoder_outputs is None:
-> 1233 encoder_outputs = self.encoder(
1234 input_ids=input_ids,
1235 attention_mask=attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
848 )
849 else:
--> 850 layer_outputs = encoder_layer(
851 hidden_states,
852 attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions)
323 """
324 residual = hidden_states
--> 325 hidden_states, attn_weights, _ = self.self_attn(
326 hidden_states=hidden_states,
327 attention_mask=attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
238 if attention_mask is not None:
239 if attention_mask.size() != (bsz, 1, tgt_len, src_len):
--> 240 raise ValueError(
241 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
242 )
ValueError: Attention mask should be of size (4, 1, 384, 384), but is torch.Size([4, 1, 388, 388])
```<|||||>Hello again @patrickvonplaten @ArthurZucker,
I just found out about `adapter-transformers` which implements prefix-tuning for BART on which P-TuningV2 is based. Maybe this issue can be closed?<|||||>Hey! Cool that you found something that works for you! The issue might just have been from a config parameter defining the `hidden_size`<|||||>Hello, thank you for replying. I will try out the modified config and see if it resolves the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,904 | closed | Don't call deprecated method | # What does this PR do?
Call `pad_image` instead of `pad` which has been deprecated in order to maintain consistent method naming across image processors.
There's no difference in logic, as `pad` calls `pad_image`, it just reduces excessive logging raised in [this comment](https://github.com/huggingface/transformers/pull/20425#issuecomment-1364747167).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-26-2022 20:33:54 | 12-26-2022 20:33:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,903 | closed | Informer - Transformer For Time-Series Forecasting | # Model description
Following the new support for Time Series Transformers in the [API](https://huggingface.co/docs/transformers/model_doc/time_series_transformer) (and the great blog by @NielsRogge and @kashif [here](https://huggingface.co/blog/time-series-transformers)), I propose adding "Informer" - AAAI 2021 Best Paper model.
* Paper: [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
](https://arxiv.org/abs/2012.07436)
* Model implementation: https://github.com/zhouhaoyi/Informer2020
## Why this model?
Compared to other forecasting transformers (see below), Informer seems to be the most "code-stable" one, with the most stars & forks on GitHub.
Popular forecasting Transformers, with links to their repositories:
[LogTrans](https://github.com/mlpotter/Transformer_Time_Series) - NIPS 2019
[Informer](https://github.com/zhouhaoyi/Informer2020) - AAAI 2021 (Best Paper)
[Autoformer](https://github.com/thuml/Autoformer) - NIPS 2021
[Pyraformer](https://github.com/alipay/Pyraformer) - ICLR 2022
[FEDformer](https://github.com/MAZiqing/FEDformer) - ICML 2022
This list based on the paper: [Are Transformers Effective for Time Series Forecasting?](https://arxiv.org/abs/2205.13504) (AAAI-23)
I would like to implement the model :)
Thank you,
Eli
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
@zhouhaoyi - repository creator | 12-26-2022 16:50:48 | 12-26-2022 16:50:48 | thanks @elisim for the issue... indeed i have informer in my list of models to port over. I have the initial implementation of informer and other done: https://github.com/kashif/pytorch-transformer-ts and will move them over to the transformers API<|||||>Wow! saw your repo and it looks great! Maybe I might help? :) I sent you an email.
Thanks,
Eli <|||||>merged in https://github.com/huggingface/transformers/pull/21099 |
transformers | 20,902 | closed | Cache size limit for generation | # What does this PR do?
Following #20767, it adds a `cache_limit` argument for `generate` for PyTorch and TensorFlow (except xla), limiting the size of the cache (`past_key_values`).
`position_ids` is stored in `model_kwargs` for concerned models.
This is a bit above 100 lines. No big deal if you consider the maintenance effort is not worth it, this is still a simple feature that can be implemented by users by overriding model methods.
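A rough sketch of what limiting the cache means in practice (illustrative only; it assumes the usual `(batch, num_heads, seq_len, head_dim)` layout of decoder self-attention caches and uses a standalone helper rather than the way the PR wires it into `generate`):
```python
def crop_past_key_values(past_key_values, cache_limit: int):
    """Keep only the last `cache_limit` positions of every cached key/value tensor."""
    return tuple(
        tuple(tensor[:, :, -cache_limit:, :] for tensor in layer)
        for layer in past_key_values
    )
```
Because the cropped cache no longer encodes absolute positions, positions have to be tracked explicitly, which is why `position_ids` ends up stored in `model_kwargs`.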
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #20767
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante & @sgugger
| 12-26-2022 16:37:01 | 12-26-2022 16:37:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20902). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @Natooz 👋
Thank you for the PR! Looking at the PR, it is not too complex... but given the non-existent demand, it still amounts to a terrible maintenance-per-demand ratio 🙈 Our team is small, so we have to be extremely picky.
I am afraid that I will have to reject this PR. Nevertheless, I am happy to be proved wrong, and if I see demand for this feature I will come back to this PR as a reference implementation!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,901 | closed | 🚨🚨 Generate: correct beam search best possible score computation and handling | # What does this PR do?
As initially uncovered by @ydshieh in #20853, there is a gross TF/PT mismatch on the number of steps beam search takes under some circumstances. In practice, all three frameworks had a different and incomplete implementation (see below why), and this PR fixes it.
Added "🚨🚨" to the title, as this PR may change the output of beam search.
### Rationale:
We know that logprobs is a negative value, and we want to maximize it in beam search (i.e. make it as close to 0 as possible). Since logprobs is always negative, and the final score is the sum of the logprobs, we can anticipate the best possible score a running sequence can ever achieve, and use it to terminate beam search early with no drawback (without this shortcut, beam search will always run `max_length` steps unless `early_stopping=True`). Well, it turns out that the method to compute the best possible score depends on the sign of `length_penalty`, and we are not accounting for that!
- Scenario 1, `length_penalty > 0.0`: In this case, as the sentence grows, the denominator grows as well. This means the score can get closer to 0 (i.e. higher) as the sentence grows, and longer sentences are promoted. In this case, the best possible score can be determined from the maximum sequence length (original TF/FLAX implementation).
- Scenario 2, `length_penalty < 0.0`: In this case, as the sentence grows, the denominator gets smaller. This means the score will get farther away from 0 (i.e. lower) as the sentence grows, and shorter sentences are promoted. In this case, the best possible score can be determined from the current sequence length (original PT implementation). A short sketch of both computations follows.
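A minimal, self-contained sketch of the two computations (pure illustration; the names and scalar inputs do not mirror the actual beam search code):
```python
def best_possible_score(sum_logprobs: float, cur_len: int, max_length: int, length_penalty: float) -> float:
    """`sum_logprobs` is <= 0, so the reachable optimum depends on the sign of `length_penalty`."""
    if length_penalty > 0.0:
        # longer sequences are promoted: the score can keep improving until max_length
        return sum_logprobs / (max_length**length_penalty)
    # shorter sequences are promoted: finishing at the current length is the best this beam can do
    return sum_logprobs / (cur_len**length_penalty)
```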
On top of this, FLAX and TF were incorrectly terminating early when `batch_size > 1`: we were saying that a score improvement was no longer possible as soon as one of the batch members could no longer improve (as opposed to all batch members can no longer improve).
Finally, there was an issue with TF where early stopping was not correctly triggered (my bad).
In summary, for different reasons, all frameworks were stopping beam search incorrectly under certain circumstances:
1. PT: when `length_penalty > 0.0` (which is the default case!)
2. Flax: with `batch_size > 1` || `length_penalty < 0.0`
3. TF: with `batch_size > 1` || `length_penalty < 0.0` || incorrect (missing) early stopping trigger. | 12-26-2022 16:17:52 | 12-26-2022 16:17:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh regarding the original issue (https://github.com/huggingface/transformers/issues/18149) -- the problem was not TF with too many beam search iterations, but rather PT with not enough 😅 After this fix, in the example you shared (which I paste below, for reference), both PT and TF run >300 steps to conclude that "bonjour" is the answer. Please note that TF includes the padding in its output (as opposed to PT, which doesn't) because its output tensors are pre-padded and sliced based on the number of iterations, whereas in PT they are growing tensors that can be stored as candidate outputs without padding.
`early_stopping=True` can be used with TF for quicker results.
___________________________________________________
python example:
```python
from transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel
import tensorflow as tf
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
text_in = ['>>fr<< hello']
# PT generates a few tokens then stops early -> very fast
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(text_in, return_tensors='pt', padding=True)
translated = model.generate(**batch)
o = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(translated)
print(o)
# TF generates 512 tokens, although the decoded version gives the same result as PT -> very slow
model = TFMarianMTModel.from_pretrained(model_name, from_pt=False)
batch = tokenizer(text_in, return_tensors='tf', padding=True)
translated = model.generate(**batch)
o = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(translated)
print(o)
```<|||||>That's a great find! Well done, on finding the inconsistency here.
While this change is mathematically completely correct, I'm a bit worried whether it leads to bad/annoying side-effects in practice. I think most people don't think too deeply about `length_pentalty` and just use a parameter that works "well enough".
There are some problems here I think:
- 1.) As noted the default case is `length_penalty=1.0` and `do_early_stopping=False` which means that this PR changes the default case of all beam search applications. While it will certainly always improve "mathematically" the output result there are two problems in practice:
- 1.1) Some people have probably unknowingly found a high `length_penalty` to work reasonably well. A high `length_penalty` combined with a high `max_length` can now lead to the beam search giving some super long results as the best solution (which would be mathematically correct given the high `length_penalty`, but I don't think people understand/understood the length penalty well enough to understand why this is).
- 1.2) Beam search will now always run much much longer if `max_length` is very high (there are lots of models with set `max_length` to something like 128 or even 256 for short sentence tasks like `translation`.
- 2.) (smaller problem) - we were trying to move away from having to require `max_length` overall - ideally the user should be able to use **any kind** of stopping criteria with beam search.
2.) is not a big problem, but I'm a bit worried that 1.) is one. What do you think about 1.) @gante - especially when looking at generation configs like the one of BART (the model is downloaded a lot and has many "derivation" models):
- https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L42
The change here is definitely logically/mathematically correct, but I'm worried that it has too many negative effects. It's also a bit unreasonable when doing the math:
```
best_running_score = state.running_scores[:, -1:] / (max_length**length_penalty)
```
for `max_length=256` and `length_penalty=2` will essentially make beam search rarely stop before the end: `x/(256*256)` = `x/65536` is very low for log-probs, no? Or do log-probs become extremely large as soon as the text becomes bad?
On the other hand, maybe the log probs become very quickly so low for bad results that this change doesn't have that much of an impact. Can we maybe run some tests here @gante ? Maybe with the default setting of https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L42 . If there are no major changes in outputs, ok to merge for me!
Also should we maybe add a warning "We detected that you use `length_penalty > 1.0` which strongly encourages long sequences to be generated. Recently there has been a change that might cause your generation to last longer than expected and lead to different results. You might want to consider lowering the `length_penalty`."
? <|||||>@patrickvonplaten I agree entirely with your points above. Yes, these changes are technically correct, but the cost can be quite high -- here's a rundown of the results in a few models, for the PT changes:
1. Models with `early_stopping=True` in the config, such as `facebook/bart-large-cnn`: no output change, same number of beam search iterations 👍
2. Models with `early_stopping=False` in the config, such as Marian or T5: no output change, one order of magnitude (!) more iterations for short inputs 🙅 This is because of what you wrote above -- the `best_running_score` can stay very high for a large number of iterations, even with `length_penalty=1.0`.
This probably means that the output text will only see changes in corner cases, which removes some of our concerns regarding this PR. However, the additional computational cost can be prohibitively high in some typical applications. That will likely create annoyed users, which does not seem wise.
____________________________________________
So, what can we do here?
a) Don't merge some or all of the changes, especially on the PT side, since they introduce unwanted (although correct) behavior. [probably not great, as we would be intentionally keeping a bug in the code]
b) Add warnings so that users pick the right flags. [users ignore warnings most of the time...]
c) Add some flag and/or `transformers` version gating, to keep the old behavior. [adds complexity, undesirable and, like b), requires users to use flags]
d) Update the default `length_penalty` to `0.0`, which stops biasing beam search toward long searches. In the examples I tried, this keeps the same outputs while not causing the number of beam search iterations to grow with this PR. [changing a default can be tricky, and some models might rely on `length_penalty=1.0` to get the expected output. On the plus side, most users intuitively think that a positive `length_penalty` promotes shorter sentences, which is not true, so we might be killing two birds with one stone]
e) Update the default of `early_stopping` to `True`. [similar to d), but less good imo]
I struggle to see a good compromise solution 🤔 Given that many research groups use our code to conduct research, I'd like to avoid a) (i.e. keeping the bug). For downstream users, assuming that most wouldn't react to announcements, we will have to pick between keeping a bug or risking changing behavior :(
Personally, I'd go with d), but it is extremely debatable (and you folks probably have more experience).
P.S.: TF XLA benchmarks showed that it was not much faster with beam search, compared to PT. Maybe this problem explains part of it!<|||||>Hmmm, ok this is a very tricky one then :-/
`length_penalty` is a pretty important parameter, and it's somewhat natural IMO to bias the model to slightly prefer longer output lengths (as longer output sequences always have <= log prob than shorter sequences). I think especially summarization models gain performance from using a length penalty.
Just to better understand, are there a lot of cases where the current implementation (the correct use of length penalty) leads to better results? Could you maybe post some failure cases of the current implementation? <|||||>Another option would be to frame everything as setting a "lower bound".
Currently, we have a "heuristic lower bound" in PT; another option, as done in this PR, is an "absolute lower bound".<|||||>@patrickvonplaten some data about a potential `length_penalty` change -- I've tried setting the default to `0.0` (from `1.0`), and ran our test suite for potentially impacted tests. More precisely, running `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test tests/ -k WORD -vv`, with `WORD = {beam_search, summ, translat}`, which catches most (or all) of the hard beam search tests on all 3 frameworks, had the following results:
- 810 tests ran in total, including the challenging generate tests for beam search
- 4 failed due to GPU OOM
- 1 TF test failed (on T5-small, a translation outcome was ruined by the change -- `Ich liebe es so sehr!` to `!` )
- 1 PT test failed (on a pipeline test, a translation had 1 differing character but was equally correct -- `هذا اختبار` to `هذا إختبار`)
Looking at the catastrophic failure in the TF test, having the right `length_penalty` does make a difference, so a change may result in very annoyed users 👎
_____________________________________________________
I like the "lower bound" framing, with users being able to pick how precise they want to be in their beam search while keeping the current defaults. However, I'm reluctant to add yet another flag. We *could* change the `early_stopping` flag from a binary one to a ternary one (like the `verbose` flag in many CLIs), as it already controls how long beam search runs. Something like:
1. [no change] `early_stopping = 0` would be equivalent to `early_stopping = false` (on PyTorch, i.e. stops in a few iterations because it does not consider the `max_length` when computing the best score). This would be the default;
2. [no change] `early_stopping = 1` would be equivalent to `early_stopping = true`;
3. [new] `early_stopping = -1` would be the mathematically correct (yet ineffective) best possible score computation.
That way:
1. TF/FLAX would start behaving like PT, running fewer beam search iterations by default with minimal impact on the output;
2. PT users would see no changes;
3. Users still have the option of setting the mathematically correct version of beam search.
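To make the proposal concrete, a sketch of what the three modes would mean for the stopping bound (illustrative only, not the actual implementation):

```python
def best_possible_score(best_sum_logprobs, cur_len, max_length, length_penalty, early_stopping):
    """Upper bound used to decide whether beam search can stop (sketch)."""
    if early_stopping == 1:   # True: stop as soon as enough hypotheses are finished
        return float("-inf")
    if early_stopping == 0:   # False: current PT heuristic, which ignores max_length
        return best_sum_logprobs / cur_len ** length_penalty
    # early_stopping == -1 ("never"): mathematically correct but slow bound
    length = max_length if length_penalty > 0.0 else cur_len
    return best_sum_logprobs / length ** length_penalty
```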
WDYT?<|||||>Nice, good idea! I like the idea of using `early_stopping` to decide what to do here! Would probably slightly favor:
`early_stopping: Union[bool, str] = {False, True, "never"}`
Guess we have to leave the reasoning of `False` as is for PyTorch. Using 1,0,-1 is also OK for me, but I think it's nicer for the user to make `early_stopping` accept both str and bool.<|||||>Applied the contents of the discussion in #21368, closing this one. |
transformers | 20,900 | closed | fix docs typos in "add_new_model" | # What does this PR do?
This PR fixes a typo or improves the docs:
I changed typos in the "add_new_model" docs, from "Jupiter" to "Jupyter"
| 12-26-2022 16:16:38 | 12-26-2022 16:16:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,899 | closed | TFGPT2ForSequenceClassification.from_pretrained with num_labels parameter creates a model with reversed layers order | ### System Info
platform: macos, m1 max
python version: 3.9.13
transformers version: 4.25.1
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following code:
```python
from transformers import GPT2Tokenizer, TFGPT2ForSequenceClassification
model = TFGPT2ForSequenceClassification.from_pretrained('gpt2-medium', num_labels=30)
model.summary()
```
It then shows:
```
Model: "tfgpt2_for_sequence_classification_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
score (Dense) multiple 30720
transformer (TFGPT2MainLaye multiple 354823168
r)
=================================================================
Total params: 354,853,888
Trainable params: 354,853,888
Non-trainable params: 0
```
So as you can see, the dense layer appears before the transformer layer; it should be the opposite.
Also there is a Colab link: https://colab.research.google.com/drive/1MeNzHHXnccLAkNlSRpWELQhWF5Y7aIyS#scrollTo=hIbZMVd7xumr
### Expected behavior
I think that the dense layer should go after the transformer layer so that the model can be trained. It looks like something similar was brought up some time ago:
https://github.com/huggingface/transformers/issues/11515 | 12-26-2022 14:17:33 | 12-26-2022 14:17:33 | Hi @justnoxx 👋
The layers are being called in the expected order, as you can see in the [model's forward pass](https://github.com/huggingface/transformers/blob/bbcd961897aa6cc439ef4cca5cef6db4283c5b76/src/transformers/models/gpt2/modeling_tf_gpt2.py#L1139).
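For illustration, here is a toy subclassed model (not the actual GPT-2 classes), assuming the usual Keras behaviour that `summary()` lists sub-layers in the order they were assigned as attributes, not the order they are called:

```python
import tensorflow as tf

class ToyClassifier(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Assigned first, so it is listed first by summary() ...
        self.score = tf.keras.layers.Dense(30, name="score")
        self.backbone = tf.keras.layers.Dense(128, name="backbone")

    def call(self, inputs):
        # ... but called last, so the forward pass is still backbone -> score.
        return self.score(self.backbone(inputs))

model = ToyClassifier()
model(tf.zeros((1, 16)))  # build the model once
model.summary()           # lists "score" before "backbone"
```

So the summary you posted is reporting attribute/tracking order, not execution order.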
`model.summary()` is not fully compatible with our models because we rely on [Keras model subclassing](https://keras.io/api/models/), as opposed to Keras sequential/functional API (whose `model.summary()` produces the expected output).<|||||>@gante now I see it, thanks a lot for your help. Closing this issue now. |
transformers | 20,965 | closed | Improve Mlflow Callbacks documentation. | I recently followed https://julsimon.medium.com/using-mlflow-with-hugging-face-transformers-4f69093a6c04 , and the Transformers documentation on the MLflow callback is not formatted properly.

It is better to read from the source code https://github.com/huggingface/transformers/blob/accad48e5b4a98302ea396b9f15c5f1c987b6f7f/src/transformers/integrations.py#L894 than the documentation site.

I am sure that in the past I have seen such examples in other classes. I thought maybe I should report it. | 12-26-2022 13:02:50 | 12-26-2022 13:02:50 | |
transformers | 20,898 | closed | Keep getting ChildFailedError Error in distributed Eval/Train | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, multiple GPUs on a single node.
- Using distributed or parallel set-up in script?: Yes. Running my script with `torchrun`.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm running a slightly modified [run_clm.py script](https://github.com/huggingface/transformers/blob/v4.24.0/examples/pytorch/language-modeling/run_clm.py) with a varying number of A100 GPUs (2-8) on a single node, and I keep getting the ChildFailedError right after training/evaluation ends.
I’m running [GPT2 (smallest model)](https://huggingface.co/gpt2) on the [OpenWebText dataset](https://huggingface.co/datasets/openwebtext).
### An example of how I run my code from a shell script is as follows:
> GPU=0,1,2,3,4,5
export TORCH_CPP_LOG_LEVEL=INFO NCCL_DEBUG=INFO
export CUDA_VISIBLE_DEVICES=$GPU
>
> torchrun \
--standalone \
--nnodes=1 \
--nproc_per_node=${NUM_GPU} \
run_clm.py \
--model_name_or_path ${MODEL} \
--dataset_name ${DS_NAME} \
--preprocessing_num_workers 16 \
--logging_steps 5000 \
--save_steps ${SAVE_STEPS} \
--do_eval \
--per_device_eval_batch_size ${EVAL_BATCH} \
--seed ${RANDOM} \
--evaluation_strategy steps \
--logging_dir ${OUTPUT_DIR} \
--output_dir ${OUTPUT_DIR} \
--overwrite_output_dir \
### And getting the following error:
> 100%|██████████| 3155/3155 [43:38<00:00, 1.20it/s]WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 650 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 651 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 652 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 653 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 654 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 655 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 649) of binary: /venv/bin/python3
Traceback (most recent call last):
File "/venv/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/venv/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 761, in main
run(args)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
elastic_launch(
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
[0]:
time : 2022-12-26_10:59:55
host : distributed-05-pt266-zvdww
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 649)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 649
============================================================
### Full log is attached:
[eval_log.txt](https://github.com/huggingface/transformers/files/10303349/eval_log.txt)
### Notes:
1. The error occurs in training and in evaluation.
2. I tried to run using torchrun and using torch.distributed.launch and faced the same issue.
3. The number of samples in my training/eval doesn't matter; the issue remains.
4. I track my memory usage and OOM is not the case here (kinda wish it was).
5. The error occurs only in distributed setup. When not using distributed, or when using it with a single GPU, the problem doesn't pop.
6. The error doesn't reproduce on a much smaller dataset, such as wikitext-2. In that case both train and eval work in the distributed setup.
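One hedged suggestion for getting more signal out of the crash (the root-cause entry above shows `error_file: <N/A>` and Signal 9): wrapping the script's entry point with torch elastic's `record` decorator usually makes the worker's traceback land in that error file. A minimal sketch:

```python
from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    ...  # body of run_clm.py's main()

if __name__ == "__main__":
    main()
```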
### Expected behavior
I expect evaluation/training to finish successfully, log results (samples per second, loss, perplexity, etc.) and save JSON files of results, as I achieve in the non-distributed setup. | 12-26-2022 12:10:44 | 12-26-2022 12:10:44 | Update: after reading this [thread](https://github.com/facebookresearch/detectron2/issues/3319), I tried to add the following exports:
>export NCCL_IB_DISABLE=1
export NCCL_P2P_DISABLE=1
But no luck.
The warnings regarding `NCCL_P2P` disappeared and the same error remains.<|||||>Thanks for your report. I have no idea why PyTorch gobbles the error message, but without any clue in the logs there is little we can do to investigate if you don't share the script you are running.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,897 | closed | Update flan-t5 original model link | # What does this PR do?
Update flan-t5 original model link
@sgugger
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-26-2022 10:07:23 | 12-26-2022 10:07:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,896 | closed | device_map='auto' gives bad results | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
- GPUs: two A100
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Minimal test example:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'EleutherAI/gpt-neo-125M'
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_name)
sentence = 'Hello, nice to meet you. How are'
with torch.no_grad():
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
gen_tokens = model.generate(tensor_input, max_length=32)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated)
```
Results:
```
Hello, nice to meet you. How are noise retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy
```
The above result is not expected behavior.
Without `device_map='auto'` at line 5, it works correctly.
Line 5 becomes `model = AutoModelForCausalLM.from_pretrained(model_name)`
Results:
```
Hello, nice to meet you. How are you?
I’m a bit of a newbie to the world of web development, but I
```
My machine has two A100 (80 GB) GPUs, and I confirmed that the model is loaded on two GPUs when I use `device_map='auto'`.
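One way to sanity-check the sharding and where the inputs end up (a sketch that continues the snippet above, not necessarily a fix):

```python
# After loading with device_map='auto', the placement is recorded here:
print(model.hf_device_map)  # e.g. {'transformer.wte': 0, ..., 'transformer.ln_f': 1}

# Inputs should live on the device that holds the first module (usually GPU 0):
first_device = next(iter(model.hf_device_map.values()))
tensor_input = tensor_input.to(f"cuda:{first_device}")
gen_tokens = model.generate(tensor_input, max_length=32)
```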
### Expected behavior
Explained above | 12-26-2022 08:35:34 | 12-26-2022 08:35:34 | Hi @youngwoo-yoon
Thanks for the issue!
What is your version of `accelerate` ? With the latest version (`0.15.0`) & same pytorch version I get (on a NVIDIA T4) on the minimal test example shared above that uses `device_map=auto` :
```
Hello, nice to meet you. How are you?
I’m a bit of a newbie to the world of web development, but I
```<|||||>Hello, @younesbelkada
I'm using the same version `0.15.0` of `accelerate`.
I also got the correct result when I ran with `export CUDA_VISIBLE_DEVICES=0`
Still wrong results with two GPUs: `export CUDA_VISIBLE_DEVICES=0,1`<|||||>Thanks for the details! I still did not manage to reproduce; can you try this snippet instead:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'EleutherAI/gpt-neo-125M'
model = AutoModelForCausalLM.from_pretrained(model_name, device_map={"transformer.wte":0, "transformer.wpe":0, "transformer.h":1, "transformer.ln_f":1, "lm_head":1})
tokenizer = AutoTokenizer.from_pretrained(model_name)
sentence = 'Hello, nice to meet you. How are'
with torch.no_grad():
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
gen_tokens = model.generate(tensor_input, max_length=32)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated)
```
and let me know if the problem still persists?
We're using the same Pytorch, `transformers`, `accelerate` version. The only difference is on the hardware (I am using 2xNvidia T4)
Can you also try your script with `export CUDA_VISIBLE_DEVICES=1` instead of `export CUDA_VISIBLE_DEVICES=0`?<|||||>Thanks for the quick replies.
This is the result and it still doesn't look good.
```
Hello, nice to meet you. How are!!!!!!!!!!!!!!!!!!!!!!!
```
My original test code with `export CUDA_VISIBLE_DEVICES=1` gives the same correct result with `export CUDA_VISIBLE_DEVICES=0`
```
Hello, nice to meet you. How are you?
I’m a bit of a newbie to the world of web development, but I
```<|||||>I am slightly unsure about what could be causing the issue, but I suspect it's highly correlated with the fact that you're running your script on two A100s (I can't be sure).
@sgugger do you think the problem could be related to `accelerate` and the fact that the script is running on two A100s rather than other hardware (i.e. have you seen similar discrepancy errors in the past)?
@youngwoo-yoon could you ultimately try the script with the latest pytorch version (1.13.1)?<|||||>@younesbelkada, I got the same wrong result with PyTorch 1.13.1.
```
Hello, nice to meet you. How are noise retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy
```<|||||>Mmmm there is no reason for the script to give different results for different GPUs, especially since removing the device_map="auto" gives the same results.
I also can't reproduce on my side. Are you absolutely certain your script is launched in the same Python environment you are reporting? E.g. can you print the versions of Accelerate/Transformers/Pytorch in the same script?<|||||>I put the test scripts using cpu, gpu0, gpu1, and device_map=auto on a single python file to be sure.
```
from importlib.metadata import version
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
print('torch', version('torch'))
print('transformers', version('transformers'))
print('accelerate', version('accelerate'))
print('# of gpus: ', torch.cuda.device_count())
# cpu
model_name = 'EleutherAI/gpt-neo-125M'
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
sentence = 'Hello, nice to meet you. How are'
with torch.no_grad():
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
gen_tokens = model.generate(tensor_input, max_length=32)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated)
print('-------------------------------------------')
# on the gpu 0
model = AutoModelForCausalLM.from_pretrained(model_name)
model = model.to('cuda:0')
with torch.no_grad():
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
tensor_input = tensor_input.to('cuda:0')
gen_tokens = model.generate(tensor_input, max_length=32)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated)
print('-------------------------------------------')
# on the gpu 1
model = AutoModelForCausalLM.from_pretrained(model_name)
model = model.to('cuda:1')
with torch.no_grad():
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
tensor_input = tensor_input.to('cuda:1')
gen_tokens = model.generate(tensor_input, max_length=32)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated)
print('-------------------------------------------')
# with device_map=auto
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
with torch.no_grad():
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
gen_tokens = model.generate(tensor_input, max_length=32)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated)
```
And this is the result
```
torch 1.13.1
transformers 4.25.1
accelerate 0.15.0
# of gpus: 2
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Hello, nice to meet you. How are you?
I’m a bit of a newbie to the world of web development, but I
-------------------------------------------
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Hello, nice to meet you. How are you?
I’m a bit of a newbie to the world of web development, but I
-------------------------------------------
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Hello, nice to meet you. How are you?
I’m a bit of a newbie to the world of web development, but I
-------------------------------------------
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
/home/user/anaconda3/envs/task_temp/lib/python3.10/site-packages/transformers/generation/utils.py:1470: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
warnings.warn(
Hello, nice to meet you. How are noise retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy
```
And this is `nvidia-smi` results
```
Tue Dec 27 16:57:48 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100 80GB PCIe Off | 00000000:4F:00.0 Off | 0 |
| N/A 36C P0 47W / 300W | 9MiB / 81251MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100 80GB PCIe Off | 00000000:52:00.0 Off | 0 |
| N/A 37C P0 45W / 300W | 9MiB / 81251MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2915 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 119486 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 2915 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 119486 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
```<|||||>There is a warning
``/home/user/anaconda3/envs/task_temp/lib/python3.10/site-packages/transformers/generation/utils.py:1470: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
``
You did move the inputs when processing on one of the two GPUs, it might be necessary here too. Could you print the `hf_device_map` attribute of the model and try to move the inputs to cuda device 0 and 1?<|||||>I moved inputs to cuda:0 and cuda:1 but both gave the same wrong result.
Below is the output when I moved inputs to cuda:0.
```
torch 1.13.1
transformers 4.25.1
accelerate 0.15.0
# of gpus: 2
hf_device_map output: {'transformer.wte': 0, 'lm_head': 0, 'transformer.wpe': 0, 'transformer.drop': 0, 'transformer.h.0': 0, 'transformer.h.1': 0, 'transformer.h.2': 0, 'transformer.h.3': 0, 'transformer.h.4': 0, 'transformer.h.5': 0, 'transformer.h.6': 1, 'transformer.h.7': 1, 'transformer.h.8': 1, 'transformer.h.9': 1, 'transformer.h.10': 1, 'transformer.h.11': 1, 'transformer.ln_f': 1}
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Hello, nice to meet you. How are noiseleanor pressuring retaliate incarcer boundousy]= incarcer incarcer high * Karin�� Annotationsousyousyousy pressuring retaliateousyousyousy
```
I will try to reproduce this issue on another machine having two GPUs.<|||||>It works well on another machine with two Quadro 6000 GPUs.
I've tried different `device_map` strategies 'sequential' and 'balanced_low_0', but it still fails when two A100 GPUs are used.
I ran the `accelerate test` command, which tests the accelerate library, but it also failed. It seems like a problem with the `accelerate` library.
I found some other people also had problems with A100 GPUs.
Related issue: https://github.com/huggingface/accelerate/issues/934
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @younesbelkada I got the same error with two V100, with `accelerate` version 0.18.0
`prompt = 'Q: What is the largest animal?\nA:'`
output:
```<s>Q: What is the largest animal?
A: The blue whale.
Q: What is the largest animal?
A: The blue whale. It is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.
Q: What is the largest animal?
A: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.
Q: What is the largest animal?
A: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.
Q: What is the largest animal?
A: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.
Q: What is the largest animal?
A: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.
Q: What is the largest animal?
A: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.
Q
```
code:
```
model_path = 'openlm-research/open_llama_3b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto'
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to('cuda')
generation_output = model.generate(
input_ids=input_ids, max_length=400
)
print(tokenizer.decode(generation_output[0]))
```
Have you found a solution? |
transformers | 20,895 | closed | Can't access ViTImageProcessor on transformers==4.25.1 | ### System Info
ImportError: cannot import name 'ViTImageProcessor' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)
Am I using the wrong version? Should I downgrade? Where can I get the release history in general? I often find discrepancies like this in the usage documentation on the website, and it's often related to versioning.
### Who can help?
@sgugger @stevhliu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
!pip install transformers==4.25.1
from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel
### Expected behavior
It should work. | 12-26-2022 07:37:38 | 12-26-2022 07:37:38 | Hi,
Release history is here: https://github.com/huggingface/transformers/releases.
I just tried it out on Google Colab, it works fine for me. This might be an issue with your environment. Could you uninstall and install Transformers again?
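For reference, a quick environment sanity check (generic, nothing model-specific):

```python
import transformers

print(transformers.__version__)  # should print 4.25.1 or newer
print(transformers.__file__)     # confirms which installation is actually being imported
# If the version is older than expected, reinstall:
#   pip uninstall -y transformers && pip install -U transformers
```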
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, just do: **pip install transformers --upgrade**
then problem fixed |
transformers | 20,894 | closed | `max_length` and `max_new_tokens` in `.generate()` | Hi @gante,
I got some error related to the change of `max_length` and `max_new_tokens` in this PR https://github.com/huggingface/transformers/pull/20388.
For a model like Whisper, `max_length` has already been defined by the maximum `PositionalEmbedding` length, which is 448 (https://huggingface.co/openai/whisper-base/blob/main/config.json#L42).
Sometimes I want to run faster inference by setting a smaller `max_new_tokens`, but I can no longer do that with the current change.
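As a stopgap (spelled out from the workaround I mention at the end of this issue), one can override the config value and skip `max_new_tokens`, roughly:

```python
# stopgap sketch: make the config's max_length reflect the budget you want,
# then call generate() without max_new_tokens (values here are illustrative;
# model / input_features are as in the reproduction snippet below)
model.config.max_length = 226
generated_ids = model.generate(inputs=input_features)
```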
### Who can help?
@gante @sanchit-gandhi @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Below is a code snippet to reproduce the behavior.
```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-base")
processor = AutoProcessor.from_pretrained("openai/whisper-base")
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]["array"]
inputs = processor(audio, return_tensors="pt")
input_features = inputs["input_features"]
generated_ids = model.generate(inputs=input_features, max_new_tokens=225)
```
### Expected behavior
When running this we see the following stack trace:
```
Using the latest cached version of the module from /home/bhuang/.cache/huggingface/modules/datasets_modules/datasets/hf-internal-testing--librispeech_asr_dummy/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b (last modified on Sun Dec 25 15:33:28 2022) since it couldn't be found locally at hf-internal-testing/librispeech_asr_dummy., or remotely on the Hugging Face Hub.
Found cached dataset librispeech_asr_dummy (/home/bhuang/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b)
It is strongly recommended to pass the `sampling_rate` argument to this function. Failing to do so can result in silent errors that might be hard to debug.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 15
12 inputs = processor(audio, return_tensors="pt")
13 input_features = inputs["input_features"]
---> 15 generated_ids = model.generate(inputs=input_features, max_new_tokens=225)
File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File ~/transformers/src/transformers/generation/utils.py:1230, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
1228 generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length
1229 elif not has_default_max_length and generation_config.max_new_tokens is not None:
-> 1230 raise ValueError(
1231 "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a"
1232 " limit to the generated output length. Remove one of those arguments. Please refer to the"
1233 " documentation for more information. "
1234 "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)"
1235 )
1237 if generation_config.min_length is not None and generation_config.min_length > generation_config.max_length:
1238 raise ValueError(
1239 f"Unfeasible length constraints: the minimum length ({generation_config.min_length}) is larger than"
1240 f" the maximum length ({generation_config.max_length})"
1241 )
ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
```
I can set `model.config.max_length = 226` after loading the model to generate with the `max_length` I want. But I think it might be a better choice to enable this option in the `.generate()` function. | 12-25-2022 16:13:59 | 12-25-2022 16:13:59 | I kind of agree with you, but I think the goal is to get rid of `max_length` and rather use the `max_new_token` in the configuration of the model, which is why we should rather deprecate (as we did with previous version) the usage of both of the arguments<|||||>In this instance, only one was passed though, so this is clearly a bug :-)<|||||>Hey @bofenghuang 👋
Definitely an unwanted bug that arose from the ongoing transition to generation config files. Having a look! |
transformers | 20,893 | closed | Finetune BLIP on custom dataset | ### System Info
Dear team,
I was trying to fine-tune BLIP and got an error that I am not sure how to solve. Could you give me some advice? Thanks
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from PIL import Image
import requests
from transformers import BlipProcessor, BlipForQuestionAnswering
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
import torch
from PIL import Image
class VQADataset(torch.utils.data.Dataset):
"""VQA (v2) dataset."""
def __init__(self, questions, answers, image_paths, processor):
self.questions = questions
self.answers = answers
self.image_paths = image_paths
self.processor = processor
def __len__(self):
return len(self.questions)
def __getitem__(self, idx):
# get image + text
question = self.questions[idx]
answer = self.answers[idx]
image = Image.open(self.image_paths[idx]).convert("RGB")
text = question
encoding = self.processor(image, text, padding="max_length", truncation=True, return_tensors="pt")
labels = self.processor.tokenizer.encode(
answer, max_length= 512, pad_to_max_length=True, return_tensors='pt'
)
encoding["labels"] = labels
# remove batch dimension
# for k,v in encoding.items(): encoding[k] = v.squeeze()
return encoding
from torch.utils.data import DataLoader
from tqdm import tqdm
def collate_fn(batch):
input_ids = [item['input_ids'] for item in batch]
pixel_values = [item['pixel_values'] for item in batch]
attention_mask = [item['attention_mask'] for item in batch]
labels = [item['labels'] for item in batch]
return batch
questions = list of questions
answers = list of corresponding answers
image_paths = list of paths of corresponding images
train_dataset = VQADataset(questions = questions,
answers = answers,
image_paths = images,
processor=processor)
test_dataset = VQADataset(questions = questions,
answers = answers,
image_paths = images,
processor=processor)
batch_size = 1
train_dataloader = DataLoader(train_dataset, collate_fn=collate_fn, batch_size=batch_size, shuffle=False)
test_dataloader = DataLoader(test_dataset, collate_fn=collate_fn, batch_size=batch_size, shuffle=False)
batch = next(iter(train_dataloader))
print(batch[0].keys()) # dict_keys(['pixel_values', 'input_ids', 'attention_mask', 'labels'])
import copy
test_input = copy.copy(batch[0]).to(device)
outputs = model(**test_input)
```
Example of the input:
```
questions = ["How many cats are there?"]
answers = ["two"]
image_paths = ["./img_125.png"]
```
### Expected behavior
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-27-f4758beea430>](https://localhost:8080/#) in <module>
2
3 test_input = copy.copy(batch[0]).to(device)
----> 4 outputs = model(**test_input)
6 frames
[/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3024 if size_average is not None or reduce is not None:
3025 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3026 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
3027
3028
ValueError: Expected input batch_size (0) to match target batch_size (511). | 12-25-2022 16:12:56 | 12-25-2022 16:12:56 | Please use the [forums](https://discuss.huggingface.co/) to help debug your code as we keep issues for bugs and feature requests only.<|||||>hi @dxlong2000
Thanks for the issue!
Can you open an issue on the forums as suggested by @sgugger and ping me there? (My handle on the forum is @ybelkada.) I am interested in this question and would like to help you
Thanks!<|||||>Did I tag you in the correct way @younesbelkada?
You can check here: https://discuss.huggingface.co/t/finetune-blip-on-customer-dataset-20893/28446/2
Thanks @sgugger!
<|||||>Thanks I can see the issue now! <|||||>Ok I close the issue now! |
transformers | 20,892 | closed | `MinNewTokensLengthLogitsProcessor` for `.generate` method #20814 | ### **Approved** by [#20814 issue](https://github.com/huggingface/transformers/issues/20814)
### What does this PR do?
It implements the `MinNewTokensLengthLogitsProcessor` class, which enforces a minimum number of **new** tokens by setting the EOS (end-of-sequence) token probability to 0.
Framework: `pytorch`
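A rough usage sketch of the intended behaviour (import path and argument names are assumptions based on this PR's description, so they may differ from the merged API):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList
from transformers.generation import MinNewTokensLengthLogitsProcessor  # assumed import path

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello there", return_tensors="pt")
processors = LogitsProcessorList([
    MinNewTokensLengthLogitsProcessor(
        prompt_length_to_skip=inputs["input_ids"].shape[-1],  # assumed argument names
        min_new_tokens=10,
        eos_token_id=tokenizer.eos_token_id,
    )
])
out = model.generate(**inputs, logits_processor=processors, max_new_tokens=30)
print(tokenizer.decode(out[0]))
```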
### Who can review?
@gante | 12-24-2022 22:00:51 | 12-24-2022 22:00:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>There seems to be a problem with CI that we'll have to fix before merging. @kotikkonstantin, what fo you see when you click on "Details" next to "setup_and_quality" in the checks section below?<|||||>> There seems to be a problem with CI that we'll have to fix before merging. @kotikkonstantin, what fo you see when you click on "Details" next to "setup_and_quality" in the checks section below?
Skipped:

I suppose it's skipped because it keeps previous successful parts of the CI pipeline if it's not unchanged in the last commit<|||||>> > There seems to be a problem with CI that we'll have to fix before merging. @kotikkonstantin, what fo you see when you click on "Details" next to "setup_and_quality" in the checks section below?
>
> Skipped:
>
> I suppose it's skipped because it keeps previous successful parts of the CI pipeline if it's not unchanged in the last commit
I'm not right here. After failed CI-pipeline run, in the following successful CI-pipeline run, `setup and quality` just was stopped instead of launching<|||||>@kotikkonstantin CircleCI is complaining about terms of service -- are you based in one of the countries linked [here](https://support.circleci.com/hc/en-us/articles/360043679453-CircleCI-Terms-of-Service-Violation-Sanctioned-Country)?<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? You might need to push an empty commit afterward.<|||||>@gante @sgugger
Thank you, guys, for your assistance in approaching it!
I've filled out [Individual Appeal Form](https://docs.google.com/forms/d/e/1FAIpQLSeaVwzPnt2xREoZxe_ysnmNEJQUfBWrTI1TzkE7bq1h06eHqA/viewform). I hope I get access. If not, could you launch the CI pipeline on your own? <|||||>@kotikkonstantin I think we can. Let's try it out:
1 - add me as a contributor to your fork of `transformers`
2 - I will push an empty commit there
3 - maybe CI gets triggered<|||||>> @kotikkonstantin I think we can. Let's try it out: 1 - add me as a contributor to your fork of `transformers` 2 - I will push an empty commit there 3 - maybe CI gets triggered
@gante done<|||||>@gante I can see CI-logs:
<img width="1663" alt="image" src="https://user-images.githubusercontent.com/22777646/210263144-92cc0aae-5e86-4547-a6b5-012cd2346a06.png">
<|||||>@kotikkonstantin yup, I took the liberty to run the `make fixup` shell command and push :) (which should fix it) |
transformers | 20,891 | closed | typo fix | Hello!
I just fixed this tiny typo. Just getting into open source, one day hopefully I can contribute non-trivial PRs.
Happy holidays!
| 12-24-2022 15:11:42 | 12-24-2022 15:11:42 | Let me know what I should put in the original comment for typo fixes, am starting to go through the docs and will submit another if I spot any<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,890 | closed | update pyknp to rhoknp | # What does this PR do?
- This PR updates [pyknp](https://github.com/ku-nlp/pyknp) to [rhoknp](https://github.com/ku-nlp/rhoknp), a newer Jumanpp package for Japanese morphological analysis.
- A bug was found in `pyknp` (see below), which is also confirmed when using `JumanppTokenizer` in `BertJapaneseTokenizer`. `rhoknp` is more robust and it can avoid this bug.
Code to reproduce:
```
from pyknp import Juman
text = "ありがとうございますm(_ _)m見つけるのが大変です。"
jumanpp = Juman()
for mrph in jumanpp.analysis(text).mrph_list():
print(mrph)
```
Error message:
```
Traceback (most recent call last):
...
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/juman.py", line 98, in analysis
return self.juman(input_str, juman_format)
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/juman.py", line 85, in juman
result = MList(self.juman_lines(input_str), juman_format)
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/mlist.py", line 29, in __init__
mrph = Morpheme(line, mid, juman_format)
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/morpheme.py", line 81, in __init__
self._parse_spec(spec.strip("\n"))
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/morpheme.py", line 145, in _parse_spec
self.hinsi_id = int(parts[4])
ValueError: invalid literal for int() with base 10: 'm(_'
```
We believe the reason is that `pyknp` was made for fullwidth characters, and halfwidth characters `m(_ _)` are not expected. `rhoknp` is more robust and could avoid this bug.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger | 12-24-2022 13:15:59 | 12-24-2022 13:15:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Could you please edit the description of the PR to explain the reason for your change? Note that `rhoknp` does not seem to have available wheels for Python3.7 and Python 3.8, which we do support, so switching to this dependency does not seem possible until there is support for more Python versions.<|||||>@sgugger Sorry for the lack of description. We are currently waiting `rhoknp` to support Python3.7: https://github.com/ku-nlp/rhoknp/issues/93#issuecomment-1364685954
We will reopen this PR after supporting.<|||||>@sgugger Hi, I updated `rhoknp` to the newest version which supports Python3.7.
However, I don't know why CI died. It seems that the system ran CI twice because I reopened the PR, one passed but one failed...
passed: https://github.com/huggingface/transformers/actions/runs/3806897762/jobs/6476101506
died: https://github.com/huggingface/transformers/actions/runs/3806897943/jobs/6476101712
Have a Happy New Year~ |
transformers | 20,889 | closed | Disable ClearML automatic model uploading | ### Feature request
Allow users of `transformers` to disable the automatic model uploading of ClearML. Perhaps we can also allow users to write their own integration callbacks in case they want to configure some more stuff.
The place where the saving happens is `src/transformers/integrations.py`:
```
def on_save(self, args, state, control, **kwargs):
if self._clearml_task and state.is_world_process_zero:
ckpt_dir = f"checkpoint-{state.global_step}"
artifact_path = os.path.join(args.output_dir, ckpt_dir)
logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time.")
self._clearml_task.update_output_model(artifact_path, iteration=state.global_step, auto_delete_file=False)
```
We should add a condition to the main `if`, similar to what `NeptuneCallback` or `MLflowCallback` is doing.
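A minimal sketch of that condition, assuming an environment-variable switch similar to the other callbacks (the switch name and default are assumptions here, and `self._clearml_task` is set elsewhere in the existing callback):

```python
import os

from transformers import TrainerCallback


class ClearMLCallback(TrainerCallback):
    def setup(self, args, state, model, **kwargs):
        # assumed switch name, mirroring the WandB/MLflow pattern
        self._log_model = os.getenv("CLEARML_LOG_MODEL", "TRUE").upper() in {"TRUE", "1"}
        ...

    def on_save(self, args, state, control, **kwargs):
        if self._log_model and self._clearml_task and state.is_world_process_zero:
            ckpt_dir = f"checkpoint-{state.global_step}"
            artifact_path = os.path.join(args.output_dir, ckpt_dir)
            self._clearml_task.update_output_model(artifact_path, iteration=state.global_step, auto_delete_file=False)
```

The exact switch name and default would of course need to match whatever the maintainers prefer.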
### Motivation
Several experiments that I ran were interrupted because I reached the max limit of model uploading. However, I was not interested in uploading my models in the first place. Hence, a configuration would be appropriate in this case.
### Your contribution
I can submit a PR if the contributors would help figure out the correct way of handling it :) | 12-24-2022 09:53:57 | 12-24-2022 09:53:57 | Would you like to make a PR using the same kind of environment variable as WandB and CometML to control model logging through clearML?
(PS: There is no limit to the number of uploaded models on the Hugging Face Hub when you set `push_to_hub=True` ;-) )<|||||>Yes, I'll try to create a PR :)<|||||>I'm having the same issue!<|||||>@sgugger I created a [PR](https://github.com/huggingface/transformers/pull/20969). Can you please review? :) |
transformers | 20,888 | closed | Unable to import name 'pad_shard_unpad' from 'flax.jax_utils' (clm language modelling flax example) | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.4.88+-x86_64-with-glibc2.2.5
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (tpu)
- Jax version: 0.3.10
- JaxLib version: 0.3.10
- Flax version: 0.4.2
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes (TPUv3-8 1VM Kaggle)
### Who can help?
@sanchit-gandhi @ArthurZucker @younesbelkada @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've been trying to run `transformers/examples/flax/language-modeling/run_clm_flax.py` on Kaggle's new TPUv3-8 1VM type of instance. It's a TPU instance with the TPU devices directly attached.
When I ran the example code in `transformers/examples/flax/language-modeling` for causal language modelling with the following code
```
!python /kaggle/working/transformers/examples/flax/language-modeling/run_clm_flax.py \
--output_dir="<models_direcotry>" \
--model_type="gpt2" \
--config_name="<custom_GPT_Config>" \
--tokenizer_name="<custom_tokenizer>" \
--dataset_name="<path_to_dataset_on_hf_hub>" \
--do_train --do_eval \
--block_size="128" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-3" --warmup_steps="1000" \
--adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
--overwrite_output_dir \
--num_train_epochs="20" \
--logging_steps="500" \
--save_steps="2500" \
--eval_steps="2500"
```
I'm getting the following error
```
WARNING: Logging before InitGoogle() is written to STDERR
I0000 00:00:1671849395.203410 2540 tpu_initializer_helper.cc:116] libtpu.so is already in use by process with pid 12. Not attempting to load libtpu.so in this process.
2022-12-24 02:36:35.892840: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-12-24 02:36:35.936831: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-12-24 02:36:36.693369: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-24 02:36:36.693463: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-24 02:36:36.693483: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
File "/kaggle/working/transformers/examples/flax/language-modeling/run_clm_flax.py", line 46, in <module>
from flax.jax_utils import pad_shard_unpad, unreplicate
ImportError: cannot import name 'pad_shard_unpad' from 'flax.jax_utils' (/usr/local/lib/python3.8/site-packages/flax/jax_utils.py)
```
I can't upgrade jax, jaxlib, or flax, as doing so messes with the connected TPUs, causing them to become unavailable.
### Expected behavior
GPT2 training should begin on 8 TPUv3 devices | 12-24-2022 02:49:27 | 12-24-2022 02:49:27 | @sanchit-gandhi @ArthurZucker, could you help us with this ^<|||||>Hi @SupreethRao99
Indeed the function `pad_shard_unpad` cannot be imported from flax.jax_utils using flax==0.4.2
Can you try with the latest version of `flax`? For example, `from flax.jax_utils import pad_shard_unpad` works fine under `flax==0.5.3` | `pip install --upgrade flax` or `pip install flax==0.5.3`<|||||>Hi @younesbelkada , upgrading flax to the latest version caused some issues with jax and jaxlib, but using `flax==0.5.3` along with the latest Kaggle runtime (30-12-2022) fixed the issue. Thanks a lot !
transformers | 20,887 | closed | eval OOM when loading a pretrained model with output_hidden_states set to True for BertForSequenceClassification | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-4.15.0-166-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following snippet
```
model = BertForSequenceClassification.from_pretrained('snunlp/KR-BERT-char16424', output_hidden_states=True)
# Now create any training_args, tokenized_train, tokenized_valid, and compute_metrics function (such as those in the official tutorial).
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_valid,
compute_metrics=compute_metrics,
)
trainer.train()
```
Eval will experience CUDA OOM on a GPU with 24 GB, after about 100 examples.
### Expected behavior
I tried warmstarting a BERT classification model from a pretrained embedding model, which sets output_hidden_states to True in config.json. But eval runs into an OOM issue.
| 12-23-2022 22:48:29 | 12-23-2022 22:48:29 | You will need to use the [eval_accumulation_steps](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.eval_accumulation_steps) argument in your `TrainingArguments` as it's not possible to accumulate all those tensors coming from the hidden states on the GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
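For reference, a minimal sketch of the `eval_accumulation_steps` suggestion above (the output directory, batch size, and the value 16 are placeholders, not recommendations):
```python
from transformers import TrainingArguments

# Move accumulated prediction tensors from GPU to CPU every 16 eval steps so the
# hidden states returned by the model do not pile up in GPU memory.
training_args = TrainingArguments(
    output_dir="./results",
    per_device_eval_batch_size=8,
    eval_accumulation_steps=16,
)
```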
transformers | 20,886 | closed | [RobertaPreLayernom] Fixes the CI daily test | # What does this PR do?
The checkpoint was not correct; it is simply a typo, as the `flax` and `tf` tests were not affected by this.
transformers | 20,885 | closed | update template | # What does this PR do?
Adds a better template for Korean Readme and replaces the previous text. | 12-23-2022 16:36:20 | 12-23-2022 16:36:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The following substitution pattern was used :
- match : `\((?:f|F)rom ([^\(]*)\)(?:,)? released (?:together )?with the paper (.*) by (.*).`
- sub : `($1 에서) $3 의 $2 논문과 함께 발표했습니다.` |
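For reference, a sketch of applying that substitution with Python's `re` module (the sample line and its wording are made up for illustration; `$1`/`$2`/`$3` from the comment above become `\1`/`\2`/`\3` in Python):
```python
import re

# Pattern and replacement taken from the comment above.
pattern = r"\((?:f|F)rom ([^\(]*)\)(?:,)? released (?:together )?with the paper (.*) by (.*)."
replacement = r"(\1 에서) \3 의 \2 논문과 함께 발표했습니다."

sample = "(from Example Lab) released with the paper Some Paper by Some Authors."
print(re.sub(pattern, replacement, sample))
```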
transformers | 20,884 | closed | santacoder: saved checkpoints after fine-tuning do not have required .py files | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The santacoder model uses `trust_remote_code=True` to load Python files from the model repository. However, when I fine-tune a model and save a checkpoint, these Python files are not placed in the repository. Thus I get an error when trying to load the saved checkpoint. Here is the smallest program that shows the problem:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("bigcode/santacoder", revision="dedup-alt-comments", trust_remote_code=True)
model.save_pretrained("./silly-checkpoint")
model = AutoModelForCausalLM.from_pretrained(f"./silly-checkpoint", trust_remote_code=True, revision="dedup-alt-comments")
```
This produces the error `Could not locate the configuration_gpt2_mq.py inside ./silly-checkpoint.`
I can work around it by manually downloading the two Python files from the model repository:
https://huggingface.co/bigcode/santacoder/tree/dedup-alt-comments
But, this should probably not be necessary.
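For anyone hitting this before a fix lands, a sketch of the manual workaround described above using `huggingface_hub` (the second file name is a guess based on the repository listing; check the repo for the exact names):
```python
import shutil
from huggingface_hub import hf_hub_download

# Copy the remote-code files next to the saved checkpoint so that
# from_pretrained("./silly-checkpoint", trust_remote_code=True) can find them.
for filename in ["configuration_gpt2_mq.py", "modeling_gpt2_mq.py"]:
    cached = hf_hub_download("bigcode/santacoder", filename, revision="dedup-alt-comments")
    shutil.copy(cached, f"./silly-checkpoint/{filename}")
```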
### Expected behavior
I think my script should work as-is, and should not require copy-pasta Python code. | 12-23-2022 14:48:54 | 12-23-2022 14:48:54 | Thanks for reporting! I'll have a look into it after the holidays, the first week of January.<|||||>Thanks for your patience. Could you try the PR linked above?<|||||>I'm away this week. But, I'll check it out next week. Thanks! |
transformers | 20,883 | closed | Fixes typo in the help text for --max_length | This PR fixes a typo in the help text of an example script.
- PyTorch: @sgugger | 12-23-2022 12:43:54 | 12-23-2022 12:43:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,882 | closed | Add OPT-IML Checkpoints | ### Model description
OPT-IML models are instruction-finetuned from the OPT checkpoints. Here is the [technical report](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT-IML/optimal_paper_v1.pdf).
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
* **Technical report:** https://github.com/facebookresearch/metaseq/blob/main/projects/OPT-IML/optimal_paper_v1.pdf
* **Model implementation:** same as OPT
* **Model weights:** https://github.com/facebookresearch/metaseq/tree/main/projects/OPT-IML
* **Authors:** Meta AI | 12-23-2022 02:56:45 | 12-23-2022 02:56:45 | 🙏 <|||||>I tried to convert and also ran into this issue:
https://github.com/facebookresearch/metaseq/issues/567
https://github.com/facebookresearch/metaseq/issues/594
But it seems like meta folks are working to upload it to huggingface:
https://github.com/facebookresearch/metaseq/issues/567#issuecomment-1370415582<|||||>maybe I will tag @patrickvonplaten, who I believe converted the last OPT checkpoints, just in case :-)<|||||>I believe it was @ArthurZucker <|||||>Thanks for bumping on this! <|||||>Also working on this, have run into the same issues mentioned above. Let me know if I can be an extra pair of eyes/hands for working on this.<|||||>Weights are available here thanks to (https://huggingface.co/rpasunuru) :
- https://huggingface.co/facebook/opt-iml-30b
- https://huggingface.co/facebook/opt-iml-1.3b
Closing! |
transformers | 20,881 | closed | __init__() missing 1 required positional argument | ### System Info
- `transformers` version: 4.9.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:No
### Who can help?
@you
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers.modeling_utils import PretrainedConfig

class TClass(PretrainedConfig):
    def __init__(self, config):
        super(TClass, self).__init__()
        self.config = config

if __name__ == '__main__':
    c = TClass(config='setting')
    print(c)
### Expected behavior
Hi!
I'm new to this. I wanted to define a class based on PretrainedConfig and pass a variable when initializing the object, but I ran into the issue described below. I tried many transformers versions and Python versions, but the issue still happens. Could you help me solve it? Thanks a lot!
File "F:/workProject//test/test.py", line 12, in <module>
print(c)
File "E:\ProgramData\Anaconda3\envs\py37_tf250_torch\lib\site-packages\transformers\configuration_utils.py", line 613, in __repr__
return f"{self.__class__.__name__} {self.to_json_string()}"
File "E:\ProgramData\Anaconda3\envs\py37_tf250_torch\lib\site-packages\transformers\configuration_utils.py", line 674, in to_json_string
config_dict = self.to_diff_dict()
File "E:\ProgramData\Anaconda3\envs\py37_tf250_torch\lib\site-packages\transformers\configuration_utils.py", line 629, in to_diff_dict
class_config_dict = self.__class__().to_dict() if not self.is_composition else {}
TypeError: __init__() missing 1 required positional argument: 'config'"
| 12-23-2022 02:35:41 | 12-23-2022 02:35:41 | Hi there! There is no way we will be able to help you without seeing the code you are running.<|||||>> The code is below
from transformers.modeling_utils import PretrainedConfig

class TClass(PretrainedConfig):
    def __init__(self, config):
        super(TClass, self).__init__()
        self.config = config

if __name__ == '__main__':
    c = TClass(config='setting')
    print(c)<|||||>Hi! I'm having the same problem... is there documentation on how to extend transformers configs?
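In case it helps later readers, a hedged sketch of the usual pattern for extending `PretrainedConfig`: the traceback above shows `to_diff_dict` calling `self.__class__()` with no arguments, so the subclass must be constructible without required positional arguments. Custom settings go in as keyword arguments with defaults and `**kwargs` is forwarded to the parent (the attribute name `my_setting` is illustrative):
```python
from transformers import PretrainedConfig

class TClass(PretrainedConfig):
    def __init__(self, my_setting="default", **kwargs):
        # Store the custom attribute directly instead of requiring a positional arg.
        self.my_setting = my_setting
        super().__init__(**kwargs)

if __name__ == "__main__":
    c = TClass(my_setting="setting")
    print(c)
```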
transformers | 20,880 | closed | Fix model parallelism for ByT5 | # What does this PR do?
Fixes #20879
Only assign device_map explicitly in parallelize() to model encoder and decoder if it was explicitly passed in from the caller. The encoder and decoder will automatically create a device_map if None is passed in, so the original code was redundant.
ByT5 has a much bigger encoder than decoder, so assuming that the two are the same size (and can use the same device_map) is not correct and results in an assertion error.
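A rough sketch of the idea (hypothetical code, not the exact diff): only forward the `device_map` the caller actually supplied, so the encoder and decoder can each build a map from their own number of blocks when none is given.
```python
def parallelize(self, device_map=None):
    # Passing None lets each stack derive its own map; reusing one map built from
    # the encoder depth breaks ByT5, whose encoder is ~3x deeper than its decoder.
    self.encoder.parallelize(device_map)
    self.decoder.parallelize(device_map)
    self.model_parallel = True
```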
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker, @younesbelkada | 12-23-2022 02:20:50 | 12-23-2022 02:20:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20880). All of your documentation changes will be reflected on that endpoint.<|||||>Hey, parallelise is very deprecated, but I believe other models might also benefit from this if it is a fix no?
<|||||>Wasn't aware, is the recommendation to use accelerate now?
This fix only affects models with more encoder than decoder blocks. I'm pretty sure this is rare (only seen it done with ByT5, which I am using for character sensitive tasks)<|||||>Yes, the recommendation is to use Accelerate for this form of parallelism (which Accelerate supports for all T5 models), the old API is on its way to be deprecated and won't be maintained.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,879 | closed | Calling parallelize() on T5ForConditionalGeneration for ByT5 results in device_map error | ### System Info
4.25.1
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model = T5ForConditionalGeneration.from_pretrained("google/byt5-xl")
model.parallelize()
```
Results in:
```
The device_map contains more attention blocks than this model has. Remove these from the device_map: {...}
```
### Expected behavior
The model should parallelize attention blocks properly. This is needed because ByT5 has a 3x deeper encoder than decoder, so the same device_map can't be used for both. | 12-23-2022 02:16:42 | 12-23-2022 02:16:42 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Note that the `parallelize` API is going to be deprecated soon. You should load your model like this to use Accelerate instead:
```python
model = T5ForConditionalGeneration.from_pretrained("google/byt5-xl", device_map="balanced")
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,878 | closed | [ `T5`] fix fp16 loading issue | # What does this PR do?
This PR mainly fixes https://github.com/huggingface/transformers/actions/runs/3754402958/jobs/6378652143
Since the PR https://github.com/huggingface/accelerate/pull/920 has been merged, the fix proposed in https://github.com/huggingface/transformers/pull/20760 seems to not work anymore using the main branch of `accelerate` for some specific cases.
To reproduce (use the main branch of `accelerate`):
```
import torch
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-small", torch_dtype=torch.float16)
print(model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype)
>>> torch.float16
```
Why?
I believe this is because the aforementioned PR introduced a new argument `dtype` on the function `set_module_tensor_to_device`, if this argument is set to `None` (by default), the target value [is automatically set to the `dtype` of the old tensor](https://github.com/huggingface/accelerate/blob/53b8ed1e8ed5fb8e9d2978744515c31c09e1423e/src/accelerate/utils/modeling.py#L129) - which slightly breaks some assumptions made in https://github.com/huggingface/transformers/pull/20760
I believe upstreaming this change on `modeling_utils` by adding the support of this new argument should be the fix. As some users might not use the latest version of accelerate, I added a small hack to make this change backward compatible, but I am not sure if this is the best solution
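A minimal sketch of what such a backward-compatible call could look like, assuming `set_module_tensor_to_device` is importable from `accelerate.utils` (the module and tensor below are placeholders; the exact check in the PR may differ):
```python
import inspect
import torch
from accelerate.utils import set_module_tensor_to_device

module = torch.nn.Linear(4, 4)                      # placeholder module
value = torch.zeros(4, 4, dtype=torch.float16)      # placeholder fp16 weight

# Only pass the new `dtype` argument when the installed accelerate exposes it,
# so older accelerate versions keep working and fp16 weights stay fp16.
accepts_dtype = "dtype" in inspect.signature(set_module_tensor_to_device).parameters
if accepts_dtype:
    set_module_tensor_to_device(module, "weight", "cpu", value=value, dtype=torch.float16)
else:
    set_module_tensor_to_device(module, "weight", "cpu", value=value)
```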
Tested this fix on the main branch of `accelerate`, `accelerate==0.15.0` and all relevant tests pass
cc @sgugger | 12-22-2022 19:59:18 | 12-22-2022 19:59:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,877 | closed | [`BLIP`] Fix daily CI failing test | # What does this PR do?
This PR fixes: https://github.com/huggingface/transformers/actions/runs/3754402958/jobs/6378634199
## Why this fix is relevant?
The reference logits for this test were obtained under pytorch==1.13.1+cu116 and the daily CI uses pytorch==1.13.0+cu116. Setting the tolerance slightly higher (`4e-2`) fixes the test and makes it compatible across torch versions.
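For illustration, the kind of comparison the test relies on (the tensors here are placeholders, not the real BLIP outputs):
```python
import torch

outputs = torch.tensor([[0.9798, 0.0202]])   # placeholder "model" output
expected = torch.tensor([[0.9800, 0.0200]])  # placeholder reference logits

# A slightly larger atol absorbs numerical drift between torch builds.
assert torch.allclose(outputs, expected, atol=4e-2)
```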
cc @LysandreJik @sgugger @ydshieh | 12-22-2022 19:24:37 | 12-22-2022 19:24:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hmm at the beginning I thought that the `Softmax` was causing the issue, leading to large round errors but the test pass locally with `torch+cu116==1.13.0` but does not pass on the docker image that uses the same version. Will investigate more!<|||||>On GCP (my own/ CI runners), all torch versions give
(torch 1.13.x)
```python
[[0.97982633 0.02017363]]
[[0.50528485]]
```
or (torch 1.12.1)
```
[[0.97982633 0.02017365]]
[[0.5052849]]
```
so
```python
[[0.9798, 0.0202]]
[[0.5053]]
```
will work. Not sure why you got larger differ though, but it is likely an env issue.<|||||>Thanks so much @ydshieh 💯 , the tests seem to pass now on the CI docker image with your suggested values!
Seems that something was wrong with my env indeed |
transformers | 20,876 | closed | add "local_files_first" parameter | add "local_files_first" parameter to AutoConfig.from_pretrained
# What does this PR do?
Fixes #20875
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-22-2022 15:28:40 | 12-22-2022 15:28:40 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20876). All of your documentation changes will be reflected on that endpoint.<|||||>how to pass the code quality check?<|||||>As mentioned in the issue, this is not a fix we are interested in adding as it would break other functionality.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,875 | closed | make internet connection only if local cache is missing | ### Feature request
Check if the local cache has the model, and download the model only if necessary.
### Motivation
My connection to github or huggingface is unstable. I don't want to make this unstable internet connection if I can find the model in the cache, since it breaks things.
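A minimal sketch of the behavior this request describes (note that `local_files_first` is the proposal, not an existing argument; the fallback below uses the existing `local_files_only` flag):
```python
from transformers import AutoConfig

model_id = "bert-base-uncased"  # placeholder model id
try:
    # Use the local cache without touching the network.
    config = AutoConfig.from_pretrained(model_id, local_files_only=True)
except OSError:
    # Nothing cached yet, so fall back to downloading.
    config = AutoConfig.from_pretrained(model_id)
```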
### Your contribution
I already mentioned it at #2867. I also made some changes at #20876 | 12-22-2022 15:27:15 | 12-22-2022 15:27:15 | hi james, can you assign this issue to me?<|||||>Thanks for opening this issue, but we're not interested in implementing this feature as this would break the auto-update mechanism (if someone updates the model, it would no longer be downloaded).
If the connection fails for any reason, local files are used instead.<|||||>okay, no problem, i'll look for another issue, which i can fix.
thanx for replying.<|||||>> this would break the auto-update mechanism
In the code the default value of this parameter is set to "False", so it won't be turned on unless you set it to "True".
Auto-updating the model is not always needed, though by default it will check for updates every time.<|||||>It's based on the "local_files_only" #2930, which will skip updates after all.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I've made some updates on my fork, though might be incomplete, shall cover most cases on model loading.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,874 | closed | Adding doc page for the object detection task | This is a PR for the https://github.com/huggingface/transformers/issues/20805 issue.
The guide has content and working code examples for:
- [x] Introduction
- [x] Loading CPPE-5 dataset from Hub
- [x] Preprocessing both images and annotations. Images are augmented, annotations are reformatted to be in the format DETR expects
- [x] Training with `Trainer`
- [x] Evaluation
- [x] Inference | 12-22-2022 15:08:37 | 12-22-2022 15:08:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>If I missed someone who has to be invited as a reviewer, please feel free to add them. <|||||>@MKhalusova thanks for doing this! I will take a look tomorrow my time.
I think you can follow the instructions noted [here](https://github.com/huggingface/transformers/pull/16255#discussion_r830432539) to resolve the quality bug in the CI. Let me know if anything's unclear. <|||||>Did a rebase in an attempt to fix the CI issue. Accidentally added a whole bunch of unrelated commits to the PR. Figuring out how to remove them. <|||||>> Did a rebase in an attempt to fix the CI issue. Accidentally added a whole bunch of unrelated commits to the PR. Figuring out how to remove them.
You might want to revert to the previous commit. [This thread](https://stackoverflow.com/questions/4114095/how-do-i-revert-a-git-repository-to-a-previous-commit) might be helpful in that regard. And then from there:
* Create a separate Python virtual environment.
* Make sure you're in the virtual environment you just created and then from the `transformers` directory root run `pip install -e .[quality]`.
* Now, once the dependencies have been installed to the new virtual environment, run `make style`.
This should likely fix it. Since the code quality errors were previously coming from a doc page (c.f. https://app.circleci.com/pipelines/github/huggingface/transformers/54549/workflows/18b53122-1cc3-4c87-ad12-486853427500/jobs/657389), I suspect this to be stemming from the task page we're adding in this PR.
Let me know if anything is unclear. <|||||>Closing this due to messed up rebase. The new PR is now here https://github.com/huggingface/transformers/pull/20925 |
transformers | 20,873 | closed | `model_kwargs` not used in `model.generate()` | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-124-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Issue**: the extra `model_kwargs` are not used when calling `model.generate(**model_kwargs)` (tried with transformers versions '4.25.1' and '4.23.1').
**Code to replicate**:
``` python
from transformers import GPT2LMHeadModel
class ToyGPT(GPT2LMHeadModel):
    def forward(self, *args, added_param=None, **kwargs):
        print("added_param", added_param)
        return super().forward(*args, **kwargs)
toy_model = ToyGPT.from_pretrained("gpt2")
toy_model.generate(added_param=1, max_length=5)
```
**Current output**:
`"added_param, None"`
**Current behaviour**:
The extra `added_param` is not passed as input to the `forward()` of the model when generating new inputs, thus, `None` is printed during the forward pass.
### Expected behavior
**Expected output**
"added_param, 1"
**Expected behaviour**
The generate function should print the updated version of the model_kwargs. | 12-22-2022 14:41:30 | 12-22-2022 14:41:30 | This snippet should work (tested on the `main` branch):
```
from transformers import GPT2LMHeadModel
class ToyGPT(GPT2LMHeadModel):
def forward(self, *args, added_param=None, **kwargs):
print("added_param", added_param)
return super().forward(*args, **kwargs)
def prepare_inputs_for_generation(self, *args, added_param=None, **kwargs):
output = super().prepare_inputs_for_generation(*args, **kwargs)
output.update({"added_param": added_param})
return output
toy_model = ToyGPT.from_pretrained("gpt2")
toy_model.generate(added_param=1, max_length=5)
```
you'll need to update the method `prepare_inputs_for_generation` to consider also your new args<|||||>Amazing. It works. Thank you very much ! |
transformers | 20,872 | closed | Add resources | # What does this PR do?
This PR adds a lot of resources for all models. | 12-22-2022 13:38:57 | 12-22-2022 13:38:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,871 | closed | Error loading text generation pipeline: Exception: Python patch version not an integer | ### System Info
Platform: Ubuntu 20.04.5, Jupyter Lab 3.5.2, dockerized
Python version: 3.8.13
`pip freeze` output:
absl-py==1.2.0
accelerate==0.15.0
aiohttp==3.8.3
aiosignal==1.3.1
alabaster==0.7.12
anyio==3.6.1
apex==0.1
appdirs==1.4.4
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1660605382950/work
async-timeout==4.0.2
attrs==22.1.0
audioread==3.0.0
Babel==2.10.3
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1618230623929/work
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1649463573192/work
bitsandbytes==0.35.4
bleach==5.0.1
blis @ file:///home/conda/feedstock_root/build_artifacts/cython-blis_1656314523915/work
brotlipy @ file:///home/conda/feedstock_root/build_artifacts/brotlipy_1648854175163/work
cachetools==5.2.0
catalogue @ file:///home/conda/feedstock_root/build_artifacts/catalogue_1661366525041/work
certifi==2022.9.24
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1656782821535/work
chardet @ file:///home/conda/feedstock_root/build_artifacts/chardet_1656142044710/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1655906222726/work
click @ file:///home/conda/feedstock_root/build_artifacts/click_1651215152883/work
cloudpickle==2.2.0
codecov==2.1.12
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1655412516417/work
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1663583601093/work
contourpy==1.0.5
coverage==6.5.0
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography_1665535545125/work
cuda-python @ file:///rapids/cuda_python-11.7.0%2B0.g95a2041.dirty-cp38-cp38-linux_x86_64.whl
cudf @ file:///rapids/cudf-22.8.0a0%2B304.g6ca81bbc78.dirty-cp38-cp38-linux_x86_64.whl
cugraph @ file:///rapids/cugraph-22.8.0a0%2B132.g2daa31b6.dirty-cp38-cp38-linux_x86_64.whl
cuml @ file:///rapids/cuml-22.8.0a0%2B52.g73b8d00d0.dirty-cp38-cp38-linux_x86_64.whl
cupy-cuda118 @ file:///rapids/cupy_cuda118-11.0.0-cp38-cp38-linux_x86_64.whl
cycler==0.11.0
cymem @ file:///home/conda/feedstock_root/build_artifacts/cymem_1636053152744/work
Cython==0.29.32
dask @ file:///rapids/dask-2022.7.1-py3-none-any.whl
dask-cuda @ file:///rapids/dask_cuda-22.8.0a0%2B36.g9860cad-py3-none-any.whl
dask-cudf @ file:///rapids/dask_cudf-22.8.0a0%2B304.g6ca81bbc78.dirty-py3-none-any.whl
dataclasses @ file:///home/conda/feedstock_root/build_artifacts/dataclasses_1628958434797/work
debugpy==1.6.3
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
defusedxml==0.7.1
diffusers==0.11.0
distributed @ file:///rapids/distributed-2022.7.1-py3-none-any.whl
docutils==0.17.1
entrypoints==0.3
et-xmlfile==1.1.0
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1665301981797/work
expecttest==0.1.3
fastjsonschema==2.16.2
fastrlock==0.8
filelock @ file:///home/conda/feedstock_root/build_artifacts/filelock_1660129891014/work
flake8==3.7.9
Flask==2.2.2
fonttools==4.37.4
frozenlist==1.3.3
fsspec==2022.8.2
functorch==0.3.0a0
future==0.18.2
glob2==0.7
google-auth==2.12.0
google-auth-oauthlib==0.4.6
graphsurgeon @ file:///workspace/TensorRT-8.5.0.12/graphsurgeon/graphsurgeon-0.4.6-py2.py3-none-any.whl
grpcio==1.49.1
HeapDict==1.0.1
huggingface-hub==0.11.0
hypothesis==4.50.8
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1642433548627/work
imagesize==1.4.1
importlib-metadata==5.0.0
importlib-resources==5.10.0
iniconfig==1.1.1
iopath==0.1.10
ipykernel==6.16.0
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1662481517711/work
ipython-genutils==0.2.0
ipywidgets==8.0.2
itsdangerous==2.1.2
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1659959867326/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1654302431367/work
joblib==1.2.0
json5==0.9.10
jsonschema==4.16.0
jupyter-core==4.11.1
jupyter-server==1.21.0
jupyter-tensorboard @ git+https://github.com/cliffwoolley/jupyter_tensorboard.git@ffa7e26138b82549453306e06b535a9ac36db17a
jupyter_client==7.4.2
jupyterlab==2.3.2
jupyterlab-pygments==0.2.2
jupyterlab-server==1.2.0
jupyterlab-widgets==3.0.3
jupytext==1.14.1
kiwisolver==1.4.4
langcodes @ file:///home/conda/feedstock_root/build_artifacts/langcodes_1636741340529/work
libarchive-c @ file:///home/conda/feedstock_root/build_artifacts/python-libarchive-c_1649436017468/work
librosa==0.9.2
lightning-utilities==0.4.2
llvmlite==0.39.1
lmdb==1.3.0
locket==1.0.0
Markdown==3.4.1
markdown-it-py==2.1.0
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1648737563195/work
matplotlib==3.6.2
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
mccabe==0.6.1
mdit-py-plugins==0.3.1
mdurl==0.1.2
mistune==2.0.4
mock @ file:///home/conda/feedstock_root/build_artifacts/mock_1648992799371/work
msgpack==1.0.4
multidict==6.0.3
murmurhash @ file:///home/conda/feedstock_root/build_artifacts/murmurhash_1636019583024/work
nbclassic==0.4.5
nbclient==0.7.0
nbconvert==7.2.1
nbformat==5.7.0
nest-asyncio==1.5.6
networkx==2.6.3
nltk==3.7
notebook==6.4.10
notebook-shim==0.1.0
numba==0.56.2
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1643958805350/work
nvidia-dali-cuda110==1.18.0
nvidia-pyindex==1.0.9
nvtx==0.2.5
oauthlib==3.2.1
onnx @ file:///opt/pytorch/pytorch/third_party/onnx
openpyxl==3.0.10
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1637239678211/work
pandas==1.4.4
pandocfilters==1.5.0
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
partd==1.3.0
pathy @ file:///home/conda/feedstock_root/build_artifacts/pathy_1656568808184/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1602535608087/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow @ file:///tmp/pillow-simd
pkginfo @ file:///home/conda/feedstock_root/build_artifacts/pkginfo_1654782790443/work
pkgutil_resolve_name==1.3.10
pluggy==1.0.0
polygraphy==0.42.1
pooch==1.6.0
portalocker==2.5.1
preshed @ file:///home/conda/feedstock_root/build_artifacts/preshed_1636077712344/work
prettytable==3.4.1
prometheus-client==0.15.0
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1662384672173/work
protobuf==3.20.1
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1662356143277/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
py==1.11.0
pyarrow @ file:///rapids/pyarrow-8.0.0-cp38-cp38-linux_x86_64.whl
pyasn1==0.4.8
pyasn1-modules==0.2.8
pybind11==2.10.0
pycocotools @ git+https://github.com/nvidia/cocoapi.git@142b17a358fdb5a31f9d5153d7a9f3f1cd385178#subdirectory=PythonAPI
pycodestyle==2.5.0
pycosat @ file:///home/conda/feedstock_root/build_artifacts/pycosat_1649384811940/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
pydantic @ file:///home/conda/feedstock_root/build_artifacts/pydantic_1636021149719/work
pydot==1.4.2
pyflakes==2.1.1
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1660666458521/work
pylibcugraph @ file:///rapids/pylibcugraph-22.8.0a0%2B132.g2daa31b6.dirty-cp38-cp38-linux_x86_64.whl
pynvml==11.4.1
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1643496850550/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1652235407899/work
pyrsistent==0.18.1
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work
pytest==6.2.5
pytest-cov==4.0.0
pytest-pythonpath==0.7.4
python-dateutil==2.8.2
python-hostlist==1.22
python-nvd3==0.15.0
python-slugify==6.1.2
pytorch-lightning==1.8.5.post0
pytorch-quantization==2.1.2
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1664798238822/work
PyYAML @ file:///home/conda/feedstock_root/build_artifacts/pyyaml_1648757091578/work
pyzmq==24.0.1
raft @ file:///rapids/raft-22.8.0a0%2B70.g9070c30.dirty-cp38-cp38-linux_x86_64.whl
regex==2022.9.13
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1656534056640/work
requests-oauthlib==1.3.1
resampy==0.4.2
revtok @ git+git://github.com/jekbradbury/revtok.git@f1998b72a941d1e5f9578a66dc1c20b01913caab
rmm @ file:///rapids/rmm-22.8.0a0%2B62.gf6bf047.dirty-cp38-cp38-linux_x86_64.whl
rsa==4.9
ruamel-yaml-conda @ file:///home/conda/feedstock_root/build_artifacts/ruamel_yaml_1653464386701/work
sacremoses==0.0.53
safetensors==0.2.6
scikit-learn @ file:///rapids/scikit_learn-0.24.2-cp38-cp38-manylinux2010_x86_64.whl
scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1619561901336/work
seaborn==0.12.1
Send2Trash==1.8.0
shellingham @ file:///home/conda/feedstock_root/build_artifacts/shellingham_1659638615822/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
smart-open @ file:///home/conda/feedstock_root/build_artifacts/smart_open_1630238320325/work
sniffio==1.3.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soundfile==0.11.0
soupsieve @ file:///home/conda/feedstock_root/build_artifacts/soupsieve_1658207591808/work
spacy @ file:///home/conda/feedstock_root/build_artifacts/spacy_1644657943105/work
spacy-legacy @ file:///home/conda/feedstock_root/build_artifacts/spacy-legacy_1660748275723/work
spacy-loggers @ file:///home/conda/feedstock_root/build_artifacts/spacy-loggers_1661365735520/work
Sphinx==5.2.3
sphinx-glpi-theme==0.3
sphinx-rtd-theme==1.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
srsly @ file:///home/conda/feedstock_root/build_artifacts/srsly_1638879568141/work
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1664126450622/work
tabulate==0.9.0
tblib==1.7.0
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorboardX==2.5.1
tensorrt @ file:///workspace/TensorRT-8.5.0.12/python/tensorrt-8.5.0.12-cp38-none-linux_x86_64.whl
terminado==0.16.0
text-unidecode==1.3
thinc @ file:///home/conda/feedstock_root/build_artifacts/thinc_1638980259098/work
threadpoolctl==3.1.0
tinycss2==1.1.1
tokenizers==0.13.2
toml @ file:///home/conda/feedstock_root/build_artifacts/toml_1604308577558/work
tomli==2.0.1
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work
torch==1.13.0a0+d0d6b1f
torch-tensorrt @ file:///opt/pytorch/torch_tensorrt/py/dist/torch_tensorrt-1.3.0a0-cp38-cp38-linux_x86_64.whl
torchinfo==1.7.1
torchmetrics==0.11.0
torchtext==0.11.0a0
torchvision @ file:///opt/pytorch/vision
tornado==6.2
tqdm==4.64.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1663005918942/work
transformer-engine @ file:///tmp/te_wheel/transformer_engine-0.1.0-cp38-cp38-linux_x86_64.whl
transformers==4.25.1
treelite @ file:///rapids/treelite-2.4.0-py3-none-manylinux2014_x86_64.whl
treelite-runtime @ file:///rapids/treelite_runtime-2.4.0-py3-none-manylinux2014_x86_64.whl
typer @ file:///home/conda/feedstock_root/build_artifacts/typer_1657029164904/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1665144421445/work
ucx-py @ file:///rapids/ucx_py-0.27.0a0%2B29.ge9e81f8-cp38-cp38-linux_x86_64.whl
uff @ file:///workspace/TensorRT-8.5.0.12/uff/uff-0.6.9-py2.py3-none-any.whl
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1658789158161/work
wasabi @ file:///home/conda/feedstock_root/build_artifacts/wasabi_1658931821849/work
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1600965781394/work
webencodings==0.5.1
websocket-client==1.4.1
Werkzeug==2.2.2
widgetsnbextension==4.0.3
xgboost @ file:///rapids/xgboost-1.6.1-cp38-cp38-linux_x86_64.whl
yarl==1.8.2
zict==2.2.0
zipp==3.9.0
### Who can help?
@ArthurZucker @younesbelkada @gante @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
generator = pipeline('text-generation', model='gpt2')
```
Output:
```
Downloading: 0%| | 0.00/665 [00:00<?, ?B/s]
Downloading: 0%| | 0.00/548M [00:00<?, ?B/s]
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Cell In [2], line 1
----> 1 generator = pipeline('text-generation', model='gpt2')
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/pipelines/__init__.py:724, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
720 # Infer the framework from the model
721 # Forced if framework already defined, inferred if it's None
722 # Will load the correct model if possible
723 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
--> 724 framework, model = infer_framework_load_model(
725 model,
726 model_classes=model_classes,
727 config=config,
728 framework=framework,
729 task=task,
730 **hub_kwargs,
731 **model_kwargs,
732 )
734 model_config = model.config
735 hub_kwargs["_commit_hash"] = model.config._commit_hash
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/pipelines/base.py:257, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
251 logger.warning(
252 "Model might be a PyTorch model (ending with `.bin`) but PyTorch is not available. "
253 "Trying to load the model with Tensorflow."
254 )
256 try:
--> 257 model = model_class.from_pretrained(model, **kwargs)
258 if hasattr(model, "eval"):
259 model = model.eval()
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:463, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
461 elif type(config) in cls._model_mapping.keys():
462 model_class = _get_model_class(config, cls._model_mapping)
--> 463 return model_class.from_pretrained(
464 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
465 )
466 raise ValueError(
467 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
468 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
469 )
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/modeling_utils.py:2230, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2227 if from_pt:
2228 if not is_sharded and state_dict is None:
2229 # Time to load the checkpoint
-> 2230 state_dict = load_state_dict(resolved_archive_file)
2232 # set dtype to instantiate the model under:
2233 # 1. If torch_dtype is not None, we use that dtype
2234 # 2. If torch_dtype is "auto", we auto-detect dtype from the loaded state_dict, by checking its first
2235 # weights entry that is of a floating type - we assume all floating dtype weights are of the same dtype
2236 # we also may have config.torch_dtype available, but we won't rely on it till v5
2237 dtype_orig = None
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/modeling_utils.py:386, in load_state_dict(checkpoint_file)
381 """
382 Reads a PyTorch checkpoint file, returning properly formatted errors if they arise.
383 """
384 if checkpoint_file.endswith(".safetensors") and is_safetensors_available():
385 # Check format of the archive
--> 386 with safe_open(checkpoint_file, framework="pt") as f:
387 metadata = f.metadata()
388 if metadata.get("format") not in ["pt", "tf", "flax"]:
Exception: Python patch version not an integer
```
### Expected behavior
Should not output an exception. E.g. this code runs as-is (after `pip install transformers`) in Google Colab. | 12-22-2022 13:03:42 | 12-22-2022 13:03:42 | This is very odd.
Could you share maybe a bit more about your environment so we could reproduce ?
It seems like the way Python itself is installed is odd (I'm purely inferring from the error message), maybe ?
Is it possible to provide a way to reproduce maybe ? Like a docker image or something ?
It does seem to work on colab, so it's hard to know what is wrong with the environment. It also seems like there's a mix of `conda` and `pip` installs, which might be at play (both link to different things, so maybe the linker is confused somehow?)
I tried googling your error message but nothing came up.
<|||||>I met the same problem and fixed it by degrading the transformers version like 4.22.0 or others.<|||||>Experienced the same problem. I also downgraded to make it work.
I don't know what the _commit_hash variable is used for, but removing the line in transformers/pipelines/__init__.py also seems to work.
this line `hub_kwargs["_commit_hash"] = model.config._commit_hash`
A fix for this would be very appreciated<|||||>I think it was related to this [issue](https://github.com/huggingface/safetensors/issues/142). All PyTorch container images of [NVIDIA NGC](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12) have alpha version tags for PyTorch. cc @Narsil <|||||>Thanks @Codle .
It seems to be indeed the issue. Releasing a new version soon so everyone has access.<|||||>Should be fixed with new version (0.2.8), could you confirm ?<|||||>Hi @Narsil, sorry for my late response. After updating safetensors to 0.2.8, it works fine for me.<|||||>Closing this. Thank you for sharing !<|||||>Updating safetensors solved it for me too. Thanks!<|||||>despite downgrading my safetensors, I get the following error
```
Traceback (most recent call last):
File "/home/suryahari/Vornoi/QA.py", line 5, in <module>
model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2629, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py", line 447, in load_state_dict
with safe_open(checkpoint_file, framework="pt") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: No such device (os error 19)
```
|
transformers | 20,870 | closed | Add japanese translation of template | # What does this PR do?
Adds Japanese template of the README-jp.md following the same convention as what is done in Chinese
cc @sgugger @ArthurZucker | 12-22-2022 10:10:16 | 12-22-2022 10:10:16 | You might be able to modify the README with simple regex/pattern finding <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>yes, I'll do it now<|||||>Thanks for giving me the tip about https://regex101.com/ which made the conversion process much faster!
Should be good now |