repo
stringclasses 1
value | number
int64 1
25.3k
| state
stringclasses 2
values | title
stringlengths 1
487
| body
stringlengths 0
234k
โ | created_at
stringlengths 19
19
| closed_at
stringlengths 19
19
| comments
stringlengths 0
293k
|
---|---|---|---|---|---|---|---|
transformers | 17,557 | closed | T5ForConditionalGeneration does not require resize_position_embeddings when input sequence length is longer than 512? | Hi, thanks in advance! I am looking at the run_summarization.py under examples/pytorch/summarization/, in the following code snippets where I want to set `max_source_length` bigger than 512 where 512 is the max length T5 was pre-trained on:
```
if (
hasattr(model.config, "max_position_embeddings")
and model.config.max_position_embeddings < data_args.max_source_length
):
if model_args.resize_position_embeddings is None:
logger.warning(
"Increasing the model's number of position embedding vectors from"
f" {model.config.max_position_embeddings} to {data_args.max_source_length}."
)
model.resize_position_embeddings(data_args.max_source_length)
elif model_args.resize_position_embeddings:
model.resize_position_embeddings(data_args.max_source_length)
else:
raise ValueError(
f"`--max_source_length` is set to {data_args.max_source_length}, but the model only has"
f" {model.config.max_position_embeddings} position encodings. Consider either reducing"
f" `--max_source_length` to {model.config.max_position_embeddings} or to automatically resize the"
" model's position encodings by passing `--resize_position_embeddings`."
)
```
My questions are:
1. I remembered T5Config was having a `max_position_embeddings` parameter before (was 512), why it is removed now?
2. In the script, the default `max_sequence_length` is set to 1024. Since it is bigger than 512, why it is not required to call `resize_position_embeddings` method like before in this issue: https://github.com/huggingface/transformers/issues/5204#issuecomment-648045999
4. Bart also used relative position embedding like T5, but BartConfig's `max_position_embeddings` is kept with 1024 and when setting `max_source_length` longer than 1024, it does require calling `resize_position_embeddings` according to the code snippets above. Is it because of different relative position embedding between BART and T5.
I think I must be misunderstanding something, appreciate if some explanations can be given here. Thanks!! | 06-04-2022 21:41:33 | 06-04-2022 21:41:33 | cc @patrickvonplaten <|||||>Hey @mshen2,
I don't think BART uses relative position embeddings, but rather "fixed" position embeddings ("fixed" in the sence that if seq_len > 1024 is provided the model gives an index error).
Could you maybe look into this line of code in BART: https://github.com/huggingface/transformers/blob/66e8656778392609e1fb769f1a0d0839af3cd76a/src/transformers/models/bart/modeling_bart.py#L718 -> it shows that the position ids are a fixed-size matrix
Also cc @patil-suraj here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,555 | closed | Fixes the LevitIntegrationTest | There was a mismatch of logits which wasn't corrected. It's done now.
@NielsRogge | 06-04-2022 09:21:31 | 06-04-2022 09:21:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,554 | closed | TF implementation of RegNets | In this PR in which we (/w @sayakpaul) are proting the RegNets model into TensorFlow.
| 06-04-2022 04:23:43 | 06-04-2022 04:23:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts
If we run the following:
```
from PIL import Image
import numpy as np
from src.transformers.models.regnet.modeling_tf_regnet import (
TFRegNetForImageClassification
)
from transformers import AutoFeatureExtractor
def prepare_img():
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
return image
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
model = TFRegNetForImageClassification.from_pretrained("facebook/regnet-y-040", from_pt=True)
image = prepare_img()
inputs = feature_extractor(images=image, return_tensors="tf")
outputs = model(**inputs, training=False)
print(outputs.logits.shape)
expected_slice = np.array([-0.4180, -1.5051, -3.4836])
np.testing.assert_allclose(outputs.logits[0, :3].numpy(), expected_slice, atol=1e-4)
```
First, it complains the `moving_mean` and `moving_variance` params are not loaded properly.
We tested your solution in https://github.com/huggingface/transformers/pull/17571. With that, we're running into mismatches of `num_batches_tracked` and even `moving_mean`. It also complains about some of the mismatches stemming from the `shortcut` layer which wasn't the case for the earlier setup.
Do you have any thoughts? <|||||>Hi @sayakpaul
Could you give a bit more information about the mismatches i.e. the printouts you're currently getting?
Regarding `num_batches_tracked`, I don't believe this parameter will ever be cross-loaded into a `tf.keras.layers.BatchNormalization` layer as there isn't an equivalent parameter. This is only important if the corresponding PyTorch batch norm layer doesn't have its momentum set c.f. [param updates](https://github.com/pytorch/pytorch/blob/67badf0d5cefeb0d39767609e78aa5ff668a262e/torch/nn/modules/batchnorm.py#L149), which you'll need to verify for this model. I suggest looking at the implementations of both the [TF](https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/layers/normalization/batch_normalization.py#L1107-L1249) and PyTorch layer to see when/if these differences are important. If the parameter is necessary, then I think one approach might be subclassing to build a new layer and include the parameter as a registered weight + any necessary logic to use it, but I'm not sure at the moment. <|||||>I tried debugging this today but no luck yet. But here's some information for all of us to navigate this through:
* Amending [`src/transformers/modeling_tf_pytorch_utils.py`](https://github.com/huggingface/transformers/pull/17554/files#diff-67993ece845388c9a9a1f342f4c82a2ed7f790f454134c39c3a63146616ea37a) (following https://github.com/huggingface/transformers/pull/17571) resulted in this: https://pastebin.com/0CZJmvzh.
* `num_batches_tracked` is likely not needed, I don't suspect that to be a trained parameter anyway. However, happy to stand corrected.
* But what is surprising is even after incorporating the changes from https://github.com/huggingface/transformers/pull/17571 there's a complaint about `moving_mean` and `moving_variance`.
* There's also a complaint about `convolution` params.
All these mismatches seem to be stemming from the `layers.0` of RegNet stages. Mismatches stemming from other `layers` (`layers.2` for example) are related to `num_batches_tracked`.
The test used to gather this information is the same one as mentioned in https://github.com/huggingface/transformers/pull/17554#issuecomment-1147700055.
@amyeroberts <|||||>@sayakpaul Thanks for your detailed update. Comments below:
1. OK - thanks for posting that it really helps!
2. `num_batches_tracked` isn't trainable, but it is updated during training. As I mentioned above, if the layer has `momentum` set (it's not `None`) then you can ignore it. However, if `momentum` isn't set, then the layer uses `num_batches_tracked` to update the `running_mean` and `running_var` calculations, which are used during evaluation to normalize the batch. You can quickly check if the momentum is set for the batchnorm layers running something like `all([x.momentum is not None for x in model.modules() if isinstance(x, nn.BatchNorm2d)])`.
3. Looking at the printout you pasted above, it says `All the weights of TFRegNetForImageClassification were initialized from the PyTorch model.`. If this is the case, and some of the PyTorch weights weren't used, it makes me think some layers might be missing in your implementation. I would look at the two architectures and see if they differ anywhere. <|||||>@amyeroberts a quick update:
* `momentum` is actually not set. This is why we need to also retrieve `num_batches_tracked` too. We need to figure out a way to factor it in to use with `layers.BatchNormalization` in TensorFlow.
* The TF model has a fewer number of params than the PT model so we'll look into why this is the case. One immediate reason would be the absence of `num_batches_tracked`. But that contributes a very small difference. We currently have 629440 fewer parameters in the TF model than the PT one. <|||||>@sayakpaul Thanks for the update!
* OK, this makes things a bit more difficult. Let me know if you want any help for this step. It's something that will likely need to be done in other PT -> TF ports so definitely valuable to the community if you added this!
* It might be easier to print out the weight names instead of comparing number of parameters. The porting code works on the names, and so seeing where the two models differ can really help pinpoint what's happening. What I typically do is use the porting code to convert the tensorflow weight names and compare the two sets. For this model, it would look something like:
```
from transformers import RegNetForImageClassification
# import directly once __init__ files updated
from transformers.models.regnet.modeling_tf_regnet import TFRegNetForImageClassification
from transformers.modeling_tf_pytorch_utils import convert_tf_weight_name_to_pt_weight_name
checkpoint = "facebook/regnet-y-040"
tf_model = TFRegNetForImageClassification.from_pretrained(checkpoint, from_pt=True)
pt_model = RegNetForImageClassification.from_pretrained(checkpoint)
tf_model_weights = set([convert_tf_weight_name_to_pt_weight_name(x.name)[0] for x in tf_model.trainable_variables])
pt_model_weights = set(pt_model.state_dict().keys())
print(tf_model_weights - pt_model_weights)
print(pt_model_weights - tf_model_weights)
```<|||||>Thanks for the suggestions. Will try them out and update.<|||||>@amyeroberts
I had to do a few minor modifications to your snippet in https://github.com/huggingface/transformers/pull/17554#issuecomment-1150933208:
```
tf_model_weights = set(
[
convert_tf_weight_name_to_pt_weight_name(x.name)[0]
for x in tf_model.trainable_variables + tf_model.non_trainable_variables
]
)
pt_model_weights = set(pt_model.state_dict().keys())
tf_model_weights_new = set()
for name in tf_model_weights:
if "moving_mean" in name:
name = name.replace("moving_mean", "running_mean")
elif "moving_variance" in name:
name = name.replace("moving_variance", "running_var")
tf_model_weights_new.add(name)
print(f"Differences in the TF model and PT model: {tf_model_weights_new - pt_model_weights}")
print(f"Differences in the PT model and TF model: {pt_model_weights - tf_model_weights_new}")
print(f"Total weights differing: {len(pt_model_weights - tf_model_weights_new)}")
```
`convert_tf_weight_name_to_pt_weight_name()` doesn't change the `moving_mean` and `moving_variance` to `running_mean` and `running_var` respectively. Instead, currently, it's handled [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/modeling_tf_pytorch_utils.py#L160-#L172) so that [this query](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/modeling_tf_pytorch_utils.py#L205) is successful.
With this change, the result of `pt_model_weights - tf_model_weights_new` is exactly matching with the complaint:
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFRegNetForImageClassification ...
```
(Full output [here](https://pastebin.com/cg10ET1c)).
I have gone over the `modeling_tf_regnet.py` script a couple of times but I don't yet know what I can do here. Let me know what you usually do when you have these differences. <|||||>Also an oversight on my end in reporting `momentum` in https://github.com/huggingface/transformers/pull/17554#issuecomment-1150851986.
`all([x.momentum is not None for x in model.modules() if isinstance(x, nn.BatchNorm2d)])` actually gives `True` which means it's okay to ignore `num_batches_tracked`. <|||||>@amyeroberts we were able to rectify the model implementation and make it work. The integration test (mentioned in https://github.com/huggingface/transformers/pull/17554#issuecomment-1147700055) is passing now.
The tests, however, are failing for a weird reason:
```
Parameter config in `TFRegNetModel(config)` should be an instance of class `PretrainedConfig`. To create a model from a pretrained model use `model = TFRegNetModel.from_pretrained(PRETRAINED_MODEL_NAME)`
```
Weird because we tested a couple of things in isolation:
```py
from transformers import RegNetConfig
config_class = RegNetConfig()
print(f"RegNet Config class type: {type(config_class)}.")
print(f"RegNet Config is an instance of PretrainedConfig: {isinstance(config_class, PretrainedConfig)}")
```
The final print statement gives `True`. But when we do the following:
```py
from src.transformers.models.regnet.modeling_tf_regnet import TFRegNetForImageClassification, TFRegNetModel
class_from_config = TFRegNetModel(config_class)
print("Model class from config was initialized.")
```
it complains:
```
Parameter config in `TFRegNetModel(config)` should be an instance of class `PretrainedConfig`. To create a model from a pretrained model use `model = TFRegNetModel.from_pretrained(PRETRAINED_MODEL_NAME)`
```
Do you have any suggestions for this?<|||||>@sgugger @Rocketknight1 the PR is now ready for review.
This particular model actually has the largest vision model checkpoint available to date: https://huggingface.co/facebook/regnet-y-10b-seer. It's still in PyTorch and the corresponding model makes use of the `low_cpu_usage` argument.
I had a chat with @Rocketknight1 a few days back on the possibility of supporting this checkpoint in TensorFlow too. This will require tweaks and they will be contributed in a separate PR. <|||||>@ydshieh I thought I could use your help here. There's something really weird happening here.
If I omit the `image_size` argument from the [`RegNetConfig`](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L76) the cross-testing is failing (full [stack-trace](https://pastebin.com/Us6BgKvh)).
```
RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 python -m pytest tests/models/regnet/test_modeling_tf_regnet.py
```
But if I keep the argument, it runs successfully. The only use of `image_size` is [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/modeling_tf_regnet.py#L425).
In any case, the PyTorch cross-test of the same model is failing ([full stack-trace with `image_size` set, trace without `image_size` set](https://pastebin.com/LjmPjqpZ)):
```
RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 python -m pytest tests/models/regnet/test_modeling_regnet.py
```
Have you got any suggestions?
Cc: @Rocketknight1
<|||||>Hey @sayakpaul
By `I omit the image_size argument from the` [RegNetConfig](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L76), could you specify where you call config without passing `image_size `. I guess you mean somewhere in the TF Reg test file, but which line exactly ๐ ?
`dummy_inputs` is also used in https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/src/transformers/modeling_tf_utils.py#L1975
to build the network, so the weights (with random values) will present in order to load the real weights. My best guess is that without specifying somewhere, the model is initialized to handle size 224 images, and it causes some shape inconsistency issue for your test. For more detailed investigation, I need to know where you don't specify this argument. But probably you are able to figure this out with this info. maybe?<|||||>BTW, why you need `pastebin` to post the traceback? A link to the CircleCI job run page is ok, no?<|||||>@ydshieh thanks for your inputs.
So, what I meant is I just comment [this line](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L76) and [this line](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L89) and just pass a hardcoded value (224) [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/modeling_tf_regnet.py#L425) (`3, self.config.num_channels, 224, 224`). Is this better to understand now?
So, if `image_size` specified in the config (like it is currently), the cross-test with TF (`test_modeling_tf_regnet.py`) passes successfully but not with PT (i.e., `test_modeling_regnet.py`).
> BTW, why you need `pastebin` to post the traceback? A link to the CircleCI job run page is ok, no?
The CI trace seemed clunky so I ran the individual test to keep the outputs cleaner. <|||||>The TF implementation of `TFAdaptiveAvgPool1D` (used for `TFAdaptiveAvgPool2D`) is not capable to handle any image size (once being initialized with an input), as its `build` method contains `self.map = tf.constant(sparse_map)`, which determines the input shape it can handle in the subsequent calls.
I didn't check how PyTorch implements `nn.AdaptiveAvgPool2d((1, 1))`, but I think it is possible not to build `self.map`, but instead to prepare the necessary matrix dynamically in the `call` method.
Regarding the test, the situation is similar: the method `dummy_inputs` is called in `from_pretrained` method. While hard-coded with `224`, the model can't run with other image size anymore. Even if you add `image_size` argument in `RegNetConfig`, once a
`TFRegNetModel` is run with an input, it can't run with other input shape anymore. This contradicts to the PyTorch implementation.
Maybe you can figure out a more robust way for TF?
Here is the code snippet to make things more concrete:
```python
import torch
import tensorflow as tf
import math
from typing import List
import numpy as np
# Copied from:
# https://gist.github.com/Rocketknight1/43abbe6e73f1008e6e459486e01e0ceb
class TFAdaptiveAvgPool1D(tf.keras.layers.Layer):
def __init__(self, output_dim, mode="dense", **kwargs):
super().__init__(**kwargs)
self.output_dim = output_dim
self.mode = mode
self.map = None
def build(self, input_shape):
super().build(input_shape)
"""We pre-compute the sparse matrix for the build() step once. The below code comes
from https://stackoverflow.com/questions/53841509/how-does-adaptive-pooling-in-pytorch-work/63603993#63603993."""
def get_kernels(ind, outd) -> List:
"""Returns a List [(kernel_offset_start,kernel_length)] defining all the pooling kernels for a 1-D adaptive
pooling layer that takes an input of dimension `ind` and yields an output of dimension `outd`"""
def start_index(a, b, c):
return math.floor((float(a) * float(c)) / b)
def end_index(a, b, c):
return math.ceil((float(a + 1) * float(c)) / b)
results = []
for ow in range(outd):
start = start_index(ow, outd, ind)
end = end_index(ow, outd, ind)
sz = end - start
results.append((start, sz))
return results
in_dim = int(input_shape[-1])
kernels = get_kernels(in_dim, self.output_dim)
sparse_map = np.zeros((in_dim, self.output_dim), dtype=np.float32)
for i, kernel in enumerate(kernels):
sparse_map[kernel[0] : kernel[0] + kernel[1], i] = 1 / kernel[1]
if self.mode == "dense":
self.map = tf.constant(sparse_map)
else:
self.map = tf.sparse.from_dense(sparse_map)
def call(self, inputs):
if self.mode == "dense":
return inputs @ self.map
else:
input_dims = inputs.shape
input_matrix = tf.reshape(inputs, (-1, input_dims[-1]))
out = tf.sparse.sparse_dense_matmul(input_matrix, self.map)
return tf.reshape(out, input_dims[:-1].as_list() + [-1])
class TFAdaptiveAvgPool2D(tf.keras.layers.Layer):
def __init__(self, output_shape, mode="dense", **kwargs):
super().__init__(**kwargs)
self.w_pool = TFAdaptiveAvgPool1D(output_shape[1], mode=mode)
def call(self, inputs):
# Rearrange from NHWC -> NCHW
inputs = tf.transpose(inputs, perm=[0, 3, 1, 2])
# Perform W-pooling
inputs = self.w_pool(inputs)
pt_2d_pooler = torch.nn.AdaptiveAvgPool2d((1, 1))
tf_1d_pooler = TFAdaptiveAvgPool1D(output_dim=1, mode="dense")
# For image size 224
N, C, H, W = (3, 10, 56, 56)
np_input_224_56 = np.random.random(size=(N, C, H, W))
pt_input_224_56 = torch.tensor(np_input_224_56)
tf_input_224_56 = tf.constant(np_input_224_56)
# For image size 32
N, C, H, W = (3, 10, 8, 8)
np_input_32_8 = np.random.random(size=(N, C, H, W))
pt_input_32_8 = torch.tensor(np_input_32_8)
tf_input_32_8 = tf.constant(np_input_32_8)
# 1st run: pt OK
pt_o = pt_2d_pooler(pt_input_224_56)
print(pt_o.shape)
# 1st run: tf OK
tf_o = tf_1d_pooler(tf_input_224_56)
print(tf_o.shape)
print(f"tf_1d_pooler.map has shape: {tf_1d_pooler.map.shape}")
# 2nd run: pt OK
pt_o = pt_2d_pooler(pt_input_32_8)
print(pt_o.shape)
# 2nd run: tf failed
tf_o = tf_1d_pooler(tf_input_32_8)
print(tf_o.shape)
```<|||||>Thank you so much @ydshieh! Really appreciate this.
> Regarding the test, the situation is similar: the method dummy_inputs is called in from_pretrained method. While hard-coded with 224, the model can't run with other image size anymore. Even if you add image_size argument in RegNetConfig, once a
TFRegNetModel is run with an input, it can't run with other input shape anymore. This contradicts to the PyTorch implementation.
I see. I am still a little unsure as to why the cross tests in TF would run then.
Also ccing @Rocketknight1 for https://github.com/huggingface/transformers/pull/17554#issuecomment-1159556670.<|||||>
> I see. I am still a little unsure as to why the cross tests in TF would run then.
It runs successfully only if the config has the argument `image_size` (..right?). Is this where you have the question?
<|||||>It runs successfully (the cross-test) with the `image_size` specified in the config. If that is the case, why the PyTorch cross-test would fail. This is my question. Sorry if it wasn't clear previously. <|||||>In `RegNetModelTester`, the `get_config` method doesn't use `image_size`.
https://github.com/huggingface/transformers/blob/6fdcc6dcb5b9396aa4481513c69cc89a22c5533f/tests/models/regnet/test_modeling_regnet.py#L84-L92
Even if the `image_size` argument is added to `RegNetConfig` with a default value `224`, the test doesn't pass `self.image_size (32)` to it.
Therefore, the TF model will get `224` for `dummy_inputs`, but the subsequent calls in the test prepare image size 32 for testing. That's why it fails.
In TF test, you (also) added `image_size=self.image_size, ` to `get_config`
https://github.com/huggingface/transformers/blob/6fdcc6dcb5b9396aa4481513c69cc89a22c5533f/tests/models/regnet/test_modeling_tf_regnet.py#L81-L90
which is why it works.<|||||>Thanks, @ydshieh. That solved the problem.
@sgugger @Rocketknight1 the tests should pass now but @ydshieh pointed out a potential concern here: https://github.com/huggingface/transformers/pull/17554#issuecomment-1159556670. Does it make sense to tackle it in a separate PR given Adaptive Average Pooling impacts quite a few models (RegNet, ResNet, Swin, etc.)? <|||||>Ugh, yes. The layer precomputes a map for the input and output shapes at build() time, and will break if you pass inputs with different shapes to the layer afterwards.
With the implementation as written, I think it will be quite difficult to generate the `sparse_map` in the `call()` method. The reason is that there's a lot of computation that's just happening on the CPU to make it, which is fine as a once-off task in the `init`, but that won't really work if it compiled into the graph.
I think this is a sign that we might have to write a proper `AdaptivePool` op for TF and make a PR to TFA, which I was talking with @amyeroberts about.<|||||>> With the implementation as written, I think it will be quite difficult to generate the `sparse_map` in the `call()` method.
Do you think it is necessary to keep the logic for the sparse part?<|||||>We can absolutely drop the actual sparse matrix, however we'll still need to compute the dense matrix (which is called `sparse_map` because it's mostly zeros)<|||||>@Rocketknight1 WDYT we should do about this PR given the current state? Should we wait for ...
> I think this is a sign that we might have to write a proper AdaptivePool op for TF and make a PR to TFA, which I was talking with @amyeroberts about.
... or is there anything I can do at my end? <|||||>@sayakpaul Hang on! I have a new implementation for `AdaptivePool` that I think will resolve some of these issues, and I should be able to finish it by today.<|||||>@sayakpaul I have an implementation [here](https://gist.github.com/Rocketknight1/b0baa8236f379b811fc6bce3da05cc2b), working on testing and optimizations now<|||||>@sayakpaul added comments as requested: https://gist.github.com/Rocketknight1/efc47242914788def0144b341b1ad638<|||||>> @sayakpaul added comments as requested: https://gist.github.com/Rocketknight1/efc47242914788def0144b341b1ad638
Just read through. Excellent ๐<|||||>@Rocketknight1 I tested your new implementation locally ([branch](https://github.com/ariG23498/transformers/tree/new-adaptive-pooler)).
It works across both TF and PT cross-tests. It also works without `image_size` being supplied to `RegNetConfig()` in the PT test (see [here](https://github.com/ariG23498/transformers/blob/new-adaptive-pooler/tests/models/regnet/test_modeling_regnet.py#L86)).
Cc: @ydshieh <|||||>@sgugger @Rocketknight1 here are the updates:
* Removed `image_size` argument everywhere relevant to avoid any confusion.
* Used @Rocketknight1's new implementation of adaptive average pooling (https://github.com/huggingface/transformers/pull/17554#issuecomment-1160654493).
Tests are all passing. Let me know if there is anything left on my end.
The TF weights need to be uploaded to the model repository on Hub. After that, I will remove the `from_pt` argument [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/tests/models/regnet/test_modeling_tf_regnet.py#L251) and [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/tests/models/regnet/test_modeling_tf_regnet.py#L275). <|||||>Please also incorporate the updates made in #17731 <|||||>> Please also incorporate the updates made in #17731
Copy-paste the changes and add you as a co-author of the commits?<|||||>No I think @NielsRogge means making sure you apply on TFRegNet the changes he is working on for all other TF models.<|||||>> Please also incorporate the updates made in https://github.com/huggingface/transformers/pull/17731
@NielsRogge could you be a bit more specific?
RegNets don't have `to_2tuple()`. I understand what needs to be changed regarding `num_channels` but it's hard to isolate that from your PR. If you could point me to the relevant places in this PR that need changing (with respect to yours), that'd be helpful. <|||||>@Rocketknight1 @amyeroberts I have updated the use of Adaptive Average Pooling locally but when I tried running the tests (`RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 python -m pytest tests/models/regnet/test_modeling_tf_regnet.py`) I'm running into:
```
/Users/sayakpaul/.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/pytest_sugar.py:169: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
xdist_version = LooseVersion(xdist.__version__)
/Users/sayakpaul/.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/pytest_sugar.py:170: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if xdist_version >= LooseVersion('1.14'):
Test session starts (platform: darwin, Python 3.8.2, pytest 7.0.0, pytest-sugar 0.9.4)
rootdir: /Users/sayakpaul/Downloads/transformers, configfile: setup.cfg
plugins: dash-2.1.0, sugar-0.9.4, picked-0.4.6, xdist-2.5.0, forked-1.4.0, timeout-2.1.0, hypothesis-6.36.1
collecting ...
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ ERROR collecting tests/models/regnet/test_modeling_tf_regnet.py โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
ImportError while importing test module '/Users/sayakpaul/Downloads/transformers/tests/models/regnet/test_modeling_tf_regnet.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/models/regnet/test_modeling_tf_regnet.py:25: in <module>
from ...test_configuration_common import ConfigTester
tests/test_configuration_common.py:26: in <module>
from huggingface_hub import HfFolder, Repository, delete_repo, set_access_token
E ImportError: cannot import name 'set_access_token' from 'huggingface_hub' (/Users/sayakpaul/.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/huggingface_hub/__init__.py)
------------------------------------------------------------------------------------------------------- Captured stderr --------------------------------------------------------------------------------------------------------
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
======================================================================================================= warnings summary =======================================================================================================
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/flatbuffers/compat.py:19
/Users/sayakpaul/.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=================================================================================================== short test summary info ====================================================================================================
FAILED tests/models/regnet/test_modeling_tf_regnet.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Results (0.77s):
```
@amyeroberts since you incorporated similar changed in https://github.com/huggingface/transformers/pull/17427 yesterday, wondering if you faced something similar. If so, what was the solution? <|||||>> RegNets don't have to_2tuple(). I understand what needs to be changed regarding num_channels but it's hard to isolate that from your PR. If you could point me to the relevant places in this PR that need changing (with respect to yours), that'd be helpful.
So we'd like to add a sanity check regarding `num_channels` to all vision models for which it is relevant.
In PyTorch, I add the following to the models that have an initial embedding layer (stem):
```
num_channels = pixel_values.shape[1]
if num_channels != self.num_channels:
raise ValueError(
"Make sure that the channel dimension of the pixel values match with the one set in the configuration."
)
```
However, when adding something similar to TensorFlow models, like `TFViTMAE`, this failed when using `save_pretrained`. @ydshieh pointed me that it is due to graph mode. So he suggests to use the following for TF models:
```
if tf.executing_eagerly() and num_channels != self.num_channels:
raise ValueError(
"Make sure that the channel dimension of the pixel values match with the one set in the configuration."
)
```<|||||>Thanks for explaining. It's clear now.
```if tf.executing_eagerly() and num_channels != self.num_channels:
raise ValueError(
"Make sure that the channel dimension of the pixel values match with the one set in the configuration."
)
```
Where did you add it and how did you obtain `num_channels`? Similarly as `num_channels = pixel_values.shape[1]`? @ydshieh would `shape_list()` have been a better choice for TF models here? <|||||>Well, I could not find the occurrence of `num_channels = pixel_values.shape[1]` in my local clone. My main suggestion (@amyeroberts mentioned) is about adding `if tf.executing_eagerly()` in the condition.<|||||>Also with a discussion with Niels, we found `if getattr(height, "numpy", None) and getattr(width, "numpy", None):` should be replace by `if tf.executing_eagerly()` -- that was my bad when I added TF ViT model.<|||||>If the check is being added from the `modeling_*.py` scripts, then I think it's better to access the number of channels with `self.config.num_channels`. <|||||>@Rocketknight1 @amyeroberts regarding https://github.com/huggingface/transformers/pull/17554#issuecomment-1164086078, I was able to fix it by upgrading `huggingface_hub`: `pip install -U huggingface_hub`.
I have incorporated the rest of the changes:
* Adaptive average pool -> Global average pool
* Changes as per https://github.com/huggingface/transformers/pull/17554#issuecomment-1164094973
Cc: @NielsRogge @sgugger
This part is pending:
> The TF weights need to be uploaded to the model repository on Hub. After that, I will remove the from_pt argument [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/tests/models/regnet/test_modeling_tf_regnet.py#L251) and [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/tests/models/regnet/test_modeling_tf_regnet.py#L275).
<|||||>@sgugger @Rocketknight1 anything pending on my end to move this ahead?<|||||>Thank you!
> The TF weights need to be uploaded to the model repository on Hub. After that, I will remove the from_pt argument here and here.
This part is pending then. <|||||>As soon as @ariG23498 performs a rebase the errors should go away (I can't since the forked repo's owner can only do that I guess).<|||||>Hey folks! The hub PRs for most models are open (the exception is the 10b models, which I'm having a look at), you should be able to remove the `from_pt` soon :)<|||||>Thanks, @gante! Keep us posted.
<|||||>@gante thanks for your hard work on getting the TF parameters of this model to Hub (there are a total of 32 checkpoints in case anyone's curious). I really appreciate this help!
I have removed the occurrences of `from_pt=True` from TF test script. With this, I think the PR is ready to merge an exception on the 10B checkpoint of the model. It requires some functionalities that are apparently missing but will be likely added soon. <|||||>The failing tests seem to be unrelated to this PR?<|||||>Yes, pytorch v1.12 was released a few hours ago and broke a few things here. [We have pinned pytorch to <1.12](https://github.com/huggingface/transformers/blob/main/setup.py#L163) -- rebasing with `main` should fix the problems :)<|||||>@gante we performed a rebase but the `pipelines` test for Torch seems to be still failing. <|||||>That error seems unrelated to this PR, so I think you could probably merge anyway.<|||||>Over to you @gante then since I can't merge :D <|||||>The error is on `main` as well -- merging ๐ค |
transformers | 17,553 | closed | [deepspeed / testing] reset global state | This PR:
- adds a reset to the global state at the end of each in-pytest deepspeed test (and API to do that)
- fixes `test_load_best_model_zero2_fp16` to run the trainingargs first to get the ds config state right
cc: @ydshieh, who discovered the issues on CI
@sgugger | 06-03-2022 21:59:18 | 06-03-2022 21:59:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,552 | closed | Add examples telemetry | # What does this PR do?
This PR adds a function to send telemetry to help us track the examples usage and uses it in the current examples. For now, I've just added in the PyTorch `run_glue.py`, but will paste it in all other examples if you agree with the format/data tracked. | 06-03-2022 18:46:53 | 06-03-2022 18:46:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for working on this @sgugger - that's super useful! |
transformers | 17,551 | closed | Character limit when tokenizing? | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")
tokens = tokenizer(["Hello","Hello"*20, "Hello"*26], truncation=True)
### Expected behavior
```shell
I noticed that when using DistilBertTokenizerFast, there appears to be a character limit of tokenization for a word. For example:
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")
tokens = tokenizer(["Hello","Hello"*20, "Hello"*26], truncation=True)
returns:
{'input_ids': [[101, 31178, 102],
[101, 31178, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 102],
[101, 100, 102]],
'attention_mask': [[1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1]]}
The third word here was assigned three tokens, which doesn't seem right to me. The second word was assigned many more tokens when the character length was at 100. Is this the intended behaviour of the tokenizer? This seems to happen with words longer than 100 characters.
```
| 06-03-2022 18:08:22 | 06-03-2022 18:08:22 | cc @SaulLu <|||||>Hi @luadamek ,
Thank you for your detailed issue! I think you found a limitation of the Wordpiece model of the `tokenizers` library.
Indeed, looking at the content of the tokens we can see that in the last case the text is identified as unknown:
```python
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")
tokens = tokenizer(["Hello","Hello"*20, "Hello"*26])
for _input_ids in tokens.input_ids:
print(tokenizer.convert_ids_to_tokens(_input_ids))
# ['[CLS]', 'Hello', '[SEP]']
# ['[CLS]', 'Hello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '[SEP]']
# ['[CLS]', '[UNK]', '[SEP]']
```
Personally I think that this is a trade-off that was made for performance reasons. If you think this is a problem worth discussing further, the best thing to do would be to open an issue on the library that codes the model: https://github.com/huggingface/tokenizers. :relaxed: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @SaulLu
That makes sense. Apologies for the late reply. I was worried there might be a dirtier bug hiding underneath this. If it's just for performance reasons, this seems totally reasonable.
Thanks! |
transformers | 17,550 | closed | [deepspeed] fix load_best_model test | Fixes https://github.com/huggingface/transformers/pull/17151 to run `from_pretrained` with emulated dist env.
While it was working on my setup on CI it failed with:
```
tests/deepspeed/test_deepspeed.py:756: in test_load_best_model
model = T5ForConditionalGeneration.from_pretrained(T5_TINY)
src/transformers/modeling_utils.py:2116: in from_pretrained
init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py:693: in __init__
self.local_device = torch.device('cuda:{}'.format(os.environ["LOCAL_RANK"]))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = environ({'NPP_VERSION': '11.3.2.139', 'NVIDIA_VISIBLE_DEVICES': 'all', 'DALI_BUILD': '2054952', 'GITHUB_WORKSPACE': '/...RRENT_TEST': 'tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_best_model_zero3_fp16 (call)'})
key = 'LOCAL_RANK'
def __getitem__(self, key):
try:
value = self._data[self.encodekey(key)]
except KeyError:
# raise KeyError with the original key value
> raise KeyError(key) from None
E KeyError: 'LOCAL_RANK'
```
the test is exactly the same, just moved a big chunk of it into the `with mockenv_context` - no code changes
@sgugger | 06-03-2022 18:08:18 | 06-03-2022 18:08:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,549 | closed | fix `train_new_from_iterator` in the case of byte-level tokenizers | # What does this PR do?
This PR aims at allowing to use `train_new_from_iterator` when the original tokenizer backend was using a ByteLevel pre-tokenization. Before this fix, the vocabulary learn wasn't correct because the initial bytes were missing.
Fixes #17371
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Would love to have the feedback of @LysandreJik and @sgugger on the tokenizer part and @Narsil on the pipeline tests (and also the tokenizer if you have more time!)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-03-2022 18:06:53 | 06-03-2022 18:06:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,548 | closed | Update index.mdx | # What does this PR do?

This removes the extra space in front of the new /support image. Thank you for the suggestion @sgugger !
Got too excited to merge the previous image update and missed this housekeeping fix. | 06-03-2022 18:04:09 | 06-03-2022 18:04:09 | @sgugger I don't think I agree with the suggestion. This is not my area of expertise ๐
, but I think HTML elements should be indented (e.g. `<img>` child element indented within the `<a>` parent element), even on a markdown file. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17548). All of your documentation changes will be reflected on that endpoint.<|||||>It wasn't indented before the first PR and was working perfectly fine.<|||||>Yes, it works in either case. I think it's usually suggested to use indentation in HTML elements for readability purposes. No strong opinion on this, though!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,547 | closed | Update index.mdx | # What does this PR do?
This PR updates our Expert Acceleration Program image with a new image featuring our experts.
This is similar to our [Transformers/README.md image update](https://github.com/huggingface/transformers/pull/16615) that has proven to be successful.

| 06-03-2022 17:37:47 | 06-03-2022 17:37:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,546 | closed | fix(typo): Update run_glue_no_trainer.py | @sgugger @patil-suraj | 06-03-2022 16:12:50 | 06-03-2022 16:12:50 | |
transformers | 17,545 | closed | Repetitive sampling generations from opt1.3b but not from opt350m | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.0-104-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@LysandreJik, @patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import OPTForCausalLM, GPT2Tokenizer
prompt = ["Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?"]
print("=====OPT 1.3b Sampling=====")
model = OPTForCausalLM.from_pretrained("facebook/opt-1.3b")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer(prompt, return_tensors="pt", padding=True)
generate_ids = model.generate(inputs=inputs.input_ids, attention_mask=inputs.attention_mask,
max_new_tokens=100, do_sample=True, temperature=1.0)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True))
'''
Generations are repetitive, e.g.,
["Hey, are you consciours? Can you talk to me? Please?\nI'm sorry but I'm afraid I'm incapable of talking to you :(",
'Hey, are you consciours? Can you talk to me?\nYeah sure thing buddy! Whats your discord?',
"Hey, are you consciours? Can you talk to me?\nI'm sorry I'm not consciours :(",
'Hey, are you consciours? Can you talk to me? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please?',
'Hey, are you consciours? Can you talk to me? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please?']
'''
print("=====OPT 350m Sampling=====")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
inputs = tokenizer(prompt, return_tensors="pt", padding=True)
generate_ids = model.generate(inputs=inputs.input_ids, attention_mask=inputs.attention_mask,
max_new_tokens=100, do_sample=True, temperature=1.0)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True))
'''
Generations look normal, e.g.,
["Hey, are you consciours? Can you talk to me?\nThis is a repost, stop doing it lol\nI'm not trying to repost.... it's just happened to me like that, so I thought I'd do it, too.",
"Hey, are you consciours? Can you talk to me? A few months ago there seemed to be a large influx of new and existing consciourns around Melbourne. All we see at the top are people wanting to participate. They've got no skills, no money, no interest and they're not interested in anything other than the show and being popular. I'm not even kidding about that!\n\nI understand that the show is quite difficult and is full of competition, but does your hobby really need such an appeal?\n\nFor example, my",
"Hey, are you consciours? Can you talk to me?\nThe reason I can't talk to him is because I'm in a hurry. He's at the airport, waiting for his driver to come back (he doesn't drive, and he's not comfortable being in someone's car and trying to take it with him). I think I'm overthinking this one.\nIf you need someone to talk to, please PM me.",
"Hey, are you consciours? Can you talk to me?\nNo I'm not\nMy mistake, I've been looking around for a little while and don't have a chance to look up the exact wording. I just noticed your 'cognizant'.\nno worries and yea it's a common one so let's move on haha",
"Hey, are you consciours? Can you talk to me? I'm not a cop, but I've dealt with one or two\nOf course. I am in Texas, but we're not all like that."]
'''
```
### Expected behavior
```shell
Sampling generations from opt1.3b are often very repetitive (see the example above).
It does not seem to happen by chance. If you run the code multiple times, similar patterns will always appear.
It is unexpected because usually standard sampling should produce diverse results.
Interestingly, I did not see this issue when using opt350m.
```
| 06-03-2022 15:28:54 | 06-03-2022 15:28:54 | Interesting, thanks for opening the issue!
@stephenroller @suchenzang have you see similar behavior with the fairseq model ? Could this be due to a bug in porting the model?<|||||>One thing I'd like to add here is that we enable `topk=50` by default -> does changing this value maybe help? But it indeed looks like a modeling issue<|||||>Related issue https://github.com/facebookresearch/metaseq/issues/136<|||||>Should be fixed now in https://github.com/huggingface/transformers/releases/tag/v4.20.1<|||||>Thanks for the update! After this patch, it should be able to pass the end-to-end regression test between metaseq and huggingface (https://github.com/facebookresearch/metaseq/issues/136)? I dug through the convo and it seems this is on @stephenroller @ArthurZucker @thomasw21 's radar? โค๏ธ
It would be great if we can load the model into metaseq directly without merging if possible, so that we can catch subtle bug in conversion.<|||||>thanks so much for fixing it! @patrickvonplaten |
transformers | 17,544 | closed | Descriptors cannot not be created directly. | Hi @patrickvonplaten
I'm trying to import the following:
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
and get the error:
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
All I can find on the web to fix it is to set:
!protobuf==3.20.1
but it's not working for me. Sorry I can't manage to fix it myself. | 06-03-2022 14:51:07 | 06-03-2022 14:51:07 | plus one<|||||>This error doesn't show up when you run the code on google colab. you should try that @gunjan075 in the meantime<|||||>I have the same problem with the T0 model. My protobuf version is `4.21.1`. Any idea how to fix it? Cannot use Google Colab, only desktop machines.
Environment specs:
```
numpy==1.22.4
protobuf==4.21.1
sentencepiece==0.1.96
tokenizers==0.12.1
torch==1.11.0
tqdm==4.64.0
transformers==4.19.2
```<|||||>Update: I used `protobuf==3.20.0` and it worked. It's not ideal but it will do for now.<|||||>Indeed please make sure to use the correct `protobuf` version . Google's protobuf release broke a lot of codebases - even TF https://github.com/tensorflow/tensorflow/issues/56077 .
Please make sure to use `"protobuf<=3.20.1"`
Just FYI @sgugger <|||||>This is all fixed on the main branch FYI.<|||||>There was a patch release v4.19.3 just done to fix this issue FYI. <|||||>Awesome
Thank you
On Thu, Jun 9, 2022, 12:12 Sylvain Gugger ***@***.***> wrote:
> There was a patch release v4.19.3 just done to fix this issue FYI.
>
> โ
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/17544#issuecomment-1151388804>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AAZHGWG6MCJIES4XVCBN72LVOIQWFANCNFSM5XZIE55A>
> .
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>With protobuf 4.21.8 and debertav2, this error still appears
```
AutoTokenizer.from_pretrained('microsoft/deberta-v3-small')
```
```
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
```<|||||>This error still happens for me. Downgrading is not an elegant solution because it breaks other packages that rely on latest version of protobuf e.g. google-cloud-documentai. Someome please fix properly.
I just created a new issue for this. https://github.com/huggingface/transformers/issues/21128<|||||>I met this issue when using `AutoTokenizer`, I fix it by specific the tokenizer to `LlamaTokenizer`.<|||||>
> I met this issue when using `AutoTokenizer`, I fix it by specific the tokenizer to `LlamaTokenizer`.
This absolutely save my day! I meet with this issue for LLMs/vicuna. And this is a useful workaround. |
transformers | 17,543 | closed | word_ids() is not available when using FlauBERT | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@SaulLu
Hi HuggingFace community,
I'm trying to fine-tune models for token classification task.
As french, I want to try different models trained in french or in multi languages.
I succeded to train camembert !
However, when using flauBERT, I have an issue when I align labels to the tokens after tokenisation.
I used to function :
`def tokenize_and_align_labels(examples):
tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples[f"ner_tags"]):
word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word.
previous_word_idx = None
label_ids = []
for word_idx in word_ids: # Set the special tokens to -100.
if word_idx is None:
label_ids.append(-100)
elif word_idx != previous_word_idx: # Only label the first token of a given word.
label_ids.append(label[word_idx])
else:
label_ids.append(-100)
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs`
Eventhough the function worked perfectly for camembert, I have the following error when using flauBERT:
`word_ids() is not available when using Python-based tokenizers`
I don't know if it's a tokenizer issue or if I have to write a new function to align the labels
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the error:
1. Import flaubert model
`model_name = "flaubert/flaubert_base_cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)`
2. Run the function
`def tokenize_and_align_labels(examples):
tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples[f"ner_tags"]):
word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word.
previous_word_idx = None
label_ids = []
for word_idx in word_ids: # Set the special tokens to -100.
if word_idx is None:
label_ids.append(-100)
elif word_idx != previous_word_idx: # Only label the first token of a given word.
label_ids.append(label[word_idx])
else:
label_ids.append(-100)
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs`
### Expected behavior
```shell
Get a new dataset with the labels align to the tokens after tokenization
```
| 06-03-2022 12:46:33 | 06-03-2022 12:46:33 | Hi @rgriot ,
You do get this error because Flaubert does not have a fast version implemented (yet!) in the library and unfortunately only fast versions support this feature.
If you're interested, feel free to work on a PR to add this fast version to FlauBERT! :hugs: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,542 | closed | Lazy load in pipelines | ### Feature request
Currently pipeline classes expect model and tokenizer objects. I think it'd be beneficial to have lazy load option with load/unload options implemented in the base pipeline class. That would yield better memory utilization especially in custom classes that are derived from primitive pipelines. E.g for example for some postprocess custom operations, you may not need the loaded model for the pipeline, and use additional space which would decrease memory utilization or may cause in large swap which also decreases performances, if the model is redundant for your use case and if the model's work is finished, then we should be able to unload it within the scope.
### Motivation
Allow better memory utilization.
### Your contribution
I'd like to contribute for this FR as my schedule allows. | 06-03-2022 12:41:14 | 06-03-2022 12:41:14 | cc @Narsil <|||||>Hi @devrimcavusoglu ,
I am unsure I understand the actual issue, do you have a reproducing script ?
For `preprocess` / `postprocess` you never need the model (everything model related should be in `_forward`).
Usually pipelines are intended to run on many example (either live behind a http server or on a dataset) so loading/unloading would be innefficient.
I am curious to understand better your use case to see how we could support that.
Cheers !<|||||>Hi @Narsil,
> For preprocess / postprocess you never need the model (everything model related should be in _forward).
Actually, I was emphasizing what you said here, and I meant instantiation of the pipeline object, which requires model & tokenizer objects (not lazy load). Having said this, consider the current situation and assume that "some-task" is defined in `transformers`
```python
# normally we do this
from transformers import pipeline
my_pipeline = pipeline("some-task", model="bert-base-cased", tokenizer="bert-base-cased")
```
with a pipeline class, we can alternatively perform this without the factory function `pipeline()` like this
```python
# and alternatively this can be done
model = AutoModel.from_pretrained("bert-base-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
my_pipeline = SomeTaskPipeline(model, tokenizer)
```
I gave the alternative option (2) to emphasize my example. Now consider that I want to write a custom pipeline class `SomeTask2Pipeline` by extending an existing pipeline class `SomeTaskPipeline` (although this is not required)
```python
class SomeTask2Pipeline(SomeTaskPipeline):
pass
```
Now, afaik there is no way for me to inject `SomeTask2Pipeline` into `transformers` such that I can create a pipeline with the `pipeline()` factory function. If there is please let me know. Thus, I'm forced to use the second alternative above (2), as follows
```python
# and alternatively this can be done
model = AutoModel.from_pretrained("bert-base-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
my_pipeline = SomeTask2Pipeline(model, tokenizer)
```
I can still wrap this class/instantiation of `SomeTask2Pipeline` into another class and still introduce lazy loading which is fine. However, I now come to my second point, that we probably do not need model in any methods other than `forward()` and called functions from `forward()`, thus we should be able to unload/unset the model, this is more needy in custom pipelines (`SomeTask2Pipeline`) as I want to use two seperate models inside the pipeline e.g consequently to process former's one outputs, having two models in memory is not a good thing as one of them is totally redundant.
All in all, this behavior can be considered as something that is not a big deal for a user to do it actually.<|||||>Hi @devrimcavusoglu ,
Sorry but I still don't really understand what you're doing, could you share a minimal example ? I don't really understand how the 2 models you describe interact.
You can use your custom class if you want (entirely bypassing the need to pass the `task` argument to `pipeline(...)`)
```python
my_pipeline = pipeline("some-task", model="bert-base-cased", tokenizer="bert-base-cased", pipeline_class=SomeTask2Pipeline)
```
this was intended as simple overloading of pre/processing so I am not sure it fits your use case.<|||||>> Hi @devrimcavusoglu ,
>
> Sorry but I still don't really understand what you're doing, could you share a minimal example ? I don't really understand how the 2 models you describe interact.
>
> You can use your custom class if you want (entirely bypassing the need to pass the `task` argument to `pipeline(...)`)
>
> ```python
> my_pipeline = pipeline("some-task", model="bert-base-cased", tokenizer="bert-base-cased", pipeline_class=SomeTask2Pipeline)
> ```
>
> this was intended as simple overloading of pre/processing so I am not sure it fits your use case.
Thank you @Narsil. Actually what you stated is addressing my first point, and I can handle my second point in my custom pipeline class, so I will close this issue for now. It can be re-opened though if there is a need for the following.
As a side note, I'll try to elaborate my 2nd point more clearly. In short, we do not need the model in some methods, e.g `preprocess()` or `postprocess()`, but the model is loaded into the memory with the object instantiation at `__init__`, not when used in the pipeline classes. Thus, my point is to load the model when used (before calling `forward()`), and unload the model afterwards, when the forward returns. But I can implement this in my custom class, or generally it can be implemented by the user, but having those load/unload model options in the pipeline classes would give better memory utilization to the users, especially for custom usecases (so I'd suggest if these load/unload methods are implemented, then they should be public methods, where user can call at desired points.). I hope this clarification gives better insight about what I was considering.
A pseudo-ish minimal example, assume that I add my additional_postprocess to the run_single such that I will apply another task to modify my inputs
```python
ANOTHER_TASK_MODEL_NAME = "some-model"
class SomeTask2Pipeline(SomeTaskPipeline):
def run_single(...):
self.preprocess()
self.forward()
self.postprocess()
self.my_additional_postprocess()
return ...
def my_additional_postprocess(self, inputs):
my_pipeline = pipeline("another-task", model=ANOTHER_TASK_MODEL_NAME, tokenizer=ANOTHER_TASK_MODEL_NAME)
# At this point two models in the memory the first one is the model for SomeTask2Pipeline
# and the other one is the model for AnotherTask.
# Note that the model for SomeTask2Pipeline is no longer needed.
out = my_pipeline(**inputs)
return out
```
with load/unload methods it'd be,
```python
ANOTHER_TASK_MODEL_NAME = "some-model"
class SomeTask2Pipeline(SomeTaskPipeline):
def run_single(...):
self.preprocess()
self.load_model() # loads the model into the memory
self.forward()
self.unload_model() # unloads the model from the memory
self.postprocess()
self.my_additional_postprocess()
return ...
def my_additional_postprocess(self, inputs):
my_pipeline = pipeline("another-task", model=ANOTHER_TASK_MODEL_NAME, tokenizer=ANOTHER_TASK_MODEL_NAME)
# At this point only the model associated with AnotherTask is in the memory.
out = my_pipeline(**inputs)
return out
```
This is still can be managed by the user though, so you could say that one can apply the additional postprocess after having the output from the pipeline, and that's why I'm closing the issue. <|||||>> Thus, my point is to load the model when used (before calling forward()), and unload the model afterwards, when the forward returns. But I can implement this in my custom class, or generally it can be implemented by the user, but having those load/unload model options in the pipeline classes would give better memory utilization to the users, especially for custom usecases (so I'd suggest if these load/unload methods are implemented, then they should be public methods, where user can call at desired points.). I hope this clarification gives better insight about what I was considering.
Thanks for the pseudo code much clearer !
I think it makes sense in your case, but we shouldn't do it by default since loading/unloading models is usually quite slow (compared to inferring on them) especially on GPU. so anyone that can afford to have everything loaded into CPU/GPU RAM should do so.
If you are short on either RAM memory, then yes, loading/unloading is necessary.
Still thanks for your input, if more usage like yours seems to be developing, maybe we could add a flag or something in the future. |
transformers | 17,541 | closed | [RAG] token discrepancy between question token which should be input to generator and the one actually encoded in postprocess_docs() | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
RAG, DPR: @patrickvonplaten, @lhoestq
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In the [RAG implementation](https://github.com/huggingface/transformers/tree/v4.19.2/src/transformers/models/rag), especially in [here](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/rag/retrieval_rag.py#L466), input question and retrieved documents by DPR are concatenated in the form like
` doc_title + / + doc_text + // + input_string (question)`,
in `postprocess_docs(docs, input_strings, ... )` and this is the input to the generator. (eg, BART, T5 ).
This postprocess_docs() receives input_strings decoded by question tokenizer at [here](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/rag/retrieval_rag.py#L610-L613).
However, this postprocess may cause token mismatch in the case that question and generator tokenizer are different.
To see this, I choose basic retriever and generator as follows (original paper setting).
```
from transformers import DPRQuestionEncoderTokenizer, BartTokenizer
dpr_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
bart_tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
```
and then, following the implementation, compare their token ids each other
```
def check_ids(q_tokenizer, g_tokenizer, text):
print(f'>> {g_tokenizer.tokenize(text)}')
ids = q_tokenizer(text)['input_ids']
decode = q_tokenizer.decode(ids, skip_special_tokens=True)
print(f'>> {g_tokenizer.tokenize(decode)}')
_ids = g_tokenizer(decode)['input_ids']
print(ids == _ids)
```
As a sample text, `text = "Don't you love Transformers? We sure do."`
```
check_ids(dpr_tokenizer, bart_tokenizer, text)
>> ['Don', "'t", 'ฤ you', 'ฤ love', 'ฤ Transformers', '?', 'ฤ We', 'ฤ sure', 'ฤ do', '.']
>> ['don', "'t", 'ฤ you', 'ฤ love', 'ฤ transform', 'ers', '?', 'ฤ we', 'ฤ sure', 'ฤ do', '.']
False
```
In conclusion, I would be wrong, but postprocess_docs() should be better if it receives the raw question string without tokenizer encode-decode process.
### Expected behavior
```shell
Question string part of the input to the generator is expected to be same with the question (input) to the RAG model.
```
| 06-03-2022 09:53:45 | 06-03-2022 09:53:45 | Interesting question! @ola13 do you have a good answer here maybe?
Intuitively I would think that this shouldn't be a problem since the model was trained also this way, but think @ola13 knows best here :-) <|||||>Hi, sorry for the delayed response! Thanks for your question @4kasha!
To answer your point here:
> In conclusion, I would be wrong, but postprocess_docs() should be better if it receives the raw question string without tokenizer encode-decode process.
This is in fact what happens - the `postprocess_docs` function does receivesthe question as a string and the documents as strings and only encodes them once with the generator's tokenizer to create inputs to the generator model. If we were passing tokens between the retriever and the generator we would indeed have a mismatch - we're not doing this though, and this is by design. Making a round-trip (first decoding retrieved docs tokens to pure strings and then encoding them with the generator tokenizer gives us the flexibility to combine retrievers and generators which don't have matching tokenization schemes.
I hope this clarifies things, but feel free to re-open the issue if there's anything unclear still. |
transformers | 17,540 | open | TFRemBertModelTest.test_resize_token_embeddings not working | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.9.11
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@gante @Rocketknight1
### Reproduction
`TFRemBertModelTest.test_resize_token_embeddings` has CI failed [here](https://github.com/huggingface/transformers/runs/6682139350?check_suite_focus=true)
This method (called during `resize_token_embeddings`)
https://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_tf_utils.py#L1449
assumes that `word_embedding_weight` has the same shape as `old_lm_head_decoder`, but this is not the case for `TFRemBertModel`, as it has `input_embedding_size` and `output_embedding_size` in config.
An PR #17511 was opened, but we decided to not merge it. Instead, a cleaning up of TF embeddings should be done first.
### Expected behavior
```shell
`resize_token_embeddings` should work for `TFRemBertModelTest`
```
| 06-03-2022 09:49:42 | 06-03-2022 09:49:42 | @gante Feel free to add WIP tag :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @ydshieh, I was able to reproduce the issue by running `python -m pytest -n auto --dist=loadfile -s -v ./tests/models/rembert` from the root directory on the latest transfomers version v4.23.1. The `test_resize_token_embeddings` on `TFRemBertModelTest` seemed to pass now without the merged code from https://github.com/huggingface/transformers/pull/17511. By any chance the cleanup of TF embeddings has been done and we could now close this issue? Let me know if I miss anything.
**Test output:**
```
tests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_resize_token_embeddings If you want to use `TFRemBertForCausalLM` as a standalone, add `is_decoder=True.`
If you want to use `TFRemBertForCausalLM` as a standalone, add `is_decoder=True.`
If you want to use `TFRemBertForCausalLM` as a standalone, add `is_decoder=True.`
[gw1] PASSED tests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_resize_token_embeddings
tests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_save_load All model checkpoint layers were used when initializing TFRemBertModel.
```<|||||>Hi @katiele47 This test is still failing on our CI. Could you share your environment information, by running `python utils\print_env.py`. Also, could you share your hardware information (CPU, GPU) please? Thank you!<|||||>@ydshieh Sorry for the delay in response! This is what I got for environment (may ignore the last error):
```
Python version: 3.8.9 (default, Oct 26 2021, 07:25:54)
[Clang 13.0.0 (clang-1300.0.29.30)]
transformers version: 4.24.0.dev0
Torch version: 1.12.1
Cuda available: False
Cuda version: None
CuDNN version: None
Number of GPUs available: 0
Traceback (most recent call last):
File "utils/print_env.py", line 39, in <module>
print("NCCL version:", torch.cuda.nccl.version())
File "/Users/Bibi/transformers/menv/lib/python3.8/site-packages/torch/cuda/nccl.py", line 35, in version
ver = torch._C._nccl_version()
AttributeError: module 'torch._C' has no attribute '_nccl_version'
```
GPU: Intel Iris Plus Graphics 1536 MB
CPU: 2 GHz Quad-Core Intel Core i5
Platform: Mac version 11.6
Let me know if you need any further information! Thanks. |
transformers | 17,539 | closed | Fx support for Deberta-v[1-2], Hubert and LXMERT | # What does this PR do?
Adds `torch.fx` tracing support for:
- Deberta v1
- Deberta v2
- Hubert
- LXMERT | 06-03-2022 09:45:06 | 06-03-2022 09:45:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,538 | closed | Will data be loaded into multiple GPUs automatically? | Will models that use TensorFlow as a backend be loaded into all available GPU memory or not?
I'm interested in training the OPT model's 1.3b and 30b variants. However there is no cloud computing GPU i can use which has more than 16gb at a time. The problem is that each individual sample in the batch is at least 24gb, when making use of tensorflow, will the memory be distributed among GPUs or will I still get an allocation error? | 06-03-2022 08:20:21 | 06-03-2022 08:20:21 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe of interest for @Rocketknight1 @gante<|||||>Hi @Leli1024 ๐ By default, TensorFlow will allocate memory in all available GPUs, but will only execute in one GPU -- meaning that what you want to do won't happen by default. At the moment, we have no multiple GPU examples (@Rocketknight1 correct me if I'm wrong), so you will have to build a custom solution for yourself.
Here is a TensorFlow guide for it: https://www.tensorflow.org/guide/distributed_training<|||||>We're co-ordinating with the TensorFlow team to make some examples of exactly this process available using the new `DTensor` API introduced in TensorFlow 2.9. I suspect it'll be another month or so before they're available, though!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,537 | closed | Loading sharded model in `tf` from pytorch checkpoints | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.13.0.dev20220521 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
```
### Who can help?
@LysandreJik I am not sure who to ping on that ๐
Loading a big model from the hub in tensorflow is impossible if the model is sharded.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
>>> tf_model = TFOPTModel.from_pretrained("facebook/opt-13b",from_pt = True)
```
```python
Traceback (most recent call last):
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_tf_utils.py", line 1789, in from_pretrained
resolved_archive_file = cached_path(
File "/home/arthur_huggingface_co/transformers/src/transformers/utils/hub.py", line 282, in cached_path
output_path = get_from_cache(
File "/home/arthur_huggingface_co/transformers/src/transformers/utils/hub.py", line 486, in get_from_cache
_raise_for_status(r)
File "/home/arthur_huggingface_co/transformers/src/transformers/utils/hub.py", line 409, in _raise_for_status
raise EntryNotFoundError(f"404 Client Error: Entry Not Found for url: {request.url}")
transformers.utils.hub.EntryNotFoundError: 404 Client Error: Entry Not Found for url: https://huggingface.co/facebook/opt-13b/resolve/main/pytorch_model.bin
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_tf_utils.py", line 1833, in from_pretrained
raise EnvironmentError(
OSError: facebook/opt-13b does not appear to have a file named pytorch_model.bin.
```
The following script has to be used in order to convert the weights:
```python
path = "facebook/opt-13b"
pt_model = OPTModel.from_pretrained(path)
pt_model.save_pretrained(path,max_shard_size = "1000GB")
tf_model = TFOPTModel.from_pretrained(path,from_pt = True)
tf_model.save_pretrained(path,save_config=False)
```
### Expected behavior
```shell
Automatically do this in background?
```
| 06-03-2022 07:14:03 | 06-03-2022 07:14:03 | Indeed, nice catch! Putting @sgugger in the loop<|||||>Simple reproducer:
```py
from transformers import TFBertModel
model = TFBertModel.from_pretrained("sgugger/bert-sharded")
```<|||||>Putting it on my TODO (might take a few weeks as I have more urgent items, and we don't have a good solution on the TF side for large models right now anyway).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Fixing this ๐ <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,536 | closed | [WIP] Add ResNets in TF | This PR adds the OG of the OGs when it comes to computer vision models: ResNet. There are a few rough edges we need to figure out in the cross-loading of weights.
When I ran `RUN_SLOW=1 python -m pytest tests/models/resnet/test_modeling_resnet.py`, during the integration test, it complained about weight mismatches for all the layers.
So, naturally, when I did a little standalone integration test locally with the TF model the same issue surfaced and the logit assertion failed. FYI, it fails for the PT model too as mentioned earlier. Here's my integration test for TF model:
```py
from PIL import Image
import numpy as np
from src.transformers.models.resnet.modeling_tf_resnet import TFResNetForImageClassification
from transformers import AutoFeatureExtractor
def prepare_img():
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
return image
feature_extractor = AutoFeatureExtractor.from_pretrained(
"microsoft/resnet-50"
)
model = TFResNetForImageClassification.from_pretrained(
"microsoft/resnet-50", from_pt=True
)
image = prepare_img()
inputs = feature_extractor(images=image, return_tensors="tf")
outputs = model(**inputs)
expected_shape = [1, 1000]
assert outputs.logits.shape == expected_shape
expected_slice = np.array([-11.1069, -9.7877, -8.3777])
np.testing.assert_allclose(outputs.logits[0, :3].numpy(), expected_slice, atol=1e-4)
```
@amyeroberts @FrancescoSaverioZuppichini please advise here. After this issue is resolved I will start working on the test cases. | 06-03-2022 06:17:09 | 06-03-2022 06:17:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Should have checked before.
Closing in spirit of https://github.com/huggingface/transformers/pull/17427 |
transformers | 17,535 | closed | require_accelerate wrapper missing? | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.13.0.dev20220521 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@LysandreJik Sorry again I am not really sure who is responsible of that ๐
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
If the accelerate library is not installed, loading a model fails with the following error :
```python
pt_model = OPTModel.from_pretrained(path)
```
```python
Traceback (most recent call last):
File "src/transformers/convert_opt.py", line 4, in <module>
pt_model = OPTModel.from_pretrained(path)
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_utils.py", line 2166, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
save_offload_index(offload_index, offload_folder)
```
Maybe a `@require_accelerate` is missing in `"/home/arthur_huggingface_co/transformers/src/transformers/modeling_utils.py"` , but my understanding is very limited.
### Expected behavior
```shell
Model should be loaded or accelerate should be in requirements.txt
```
| 06-03-2022 06:13:29 | 06-03-2022 06:13:29 | As seen offline, cannot reproduce this but let's keep it open in case someone runs into the same issue/we identify a reproducible example.<|||||>Okay found the reproducing script :) (you need to have a good internet connection) Every sharded model would work I think
```python
>>> from transformers import OPTModel
>>> model = OPTModel.from_pretrained("facebook/opt-13b")
```
```python
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 611/611 [00:00<00:00, 197kB/s]
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 51.0k/51.0k [00:00<00:00, 309kB/s]
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 9.29G/9.29G [08:48<00:00, 18.9MB/s]
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 9.18G/9.18G [10:26<00:00, 15.7MB/s]
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5.47G/5.47G [07:18<00:00, 13.4MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/arthurzucker/Work/transformers/src/transformers/modeling_utils.py", line 2166, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "/Users/arthurzucker/Work/transformers/src/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
```
<|||||>[](https://blog.ethereum.org/2022/06/03/ropsten-merge-ttd/)<|||||>This is normally fixed on main, are you sure you have the latest?<|||||>You are right sorry about this ๐๐ป |
transformers | 17,534 | closed | How to use finetuner.py to train t5-large model | ### System Info
```shell
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-177-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <Yes>
- Using distributed or parallel set-up in script?: <Yes>
```
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Follow the steps [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400)
git clone https://github.com/huggingface/transformers
cd transformers
git checkout 7e662e6a3be0ece4
cd examples/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
pip install -r requirement.txt
cd ../..
pip install .
cd examples/seq2seq
pip install fairscale, deepspeed==[0.3.10](https://github.com/huggingface/transformers/issues/9996#issuecomment-773725303)
#[run script 1](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400)
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500
# Error trace1
```
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 152, in main
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 52, in __init__
self._add_dataclass_arguments(dtype)
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 93, in _add_dataclass_arguments
elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
File "/home/zeyi/.conda/envs/test/lib/python3.8/typing.py", line 774, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 152, in main
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 52, in __init__
self._add_dataclass_arguments(dtype)
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 93, in _add_dataclass_arguments
elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
File "/home/zeyi/.conda/envs/test/lib/python3.8/typing.py", line 774, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
Killing subprocess 69967
Killing subprocess 69968
Traceback (most recent call last):
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/zeyi/.conda/envs/test/bin/python', '-u', './finetune_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-large', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--data_dir', 'wmt_en_ro', '--do_eval', '--do_train', '--evaluation_strategy=steps', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_eval_batch_size', '16', '--per_device_train_batch_size', '16', '--predict_with_generate', '--eval_steps', '25000', '--sortish_sampler', '--task', 'translation_en_to_ro', '--test_max_target_length', '128', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '2000', '--n_val', '500']' returned non-zero exit status 1.
```
#[run script 2](https://github.com/huggingface/transformers/issues/10036#issue-802491462)
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./run_seq2seq.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --dataset_name wmt16 --dataset_config "ro-en" --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500
#Error trace2
```
Traceback (most recent call last):
File "./run_seq2seq.py", line 499, in <module>
main()
File "./run_seq2seq.py", line 212, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 166, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--freeze_embeds', '--test_max_target_length', '128', '--n_train', '2000', '--n_val', '500']
Traceback (most recent call last):
File "./run_seq2seq.py", line 499, in <module>
main()
File "./run_seq2seq.py", line 212, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 166, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--freeze_embeds', '--test_max_target_length', '128', '--n_train', '2000', '--n_val', '500']
Killing subprocess 72522
Killing subprocess 72523
Traceback (most recent call last):
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/zeyi/.conda/envs/test/bin/python', '-u', './run_seq2seq.py', '--local_rank=1', '--model_name_or_path', 't5-large', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--do_eval', '--do_train', '--evaluation_strategy=steps', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_eval_batch_size', '16', '--per_device_train_batch_size', '16', '--predict_with_generate', '--eval_steps', '25000', '--sortish_sampler', '--task', 'translation_en_to_ro', '--test_max_target_length', '128', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '2000', '--n_val', '500']' returned non-zero exit status 1.
```
### Expected behavior
I hope that it will run the model with deepspeed or shared techniques. Actually I want to train the t5-11b model and want to change the dataset dir to my dataset but even can not reproduce what @stas00 shared before.
| 06-03-2022 04:37:48 | 06-03-2022 04:37:48 | This approach you tried is very old and is not supported any longer.
Please switch to modern tools and it should just work.
Here are a few current examples:
straight DDP:
```
rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 \
--learning_rate 3e-5 --logging_first_step --logging_steps 500 \
--max_source_length 128 --max_target_length 128 --val_max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 2 \
--predict_with_generate --sortish_sampler --source_lang en --target_lang ro \
--dataset_name wmt16 --dataset_config ro-en --source_prefix \
'translate English to Romanian: ' --warmup_steps 50 --max_train_samples 50
```
same with deepspeed
```
rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --overwrite_output_dir --max_source_length 128 \
--max_target_length 128 --val_max_target_length 128 --do_train \
--num_train_epochs 1 --per_device_train_batch_size 2 --learning_rate 3e-3 \
--dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix 'translate English to Romanian: ' --max_train_samples 50 \
--deepspeed tests/deepspeed/ds_config_zero3.json --save_steps 1
```
make sure it works, adapt to your data, and then replace with the large model size.
Please let me know if this unblocked you and please share the link where you found the old info so that we could update that thread with the new information.
Thank you
<|||||>Hi @stas00 , the order info comes from [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400).
I run the following scripts to install required package:
> pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
>
> git clone https://github.com/huggingface/transformers
> pip install .
>
> pip install fairscale, deepspeed
>
> pip install -r /exmaples/pytorch/translation/requirement.txt
>os.environment['CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
I tried the straight DPP and deepspeed scripts, all says the following error though I add "--per_device_train_batch_size 2":
> run_translation.py: error: argument --per_device_train_batch_size: expected one argument.
What's more, I want to run language inference task with t5 model and do you have any recommendation which example script should I use?<|||||>> run_translation.py: error: argument --per_device_train_batch_size: expected one argument.
oops, my bad - I fixed the examples in my reply https://github.com/huggingface/transformers/issues/17534#issuecomment-1146249686
> What's more, I want to run language inference task with t5 model and do you have any recommendation which example script should I use?
same script, you just tell it to eval instead of train, here is a few ways for one gpu:
```
# non-distributed 1-gpu fp32 eval only
rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 2 --predict_with_generate --eval_steps 2500 --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --warmup_steps 50 --max_eval_samples 50
# non-distributed 1-gpu --fp16_full_eval eval only
rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 2 --predict_with_generate --eval_steps 2500 --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --warmup_steps 50 --max_eval_samples 50 --fp16_full_eval
```
and you can adapt those to multi-gpu and/or deepspeed based on the first examples I shared.
but basically I removed the training args and replaced those with eval-only args.
The 2nd (last) example shows how to do it in half-precision which may not work well (depending on the model), so start with the normal fp32 eval (i.e. w/o `--fp16_full_eval`)
Of course, play with the values of the args to fit your environment.
> I just wonder how to download this dataset as the following script:
you don't download it directly - `load_dataset` does it automatically for you at runtime (should have Internet).
<|||||>Thanks for your detailed reply @stas00 ! I tried the t5-small model and they works so I changed it to t5-11b with 3 questions here.
1.
In my case, I could not use straight DDP otherwise CUDA will run out of memory.
When I use deepspeed script
```
export MASTER_PORT=9999; rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 4
examples/pytorch/translation/run_translation.py --model_name_or_path t5-11b --output_dir output_dir --
overwrite_output_dir --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --
num_train_epochs 4 --per_device_train_batch_size 8 --learning_rate 1e-4 --source_lang prompt --target_lang completion
--train_file=/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_filtered/csv_file/train/train.json
--test_file=/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_filtered/csv_file/test/test.json
--validation_file=/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_filtered/csv_file/dev/dev.json
--max_train_samples 50 --deepspeed tests/deepspeed/ds_config_zero3.json --save_strategy epoch
```
It said that
> Traceback (most recent call last):
File "examples/pytorch/translation/run_translation.py", line 652, in <module>
main()
File "examples/pytorch/translation/run_translation.py", line 261, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 214, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 102, in __init__
File "/home/zeyi/transformers/src/transformers/training_args.py", line 1012, in __post_init__
and (self.device.type != "cuda")
File "/home/zeyi/transformers/src/transformers/utils/import_utils.py", line 802, in wrapper
return func(*args, **kwargs)
File "/home/zeyi/transformers/src/transformers/training_args.py", line 1264, in device
return self._setup_devices
File "/home/zeyi/transformers/src/transformers/utils/generic.py", line 49, in __get__
cached = self.fget(obj)
File "/home/zeyi/transformers/src/transformers/utils/import_utils.py", line 802, in wrapper
return func(*args, **kwargs)
File "/home/zeyi/transformers/src/transformers/training_args.py", line 1225, in _setup_devices
deepspeed.init_distributed()
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/deepspeed/utils/distributed.py", line 51, in init_distributed
torch.distributed.init_process_group(backend=dist_backend,
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 500, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 190, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Address already in use
But I find some info and do add `export MASTER_PORT=9999` at the beginning of scripts.
I also use `netstat -nltp` but can not find which jobs is the zombie task.
What should I do to delete those zombie running process.
2.
And can I add a parameter like [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) i.e. --sharded_ddp to use sharded_ddp instead of straight ddp?(I am not sure I totally understand the definition of straight ddp and sharded ddp)
3.
In my previous code, I will pass some generator option to t5 model
```
self.generator_options = {'min_length': 1, 'max_length': 128, 'num_beams': 1, 'num_return_sequences': 1, 'do_sample': False, 'top_k': 50, 'top_p': 1.0,
'temperature': 1.0, 'length_penalty': 1.0, 'repetition_penalty': 1.0}
output_ids = self.reasoner.generate(batch['all_inps'], **self.generator_options)
```
So how can I do the same thing here?
<|||||>> > RuntimeError: Address already in use
> But I find some info and do add `export MASTER_PORT=9999` at the beginning of scripts. I also use `netstat -nltp` but can not find which jobs is the zombie task. What should I do to delete those zombie running process.
Normally you just kill them manually. Upgrade your `deepspeed`, the zombies should get killed automatically.
You should pass an explicit argument to `deepspeed` with the desired setting if you don't want the default port.
```
--master_port MASTER_PORT
(optional) Port used by PyTorch distributed for communication during training.
--master_addr MASTER_ADDR
(optional) IP address of node 0, will be inferred via 'hostname -I' if not specified.
```
> And can I add a parameter like [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) i.e. --sharded_ddp to use sharded_ddp instead of straight ddp?(I am not sure I totally understand the definition of straight ddp and sharded ddp)
That's another implementation of ZeRO protocol. You don't need it.
> In my previous code, I will pass some generator option to t5 model
>
> ```
> self.generator_options = {'min_length': 1, 'max_length': 128, 'num_beams': 1, 'num_return_sequences': 1, 'do_sample': False, 'top_k': 50, 'top_p': 1.0,
> 'temperature': 1.0, 'length_penalty': 1.0, 'repetition_penalty': 1.0}
>
> output_ids = self.reasoner.generate(batch['all_inps'], **self.generator_options)
> ```
>
> So how can I do the same thing here?
Please run:
```
python examples/pytorch/translation/run_translation.py --help
```
you will see the existing options there (e.g. . `--num_beams`)
If you want to customize the example script, these `generate` args are passed here (`num_beams`)
https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/examples/pytorch/translation/run_translation.py#L589-L593
all the `generate` options are here:
https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/generation_utils.py#L844-L887<|||||>Hi @stas00 , thank you for your reply! I trained it wity your updated scripts but the job was stopped accidently.
So I tried to resume from checkpoints by this scripts(without --overwrite_output_dir, and output_dir_1 is the folder with checkpoints)
```
deepspeed examples/pytorch/translation/run_translation.py --model_name_or_path t5-11b --output_dir output_dir_1 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 4 --per_device_train_batch_size 16 --learning_rate 1e-4 --source_lang prompt --target_lang completion
--train_file=
/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_trim_filtered/json_file_t5_11b/train/train.json
--test_file=
/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_trim_filtered/json_file_t5_11b/test/test.json
--validation_file=
/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_trim_filtered/json_file_t5_11b/dev/dev.json
--deepspeed tests/deepspeed/ds_config_zero3.json --save_strategy epoch --evaluation_strategy epoch --load_best_model_at_end
```
But it said that
```
Using /home/zeyi/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0007061958312988281 seconds
[INFO|deepspeed.py:444] 2022-06-09 15:28:40,179 >> Attempting to resume from output_dir_1/checkpoint-3126
[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 40204
[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 40205
[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 40206
[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 4020
```
<|||||>this usually means that you didn't have enough cpu memory to resume
Unfortunately it's a bug in deepspeed, where instead of loading the checkpoint directly to gpu it first loads it to cpu.
I filed a bug report here https://github.com/microsoft/DeepSpeed/issues/1971
Please voice your need in this issue so that it's seen that it needs higher priority.
I can offer you a hack that may help. Basically you need to stagger the checkpoint loading so that not all 4 processes try to load it to cpu memory at once.
<|||||>something like this should work to stagger the checkpoint loading:
```
diff --git a/src/transformers/deepspeed.py b/src/transformers/deepspeed.py
index 9fa22d462..ce2f39cc5 100644
--- a/src/transformers/deepspeed.py
+++ b/src/transformers/deepspeed.py
@@ -447,6 +447,12 @@ def deepspeed_init(trainer, num_training_steps, resume_from_checkpoint=None, inf
deepspeed_checkpoint_dirs = sorted(glob.glob(f"{resume_from_checkpoint}/global_step*"))
if len(deepspeed_checkpoint_dirs) > 0:
+
+ # hack to stagger checkpoint loading so that they don't all try to use cpu at the same time
+ rank = trainer.args.local_rank
+ from time import sleep
+ sleep(rank*20)
+
logger.info(f"Attempting to resume from {resume_from_checkpoint}")
# this magically updates self.optimizer and self.lr_scheduler
load_path, _ = deepspeed_engine.load_checkpoint(
```
adjust 20 to perhaps smaller or longer wait in secs.
so here the following happens:
process 0 sleeps for 0 secs, process 1 for 20 secs, 2 for 40 secs, etc. so each process gets full use of CPU memory alone.
you can apply the patch manually or with:
```
git clone https://github.com/huggingface/transformers
cd transformers
git apply patch.txt
pip install -e .
```
assuming you saved my code as patch.txt (attached it to this comment as well so you can just download it)
[patch.txt](https://github.com/huggingface/transformers/files/8874389/patch.txt)
<|||||>@stas00 ,Thank you! I have sucessfully trained the t5-11b.
And here, I want to do the inference in my setup code. Since it's hard to load t5-11b on one GPU, I use model.parallelize to do the inference part.
```
model = T5ForConditionalGeneration.from_pretrained('./checkpoint)
device_map = {
0: [0, 1, 2],
1: [3, 4, 5, 6, 7, 8, 9],
2: [10, 11, 12, 13, 14, 15, 16],
3: [17, 18, 19, 20, 21, 22, 23],
}
model.parallelize(device_map)
model.predict()
```
But the errors said:
```
Traceback (most recent call last):
File "/home/zeyi/lr_dataset/src/main.py", line 294, in <module>
trainer.test(model=model_ckpt, test_dataloaders=loader)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 907, in test
return self._call_and_handle_interrupt(self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 683, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 950, in _test_impl
results = self._run(model, ckpt_path=self.tested_ckpt_path)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1195, in _run
self._dispatch()
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1271, in _dispatch
self.training_type_plugin.start_evaluating(self)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 178, in start_evaluating
self.spawn(self.new_process, trainer, self.mp_queue, return_result=False)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 201, in spawn
mp.spawn(self._wrapped_function, args=(function, args, kwargs, return_queue), nprocs=self.num_processes)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 136, in join
signal_name=name
torch.multiprocessing.spawn.ProcessExitedException: process 2 terminated with signal SIGABRT
wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
wandb:
wandb: Synced lrgenerative_logic_comp1_v7_1.0_new_seed42_trim_filtered_t5_11b_13_06_2022_45964ce7: https://wandb.ai/soumya_research/lr_dataset/runs/snh11aqq
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20220613_132827-snh11aqq/logs
[W CudaIPCTypes.cpp:21] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
```
I have find some solution said that set `num_workers = 0`, but it still doesn't work.
<|||||>> @stas00 ,Thank you! I have sucessfully trained the t5-11b.
Super!
> And here, I want to do the inference in my setup code. Since it's hard to load t5-11b on one GPU, I use model.parallelize to do the inference part.
`parallelize` is about to be deprecated and as such is no longer supported. Please use deepspeed instead, it's many folds more superior to the naive parallelization.
<|||||>@stas00 ,Thanks a lot!
In my case, we use pytorch-lightning and what I want to do is
`model = T5ForConditionalGeneration.from_pretrained('./checkpoint)`
And follow the [doc ](https://pytorch-lightning.readthedocs.io/en/stable/advanced/model_parallel.html#deepspeed-zero-stage-3-tips)here to set
```
trainer = Trainer(accelerator="gpu", devices=4, strategy="deepspeed_stage_3_offload")
trainer.predict()
```
But although I am just doing prediction, why it will still call the `def configure_optimizers(self)` function.
In addition to that, it gave an error although I do have ninja package.
```
[2022-06-13 16:55:48,399] [WARNING] [engine.py:1122:_configure_optimizer] **** You are using ZeRO with an untested optimizer, proceed with caution *****
[2022-06-13 16:55:48,405] [WARNING] [coalesced_collectives.py:26:<module>] unable to find torch.distributed._reduce_scatter_base. will fall back to torch.distributed.reduce_scatter which will result in suboptimal performance. please consider upgrading your pytorch installation.
Using /home/zeyi/.cache/torch_extensions as PyTorch extensions root...
Traceback (most recent call last):
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 683, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 950, in _test_impl
results = self._run(model, ckpt_path=self.tested_ckpt_path)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1184, in _run
self._pre_dispatch()
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1219, in _pre_dispatch
self.accelerator.pre_dispatch(self)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 136, in pre_dispatch
self.training_type_plugin.pre_dispatch()
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 389, in pre_dispatch
self.init_deepspeed()
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 461, in init_deepspeed
self._initialize_deepspeed_inference(model)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 563, in _initialize_deepspeed_inference
dist_init_required=False,
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/__init__.py", line 130, in initialize
config_params=config_params)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 294, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1124, in _configure_optimizer
self.optimizer = self._configure_zero_optimizer(basic_optimizer)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1439, in _configure_zero_optimizer
communication_data_type=self.communication_data_type)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/zero/stage3.py", line 292, in __init__
util_ops = UtilsBuilder().load()
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/ops/op_builder/builder.py", line 463, in load
return self.jit_load(verbose)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/ops/op_builder/builder.py", line 512, in jit_load
verbose=verbose)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1091, in load
keep_intermediates=keep_intermediates)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1302, in _jit_compile
is_standalone=is_standalone)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1373, in _write_ninja_file_and_build_library
verify_ninja_availability()
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1429, in verify_ninja_availability
raise RuntimeError("Ninja is required to load C++ extensions")
RuntimeError: Ninja is required to load C++ extensions
python-BaseException
```
I am just worried about is it reasonable to work like this?
1. Trained the t5-11b by transformer.Trainer.
2. Just load the checkpoint saved before and use Pytorch-lightning to do the prediction
3.Since can not load t5-11b on one GPU, I set the strategy to `deepspeed_stage_3_offload`for trainer.<|||||>wrt to the traceback you shared, `pip install ninja` should do the trick, even though it should have already been installed. something `$PATH` env var is missing the bin dir where pip installs to, check with:
```
which ninja
```
it should give you the path to the binary. Don't try to run deepspeed again until the above returns the path. if it returns nothing it means that your python's env `bin` dir is not in your `$PATH` env var.
wrt PL-specific issues please ask at PL Issues as I'm not a PL user.<|||||>there is another workaround that requires no ninja and it's to prebuild deepspeed https://huggingface.co/docs/transformers/main/main_classes/deepspeed#installation (local install where you clone deepspeed and then build it)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>>
The git apply patch.txt throws an error of
error: corrupt patch at line 17
Am I missing something in the application of it, or missing an argument?
<|||||>bad copy-n-paste? Just insert it manually - it's just a few lines of code and you can tell where to insert by the context around it.<|||||>Hi @stas00 , hope you all godd! And would deep-speed be compatible with Auto-regressive model [here](https://github.com/hpcaitech/ColossalAI-Examples/tree/f743872c2089d6bb5e593db6a8a48d427e6b2b1e/language/opt), like I need to fine-tuning a large OPT model. (BTW:Tried hard on PL trainer but always miss some weight of layers). Thanks!<|||||>I haven't tried it, but I don't see any reason why it shouldn't work. OPT has been out for quite a few months now so surely if it didn't work we would have heard by now and fixed it. Give it a try and if you run into problems please start a new Issue. Thank you. |
transformers | 17,533 | closed | Fix all offload and MP tests | # What does this PR do?
This should fix all GPU tests for model parallelism and offload. I stopped trying to make the model bigger since it was causing so many failures (unrelated to current failing tests). Then there was a tiny bug in `from_pretrained` when tied wieghts are present, as highlighted by the failure of the GPT-Neo offload tests. Adding the tie weights before computing the auto device map is necessary to know which weights are tied when auto-generated this device map.
The OPT model for testing was too tiny for the MP/offload tests (that's why there was some logic to make tiny models bigger there) so I just adjusted its size (speed is not really affected).
Finally the T5 tests starting failing after the tied weights because the decoder has to be tied with the shared layer, which require bigger models than the tiny one used for test. Here I didn't make the model bigger since the tests already take some time, so I adjusted the percentages used for the total model size in those tests. I added a new class variable for that, but happy to overwrite the tests in the T5 modeling test if you prefer it. | 06-02-2022 20:55:23 | 06-02-2022 20:55:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,532 | closed | Update URL for Hub PR docs | # What does this PR do?
Now that the new Hub docs have been deployed, we can point users to the rendered version on Hub PRs instead of the raw Markdown. | 06-02-2022 19:38:45 | 06-02-2022 19:38:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,531 | closed | Clean imports to fix test_fetcher | # What does this PR do?
We noticed that a lot of tests are being picked from some modeling files, there are two reasons for that.
1. The circular import being `modeling_tf_utils` and `modelcard` makes `modeling_tf_utils` be a file impacted by `modelcard`. `modeling_tf_utils` triggers pretty much all tests as it's tightly linked to `modeling_utils` via `modeling_tf_pytorch_utils`.
2. ONNX also has some circular thing going on as:
- it's imported in models that define an ONNX Config
- and in return the module imports all models having an ONNX Config in `onnx.features`
This was discovered by developing a tool that prints the tree of depending modules for a given module/test which is incuded in this PR. To use it, in the root of the repo, just do:
```
python utils/tests_fetcher.py --print_dependencies_of src/transformers/models/bert/modeling_bert.py
```
The fix for 1 is to use the special comment that will make the `test_fetcher` ignore the circular import.
The fix for 2 is to remove all configs imports in `onnx.feaures` but rely on the config names, then import them dynamically at the creation of `FeaturesManager` (in a 100% backward compatible manner).
cc @ydshieh since you reported the slowdown. | 06-02-2022 19:32:46 | 06-02-2022 19:32:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failure is flaky, so merging :-) |
transformers | 17,530 | closed | Add installation.mdx Italian translation | # What does this PR do?
Italian translation of doc related to the installation of ๐ค Transformers.
See issue: #17459
## Before submitting
- [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
@omarespejel | 06-02-2022 18:55:13 | 06-02-2022 18:55:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Grazie @mfumanelli!
LGTM @sgugger :) |
transformers | 17,529 | closed | MarianMT Doesn't export to ONNX correctly | I am running Python FastAPI in a docker container via VSCode's devcontainer setup. I am using the pre-trained MarianMT model for translation of chinese_simple to english and running into an error when trying to use the exported model from the transformers.onnx module. This is the error I am receiving:
```sh
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 366, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 208, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 580, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 241, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 52, in app
response = await func(request)
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 226, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 159, in run_endpoint_function
return await dependant.call(**values)
File "./app/main.py", line 300, in run_onnx_inference
onnx_output = onnx_session.run(output_names=["last_hidden_state"], input_feed=dict(onnx_inputs))
File "/usr/local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 188, in run
raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs))
ValueError: Model requires 4 inputs. Input Feed contains 2
```
Dockerfile:
```Docker
FROM python:3.8
# Install required binaries and data
RUN apt-get update
RUN apt install wget tesseract-ocr -y
RUN apt install tesseract-ocr-chi-sim -y
RUN apt install python3-opencv -y
# Install python requirements
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt
# Copy all source code and data
COPY ./README.md /README.md
COPY ./app /app
COPY ./data /data
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```
Package Versions:
onnx 1.11.0
onnxmltools 1.11.0
onnxruntime 1.11.1
transformers 4.19.2
Code causing the error:
```python
import onnx
import onnxruntime as onnxrt # pylint: disable=import-error
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
@app.post("/onnx")
async def run_onnx_inference(text="ไธ่ฆๆไน่ถฃๆๅพไธๅข็ณ"):
onnx_model_filename = f"{MODELS_DIR}/model.onnx"
mt_tokenizer = AutoTokenizer.from_pretrained(f"{MODELS_DIR}/marian")
onnx_session = onnxrt.InferenceSession(onnx_model_filename)
onnx_inputs = mt_tokenizer(str(text), return_tensors="np")
onnx_output = onnx_session.run(output_names=["last_hidden_state"], input_feed=dict(onnx_inputs))
return onnx_output
```
Command used to export the model to onnx:
```sh
python -m transformers.onnx --model=app/models/marian --atol=2e-04 --feature=seq2seq-lm app/models
```
I have also tried exporting with the following, but I receive the same error:
```sh
python -m transformers.onnx --model=app/models/marian --atol=2e-04 app/models
```
It appears that, even though https://huggingface.co/docs/transformers/serialization clearly lists Marian as a valid model to export to ONNX, it doesn't export the model in a way that allows ONNX to run inference on the model.
Any help would be greatly appreciated. | 06-02-2022 15:16:23 | 06-02-2022 15:16:23 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @calebdkofahl, thanks for sharing this issue!
The reason you see the error
```
ValueError: Model requires 4 inputs. Input Feed contains 2
```
is because MarianMT is a seq2seq model, so we need to pass the decoder input IDs and attention masks in addition to the encoder's ones.
The simplest way to generate inference with ONNX Runtime would be to use the new inference pipelines in our `optimum` library: https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM.forward.example
This will allows you to skip the annoying step of needing to manually create the decoder inputs - hope that helps!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,528 | closed | Issues with mypy when using Transformers | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.18.1-arch1-1-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Since some weeks there seem to be issues when using Transnsformers as a 3rs party lib
and mypy as a type checker.
Example:
```
hpoflow/optuna_transformers.py:83: error: Item "TFPreTrainedModel" of "Union[PreTrainedModel, TFPreTrainedModel]" has no attribute "config"
hpoflow/optuna_transformers.py:85: error: Item "PreTrainedModel" of "Union[PreTrainedModel, TFPreTrainedModel]" has no attribute "config"
hpoflow/optuna_transformers.py:85: error: Item "TFPreTrainedModel" of "Union[PreTrainedModel, TFPreTrainedModel]" has no attribute "config"
tests/test_optuna_transformers.py:135: error: "Trainer" has no attribute "train"
tests/test_optuna_transformers.py:139: error: "Trainer" has no attribute "state"
```
see https://github.com/telekom/HPOflow/runs/6664387914?check_suite_focus=true
It might be because of some lazy loading / import magic.
The Optuna project is doing some extra affords to avoid this. See here:
https://github.com/telekom/lazy-imports#usage--example-for-lazyimporter
```
# Direct imports for type-checking
if TYPE_CHECKING:
from hpoflow.mlflow import ( # noqa: F401
check_repo_is_dirty,
normalize_mlflow_entry_name,
normalize_mlflow_entry_names_in_dict,
)
from hpoflow.optuna import SignificanceRepeatedTrainingPruner # noqa: F401
from hpoflow.optuna_mlflow import OptunaMLflow # noqa: F401
from hpoflow.optuna_transformers import OptunaMLflowCallback # noqa: F401
from hpoflow.utils import func_no_exception_caller # noqa: F401
else:
sys.modules[__name__] = LazyImporter(
__name__,
globals()["__file__"],
_import_structure,
extra_objects={"__version__": __version__},
)
```
### Expected behavior
```shell
see above
```
| 06-02-2022 15:02:04 | 06-02-2022 15:02:04 | This is not a bug in Transformers, so let's remove that label please. MyPy is not part of the Python standard library and we never say anywhere we support it.
If you want to work on a PR that has some fixes that don't make the code unreadable, we'll be happy to have a look, but no one in the team is going to actively work on this.<|||||>Ok. Closing it then. |
transformers | 17,527 | closed | Update configuration_auto.py | # What does this PR do?
Documentation fix for Autoconfig
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | 06-02-2022 13:59:04 | 06-02-2022 13:59:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,526 | closed | center_crop in image_utils.py is broken for inputs that are not PIL Images | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.7
- JaxLib version: 0.3.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@Niels
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The feature extractor for MobileViT (see the PR at https://github.com/huggingface/transformers/pull/17354) implements its own version of the `center_crop` method, instead of using the one from image_utils. The CLIP model also does this. The reason for this custom cropping method is that the "official" one is broken for certain inputs.
In normal usage, the feature extractor first resizes the image and then performs a center crop. Resizing always turns the input into a PIL Image, and then `center_crop` correctly works on the PIL Image.
However, the feature extractor can be configured to not perform the resize with the option `do_resize=False`. Now the input is passed directly into `center_crop`. When the input is a PIL Image this works fine, but when it is a numpy array or torch tensor, `center_crop` may calculate the wrong thing.
To reproduce:
```python
# load an image
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
# get the base class from image_utils
from transformers.image_utils import ImageFeatureExtractionMixin
mixin = ImageFeatureExtractionMixin()
```
The following outputs a PIL Image, which is correct and as expected:
```python
cropped = mixin.center_crop(image, size=256)
```
But when the input is a NumPy array, the output is a `(480, 256, 255)` tensor:
```python
import numpy as np
image_np = np.array(image)
inputs = mixin.center_crop(image_np, size=256)
inputs.shape
```
This happens because `center_crop` assumes that the tensor is already in the shape (channels, H, W) but it isn't.
When you transpose the image before cropping, the output is correct again:
```python
import numpy as np
image_np = np.array(image).transpose(2, 0, 1)
inputs = mixin.center_crop(image_np, size=256)
inputs.shape
```
However, the `resize` method does accept arrays and tensors in the shape (H, W, channels), and so it would be reasonable to assume that `center_crop` also should do this.
### Expected behavior
```shell
Any input that works correctly for `resize` should also work correctly for `center_crop`.
```
| 06-02-2022 12:40:39 | 06-02-2022 12:40:39 | You tagged the wrong Niels ;) also cc'ing @sgugger <|||||>LOL, sorry Niels and other Niels.<|||||>That's because we don't convert images back to PIL in `center_crop` (which always happens in `resize`), but happy to look at a PR adding support for this.<|||||>I can add it to my to-do list, as it would be better to have this logic only in image_utils. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,525 | closed | Is the addition of the 'OPTforSequenceClassification' class scheduled? | ### Feature request
Is the addition of the 'OPTforSequenceClassification' class scheduled?
Is someone handling it?
When adding these functions, I wonder if it is possible to PR one by one, or if I have to PR all classes supported by other models.
### Motivation
Added function of OPT class, which is being actively discussed recently
### Your contribution
I personally use the forSequenceClassification class because I need it for my experiments, but I would like to inquire if I can contribute to the addition of the function. | 06-02-2022 11:09:33 | 06-02-2022 11:09:33 | Don't think anyone is working on this yet (cc @younesbelkada), so feel free to contribute<|||||>Hey! We are not working on that as Niels mentioned ๐
Opening a PR for each framework (`pytorch`, `tensorflow` and `flax`) is recommended. If you have any question about adding these models, feel free to reach out! <|||||>@ArthurZucker , @NielsRogge I do not see anyone working on this so far. I want to start contributing to huggingface with this good first issue. Can I go ahead and open a WIP PR for this issue ? <|||||>Hi, yes you can definitely go ahead and we'll be happy to review it ! ;)
Please also double check with @penpaperkeycode if it is not already a WIP on their side <|||||>@penpaperkeycode as @younesbelkada mentioned , can you please let me know if you are already working on the issue ? <|||||>@VijayKalmath Sorry for late reply.
I tested OPT's 'forSequenceClassification' class and verified accuracy in MNLI-glue task.
it looks good to me. But since this is my first huggingface PR, I need to figure out how to do it.
In addition, I am working on a conditional generation class, but there is no other work other than these two classes.
<|||||>@younesbelkada @ArthurZucker I am new to the hugging face community and am looking to contribute, I looked through the first issue list but all issues seem to have pre existing contributors assigned, can I help with this one or is there another first issue I can dive into and help out with. Thanks<|||||>Hey, it seems that this was already taken car of, see #18123, but thanks a lot for wanting to contribute! I am going to look for an issue that might be interesting for you :)
BTW, you can also check the closed issues as some are close because of a lack of activity. I assigned myself to #17514 but have not had the time to really look into it! Feel free to fork `transformers` and create a branch for the fix ! |
transformers | 17,524 | closed | Fix bug - layer names and activation from previous refactor | # What does this PR do?
Removes two issues introduced by the PR, removing `nn.Sequential` subclasses: https://github.com/huggingface/transformers/commit/bdc01711d67161ef5c2097b6d2d885645e0a0f08
1. `nn.Sequential` block removed within an `__init__` causing differences in layer naming, resulting in weights not being loaded
2. Fixes logic issue where a final linear weight was missing in the maskformer model
All the models affected by the previous PR (maskformer, resnet, regnet, van) have been loaded in with their default checkpoint weights to double check all layers have their weights initialised, and all weights from the checkpoint are used.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. | 06-02-2022 09:53:54 | 06-02-2022 09:53:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @amyeroberts! Could you show a snippet of where the current implementation is faulty (ideally for both resnet and maskformer)? Running a model load with either current `main` or this PR seems to result in the same layers being loaded/not loaded.<|||||>@LysandreJik - Sure. Here's how I found the issues (and checked it was working).
The script I was running:
```
from transformers import VanForImageClassification, ResNetForImageClassification, RegNetForImageClassification, MaskFormerForInstanceSegmentation
print("\nLoading ResNet")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")
print("\nLoading VAN")
model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")
print("\nLoading RegNet")
model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
print("\nLoading MaskFormer")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
```
Running on `main` I get the following output:
```
(tenv) aroberts:transformers $ git checkout main
Already on 'main'
Your branch is ahead of 'origin/main' by 75 commits.
(use "git push" to publish your local commits)
(tenv) aroberts:transformers $ python ../test_model_weight_loading.py
Loading ResNet
Some weights of the model checkpoint at microsoft/resnet-50 were not used when initializing ResNetForImageClassification: ['resnet.encoder.stages.1.layers.3.layer.0.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.4.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.bias', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.bias', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.weight', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.convolution.weight', 'resnet.encoder.stages.1.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.3.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.0.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.bias', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.1.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.bias', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.5.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.3.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.weight', 'resnet.encoder.stages.1.layers.3.layer.1.convolution.weight', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.running_mean', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.5.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.3.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.3.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.4.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.convolution.weight', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.weight', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.0.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.0.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.3.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.0.layers.0.layer.0.convolution.weight', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.num_batches_tracked']
- This IS expected if you are initializing ResNetForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing ResNetForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of ResNetForImageClassification were not initialized from the model checkpoint at microsoft/resnet-50 and are newly initialized: ['resnet.encoder.stages.3.1.layer.1.normalization.bias', 'resnet.encoder.stages.3.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.0.layer.2.normalization.running_var', 'resnet.encoder.stages.1.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.normalization.weight', 'resnet.encoder.stages.2.4.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.1.convolution.weight', 'resnet.encoder.stages.3.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.layer.0.convolution.weight', 'resnet.encoder.stages.0.0.shortcut.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.3.layer.0.normalization.weight', 'resnet.encoder.stages.1.1.layer.2.convolution.weight', 'resnet.encoder.stages.3.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.1.normalization.running_var', 'resnet.encoder.stages.0.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.3.layer.1.convolution.weight', 'resnet.encoder.stages.2.4.layer.0.normalization.bias', 'resnet.encoder.stages.1.2.layer.2.normalization.bias', 'resnet.encoder.stages.3.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.2.layer.2.normalization.bias', 'resnet.encoder.stages.0.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.5.layer.0.normalization.bias', 'resnet.encoder.stages.2.0.layer.0.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.normalization.weight', 'resnet.encoder.stages.3.2.layer.2.normalization.bias', 'resnet.encoder.stages.3.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.2.5.layer.2.convolution.weight', 'resnet.encoder.stages.2.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.3.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.layer.1.normalization.running_var', 'resnet.encoder.stages.2.4.layer.0.normalization.running_var', 'resnet.encoder.stages.3.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.1.layer.2.normalization.weight', 'resnet.encoder.stages.1.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.normalization.running_var', 'resnet.encoder.stages.1.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.5.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.2.convolution.weight', 'resnet.encoder.stages.1.3.layer.0.normalization.weight', 'resnet.encoder.stages.3.0.layer.2.normalization.weight', 'resnet.encoder.stages.3.2.layer.1.convolution.weight', 'resnet.encoder.stages.2.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.0.layer.0.normalization.bias', 'resnet.encoder.stages.2.4.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.5.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.1.convolution.weight', 'resnet.encoder.stages.2.0.layer.2.normalization.bias', 'resnet.encoder.stages.0.0.layer.0.normalization.running_var', 'resnet.encoder.stages.3.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.1.normalization.running_var', 'resnet.encoder.stages.0.1.layer.1.normalization.running_var', 'resnet.encoder.stages.3.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.normalization.weight', 'resnet.encoder.stages.0.0.layer.2.normalization.bias', 'resnet.encoder.stages.0.2.layer.2.convolution.weight', 'resnet.encoder.stages.1.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.5.layer.1.normalization.bias', 'resnet.encoder.stages.3.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.0.normalization.bias', 'resnet.encoder.stages.3.0.layer.1.convolution.weight', 'resnet.encoder.stages.3.2.layer.1.normalization.weight', 'resnet.encoder.stages.2.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.0.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.1.layer.0.normalization.bias', 'resnet.encoder.stages.2.2.layer.1.normalization.bias', 'resnet.encoder.stages.1.1.layer.0.normalization.running_var', 'resnet.encoder.stages.0.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.1.layer.1.normalization.bias', 'resnet.encoder.stages.3.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.convolution.weight', 'resnet.encoder.stages.3.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.0.normalization.running_var', 'resnet.encoder.stages.1.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.4.layer.1.normalization.weight', 'resnet.encoder.stages.0.2.layer.1.normalization.weight', 'resnet.encoder.stages.1.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.layer.0.normalization.weight', 'resnet.encoder.stages.3.2.layer.0.normalization.bias', 'resnet.encoder.stages.1.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.2.convolution.weight', 'resnet.encoder.stages.3.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.1.convolution.weight', 'resnet.encoder.stages.2.0.layer.2.normalization.running_var', 'resnet.encoder.stages.0.2.layer.2.normalization.weight', 'resnet.encoder.stages.3.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.3.layer.2.convolution.weight', 'resnet.encoder.stages.0.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.1.layer.1.normalization.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.running_var', 'resnet.encoder.stages.2.1.layer.0.convolution.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.1.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.normalization.bias', 'resnet.encoder.stages.0.0.layer.0.convolution.weight', 'resnet.encoder.stages.3.0.layer.1.normalization.weight', 'resnet.encoder.stages.1.0.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.5.layer.2.normalization.running_var', 'resnet.encoder.stages.3.1.layer.0.normalization.weight', 'resnet.encoder.stages.1.3.layer.0.normalization.running_var', 'resnet.encoder.stages.2.0.layer.1.convolution.weight', 'resnet.encoder.stages.1.1.layer.1.convolution.weight', 'resnet.encoder.stages.0.0.layer.2.normalization.weight', 'resnet.encoder.stages.0.0.layer.1.normalization.running_var', 'resnet.encoder.stages.1.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.1.normalization.running_var', 'resnet.encoder.stages.1.2.layer.1.normalization.weight', 'resnet.encoder.stages.0.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.2.normalization.weight', 'resnet.encoder.stages.2.4.layer.2.normalization.bias', 'resnet.encoder.stages.2.5.layer.1.convolution.weight', 'resnet.encoder.stages.0.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.4.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.4.layer.1.convolution.weight', 'resnet.encoder.stages.2.4.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.0.convolution.weight', 'resnet.encoder.stages.1.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.1.convolution.weight', 'resnet.encoder.stages.2.5.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.2.normalization.bias', 'resnet.encoder.stages.1.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.1.layer.1.normalization.weight', 'resnet.encoder.stages.0.2.layer.1.normalization.running_var', 'resnet.encoder.stages.1.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.0.layer.0.normalization.weight', 'resnet.encoder.stages.1.1.layer.0.normalization.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.0.normalization.running_var', 'resnet.encoder.stages.2.5.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.3.layer.0.normalization.bias', 'resnet.encoder.stages.2.1.layer.2.normalization.bias', 'resnet.encoder.stages.3.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.5.layer.1.normalization.weight', 'resnet.encoder.stages.3.1.layer.2.convolution.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.weight', 'resnet.encoder.stages.2.4.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.5.layer.2.normalization.weight', 'resnet.encoder.stages.2.5.layer.0.normalization.weight', 'resnet.encoder.stages.1.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.0.normalization.running_var', 'resnet.encoder.stages.0.1.layer.0.convolution.weight', 'resnet.encoder.stages.1.2.layer.0.normalization.weight', 'resnet.encoder.stages.2.5.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.2.normalization.running_var', 'resnet.encoder.stages.2.4.layer.2.normalization.weight', 'resnet.encoder.stages.2.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.shortcut.normalization.bias', 'resnet.encoder.stages.2.4.layer.1.normalization.running_var', 'resnet.encoder.stages.3.0.layer.2.normalization.bias', 'resnet.encoder.stages.2.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.0.layer.2.normalization.weight', 'resnet.encoder.stages.1.0.layer.0.normalization.bias', 'resnet.encoder.stages.2.3.layer.0.normalization.running_var', 'resnet.encoder.stages.2.4.layer.1.normalization.bias', 'resnet.encoder.stages.0.1.layer.1.convolution.weight', 'resnet.encoder.stages.2.5.layer.0.convolution.weight', 'resnet.encoder.stages.0.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.0.convolution.weight', 'resnet.encoder.stages.0.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.4.layer.2.normalization.running_var', 'resnet.encoder.stages.2.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.3.layer.1.normalization.weight', 'resnet.encoder.stages.2.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.1.normalization.running_var', 'resnet.encoder.stages.3.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.3.layer.1.normalization.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.3.layer.2.normalization.weight', 'resnet.encoder.stages.3.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.0.normalization.running_var', 'resnet.encoder.stages.0.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.2.layer.1.normalization.bias', 'resnet.encoder.stages.3.2.layer.0.convolution.weight', 'resnet.encoder.stages.1.2.layer.2.convolution.weight', 'resnet.encoder.stages.2.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.2.normalization.running_var', 'resnet.encoder.stages.1.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.0.convolution.weight', 'resnet.encoder.stages.1.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.shortcut.normalization.weight', 'resnet.encoder.stages.3.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.3.layer.1.normalization.running_var', 'resnet.encoder.stages.2.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.4.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.0.convolution.weight', 'resnet.encoder.stages.1.0.layer.0.normalization.running_var', 'resnet.encoder.stages.1.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.5.layer.0.normalization.running_var', 'resnet.encoder.stages.2.1.layer.0.normalization.running_var', 'resnet.encoder.stages.2.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.3.layer.0.normalization.bias', 'resnet.encoder.stages.2.5.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.4.layer.0.convolution.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.0.layer.1.convolution.weight', 'resnet.encoder.stages.0.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.2.convolution.weight', 'resnet.encoder.stages.3.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.2.normalization.weight', 'resnet.encoder.stages.0.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.4.layer.2.convolution.weight', 'resnet.encoder.stages.2.5.layer.2.normalization.bias', 'resnet.encoder.stages.2.3.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.5.layer.1.normalization.running_var', 'resnet.encoder.stages.0.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.0.convolution.weight', 'resnet.encoder.stages.3.0.shortcut.normalization.weight', 'resnet.encoder.stages.0.0.layer.1.convolution.weight', 'resnet.encoder.stages.0.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.1.layer.0.convolution.weight', 'resnet.encoder.stages.3.0.layer.2.convolution.weight', 'resnet.encoder.stages.1.3.layer.0.convolution.weight', 'resnet.encoder.stages.1.3.layer.1.normalization.bias', 'resnet.encoder.stages.0.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.2.layer.2.normalization.bias', 'resnet.encoder.stages.1.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.2.normalization.weight', 'resnet.encoder.stages.0.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.3.layer.1.normalization.bias', 'resnet.encoder.stages.2.2.layer.0.convolution.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.bias', 'resnet.encoder.stages.1.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.1.layer.2.normalization.running_var', 'resnet.encoder.stages.3.0.layer.1.normalization.bias', 'resnet.encoder.stages.0.2.layer.2.normalization.running_var', 'resnet.encoder.stages.0.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.0.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.1.normalization.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.2.layer.0.convolution.weight', 'resnet.encoder.stages.2.4.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.convolution.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.0.normalization.num_batches_tracked']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading VAN
Loading RegNet
Loading MaskFormer
/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of the model checkpoint at facebook/maskformer-swin-base-ade were not used when initializing MaskFormerForInstanceSegmentation: ['mask_embedder.2.0.weight', 'mask_embedder.2.0.bias']
- This IS expected if you are initializing MaskFormerForInstanceSegmentation from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing MaskFormerForInstanceSegmentation from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
Running on this branch - `fix-weight-naming`
```
(tenv) aroberts:transformers $ git checkout fix-weight-naming
Switched to branch 'fix-weight-naming'
(tenv) aroberts:transformers $ python ../test_model_weight_loading.py
Loading ResNet
Loading VAN
Loading RegNet
Loading MaskFormer
/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
(tenv) aroberts:transformers $
```
Let me know if there's anything else you'd like me to run or check. |
transformers | 17,523 | closed | Not able to import tensorflow OPT | ### System Info
```shell
Using the latest git repo as a source install, when I try to import TFOPTForCasualLM I will get the following error.
ImportError: cannot import name 'TFOPTForCasualLM' from 'transformers' (/anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/__init__.py)
```
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import TFOPTForCausalLM, GPT2Tokenizer
model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
```
### Expected behavior
```shell
Was expecting model class to import normally
```
| 06-02-2022 09:43:15 | 06-02-2022 09:43:15 | I just tried this with latest source install and it's working for me. Make sure you have TF installed to be able to import TF models.<|||||>> I just tried this with latest source install and it's working for me. Make sure you have TF installed to be able to import TF models.
I have tensorflow installed, even checked it by running a quick test<|||||>Could you share details about your script and maybe your environment? <|||||>> Could you share details about your script and maybe your environment?
I was working on an Azure compute machine. I found the problem, apparently I needed to install tensorflow GPU. as well as the regular distro |
transformers | 17,522 | closed | [WIP] Adding support for `clip` in `feature-extraction`. | # What does this PR do?
Adds support for the feature extraction pipeline for clip.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 06-02-2022 08:52:55 | 06-02-2022 08:52:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17522). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,521 | closed | Support returning raw logits in `generate` | ### Feature request
Support returning raw logits in `generate` by either:
1. creating a new arg that enables return of raw logits
2. or support callback that allow users to collect the raw logits
### Motivation
* Raw logits "would be the most understandable & consistent across generation methods" (@patrickvonplaten)
* For testing, returning raw logits would help "identify which parts get wrong if any test failure occurs" (@ydshieh)
* There's concern about "rampant too many options" (@Narsil), thus I would prefer the second option to support this feature.
* However, the second option still needs code change to support it. As the user provided `logits_processor` is appended to a new instance of `LogitsProcessorList`. As a result, users cannot get the raw logits using the current implementation even with a custom `LogitsProcessor`.
See further discussion in https://github.com/huggingface/transformers/issues/17424
### Your contribution
I could open a PR to reorder how `logits_processor` is merged with the predefined list of `LogitsProcessorList`. | 06-02-2022 08:42:59 | 06-02-2022 08:42:59 | cc @patil-suraj @gante as well<|||||>I'm personally fine with adding a `output_logits` flag to `generate` since it already has 50+ flags it won't make a difference and it's a useful feature indeed. What do you think @patil-suraj @gante ?<|||||>I'm cool with it ๐ (and it might be interesting to use as part of PT-TF cross tests)<|||||>@patil-suraj what do you think? Do you want to open a PR to work on it? <|||||>> @patil-suraj what do you think? Do you want to open a PR to work on it?
@shijie-wu seems willing to open a PR, as mentioned at the end of the issue description.<|||||>I could open a PR for this. <|||||>I'm okay with this, let me know if you need any help @shijie-wu :) <|||||>Cool thanks for taking care of it @shijie-wu <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>So, is there any work on it? I did not find a new feature about getting the raw logits.<|||||>I don't think so -- gently pinging @shijie-wu, who manifested interest in opening a PR :)<|||||>sorry about the delay! i will resume working on it in the coming week.<|||||>gently ping @shijie-wu --- any updates on this?<|||||>@gante should I open a PR? I think the change is fairly minor.<|||||>@xkianteb sounds good ๐ <|||||>is there any update on this...?<|||||>None that I know of. Open to contributors :) |
transformers | 17,520 | closed | Loading BertModel from BertForMaskedLM without randomly initializing weights | I've trained a `BertForMaskedLM` model for a few days, and I've `save_pretrained()` it.
I want to compare encoder vectors from a `BertModel`, with the embeddings from
this trained `BertForMaskedLM`.
However, simply using `BertModel.from_pretrained("path/to/new_BertForMaskedLM_model")`
warns me that "You should probably TRAIN this model....".
It seems that `bert.pooler.dense.weight` and `bert.pooler.dense.bias` have been added and randomly initialized after the LM head was removed. I assume this means the output of this current model will be useless.
What do I need to do to compare `BertForMaskedLM` encoder embeddings (after removing the LM head) with `BertModel` encoder embeddings? | 06-02-2022 01:03:15 | 06-02-2022 01:03:15 | Hey! The model returns several outputs:
```
last_hidden_state: torch.FloatTensor = None
pooler_output: torch.FloatTensor = None
hidden_states: Optional[Tuple[torch.FloatTensor]] = None
past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
attentions: Optional[Tuple[torch.FloatTensor]] = None
cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
```
The `pooler_output` is going to be random as the pooler weights will be randomly generated, but the rest will not be! In order to compare the two, you could compare the `hidden_states` values, which contain all intermediary values from the word embeddings up until the last transformer layer's output.<|||||>I suppose more specifically I'm after the following output:
```
model = BertModel('path/to/new_BertForMaskedLM_model')
output = model(**encoded_input)[1]
```
I've got other baselines with embeddings extracted like this.
Do you mind explaining the difference between `hidden_states` output and `model(**encoded_input)[1]` please?<|||||>`[1]` will take the second value of the output. It may differ across models, while specifying `output.hidden_states` will always return the hidden states.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,519 | closed | Clean README in post release job as well. | # What does this PR do?
The README cleanup (removing the main in the links to the doc) hasn't been done in the past releases because we do them on branches (so it has been done, but not on main). This PR redoes the cleanup in the `post-release` job (which is always done on main) and updates the instruction in the setup (no need to run `make post-patch` anymore but the post-release job will change the README so `make fix-copies` is necessary). | 06-02-2022 00:25:08 | 06-02-2022 00:25:08 | |
transformers | 17,518 | closed | Fix when Accelerate is not installed | # What does this PR do?
As pointed out in #17516, the current `from_pretrained` on the main branch tries to use a function in Accelerate even if it's not necessary. This PR adds a check on the offloading (which requires Accelerate and is enforced at [this line](https://github.com/huggingface/transformers/blob/58fb3c9f98877bf76efb03e376a5c92cf80f7952/src/transformers/modeling_utils.py#L1847) since it requires a `device_map`) before using that function.
Fixes #17516 | 06-02-2022 00:20:13 | 06-02-2022 00:20:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,517 | closed | Check list of models in the main README and sort it | # What does this PR do?
In our effort to have the repo consistency check catch all potential mistakes a contributor can make when adding a new model, this PR adds a new script to make sure all added models are also put in the main README. As a bonus, it also enforces that said README is sorted in alphabetical order cause that's prettier.
Some models are not supposed to be in the main README, so there is a new list for those black sheeps. Some models have different names in the main README and in the lib, there is a map for that.
MobileBERT and RAG were not in the README and I think they should o I added them, the other absent do not have a paper from what I checked. I fixed typos (usually in casing) as well. | 06-02-2022 00:14:04 | 06-02-2022 00:14:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,516 | closed | NameError: name 'save_offload_index' is not defined when use --model_revision sharded | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.7
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`export BS=8; rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus=4 run_translation.py --model_name_or_path google/mt5-xxl --output_dir output_dir --adam_eps 1e-06 --evaluation_strategy=steps --do_train --do_eval --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --warmup_steps 50 --max_train_samples 500 --max_eval_samples 50 --deepspeed ds_zero3.json --fp16 --model_revision sharded`
error:
```
[INFO|modeling_utils.py:2115] 2022-06-01 16:46:52,833 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
[2022-06-01 16:47:17,303] [INFO] [partition_parameters.py:464:__exit__] finished initializing model with 12.92B parameters
Traceback (most recent call last):
File "run_translation.py", line 654, in <module>
Traceback (most recent call last):
File "run_translation.py", line 654, in <module>
main()
File "run_translation.py", line 377, in main
main()
File "run_translation.py", line 377, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
Traceback (most recent call last):
Traceback (most recent call last):
File "run_translation.py", line 654, in <module>
File "run_translation.py", line 654, in <module>
main()main()
File "run_translation.py", line 377, in main
File "run_translation.py", line 377, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81406
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81407
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81408
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81409
[2022-06-01 16:53:23,841] [ERROR] [launch.py:184:sigkill_handler] ['/srv/scratch/ychen3411/anaconda3/envs/unify_srl/bin/python', '-u', 'run_translation.py', '--local_rank=3', '--model_name_or_path', 'google/mt5-xxl', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--evaluation_strategy=steps', '--do_train', '--do_eval', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '500', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--predict_with_generate', '--sortish_sampler', '--source_lang', 'en', '--target_lang', 'ro', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_prefix', 'translate English to Romanian: ', '--val_max_target_length', '128', '--warmup_steps', '50', '--max_train_samples', '500', '--max_eval_samples', '50', '--deepspeed', 'ds_zero3.json', '--fp16', '--model_revision', 'sharded'] exits with return code = 1`
```
### Expected behavior
```shell
I tried to run mt5-xxl (12b) on 4 gpus with deepspeed zero3 and sharded.
But I got the following error:
NameError: name 'save_offload_index' is not defined
```
| 06-01-2022 21:03:15 | 06-01-2022 21:03:15 | @sgugger, would you please kindly look at this - this looks related to the accelerate import - probably not checking if it was imported and using it anyway?
https://github.com/huggingface/transformers/blob/58fb3c9f98877bf76efb03e376a5c92cf80f7952/src/transformers/modeling_utils.py#L75-L80
https://github.com/huggingface/transformers/blob/58fb3c9f98877bf76efb03e376a5c92cf80f7952/src/transformers/modeling_utils.py#L2397
Thank you!<|||||>Indeed, this needs some gating. Will send a fix shortly, thanks for flagging! |
transformers | 17,515 | closed | Fix flakey no-trainer test | # What does this PR do?
This PR fixes an occasional fail in `test_accelerate_examples::test_run_squad_no_trainer` due to multi-gpu's sometimes showing a slight drop in accuracy.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 06-01-2022 17:36:54 | 06-01-2022 17:36:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,514 | closed | Flax OPT batch generation test | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.13.0.dev20220521 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patil-suraj
The `test_batch_generation` test in FLAX OPT (currently commented) fails.
This is due to improper handling of the padding.
The fail output is the following :
```python
def test_batch_generation(self):
model_id = "facebook/opt-350m"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = FlaxOPTForCausalLM.from_pretrained(model_id)
tokenizer.padding_side = "left"
# use different length sentences to test batching
sentences = [
"Hello, my dog is a little",
"Today, I",
]
inputs = tokenizer(sentences, return_tensors="jax", padding=True)
input_ids = inputs["input_ids"]
outputs = model.generate(input_ids=input_ids, attention_mask=inputs["attention_mask"], trace=False)
inputs_non_padded = tokenizer(sentences[0], return_tensors="jax").input_ids
output_non_padded = model.generate(input_ids=inputs_non_padded)
num_paddings = inputs_non_padded.shape[-1] - inputs["attention_mask"][-1].sum()
inputs_padded = tokenizer(sentences[1], return_tensors="jax").input_ids
output_padded = model.generate(input_ids=inputs_padded, max_length=model.config.max_length - num_paddings)
batch_out_sentence = tokenizer.batch_decode(outputs[0], skip_special_tokens=True)
non_padded_sentence = tokenizer.decode(output_non_padded[0][0], skip_special_tokens=True)
padded_sentence = tokenizer.decode(output_padded[0][0], skip_special_tokens=True)
expected_output_sentence = [
"Hello, my dog is a little bit of a dork.\nI'm a little bit",
"Today, I<s><s><s><s><s><s><s><s><s><s><s><s>"
# TODO fix this test in next PR
# "Today, I was in the middle of a conversation with a friend about the",
]
print(batch_out_sentence, [non_padded_sentence, padded_sentence])
self.assertListEqual(expected_output_sentence, batch_out_sentence)
# TODO outputs will be similar, fix in next PR
self.assertListEqual(batch_out_sentence, [non_padded_sentence, padded_sentence])
````
```python
["Hello, my dog is a little bit of a dork.\nI'm a little bit", 'Today, I<s><s><s><s><s><s><s><s><s><s><s><s>'] ["Hello, my dog is a little bit of a dork.\nI'm a little bit", 'Today, I was in the middle of a conversation with a friend about the']
E AssertionError: Lists differ: ["Hel[62 chars]ay, I<s><s><s><s><s><s><s><s><s><s><s><s>'] != ["Hel[62 chars]ay, I was in the middle of a conversation with[16 chars]the']
E
E First differing element 1:
E 'Today, I<s><s><s><s><s><s><s><s><s><s><s><s>'
E 'Today, I was in the middle of a conversation with a friend about the'
E
E ["Hello, my dog is a little bit of a dork.\nI'm a little bit",
E - 'Today, I<s><s><s><s><s><s><s><s><s><s><s><s>']
E + 'Today, I was in the middle of a conversation with a friend about the']
tests/models/opt/test_modeling_flax_opt.py:406: AssertionError
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just un-comment the test in the main branch
### Expected behavior
```shell
["Hello, my dog is a little bit of a dork.\nI'm a little bit","Today, I was in the middle of a conversation with a friend about the"]
```
| 06-01-2022 17:24:17 | 06-01-2022 17:24:17 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ArthurZucker I was going to start digging into this, please confirm if thats ok per your. earlier advice, it seems strange because the issue itself is closed so its a bit confusing :)<|||||>Hi @skanjila & @ArthurZucker. I had commented on this issue recently as I ran into the same problem with the PyTorch version of OPT when performing batch generation with half precision. However, later I found out it's also related to issue #17433 and solved in the PR #17437. I installed the latest version of the library from the `main` branch and I can confirm the issue was fixed.<|||||>Nice sorry I didn't know it was already fixed! Good job โบ๏ธ thanks both |
transformers | 17,513 | closed | Implemented loss for training AudioFrameClassification | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17509 on WavLMForAudioFrameClassification model
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
I think @anton-l or @patrickvonplaten
| 06-01-2022 17:08:58 | 06-01-2022 17:08:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @MorenoLaQuatra, thank you for the contribution! :hugs:
As you may have seen, the `check_repository_consistency` test doesn't pass. That's because `WavLMForAudioClassification` is auto-copied from `Wav2Vec2ForAudioClassification` using this line: https://github.com/huggingface/transformers/blob/80bb27abb4710267a99443736bde44fe64724615/src/transformers/models/wavlm/modeling_wavlm.py#L1534
Could you please move the loss inside `Wav2Vec2ForAudioFrameClassification` and then run `make fix-copies` from the root of your `transformers` directory? Then your implementation will propagate to all of the models that support audio frame classification :slightly_smiling_face: <|||||>Thank you @anton-l for pointing it out. I was not aware of the copy mechanism (sorry!). I think now it should be good, I modified `modeling_wav2vec2.py` and run `make fix-copies`. Let me know if something is missing. |
transformers | 17,512 | closed | fix OPT-Flax CI tests | # What does this PR do?
Fixes the OPT Flax tests.
A `require_flax` decorator was missing. The test is also `slow` so it will not be run.
@lysandre
| 06-01-2022 16:41:47 | 06-01-2022 16:41:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,511 | closed | Fix `TFRemBertModelTest.test_resize_token_embeddings` | # What does this PR do?
Fix `TFRemBertModelTest.test_resize_token_embeddings`.
This method
https://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_tf_utils.py#L1449
assumes that `word_embedding_weight` has the same shape as `old_lm_head_decoder`, but this is not the case for `TFRemBertModel`, as it has `input_embedding_size` and `output_embedding_size` in config.
This PR checks the shape before checking the values. If shape is not equal, it means the input/output are not equal.
Fix the CI failure [here](https://github.com/huggingface/transformers/runs/6682139350?check_suite_focus=true) | 06-01-2022 16:41:24 | 06-01-2022 16:41:24 | A quick look on
https://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_utils.py#L1315
~~I couldn't find equivalent logic about `is_input_output_equals` that was in TF.~~
Guess it is because we use `self.config.tie_word_embeddings` to check in PyTorch.<|||||>> # What does this PR do?
> Fix `TFRemBertModelTest.test_resize_token_embeddings`.
>
> This method
>
> https://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_tf_utils.py#L1449
>
> assumes that `word_embedding_weight` has the same shape as `old_lm_head_decoder`, but this is not the case for `TFRemBertModel`, as it has `input_embedding_size` and `output_embedding_size` in config.
>
> This PR checks the shape before checking the values. If shape is not equal, it means the input/output are not equal.
>
> Fix the CI failure [here](https://github.com/huggingface/transformers/runs/6682139350?check_suite_focus=true)
Sounds like an exception to me! I'm not super well versed in TF word embeddings. Think we don't have the prettiest logic there...
@Rocketknight1 @gante what would you suggest here?
Also cc @sgugger <|||||>I confess I don't know much about TF word embeddings, so I'll have to dive deeper to review.
There is one thing I know, though -- it uses TF1 code, and it is on my update list ๐
<|||||>The embeddings in TF are using very very dark magic. I would refrain from any change in `modeling_tf_utils` (so let RemBERT fail for now) until it has been cleaned up by the TF team :-)<|||||>OK, so I will close this PR today without merge, if everyone is OK<|||||>@ydshieh it'd be great if you could open an issue (update TF embeddings) and link this closed PR to it<|||||>Closed for now with this issue #17540 opened. |
transformers | 17,510 | closed | Fix Tapas tests | # What does this PR do?
Time for (the fix of) Tapas.
Need to add a few `require_tensorflow_probability`. | 06-01-2022 14:50:56 | 06-01-2022 14:50:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,509 | closed | Finetuning AudioFrameClassification model | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.10.0-051000-generic-x86_64-with-glibc2.32
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrickvonplaten @anton-l
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to finetune a WavLMForAudioFrameClassification model using Trainer and a custom dataset.
It is not my first project with transformers.
When I tried running the training of the model I got this very **strange** warning:
`The following columns in the training set don't have a corresponding argument in WavLMForAudioFrameClassification.forward and have been ignored: labels. If labels are not expected by WavLMForAudioFrameClassification.forward, you can safely ignore this message.`
and then the following error:
```python
File "XXX/lib/python3.9/site-packages/transformers/utils/generic.py", line 220, in __getitem__
return inner_dict[k]
KeyError: 'loss'
```
Looking at the code [here](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/wavlm/modeling_wavlm.py#L1648) it seems that `labels` are not used and loss function is not computed. Is it possible to finetune an AudioFrameClassification model? Is `labels` the wrong keyword?
### Expected behavior
```shell
Standard finetuning.
```
| 06-01-2022 14:16:54 | 06-01-2022 14:16:54 | Hi, if possible could you share your code for how to fine-tune WavLMForAudioFrameClassification for custom dataset.
Thank you very much.<|||||>I can share with you some code I actually use to train an AudioFrameClassification model. It is not intended for Speaker Diarization but the concept behind is the same.
Here you can find the training loop using HF Trainer: https://github.com/MorenoLaQuatra/ComParE2022_MED/blob/master/train.py
Here instead you can find the dataset class: https://github.com/MorenoLaQuatra/ComParE2022_MED/blob/master/MosTimestampDataset.py - in this specific case, the `__get_item__` function is much more complex than what you need for the simple diarization case. What you should consider is the return value. For the AudioFrameClassification case, you should use as labels something like the following:
```python
labels = [
[0, 1, 0, 0], # for each frame it contains the one-hot encoded class, 2nd speaker in this case
[0, 0, 1, 0], # 3rd speaker in this case
[โฆ],
[โฆ],
]
```
Let me know if you have any issues. |
transformers | 17,508 | closed | Fix CTRL tests | # What does this PR do?
Fix ctrl tests failed due to GPU memory issue. The fix is the same as in #16881
Job page (failed test)
https://github.com/huggingface/transformers/runs/6682129553?check_suite_focus=true | 06-01-2022 12:58:45 | 06-01-2022 12:58:45 | I verified the fix on GCP VM.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,507 | closed | Translation/italian: added pipeline_tutorial.mdx [Issue: #17459] | - added italian translation of pipeline_tutorial.mdx
- updated _toctree.yml | 06-01-2022 09:42:33 | 06-01-2022 09:42:33 | Hi @nickprock! Thank you very much for your contribution to the ๐ค Italian documentation! ๐
Can I ask you if you can fix a couple of things?
- "Usare uno specifico tokenizer or modello" --> "Usare uno specifico tokenizer o modello"
- "La mansione text-generation ha un [~generation_utils.GenerationMixin.generate] metodo" --> La mansione text-generation ha un metodo [~generation_utils.GenerationMixin.generate]
- Here seeing the formatting I think there are missing quotation marks "[AutoTokenizer']. Ad esempio, carica la classe [AutoModelForCausalLM`]", I think the problem is also there in the English doc, if you can fix it for the ita one and then I will fix it for the eng version
- "Trova un [audio classification](https://huggingface.co/models?pipeline_tag=audio-classification) modello per eseguire emotion recognition" --> "Trova un modello per la [classificazione audio](https://huggingface.co/models?pipeline_tag=audio-classification) per eseguire un compito di riconoscimento automatico delle emozioni"
Thanks! ๐<|||||>Thanks @mfumanelli,
I correct and submit.
One question, how do you translate task? I used "compito", "mansione", "attivitร ".<|||||>Thanks for the great PR @nickprock! And @mfumanelli for the amazing review ๐<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes @nickprock, I agree with you, I also use "compito." It would be more natural for me to leave the English word "task" directly, so I also had this doubt. In my opinion, "compito", "mansione" and "attivitร " are actually the best words with which to translate the word "task". ๐ค
btw, looks perfect to me now!
<|||||>@nickprock grazie for the great PR! @mfumanelli grazie for the detailed review!
I am learning Italian with your work!
@sgugger LGTM :)
*PR related to #17459.<|||||>sorry, there was a little problem. I absentmindedly pushed my updates to the same branch. I undo but a check fails<|||||>Thanks a lot for this new translation (the failure in Build PR doc is spurious here). |
transformers | 17,506 | closed | Fix LayoutXLMProcessorTest | # What does this PR do?
Fix `LayoutXLMProcessorTest` test failure found in
https://github.com/huggingface/transformers/runs/6663596212?check_suite_focus=true
```
E ValueError: Calling LayoutXLMTokenizerFast.from_pretrained() with the path to a single file or url is not supported for this tokenizer. Use a model identifier or the path to a directory instead.
```
I just use a tiny model's hub name to fix it. | 06-01-2022 09:14:30 | 06-01-2022 09:14:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,505 | closed | Training large huggingface models on Azure with CUDA? [OPT] | I am trying to finetune the 1.3b and 30b OPT variant models respectively, however each time I try to train them I will always get the following error:
```
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 11.17 GiB total capacity; 1.95 GiB already allocated; 8.50 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I have tried reducing the batch size to 1. However short of running the model on CPU which would take way too much time, it will crash each time. If it's relevant I am using an Azure environment which uses the Tesla K80 accelerator. Has anyone managed to train or use these models on GPU? | 06-01-2022 07:21:04 | 06-01-2022 07:21:04 | Hi, for training such large models, a Tesla K80 won't suffice. You typically need # of parameters * 18 in terms of bytes of RAM when fine-tuning. So for the 30 billion parameter model, it's 30 billion * 18 = 540 GB. It's because you need 18 bytes per parameter to store not only the parameter itself, but also its gradient and optimizer states.
However there are tricks like mixed precision and frameworks like DeepSpeed to fit giant models, you can read more in our guide here: https://huggingface.co/docs/transformers/performance |
transformers | 17,504 | closed | bad_words_ids not working | ### Feature request
I'm using gpt2 for text generation with a word blacklist and noticed that some words on the blacklist were still being generated.
I found that even though the word ["badword"] would not be generated, it would still generate ["bad", "word"] in two tokens.
an example of this is [11908] and [7286, 1754]
this seems to be a different issue from the leading space issue and padding issue. I think I could get around it by adding the split tokens to the blacklist, but I can't seem to get the tokenizer to split the string to produce [7286, 1754]. Is there a way to get all possible permutations of a string to add to the blacklist?
### Motivation
Without this feature bad_words_ids basically doesn't work most of the time
### Your contribution
Not familiar with the tokenizer code unfortunately | 06-01-2022 02:07:36 | 06-01-2022 02:07:36 | I wrote a function to enumerate all possible permutations of " Badword", but it quickly blows up with hundreds of permutations like [" B","a","d","w","o","r","d"]. Limiting the token length works ok, but still doesn't prevent generation of variations like [" Bad","words"]
I think this overall approach just doesn't really work for preventing the generation of bad_words. Don't know if there's a better solution than generate + filter.
```
def get_bad_words_ids(tokenizer, bad_words, min_strlen=2):
vocab_tokens = tokenizer.get_vocab()
vocab = {}
for token in vocab_tokens:
vocab[tokenizer.convert_tokens_to_string([token])] = token
results = []
for bad_word in bad_words:
confirmed_tokens = []
possible_tokens = []
for token in vocab:
if bad_word == token:
confirmed_tokens.append([token])
elif bad_word.startswith(token):
possible_tokens.append([token])
while len(possible_tokens) > 0:
new_possible_tokens = []
for prefixes in possible_tokens:
prefix = ''.join(prefixes)
for token in vocab:
if len(token) < min_strlen:
continue
if bad_word == prefix + token:
found_prefix = prefixes.copy()
found_prefix.append(token)
confirmed_tokens.append(found_prefix)
elif bad_word.startswith(prefix + token):
found_prefix = prefixes.copy()
found_prefix.append(token)
new_possible_tokens.append(found_prefix)
possible_tokens = new_possible_tokens
results += confirmed_tokens
ids = []
for tokens in results:
gtokens = []
for token in tokens:
gtokens.append(vocab[token])
ids.append(tokenizer.convert_tokens_to_ids(gtokens))
return ids
```<|||||>Hey @Jack000 ๐ It is not clear from your description -- have you tried using the tokenizer with the instructions given in the `NoBadWordsLogitsProcessor` [docs](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.NoBadWordsLogitsProcessor.bad_words_ids)?
["...in order to get the token ids of the words that should not appear in the generated text, use `tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids`."]<|||||>That's what I did. This will consistently tokenize [" Badword"] as [11908] but during inference the model will generate [7286, 1754] which is [" Bad", "word"]
as I mentioned above I wrote a function to enumerate all possible ways of combining tokens to form "Badword", but the problem is that it doesn't work for variations like "Badwords" and "Badwordo". Extending the permutations to include these variations results in thousands of permutations per bad_word and doesn't really scale.<|||||>Okay, I think I got your issue :) When you add a word to `bad_word_ids`, you would like to have its sub-words and/or related words banned as well, correct?
There are a few things worth mentioning here:
1. It is intentional that sub-words do NOT get banned. Think about the word "doctorate", which is very different from two of its subwords ("doctor" and "ate"). Banning a word doesn't imply banning the subwords in most scenarios, and our implementation has to be flexible in that regard.
2. When a long word gets broken into more than a token, the first token has a prefix space and will be different from the corresponding token without the space. This is to avoid banning valid sequences that would contain the same characters. Example: if you ban "doctorate", "doctor ate" is a valid sequence. This is because the banned tokens will be " doctor" and "ate", not " doctor" and " ate" (notice the spaces).
3. Banned tokens resulting from a long word are never considered in isolation. Example: if you ban "doctorate", you can still generate " doctor" and "ate" in isolation, "the doctor wants to dictate" is a valid sequence.
4. I've tried running the "Badword" example you mentioned, and I do get two tokens (one for " Bad", the other for "word").
You can see an example for a few cases mentioned above [here](https://colab.research.google.com/drive/1ECYuKjDt76vw7uQ-5nRaUPjU2oG-eFBt#scrollTo=RdMoVNcbwhvZ).
The solution for banning subwords is to explicitly add them to the list of `bad_word_ids`. @patrickvonplaten have you seen tools to generate sub-words and/or derived words from a list of candidate words?<|||||>ah the actual bad word I was trying to ban was [" Hitler"].
I do understand how the bad_words_ids feature works, but I guess my issue is that I don't want the word "Hitler" generated under any circumstances subwords or otherwise. As you can see I did implement a function to enumerate all possible ways tokens can be combined to form "Hitler" to add to bad_words_ids, but if I include "Hitlers" and other such variations the possible permutations will number in the thousands.
anyways, I don't see a simple solution to this but the function I wrote in addition to filtering afterwards works ok for now.<|||||>> I do understand how the bad_words_ids feature works
My apologies :D Better safe than sorry, in case there was some confusion about the intended behavior.<|||||>@patil-suraj could you maybe also take a look here? Otherwise happy to dive deeper if necessary<|||||>Sorry could I ping @ArthurZucker or @gante on this one maybe? :-) <|||||>Hey! I looked at the problem a bit, and as you mentioned, the permutations would be a bit too problematic.
We can probably work this out by rather banning a normalized string. Instead of checking if [Bad_id,Word_id] are generated, we can should convert the a string by deciding, normalize and remove the bad word. This is more efficient but might not have its place in the generate function, as the tokenizer is not available. But it probably makes sens to have a custom logit processor that needs to be initialized with the tokenizer. Let me ask around ๐ค
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,503 | closed | Fix MP and CPU offload tests for Funnel and GPT-Neo | # What does this PR do?
This PR should fix the last failing GPU/multi-GPU tests. | 05-31-2022 23:45:41 | 05-31-2022 23:45:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,502 | closed | support ONNX export of XDropout in deberta{,_v2} and sew_d | # What does this PR do?
Enables `torch.onnx.export` of the `StableDropout` module in training mode.
In training mode, the `XDropout` torch.autograd.Function is included. This change
adds a `symbolic` function to `XDropout` which produces an ONNX graph that
is equivalent to the `forward` function.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
No. LMK if you want me to open an issue.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
No, LMK if there's some doc that I should update.
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
| 05-31-2022 19:16:11 | 05-31-2022 19:16:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@michaelbenayoun thanks for taking a look. I added a test.
Unfortunately I couldn't use the existing testing style because none of the affected models have an ONNX config (that's tracked by https://github.com/huggingface/transformers/issues/16308).<|||||>@michaelbenayoun bump, PTAL<|||||>@michaelbenayoun bump<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@garymm any update on this? <|||||>@michaelbenayoun @lewtun do you have any updates on this PR?<|||||>Thank you for your contribution |
transformers | 17,501 | closed | Refactor to inherit from nn.Module instead of nn.ModuleList | # What does this PR do?
Refactors classes inheriting from `nn.ModuleList` to inherit from `nn.Module` instead. This is to make debugging and inspecting layer outputs more easy.
See also: https://github.com/huggingface/transformers/pull/17493
The following was run to check the weight loading in:
```
from transformers import BeitForImageClassification, Data2VecVisionForImageClassification
print("\nLoading in Data2VecVision model...")
model_checkpoint = "facebook/data2vec-vision-base"
model = Data2VecVisionForImageClassification.from_pretrained(model_checkpoint)
print("\nLoading in BeiT model...")
model_checkpoint = "microsoft/beit-base-patch16-224-pt22k"
model = BeitForImageClassification.from_pretrained(model_checkpoint)
```
Output:
```
Loading in Data2VecVision model...
/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of Data2VecVisionForImageClassification were not initialized from the model checkpoint at facebook/data2vec-vision-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading in BeiT model...
Some weights of the model checkpoint at microsoft/beit-base-patch16-224-pt22k were not used when initializing BeitForImageClassification: ['layernorm.bias', 'layernorm.weight', 'lm_head.weight', 'lm_head.bias']
- This IS expected if you are initializing BeitForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BeitForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BeitForImageClassification were not initialized from the model checkpoint at microsoft/beit-base-patch16-224-pt22k and are newly initialized: ['beit.pooler.layernorm.bias', 'classifier.bias', 'classifier.weight', 'beit.pooler.layernorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Running on `main` we see the same weights are newly initialized:
```
Loading in Data2VecVision model...
/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of Data2VecVisionForImageClassification were not initialized from the model checkpoint at facebook/data2vec-vision-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading in BeiT model...
Some weights of the model checkpoint at microsoft/beit-base-patch16-224-pt22k were not used when initializing BeitForImageClassification: ['layernorm.bias', 'lm_head.bias', 'lm_head.weight', 'layernorm.weight']
- This IS expected if you are initializing BeitForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BeitForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BeitForImageClassification were not initialized from the model checkpoint at microsoft/beit-base-patch16-224-pt22k and are newly initialized: ['beit.pooler.layernorm.weight', 'classifier.bias', 'classifier.weight', 'beit.pooler.layernorm.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. | 05-31-2022 17:53:45 | 05-31-2022 17:53:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Let us know when you'd like for us to review! :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,500 | closed | Fix `tokenizer` type annotation in `pipeline(...)` | I think you mean to accept either an instance of `PreTrainedTokenizer` or `PreTrainedTokenizerFast` inside of the `pipeline(...)` factory function, if the `tokenizer` argument isn't a `str`.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-31-2022 16:45:45 | 05-31-2022 16:45:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,499 | closed | Debug LukeForMaskedLM | # What does this PR do?
Fix some undesirable behaviors of `LukeForMaskedLM`.
#### 1. Make `LukeForMaskedLM` accept inputs without entity ids
**Beforeโ:**
```
>>> from transformers import LukeForMaskedLM, MLukeTokenizer
>>> import torch
>>> text = "test string"
>>> model = LukeForMaskedLM.from_pretrained('studio-ousia/luke-base')
>>> tokenizer = MLukeTokenizer.from_pretrained('studio-ousia/luke-base')
>>> encoding = tokenizer(text, return_tensors="pt")
>>> outputs = model(**encoding)
TypeError: linear(): argument 'input' (position 1) must be Tensor, not NoneType
```
I have fixed this by making entity inputs optional in the forward function.
#### 2. Make `LukeForMaskedLM` instantiable from `AutoModelForMaskedLM`
**Beforeโ:**
```
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("studio-ousia/luke-base")
ValueError: Unrecognized configuration class <class 'transformers.models.luke.configuration_luke.LukeConfig'> for this kind of AutoModel: AutoModelForMaskedLM.
```
I have fixed this by adding `("luke", "LukeForMaskedLM")` to `MODEL_FOR_MASKED_LM_MAPPING_NAMES`.
## Who can review?
@NielsRogge, could you check this PR? Thanks! | 05-31-2022 16:37:49 | 05-31-2022 16:37:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,498 | closed | [GPT2Tokenizer] Raise ValueError for Fast GPT2Tokenizer with bos token for now | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
@sgugger - As discussed offline the best fix here will be to make sure GPT2TokenizerFast works correctly which is however dependent on https://github.com/huggingface/tokenizers/pull/1005 and will probs take some time. Think it's important that we raise a ValueError though as otherwise users will run into silently not adding BOS to OPT which I'd like to avoid.
The error message should be clear enough for the user to understand how to change.
@LysandreJik @sgugger also make it's worth doing a patch release here actually (not sure maybe not super important though)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-31-2022 16:30:12 | 05-31-2022 16:30:12 | cc @thomasw21 @SaulLu @mishig25 @ArthurZucker <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @patrickvonplaten ! It seems really good to have this error raised while figuring out how to do the processor.<|||||>> Also make it's worth doing a patch release here actually (not sure maybe not super important though)
We're likely going to release tomorrow or as soon as BLOOM is merged, so this will be included in it!<|||||>Hello! when using the GPT2TokenizerFast for the OPT model, I get a warning that the fast version of the tokenizer is working incorrectly. I'm wondering what is the status of this problem, or is it okay to use the fast tokenizer now?
<|||||>I think the fast OPTTokenizer should work now after it has been merged to `tokenizers` no? cc @Narsil <|||||>Actually no need for any modifications or `tokenizers` specific version. Everything should have been fixed within `transformers`.
But I am not sure about every single OPT model on the hub, I didn't make modifications there (nor am I sure how I should do that).
So if we are using an old incorrect tokenizer, the warning might still be valid. |
transformers | 17,497 | closed | CLI: tool to convert PT into TF weights and open hub PR | # What does this PR do?
This PR adds a CLI to convert PT weights into TF weights, validate them, and (optionally) open a PR. The open PR part depends on https://github.com/huggingface/huggingface_hub/pull/884, and is wrapped in a try/except for the time being.
Here are 3 PRs open with the tool (`transformers-cli pt-to-tf --model-name [model-name] --local-dir [local-dir] --open-pr`):
1. Text modality: https://huggingface.co/joaogante/test_text/discussions/1
2. Audio modality: https://huggingface.co/joaogante/test_audio/discussions/1
3. Image modality: https://huggingface.co/joaogante/test_img/discussions/1
This tool can also be used to check existing weights. Sadly, there is no programmatic way to check the weights in existing hub PRs, they have to be downloaded manually. | 05-31-2022 16:20:32 | 05-31-2022 16:20:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,496 | closed | Exclude Databricks from notebook env | # What does this PR do?
This PR makes sures `is_in_notebook()` returns `False` when inside Databricks.
Fixes #17406 | 05-31-2022 16:08:58 | 05-31-2022 16:08:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,495 | closed | has_attentions - consistent test skipping logic and tf tests | # What does this PR do?
Two linked changes regarding the control of tests being run:
**PyTorch and TF consistency**: `has_attentions` flag is used in `test_modeling_common.py` to control some of the logic in tests that are run, only applying them if the model has attention(s). This PR adds equivalent logic to tests in `test_modeling_tf_common.py`
**Skipping tests consistency**: `unittest.skip` is used to skip entire tests if they cannot/do not apply to that model e.g. for [input embedding test in ConvNext](https://github.com/huggingface/transformers/blob/f394a2a50d8729cd1ca9b368e330ec50664c3292/tests/models/convnext/test_modeling_convnext.py#L161). For `test_attention_outputs` this was controlled with an if-else statement. This was changed to be controlled with `unittest.skip` instead for two reasons: 1) consistency with the rest of the code base 2) prevent confusing pytest outputs i.e. models without attention are shown to skip `test_attention_outputs` instead of passing it.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 05-31-2022 16:04:58 | 05-31-2022 16:04:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge, @ydshieh, @patrickvonplaten adding you to cover: git blame ownership, test ownership, some git blame and general transformers ownership. I hope that's OK. Feel free to remove yourselves and/or add others you think are more suitable. <|||||>Thanks a lot for cleaning this up @amyeroberts ! <|||||>@NielsRogge I decide not to remove `has_attentions` in this PR, and would like to focus on making the PT/TF test consistency. If that's OK with you, I'll go ahead and merge. |
transformers | 17,494 | closed | Fix TF _generate | # What does this PR do?
To be added | 05-31-2022 15:57:11 | 05-31-2022 15:57:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,493 | closed | Refactor classes to inherit from nn.Module instead of nn.Sequential | # What does this PR do?
Refactors classes that inherit from `nn.Sequential` to inherit from `nn.Module` instead. This is to make the code easier to debug and inspect.
Changes:
* Explicit `forward` method implemented for classes
* Iterating over layers and them registering to the module using `add_module(str_ind, layer)` in `__init__`. This provides backwards compatibility as the submodules will be named according to their position in the call stack, like in nn.Sequential, which is needed to load in the same checkpoints.
Note: This does not include other possible `nn.Sequential` refactoring within modules e.g. [lxmert](https://github.com/huggingface/transformers/blob/6ee1474b67b088829555364a14ebfb45e661fac4/src/transformers/models/lxmert/modeling_lxmert.py#L721), or inheriting from `ModuleList` e.g. in [Beit](https://github.com/huggingface/transformers/blob/2ef09ecfb8afb6624aab87afdad9fe72030397af/src/transformers/models/beit/modeling_beit.py#L962). These are more involved changes and should be address in separate PRs.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
| 05-31-2022 14:41:13 | 05-31-2022 14:41:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,492 | closed | Adding the Portuguese version of the tasks/token_classification.mdx documentation | # What does this PR do?
Adding the Portuguese version of the tasks/token_classification.mdx documentation
Work on #16824
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel
| 05-31-2022 13:30:27 | 05-31-2022 13:30:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @jonatasgrosman for the translation! ๐ค
LGTM @sgugger. |
transformers | 17,491 | closed | [ViT_MAE] fix num of channels in `patchify` and `unpatchify` | # What does this PR do?
Fix the hard-coded number of channels in `patchify` and `unpatchify` methods.
Fixes #17473
## Who can review?
@NielsRogge
| 05-31-2022 13:21:04 | 05-31-2022 13:21:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge I didn't find what is breaking the doc now<|||||>@NielsRogge , can you re-run the gh action? looks like something interrupted it<|||||>Hi,
I've created PR #17710 that fixes some more things, like the variable names and docstrings. |
transformers | 17,490 | closed | GPT-2 Forward w/ and w/o caching of past values gives different results | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hey, I am trying to do a forward() pass of the GPT-2 model with and without caching of past values and observed that the logits are slightly different. Is this to be expected or I am missing something? I highly appreciate it if someone could help me with this.
Code snippet for an MWE below (Check the last assert statement which fails)
```python
from transformers import GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
with torch.no_grad():
#########################################################################
# with forward and no caching of past
# left padded to size of 5
# step 0
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2]).reshape(1, -1)
attention_mask = torch.tensor([0, 0, 0, 0, 1]).reshape(1, -1)
position_ids = torch.tensor([1, 1, 1, 1, 0]).reshape(1, -1)
gen_outputs = model(input_ids=input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
return_dict=True)
no_cache_0_next_token_logits = gen_outputs.logits[0, -1, :].clone()
# step 1 - input grown by 1
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2, 5]).reshape(1, -1)
attention_mask = torch.tensor([0, 0, 0, 0, 1, 1]).reshape(1, -1)
position_ids = torch.tensor([1, 1, 1, 1, 0, 1]).reshape(1, -1)
gen_outputs = model(input_ids=input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
return_dict=True)
no_cache_1_next_token_logits = gen_outputs.logits[0, -1, :].clone()
########################################################################
# with forward with caching
# left padded to size of 5
# step 0
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2]).reshape(1, -1)
model_kwargs = {
"attention_mask": torch.tensor([0, 0, 0, 0, 1]).reshape(1, -1)
}
model_inputs = model.prepare_inputs_for_generation(
input_ids, **model_kwargs)
gen_outputs = model(**model_inputs,
return_dict=True)
cache_0_next_token_logits = gen_outputs.logits[0, -1, :].clone()
assert torch.equal(cache_0_next_token_logits,
no_cache_0_next_token_logits) == True
model_kwargs = model._update_model_kwargs_for_generation(
gen_outputs, model_kwargs, is_encoder_decoder=model.config.is_encoder_decoder
)
# step 1 - input grown by 1
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2, 5]).reshape(1, -1)
model_inputs = model.prepare_inputs_for_generation(
input_ids, **model_kwargs)
gen_outputs = model(**model_inputs,
return_dict=True)
cache_1_next_token_logits = gen_outputs.logits[0, -1, :].clone()
assert torch.equal(cache_1_next_token_logits,
no_cache_1_next_token_logits) == True
```
### Expected behavior
```shell
Expected behavior: Caching does not affect the logits and only speeds up the computation.
```
| 05-31-2022 11:51:35 | 05-31-2022 11:51:35 | Hey @rajcscw,
I'm not 100% sure what your codesnippet is doing there exactly (note that I wouldn't try to pass the position_ids, but instead let GPT2 handle that).
We have exactly that test in transformers which you can find here: https://github.com/huggingface/transformers/blob/975dd2bbbcd4e8bdaf07c275c090d218d88c7c12/tests/models/gpt2/test_modeling_gpt2.py#L288
Could you take a look here and see whether you can use this code?
Also cc @patil-suraj @ArthurZucker just FYI
<|||||>Not sure either but the inputs fed to the network might be different as :
1. the `position_ids` are specified with :
```python
attention_mask = torch.tensor([0, 0, 0, 0, 1, 1]).reshape(1, -1)
position_ids = torch.tensor([1, 1, 1, 1, 0, 1]).reshape(1, -1)
```
in the first case
2. in the second case they are not specified and should be automatically created.
Check if you still have an issue when you use the same position ids vectors, or just don't input them.
<|||||>@patrickvonplaten I will check that test case and adapt my example. @ArthurZucker In both cases, the position IDs should be identical; in the first case, it is created explicitly and in the other, it is generated automatically. I will test it without passing any position IDs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,489 | closed | `do_eval` is True when setting `do_predict`=True | ### System Info
```shell
- `transformers` version: 4.12.3
- Platform: Linux-3.10.0-1062.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patil-suraj @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This problem happens generally to me after I started to use transformers>=4.12.3. To easily reproduce the issue, we can use the text summarization example. After training the model, we remove `--do_train` and `--do_eval` and add `--do_predict`. However, the model will run ``evaluation'' first before running ``prediction''. I checked the source of this issue and it seems to be the data parsing issue from this line: https://github.com/huggingface/transformers/blob/main/src/transformers/hf_argparser.py#L214.
Before running this line, the value of `do_eval` is still False. However, it turns out to be True afterward.
### Expected behavior
```shell
When only setting `--do_predict`, the model should not parse `do_eval` to be True.
```
| 05-31-2022 09:00:08 | 05-31-2022 09:00:08 | Hi, could you post a code-snippet to reproduce this. I just tried this command and it works as expected and does not run eval.
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_predict \
--dataset_name xsum \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```<|||||>The above command works great for me! I just found that usually I just modify the script for prediction by removing `--do_train` and `--do_eval` and add `--do_predict` without changing the other commands. However, seems this is causing the problem. When running the command below, this issue happens.
```
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_predict \
--dataset_name xsum \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
--save_strategy epoch \
--evaluation_strategy epoch
```<|||||>in your command you have set `evaluation_strategy` to `epoch` and `do_eval` defaults to `True` if `evaluation_strategy` is set. cf
https://github.com/huggingface/transformers/blob/28d0048218ad7bce69510b16024510afba0daed2/src/transformers/training_args.py#L114-L118<|||||>Got it. Thank you very much for your fast response! Have a nice day! |
transformers | 17,488 | closed | _batch_encode_plus() got an unexpected keyword argument 'is_pretokenized' using BertTokenizerFast | ### System Info
```shell
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
for token, label in zip(tokenizer.convert_ids_to_tokens(training_set[0]["input_ids"]), training_set[0]["labels"]):
print('{0:10} {1}'.format(token, label))
The error I am getting is:
Traceback (most recent call last):
File "C:\Users\1632613\Documents\Anit\NER_Trans\test.py", line 108, in <module>
for token, label in zip(tokenizer.convert_ids_to_tokens(training_set[0]["input_ids"]), training_set[0]["labels"]):
File "C:\Users\1632613\Documents\Anit\NER_Trans\test.py", line 66, in __getitem__
encoding = self.tokenizer(sentence,
File "C:\Users\1632613\AppData\Local\conda\conda\envs\ner\lib\site-packages\transformers\tokenization_utils_base.py", line 2477, in __call__
return self.batch_encode_plus(
File "C:\Users\1632613\AppData\Local\conda\conda\envs\ner\lib\site-packages\transformers\tokenization_utils_base.py", line 2668, in batch_encode_plus
return self._batch_encode_plus(
TypeError: _batch_encode_plus() got an unexpected keyword argument 'is_pretokenized'
```
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Download the NER Dataset from the Kaggle link (https://www.kaggle.com/datasets/namanj27/ner-dataset)
2. Use the Script with the mentioned versions of transformers and tokenizers:
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
for token, label in zip(tokenizer.convert_ids_to_tokens(training_set[0]["input_ids"]), training_set[0]["labels"]):
print('{0:10} {1}'.format(token, label))
### Expected behavior
```shell
I expect to get the token, label from the script above.
Python Version: 3.9
tokenizers-0.12.1
transformers-4.19.2
Anyone can shed some light please?
```
| 05-31-2022 08:19:18 | 05-31-2022 08:19:18 | Hi @anitchakraborty ,
Could you share an example of `training_set[0]["input_ids"]`. I don't see "input_ids" in the columns of the kaggle dataset you shared - which are "Sentence #", "Word", "POS" and "Tag". Without a toy example, we can't reproduce your problem and it's hard for us to help you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm closing this issue due to lack of activity, but don't hesitate to come back to us with an extract of your data so that we can help you! :blush: <|||||>I am encountering the same issue, suggestions?<|||||>Hi @ludwigwittgenstein2 ,
Thank you for sharing that you also have this issue too. To understand what is going on, could you please share a code snippet that reproduces the error and the output of `transformers-cli env` ? Thanks in advance!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am having the same problem
here is the output of `transformers-cli env`
```
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
you can also find the colab notebook [here](https://drive.google.com/file/d/1HyTLxHs8S4tAsdpF4GHd34tKuVorBNOz/view?usp=sharing) <|||||>Experiencing the same issue. I think it depends on the version compatibility of PyTorch or Transformers. This notebook is different from the others since the predictions are made sentence-wise.
It works very well with Python 3.7, Transformers 3.0.2. @SaulLu would appreciate your help. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>```
from transformers import BertTokenizerFast, EncoderDecoderModel
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = BertTokenizerFast.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization')
model = EncoderDecoderModel.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization').to(device)
def generate_summary(text):
# cut off at BERT max length 512
inputs = tokenizer([text], padding="max_length", truncation=True, max_new_tokens=512, return_tensors="pt")
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
return tokenizer.decode(output[0], skip_special_tokens=True)
text = "your text to be summarized here..."
generate_summary(text)
```
**TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'max_new_tokens'**
|
transformers | 17,487 | closed | How can i use bpe tokenizer in t5 pretrain from scratch | ### System Info
```shell
transformers version: 4.20.0.dev0
when i use the transformers/examples/flax/language-modeling/run_t5_mlm_flax.py to pretrain the t5 from scratch, the preprocess i use the bpe codes to split sentence but not the original tokenizer. How can i use the bpe codes?
```
### Who can help?
@patrickvonplaten
@SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction

### Expected behavior
```shell
replace the tokenizer with load bpe codes to split sentence
```
| 05-31-2022 08:14:37 | 05-31-2022 08:14:37 | Hey @520jefferson,
Could you please provide us with a codesnippet that we can copy-paste directly into a terminal? We sadly don't have the time to try to guess what we should reproduce.
Thanks a lot!<|||||>Hey @patrickvonplaten
i just want to make sure whether the run_t5_mlm_flax.py script provider another tokenizer which can just load bpe codes or vocab to initialize the tokenizer.
Or maybe i don't want to use tokenizer, i just need vocab.txt, because i can preprocess with bpe tokenizer before training. How should i do my training without tokenizer?
Thanks
<|||||>Think you have to use a tokenizer to do training (the model needs numbers not letters). You could otherwise try to use `Canine` or `ByT5`<|||||>@patrickvonplaten
Tokenizer can just split the sentence according to space like txts = text.split(" ") , and the token txts[i] can find in vocab.txt, then it can be transfer to number, so i just need the tokenizer load vocab.txt and split according to space , then they can be transfer to numbers.
Thanks for reply !<|||||>
Hi!
If you're ever sure you want to do this (you probably know this, but there are an infinite number of possible words and the size of the vocabulary is computationally expensive), you can create this type of tokenizer with the tokenizers library by instantiating a particular [WordLevel](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.WordLevel) model. Then you will have to load it in a `PreTrainedTokenizerFast` of transformers. All the necessary steps are explained [here](https://huggingface.co/docs/transformers/fast_tokenizers).<|||||>Hi, @SaulLu
1. I set like like:
**>>> from transformers import PreTrainedTokenizerFast
>>> from tokenizers.models import WordLevel
>>> vocab = WordLevel.from_file("./chitchat-t5-base/vocab.json","<unk>")
>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=vocab)
>>> fast_tokenizer.decode("ๅนณๆถ ไฝ ๅนฒ็น ไปไน <sep> ๆพ ๅทฅไฝ",skip_special_tokens=True)
**
Then i got this: AttributeError: 'tokenizers.models.WordLevel' object has no attribute 'decode':

and i also use tokenize, it sames i don't set the trunctation:

and i use encode, i met this(**_AttributeError: 'tokenizers.models.WordLevel' object has no attribute 'truncation_**'):

2. Or if i replace autotokenizer to PreTrainedTokenizerFast in run_t5_mlm_flax.py, and train a new t5, i got this:

3.My vocab.json like this below, it seems the tokenizer cannot load this vocab. In order to use the tokenizer how should i load this to a proper tokenizer?
{
"<unk>": 0,
"<eod>": 1,
"<pad>": 2,
"<mask>": 3,
"<sep>": 4,
"๏ผ": 5,
"็": 6,
"๏ผ": 7,
"ไบ": 8,
.....
.....
.....
"<s_181>": 33786,
"<s_182>": 33787,
"<s_183>": 33788,
"<s_184>": 33789,
"<s_185>": 33790,
"<s_186>": 33791,
"<s_187>": 33792,
"<s_188>": 33793,
"<s_189>": 33794
}
4. i also try like this, it doesno't work:

then i uninstall transformers and sentencepiece and pip install transformers sentencepiece, the errors are same.

<|||||>Hey @520jefferson,
Could you please try to to not post screenshots here? Not that we cannot reproduce code from screenshots as it's not easily copy-pastable into the command line<|||||>@patrickvonplaten @SaulLu
Thanks for reply, codes and vocab like follows.
1. The vocab.txt (vocab.json is manually constructed from vocab.txt ) and meregs.txt i upload to google drive as follows:
vocab.txt:https://drive.google.com/file/d/10jC8L_-RDLRv5QkAato8nJWGU1UQQcz1/view?usp=sharing
vocab.json:https://drive.google.com/file/d/1e5Ll0bAHhikhnYV5XaW3NB8aTSWdCvnC/view?usp=sharing
merges.txt:https://drive.google.com/file/d/1ifXlQaYod_kobqgNe82tmHTtHpxBYnBq/view?usp=sharing
2.The sentences for training and validation and test like this (after bpe, tokens split by " "):
ไฝ ่งๅพ ๅคงไบบ ่พ่ฆ ่ฟๆฏ ๅญฆ็ ่พ่ฆ <sep> ้ฝ ๅพ ่พ่ฆ
ๅคดๆก ๆ็ซ ๆฒก ๅฅ ่ฟ่ง ๏ผ ๅด ่ขซ ๅฐ@@ ๆตช@@ ๆตช ๅฑ่ฝ ไบ ๏ผ ่ไธ ๅ ไบ ๅ
็ ็ ่ฝฌๅ ่ฏไปท ๏ผ ๅๅ ๆฐๅนด ๅฐ ่ณ ๏ผ ไฟบ ไธๆณ ๅ็ซ ๏ผ ่ก ๏ผ ไฟบ ๅ ๅ ไธ้ ๏ผ <sep> ๆไน ๅ ไบ ๏ผ ่ฟ ๆฒก ็ ๅข
ไธ่พ ๆ ็ญพๅ ไน ๏ผ ๏ผ โฆ <sep> ๆฒกๆ ๆบไผ ๅป ็ญพ@@ ๅฎ@@ ไผ ๅฆ ๅนธๅฅฝ ้้ข ็ ๅฎน ๅ ๅฐ ๅก็ ๆ ็ญพๅ
ไฝ ๅธฎ ๆ ไนฐ ไธ่ฅฟ ๅ <sep> ไฝ ็ป้ฑ ๆ ๏ผ ๅฝ็ถ ๅธฎ ไฝ ไนฐ ่ถ
ไฝ ่ฏด ้ฃไธช ๆฉๆจ ๅ ้ฃไธช ๆฐดๆ ไปไน ๅฅฝๅค <sep> ๅฏไปฅ ๆ้ซ ็ก็ ่ดจ้ <sep> ๅ
ปๆ ่ฏๅฅฝ ็ ็ก็ ๆถ้ด ๅ ไน ๆฏ <sep> ๆ
ขๆ
ข ๅ
ปๆ ๆฉ็กๆฉ่ตท ็ ไน ๆฏ ๏ผ ไน @@ ๆฏ@@ ๆ@@ ่ช็ถ
ๆฑไธช ้ฃๆฏ ่ถ
็พ็ ็ฝๆธธ ๆๅฅฝ ๆฏ ้ฉๅฝ ็ <sep> ๅไพ ๆ
็ผ ๅ
็ฐๅจ ็พๅบฆ ๅธๅท ๆฏ ไธ่ฝ ๆฟ ้ฎ็ฎฑ ๆณจๅ ไบ ไน ๏ผ ๅช่ฝ ๆฟ ๆๆบๅท ไบ ไน ๏ผ ๅฆๆ ๅฏไปฅ ๅบ่ฏฅ ๆไน ๆฟ ้ฎ็ฎฑ ๆณจๅ ๏ผ ่ฐข่ฐข ๏ผ <sep> ๅ
็จ ๆๆบ ๆณจๅ ๏ผ ็ถๅ ็ปๅฎ ไธไธช ้ฎ็ฎฑ ๏ผ ๅ@@ ่งฃ ็ป ๆๆบ ๅณๅฏ
ๅฑไปฌ ๅบๅป ่ฝฌ ไผๅฟ ้@@ ๅผฏ@@ ๅฟ ๅป ๅ <sep> ๆ ๅจ ๅทฅ@@ ไฝ ็ ๆผซ ๅๅก ๏ผ ่ฆ ไธ่ฆ ๆฅ ๅ ไผๅฟ
ๆ ็ฅ้ ๆ่ฟ ๅ ไปไน <sep> ๅๅค ๆผๅฑไผ ็ ไบ ๅง
3.i want to use the tokenizer to load the vocab and tokenizer to tokenizer my sentence and give it to the t5 model.
load model like this(config: https://drive.google.com/file/d/1WOb-gqjkt1m6GBTFeq4wOWS3dW3Qt1oK/view?usp=sharing):
from transformers import T5Config, T5ForConditionalGeneration
config = T5Config.from_json_file(config_file)
model = T5ForConditionalGeneration(config)
load tokenizer:
from tokenizers.models import WordLevel
from transformers import PreTrainedTokenizerFast
vocab = WordLevel.from_file("vocab.json","<unk>")
fast_tokenizer=PreTrainedTokenizerFast(tokenizer_object=vocab)
fast_tokenizer.encode("ไฝ ่งๅพ ๅคงไบบ ่พ่ฆ ่ฟๆฏ ๅญฆ็ ่พ่ฆ <sep> ้ฝ ๅพ ่พ่ฆ")
then i met this errror:AttributeError: 'tokenizers.models.WordLevel' object has no attribute 'truncation'
4. So i want to load the vocab into tokenizer and use it like this { source = tokenizer.batch_encode_plus([source_text], max_length= 75, pad_to_max_length=True, truncation=True, padding="max_length", return_tensors='pt')
} and return { 'source_ids': source_ids.to(dtype=torch.long), 'source_mask': source_mask.to(dtype=torch.long), 'target_ids': target_ids.to(dtype=torch.long), 'target_ids_y': target_ids.to(dtype=torch.long) } , and give the tokenizer result to model and train the model like translation task, how should i do ?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,486 | closed | Transformers get stuck in from_pretraind | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-4.15.0-180-generic-x86_64-with-debian-11.0
- Python version: 3.7.11
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Requirements
```
celery==4.4.7
redis==3.4.1
Flask==1.1.4
itsdangerous==1.1.0
markupsafe==1.1.1
Flask-Cors==3.0.3
flower==0.9.7
Flask-Migrate==2.5.2
flask-restplus==0.13.0
Flask-Script==2.0.6
Werkzeug==0.16.1
summa==1.2.0
pattern3==3.0.0
gensim==3.8.3
pandas==1.1.5
keybert==0.5.0
numpy==1.19.5
nltk==3.6.3
beautifulsoup4==4.10.0
requests==2.21.0
text2text==0.6.6
wikipedia-api
jinja2<3.1.0
torch==1.7.1 #conda install pytorch cudatoolkit=10.1
transformers==4.19.2 #pip install transformers #pip install transformers[sentencepiece]
streamlit==1.5.0 #Review if finally used
seaborn==0.11.2 #Review if finally used
spacy==3.2.1
segtok==1.5.11
datasets==1.18.2
wiktionaryparser==0.0.97
en-core-web-md @ https://github.com/explosion/spacy-models/releases/download/en_core_web_md-3.2.0/en_core_web_md-3.2.0-py3-none-any.whl
es-core-news-md @ https://github.com/explosion/spacy-models/releases/download/es_core_news_md-3.2.0/es_core_news_md-3.2.0-py3-none-any.whl
rouge-score
absl-py
sacrebleu
meteor
```
Python code
```python
model_name="Vamsi/T5_Paraphrase_Paws"
AutoTokenizer.from_pretrained(model_name)
print(f"1 Loading model {str(model_name)}")
AutoModelForSeq2SeqLM.from_pretrained(model_name)
print(f"2 Loading model {str(model_name)}")
```
### Expected behavior
```shell
Good evening, sorry, I'm new working with Huggingface library.
I'm having an issue deploying a Flask application that I've deployed in Python.
It gets stuck at AutoModelForSeq2SeqLM.from_pretrained(model_name) and the loading process last forever.
I'm not sure if I've committed any mistake and this is not a bug so in case please notifiy me.
Thank you so much.
```
| 05-31-2022 07:42:01 | 05-31-2022 07:42:01 | I am experiencing the same problem with the .from_pretrained calls. For me the temporary solution was to use a VPN, but of course it isn't fun. I was wondering why we need internet connection at all, since after the first call the data should be cached locally. Maybe add a flag in the call to explicitly run offline only?<|||||>In case of connection issues, you can try the [offline mode](https://huggingface.co/docs/transformers/v4.19.2/en/installation#offline-mode).<|||||>Could you share a bit regarding your location/network? Do you mange to download files when going through the UI on the hub?<|||||>> In case of connection issues, you can try the [offline mode]
Thanks! This is what I was looking for!
> Could you share a bit regarding your location/network? Do you mange to download files when going through the UI on the hub?
On my side, I am connecting through the VPN of my org. VPN location is chosen automatically, but when I turn it off the code execution of the .from_pretrained(..) is normal, whereas in the last days with VPN on it takes maybe 15 minutes. Before there were no issues with this. I am capable of downloading without issues the files from the hub directly.
Thanks for your time! |
transformers | 17,485 | closed | Add HF.co for PRs / Issues regarding specific model checkpoints | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Checkpoint issues should be put up directly to the Hub
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-31-2022 07:26:51 | 05-31-2022 07:26:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @lhoestq for datasets |
transformers | 17,484 | closed | Fix checkpoint name | # What does this PR do?
Fix some `eleutherai/gpt-neox-20b` to `EleutherAI/gpt-neox-20b`.
The uppercase vs. lowercase in the checkpoint name matters, otherwise I got error like
`is not a local folder and is not a valid model identifier listed on` | 05-31-2022 07:03:43 | 05-31-2022 07:03:43 | cc @zphang :-)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,483 | closed | Performance (perplexity) decrease after conversion megatronGTP2 to hugging face model | ### System Info
```shell
transformers==4.19.2
PyTorch: 1.11.0
CUDA: cu11.0
Train GPUs: 1node (A100 8gpus)
Test GPUs: A100 1gpu
Megatron-LM: https://github.com/NVIDIA/Megatron-LM
```
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Pretrain my own megatronGPT2 on corpus almost similar to that used in a pre-trained megatronGPT2 by Nvidia (https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m)
- I used same vocab and merge files with megatronGPT2 by Nvidia
- vocab: https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
- merge: https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
2. Test perpexity on WIKITEXT103 testset and compare performance of the above pre-trained megatronGPT2 models using evaluation script
- https://github.com/NVIDIA/Megatron-LM/blob/main/tasks/main.py
3. Convert the above pre-trained models to hugging models using the conversion script in transformers
- https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py
- Belows show the config files of converted models. `activation_function` and `vocab_size` are different.
```
- Mine
{
"activation_function": "gelu_fast",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_embd": 1024,
"n_head": 16,
"n_inner": 4096,
"n_layer": 24,
"n_positions": 1024,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"tokenizer_class": "GPT2TokenizerFast",
"transformers_version": "4.19.2",
"use_cache": true,
"vocab_size": 50304
}
```
```
- Nvidia
{
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_embd": 1024,
"n_head": 16,
"n_inner": 4096,
"n_layer": 24,
"n_positions": 1024,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"tokenizer_class": "GPT2TokenizerFast",
"transformers_version": "4.19.2",
"use_cache": true,
"vocab_size": 50257
}
```
4. Test perpexity on WIKITEXT103 testset and compare performance of the converted hugging models by following the guide
- https://huggingface.co/docs/transformers/perplexity#calculating-ppl-with-fixedlength-models
5. Below table is my test results
- Before means pre-trained models' and After means converted models'.
| | Before | After |
| -- | -- |-- |
| NVIDIA Megatron_345M | **14.77** | **17.15** |
| **My_Model_345M** | 15.73 | 23.89 |
### Expected behavior
```shell
I am wandering where the performance difference between converted Mine and Nvidia models comes from.
In addition, I do not know why the vocab size of Mine had been changed from 50,257 to 50,304.
(the number 50,304 means vocab size 50,257 plus dummy token)
I manually changed `activation_function` and `vocab_size` in config file of Mine to same with Nvidia and test it again. But, the performance difference is same.
I expect similar perplexity from converted hugging face models of both pre-trained my own model and Nvidia.
Do anyone have similar experience?
```
| 05-31-2022 02:51:25 | 05-31-2022 02:51:25 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,482 | closed | model.parallelize() for OPT models | ### Feature request
It would be great if we can have function for fitting OPT model on multiple GPUs using **model.parallelize()**, similar to what we already have for [GPT-J](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/gptj/modeling_gptj.py#L494)
### Motivation
It would be extremely helpful for fitting large OPT models such as opt-30b, opt-13b. An ideal solution would be similar to what has been integrated in [GPT-J](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/gptj/modeling_gptj.py#L494), where we can simply call **model.parallelize()** to load the model on multiple gpus.
### Your contribution
glad to help on making this feature possible | 05-30-2022 22:37:03 | 05-30-2022 22:37:03 | @sgugger can share more on that, but we now recommend leveraging `accelerate` in order to parallelize models. See [tweet](https://twitter.com/huggingface/status/1524783489593360385) that contains a [colab example](https://colab.research.google.com/drive/14wnxMvD9zsiBQo2FtTpxn6w2cpXCcb-7#scrollTo=XUlpcU3iQNhu&uniqifier=1).<|||||>On Transformers main branch, you can also load the OPT models on several GPUs directly with `AutoModel.from_pretrained(checkpoint, device_map="auto")` (or pass along you own device map).
The old `parallelize` API will be deprecated soon.<|||||>@sgugger Hi, that's very helpful for me. However I wonder what's the difference between directly using ```AutoModel.from_pretrained(checkpoint, device_map="auto")``` and that using ```load_checkpoint_and_dispatch()``` and ```infer_auto_device_map()```? Does both of them support fine-tuning something like OPT-30B on multiple GPUs with model parallelism? Thanks a lot for your help!<|||||>No this is inference only. For training/fine-tuning we recommend the use of DeepSpeed Zero-3.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,481 | closed | modeling_swin.py img_mask doesn't have expected torch dtype | ### System Info
```shell
transformers: master branch
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
How to repro:
Run Swin Transformer with NVIDIA mixed precision apex amp opt_level=O2.
The problem is at this line: https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/modeling_swin.py#L600, it creates img_mask as float32 type, which will conflict other types (float16) when mixed precision O2 is used.
One way to fix is shown below:
```
- def get_attn_mask(self, height, width):
+ def get_attn_mask(self, height, width, dtype):
if self.shift_size > 0:
# calculate attention mask for SW-MSA
- img_mask = torch.zeros((1, height, width, 1))
+ img_mask = torch.zeros((1, height, width, 1), dtype=dtype)
height_slices = (
slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
@@ -641,7 +641,7 @@ class SwinLayer(nn.Module):
# partition windows
hidden_states_windows = window_partition(shifted_hidden_states, self.window_size)
hidden_states_windows = hidden_states_windows.view(-1, self.window_size * self.window_size, channels)
- attn_mask = self.get_attn_mask(height_pad, width_pad)
+ attn_mask = self.get_attn_mask(height_pad, width_pad, dtype=hidden_states.dtype)
```
### Expected behavior
```shell
the img_mask tensor should be created with the same type of hidden_states.
```
| 05-30-2022 22:15:04 | 05-30-2022 22:15:04 | Hi,
Thanks for your interest in Swin Transformer! Do you mind opening a PR to fix this?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @LiweiPeng do you mind opening a PR for this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @LiweiPeng here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,480 | closed | MarianMTModel no longer has postprocess_next_token_scores function | Hello I just saw that in file https://github.com/huggingface/transformers/blob/v4.19.2/src/transformers/data/test_generation_utils.py you mention calling the postprocess_next_token_scores() method from MarianMTModel, which doesn't seem to exist anymore in it's source file anymore: https://github.com/huggingface/transformers/blob/v4.19.2/src/transformers/models/marian/modeling_marian.py and neither exist as a PreTrainedModel method anymore, and neither in GenerationMixin model (in GenerationMixin it used to be most recently in https://huggingface.co/transformers/v3.3.1/_modules/transformers/generation_utils.html).
I would be really happy if you could tell what is the alternative to this function in the most recent library version! | 05-30-2022 21:20:36 | 05-30-2022 21:20:36 | cc @patil-suraj <|||||>Hi @dsvilarkovic !
This functionality still exists, but the `generate` method is refactored to support this more cleanly. This is now implemented as `LogitsProcessor`. cf #6949<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,479 | closed | TF: BART compatible with XLA generation | # What does this PR do?
Adds `position_ids` to `TFBart`, so that we can do generation with a padded past -- a requirement for XLA generation.
This PR was built on top of #17426 (so it will contain its diff until it gets merged), and is a requirement for #17458.
๐จ Important notes:
1. **Review suggestion**: check the Bart file, then its test file. The other changes are either cosmetic changes (e.g. correcting comments) or the result of `make fix-copies` (several files have copies from Bart).
2. There are several failing tests, but it's intentional -- some models' `prepare_inputs_for_generation` were copied from Bart, but the models do not have the `position_ids` input. If the PR gets a positive review, I will propagate the changes to the affected models. | 05-30-2022 18:59:24 | 05-30-2022 18:59:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh tagging you for TF review, as Matt is off and you are also familiar with generate :)<|||||>> @ydshieh tagging you for TF review, as Matt is off and you are also familiar with generate :)
Actually not very familiar, but would love to get more involved ๐. Thanks for tagging me!<|||||>Hey @ydshieh ๐ answering your questions:
> From the change in prepare_inputs_for_generation, both in the PR for TF-GPT2 and this PR, my understanding of the main change is that: we need to use (decoder) attention mask in order to calculate the correct position_ids for both left/right padding. And this is done using tf.math.cumsum. Do I understand these PR correctly?
Correct ๐
> Why we need decoder_position_ids when past_key_values is passed?
In the original PT code and eager execution TF, the position ids can be obtained by default (i.e. when not explicitly passed) from the past length, as the past length corresponds to the next position id if there is no left padding. In FLAX and XLA TF, the past is zero-padded, so the past length is not the default position id. As such, it is dangerous to leave the default path active -- this path should only be used in generate anyways, and the updated generate passes the position ids. (The GPT2 should also get the same guard, to be safe!)<|||||>>
OK, I might got it. The `past` sent to the model is the padded (on the right) version! (which is required by XLA to have a fixed shape during loop, right?)
Thank you @gante ! <|||||>I didn't think it in a thorough way, but in `prepare_inputs_for_generation`, when we return the actual inputs to a model,
https://github.com/huggingface/transformers/blob/9089b7b95a1b12f19b65872323f13f1f68a6eaa7/src/transformers/models/bart/modeling_tf_bart.py#L1428
it seems to me that we could cut `past` to the actual (non-padded) version. And when the model returns `past`, in `_update_model_kwargs_for_xla_generation`, we just always pad on the right.
(of course, we need to pass the current length info. to `prepare_inputs_for_generation` if we want to do so)
- this will keep `model_kwargs["past"]` compatible with XLA
- the actual `past` to model is the same as before
- especially, it won't get `max_length - 1` as length, so we no longer have overhead due to the increasing length
- it might make the logic a bit easier in `_update_model_kwargs_for_xla_generation`
@gante I don't want to make you too busy. I will let you judge if this is a good idea, and even if it is, if we should change it now, or we can do it later. I know we want to publish our work soon!
<|||||>> it seems to me that we could cut past to the actual (non-padded) version.
I would love to do that, and it would be a great idea to simplify the code, but sadly XLA does not allow dynamic-sized slices (i.e. cutting `past` based on the current length or based on its non-zero values). I've had the same idea too, but then I came across this limitation (documented [here](https://github.com/huggingface/transformers/pull/17378#issuecomment-1133641201))๐ข Sadly, we have to keep working with the full padded array everywhere when XLA is on.<|||||>Think we can move towards finishing this PR here :-)<|||||>@patrickvonplaten it is ready to merge -- would you like to make a final review, or can I merge the PR? :) |
transformers | 17,478 | closed | Training hangs in the end while calling dist.barrier() | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.0-1073-azure-x86_64-with-glibc2.27
- Python version: 3.8.0
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: DDP
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am working on a custom TokenClassificationTask. For a specific model type, I am experiencing a hanging process at the end of the training. After I set `TORCH_DISTRIBUTED_DEBUG=DETAIL` and also added rank numbers to logs (to do this, I overwrote `train` method of `Trainer` class with additional loggings), the training failed and I received the below stack trace.
```
Training completed for rank 6. Do not forget to share your model on huggingface.co/models =)
2022-05-30 15:54:58 INFO nlp_ner_layoutlm.layoutlm.trainers.re_trainer Before barrier for rank 6
2022-05-30 15:54:58 INFO nlp_ner_layoutlm.layoutlm.trainers.re_trainer Entering into barrier for rank 6
2022-05-30 15:54:59 INFO transformers.modeling_utils Model weights saved in ./data/tmpm8wxl12l/checkpoint-590/pytorch_model.bin
2022-05-30 15:55:01 INFO transformers.trainer Deleting older checkpoint [data/tmpm8wxl12l/checkpoint-585] due to args.save_total_limit
2022-05-30 15:55:01 ERROR __main__ Detected mismatch between collectives on ranks. Rank 6 is running inconsistent collective: CollectiveFingerPrint(OpType=BARRIER
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 53, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 160, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 158, in train_model
trainer.train(resume_from_checkpoint=get_last_checkpoint(checkpoint_dir))
File "/app/nlp_ner_layoutlm/layoutlm/trainers/re_trainer.py", line 698, in train
dist.barrier()
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
work = default_pg.barrier(opts=opts)
RuntimeError: Detected mismatch between collectives on ranks. Rank 6 is running inconsistent collective: CollectiveFingerPrint(OpType=BARRIER
2022-05-30 15:55:01 ERROR __main__ Detected mismatch between collectives on ranks. Rank 3 is running inconsistent collective: CollectiveFingerPrint(OpType=BROADCAST, TensorShape=[514], TensorDtypes=Long, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 53, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 160, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 158, in train_model
trainer.train(resume_from_checkpoint=get_last_checkpoint(checkpoint_dir))
File "/app/nlp_ner_layoutlm/layoutlm/trainers/re_trainer.py", line 603, in train
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2011, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2043, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 878, in forward
self._sync_params()
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 1379, in _sync_params
self._distributed_broadcast_coalesced(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 1334, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
RuntimeError: Detected mismatch between collectives on ranks. Rank 3 is running inconsistent collective: CollectiveFingerPrint(OpType=BROADCAST, TensorShape=[514], TensorDtypes=Long, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
2022-05-30 15:55:01 ERROR __main__ Detected mismatch between collectives on ranks. Rank 1 is running inconsistent collective: CollectiveFingerPrint(OpType=BROADCAST, TensorShape=[514], TensorDtypes=Long, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
Traceback (most recent call last):
... (Same error for other processes)
```
According to the trace, while the process with rank 6 is running `dist.barrier()` from `trainer.py line 1536`, the other processes are running a `forward_call`. I think this is the issue and due to this mis-communication the training hangs. When I checked similar issues on the web, I came across [this issue from speechbrain](https://github.com/speechbrain/speechbrain/issues/1166). It is exactly the same issue and they fixed it with a PR. Currently, I can't understand why the processes are ending up in different places in the code and can't figure out how to fix this issue.
### Expected behavior
```shell
As far as I understand, the processes should meet at `dist.barrier()` and training should succeed. Could you help me or point me to a fix that I can work on it?
```
| 05-30-2022 18:46:38 | 05-30-2022 18:46:38 | I do not see a `dist.barrier()` at line 1536 of the `trainer.py` file and I can't really help you without knowing which training script you are launching and how you launched it.<|||||>Maybe the line corresponds to the HF version that I use. The new line number is 1679 according to main branch. You can check out from [here](https://github.com/huggingface/transformers/blob/567d9c061d04513c24872b5709adc7a1384b8129/src/transformers/trainer.py#L1679)<|||||>I think I have found the issue, my custom model has outputs with variable lengths and I wasn't gathering all outputs with `distributed_concat` function as they are not torch tensors. This results in different metrics in each process due to different outputs without gathering. In addition, I am using `EarlyStoppingCallback` during my training. As the metrics are different for each process, one process can stop the training and enter `dist.barrier` while the others go on training. This results in hanging training.
Until now, I haven't implemented the fix yet. After the implementation, I will confirm here and close the issue. Thanks for your time anyway. <|||||>I have just tested my fix and concluded that it is related to what I mentioned above. Thanks for your time @sgugger. I am closing the issue.<|||||>Thanks for confirming the fix worked!<|||||>> I have just tested my fix and concluded that it is related to what I mentioned above. Thanks for your time @sgugger. I am closing the issue.
Hello,
Could you please elaborate the solution? |
transformers | 17,477 | closed | Allow from transformers import TypicalLogitsWarper | # What does this PR do?
Currently, in order to use the `TypicalLogitsWarper` outside of `generate` (with `typical_p` > 0) it can only imported from `transformers.generation_logits_process`. I have simply added it to the relevant `__init__.py` so that it can be imported directly from `transformers`, like the other `LogitsWarpers`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@cimeister
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-30-2022 17:41:08 | 05-30-2022 17:41:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,476 | closed | PyTorch JIT trace on Swin Transformer pretrained checkpoint fails | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.7
- JaxLib version: 0.3.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The goal is to run `torch.jit.trace` on the Swin Transformer model from a pretrained checkpoint so it can be exported to another format (e.g. Core ML, ONNX, etc).
Steps to reproduce the issue are:
```python
from transformers import SwinForImageClassification
model_checkpoint = "microsoft/swin-small-patch4-window7-224"
model = SwinForImageClassification.from_pretrained(model_checkpoint, torchscript=True).eval()
import torch
example_input = torch.rand([1, model.config.num_channels, model.config.image_size, model.config.image_size])
traced_model = torch.jit.trace(model, example_input, strict=False)
```
The trace gives a lot of warnings, which are not important, and then fails with the error:
```python
TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
```
### Expected behavior
There should be no error and the trace completes successfully.
The unit tests for Swin Transformer also perform a JIT trace and they succeed. The model architecture used in the test case is simpler than from the pretrained checkpoint.
In particular, the issue seems to be the `window_size` parameter. This can be demonstrated as follows:
```python
from transformers import SwinConfig
config = SwinConfig.from_pretrained(model_checkpoint, window_size=6)
config.torchscript = True
model = SwinForImageClassification.from_pretrained(model_checkpoint, config=config, ignore_mismatched_sizes=True).eval()
traced_model = torch.jit.trace(model, example_input, strict=False)
```
By forcing the window size to be a different number, the trace now completes without errors. (But of course, this window size is not appropriate for the pretrained checkpoint.)
| 05-30-2022 14:54:41 | 05-30-2022 14:54:41 | Using `check_trace=False` with `torch.jit.trace` avoids this problem. The traced model seems to work OK and gives the same results as the original (tested with several different input images).
It's probably still a good idea to make the trace work without the error (if possible) but as using `check_trace=False` solves my immediate problem, this is a very low priority issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>>
I have found that the traced jit module will got dismatched result compared to nnmodule at difference batch inputs as `window_reverse` is not supporting dynamic batch [modeling_swin.py#L218](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/modeling_swin.py#L218)๏ผ
```python
def window_reverse(windows, window_size, height, width):
"""
Merges windows to produce higher resolution features.
"""
batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))
windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)
windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(batch_size, height, width, -1)
return windows
```
Is that a possible reason? In my practice๏ผI write `window_reverse` as following for supporting dynamic batch๏ผ
```python
def window_reverse(windows, window_size, height, width):
"""
Merges windows to produce higher resolution features.
"""
channels = int(windows.shape[-1])
windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, channels)
windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, height, width, channels)
return windows
``` |
transformers | 17,475 | closed | Add support for pruning whole layers in transformer models. | ### Feature request
Dear HuggingFace team,
In the ViT Model folder (namely modeling_vit.py), there is an option to prune the attention heads of a model. However, at the moment, if I want to prune a whole layer, I get an error due to the dense layer because the input features is of size 0 and hence I get an issue with 1/sqrt(in_features). Would it be possible to do something similar to https://github.com/sai-prasanna/bert-experiments/blob/master/src/model_bert.py where they simply check if the number of heads to prune is equal to the number of heads in that layer and hence take the attentions and dense layer to be None?
### Motivation
The motivation for this is that I want my pruning algorithm to be able to prune whole layers if it thinks that this will give the best performance when compressing a model. I imagine that other researchers would appreciate this feature as well.
### Your contribution
I am able to take inspiration from Sai-prasanna and add it to the ViT model if you would like. Please let me know. | 05-30-2022 14:41:33 | 05-30-2022 14:41:33 | WDYT @NielsRogge?
@chrisdt1998 do you have an example of how big of a change it would result in the code?<|||||>Yes, the change would be about 10 lines of code added to the prune_heads method in the ViTAttention class in modeling_vit.py. This could also be extended to other transformer models in the same corresponding functions, for example in modeling_bert.py, the change would be in the prune_heads method in the BertAttention class.
The change for the ViTAttention class would be:
```
class ViTAttention(nn.Module):
def __init__(self, config: ViTConfig) -> None:
super().__init__()
self.attention = ViTSelfAttention(config)
self.output = ViTSelfOutput(config)
self.pruned_heads = set()
def prune_heads(self, heads: Set[int]) -> None:
if len(heads) == 0:
return
heads, index = find_pruneable_heads_and_indices(
heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads
)
# Prune linear layers
self.attention.query = prune_linear_layer(self.attention.query, index)
self.attention.key = prune_linear_layer(self.attention.key, index)
self.attention.value = prune_linear_layer(self.attention.value, index)
self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
# Update hyper params and store pruned heads
self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads)
self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads
self.pruned_heads = self.pruned_heads.union(heads)
```
To:
```
class ViTAttention(nn.Module):
def __init__(self, config: ViTConfig) -> None:
super().__init__()
self.attention = ViTSelfAttention(config)
self.output = ViTSelfOutput(config)
self.pruned_heads = set()
def prune_heads(self, heads: Set[int]) -> None:
if len(heads) == 0:
return
if self.attention is None:
return
all_pruned = self.pruned_heads.union(heads)
if len(all_pruned) == self.attention.num_attention_heads:
self.attention = None
self.output.dense = None
# Update hyper params and store pruned heads
self.pruned_heads = all_pruned
return
heads, index = find_pruneable_heads_and_indices(
heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads
)
# Prune linear layers
self.attention.query = prune_linear_layer(self.attention.query, index)
self.attention.key = prune_linear_layer(self.attention.key, index)
self.attention.value = prune_linear_layer(self.attention.value, index)
self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
# Update hyper params and store pruned heads
self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads)
self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads
self.pruned_heads = self.pruned_heads.union(heads)
```
Please note that all credits go to Sai Prasanna, from this link https://github.com/sai-prasanna/bert-experiments/blob/master/src/model_bert.py and corresponding paper "When BERT Plays the Lottery, All Tickets Are Winning".
Please let me know if you'd like me to clarify further.
|
transformers | 17,474 | closed | BLOOM | # What does this PR do?
Adding BLOOM models into transformers library - recreating the PR as the old PR has a bad git commit history
Original PR: #17202
- [x] add a generation test with a small model pushed on the hub
- [x] slow tests needs to be modified accordingly
- [x] add final credits to all reviewers in a final commit | 05-30-2022 12:43:42 | 05-30-2022 12:43:42 | The commit's history should be better now..! @patrickvonplaten <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>There are several unresolved discussions in the original PR:
- https://github.com/huggingface/transformers/pull/17202#discussion_r883151955
- https://github.com/huggingface/transformers/pull/17202#discussion_r882980948
In the future please let's not rush to open new PRs but fix the existing ones. as it makes incomplete discussions unresolved and also reading the PR's history for why a certain decision was made will be more difficult to do.<|||||>I propose to continue the discussions on #17202 until all the conversations are resolved. Once they're resolved, I'll ping here for a final review!
There is only one unresolved discussion left for now<|||||>The discussions on the old PR are now resolved,
Would love to have a final review (@sgugger and @patrickvonplaten already approved the previous PR + the test that is failing seems to be unrelated to Bloom)
cc @LysandreJik @stas00 ! Hope we can merge it soon ๐ค<|||||>Should have fixed the nits, will quickly test the slow tests now<|||||>[comment moved elsewhere]<|||||>Hey @justheuristic !
thanks for the nit! Just tested in and the tests passed - I think that we should add an extra test for this sanity check :-)
EDIT: I just completely removed the token_emb since we do not need it at all<|||||>Thank you @patrickvonplaten !
I have benchmarked the performance of fused vs unfused version of the `bias_add` function and I am observing the same performance (with a slightly higher speed with unfused operation). So I will remove that, the bias term will not be passed in a weird manner anymore now
Here is the script to reproduce the benchmarking in case you are interested:
```
import torch, timeit
from transformers import BloomForCausalLM
model = BloomForCausalLM.from_pretrained("bigscience/bigscience-small-testing")
model = model.eval()
model_unfused = BloomForCausalLM.from_pretrained("bigscience/bigscience-small-testing", bias_dropout_fusion=False)
model_unfused = model_unfused.eval()
input_ids = torch.LongTensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
n_cycles, n_repeats = 100, 100
def model_fused():
for _ in range(n_cycles): _ = model(input_ids)
def model_unfused_test():
for _ in range(n_cycles): _ = model_unfused(input_ids)
print(f'model_fused={timeit.Timer("model_fused()", globals=globals()).timeit(number=n_repeats)}')
print(f'model_unfused_test={timeit.Timer("model_unfused_test()", globals=globals()).timeit(number=n_repeats)}')
```<|||||>[...]
> input_ids = torch.LongTensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
Just be careful when you benchmark to use realistic shapes of tensors - when tiny tensors are used it's unlikely you are benchmarking the real situation. As the framework dominates the compute rather than data processing and you're not benchmarking the performance of the gpu ops.
actually, this benchmark is run on cpu, you probably want to switch it to `.cuda`<|||||>I updated the test with the following snippet:
```
import torch, timeit
from transformers import BloomForCausalLM, BloomTokenizerFast
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/tokenizer")
model = BloomForCausalLM.from_pretrained("bigscience/bigscience-small-testing").cuda()
model = model.eval()
model_unfused = BloomForCausalLM.from_pretrained("bigscience/bigscience-small-testing", bias_dropout_fusion=False).cuda()
model_unfused = model_unfused.eval()
batch_size = 16
input_sequences = ["test sequence"*50 for _ in range(batch_size)]
input_ids = tokenizer(input_sequences, return_tensors='pt')["input_ids"]
n_cycles, n_repeats = 100, 100
def model_fused():
for _ in range(n_cycles): _ = model(input_ids.cuda())
torch.cuda.synchronize()
def model_unfused_test():
for _ in range(n_cycles): _ = model_unfused(input_ids.cuda())
torch.cuda.synchronize()
print(f'model_fused={timeit.Timer("model_fused()", globals=globals()).timeit(number=n_repeats)}')
print(f'model_unfused_test={timeit.Timer("model_unfused_test()", globals=globals()).timeit(number=n_repeats)}')
```
And I got:
```
model_fused=321.8526800679999
model_unfused_test=322.2609531270002
```
I guess the difference would be much higher if we increase the sequence length/ batch size. Should we keep the fused operation ? @stas00
PS: I emulated the test on google colab with transformers built on a separate branch: `pip install git+https://github.com/younesbelkada/transformers.git@501ceed44d26988a17cd884b73eeb573f1f2bea8`<|||||>I will put both since the fused operation is slower on the CPU!<|||||>On A100 I get:
```
model_fused=35.0372433779994
model_unfused_test=35.158229669999855
```
I suppose choose the most sensible default and allow the user to override it.
If I increase the batch size to 128, the fused version reports to be slower 5%. Give it a try.
We are heading towards using automatic optimizers down the road (torchdynamo/aotautograd/nvfuser) where most of these things will be fused/optimized on the fly, so I think all the custom fusions will become redundant - we should start seeing these tools becoming more common towards the 2nd half of this year.<|||||>Great thanks for the information!
In that case let me put the unfused op by default <|||||>Hi all! We are working with the smaller BLOOM models in the Bigscience Multilingual Modeling WG and would like to use a `BloomForSequenceClassification` and `BloomForTokenClassification` class in our experiments.
I'm happy to contribute the code for these, but would it be preferred to let this PR be merged first, or for me to open a PR after this is merged?<|||||>Hi!
Very happy to hear that you want to contribute on that!
On my side I would say that it is preferable to wait until the PR gets merged. Hopefully very soon ;) <|||||>Can we merge? I think we have pretty much everything now (Accelerate+DeepSpeed compatibility, etc.) ?
As soon as we merge I need to make the small models public for the slow tests to pass<|||||>@sgugger I can confirm the slow tests + non slow tests pass on a A100 node in Jean Zay. I have emulated the tests by running `export RUN_SLOW=1` before running the testing script <|||||>Good to merge once all tests are green (test hub failure can be ignored)!<|||||>@younesbelkada I think you can press the merge button now if you want ;-)<|||||>We need to send @younesbelkada a plaque with engraving:
`I authored a PR with 120 conversations, 195 commits and 9 reviewers and lived to tell the story!`
Amazing!
I'd have given up long time ago.<|||||>Congratz!<|||||>The lights are green !
Thank you all!!
@stas00 if you count also the other PR there are 149 commits + 240 conversations ;)<|||||>Yes, indeed! Good call, @younesbelkada! |
transformers | 17,473 | closed | Number of channels in the ViTMAE model | ### Feature request
Dear huggingface community,
I am experimenting with the ViTMAE model from the transformers library. The ViTMAEConfig class has the option "num_channels" to specify the number of input (color) channels belonging to an image. If I modify this, say, to 1 (for processing grayscale images), the model throws an error, due to the number "3" being hard-coded into the functions "patchify" and "unpatchify" in the file "modeling_vit_mae.py".
### Motivation
I would like to request to change this such that any number of input channels is possible.
### Your contribution
As noted above, one only has to change the functions "patchify" and "depatchify" to either scan for the input color channels, or provide the input color channels as a class variable such that both functions can use it (instead of the hard-coded value "3"). I checked this and on my system it worked out just fine. | 05-30-2022 11:16:54 | 05-30-2022 11:16:54 | Hi,
Yes the "patchify" and "unpatchify" methods would need to be updated to supported 1 channel.
Are you interested in opening a PR to add support for this? <|||||>Hey,
thanks for the fast reply. I am a complete beginner to github (I only used it to manage my personal projects so far) and I will not be getting into the PR-procedure anytime soon. I did not know how to reach out to someone managing the vitmae project and tried out this issue just to point things out. If noone else is having the same problem or noone has time to care for this then this is just how it is I guess.<|||||>@NielsRogge updating the hard-coded part to the config value will work?
I can open a PR to help on this |
transformers | 17,472 | closed | Setup for Italian translation and add quicktour.mdx translation | I created the folder for files translated into Italian and made the translation of quicktour.mdx and index.mdx files.
The folder (called 'it') is located in transformers/docs/source/.
The translated documents are located in the transformers/docs/source/it folder.
Fixes [#17459](https://github.com/huggingface/transformers/issues/17459)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
@omarespejel @sgugger
| 05-30-2022 11:01:58 | 05-30-2022 11:01:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This looks great! Thanks @mfumanelli for the PR and opening the venue for an Italian ๐ฎ๐น translation!
Since this is the first IT translation could please add `it` to the languages section in the `.github/workflows` ([here](https://github.com/huggingface/transformers/blob/main/.github/workflows/build_documentation.yml) and [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/build_pr_documentation.yml))?<|||||>Thank you @mfumanelli ๐ฎ๐น! LGTM, @sgugger :) |
transformers | 17,471 | closed | Fix CI tests hang forever (sometimes) | # What does this PR do?
Set `--max-worker-restart=0` for `pytest` command in CircieCI workflow file.
Currently, the tests could hang forever if `--dist=loadfile` is specified, and when a worker is crashed, and we finally got `Too long with no output (exceeded 10m0s): context deadline exceeded`. For example, [this job run](https://app.circleci.com/pipelines/github/huggingface/transformers/41092/workflows/9d84d20f-be88-4a20-b17e-07d5437b59f9/jobs/467619).
## Remark:
- I opened [an issue](https://github.com/pytest-dev/pytest-xdist/issues/784#issue-1252433606) in `pytest-xdist` repo.
- The reason that the worker crashed is still unclear yet. It might be related to [memory issue #17470](https://github.com/huggingface/transformers/pull/17470). But a general issue of memory leak in tests also exists. | 05-30-2022 09:38:23 | 05-30-2022 09:38:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>From the experimentation, with the argument `--max-worker-restart=0`:
- all tests passed before the worker crashed --> marked as passed
- the test where the worker crashes --> marked as failed
- all tests not yet run before the worker crashed -> won't be marked at all, neither as passed nor as failed (i.e. won't be sent to other workers)<|||||>> From the experimentation, with the argument `--max-worker-restart=0`:
>
> * all tests passed before the worker crashed --> marked as passed
> * the test where the worker crashes --> marked as failed
> * all tests not yet run before the worker crashed -> won't be marked at all, neither as passed nor as failed (i.e. won't be sent to other workers)
I understand we might want to the tests to be run on other workers, but currently without `--max-worker-restart=0`, the CI hangs forever (if a worker crashes), which is not good neither.<|||||>~~(BTW, it looks like this issue happens in docker, say `circleci/python:3.7`, but not on the GCP VM without docker)~~ |
transformers | 17,470 | closed | Fix ViTMAEModelTester | # What does this PR do?
Current `ViTMAE` test has `(TF)ViTMAEModelTester` not setting `decoder` configs, like `decoder_hidden_size`, `decoder_intermediate_size` etc.
For the model class `(TF)ViTMAEForPretraining`, the default values for these attributes in `ViTMAEConfig` are used, and cause some tests being slow and consuming quite a lot of memory.
This PR fixes it.
(This might be `one` of the reasons where sometimes CI tests get `node down: Not properly terminated` -> `replacing crashed worker gw7` -> hang forever -> timeout) | 05-30-2022 08:27:32 | 05-30-2022 08:27:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,469 | closed | Add swin transformer v2 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR will fix #17268
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-30-2022 08:25:35 | 05-30-2022 08:25:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for constantly guiding me for this model addition and answering all my queries ๐. Looking forward to contribute more in future.
> Great work! Main comments are:
>
> * you can also add `Copied from` statements to methods, so it can be leveraged to copy forward methods, in case the init method differs.
> * would be great to add the model to the doc tests
<|||||>> Thank you for your contribution! Everything looks great and all major issues have already been addressed. I made a few comments on code comments and minor issues, they shouldn't take long and we can merge this PR afterwards :)
Thanks, I have added the changes as you suggested.<|||||>Thanks again for your contribution! |
transformers | 17,468 | closed | Any attempt to export any model to onnx returns an ATOL value of nan. | `ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: nan` | 05-30-2022 01:05:36 | 05-30-2022 01:05:36 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,467 | closed | Error building docs locally: No matching distribution found for ray[tune]; extra == "docs" | ### System Info
```shell
- `transformers` version: 4.17.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@sgugger @stevhl
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
On my system, when I try to install the packages necessary to build documentation using `pip install -e "[.docs]"`, I get the following error:
```
ERROR: Could not find a version that satisfies the requirement ray[tune]; extra == "docs" (from transformers[docs]) (from versions: none)
ERROR: No matching distribution found for ray[tune]; extra == "docs"
```
I'm trying to build the documentation locally as I'm a contributor to the [Dash user contributed datasets](https://github.com/Kapeli/Dash-User-Contributions#contribute-a-new-docset).
I'm developing on an Apple silicon Mac, but the conda environment is setup as a `x86_64` environment.
### Expected behavior
Documentation can be built and viewed locally.
| 05-29-2022 23:41:48 | 05-29-2022 23:41:48 | Not sure why you're opening an issue here as it seems you'd like `ray` to have a package installable on your end :-).
I can't guarantee the doc will build without all integrations, but it might work to build the doc without Ray installed.<|||||>I'm going to close the issue now, since I've found that using `python=3.7` fixed the issue with install `ray`. However, I seem to be running into more problems now. Firstly, it looks like the module `black` is not listed as a dependency, I got an error that `doc_builder` requires `black` to work. However, even after installing `black`, I'm running into a new error:
```
fish: 'doc-builder build transformersโฆ' terminated by signal SIGILL (Illegal instruction)
```
I'm not sure what's causing this error -- I might open a new issue if I find that its something related to this repo rather than my setup.
EDIT: The issue seems to be more general. After installing the dependencies and `doc-builder`, even trying to import the `transformers` package exits with the SIGILL error. |
transformers | 17,466 | closed | Adding LeViT Model by Facebook | Adds LeViT Model (a Vision Transformer) to HF Library. All the checkpoints are on my HF account hub.
Please review and suggest the required changes.
@NielsRogge | 05-29-2022 19:30:55 | 05-29-2022 19:30:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for suggestion. I have done the requested changes. I need help here:
- I cannot figure rendering of warning problem. If I give `enter` or `space`. There is `style error` of `white space`.
- The image in `convnext.mdx` is added in HF Dataset repo in hub. I can give the image to add there.
The weights are also uploaded on hub correctly after change in `self.bn` -> `self.batch_norm`.
Please review and suggest the changes @NielsRogge <|||||>CI error seems unrelated, thanks for your work, merging! |
transformers | 17,465 | closed | Add MVP model | ### Model description
Hi, I want to add my model MVP to hugging face. My model is very similar with BART. The difference is that it has pre-trained soft prompt in the format of Prefix-tuning.
We have pretrained a series of prompts combining with backbone transformer. And users can choose which prompt to load. So I wonder where I can change the behaviour of loading which prompt?
To put it simply, our model is composed of a big nn.Transformer and a small nn.Linear. We have pre-trained a Transformer and _**n**_ Linear. Users can choose which Linear to load.
One solution can be that, we upload _**n**_ chechpoints, including the common Transformer and respective Linear. And users just write `model.from.pretrained('model1/2/3')` to load which prompt. However, the it is a bit waste to upload and download the common Transformer **_n_** times.
So I wonder how I can solve it with a simple method. Such as our uploaded model is consist of Transformer and **_n_** prompts, and users can choose which prompt to use through config.
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | 05-29-2022 09:07:20 | 05-29-2022 09:07:20 | Hello! I think this is a situation where the code-on-the-hub approach is likely to be the best way to share it! See this guide: [Sharing custom models](https://huggingface.co/docs/transformers/custom_models).
This will give you the most amount of freedom with regards to your model.
cc @sgugger <|||||>Thank you! I will take a look.<|||||>Great! Please let us know if you run into any issues, or if anything is unclear from the guide. Thanks!<|||||>Hi, sorry I still don't know how to handle a situation like mine.
My model is
```
(
BertModel()
nn.Linear()
)
```
And my checkpoint is composed of
```
(
BertModel()
'a': nn.Linear()
'b': nn.Linear()
'c': nn.Linear()
)
```
And my expected effect is that user can determine which Linear to use through the config, such as `XXX.from_pretrained('my_model', linear='a' or 'b' or 'c')` to load according linear.
However, the model initilization is before the model loading. So I don't know how to solve it.<|||||>A quick solution would be to have three architectures, one with `a`, one with `b`, and one with `c` linear layers and load the one you want accordingly.<|||||>Do you mean I upload three models, named my-model-a, my-model-b and my-model-c, then load the one I want?
<|||||>You can upload one single checkpoint, but three different architectures that each use the `a`, `b`, and `c`. Similarly to models trained on sequence classification (or any other task): they can be loaded in base models, in question answering models, etc. It will only load the layers it needs, and ignore the rest.<|||||>Thanks for your response! I understand what you mean.
Actually, my situation is a little different from the task head. The prompt is the part of the encoder and decoder, and I have 8 different prompts rather than 3. So I wonder if there is any other way, considering they have the same architecture. |
transformers | 17,464 | closed | ValueError: AlbertForMaskedLM does not support gradient checkpointing. | ValueError: AlbertForMaskedLM does not support gradient checkpointing.
`model.gradient_checkpointing_enable()` doesn't help. | 05-29-2022 08:24:54 | 05-29-2022 08:24:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,463 | closed | transformers model doesn't output zeros for padded subtokens | ### System Info
I have noticed that mean pooling over last_hidden_state returns different results depending on the batch size. When I started to go deeper to understand the reason of the problem I found that padded by tokenizer subtokens after passing it through the model are not zeros and actually this leads to the problem behavior. Then batch size = 1 there is no padded subtokens, so results are 100% correct, but batch size = 1 leads to much longer computations (obviously).
Is it correct behavior and is it possible to avoid this?
The snippet below uses `bert-base-uncased` model, but this behavior also related to other models.
### Who can help?
@LysandreJik @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python3
import torch
from transformers import AutoModel, AutoTokenizer
model_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
sentences = [
"Hello, World!",
"How are you doing today?",
]
tokenizer_kwargs = {
'return_tensors': 'pt',
'padding': True,
'truncation': True,
'max_length': 512,
}
tokens_1 = tokenizer(sentences, **tokenizer_kwargs)
tokens_2 = []
for sentence in sentences:
tokens = tokenizer(sentence, **tokenizer_kwargs)
tokens_2.append(tokens)
with torch.no_grad():
tensor_1 = model(**tokens_1)['last_hidden_state']
tensor_2 = []
for tokens in tokens_2:
with torch.no_grad():
tensor = model(**tokens)['last_hidden_state']
tensor_2.append(tensor)
print(torch.allclose(tensor_1[1], tensor_2[1].squeeze(0), atol=1e-05)) # True
print(torch.allclose(tensor_1[0][:6], tensor_2[0].squeeze(0), atol=1e-05)) # True
print(tensor_1[0][6:]) # padded tokens
```
### Expected behavior
I expect that embeddings corresponding to padded subtokens would be equal to zero. | 05-28-2022 08:47:38 | 05-28-2022 08:47:38 | > This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
>
> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
ping<|||||>Hi @dayyass ,
Indeed, I don't think there is a condition on embeddings that would be at an masked position. But as you have shown in your two tests:
```python
print(torch.allclose(tensor_1[1], tensor_2[1].squeeze(0), atol=1e-05)) # True
print(torch.allclose(tensor_1[0][:6], tensor_2[0].squeeze(0), atol=1e-05)) # True
```
The most important thing is that the embeddings of the unmasked positions are identical, no matter how many pad tokens are added at the end of the sequence - which is verified on your tests.
Did I miss something in your question? Why do you expect that embeddings corresponding to padded subtokens would be equal to zero?<|||||>Hi, @SaulLu!
Thanks for answering!
No, I put all information in issue.
I assume that embeddings corresponding to padded subtokens should be equal to zero, because it is easier to find such embeddings and exclude those from averaging. But if we doesn't exclude those, it is better (most likely) to average embeddings when it is equal to zero.
<|||||>Still actual issue.<|||||>> I assume that embeddings corresponding to padded subtokens should be equal to zero, because it is easier to find such embeddings and exclude those from averaging. But if we doesn't exclude those, it is better (most likely) to average embeddings when it is equal to zero.
Thanks for the details, in your case, I think the attention mask is exactly what you're looking for. If you ever want your output to have the value set to 0 for pad tokens, you can multiply the output by the attention mask. In your example:
```python
tensor_1 * tokens_1.attention_mask.unsqueeze(2)
```
<|||||>> > I assume that embeddings corresponding to padded subtokens should be equal to zero, because it is easier to find such embeddings and exclude those from averaging. But if we doesn't exclude those, it is better (most likely) to average embeddings when it is equal to zero.
>
> Thanks for the details, in your case, I think the attention mask is exactly what you're looking for. If you ever want your output to have the value set to 0 for pad tokens, you can multiply the output by the attention mask. In your example:
>
> ```python
> tensor_1 * tokens_1.attention_mask.unsqueeze(2)
> ```
That's an excellent and elegant solution, thank you for that!
I guess the issue might be closed.<|||||>Glad it helped you! :hugs: |
transformers | 17,462 | closed | Add EfficientNet model for PyTorch | # What does this PR do?
This PR adds the EfficientNet model family to HuggingFace transformers proposed in #15759 (PyTorch only for this PR).
The implementation is based on that of Ross Wightman's pytorch-image-models [implementation](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/efficientnet.py).
## Who can review?
@NielsRogge | 05-28-2022 05:18:12 | 05-28-2022 05:18:12 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Working on this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,461 | closed | Spanish docs - Links don't work | ## TLDR
Links to docs that don't exist (eg ./main_classes/pipelines) don't work. Issue first mentioned in #17349.
## Description
Currently, the links lead to an error if the doc is not yet translated. For example, this fragment in [`autoclass_tutorial`](https://huggingface.co/docs/transformers/main/es/autoclass_tutorial) leads a to an error for not having [model_doc/auto.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/auto.mdx)
translated yet.
> Finalmente, las clases AutoModelFor te permiten cargar un modelo preentrenado para una tarea dada (revisa [aquรญ](https://huggingface.co/docs/transformers/main/es/model_doc/auto) para conocer la lista completa de tareas disponibles).
## Possible solution
Linking to the English docs while the Spanish versions become available.
## Possible reviewers
@sgugger @mishig25 | 05-28-2022 04:02:24 | 05-28-2022 04:02:24 | @omarespejel thanks a lot for notifying. I'm working on a fix |
transformers | 17,460 | closed | fx.symbolic_trace not working for Roberta | ### System Info
```shell
transformers version: 4.18.0
PyTorch version: 1.10.2
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA T500
Nvidia driver version: 472.91
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] torch==1.10.2
[pip3] torchvision==0.11.3
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] pytorch 1.10.2 cpu_py39hfa7516b_0
[conda] torchvision 0.11.3 py39_cu113 pytorch
```
### Who can help?
@LysandreJi
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import RobertaForMaskedLM
from transformers import RobertaConfig
from transformers.utils import fx
import inspect
config = RobertaConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=12,
type_vocab_size=1,
)
model = RobertaForMaskedLM(config)
input_names = ["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask",'token_type_ids']
sig = inspect.signature(model.forward)
concrete_args = {p.name: None for p in sig.parameters.values() if p.name not in input_names}
gm = fx.symbolic_trace(model,concrete_args)
```
### Expected behavior
```shell
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 gm = fx.symbolic_trace(model,concrete_args)
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:594, in symbolic_trace(model, input_names)
592 # Tracing.
593 tracer = HFTracer()
--> 594 traced_graph = tracer.trace(model, concrete_args=concrete_args)
595 traced = torch.fx.GraphModule(model, traced_graph)
597 return traced
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:470, in HFTracer.trace(self, root, concrete_args, method_names)
467 sig = inspect.signature(root.forward)
468 input_names = sig.parameters.keys() - concrete_args.keys()
--> 470 self.record(root, input_names, method_names=method_names)
472 # TODO: adapt the way leaf function are wrapped with the "autowrap function" feature from Tracer.
473 autowrap_functions = [patched for (_, _, patched) in self._leaf_functions_register.values()]
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:426, in HFTracer.record(self, model, input_names, method_names)
423 cache_names, original_methods = self._monkey_patch_tensor_methods_for_model_recording(model, method_names)
424 self.original_methods = original_methods
--> 426 model(**inputs)
428 _reset_tensor_methods(original_methods)
430 self.recorded_methods = {
431 method_name: cache_name for method_name, cache_name in cache_names.items() if hasattr(model, cache_name)
432 }
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:1098, in RobertaForMaskedLM.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, output_attentions, output_hidden_states, return_dict)
1088 r"""
1089 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1090 Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
(...)
1094 Used to hide legacy arguments that have been deprecated.
1095 """
1096 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1098 outputs = self.roberta(
1099 input_ids,
1100 attention_mask=attention_mask,
1101 token_type_ids=token_type_ids,
1102 position_ids=position_ids,
1103 head_mask=head_mask,
1104 inputs_embeds=inputs_embeds,
1105 encoder_hidden_states=encoder_hidden_states,
1106 encoder_attention_mask=encoder_attention_mask,
1107 output_attentions=output_attentions,
1108 output_hidden_states=output_hidden_states,
1109 return_dict=return_dict,
1110 )
1111 sequence_output = outputs[0]
1112 prediction_scores = self.lm_head(sequence_output)
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:851, in RobertaModel.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
842 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
844 embedding_output = self.embeddings(
845 input_ids=input_ids,
846 position_ids=position_ids,
(...)
849 past_key_values_length=past_key_values_length,
850 )
--> 851 encoder_outputs = self.encoder(
852 embedding_output,
853 attention_mask=extended_attention_mask,
854 head_mask=head_mask,
855 encoder_hidden_states=encoder_hidden_states,
856 encoder_attention_mask=encoder_extended_attention_mask,
857 past_key_values=past_key_values,
858 use_cache=use_cache,
859 output_attentions=output_attentions,
860 output_hidden_states=output_hidden_states,
861 return_dict=return_dict,
862 )
863 sequence_output = encoder_outputs[0]
864 pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:492, in RobertaEncoder.forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
479 def forward(
480 self,
481 hidden_states: torch.Tensor,
(...)
490 return_dict: Optional[bool] = True,
491 ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
--> 492 all_hidden_states = () if output_hidden_states else None
493 all_self_attentions = () if output_attentions else None
494 all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:326, in HFTracer._wrap_method_for_model_recording.<locals>.wrapped(*args, **kwargs)
324 setattr(model, cache_name, [])
325 cache = getattr(model, cache_name)
--> 326 res = method(*args, **kwargs)
327 cache.append(res)
328 return res
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:326, in HFTracer._wrap_method_for_model_recording.<locals>.wrapped(*args, **kwargs)
324 setattr(model, cache_name, [])
325 cache = getattr(model, cache_name)
--> 326 res = method(*args, **kwargs)
327 cache.append(res)
328 return res
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
```
| 05-27-2022 21:43:25 | 05-27-2022 21:43:25 | After specifying:
```python
input_names = ['input_ids',
'attention_mask',
'token_type_ids',
'position_ids',
'encoder_hidden_states',
'encoder_attention_mask',
'labels']
```
I saw another bug in:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [2], in <cell line: 22>()
14 model = RobertaForMaskedLM(config)
15 input_names = ['input_ids',
16 'attention_mask',
17 'token_type_ids',
(...)
20 'encoder_attention_mask',
21 'labels']
---> 22 gm = symbolic_trace(model,input_names)
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:595, in symbolic_trace(model, input_names)
593 tracer = HFTracer()
594 traced_graph = tracer.trace(model, concrete_args=concrete_args)
--> 595 traced = torch.fx.GraphModule(model, traced_graph)
597 return traced
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:312, in GraphModule.__init__(self, root, graph, class_name)
309 else:
310 raise RuntimeError('Unsupported type ' + str(root) + ' passed for root!')
--> 312 self.graph = graph
314 # Store the Tracer class responsible for creating a Graph separately as part of the
315 # GraphModule state, except when the Tracer is defined in a local namespace.
316 # Locally defined Tracers are not pickleable. This is needed because torch.package will
317 # serialize a GraphModule without retaining the Graph, and needs to use the correct Tracer
318 # to re-create the Graph during deserialization.
319 self._tracer_cls = None
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1225, in Module.__setattr__(self, name, value)
1223 buffers[name] = value
1224 else:
-> 1225 object.__setattr__(self, name, value)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:346, in GraphModule.graph(self, g)
344 self._graph = g
345 g.owning_module = self
--> 346 self.recompile()
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:562, in GraphModule.recompile(self)
560 self._in_spec = self._graph._pytree_info.in_spec
561 self._out_spec = self._graph._pytree_info.out_spec
--> 562 python_code = self._graph.python_code(root_module='self')
563 self._code = python_code.src
565 cls = type(self)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:834, in Graph.python_code(self, root_module)
831 node._repr_fn = orig_repr_fns[node]
833 with override_node_repr(self):
--> 834 return self._python_code(root_module, namespace)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:988, in Graph._python_code(self, root_module, namespace)
983 raise NotImplementedError(f'node: {node.op} {node.target}')
985 for node in self.nodes:
986 # NOTE: emit_node does not emit a string with newline. It depends
987 # on delete_unused_values to append one
--> 988 emit_node(node)
989 delete_unused_values(node)
991 if len(body) == 0:
992 # If the Graph has no non-placeholder nodes, no lines for the body
993 # have been emitted. To continue to have valid Python code, emit a
994 # single pass statement
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:928, in Graph._python_code.<locals>.emit_node(node)
927 def emit_node(node : Node):
--> 928 maybe_type_annotation = '' if node.type is None else f' : {type_repr(node.type)}'
929 if node.op == 'placeholder':
930 assert isinstance(node.target, str)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)
882 origin_typename = add_global(_type_repr(origin_type), origin_type)
884 # Assign global names for each of the inner type variables.
--> 885 args = [type_repr(arg) for arg in o.__args__]
887 return f'{origin_typename}[{",".join(args)}]'
889 # Common case: this is a regular module name like 'foo.bar.baz'
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in <listcomp>(.0)
882 origin_typename = add_global(_type_repr(origin_type), origin_type)
884 # Assign global names for each of the inner type variables.
--> 885 args = [type_repr(arg) for arg in o.__args__]
887 return f'{origin_typename}[{",".join(args)}]'
889 # Common case: this is a regular module name like 'foo.bar.baz'
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)
882 origin_typename = add_global(_type_repr(origin_type), origin_type)
884 # Assign global names for each of the inner type variables.
--> 885 args = [type_repr(arg) for arg in o.__args__]
887 return f'{origin_typename}[{",".join(args)}]'
889 # Common case: this is a regular module name like 'foo.bar.baz'
File ~/anaconda3/lib/python3.9/typing.py:711, in _BaseGenericAlias.__getattr__(self, attr)
709 if '__origin__' in self.__dict__ and not _is_dunder(attr):
710 return getattr(self.__origin__, attr)
--> 711 raise AttributeError(attr)
AttributeError: __args__
```
@michaelbenayoun Do you mind take a look? Thank you very much!<|||||>Hi @WeiHao97 ,
To trace the model, you need to specify the input names and not the concrete args, the input_names represent the inputs your traced model will take.
If you want to trace Roberta, you most likely need to only provide "input_ids" and "attention_mask":
```python
model = RobertaForMaskedLM(config)
input_names = ["input_ids", "attention_mask"]
gm = fx.symbolic_trace(model, input_names)
```<|||||>> Hi @WeiHao97 ,
>
> To trace the model, you need to specify the input names and not the concrete args, the input_names represent the inputs your traced model will take. If you want to trace Roberta, you most likely need to only provide "input_ids" and "attention_mask":
>
> ```python
> model = RobertaForMaskedLM(config)
> input_names = ["input_ids", "attention_mask"]
> gm = fx.symbolic_trace(model, input_names)
> ```
After specifying input_names = ['input_ids',
'attention_mask',
'token_type_ids',
'position_ids',
'encoder_hidden_states',
'encoder_attention_mask',
'labels'] or input_names = ["input_ids", "attention_mask"], I got the second error.<|||||>Could you share the error message please?
Also what are you trying to do? It seems to be an odd set of inputs you want to use.
That's weird that you get an error with this:
```python
model = RobertaForMaskedLM(config)
input_names = ["input_ids", "attention_mask"]
gm = fx.symbolic_trace(model, input_names)
```
I am able to get this to work on my end.
<|||||>> Could you share the error message please? Also what are you trying to do? It seems to be an odd set of inputs you want to use.
>
> That's weird that you get an error with this:
>
> ```python
> model = RobertaForMaskedLM(config)
> input_names = ["input_ids", "attention_mask"]
> gm = fx.symbolic_trace(model, input_names)
> ```
>
> I am able to get this to work on my end.
I ran with transformers version '4.18.0' :
```python
from transformers import RobertaForMaskedLM
from transformers import RobertaConfig
from transformers.utils import fx
config = RobertaConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=12,
type_vocab_size=1,
)
model = RobertaForMaskedLM(config)
input_names = ["input_ids", "attention_mask"]
gm = fx.symbolic_trace(model, input_names)
```
and got:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [1], in <cell line: 15>()
13 model = RobertaForMaskedLM(config)
14 input_names = ["input_ids", "attention_mask"]
---> 15 gm = fx.symbolic_trace(model, input_names)
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:595, in symbolic_trace(model, input_names)
593 tracer = HFTracer()
594 traced_graph = tracer.trace(model, concrete_args=concrete_args)
--> 595 traced = torch.fx.GraphModule(model, traced_graph)
597 return traced
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:312, in GraphModule.__init__(self, root, graph, class_name)
309 else:
310 raise RuntimeError('Unsupported type ' + str(root) + ' passed for root!')
--> 312 self.graph = graph
314 # Store the Tracer class responsible for creating a Graph separately as part of the
315 # GraphModule state, except when the Tracer is defined in a local namespace.
316 # Locally defined Tracers are not pickleable. This is needed because torch.package will
317 # serialize a GraphModule without retaining the Graph, and needs to use the correct Tracer
318 # to re-create the Graph during deserialization.
319 self._tracer_cls = None
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1225, in Module.__setattr__(self, name, value)
1223 buffers[name] = value
1224 else:
-> 1225 object.__setattr__(self, name, value)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:346, in GraphModule.graph(self, g)
344 self._graph = g
345 g.owning_module = self
--> 346 self.recompile()
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:562, in GraphModule.recompile(self)
560 self._in_spec = self._graph._pytree_info.in_spec
561 self._out_spec = self._graph._pytree_info.out_spec
--> 562 python_code = self._graph.python_code(root_module='self')
563 self._code = python_code.src
565 cls = type(self)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:834, in Graph.python_code(self, root_module)
831 node._repr_fn = orig_repr_fns[node]
833 with override_node_repr(self):
--> 834 return self._python_code(root_module, namespace)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:988, in Graph._python_code(self, root_module, namespace)
983 raise NotImplementedError(f'node: {node.op} {node.target}')
985 for node in self.nodes:
986 # NOTE: emit_node does not emit a string with newline. It depends
987 # on delete_unused_values to append one
--> 988 emit_node(node)
989 delete_unused_values(node)
991 if len(body) == 0:
992 # If the Graph has no non-placeholder nodes, no lines for the body
993 # have been emitted. To continue to have valid Python code, emit a
994 # single pass statement
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:928, in Graph._python_code.<locals>.emit_node(node)
927 def emit_node(node : Node):
--> 928 maybe_type_annotation = '' if node.type is None else f' : {type_repr(node.type)}'
929 if node.op == 'placeholder':
930 assert isinstance(node.target, str)
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)
882 origin_typename = add_global(_type_repr(origin_type), origin_type)
884 # Assign global names for each of the inner type variables.
--> 885 args = [type_repr(arg) for arg in o.__args__]
887 return f'{origin_typename}[{",".join(args)}]'
889 # Common case: this is a regular module name like 'foo.bar.baz'
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in <listcomp>(.0)
882 origin_typename = add_global(_type_repr(origin_type), origin_type)
884 # Assign global names for each of the inner type variables.
--> 885 args = [type_repr(arg) for arg in o.__args__]
887 return f'{origin_typename}[{",".join(args)}]'
889 # Common case: this is a regular module name like 'foo.bar.baz'
File ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)
882 origin_typename = add_global(_type_repr(origin_type), origin_type)
884 # Assign global names for each of the inner type variables.
--> 885 args = [type_repr(arg) for arg in o.__args__]
887 return f'{origin_typename}[{",".join(args)}]'
889 # Common case: this is a regular module name like 'foo.bar.baz'
File ~/anaconda3/lib/python3.9/typing.py:711, in _BaseGenericAlias.__getattr__(self, attr)
709 if '__origin__' in self.__dict__ and not _is_dunder(attr):
710 return getattr(self.__origin__, attr)
--> 711 raise AttributeError(attr)
AttributeError: __args__
```<|||||>Could you try with transformers sources?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,459 | open | Tranformers documentation translation to Italian | Hi!
Let's bring the documentation to all the Italian-speaking community :)
Who would want to translate? Please follow the ๐ค [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know here if you'd like to translate any and we'll add your name to the list.
Some notes:
- Please translate using an informal tone (imagine you are talking with a friend about transformers ๐ค). For example, use Tu instead of Lei.
- Please translate in a gender-neutral way.
- Add your translations to the folder called `it` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
- Register your translation in [it/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/it/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
- Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue.
- ๐ If you'd like others to help you with the translation, you can also post in the ๐ค [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) @mfumanelli
- [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx). @mfumanelli
- [x] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). @mfumanelli
## Tutorial section
- [x] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) @nickprock
- [x] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) @mfumanelli
- [x] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) @nickprock
- [x] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) @nickprock
- [x] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) @mfumanelli
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) WIP @mfumanelli
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) WIP @nickprock
## How-to guides
- [x] [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx "fast_tokenizers.mdx") @andreafailla
- [ ] [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx "create_a_model.mdx") WIP @F02934
- [x] [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx "custom_models.mdx") @Xpiri
- [x] [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx "run_scripts.mdx") @lorenzobalzani
- [x] [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx "sagemaker.mdx") @andreafailla
- [ ] [converting_tensorflow_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/converting_tensorflow_models.mdx "converting_tensorflow_models.mdx") WIP @Xpiri
- [ ] [serialization.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/serialization.mdx "serialization.mdx") WIP @F02934
- [ ] [performance.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/performance.mdx "performance.mdx") WIP @machicomio
- [ ] [perf_train_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_one.mdx)
- [ ] [perf_train_gpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_many.mdx)
- [ ] [perf_train_cpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu.mdx)
- [ ] [perf_train_cpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu_many.mdx)
- [ ] [perf_train_tpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_tpu.mdx)
- [ ] [perf_train_special](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_special.mdx)
- [ ] [perf_infer_cpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_cpu.mdx)
- [ ] [perf_infer_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_one.mdx)
- [ ] [perf_infer_gpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_many.mdx)
- [ ] [perf_infer_special](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_special.mdx)
- [ ] [perf_hardware](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_hardware.mdx)
- [ ] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx)
- [x] [parallelism.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/parallelism.mdx "parallelism.mdx") WIP @Xpiri
- [ ] [benchmarks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/benchmarks.mdx "benchmarks.mdx") WIP @mfumanelli
- [ ] [migration.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/migration.mdx "migration.mdx") WIP @Baelish03
- [ ] [troubleshooting.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/troubleshooting.mdx "troubleshooting.mdx") WIP @F02934
- [ ] [debugging.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/debugging.mdx "debugging.mdx") WIP @nickprock
- [ ] notebooks
- [ ] [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx "community.mdx") WIP @lorenzobalzani
- [ ] [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx "docs/source/en/add_new_model.mdx") WIP @Steboss89
- [ ] [add_new_pipeline.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/add_new_pipeline.mdx "add_new_pipeline.mdx")
- [ ] [testing.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/testing.mdx "testing.mdx")
- [ ] [pr_checks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/pr_checks.mdx "pr_checks.mdx") | 05-27-2022 18:49:20 | 05-27-2022 18:49:20 | Hi @mfumanelli if it has not yet been taken I can translate pipeline_tutorial.mdx (only the textual part).
<|||||>Hey, @nickprock thank you! That would be great. I added you to the list of contributors in @mfumanelli 's tasks list. The textual part is ok :)<|||||>Hey @mfumanelli in the next week I can translate preprocessing and training.<|||||>> Hey @mfumanelli in the next week I can translate preprocessing and training.
Perfect @nickprock ๐ ๐ค thanks!<|||||>Hey @mfumanelli if you have other documents to assign to me I'm ready<|||||>>
Hi @nickprock! <3 I saw the pull request for preprocessing but not for training, is that still in WIP or is there a PR I missed? <|||||>@mfumanelli I hope to submit the PR for training tomorrow.<|||||>>
Super! Thanks @nickprock, if it's ok for you I will assign you the multilingual doc. I also asked @omarespejel if there are any priority docs to be translated, so that we can add them to the issue or if we can proceed without priority with all the other docs ๐<|||||>Hi @mfumanelli, if you have any file to translate I'd be happy to help :)<|||||>Hi @mfumanelli I would be happy to help in translating the documentation if it was possible!<|||||>@mfumanelli thanks! ๐ค I just added the next priority docs to the main comment on this issue.<|||||>> Hi @mfumanelli, if you have any file to translate I'd be happy to help :)
Hi @andreafailla! If it is ok with you, you can start translating the file: [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx). If you have any doubts write to me :) Thanks for your help! ๐๐<|||||>> Hi @mfumanelli I would be happy to help in translating the documentation if it was possible!
Hi @F02934 thanks!! ๐ is it OK for you to start by translating the file [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx)?<|||||>Hi @mfumanelli, if there's something for me to work on I would be really glad to help! :)<|||||>> Hi @mfumanelli, if there's something for me to work on I would be really glad to help! :)
Hi @Xpiri! perfect, it would be perfect if you could start translating the file: [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx) ๐ thank you very much for your contribution ๐<|||||>@mfumanelli Perfect I can start translating right now !<|||||>Hi @mfumanelli, I should be able to start working on some translations in the next few days.<|||||>> Hi @mfumanelli, I should be able to start working on some translations in the next few days.
Hi @lorenzobalzani thanks! If it's ok for you, you can start translating the run_scripts.mdx file ๐ let me know!
<|||||>Hi @mfumanelli I hope you are well :) You should see my PR by now.
If that's ok, i can take on the [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx) and [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx) and PR them tomorrow :)
Thanks for the great work!<|||||>> Hi @mfumanelli I hope you are well :) You should see my PR by now.
>
> If that's ok, i can take on the [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx) and [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx) and PR them tomorrow :) Thanks for the great work!
Hi @andreafailla ! Thank you very much ๐ ! can you start with the sagemaker one? In the previous comment I asked Lorenzo if the one from run_scripts would be ok for him, I will now add his name to the list on the issue! Thanks ๐๐<|||||>@mfumanelli yes, that's ok for me! <|||||>Hi @mfumanelli ! You should see my pull request for [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx), please let me know if everything is fine and if I can help some more! ๐ <|||||>Hi @Xpiri! Perfect, if you want you can translate the doc: converting_tensorflow_models.mdx ๐ ๐ I will now have a look at your PR, thanks! ๐ฅ <|||||>Hello @mfumanelli I made a my pull up request. It's my first time so you can give me some pointers if something not good. If everything is fine I'd like to help more so I can improve myself!<|||||>@F02934 ๐ thank you for your contribution, if it is ok with you, you can translate the file: serialization.mdx<|||||>@mfumanelli sure I will!<|||||>@mfumanelli I submitted my PR, let me know if there are some translations unclear. <|||||>Hi @mfumanelli! <|||||>๐ Hi @machicomio ๐ If you want you can start by translating the performance.mdx file, if you have any doubts do not hesitate to write to me ๐<|||||>Hi @mfumanelli, is migration.mdx already taken?<|||||>@mfumanelli I'd like to take troubleshooting.mdx<|||||>@Baelish03 and @F02934 I added your names to the issue for files migration.mdx and troubleshooting.mdx ๐<|||||>hi @mfumanelli I'm waiting for the reviews about my PRs, in the next week I can work on debugging.mdx<|||||>Perfect @nickprock ๐ I added your name on the issue for the debugging.mdx translation<|||||>Hi @mfumanelli, is there anything else that I could contribute on?<|||||>Hi @lorenzobalzani! If it's ok with you, I'll add you on the issue for the community.mdx file ๐ <|||||>> Hi @lorenzobalzani! If it's ok with you, I'll add you on the issue for the community.mdx file ๐
Sounds good to me!<|||||>Hi @mfumanelli
As promised I have now a bit of time to help you out in this :)
Let me know if there's an open issue I can contribute to :)
Thanks
Stefano<|||||>Hi @Steboss89 ! Thanks ๐ if you want you can start translating the add_new_model file! <|||||>@mfumanelli fantastic :) I'll start with it :) <|||||>Hi everyone, if you have submitted a PR some time ago it is likely that it was closed automatically because it was inactive. Comment to reopen it. I did and they were immediately merged.<|||||>> Hi everyone, if you have submitted a PR some time ago it is likely that it was closed automatically because it was inactive. Comment to reopen it. I did and they were immediately merged.
I've commented on my closed [PR](https://github.com/huggingface/transformers/pull/17642) but nothing has happened!<|||||>@lorenzobalzani You should make sure to ping maintainers to get your PR merged :-)<|||||>Hi everyone, so sorry some of your PRs slipped through the cracks for a month or more, we hadn't implemented a clear process internally to make sure they were quickly reviewed and merged. In the future or if you still have one standing, make sure to tag me (@sgugger) and I promise we'll react faster!<|||||>Hi everyone, while I was waiting for my [PR #17631](https://github.com/huggingface/transformers/pull/17631) to be accepted I have also finished the translation of [converting_tensorflow_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/converting_tensorflow_models.mdx).
I was waiting for my PR to be accepted before submitting this new one, mostly because I did not want to create issues with the toctree.yml file. Should I still open a PR for this new documentation page? <|||||>Hi @Xpiri I created a branch in my fork and opend a PR for each document. I updated the _toctree.yml for each branch, when the PR is merged the master solve the conflicts.
This is my "strategy".
<|||||>Kudos to @mfumanelli and @nickprock for reviewing the PRs ๐ค and organizing this translation. [Here you can see](https://huggingface.co/docs/transformers/main/it/index) how the translation is going ๐<|||||>Hi ๐ฎ๐น team. Sorry for the delay in reviewing your PRs. Particularly to @mfumanelli and @nickprock, who have been kindly reviewing the existing PRs.<|||||>@mfumanelli, the documentation had some recent changes. In particular, `parallelism` was partitioned into several docs. I applied this change to the main text of the issue; no need to do anything else ๐ค.<|||||>@mfumanelli I'll be ready with the Italian translated version of [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx) next week :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @mfumanelli, what's the status of the issue? My tasks (and also others) are completed and merged, could you flag the list of documents?
I can translate `add_new_pipeline`<|||||>Hi @mfumanelli , would it be possible to contribute at [testing.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/testing.mdx)? <|||||>Hi @mfumanelli
I've done [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx) and it's been approved and merge :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger, I submitted my PR for [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx).<|||||>Is there something else I could work on? Let me know @mfumanelli @nickprock.<|||||>> Is there something else I could work on? Let me know @mfumanelli @nickprock.
Hi @lorenzobalzani I think you could translate `testing.mdx` that has not yet been assigned.
@mfumanelli can we check the documentation that has already been translated?<|||||>> > Is there something else I could work on? Let me know @mfumanelli @nickprock.
>
>
>
> Hi @lorenzobalzani I think you could translate `testing.mdx` that has not yet been assigned.
>
> @mfumanelli can we check the documentation that has already been translated?
Yes, you can definitely assign me to that.<|||||>Hi!
I chatted with @mfumanelli and I check the folder with Italian doc, at the moment the situation is as follows (if I have forgotten anything please post in this thread):
## Get Started section
* [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) @mfumanelli
* [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx). @mfumanelli
* [x] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). @mfumanelli
## Tutorial section
* [x] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) @nickprock
* [x] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) @mfumanelli
* [x] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) @nickprock
* [x] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) @nickprock
* [x] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) @mfumanelli
* [x] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) @mfumanelli
* [x] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) @nickprock
## How-to guides
* [x] [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx) @andreafailla
* [x] [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx) @F02934
* [x] [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx) @Xpiri
* [x] [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx) @lorenzobalzani
* [x] [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx) @andreafailla
* [x] [converting_tensorflow_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/converting_tensorflow_models.mdx) @Xpiri
* [x] [serialization.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/serialization.mdx) @F02934
* [ ] [performance.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/performance.mdx) WIP @machicomio
* [ ] [perf_train_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_one.mdx) WIP @Baelish03
* [ ] [perf_train_gpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_many.mdx)
* [x] [perf_train_cpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu.mdx) @nickprock
* [x] [perf_train_cpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu_many.mdx) @nickprock
* [x] [perf_train_tpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_tpu.mdx) @nickprock
* [x] [perf_train_special](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_special.mdx) @nickprock
* [x] [perf_infer_cpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_cpu.mdx) @nickprock
* [x] [perf_infer_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_one.mdx) @davidegazze
* [x] [perf_infer_gpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_many.mdx) @nickprock
* [x] [perf_infer_special](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_special.mdx) @nickprock
* [x] [perf_hardware](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_hardware.mdx) @draperkm
* [x] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx) @nickprock
> * [x] [parallelism.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/parallelism.mdx) @Xpiri
> **splitted in multiple files** [pull #18301](https://github.com/huggingface/transformers/pull/18301)
* [ ] [benchmarks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/benchmarks.mdx) WIP @mfumanelli
* [x] [migration.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/migration.mdx) @Baelish03
* [ ] [troubleshooting.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/troubleshooting.mdx) WIP @F02934
* [x] [debugging.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/debugging.mdx) @nickprock
* [ ] notebooks
* [x] [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx) @lorenzobalzani
* [x] [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx) @Steboss89
* [x] [add_new_pipeline.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/add_new_pipeline.mdx) @nickprock
* [ ] [testing.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/testing.mdx) WIP @lorenzobalzani
* [x] [pr_checks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/pr_checks.mdx) @alexcalabrese
Have a good day :relaxed:
@sgugger @omarespejel @MKhalusova <|||||>Hi @nickprock, I would like to translate `pr_checks.mdx`, can you please assign it to me?<|||||>if for you it is ok, i would like to translate **big model**<|||||>Hi @davidegazze , **big model** has already been translated. You can pick a document from the list:
https://github.com/huggingface/transformers/issues/17459#issuecomment-1465123774
Thanks<|||||>sorry, then [perf_infer_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_one.mdx)<|||||>is perf_train_gpu_one.mdx free? if yes, i will translate it <|||||>> is perf_train_gpu_one.mdx free? if yes, i will translate it
It's yours! |
transformers | 17,458 | closed | TF: XLA Beam Search | # What does this PR do?
WIP
Dependencies:
1. #17426
2. #17479
Status log:
1. 2022/05/27: XLA compiles, but fails at runtime. The same code runs in eager execution. The XLA compiler seems like it is not able to pick up the right information during beam search, as it complains inside the forward pass on things that work fine with greedy search/sample. Maybe it's GPT-2 specific. Going to fix enable XLA on BART and retry there.
2. 2022/05/30: Now with BART. Same thing -- it compiles, but gets confused at run time. The exact same code path, with eager execution, runs fine. I suspect the cache creation inside the while loop doesn't help.
Ideas yet to explore:
1. Split past cache creation from past cache update, and create the cache before the loop. XLA expects variable creation at trace time, and we can do it with the proper separation; | 05-27-2022 16:52:47 | 05-27-2022 16:52:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17458). All of your documentation changes will be reflected on that endpoint.<|||||>Closed in favor of #17857 |
transformers | 17,457 | closed | [Json configs] Make json prettier for all saved tokenizer files & ensure same json format for all processors (tok + feat_extract) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As an example, see: https://huggingface.co/facebook/wav2vec2-base-100h/commit/9c1fef36b62a428a658e5b022ef9f21b38f47e0b
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-27-2022 16:17:59 | 05-27-2022 16:17:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@julien-c @sgugger do you think it could make sense to make a huge automated PR creation to correct all tokenizer configs? Or maybe too much given that we have 80,000 checkpoints?
Don't think it's possible to break anything, but still not sure if it makes sense<|||||>would be a good stress test i guess =) |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.