repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 21,371 | closed | Error while loading a model on 8bit | I'm trying to run inference on a model which doesn't fit on my GPU using this code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device_map = {'transformer.wte': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
'transformer.h.1': 0,
'transformer.h.2': 0,
'transformer.h.3': 0,
'transformer.h.4': 0,
'transformer.h.5': 0,
'transformer.h.6': 0,
'transformer.h.7': 0,
'transformer.h.8': 0,
'transformer.h.9': 0,
'transformer.h.10': 0,
'transformer.h.11': 0,
'transformer.h.12': 0,
'transformer.h.13': 0,
'transformer.h.14': 0,
'transformer.h.15': 0,
'transformer.h.16': 0,
'transformer.h.17': 0,
'transformer.h.18': 0,
'transformer.h.19': 0,
'transformer.h.20': 0,
'transformer.h.21': 0,
'transformer.h.22': 0,
'transformer.h.23': 'cpu',
'transformer.h.24': 'cpu',
'transformer.h.25': 'cpu',
'transformer.h.26': 'cpu',
'transformer.h.27': 'cpu',
'transformer.ln_f': 'cpu',
'lm_head': 'cpu'}
tokenizer = AutoTokenizer.from_pretrained("tomaxe/fr-boris-sharded")
model = AutoModelForCausalLM.from_pretrained("tomaxe/fr-boris-sharded", load_in_8bit = True, load_in_8bit_skip_modules = ['lm_head',
'transformer.ln_f',
'transformer.h.27',
'transformer.h.26',
'transformer.h.25',
'transformer.h.24',
'transformer.h.23'], device_map = device_map)
input_text = "salut"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length = 20)
print(tokenizer.decode(outputs[0]))
```
And I'm running into this error:
@younesbelkada Do you know what I could do? Thanks
```
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: CUDA runtime path found: /home/thomas/anaconda3/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Loading checkpoint shards: 0%| | 0/30 [00:00<?, ?it/s]
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
A: torch.Size([2, 4096]), B: torch.Size([4096, 4096]), C: (2, 4096); (lda, ldb, ldc): (c_int(64), c_int(131072), c_int(64)); (m, n, k): (c_int(2), c_int(4096), c_int(4096))
Traceback (most recent call last):
File "/home/thomas/anaconda3/lib/python3.9/site-packages/spyder_kernels/py3compat.py", line 356, in compat_exec
exec(code, globals, locals)
File "/home/thomas/Downloads/infersharded.py", line 46, in <module>
outputs = model.generate(input_ids, max_length = 20)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/generation/utils.py", line 1391, in generate
return self.greedy_search(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/generation/utils.py", line 2179, in greedy_search
outputs = self(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 813, in forward
transformer_outputs = self.transformer(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 668, in forward
outputs = block(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 302, in forward
attn_outputs = self.attn(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 203, in forward
query = self.q_proj(hidden_states)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 254, in forward
out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 405, in matmul
return MatMul8bitLt.apply(A, B, out, bias, state)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 311, in forward
out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/functional.py", line 1410, in igemmlt
raise Exception('cublasLt ran into an error!')
Exception: cublasLt ran into an error!
cuBLAS API failed with status 15
error detected``` | 01-30-2023 15:00:32 | 01-30-2023 15:00:32 | Hi @toma-x
Thanks for the issue,
What you are currently trying to do (mixing CPU + int8) is not supported yet.
I think that this feature should be addressed in `QuantizationConfig` in the coming weeks; I will keep you updated in this issue.
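For reference, once that lands, the mixed GPU/CPU int8 setup from this thread should look roughly like the sketch below (`BitsAndBytesConfig` and the `llm_int8_enable_fp32_cpu_offload` flag are the eventual API names and may differ by version):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# modules mapped to "cpu" in the device_map are kept out of int8 and run in fp32 on CPU
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

tokenizer = AutoTokenizer.from_pretrained("tomaxe/fr-boris-sharded")
model = AutoModelForCausalLM.from_pretrained(
    "tomaxe/fr-boris-sharded",
    device_map="auto",  # or the explicit GPU/CPU split from the snippet above
    quantization_config=quantization_config,
)
```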
<|||||>Glad to know this, looking forward to hearing from you soon about this @younesbelkada <|||||>Hi @toma-x
This is now supported on the `main` branch of `transformers`, can you check this section of the docs? 🙏
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu <|||||>Hi @younesbelkada thank you for letting me updated, I will sure take a look this is very interesting, have a great day 😁<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,370 | closed | [CLAP] Add CLAP to the library | # What does this PR do?
Adds CLAP to the HF library cc @younesbelkada | 01-30-2023 14:15:51 | 01-30-2023 14:15:51 | Now we should create a new task: `zero-shot audio classification`!
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok! Looking very good, last thing is the `configuration` docstring. @younesbelkada will finish this 😉 <|||||>Models are here (with nice model cards hehe):
- https://huggingface.co/ybelkada/clap-htsat-fused
- https://huggingface.co/ybelkada/clap-htsat-unfused
Will transfer them on `laion-ai` once we got an approval :)<|||||>1. Some renaming has to happen to normalise the parameters around mel extraction. Will do this and everything should look good 😉
2. Add the dependency on torchvision as the np implementation received a big NO 😢 <|||||>Found a few discrepancies with the various variables in the config that are not used. Will finish asap; that should make things clearer. <|||||>Ok I broke the history
|
transformers | 21,369 | closed | "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose" when only setting max_new_tokens. | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.4
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante I believe.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install transformers
2. Run the following script
```
from transformers import pipeline
summarizer = pipeline("summarization", max_new_tokens=50)
prompt = "text to summarize"
summary = summarizer(prompt)
```
3. It crashes with the following error:
> ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
### Expected behavior
Expected behavior is that the script should run. I only set `max_new_tokens` and not `max_length`.
This might seem like a duplicate of https://github.com/huggingface/transformers/issues/20894 but the issue persists even after installing the latest version of transformers with the fix for that issue.
| 01-30-2023 14:03:43 | 01-30-2023 14:03:43 | Hey @Gvanderl 👋
We are aware of this issue (some downstream uses of `.generate()`, like the `pipeline`, fail when `max_new_tokens` is set). #21347 will fix it 🎉
After it gets merged, install from main and it should work! (i.e. `pip install --upgrade git+https://github.com/huggingface/transformers.git`)<|||||>Amazing, thanks for being so responsive.
I'm closing the issue. <|||||>Hello,
Trying the new main, I encounter a new issue. It seems that `max_new_tokens` behaves like `max_length`.
However, according to the documentation their behavior should be the following:
> max_length (int, optional, defaults to 20) — The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. In general, prefer the use of max_new_tokens, which ignores the number of tokens in the prompt.
max_new_tokens (int, optional) — The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.
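For illustration, the two documented settings behave as follows (a minimal sketch using GPT-2 as a stand-in):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("hello there", return_tensors="pt")

# caps prompt + generated tokens at 50 in total
out_a = model.generate(**inputs, max_length=50)

# generates up to 50 new tokens on top of the prompt, regardless of its length
out_b = model.generate(**inputs, max_new_tokens=50)
```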
Using the code from above, I get the following stack trace:
```
--- Logging error ---
Traceback (most recent call last):
File "AppData\Local\Programs\Python\Python310\lib\logging\__init__.py", line 1100, in emit
msg = self.format(record)
File "AppData\Local\Programs\Python\Python310\lib\logging\__init__.py", line 943, in format
return fmt.format(record)
File "AppData\Local\Programs\Python\Python310\lib\logging\__init__.py", line 678, in format
record.message = record.getMessage()
File "AppData\Local\Programs\Python\Python310\lib\logging\__init__.py", line 368, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "AppData\Roaming\JetBrains\PyCharm2022.3\scratches\scratch.py", line 6, in <module>
summary = summarizer(prompt)
File "venv\lib\site-packages\transformers\pipelines\text2text_generation.py", line 265, in __call__
return super().__call__(*args, **kwargs)
File "venv\lib\site-packages\transformers\pipelines\text2text_generation.py", line 165, in __call__
result = super().__call__(*args, **kwargs)
File "venv\lib\site-packages\transformers\pipelines\base.py", line 1089, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "venv\lib\site-packages\transformers\pipelines\base.py", line 1096, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "venv\lib\site-packages\transformers\pipelines\base.py", line 995, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "venv\lib\site-packages\transformers\pipelines\text2text_generation.py", line 187, in _forward
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
File "venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "venv\lib\site-packages\transformers\generation\utils.py", line 1285, in generate
logger.warn(
Message: 'Both `max_new_tokens` (=50) and `max_length`(=51) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)'
Arguments: (<class 'UserWarning'>,)
Traceback (most recent call last):
File "AppData\Roaming\JetBrains\PyCharm2022.3\scratches\scratch.py", line 6, in <module>
summary = summarizer(prompt)
File "venv\lib\site-packages\transformers\pipelines\text2text_generation.py", line 265, in __call__
return super().__call__(*args, **kwargs)
File "venv\lib\site-packages\transformers\pipelines\text2text_generation.py", line 165, in __call__
result = super().__call__(*args, **kwargs)
File "venv\lib\site-packages\transformers\pipelines\base.py", line 1089, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "venv\lib\site-packages\transformers\pipelines\base.py", line 1096, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "venv\lib\site-packages\transformers\pipelines\base.py", line 995, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "venv\lib\site-packages\transformers\pipelines\text2text_generation.py", line 187, in _forward
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
File "venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "venv\lib\site-packages\transformers\generation\utils.py", line 1294, in generate
raise ValueError(
ValueError: Unfeasible length constraints: the minimum length (56) is larger than the maximum length (51)
Process finished with exit code 1
```
<|||||>Hey @Gvanderl -- That's because `min_length` is set for that pipeline (in that case, `min_length=56`), and `min_length` has to be smaller than the maximum length you define :)
You can either decrease `min_length` (by setting it explicitly) or increase `max_new_tokens`; a short sketch of both options follows below.<|||||>Oh I see.
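For illustration, a minimal sketch of both options (the exact defaults of the summarization pipeline may differ):
```python
from transformers import pipeline

summarizer = pipeline("summarization")
prompt = "text to summarize"

# option 1: lower min_length so it fits under the effective maximum length
summary = summarizer(prompt, min_length=10, max_new_tokens=50)

# option 2: keep the pipeline's min_length and allow more new tokens instead
summary = summarizer(prompt, max_new_tokens=80)
```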
This fixed the issue, though now every time it runs it gives me the following warning:
```
Message: 'Both `max_new_tokens` (=50) and `max_length`(=51) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)'
```
Any way to get rid of it ? Setting `max_length=None` resulted in a crash. <|||||>You can raise the level of the logger to ignore warnings :) I will be working on the pipelines in the coming week or two, that warning should disappear by then.<|||||>Can anyone please tell me how to pass the max_token_size. I am passing max_length in the generate method, still it is taking the default value
inputs = processor(batch["audio"]["array"], return_tensors="pt",sampling_rate=16_000)
generated_ids = model.generate(inputs=inputs.input_features,max_length=1000)
print(len(generated_ids))
batch["pred_str"] = processor.batch_decode(generated_ids,skip_special_tokens=True,group_tokens=True,max_length=500)
The length is not getting set , I am getting output till length 448. Please help me on this.<|||||>@enankobh that happens because, [if I'm seeing correctly](https://huggingface.co/openai/whisper-large-v2/blob/main/config.json#L44), Whisper's maximum output length is 448 tokens. @ArthurZucker can you confirm? |
transformers | 21,368 | closed | 🚨🚨 Generate: standardize beam search behavior across frameworks | # What does this PR do?
Applies the discussion of #20901 into code. In a nutshell, standardizes beam search behavior across all three frameworks through `early_stopping`, keeping PT's behavior untouched for the previously accepted values of `early_stopping`.
Changes:
1. `early_stopping` was changed from a binary variable (`True` or `False`, defaulting to `False`) to a ternary variable (`True`, `False`, or `"never"`, defaulting to `False`).
- `early_stopping=True` means that beam search will stop whenever `num_beam` complete candidates are obtained, ignoring all room for improvement. No changes across all frameworks;
- `early_stopping=False` means that beam search will use a heuristic to stop. It effectively blocks minor "tail" improvements when `length_penalty` is positive (the default), while saving many beam search iterations. This was already PT's behavior for `early_stopping=False`, and is the new default for TF/FLAX;
- `early_stopping="never"` means that beam search will only stop when it is mathematically impossible to improve. This was TF/FLAX's behavior for `early_stopping=False` (and is the canonical beam search implementation).
2. As a consequence of 1.: PT users can now run the canonical beam search with `early_stopping="never"`.
3. As a consequence of 1.: TF users will notice a significant speedup if they keep the default generation parameters, while increasing `max_new_tokens`/`max_length`. This is the default case for the Marian models, and what triggered all these changes to begin with (thanks @ydshieh #20853 ).
4. As a consequence of 1.: Flax users will get the same benefits as TF users.
Points 3. and 4. imply that there may be some minor differences in the output of `.generate()` with beam search on TF and FLAX. That difference should be very small (it has been PT's behavior all along, which is also our reference implementation) and will come with significant speedups. Still, being a numerically breaking change, it deserves a visible warning in the title (🚨).
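A minimal usage sketch of the ternary flag (PyTorch shown; the TF and Flax calls mirror it):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ROMANCE")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-ROMANCE")
inputs = tokenizer([">>fr<< hello"], return_tensors="pt")

# False (default) -> heuristic stop; True -> stop once num_beams candidates are finished;
# "never" -> canonical beam search, stops only when no improvement is possible
outputs = model.generate(**inputs, num_beams=4, early_stopping="never")
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```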
Fixes https://github.com/huggingface/transformers/issues/18149
_______________________________________________________
Slow tests were ran across all 3 frameworks for:
- [x] BART
- [x] GPT2
- [x] T5
- [x] Marian
_______________________________________________________
Speed test script
```py
from transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel
import time
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
text_in = ['>>fr<< hello']
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(text_in, return_tensors='pt', padding=True)
start = time.time()
translated = model.generate(**batch)
end = time.time()
output = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(output)
print(end - start)
model = TFMarianMTModel.from_pretrained(model_name)
batch = tokenizer(text_in, return_tensors='tf', padding=True)
start = time.time()
translated = model.generate(**batch)
end = time.time()
output = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(output)
print(end - start)
```
Before the PR: same `output`, PT time = `0.084`, TF time = `50.85` (time in seconds, based on my local machine)
After the PR: same `output`, PT time = `0.084`, TF time = `0.701` (time in seconds, based on my local machine)
👉 ~70x faster, same generated text | 01-30-2023 13:54:42 | 01-30-2023 13:54:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> # What does this PR do?
> Applies the discussion of #20901 into code. In a nutshell, standardizes beam search behavior across all three frameworks through `early_stopping`, keeping PT's behavior untouched for the previously accepted values of `early_stopping`.
>
> Changes:
>
> 1. `early_stopping` was changed from a binary variable (`True` or `False`, defaulting to `False`) to a ternary variable (`True`, `False`, or `"never"`, defaulting to `False`).
>
> * `early_stopping=True` means that beam search will stop whenever `num_beam` complete candidates are obtained, ignoring all room for improvement. No changes across all frameworks;
> * `early_stopping=False` means that beam search will use a heuristic to stop. It effectively blocks minor "tail" improvements when `length_penalty` is positive (the default), while saving many beam search iterations. This was already PT's behavior for `early_stopping=False`, and is the new default for TF/FLAX;
> * `early_stopping="never"` means that beam search will only stop when it is mathematically impossible to improve. This was TF/FLAX's behavior for `early_stopping=False` (and is the canonical beam search implementation).
> 2. As a consequence of 1.: PT users can now run the canonical beam search with `early_stopping="never"`.
> 3. As a consequence of 1.: TF users will notice a significant speedup if they keep the default generation parameters, while increasing `max_new_tokens`/`max_length`. This is the default case for the Marian models, and what triggered all these changes to begin with (thanks @ydshieh [Fix TF generation (especially for `TFMarian`) #20853](https://github.com/huggingface/transformers/pull/20853) ).
> 4. As a consequence of 1.: Flax users will get the same benefits as TF users.
>
> Points 3. and 4. imply that there may be some minor differences in the output of `.generate()` with beam search on TF and FLAX. That difference should be very small (it has been PT's behavior all along, which is also our reference implementation) and will come with significant speedups. Still, being a numerically breaking change, it deserves a visible warning in the title (🚨).
>
> Fixes #18149
>
> Slow tests were ran across all 3 frameworks for:
>
> * [x] BART
> * [x] GPT2
> * [x] T5
> * [x] Marian
>
> Speed test script
>
> ```python
> from transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel
> import time
>
> model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
> tokenizer = MarianTokenizer.from_pretrained(model_name)
> text_in = ['>>fr<< hello']
>
> model = MarianMTModel.from_pretrained(model_name)
> batch = tokenizer(text_in, return_tensors='pt', padding=True)
> start = time.time()
> translated = model.generate(**batch)
> end = time.time()
> output = tokenizer.batch_decode(translated, skip_special_tokens=True)
> print(output)
> print(end - start)
>
> model = TFMarianMTModel.from_pretrained(model_name)
> batch = tokenizer(text_in, return_tensors='tf', padding=True)
> start = time.time()
> translated = model.generate(**batch)
> end = time.time()
> output = tokenizer.batch_decode(translated, skip_special_tokens=True)
> print(output)
> print(end - start)
> ```
>
> Before the PR: same `output`, PT time = `0.084`, TF time = `50.85` (time in seconds, based on my local machine) After the PR: same `output`, PT time = `0.084`, TF time = `0.701` (time in seconds, based on my local machine) 👉 ~70x faster, same generated text
Hi @gante, thanks for solving the issue. It is still a little bit weird though, why does TF generate the same output much slower than PT (as stated: 0.084 vs 0.701)?<|||||>Hey @jamie0725 👋 TF eager mode is super slow 🙃
If you compile the TF model (i.e. do `xla_generate = tf.function(model.generate, jit_compile=True)` then call `generate_output = xla_generate(input_ids, ...)`), you'll see that TF is probably faster than PT, depending on the hardware. |
transformers | 21,367 | closed | Fixes path for Graphormer checkpoint | # What does this PR do?
@ydshieh - should fix the graphormer checkpoint path problem.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 01-30-2023 12:49:48 | 01-30-2023 12:49:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@clefourrier We now get some other issues, see the [failed job run](https://github.com/huggingface/transformers/actions/runs/4060331156/jobs/6989383705)
Could you take a look 🙏? Don't hesitate if you need some help.
Error message provided here
```bash
FAILED tests/models/graphormer/test_modeling_graphormer.py::GraphormerModelTest::test_model_from_pretrained - RuntimeError: Error(s) in loading state_dict for GraphormerForGraphClassification:
size mismatch for classifier.classifier.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
FAILED tests/models/graphormer/test_modeling_graphormer.py::GraphormerModelIntegrationTest::test_inference_graph_classification - RuntimeError: Error(s) in loading state_dict for GraphormerForGraphClassification:
size mismatch for classifier.classifier.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
```
<|||||>@ydshieh Opened PR #21419 to fix this! |
transformers | 21,366 | closed | ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). | I am trying to run the evaluation of both MCLIP on zero-shot learning task found on this notebook [colab](https://colab.research.google.com/drive/1zfWeVWY79XXH63Ci-pk8xxx3Vu_RRgW-?usp=sharing).
the model is loaded using the below code
```
if MODEL_TYPE == 'mClip':
from sentence_transformers import SentenceTransformer
# Here we load the multilingual CLIP model. Note, this model can only encode text.
# If you need embeddings for images, you must load the 'clip-ViT-B-32' model
se_language_model = SentenceTransformer('clip-ViT-B-32-multilingual-v1')
se_image_model = SentenceTransformer('clip-ViT-B-32')
language_model = lambda queries: se_language_model.encode(queries, convert_to_tensor=True, show_progress_bar=False).cpu().detach().numpy()
image_model = lambda images: se_image_model.encode(images, batch_size=1024, convert_to_tensor=True, show_progress_bar=False).cpu().detach().numpy()
```
when running the below prediction cell
```
top_ns = [1, 5, 10, 100]
acc_counters = [0. for _ in top_ns]
n = 0.
for i, (images, target) in enumerate(tqdm(loader)):
images = images
target = target.numpy()
# predict
image_features = image_model(images)
image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
logits = 100. * image_features @ zeroshot_weights
# measure accuracy
accs = accuracy(logits, target, topk=top_ns)
for j in range(len(top_ns)):
acc_counters[j] += accs[j]
n += images.shape[0]
tops = {f'top{top_ns[i]}': acc_counters[i] / n * 100 for i in range(len(top_ns))}
print(tops)
```
I get the below error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-41-3500c9b4df73> in <module>
11 target = target.numpy()
12 # predict
---> 13 image_features = image_model(images)
14 image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
15 logits = 100. * image_features @ zeroshot_weights
6 frames
<ipython-input-39-f2cc72683291> in <lambda>(images)
6 se_image_model = SentenceTransformer('clip-ViT-B-32')
7 language_model = lambda queries: se_language_model.encode(queries, convert_to_tensor=True, show_progress_bar=False).cpu().detach().numpy()
----> 8 image_model = lambda images: se_image_model.encode(images, batch_size=64, convert_to_tensor=False, show_progress_bar=False).cpu().detach().numpy()
9 elif MODEL_TYPE == 'bothclip':
10 import jax
/usr/local/lib/python3.8/dist-packages/sentence_transformers/SentenceTransformer.py in encode(self, sentences, batch_size, show_progress_bar, output_value, convert_to_numpy, convert_to_tensor, device, normalize_embeddings)
159 for start_index in trange(0, len(sentences), batch_size, desc="Batches", disable=not show_progress_bar):
160 sentences_batch = sentences_sorted[start_index:start_index+batch_size]
--> 161 print("sentences_batch")
162 print(sentences_batch)
163 features = self.tokenize(sentences_batch)
/usr/local/lib/python3.8/dist-packages/sentence_transformers/SentenceTransformer.py in tokenize(self, texts)
317 def tokenize(self, texts: Union[List[str], List[Dict], List[Tuple[str, str]]]):
318 """
--> 319 Tokenizes the texts
320 """
321 return self._first_module().tokenize(texts)
/usr/local/lib/python3.8/dist-packages/sentence_transformers/models/CLIPModel.py in tokenize(self, texts)
69 images = None
70
---> 71 inputs = self.processor(text=texts_values, images=images, return_tensors="pt", padding=True)
72 inputs['image_text_info'] = image_text_info
73 return inputs
/usr/local/lib/python3.8/dist-packages/transformers/models/clip/processing_clip.py in __call__(self, text, images, return_tensors, **kwargs)
97
98 if text is not None:
---> 99 encoding = self.tokenizer(text, return_tensors=return_tensors, **kwargs)
100
101 if images is not None:
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2525 if not self._in_target_context_manager:
2526 self._switch_to_input_mode()
-> 2527 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
2528 if text_target is not None:
2529 self._switch_to_target_mode()
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py in _call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2583
2584 if not _is_valid_text_input(text):
-> 2585 raise ValueError(
2586 "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
2587 "or `List[List[str]]` (batch of pretokenized examples)."
ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```
How can this be fixed? | 01-30-2023 11:36:42 | 01-30-2023 11:36:42 | Looks like an issue with the sentence-transformers library, not Transformers. Cc-ing @ArthurZucker who may have other ideas.<|||||>I don't really, but gonna try to have a look through the notebook. <|||||>@alhuri could you provide a functioning notebook with the reproduction script? This one does not work for me (missing packages etc.) with the config you are using? Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,365 | closed | Fix DETR tests after #21144 | # What does this PR do?
Bug introduced with #21144 when checking whether annotations were batched. In part, because of a mismatch between type annotations and input types. Updated logic and annotations.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 01-30-2023 11:23:31 | 01-30-2023 11:23:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,364 | closed | Why the 'tokenizer' will return more than one token for single word in GPT2? | I'm trying to use GPT-2 to get the feature of a single word. But I find that not every single word gets a single token from the 'tokenizer'. I don't know why it's like that.
Here is my code:
```
def test_gpt2():
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
model.to(torch.device("cuda"))
# model.feature_extractor._freeze_parameters()
text = 'irving'#"irving"
encoded_input = tokenizer(text, return_tensors='pt')
print(encoded_input)
encoded_input = encoded_input.to(torch.device("cuda"))
output = model(**encoded_input)
print(encoded_input)
# print(output.last_hidden_state)
print(output.last_hidden_state.shape)
```
and the output is:
{'input_ids': tensor([[ 81, 1075]]), 'attention_mask': tensor([[1, 1]])}
{'input_ids': tensor([[ 81, 1075]], device='cuda:0'), 'attention_mask': tensor([[1, 1]], device='cuda:0')}
torch.Size([1, 2, 768]
It seems like the word 'irving' is mapped to '81' and '1075'
It also happens on some other words. | 01-30-2023 11:17:37 | 01-30-2023 11:17:37 | Please use the [forums](https://discuss.huggingface.co/) to ask questions like this as we keep issues for bugs and feature requests only.
Most Transformer models use subword tokenizers, which mean that one word can be split into several tokens. Here the GPT2 tokenizer splits `"irving"` into `["ir", "ving"]`.<|||||>> Please use the [forums](https://discuss.huggingface.co/) to ask questions like this as we keep issues for bugs and feature requests only.
>
> Most Transformer models use subword tokenizers, which mean that one word can be split into several tokens. Here the GPT2 tokenizer splits `"irving"` into `["ir", "ving"]`.
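For completeness, the split can be inspected directly with a small sketch like this:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
ids = tokenizer("irving")["input_ids"]
print(ids)                                   # [81, 1075], matching the output above
print(tokenizer.convert_ids_to_tokens(ids))  # the two subword pieces
```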
I am sorry for putting the question in the wrong place...
and thanks for your help, I've got it! |
transformers | 21,363 | closed | TPU out of memory (OOM) with flax train a language model GPT2 | I'm training the GPT2 on the Flax and TPU i get TPU out of memory error while i have enough memory and without filling the memory it says out of memory like this:
**jax._src.traceback_util.UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Ran out of memory in memory space hbm. Used 17.91G of 15.48G hbm. Exceeded hbm capacity by 2.43G.**
I'm training with the code below:
The original repository uses a batch size of 64 and it works, but when I use a batch size of 256 I get a TPU OOM error even though I have much more memory.
GitHub page of the code below:
**https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling**
**python run_clm_flax.py \
--output_dir="./norwegian-gpt2" \
--model_type="gpt2" \
--config_name="./norwegian-gpt2" \
--tokenizer_name="./norwegian-gpt2" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_no" \
--do_train --do_eval \
--block_size="512" \
--per_device_train_batch_size="256" \
--per_device_eval_batch_size="256" \
--learning_rate="5e-3" --warmup_steps="1000" \
--adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
--overwrite_output_dir \
--num_train_epochs="20" \
--logging_steps="500" \
--save_steps="2500" \
--eval_steps="2500" \
--push_to_hub**
On GitHub I found a similar error:
**link: https://github.com/google/flax/discussions/1690**
It proposes to initialize the model on CPU and then transfer it to TPU, but it does not explain where to do the CPU initialization and when to move back to TPU:
Here is the explanation about initializing on CPU first:
**This is quite odd for sure. Fragmentation and being close to the limit in terms of memory could of course result in errors that appear almost randomly. One thing you could try is to initialize the model on CPU: `jax.jit(model.init, backend="cpu")`. The params are moved to TPU automatically during training or during replication of the state (e.g. `jax_utils.replicate`).** A small sketch of this pattern is shown below.
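A minimal, generic sketch of that pattern (shown with a toy Flax module rather than the GPT-2 model from the script, just to illustrate the CPU-backed `init` followed by replication):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn
from flax import jax_utils

class TinyModel(nn.Module):
    @nn.compact
    def __call__(self, x):
        return nn.Dense(16)(x)

model = TinyModel()
# run parameter initialization on CPU so TPU HBM is untouched at this point
params = jax.jit(model.init, backend="cpu")(jax.random.PRNGKey(0), jnp.ones((1, 8)))
# params only land on the TPU devices once the train state is replicated
params = jax_utils.replicate(params)
```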
Here is the FULL error i get while training on TPU:
Traceback (most recent call last):
File "/kaggle/input/yyyyyyyyyyyyy/casual_model_unsupervised_train.py.txt", line 845, in <module>
main()
File "/kaggle/input/yyyyyyyyyyyyy/casual_model_unsupervised_train.py.txt", line 752, in main
state, train_metric = p_train_step(state, batch)
File "/usr/local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/_src/api.py", line 2026, in cache_miss
out_tree, out_flat = f_pmapped_(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/_src/api.py", line 1902, in pmap_f
out = pxla.xla_pmap(
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 1859, in bind
return map_bind(self, fun, *args, **params)
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 1891, in map_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 1862, in process
return trace.process_map(self, fun, tracers, params)
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 680, in process_call
return primitive.impl(f, *tracers, **params)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 792, in xla_pmap_impl
compiled_fun, fingerprint = parallel_callable(
File "/usr/local/lib/python3.8/site-packages/jax/linear_util.py", line 285, in memoized_fun
ans = call(fun, *args)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 823, in parallel_callable
pmap_executable = pmap_computation.compile()
File "/usr/local/lib/python3.8/site-packages/jax/_src/profiler.py", line 206, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1091, in compile
self._executable = PmapExecutable.from_hlo(self._hlo, **self.compile_args)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1214, in from_hlo
compiled = dispatch.compile_or_get_cached(
File "/usr/local/lib/python3.8/site-packages/jax/_src/dispatch.py", line 768, in compile_or_get_cached
return backend_compile(backend, computation, compile_options)
File "/usr/local/lib/python3.8/site-packages/jax/_src/profiler.py", line 206, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/_src/dispatch.py", line 713, in backend_compile
return backend.compile(built_c, compile_options=options)
jax._src.traceback_util.UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Ran out of memory in memory space hbm. Used 17.91G of 15.48G hbm. Exceeded hbm capacity by 2.43G.
Total hbm usage >= 18.43G:
reserved 530.00M
program 17.91G
arguments 0B
Output size 0B; shares 0B with arguments.
Program hbm requirement 17.91G:
global 132.0K
HLO temp 17.91G (99.8% utilization: Unpadded (14.75G) Padded (14.78G), 17.5% fragmentation (3.13G))
Largest program allocations in hbm:
1. Size: 6.14G
Operator: op_name="pmap(train_step)/jit(main)/dot_general[dimension_numbers=(((2,), (0,)), ((), ())) precision=None preferred_element_type=None]" source_file="/usr/local/lib/python3.8/site-packages/flax/linen/linear.py" source_line=196
Shape: f32[64,511,50257]{1,2,0:T(8,128)}
Unpadded size: 6.12G
Extra memory due to padding: 13.14M (1.0x expansion)
XLA label: fusion.3719 = fusion(get-tuple-element.1407, bitcast.1602), kind=kOutput, calls=fused_computation.2774
Allocation type: HLO temp
==========================
2. Size: 3.07G
Operator: op_name="pmap(train_step)/jit(main)/jit(transpose(jvp(log_softmax)))/add_any" source_file="/usr/local/lib/python3.8/site-packages/optax/_src/loss.py" source_line=172
Shape: bf16[64,511,50257]{1,2,0:T(8,128)(2,1)}
Unpadded size: 3.06G
Extra memory due to padding: 6.57M (1.0x expansion)
XLA label: fusion.8 = fusion(get-tuple-element.1576, get-tuple-element.1575, slice.3481, divide.161), kind=kLoop, calls=fused_computation.8
Allocation type: HLO temp
==========================
**Does anyone know the solution to this problem?**
Thanks
| 01-30-2023 10:29:03 | 01-30-2023 10:29:03 | Or you just need to use a smaller batch size to avoid the OOM error. cc @sanchit-gandhi who may have other ideas.<|||||>
> Or you just need to use a smaller batch size to avoid the OOM error. cc @sanchit-gandhi who may have other ideas.
Thanks for the reply, I really appreciate it.
A smaller batch size has some disadvantages:
1- it takes a huge amount of time for training
2- it is impossible to train large models like GPT-Neo 1.3B and 2.7B parameters; even if I set the batch size to 1 I still get the OOM error, so training large models would be totally impossible.
Anyway, do you think there is any reliable solution, like transferring the model to CPU and then to TPU, or changing the TPU memory limit directly?
<|||||>Hey @sarataylor2000!
> the orginal repository use 64 batch size
I believe this was the _effective_ batch size, which is computed as: `effective_batch_size = per_device_batch_size * num_devices`
In your script, you're setting `per_device_batch_size=256`. Supposing you're using a TPU v3-8, you have 8 TPU cores, which means you have 8 devices. This means your effective batch size is: 256 * 8 = 2048. This is a factor of 2048/64 = 32 times larger than the original repo!
If you need to use an effective batch size of 64, you can work out your per device batch size as: `per_device_batch_size = effective_batch_size / num_devices = 64 / 8 = 8`
So we only need a `per_device_batch_size=8` here!
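In other words, the arithmetic is simply:
```python
num_devices = 8                      # a TPU v3-8 has 8 cores
effective_batch_size = 64            # what the original example used
per_device_batch_size = effective_batch_size // num_devices
print(per_device_batch_size)         # 8 -> pass this as --per_device_train_batch_size
```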
> it propose to implement the model on cup then transfer it to TPU but it did not explain where to transfer to cpu and when back to TPU
The OOM memory we're getting with your example is happening on the `pmap` step, i.e. when we're performing training. The model is loaded up perfectly fine, so no need to change the model loading logic (transferring from CPU -> TPU)
You can try setting `dtype="bfloat16"` to load the params in bfloat16 precision and save memory on the model weights + optimiser states: https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/examples/flax/language-modeling/run_mlm_flax.py#L168<|||||>> Hey @sarataylor2000!
>
> > the orginal repository use 64 batch size
>
> I believe this was the _effective_ batch size, which is computed as: `effective_batch_size = per_device_batch_size * num_devices`
>
> In your script, you're setting `per_device_batch_size=256`. Supposing you're using a TPU v3-8, you have 8 TPU cores, which means you have 8 devices. This means your effective batch size is: 256 * 8 = 2048. This is a factor of 2048/64 = 32 times larger than the original repo!
>
> If you need to use an effective batch size of 64, you can work out your per device batch size as: `per_device_batch_size = effective_batch_size / num_devices = 64 / 8 = 8`
>
> So we only need a `per_device_batch_size=8` here!
>
> > it propose to implement the model on cup then transfer it to TPU but it did not explain where to transfer to cpu and when back to TPU
>
> The OOM memory we're getting with your example is happening on the `pmap` step, i.e. when we're performing training. The model is loaded up perfectly fine, so no need to change the model loading logic (transferring from CPU -> TPU)
>
> You can try setting `dtype="bfloat16"` to load the params in bfloat16 precision and save memory on the model weights + optimiser states:
>
> https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/examples/flax/language-modeling/run_mlm_flax.py#L168
First of all, thanks so much for the reply, I really appreciate it.
1_ the total batch size is 256 and the per-device batch size is 32
2_ changing the dtype works for models up to T5-large but doesn't work with models like T5-XL or T5-XXL
or gpt-nep 1.3 & 2.3 b parameter and gpt-J 8bit this big models if i set per device batch=1 NOT WORK AT ALL!
**So is there any way to prevent OOM and fine-tune these big models?**
<|||||>ok seems no one is capable to solve this TPU OOM problem i suspend here there would be some genuine guys but all SUCKS! No one know the solution<|||||>@sarataylor2000 We do not tolerate this kind of language in this repository. You can learn more by reading our code of conduct [here](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md). As a result, I have blocked you for seven days.
This is an opensource repository, you get all the code for free. You are not entitled to an answer or a free debugging session.<|||||>Hey @sarataylor2000,
What TPU device are you using? A v3-8? If so, it's going to be difficult running super large models like T5 XXL using `pmap`, as each TPU core only has 16GB of memory.
One thing we should definitely try is enabling gradient checkpointing:
https://github.com/huggingface/transformers/blob/21a2d900eceeded7be9edc445b56877b95eda4ca/examples/flax/language-modeling/run_mlm_flax.py#L110
If that doesn't work, we'll have to resort to some heavy engineering to make this work. This is very advanced and thus outside the scope of the `transformers` library, so I've left some pointers here:
1. Add [`scan_with_axes`](https://github.com/google/flax/blob/1f6b0949d964fbc99f8f8b9541caff54226d0a78/flax/linen/partitioning.py#L378). See [seq2seq-speech/modeling_flax_bart.py](https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/models/modeling_flax_bart.py) for an example of the code changes you need to make here
2. Use `pmap` to shard the model and activations across devices, see [JAX pmap](https://jax.readthedocs.io/en/latest/_autosummary/jax.pmap.html) and [bloom-jax-inference](https://github.com/huggingface/bloom-jax-inference/blob/main/bloom_inference/modeling_bloom/modeling_bloom.py)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,362 | closed | Fix `GitModelIntegrationTest.test_batched_generation` device issue | # What does this PR do?
In PR #21282, `input_ids` is not on the target device and CI fails with
```bash
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
``` | 01-30-2023 07:06:16 | 01-30-2023 07:06:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,361 | closed | Fix TextGeneration and Text2TextGeneration pipeline issue with return_dict_in_generate | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
When running `return_dict_in_generate=True` in pipelines, the `generate` method errors out, e.g.
```python
generate_kwargs["min_length"] = generate_kwargs.get("min_length", self.model.config.min_length)
generate_kwargs["max_length"] = generate_kwargs.get("max_length", self.model.config.max_length)
self.check_inputs(input_length, generate_kwargs["min_length"], generate_kwargs["max_length"])
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
E AttributeError: 'GreedySearchEncoderDecoderOutput' object has no attribute 'shape'
```
Reproducible code:
```python
from transformers import pipeline
generator = pipeline('text-generation', 'gpt2')
generator('hello', return_dict_in_generate=True)
```
Fix is to add a check if `return_dict_in_generate=True`, then let `generated_sequence = generated_sequence.sequences`.
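Concretely, the pipeline's generation step gains roughly this guard (a sketch of the idea rather than the exact diff; `self`, `model_inputs` and `generate_kwargs` are the pipeline's own variables):
```python
generated_sequence = self.model.generate(**model_inputs, **generate_kwargs)
if generate_kwargs.get("return_dict_in_generate", False):
    # generate() returned a ModelOutput, so keep only the token ids
    generated_sequence = generated_sequence.sequences
```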
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante, @Narsil
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-30-2023 06:34:27 | 01-30-2023 06:34:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21361). All of your documentation changes will be reflected on that endpoint.<|||||>> Hi thank you for this PR. Do you mind sharing in which context you need to use this return changing flag?
Currently I use the scores for a custom stopping criterion (e.g. stop when cumulative probs are > x).<|||||>@tokestermw interesting! `pipeline` only returns the output sequence, but to store the `scores` internally in `.generate()`, `return_dict_in_generate` is indeed needed. Is this what is happening in your use case?<|||||>@gante yes that's right! to access the `scores` here: https://github.com/huggingface/transformers/blob/42b60f8b02941b0c40c42e150a101eb372c3856e/src/transformers/generation/stopping_criteria.py#L37
<|||||>> Currently I use the scores for a custom stopping criterion (e.g. stop when cumulative probs are > x).
Shouldn't that be done with a custom `StoppingCriteria` ?
```python
from transformers import StoppingCriteria


class MyStoppingCriteria(StoppingCriteria):
    def __init__(self, threshold: float):
        self.cumulative = 0.0
        self.threshold = threshold

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        # accumulate the probability of the most likely next token
        self.cumulative += scores.softmax(dim=-1).max().item()
        # stop generation once the cumulative value exceeds the threshold
        return self.cumulative > self.threshold
```
This PR is small enough to be OK anyway, just wondering if there's not a "cleaner" way for you to solve your issue.<|||||>@Narsil we do use a custom `StoppingCriteria`, and use it inside pipelines.
But yeah arguably custom stuff should maybe be done outside of pipelines.
Sidenote is that we can't currently pass in tokenizer args like `truncation=True` in pipelines, so we've had to either write a custom pipeline, or rewrite to not use pipelines :)<|||||>> Sidenote is that we can't currently pass in tokenizer args like truncation=True in pipelines, so we've had to either write a custom pipeline, or rewrite to not use pipelines :)
We could definitely add `tokenizer_kwargs` to enable all the args you want on the tokenizer.
Another way of doing it would be by subclassing
https://huggingface.co/docs/transformers/v4.26.0/en/add_new_pipeline#how-to-create-a-custom-pipeline
And then using `pipeline(...., pipeline_class=MyPipelineClass)` to use it instead of the default one.
Then you can add all the fancy logic you need.
The docs also explain how to share it!
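A minimal sketch of that subclassing route for text generation (the exact `preprocess` contract is an assumption here and can change between versions, so treat it as a starting point):
```python
from transformers import TextGenerationPipeline, pipeline

class TruncatingTextGenerationPipeline(TextGenerationPipeline):
    def preprocess(self, prompt_text, **generate_kwargs):
        # force truncation on the tokenization step
        model_inputs = self.tokenizer(
            prompt_text,
            truncation=True,
            max_length=512,  # illustrative limit
            return_tensors=self.framework,
        )
        model_inputs["prompt_text"] = prompt_text
        return model_inputs

generator = pipeline(
    "text-generation",
    model="gpt2",
    pipeline_class=TruncatingTextGenerationPipeline,
)
print(generator("hello " * 2000, max_new_tokens=5))
```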
<|||||>Thanks @Narsil ! i can take a look at `tokenizer_kwargs` in a separate PR, please let me know if anything else is needed in this PR<|||||>Seems good, just don't pass all `tokenizer_kwargs` as a generic `**kwargs`.
Like let's capture `pipeline(...tokenizer_kwargs={"truncation": True})` so that it cannot clash with `generate_kwargs` (inconveninent but necessary, `max_length` is an argument for both)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @echarlaix, potentialy including `tokenizer_kwargs` is planned 😉 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,360 | closed | Segment Fault when exporting GPT-2 as ONNX format | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. launch container with `pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime`
2. install `transformers` and `onnxruntime` with `pip`
3. Execute the following script
```
# %% [markdown]
# ## Reference
# - [ONNX Tutorial by HuggingFace](https://huggingface.co/docs/transformers/serialization)
# - [Custom ONNX Config for HuggingFace Transformers](https://huggingface.co/docs/transformers/serialization#implementing-a-custom-onnx-configuration)
#
# %%
import torch as t
from transformers import GPT2Tokenizer, GPT2Model
from transformers.models.gpt2 import GPT2Config, GPT2OnnxConfig
# %%
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
# %%
model.eval()
# %%
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
output
# %%
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel
onnx_path = Path("gpt2-.onnx")
onnx_config = GPT2OnnxConfig(model.config)
print(list(onnx_config.outputs.keys()))
print(f"ONNX OP Seet: {onnx_config.default_onnx_opset}")
onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
Log:
```
['last_hidden_state']
ONNX OP Seet: 13
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
/opt/conda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:796: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if batch_size <= 0:
Segmentation fault
(base) root@f2f3e984f0bd:/home/polonsky/Documents/model-infernce-demo# /opt/conda/bin/python /home/polonsky/Documents/model-infernce-demo/torchscript/gpt2_onnx.py
['last_hidden_state']
ONNX OP Seet: 13
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
/opt/conda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:796: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if batch_size <= 0:
Segmentation fault
```
### Expected behavior
Yield model in onnx for production use | 01-30-2023 04:05:14 | 01-30-2023 04:05:14 | Make sure you have the optimum library installed to have the latest version of our ONNX export, and if the bug still persists, please open an issue in that repo as they will be able to help you :-) <|||||>> Make sure you have the optimum library installed to have the latest version of our ONNX export, and if the bug still persists, please open an issue in that repo as they will be able to help you :-)
I've updated the libraries via `pip3 install -U transformers optimum onnxruntime` and executed the script above, and it still ended up with a `Segment Fault`, despite having fewer warnings than before
```
['last_hidden_state']
ONNX OP Seet: 13
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
/opt/conda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:794: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if batch_size <= 0:
Segmentation fault
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any response?<|||||>Hi @BorisPolonsky , the transformers ONNX export is not maintained anymore, and you should install `optimum` to get the latest updates related to the ONNX export, see https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model
```
pip install -U optimum
optimum-cli export onnx --model gpt2 gpt2_onnx/
```
If you encounter an issue with this command, feel free to open an issue in https://github.com/huggingface/optimum/issues and I will have a second look.
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Migrating to optimum. |
transformers | 21,359 | closed | Difference between FlaxViTModel and FlaxCLIPVisionTransformer | Hi, I noticed that HuggingFace has two different Flax-based implementations of the vision transformer, the [FlaxCLIPVisionTransformer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_flax_clip.py#L535) and the [FlaxViTModule](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_flax_vit.py#L507). Is there a reason there is a separate ViT implementation for CLIP? Are there any notable differences between the FlaxCLIPVisionTransformer and the FlaxViTModel? Thank you! | 01-29-2023 22:02:24 | 01-29-2023 22:02:24 | Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only.<|||||>I think they're equivalent (one would need to check they are both applying pre-norm or post-norm etc.), but the reason we have 2 different implementations is because Transformers has a [one model, one file philosophy](https://huggingface.co/blog/transformers-design-philosophy). |
transformers | 21,358 | closed | `TFAutoModelForSequenceClassification` Onnx export not working | ### System Info
- `transformers` version: 4.26.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante and @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run:
```py
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
# Load tokenizer and TensorFlow weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
# Save to disk
tokenizer.save_pretrained("local-tf-checkpoint")
tf_model.save_pretrained("local-tf-checkpoint")
```
Then:
```console
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
Taken directly from here: https://huggingface.co/docs/transformers/serialization
Looks like some package version issue since I installed it with
```console
pip install transformers[tf,onnx]
```
### Expected behavior
Error
```console
2023-01-30 00:32:22.939113: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Local TensorFlow model found.
Framework not requested. Using tf2onnx to export to ONNX.
2023-01-30 00:32:25.820716: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Some layers from the model checkpoint at local-tf-checkpoint were not used when initializing TFDistilBertModel: ['dropout_19', 'pre_classifier', 'classifier']
- This IS expected if you are initializing TFDistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFDistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFDistilBertModel were initialized from the model checkpoint at local-tf-checkpoint.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertModel for predictions without further training.
WARNING:tensorflow:From /Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
2023-01-30 00:32:29.729090: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2023-01-30 00:32:37.305540: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py:58: FutureWarning: In the future `np.str` will be defined as the corresponding NumPy scalar. (This may have returned Python scalars in past versions.
np_data = np_data.astype(np.str).astype(object)
Traceback (most recent call last):
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py", line 58, in tf_to_onnx_tensor
np_data = np_data.astype(np.str).astype(object)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/numpy/__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'str'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 240, in <module>
main()
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 232, in main
export_with_transformers(args)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 165, in export_with_transformers
onnx_inputs, onnx_outputs = export(
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/convert.py", line 355, in export
return export_tensorflow(preprocessor, model, config, opset, output, tokenizer=tokenizer)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/convert.py", line 282, in export_tensorflow
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/convert.py", line 494, in from_keras
model_proto, external_tensor_storage = _convert_common(
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/convert.py", line 164, in _convert_common
g = process_tf_graph(tf_graph, const_node_values=const_node_values,
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 459, in process_tf_graph
main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 474, in graphs_from_tf
ordered_func = resolve_functions(tf_graph)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_loader.py", line 760, in resolve_functions
_, _, _, _, _, functions = tflist_to_onnx(tf_graph, {})
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py", line 441, in tflist_to_onnx
onnx_tensor = tf_to_onnx_tensor(value, name=port_name(node.name))
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py", line 63, in tf_to_onnx_tensor
raise RuntimeError("Not support type: {}".format(type(np_data.flat[0])))
RuntimeError: Not support type: <class 'bytes'>
```
Should export models without any error | 01-29-2023 19:09:20 | 01-29-2023 19:09:20 | Uhmmm I am not knowledgeable about ONNX -- maybe @michaelbenayoun has some ideas? 🤔 <|||||>It works on my side. In any case you can try exporting it with optimum:
```bash
optimum-cli export onnx --model distilbert-base-uncased onnx/
```<|||||>seems like the issue is with `numpy==1.24.1`. Works with `numpy==1.21.6`. Maybe tf2onnx needs to make some updates. Closing this. Thank you both 😃 |
transformers | 21,357 | closed | 'T5Config' object has no attribute '__deepcopy__' | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Reproduction
The following fails for me:
```python
from transformers import AutoConfig, T5ForConditionalGeneration

model_name = "google/t5-v1_1-small"
config = AutoConfig.from_pretrained(model_name)
model_ref = T5ForConditionalGeneration._from_config(config)
'T5Config' object has no attribute '__deepcopy__'
File "/home/nouamane/projects/transformers/src/transformers/configuration_utils.py", line 260, in __getattribute__ (Current frame)
return super().__getattribute__(key)
File "/home/nouamane/miniconda/envs/py38/lib/python3.8/copy.py", line 151, in deepcopy
copier = getattr(x, "__deepcopy__", None)
File "/home/nouamane/projects/transformers/src/transformers/models/t5/modeling_t5.py", line 1498, in __init__
encoder_config = copy.deepcopy(config)
File "/home/nouamane/projects/transformers/src/transformers/modeling_utils.py", line 1077, in _from_config
model = cls(config, **kwargs)
File "/home/nouamane/projects/brrr/examples/t5/test_t5.py", line 78, in test_t5
model_ref = T5ForConditionalGeneration._from_config(config)
File "/home/nouamane/projects/brrr/examples/t5/test_t5.py", line 298, in <module>
test_t5()
AttributeError: 'T5Config' object has no attribute '__deepcopy__'
```
Could be a cache problem (because I just installed transformers from source today)
### Expected behavior
Loading config should work fine | 01-29-2023 15:37:02 | 01-29-2023 15:37:02 | |
transformers | 21,356 | closed | Patch rag tf generate | # What does this PR do?
Patches the failing tests on `main` after merging #21324, which added support for `logits_processor` in the TF framework. | 01-29-2023 09:39:37 | 01-29-2023 09:39:37 | Just need to rebase<|||||>Oops, seems that this was too preemptive! This will be for the TF timestamps PR #21334 <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21356). All of your documentation changes will be reflected on that endpoint.
transformers | 21,355 | closed | Is a Transformer-based image caption model trained to predict the last token only in training phase? | For the following code (which is a snippet from https://keras.io/examples/vision/image_captioning/), I do not see the steps for entering the input sequence into the model token by token in the training phase. Instead, the input sequence is entered all at once except the last token, by: batch_seq_inp = batch_seq[:, :-1] in the function def _compute_caption_loss_and_acc, as shown below. Based on my knowledge, if we have an image that is captioned with a sentence like (image_1 : a man is running), the input/output pairs in training should be like:
image_1 SOS ==> a
image_1 SOS a ==> man
image_1 SOS a man ==> is
image_1 SOS a man is ==> running
image_1 SOS a man is running ==> END
So I am a little confused.
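For reference, here is a small, hedged illustration (with made-up token strings, not taken from the Keras example) of what the shift in `_compute_caption_loss_and_acc` produces for the toy caption above:

```python
# Illustration only: the shift turns one caption into all (prefix -> next token) pairs.
tokens = ["SOS", "a", "man", "is", "running", "END"]
batch_seq_inp = tokens[:-1]   # ["SOS", "a", "man", "is", "running"]
batch_seq_true = tokens[1:]   # ["a", "man", "is", "running", "END"]
for inp, target in zip(batch_seq_inp, batch_seq_true):
    # With a causal mask in the decoder, position i only attends to tokens[0..i],
    # so every pair below is trained in a single forward pass.
    print(inp, "->", target)
```
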
class ImageCaptioningModel(keras.Model):
def __init__(
self, cnn_model, encoder, decoder, num_captions_per_image=5, image_aug=None,
):
super().__init__()
self.cnn_model = cnn_model
self.encoder = encoder
self.decoder = decoder
self.loss_tracker = keras.metrics.Mean(name="loss")
self.acc_tracker = keras.metrics.Mean(name="accuracy")
self.num_captions_per_image = num_captions_per_image
self.image_aug = image_aug
def calculate_loss(self, y_true, y_pred, mask):
loss = self.loss(y_true, y_pred)
mask = tf.cast(mask, dtype=loss.dtype)
loss *= mask
return tf.reduce_sum(loss) / tf.reduce_sum(mask)
def calculate_accuracy(self, y_true, y_pred, mask):
accuracy = tf.equal(y_true, tf.argmax(y_pred, axis=2))
accuracy = tf.math.logical_and(mask, accuracy)
accuracy = tf.cast(accuracy, dtype=tf.float32)
mask = tf.cast(mask, dtype=tf.float32)
return tf.reduce_sum(accuracy) / tf.reduce_sum(mask)
def _compute_caption_loss_and_acc(self, img_embed, batch_seq, training=True):
encoder_out = self.encoder(img_embed, training=training)
batch_seq_inp = batch_seq[:, :-1]
batch_seq_true = batch_seq[:, 1:]
mask = tf.math.not_equal(batch_seq_true, 0)
batch_seq_pred = self.decoder(
batch_seq_inp, encoder_out, training=training, mask=mask
)
loss = self.calculate_loss(batch_seq_true, batch_seq_pred, mask)
acc = self.calculate_accuracy(batch_seq_true, batch_seq_pred, mask)
return loss, acc
def train_step(self, batch_data):
batch_img, batch_seq = batch_data
batch_loss = 0
batch_acc = 0
if self.image_aug:
batch_img = self.image_aug(batch_img)
# 1. Get image embeddings
img_embed = self.cnn_model(batch_img)
# 2. Pass each of the five captions one by one to the decoder
# along with the encoder outputs and compute the loss as well as accuracy
# for each caption.
for i in range(self.num_captions_per_image):
with tf.GradientTape() as tape:
loss, acc = self._compute_caption_loss_and_acc(
img_embed, batch_seq[:, i, :], training=True
)
# 3. Update loss and accuracy
batch_loss += loss
batch_acc += acc
# 4. Get the list of all the trainable weights
train_vars = (
self.encoder.trainable_variables + self.decoder.trainable_variables
)
# 5. Get the gradients
grads = tape.gradient(loss, train_vars)
# 6. Update the trainable weights
self.optimizer.apply_gradients(zip(grads, train_vars))
# 7. Update the trackers
batch_acc /= float(self.num_captions_per_image)
self.loss_tracker.update_state(batch_loss)
self.acc_tracker.update_state(batch_acc)
# 8. Return the loss and accuracy values
return {"loss": self.loss_tracker.result(), "acc": self.acc_tracker.result()} | 01-29-2023 09:17:54 | 01-29-2023 09:17:54 | You should ask your question on the [forums](https://discuss.huggingface.co/) where the community can help you, as we keep issues for bugs and feature requests only.<|||||>Thanks for this advice |
transformers | 21,354 | closed | fix the issue that the output dict of jit model could not get [0] | # What does this PR do?
Fixes # (issue)
When the model is optimized by `jit.trace` and then used in pipeline inference for token classification, there is an error like `KeyError: 0`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
- pipelines: @Narsil
| 01-29-2023 08:12:05 | 01-29-2023 08:12:05 | @sgugger @yao-matrix please help to review<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,353 | open | Megatron-11B | ### Model description
I discovered two implementations of this facebook model on the hub, which was trained on the same corpus as Roberta/Bert. I want to try out some prompting, but when I try to download it with
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("hyunwoongko/megatron-11B")
I get a
KeyError: 'megatron' exception.
This is a relatively sizable model, which is as interesting as GPT-J to me. Does it work with transformers?
I found that one of the two model re-publishers below wrote their own library, but it depends on outdated versions of transformers. Can we use this model with transformers?
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The two megatron-to-pytorch models on the hub:
https://huggingface.co/models?search=megatron-11
Extra py lib which should make it work:
https://pypi.org/project/megatron-11b/ | 01-29-2023 07:47:17 | 01-29-2023 07:47:17 | I'm still learning prompting, but after poking around a bit with this model using the pylib I shared above, I found the generated text was of quite low quality for the model size. I got way better results with GPT-J and even GPT-2.
Despite its size, I doubt this model is useful enough. <|||||>When I got coherent sentences out, it switched topics every second sentence. |
transformers | 21,352 | closed | Remove duplicate declarations in dummy inputs for TFLongformer | # What does this PR do?
Remove duplicated lines in [modeling_tf_longformer.py](https://github.com/huggingface/transformers/compare/main...peakji:transformers:patch-1#diff-782b222e9d393fe6750cf8e4cd870bcf3748a92ade5086e518b4d716a80080f8).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-29-2023 05:44:21 | 01-29-2023 05:44:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,351 | closed | translate index to zh(#20095) | # What does this PR do?
Translate index doc to zh
#20095 | 01-29-2023 04:31:31 | 01-29-2023 04:31:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Could you have a look please?<|||||>@ydshieh I'm ok with the translation. |
transformers | 21,350 | closed | Corrected | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR.
@sgugger , @stevhliu & @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-29-2023 02:46:31 | 01-29-2023 02:46:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh could have a quick look? |
transformers | 21,349 | closed | Add Ernie-M Model to huggingface | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Ports Ernie-M from Paddle to Hugging Face (PyTorch) and also fixes #21123
I have uploaded the pytorch converted weights [here](https://huggingface.co/susnato/ernie-m-base_pytorch) and [here](https://huggingface.co/susnato/ernie-m-large_pytorch). The paddle2pytorch weights conversion script has been provided there too.
Work done till now -
1. ported the weights.
2. Added `configuration_ernie_m.py`
`from transformers import AutoConfig`
`config = AutoConfig.from_pretrained("susnato/ernie-m-base_pytorch")`
3. Added `tokenization_ernie_m.py` (Only Slow Tokenizer implemented)
`from transformers import ErnieMTokenizer`
`tokenizer = ErnieMTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")`
4. ErnieMModel is now working.
`from transformers import AutoModel`
`model = AutoModel.from_pretrained("susnato/ernie-m-base_pytorch") # susnato/ernie-m-large_pytorch`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [link here](https://github.com/huggingface/transformers/issues/21123)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-28-2023 18:42:25 | 01-28-2023 18:42:25 | Great work @susnato ! Looking forward to reviewing your PR :)
Let us know when you think the PR is ready<|||||>Hi @younesbelkada the official paddlenlp implementation of ErnieM does not have any LM head class, since it was neither trained on Causal nor on Masked LM. It was pretrained on both Cross Attention Masked LM and Back Translation Masked LM (both implementations are missing in paddlenlp). Do I need to add MaskedLM in this huggingface implementation since it's a encoder based model or should I bypass it and don't include any LM head like the paddlenlp implementation did?<|||||>Hi @susnato !
Thanks for your your message
I think this quite depends on the use case of your model. I'd expect most of the users will rely on `ErnieMModel` since it's the model that is present at `paddlepaddle`. If there is an interest to add these models in the future we can always open follow-up PRs
<|||||>> Hi @susnato ! Thanks for your your message I think this quite depends on the use case of your model. I'd expect most of the users will rely on `ErnieMModel` since it's the model that is present at `paddlepaddle`. If there is an interest to add these models in the future we can always open follow-up PRs
@younesbelkada Ok, then I will not add any LMhead for now, and also the rest of the model is ready(with all tests passed), I am currently looking why circleci tests are failing.<|||||>Thanks!
Currently some tests are not passing because you need to define a `ERNIE_M_PRETRAINED_CONFIG_ARCHIVE_MAP` inside`configuration_ernie_m.py`, check here how it is done for `bert`: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/configuration_bert.py<|||||>Hi @younesbelkada I added that and did bunch of others changes with make repo-consistency, make style, but when I run make fixup still it says this error
`
python utils/check_config_docstrings.py
Traceback (most recent call last):
File "/home/susnato/temp_files/transformers/utils/check_config_docstrings.py", line 89, in <module>
check_config_docstrings_have_checkpoints()
File "/home/susnato/temp_files/transformers/utils/check_config_docstrings.py", line 85, in check_config_docstrings_have_checkpoints
raise ValueError(f"The following configurations don't contain any valid checkpoint:\n{message}")
ValueError: The following configurations don't contain any valid checkpoint:
ErnieMConfig
`
The values I set are - `ERNIE_M_PRETRAINED_CONFIG_ARCHIVE_MAP = {
"ernie-m-base_pytorch": "https://huggingface.co/susnato/ernie-m-base_pytorch/blob/main/config.json",
"ernie-m-large_pytorch": "https://huggingface.co/susnato/ernie-m-large_pytorch/blob/main/config.json",
}`<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @younesbelkada all checks are successful! the PR is ready to review, please review it.<|||||>Hi, @younesbelkada I made all the changes that you requested,
Let me know if any other changes are needed or not and if I have missed any!<|||||>Hi, @younesbelkada I made all the changes as you requested. The tests are now all successful! Please check it.<|||||>Hi @younesbelkada, I made all those changes that you requested.<|||||>Thanks a lot @susnato ! Again great work on the integration so far!
Please wait for the next reviewers to add their review and we should be good merging the PR ;) <|||||>Hi @sgugger I made those changes as you requested and the tests are passed too, please review them.<|||||>Hi @sgugger please forgive me for the previous unaddressed changes, but now I have tried to make all changes that you addressed. Please check it and let me know if I need to make new changes or not.<|||||>Hi, @ArthurZucker I made the changes you said, added the modeling & config file to the `documentation_tests.txt` and also created a new file `tests/models/ernie_m/test_tokenization_ernie_m.py` for testing `ernie-m` tokenizer.
Also there is one thing I want to mention here regarding tokenizer is that, I saw some inconsistency in ErnieMTokenizer(paddle implementation) regarding how it treats white space, for example -
`from paddlenlp.transformers import AutoTokenizer`
`tokenizer = AutoTokenizer.from_pretrained("PaddlePaddle/ernie-m-base", from_hf_hub=True)`
`tokenizer.tokenize("The quick brown fox jumps over the lazy dog.")`
```
['▁The', '▁quick', '▁brown', 'fox', '▁jump', 's', '▁over', '▁the', '▁la', 'zy', '▁dog', '.']
```
here, despite "brown fox" being two different words, the tokenizer doesn't separate them (by "▁"), so when we decode them we get
``
[CLS] The quick brownfox jumps over the lazy dog.[SEP]
``
where there should be a space between brown and fox. This issue is not present in most of the words in vocab.
I managed to solved this issue by inserting a "▁" (at line 215 of `src/transformers/models/ernie_m/tokenization_ernie_m.py`)when we see this condition, but this might lead to very slightly different word_embeddings at times(at most there will be this "▁" character in some sentences between words)! <|||||>Hi @ArthurZucker there seems to be a problem with `tests_tf` of
```
FAILED tests/models/hubert/test_modeling_tf_hubert.py::TFHubertModelTest::test_dataset_conversion
```
which I think is unrelated to this PR, I rebased to `upstream/main` multiple times but still facing this same issue, could you please have a look at this issue?
EDIT : this [PR](https://github.com/huggingface/transformers/pull/21606) states this very issue, I will rebase again after it is merged .<|||||>Hi @ArthurZucker since the `tests_tf` are failing as before, could you please review this code(the recent changes that I made)? It will be very helpful to me because, then I will be able to work on new suggestions/comments, otherwise I am stuck here until this check is fixed.
I will definitely rebase and push again when the test is fixed but in meantime this PR will gain some progress. :)<|||||>Of course! I was waiting for you to ping me 😉 <|||||>Hi @ArthurZucker I pushed the changes please check!<|||||>Okay! LGTM
@sgugger feel free to merge if you think this is ok! 😉 |
transformers | 21,348 | closed | Template for framework-agnostic tests | # What does this PR do?
There are a few cross-framework pain points I often encounter:
1. Ensuring the interface of `.generate()` or models stay consistent across frameworks;
2. Ensuring that TF doesn't get neglected and satisfies 1., when contributors add features/fixes on the PT side;
3. Blocking cases where the interface is the same, but there are numerical differences.
After a brief chat with @sgugger, I thought inheritable framework-agnostic tests could be nice to help with these problems. This week I'll have to write a few TF `.generate()` tests that already exist on the PT side, so this could be a great chance to kill 2 birds with one stone.
However, I'd like to get the pattern right, hence this small PR -- replaces a pair of framework-specific tests with a framework-agnostic test. `Flax` is intentionally left out, as it is missing many testable features and is not being maintained, but it can easily be added in the future.
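To make the idea concrete, here is a rough sketch of the pattern with toy, made-up names (the real subclasses would be the PT and TF test classes supplying `torch`/`tf` equivalents):

```python
import unittest

import numpy as np


class AgnosticExampleTests:
    """Framework-agnostic test bodies; every name here is purely illustrative."""

    # Each framework-specific subclass fills this in with its own constructors/converters.
    framework_dependent_parameters: dict = {}

    def test_interface_matches(self):
        params = self.framework_dependent_parameters
        tensor = params["ones"]((2, 3))          # framework-specific constructor
        as_array = params["to_numpy"](tensor)    # framework-specific conversion
        # The assertion itself is shared, so all frameworks must agree on the expected values.
        self.assertTrue(np.array_equal(as_array, np.ones((2, 3))))


class NumpyBackedTests(AgnosticExampleTests, unittest.TestCase):
    # In the real PR this would be e.g. a PT or TF test class instead of a numpy stand-in.
    framework_dependent_parameters = {"ones": np.ones, "to_numpy": np.asarray}
```
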
Let me know what you think of it! | 01-28-2023 16:00:57 | 01-28-2023 16:00:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I might be missing something, but not clear to me how/where this is tested here.
@amyeroberts I see, this example does not test numerical differences :)
However, consider the tests [here](https://github.com/huggingface/transformers/blob/main/tests/generation/test_logits_process.py), which also exist for TF. They are primarily numerical tests, they test the resulting vector against a constant. It is not uncommon for a user to find an edge case on these processors, fix the PT side of it, and add a few more numerical checks. These new numerical checks would fail in TF, but I often can't convince the users to make the change on the TF side (and then fail to follow up myself after it is merged). Result: a bug that is trivial to fix in that moment gets lost 🙈 This unified framework would prevent it -- the fix would need to include the corresponding TF change to pass the tests (either fixed by the user or by one of us). PT == expected values == TF |
transformers | 21,347 | closed | Generate: Relaxed `max_length` and `max_new_tokens` coexistence | # What does this PR do?
TL;DR: stops raising an exception in `.generate()` when `max_length` and `max_new_tokens` are both set -- `max_new_tokens` will take precedence.
Context: Some downstream uses of `.generate()`, for legacy reasons, set `max_length` (e.g. pipelines, API). If a user tries manually setting `max_new_tokens`, as suggested in the documentation, an exception is thrown (even if `max_length` is manually set to `None`).
Because `max_length` can be set outside the `GenerationConfig` and `.generate()`, it's hard to detect whether the `max_length` is intentionally set (and thus shouldn't be allowed together with `max_new_tokens`) or simply a helpful default. Raising an exception can thus block a correct usage of `max_new_tokens`. This PR relaxes this requirement, making `max_new_tokens` take precedence and raising an informative warning instead.
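For illustration, the precedence logic boils down to something like the sketch below (helper name and signature are made up, not the actual diff):

```python
import warnings


def resolve_max_length(max_length, max_new_tokens, input_ids_seq_length):
    # Hypothetical helper mirroring the behavior described above: if both are set,
    # `max_new_tokens` wins and a warning is emitted instead of raising an exception.
    if max_new_tokens is not None:
        if max_length is not None:
            warnings.warn(
                "Both `max_new_tokens` and `max_length` are set; "
                "`max_new_tokens` takes precedence."
            )
        return input_ids_seq_length + max_new_tokens
    return max_length
```
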
Fixes #21369
___________________________________________________________________
Example of failing code before this PR:
```py
import requests
API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-xl"
headers = {"Authorization": "Bearer hf_xxx"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query(
{"inputs": "The answer to the universe is", "parameters": {"max_new_tokens": 100}}
)
output
``` | 01-28-2023 14:57:08 | 01-28-2023 14:57:08 | cc @lewtun -- thank you for raising the issue 🙏 <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,346 | closed | Stored XSS | ### System Info
Hi Team,
I've discovered a **stored cross-site scripting** vulnerability on your domain (https://transformer.huggingface.co/). Is your organization currently conducting a bug bounty program for this website? If so, kindly provide me with the appropriate information. Additionally, would it be possible for you to furnish me with the email contact of your security team?
Best Regards
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
null
### Expected behavior
null | 01-28-2023 03:15:31 | 01-28-2023 03:15:31 | Hi @Dark-Aura, Thanks for reaching out to us! 🤗 We have a bug bounty program with HackerOne and would love for you to submit security vulnerability reports to https://hackerone.com/hugging_face. We'll need to send you an invite since this is a private program, so please feel free to send us an email at [email protected] or let me know your H1 username. Please let us know if there are any questions. Thanks again!<|||||>Thanks for your reply, however, I had already visited this page and I'm getting a 404 error, for your perusal I'm attaching the screenshot please take a look at it

<|||||>Hi @Dark-Aura,
Thanks for sending your H1 username to us via email; you should receive an invite to our bug bounty program soon. Please let us know if you run into any issues submitting reports!
Thanks again,
Michelle |
transformers | 21,345 | closed | Add the GeLU activation from pytorch with the tanh approximation | Fixes #21344. See that issue for more details. | 01-27-2023 23:00:12 | 01-27-2023 23:00:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for working on this! Does the new implementation in Pytorch produce the exact same results as `gelu_fast`? If that is the case, I would prefer we just replace the current `gelu_fast` with this when PyTorch is 1.12 or above.<|||||>> Thanks for working on this! Does the new implementation in Pytorch produce the exact same results as `gelu_fast`? If that is the case, I would prefer we just replace the current `gelu_fast` with this when PyTorch is 1.12 or above.
The results are similar but there are still rounding errors, see my analysis in the related issue #21344. I would also be in favor of replacing the existing implementation / using it as default, but I would introduce small numerical differences in some models, is that a problem?<|||||>Ah yes, the difference is quite significant sadly, so this will probably introduce a difference that is too big :-/
So let's go with a new activation. Maybe `gelu_pytorch` is a better name?<|||||>> Ah yes, the difference is quite significant sadly, so this will probably introduce a difference that is too big :-/ So let's go with a new activation. Maybe `gelu_pytorch` is a better name?
Wouldn't it cause confusion with the default pytorch implementation? That one is currently named "gelu". (And the one named "gelu_python").
Also should I add an explicit pytorch version check?
<|||||>Ok for the name then. For the version check, you will need to create a function that returns the instance of GELU and issues an import error if the PyTorch version is too low, then put that function in the mappinh.<|||||>> Ok for the name then. For the version check, you will need to create a function that returns the instance of GELU and issues an import error if the PyTorch version is too low, then put that function in the mappinh.
Made a class to match the other activations, and raising a `NotImplementedError` (I don't think an `ImportError` is the best here since the function exists in earlier versions.) Also added to `test_get_activation`.<|||||>Failure is unrelated so merging. Thanks again for your contribution! |
transformers | 21,344 | closed | Add the pytorch implementation of the OpenAI GeLU approximation | ### Feature request
Add support for the pytorch implementation of OpenAI's approximation of the GeLU function, added in pytorch 1.12. This implementation is equivalent to `gelu_new` or `gelu_fast` but much faster. It can come as a separate activation function, for example `gelu_new_python`, to avoid disrupting existing models.
### Motivation
Many transformer models use OpenAI's approximation (tanh) for the GeLU, through the activation function `gelu_new` or `gelu_fast`. These implementations are extremely slow (despite their name) because they consist of multiple operations/kernels (8 and 9 respectively).
Since version 1.12, pytorch supports a single-kernel, C/cuda implementation through the argument `approximate='tanh'` ( https://pytorch.org/docs/stable/generated/torch.nn.GELU.html). This implementation is 6-10x faster than what currently exists in transformers, and is numerically equal up to rounding errors.
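For illustration, the new activation could be a thin wrapper like the sketch below (the class name is just a placeholder, not the final API):

```python
import torch
from torch import nn


class GELUTanh(nn.Module):
    """Illustrative wrapper around PyTorch's fused tanh approximation (requires torch>=1.12)."""

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        return nn.functional.gelu(input, approximate="tanh")
```
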
When benchmarking the inference speed of the [SantaCoder models](https://huggingface.co/bigcode/santacoder), I found that using the pytorch implementation allowed for an end-to-end speedup of ~15-20%.
I also benchmarked the speed and accuracy using the following code (on a A100-80GB):
```
import time
import torch
from transformers.activations import NewGELUActivation, FastGELUActivation
dtype=torch.float32
eps=torch.finfo(dtype).eps
x=torch.empty([2**30], device="cuda", dtype=dtype).normal_()
torch.cuda.synchronize()
t0=time.perf_counter()
y0=torch.nn.functional.gelu(x, approximate="tanh")
torch.cuda.synchronize()
t1=time.perf_counter()
y1=NewGELUActivation()(x)
torch.cuda.synchronize()
t2=time.perf_counter()
y2=FastGELUActivation()(x)
torch.cuda.synchronize()
t3=time.perf_counter()
y3=torch.nn.functional.gelu(x)
torch.cuda.synchronize()
t4=time.perf_counter()
print(f"Torch tanh: {1000*(t1-t0):.3f} ms")
print(f"New: {1000*(t2-t1):.3f} ms")
print(f"Fast: {1000*(t3-t2):.3f} ms")
print(f"Torch orig: {1000*(t4-t3):.3f} ms")
print(f"Torch tanh vs new: {(y1-y0).float().std().cpu().item()/eps:.3f}")
print(f"Torch tanh vs fast: {(y2-y0).float().std().cpu().item()/eps:.3f}")
print(f"New vs fast: {(y2-y1).float().std().cpu().item()/eps:.3f}")
print(f"Torch tanh vs torch orig: {(y3-y0).float().std().cpu().item()/eps:.3f}")
```
With output
```
Torch tanh: 4.921 ms
New: 43.253 ms
Fast: 50.269 ms
Torch orig: 4.989 ms
Torch tanh vs new: 0.042
Torch tanh vs fast: 0.147
New vs fast: 0.147
Torch tanh vs torch orig: 971.960
```
I.e., the tanh version of torch matches the fast and new gelu within epsilon while being 8.8x/10.2x faster, but is different from the original version
With dtype=torch.float16:
```
Torch tanh: 3.342 ms
New: 22.667 ms
Fast: 26.104 ms
Torch orig: 3.395 ms
Torch tanh vs new: 0.244
Torch tanh vs fast: 0.243
New vs fast: 0.143
Torch tanh vs torch orig: 0.216
```
I.e., it's 6.8x/7.8x faster, and the implementation doesn't matters because rounding errors dominate.
On cpu (float32), size 2**28 (268M):
```
Torch tanh: 182.575 ms
New: 1683.934 ms
Fast: 1925.547 ms
Torch orig: 141.410 ms
Torch tanh vs new: 0.043
Torch tanh vs fast: 0.144
New vs fast: 0.144
Torch tanh vs torch orig: 971.852
```
I.e., same accuracy and speedup (9.2x/10.5x faster)
### Your contribution
Opened a draft PR (#21345) | 01-27-2023 22:32:29 | 01-27-2023 22:32:29 | |
transformers | 21,343 | closed | [`run_(clm|mlm).py` examples] add streaming dataset support | This PR adds streaming dataset support. It's fine if it remains a PR for those who might need it. if it's to be merged should probably check when streaming was added to `datasets` and require that version.
1. API-wise everything is as before but need to pass `--streaming` to load the dataset in a streaming mode.
2. and also since `IterableDataset` has no `__len__` this makes the `--max_steps` flag required (possibly can make this more clear by asserting earlier if `--streaming` and `not --max_steps`)
This should provide a huge speed-up for starting to work with large datasets: work can begin immediately and the data gets loaded progressively (which overall is likely to be slower - haven't measured yet - but it makes starting much easier).
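For reference, the streaming branch inside the script boils down to something like this sketch (not the exact diff):

```python
from datasets import load_dataset

# With --streaming, load_dataset returns an IterableDataset that downloads
# records lazily instead of materializing the whole dataset on disk first.
raw_datasets = load_dataset("wikitext", "wikitext-103-raw-v1", streaming=True)
print(next(iter(raw_datasets["train"])))
```
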
Example run:
```
python examples/pytorch/language-modeling/run_clm.py --bf16 --seed 42 \
--model_name_or_path facebook/opt-1.3b --dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 --per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 --gradient_accumulation_steps 1 --do_train \
--do_eval --logging_steps 10 --save_steps 1000 --eval_steps 100 --weight_decay \
0.1 --num_train_epochs 1 --adam_beta1 0.9 --adam_beta2 0.95 --learning_rate \
0.0002 --lr_scheduler_type linear --warmup_steps 500 --report_to tensorboard \
--output_dir save_dir --max_steps 1_000_000 --streaming
```
Ported this to `run_mlm.py` as well.
Thank you, @lhoestq for helping with this | 01-27-2023 22:20:01 | 01-27-2023 22:20:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Reworked w/o `kwargs` and copying the code where needed, please let me know when it's good for you and I can replicate to mlm<|||||>should I figure out when streaming was added and put a assert if an earlier datasets is used?<|||||>Good point. Streaming was introduced a while ago, but I think it stabilized with the 2.0 version, so maybe use this one as a minimal requirement for streaming?<|||||>- added version check
- ported to mlm
- added a doc entry
This is good to go for a final review, Sylvain. Thank you.<|||||>And thank you for reviewing my work, Sylvain! |
transformers | 21,342 | closed | Errors while training apple/mobilevit-xx-small on image-classification example with and without deepspeed | ### System Info
- transformers installed from source
- python 3.8
- ZeRO-Stage-1
### Who can help?
@amyeroberts @NielsRogge @JingyaHuang
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python -m torch.distributed.launch --nproc_per_node=8 ~/transformers/examples/pytorch/image-classification/run_image_classification.py --model_name_or_path apple/mobilevit-xx-small --dataset_name beans --overwrite_output_dir --output_dir ./outputs/ --remove_unused_columns False --do_train --do_eval --learning_rate 2e-5 --num_train_epochs 50 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 10 --evaluation_strategy epoch --seed 1337 --fp16 True --report_to none --ignore_mismatched_sizes True
AttributeError: 'MobileViTImageProcessor' object has no attribute 'image_mean'
python -m torch.distributed.launch --nproc_per_node=8 ~/transformers/examples/pytorch/image-classification/run_image_classification.py --model_name_or_path apple/mobilevit-xx-small --dataset_name beans --overwrite_output_dir --output_dir ./outputs/ --remove_unused_columns False --do_train --do_eval --learning_rate 2e-5 --num_train_epochs 50 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 10 --evaluation_strategy epoch --seed 1337 --fp16 True --report_to none --ignore_mismatched_sizes True --deepspeed ~/zero_stage_1.json
AttributeError: 'MobileViTConfig' object has no attribute 'hidden_size'
### Expected behavior
I expect both examples to train with the deepspeed-enabled run completing faster than baseline. Currently, both scenarios error out. Thank you in advance for the assistance. | 01-27-2023 19:57:33 | 01-27-2023 19:57:33 | cc @amyeroberts and @alaradirik <|||||>See #21221 - the example scripts aren't meant to work out-of-the-box for any model, as MobileViT for instance doesn't normalizes the images with a mean and std, so one needs to comment out the normalization line.<|||||>@NielsRogge thank you, I can confirm that commenting out the normalization line resolves this issue. How about the DeepSpeed compatibility issue? @JingyaHuang mentioned to me that MobileViT has more of a CNN architecture rather than transformer so it may not work at all with DeepSpeed. Is this accurate? Or is there another workaround you can provide? Thank you in advance.<|||||>Hi @prathikr, as your previous assumption of the issue's root was `hidden_sizes`, I suspected it could come from the lack of [support for variable hidden_size in ONNX Runtime's graph optimization](https://github.com/microsoft/onnxruntime/blob/81120e9e8b377567daa00d55614c902f35b2ae8f/onnxruntime/python/tools/transformers/optimizer.py#L145) other than a problem from the DeepSpeed(but it could be, I am not aware of this).<|||||>@JingyaHuang I don't think so because this issue arises when running without ONNX Runtime as well<|||||>@NielsRogge @JingyaHuang any updates on the deepspeed issue for MobileViT? <|||||>What's the reason you'd like to use Deepspeed?
Also, please provide a full stacktrace<|||||>@NielsRogge, most Microsoft internal training pipelines including AzureML leverage DeepSpeed since it provides better training speed and smaller memory footprint. When we evaluate any Hugging Face models, we always try to integrate both ORT and DeepSpeed to maximize training speed.<|||||>Traceback (most recent call last):
File "/home/prathikrao/transformers/examples/pytorch/image-classification/run_image_classification.py", line 392, in <module>
main()
File "/home/prathikrao/transformers/examples/pytorch/image-classification/run_image_classification.py", line 366, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/prathikrao/transformers/src/transformers/trainer.py", line 1547, in train
return inner_training_loop(
File "/home/prathikrao/transformers/src/transformers/trainer.py", line 1616, in _inner_training_loop
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
File "/home/prathikrao/transformers/src/transformers/deepspeed.py", line 312, in deepspeed_init
hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps)
File "/home/prathikrao/transformers/src/transformers/deepspeed.py", line 174, in trainer_config_finalize
hidden_size = model.config.hidden_size
File "/home/prathikrao/transformers/src/transformers/configuration_utils.py", line 260, in __getattribute__
return super().__getattribute__(key)
AttributeError: 'MobileViTConfig' object has no attribute 'hidden_size'<|||||>@NielsRogge above is the full stacktrace. I believe I'm seeing this because MobileViT has an attribute named [hidden_sizes](https://github.com/huggingface/transformers/blob/7119bb052a3f492b9af3afe4f3f13132445eba6e/src/transformers/models/mobilevit/configuration_mobilevit.py#L63) which is a list (different from ViT, for example, which has [hidden_size](https://github.com/huggingface/transformers/blob/7119bb052a3f492b9af3afe4f3f13132445eba6e/src/transformers/models/vit/configuration_vit.py#L47) as an int). Not sure if this is an architectural difference that would make Deepspeed incompatible or if there is some workaround for this.<|||||>I'll cc @stas00 here as he's our Deepspeed expert. Several vision models indeed have a `hidden_sizes` attribute in their config (as a list of integers), as vision models often consist of several stages, each stage having its own dimensionality (unlike models like ViT which use the same hidden size for each Transformer block).<|||||>Yes, of course, I can help here. Thank you for pinging me, @NielsRogge
OK, here is what's happening. The issue isn't with DeepSpeed itself but with its integration into transformers.
Here we pull `model.config.hidden_size` to create the most efficient setup for the model:
https://github.com/huggingface/transformers/blob/5b67ab9924cf7587b39b59eb0bf0abd3d099e8b9/src/transformers/deepspeed.py#L174-L175
now you're saying not all models have it.
So here are some suggestions for how to overcome this problem, while keeping the optimization as close as possible to the best.
1. check if the model has `config.hidden_size` and if it doesn't and we have these 2 settings in the incoming ds_config:
```
zero_optimization.stage3_prefetch_bucket_size = "auto"
zero_optimization.stage3_param_persistence_threshold = "auto"
```
assert about it and then the user can replace `auto` with the value they think works the best and run again.
2. check if the model has `config.hidden_size` and if it doesn't and we have these 2 settings in the incoming ds_config:
```
zero_optimization.stage3_prefetch_bucket_size = "auto"
zero_optimization.stage3_param_persistence_threshold = "auto"
```
check that it has `hidden_sizes`, take the largest value, and use that instead of `config.hidden_size` (a rough sketch of this is below).
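If it helps to picture option #2, here is a minimal sketch (illustrative only, not the actual patch):
```python
# Illustrative sketch of option #2 (not the actual code in transformers):
# fall back to the largest entry of `hidden_sizes` when a config has no scalar `hidden_size`.
def infer_hidden_size(config):
    if hasattr(config, "hidden_size"):
        return config.hidden_size
    if hasattr(config, "hidden_sizes"):  # e.g. MobileViTConfig exposes a list, one value per stage
        return max(config.hidden_sizes)
    raise ValueError(
        "The config has neither `hidden_size` nor `hidden_sizes`; please replace the `auto` values "
        "for `zero_optimization.stage3_prefetch_bucket_size` and "
        "`zero_optimization.stage3_param_persistence_threshold` with explicit numbers."
    )
```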
Should we try option #2?<|||||>Please try this PR and let me know if it fixes the problem https://github.com/huggingface/transformers/pull/21504
Thank you!
p.s. I'm making an assumption that the largest hidden size is the most optimal choice, I could be wrong here. <|||||>Thank you @stas00, I can confirm this solves the issue. Not sure about the performance but it at least runs with this fix.<|||||>Thanks a lot for testing, @prathikr. Will merge this asap. |
transformers | 21,341 | closed | Generate: TF `compute_transition_scores` | # What does this PR do?
This PR adds the TF `compute_transition_scores`, akin to PT's #21191.
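For context, usage should mirror the PyTorch version from #21191, roughly like this (a sketch, not the PR's test code):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer(["Today is"], return_tensors="tf")
outputs = model.generate(
    **inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True
)

# Per-step transition scores of the generated tokens (log-probabilities when normalize_logits=True)
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
print(transition_scores)
```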
What seemingly started off as a simple task ended up being a complex one -- the TF side had many missing and/or incorrect secondary `generate` outputs 😬 This means that we need to beef up the TF side of the `generate` tests, which is much shorter than its PT counterpart. | 01-27-2023 17:03:38 | 01-27-2023 17:03:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten reverted the change in the order of operations in beam sample, and added a comment based on our offline conversation.
LMK if you're happy with the PR 🙏 |
transformers | 21,340 | closed | Nystromformer ONNX export | # This PR implements ONNX export functionality for Nystromformer models.
In addition to running the test cases, I exported a locally built Nystromformer 2048 input sequence model to ONNX and ran prediction.
I verified the prediction output.
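For reference, the kind of sanity check described above looks roughly like this (checkpoint name and export path are assumptions, not the exact commands used here):
```python
# Rough sketch of the verification step (assumed checkpoint/paths; not the exact script used here)
import numpy as np
import onnxruntime
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "uw-madison/nystromformer-512"  # public checkpoint, used here only for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pt_model = AutoModel.from_pretrained(checkpoint)

inputs = tokenizer("ONNX export sanity check", return_tensors="pt")
with torch.no_grad():
    pt_out = pt_model(**inputs).last_hidden_state.numpy()

session = onnxruntime.InferenceSession("nystromformer.onnx")  # path to the exported model (assumed)
ort_inputs = {i.name: inputs[i.name].numpy() for i in session.get_inputs()}
ort_out = session.run(None, ort_inputs)[0]

print("max abs diff:", np.abs(pt_out - ort_out).max())
```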
Fixes # (https://github.com/huggingface/transformers/issues/21339)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-27-2023 14:18:52 | 01-27-2023 14:18:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi there. The ONNX integration has moved to the [optimum library](https://github.com/huggingface/optimum) so you should open your pull request there :-)<|||||>Hi, okay. Should I delete this PR or fix the error and submit another PR to optimum??<|||||>Yes, you can cloes this PR. We don' accept new PRs as all the support is in optimum now.<|||||>Moving code to optimum |
transformers | 21,339 | closed | Add Nystromformer support to ONNX export | ### Feature request
Add Nystromformer support to ONNX export
### Motivation
Nystromformer models are computationally inexpensive long-sequence models which perform exceptionally well.
### Your contribution
I will be submitting a PR (code and docs are complete); the PR is in progress. | 01-27-2023 14:18:00 | 01-27-2023 14:18:00 | Moving code to optimum
transformers | 21,338 | closed | Automated compatible models list for task guides | # What does this PR do?
This PR adds a script that aggregates model architectures compatible with a task illustrated in a task guide and adds a list of links to them in a <Tip> in the guide. This serves several purposes:
1. Reinforces the idea that task guides are applicable to more than one model architecture.
2. Improves discoverability of models.
3. Serves as the first step to improve navigation between task guides and model docs.
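A rough sketch of the aggregation idea (hypothetical; the actual script added by this PR may be organized differently):
```python
# Hypothetical sketch: collect the architectures behind a task-specific auto mapping and turn
# them into markdown links for the corresponding task guide.
from transformers.models.auto import modeling_auto

def compatible_model_links(mapping_name: str = "MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES") -> str:
    mapping = getattr(modeling_auto, mapping_name)
    links = []
    for model_type, archs in sorted(mapping.items()):
        for arch in (archs if isinstance(archs, (list, tuple)) else (archs,)):
            links.append(f"[{arch}](../model_doc/{model_type})")
    return ", ".join(links)

print(compatible_model_links())
```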
| 01-27-2023 14:03:27 | 01-27-2023 14:03:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,337 | closed | Fix `RobertaPreLayerNorm` doctest | # What does this PR do?
Fix `RobertaPreLayerNorm` doctest. The doctest should have 0 failure against this commit 🔥 | 01-27-2023 13:42:18 | 01-27-2023 13:42:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,336 | closed | Cleanup the usage of `layer_norm_eps` in some models | # What does this PR do?
**(No breaking change in this PR)**
**(So far I only change `OneFormerConfig`, but I will update other config classes whose default `layer_norm_eps = 1e-5`)**
Fix the missing usage of `config.layer_norm_eps` in some PyTorch models, **but only where the default value in the config class is `1e-05`** (i.e. the same as the default value in `nn.LayerNorm`). In this case, we don't break the current behavior, and **the (WIP) test has to deal with far fewer edge cases** (which is always a bit of a burden).
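Concretely, the pattern being cleaned up is along these lines (illustrative, not taken from a specific model):
```python
import torch.nn as nn

class ExampleBlock(nn.Module):  # illustrative only
    def __init__(self, config):
        super().__init__()
        # Before: the config value was silently ignored; nn.LayerNorm falls back to its own eps=1e-5
        # self.layernorm = nn.LayerNorm(config.hidden_size)
        # After: read the eps from the config; since the default there is also 1e-5, behavior is unchanged
        self.layernorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)

    def forward(self, hidden_states):
        return self.layernorm(hidden_states)
```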
**In #20699, my claim regarding the breaking change was a bit misleading**: only config classes whose default `layer_norm_eps` is not `1e-5` (for example, `LxmertConfig`) would see a breaking change - and this PR does not touch those. | 01-27-2023 11:33:16 | 01-27-2023 11:33:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @NielsRogge Just want to hear from you to make sure we are good for this change 🙏 before I continue.
I can definitely just add these config classes to a list of edge cases in my (WIP) tests. But I feel it's better to clean them up, so future models won't copy/paste the same code, and we accumulate more and more edge cases to skip in the tests. |
transformers | 21,335 | closed | TypeError: _forward_unimplemented() got an unexpected keyword argument 'input_ids' | ### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
### Who can help?
@ArthurZucker and @younesbelkada since I am using `distilbert-base-uncased`<br>(and maybe @sgugger, since I am following this [link](https://huggingface.co/transformers/v3.2.0/custom_datasets.html) on the Hugging Face website)
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using a custom dataset to fine-tune `distilbert-base-uncased`. I followed the method described [on the Hugging Face website](https://huggingface.co/transformers/v3.2.0/custom_datasets.html) to the letter. Here is my code for making the dataset.
```python
def create_hugging_face_dataset(data:dict):
train_text, test_text, train_label, test_label = train_test_split(data['text'], data['label'], test_size=0.1, shuffle=True)
train_text, validation_text, train_label, validation_label = train_test_split(train_text, train_label, test_size=0.1, shuffle=True)
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_text, truncation=True, padding=True)
test_encodings = tokenizer(test_text, truncation=True, padding=True)
validation_encodings = tokenizer(validation_text, truncation=True, padding=True)
class MBICDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.Tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.Tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_ds = MBICDataset(train_encodings, train_label)
test_ds = MBICDataset(test_encodings, test_label)
validation_ds = MBICDataset(validation_encodings, validation_label)
FINAL_DS = {"train":train_ds, "test":test_ds, "validation":validation_ds}
```
After making the dataset I try to fine-tune the model using the following code.
```python
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
training_stuff = {
"batch_size": 64,
"epochs": 4,
"learning_rate": 1e-5,
"weight_decay": 0.01
}
training_args = TrainingArguments(
output_dir="C:/Users/uujain2/Desktop/Utkarsh/FYP/Models/DistilBert",
per_device_train_batch_size=training_stuff["batch_size"],
evaluation_strategy="steps",
num_train_epochs=training_stuff["epochs"],
fp16=True,
save_steps=100,
eval_steps=50,
logging_steps=10,
weight_decay=training_stuff["weight_decay"],
learning_rate=training_stuff["learning_rate"],
save_total_limit=64,
remove_unused_columns=False,
push_to_hub=False,
report_to='tensorboard',
load_best_model_at_end=True,
)
model = DistilBertPreTrainedModel.from_pretrained(
'distilbert-base-uncased',
num_labels=3,
id2label={0: 'Biased', 1: 'Non-biased', 2: 'No agreeemnt'},
label2id={'Biased': 0, 'Non-biased': 1, 'No agreement': 2},
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=FINAL_DS['train'],
eval_dataset=FINAL_DS['validation'],
tokenizer=tokenizer,
)
train_results = trainer.train()
```
However, I run into the following error.
```
Traceback (most recent call last):
File "c:\Users\uujain2\Desktop\Utkarsh\FYP\Code\test.py", line 68, in <module>
train_results = trainer.train()
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 1501, in train
return inner_training_loop(
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 1749, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 2508, in training_step
loss = self.compute_loss(model, inputs)
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 2540, in compute_loss
outputs = model(**inputs)
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
TypeError: _forward_unimplemented() got an unexpected keyword argument 'input_ids'
```
### Expected behavior
I expect the model to start the finetuning process instead of throwing this error. | 01-27-2023 11:12:29 | 01-27-2023 11:12:29 | `DistilBertPreTrainedModel` is an abstract class and shouldn't be used directly. Maybe you wanted to use `DistilBertModel` or `DistilBertForPretraining`?<|||||>Thank you for your quick response. It was my silly mistake to use an abstract class for pre-training. I was able to import `DistilBertModel`, however the import for `DistilBertForPretraining` failed, but that's alright.
However when I try to run the model now I get the following error.
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
I have followed the webpage titled [Fine-tuning with custom datasets](https://huggingface.co/transformers/v3.4.0/custom_datasets.html). My function that creates the initial lists with texts and labels is below. The data is formatted very similarly to the webpage:
```python
def create_MBIC_data_dict() -> dict[str, str]:
data_dict = {'text': [], 'label':[]}
with open(f"{DATA_FOLDER_PATH}/final_labels_MBIC_new.csv") as csv_file:
csv_reader = csv.reader(csv_file)
line_count = 0
for row in csv_reader:
if line_count != 0:
data_dict['text'].append(row[0])
label_val = -1
match row[7]:
case "Biased":
label_val = 1
case "Non-biased":
label_val = 0
case "No agreement":
label_val = 2
data_dict['label'].append(label_val)
line_count += 1
return data_dict
```
Afterwards the `create_hugging_face_dataset` function executes which creates the dataset.
@sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,334 | open | Tf timestamps whisper + update generate support | # What does this PR do?
This PR updates the way we generate in TF and Flax to fix the breaking changes that we had.
It also adds support for the timestamps in `TF`.
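Once merged, usage on the TF side should look roughly like this (a sketch based on the existing PyTorch API, not taken from the PR's tests):
```python
from datasets import load_dataset
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Small public dummy dataset, used here only to get a 16 kHz audio array
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]

input_features = processor(audio, sampling_rate=16000, return_tensors="tf").input_features
generated_ids = model.generate(input_features, return_timestamps=True)

print(processor.tokenizer.decode(generated_ids[0], decode_with_timestamps=True))
```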
Follows #21965 | 01-27-2023 10:35:54 | 01-27-2023 10:35:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21334). All of your documentation changes will be reflected on that endpoint.<|||||>Awesome thanks for the review 🤗 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>lmk when you want to pick this up again :P Meanwhile, shall we add the WIP label, so that the bot doesn't ping us?<|||||>yes! Hahah sorry, maybe next week or 2 weeks from now !<|||||>Okay! Thanks to @gante's recommendations, the xla generation works perfectly! The slow timestamp processing test also passes 🥳 <|||||>Thanks for your review, will adresse all of this <|||||>@ArthurZucker I was testing out if I get the timestamps with TF model with your ```tf-timestamps-whisper``` branch on colab but I see this:
```
[/content/transformers/src/transformers/models/whisper/tokenization_whisper.py](https://localhost:8080/#) in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, output_offsets, time_precision, decode_with_timestamps, **kwargs)
593 )
594 if decode_with_timestamps:
--> 595 text = self._decode_with_timestamps(token_ids, time_precision=time_precision)
596 # retrieve offsets
597 if output_offsets:
[/content/transformers/src/transformers/models/whisper/tokenization_whisper.py](https://localhost:8080/#) in _decode_with_timestamps(self, token_ids, time_precision)
501 for token in token_ids:
502 if token >= timestamp_begin:
--> 503 timestamp = f"<|{(token - timestamp_begin) * time_precision:.2f}|>"
504 outputs.append(timestamp)
505 outputs.append([])
[/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
[/usr/local/lib/python3.10/dist-packages/tensorflow/python/ops/gen_math_ops.py](https://localhost:8080/#) in mul(x, y, name)
6574 if tld.is_eager:
6575 try:
-> 6576 _result = pywrap_tfe.TFE_Py_FastPathExecute(
6577 _ctx, "Mul", name, x, y)
6578 return _result
TypeError: Cannot convert 0.02 to EagerTensor of dtype int32
```
<|||||>Hey! That's probably because I haven't pulled from main for a while and we changed the Whisper tokenizer. As you can see, the decoding process is the one failing here <|||||>@ArthurZucker Thanks for the response. I got the issue resolved with
```
timestamp = f"<|{float(token - timestamp_begin) * time_precision:.2f}|>"
```
i.e. changing ```token - timestamp_begin``` to ```float(token - timestamp_begin)```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,333 | closed | Little cleanup: let huggingface_hub manage token retrieval | # What does this PR do?
Since the [`huggingface_hub==0.11.0` release](https://github.com/huggingface/huggingface_hub/releases/tag/v0.11.0), `hfh` always sends the stored token when making requests to the Hub, unless explicitly told not to (`use_auth_token=False`). Before that, `transformers` had implemented some workarounds to retrieve the cached token when `token=None` is provided by the user. This PR aims to remove those workarounds since `hfh` automatically does this part.
Note: in the case of the `PushToHubMixin` class, I changed some of the arguments/return values of a private method (no need to return a token anymore). I thought it would be OK as the method is private, but please let me know if you prefer that I revert this part.
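To make the behavior change concrete, a small illustration (the repo id is a placeholder; this is not code from this PR):
```python
# Illustration only; "org/private-model" is a made-up repo id.
from huggingface_hub import HfFolder, hf_hub_download

# Before huggingface_hub 0.11.0, transformers resolved the cached token itself:
# token = HfFolder.get_token()
# hf_hub_download("org/private-model", "config.json", use_auth_token=token)

# Since 0.11.0 the stored token is sent automatically, unless explicitly disabled:
hf_hub_download("org/private-model", "config.json")                         # cached token is used
hf_hub_download("org/private-model", "config.json", use_auth_token=False)   # force anonymous request
```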
## Who can review?
@sgugger
| 01-27-2023 09:54:33 | 01-27-2023 09:54:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes that would be cool if a Great Depreciation is initiated! :) I can help if needed.
It seems the tests are failing with some `ModuleNotFoundError: No module named 'transformers_modules.local'` errors. Is it related to the PR? Doesn't seem to be but since it's affecting tests named `test_from_pretrained_***`, I'm asking.
In any case, if the PR looks good to you, can I let you merge it?<|||||>Those are flaky tests I need to fix :-) Merging! |
transformers | 21,332 | closed | Add variant to transformers | # What does this PR do?
This PR adds a `"variant"` keyword argument to PyTorch's `from_pretrained` and `save_pretrained` so that multiple weight variants can be saved in the model repo.
You can try it out by running:
```python
from transformers import CLIPTextModel
path = "huggingface/the-no-branch-repo" # or ./text_encoder if local
print("This should work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder", variant="no_ema")
print("This should work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder", variant="fp16")
print("This should work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder")
print("This should NOT work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder", variant="other")
```
From this repo: https://huggingface.co/huggingface/the-no-branch-repo/tree/main/text_encoder . The repo is a dummy Stable Diffusion model and the folder structure looks as follows:
```
├── feature_extractor
│ └── preprocessor_config.json
├── load.py
├── model_index.json
├── safety_checker
│ ├── config.json
│ └── pytorch_model.bin
├── save.py
├── scheduler
│ └── scheduler_config.json
├── text_encoder
│ ├── config.json
│ ├── pytorch_model.bin
│ ├── pytorch_model.fp16.bin
│ └── pytorch_model.no_ema.bin
├── tokenizer
│ ├── merges.txt
│ ├── special_tokens_map.json
│ ├── tokenizer_config.json
│ └── vocab.json
├── unet
│ ├── config.json
│ └── diffusion_pytorch_model.bin
└── vae
├── config.json
└── diffusion_pytorch_model.bin
```
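On the saving side, the counterpart looks roughly like this (a sketch; the file names follow the layout above, and the variant only affects the weight file name):
```python
from transformers import CLIPTextModel

model = CLIPTextModel.from_pretrained("huggingface/the-no-branch-repo", subfolder="text_encoder")

# The variant only changes the checkpoint file name, matching the tree above:
model.save_pretrained("./text_encoder")                    # -> pytorch_model.bin
model.save_pretrained("./text_encoder", variant="fp16")    # -> pytorch_model.fp16.bin
model.save_pretrained("./text_encoder", variant="no_ema")  # -> pytorch_model.no_ema.bin
```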
cc @pcuenca @patil-suraj @sgugger @LysandreJik @julien-c @osanseviero
**[Update] This PR should be ready for merge** | 01-27-2023 09:17:59 | 01-27-2023 09:17:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> pytorch_model.{variant}.bin sounds better to me, to keep the file-extension (not so important for .bin, but more important for .h5, .safetensors or any other format)
Even for `.bin` files, I'd say it's good to keep the file extension as it does not break the LFS property for existing `.gitattributes` files (see [huggingface/the-no-branch-repo](https://huggingface.co/huggingface/the-no-branch-repo/tree/main/text_encoder) where bin files are uploaded as regular).<|||||>Failing test is unrelated. Think this PR is good for merge.
@wauplin @julien-c good for you?
The resulting folder structure now looks as described in the PR statement: https://github.com/huggingface/transformers/pull/21332#issue-1559421203<|||||>Thanks for the reviews! Merging<|||||>cc @sgugger would it be possible to add this feature to `push_to_hub` as well?
I'd like to use it for BLIP-2. For the moment it seems the only way to do this is calling `save_pretrained("...", variant="fp16")` and then manually upload the PyTorch checkpoint to the model repo<|||||>Happy to review a PR. |
transformers | 21,331 | closed | Bump onnx from 1.11.0 to 1.13.0 in /examples/research_projects/decision_transformer | Bumps [onnx](https://github.com/onnx/onnx) from 1.11.0 to 1.13.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/onnx/onnx/releases">onnx's releases</a>.</em></p>
<blockquote>
<h2>v1.13.0</h2>
<p>ONNX v1.13.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit <a href="https://onnx.ai/">onnx.ai</a> to learn more about ONNX and associated projects.</p>
<h1>New operators</h1>
<ul>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Col2Im-18">Col2Im</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/3948">#3948</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwisenot-18">BitwiseNot</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4497">#4497</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwiseand-18">BitwiseAnd</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwiseor-18">BitwiseOr</a> and <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwisexor-18">BitwiseXor</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4496">#4496</a></li>
</ul>
<h1>Operator extensions</h1>
<ul>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#resize-18">Resize</a> - New attributes: <code>antialias</code>, <code>axes</code> and <code>keep_aspect_ratio_policy</code>, allow for both <code>scales</code> and <code>sizes</code> to be provided when one of them is an empty constant <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4126">#4126</a>, <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4388">#4388</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#pad-18">Pad</a> - New attribute <code>axes</code> <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4190">#4190</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#optionalhaselement-18">OptionalHasElement</a> - New input types handling <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4326">#4326</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#optionalhaselement-18">OptionalHasElement</a> and <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#optionalgetelement-18">OptionalGetElement</a> - Accept tensor and sequence types <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4421">#4421</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#scatterelements-18">ScatterElement</a> and <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#scatternd-18">ScatterND</a> - Add <code>max</code> and <code>min</code> as supported reduction attributes <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4411">#4411</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#split-18">Split</a> - Add support for uneven tensor splitting and a new <code>num_outputs</code> attribute <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4481">#4481</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#lppool-18">LpPool</a> - New attributes: <code>ceil_mode</code> and <code>dilations</code> <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4534">#4534</a></li>
</ul>
<h1>Function updates</h1>
<ul>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#centercroppad-18">CenterCropPad</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4190">#4190</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#mish-18">mish</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4350">#4350</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#groupnormalization-18">GroupNormalization</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4621">#4621</a></li>
</ul>
<h1>Reference Python runtime</h1>
<p>Reference Python runtime dependent on only Python and numpy has been added. <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4483">#4483</a></p>
<h1>Python 3.11 support</h1>
<p>ONNX 1.13.0 supports Python 3.11. <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4490">#4490</a></p>
<h1>Apple Silicon support</h1>
<p>Support for M1/M2 ARM processors has been added. <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4642">#4642</a></p>
<h1>More</h1>
<p>ONNX 1.13.0 also comes with numerous:</p>
<ul>
<li>bugfixes</li>
<li>infrastructure improvements</li>
<li>CI improvements</li>
<li>documentation updates</li>
<li>security updates</li>
</ul>
<p>For full details see <a href="https://github.com/onnx/onnx/wiki/Logistics-for-ONNX-Release-1.13.0">Logistics for ONNX Release 1.13.0</a>.</p>
<h1>Deprecation notice</h1>
<ul>
<li><code>TENSOR_TYPE_TO_STORAGE_TENSOR_TYPE</code> has been deprecated <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4270">#4270</a></li>
<li>ONNXIFI: ONNX Interface for Framework Integration has been deprecated <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4431">#4431</a></li>
</ul>
<h1>Installation</h1>
<p>You can upgrade to the latest release using <code>pip install onnx --upgrade</code> or build from source following the README <a href="https://github.com/onnx/onnx/tree/rel-1.13.0#build-onnx-from-source">instructions</a>.</p>
<h1>Contributors</h1>
<p>Thanks to these individuals for their contributions in this release since last 1.12.0 release: <a href="https://github.com/AnandKri"><code>@AnandKri</code></a>, <a href="https://github.com/cbourjau"><code>@cbourjau</code></a>, <a href="https://github.com/jcwchen"><code>@jcwchen</code></a>, <a href="https://github.com/gramalingam"><code>@gramalingam</code></a>, <a href="https://github.com/garymm"><code>@garymm</code></a>, <a href="https://github.com/GaetanLepage"><code>@GaetanLepage</code></a>, <a href="https://github.com/ilya-lavrenov"><code>@ilya-lavrenov</code></a>, <a href="https://github.com/jnovikov"><code>@jnovikov</code></a>, <a href="https://github.com/JackBoosY"><code>@JackBoosY</code></a>, <a href="https://github.com/jbachurski"><code>@jbachurski</code></a>, <a href="https://github.com/tjich"><code>@tjich</code></a>, <a href="https://github.com/jantonguirao"><code>@jantonguirao</code></a>, <a href="https://github.com/justinchuby"><code>@justinchuby</code></a>, <a href="https://github.com/natke"><code>@natke</code></a>, <a href="https://github.com/philass"><code>@philass</code></a>, <a href="https://github.com/prasanthpul"><code>@prasanthpul</code></a>, <a href="https://github.com/p-wysocki"><code>@p-wysocki</code></a>, <a href="https://github.com/SpaceIm"><code>@SpaceIm</code></a>, <a href="https://github.com/stephenneuendorffer"><code>@stephenneuendorffer</code></a>,<a href="https://github.com/take-cheeze"><code>@take-cheeze</code></a>, <a href="https://github.com/sechkova"><code>@sechkova</code></a>, <a href="https://github.com/thiagocrepaldi"><code>@thiagocrepaldi</code></a>, <a href="https://github.com/xadupre"><code>@xadupre</code></a>, <a href="https://github.com/mszhanyi"><code>@mszhanyi</code></a>, <a href="https://github.com/yuanyao-nv"><code>@yuanyao-nv</code></a>, <a href="https://github.com/andife"><code>@andife</code></a>, <a href="https://github.com/daquexian"><code>@daquexian</code></a>, <a href="https://github.com/kylesayrs"><code>@kylesayrs</code></a>, <a href="https://github.com/liqunfu"><code>@liqunfu</code></a>, <a href="https://github.com/longlee0622"><code>@longlee0622</code></a>, <a href="https://github.com/HSQ79815"><code>@HSQ79815</code></a>, <a href="https://github.com/williamberman"><code>@williamberman</code></a>, <a href="https://github.com/YanBC"><code>@YanBC</code></a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md">onnx's changelog</a>.</em></p>
<blockquote>
<!-- raw HTML omitted -->
<h2>Operator Changelog</h2>
<p><em>This file is automatically generated from the
<a href="https://github.com/onnx/onnx/blob/main/docs/onnx/defs">def files</a> via <a href="https://github.com/onnx/onnx/blob/main/docs/onnx/defs/gen_doc.py">this script</a>.
Do not modify directly and instead edit operator definitions.</em></p>
<p>For an operator input/output's differentiability, it can be differentiable,
non-differentiable, or undefined. If a variable's differentiability
is not specified, that variable has undefined differentiability.</p>
<h1>ai.onnx (default)</h1>
<h2>Version 1 of the default ONNX operator set</h2>
<h3><!-- raw HTML omitted --><!-- raw HTML omitted --><strong>Abs-1</strong><!-- raw HTML omitted --></h3>
<p>Absolute takes one input data (Tensor<!-- raw HTML omitted -->) and produces one output data
(Tensor<!-- raw HTML omitted -->) where the absolute is, y = abs(x), is applied to
the tensor elementwise.</p>
<h4>Version</h4>
<p>This version of the operator has been available since version 1 of the default ONNX operator set.</p>
<h4>Attributes</h4>
<!-- raw HTML omitted -->
<h4>Inputs</h4>
<!-- raw HTML omitted -->
<h4>Outputs</h4>
<!-- raw HTML omitted -->
<h4>Type Constraints</h4>
<!-- raw HTML omitted -->
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/onnx/onnx/commit/1ba785612a79fe749aa1e478336e534743372639"><code>1ba7856</code></a> Mark final RC (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4696">#4696</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/7adb4214b7f0e8486cf18e97e6951c69038c3375"><code>7adb421</code></a> misc fixes for issues found in ort integration (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4681">#4681</a>) (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4695">#4695</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/a9130150957d9ed3b5957c1e8b24e3cccea6fdcf"><code>a913015</code></a> Mark release as rc1 (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4674">#4674</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/3fd41d249bb8006935aa0031a332dd945e61b7e5"><code>3fd41d2</code></a> Bump version (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4666">#4666</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/bad0697bb9feafe656ecad9ff794426708b527aa"><code>bad0697</code></a> Add LpPool-18 - add <code>ceil_mode</code> and <code>dilations</code> attributes (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4534">#4534</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/7a1fae4dcb0d3cf035b8258723b4495133180391"><code>7a1fae4</code></a> make primary ops function step 2 (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4512">#4512</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/8fb26ede15adcda983ed126b2d6dfba52af4e748"><code>8fb26ed</code></a> Fixed some typos in python.rst (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4668">#4668</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/9955f35a306ed2ea28650b4f42a5a4056cc2d82c"><code>9955f35</code></a> Fix typo in python.rst (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4667">#4667</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/cd6e5db337e5cd008128e10eb2edc68db47d6413"><code>cd6e5db</code></a> Consider GraphInferenceContext in inference functions: InferenceContext (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4632">#4632</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/466edb7992da7d4eac62e972e6082587c1410a78"><code>466edb7</code></a> Add Python 3.11 support (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4490">#4490</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/onnx/onnx/compare/v1.11.0...v1.13.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-27-2023 01:07:09 | 01-27-2023 01:07:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,330 | closed | Add XLM-V | ### Model description
[XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472)
Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
Should work as [XLM-RoBERTa](https://twitter.com/LiangDavis/status/1618738467315531777?s=20&t=nObyGbBEqmBZr9rmTEAeVg)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 01-27-2023 00:01:00 | 01-27-2023 00:01:00 | Can I work on this issue? And can you point me to where should I learn more about this?<|||||>Some more info:
Weights can be found here, according to [this tweet](https://twitter.com/LiangDavis/status/1618738467315531777):
https://dl.fbaipublicfiles.com/fairseq/xlmv/xlmv.base.tar.gz<|||||>Hi guys,
I adapted the RoBERTa conversion script and the model conversion was successful:
https://gist.github.com/stefan-it/def0e13c872e992aa54dff2768ec5da4
It outputs:
```
torch.Size([1, 11, 901629]) torch.Size([1, 11, 901629])
max_absolute_diff = 7.62939453125e-06
Do both models output the same tensors? 🔥
Saving model to /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working
Configuration saved in /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working/config.json
Model weights saved in /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working/pytorch_model.bin
```<|||||>@jalajk24 , sorry, I've overlooked your comment.
Here's an explanation what I did so far:
* Finding the official checkpoint (which is a bit hard without Twitter, because XLM-V is not yet mentioned in the official `fairseq` repo...)
* Try to convert the checkpoint with the existing code base
* I used the original RoBERTa [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py) and adjusted some outdated config parameters (e.g. `roberta.args` is replaced by `roberta.cfg` in newer `fairseq` versions)
* Fixing other changed variables, e.g. `roberta_sent_encoder.layernorm_embedding` must be used instead of the old `roberta_sent_encoder.emb_layer_norm`
* Then the conversion runs: when both models (the original `fairseq` model and the converted model in Transformers) output the same tensor for a given input sequence -> the model conversion was successful (see the sketch below).
* If that would not be the case (e.g. we had this when converting XLM-R-XL and XLM-R-XXL models, see [here](https://github.com/huggingface/transformers/pull/13727)) we need to adjust the model architecture (XLM-R-XL used some pre-layer-norm stuff).
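A condensed sketch of that equivalence check (paths are placeholders; the full version lives in the adapted conversion script linked above):
```python
# Condensed sketch of the equivalence check (placeholder paths, not the full conversion script)
import torch
from fairseq.models.roberta import RobertaModel as FairseqRobertaModel
from transformers import XLMRobertaForMaskedLM

fairseq_model = FairseqRobertaModel.from_pretrained("/path/to/xlmv.base", checkpoint_file="model.pt")
fairseq_model.eval()

hf_model = XLMRobertaForMaskedLM.from_pretrained("/path/to/exported-xlmv")
hf_model.eval()

input_ids = fairseq_model.encode("Hello world!").unsqueeze(0)  # shape (1, seq_len)

with torch.no_grad():
    their_output = fairseq_model.model(input_ids)[0]
    our_output = hf_model(input_ids)[0]

max_absolute_diff = torch.max(torch.abs(our_output - their_output)).item()
print(f"max_absolute_diff = {max_absolute_diff}")
print("Do both models output the same tensors? 🔥" if torch.allclose(our_output, their_output, atol=1e-3) else "Outputs diverge!")
```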
The next steps would be on the tokenizer part:
* Load the original checkpoint with `fairseq` and tokenize some input sentence
* Use the `XLM-R` tokenizer with the new XLM-V sentencepiece vocab and tokenize the same input sentence
* Check if both tokenizers output the same tokenized sequence<|||||>Cool @stefan-it! So, maye we can create a model card and push the model (and tokenizer) to the hub (under the META AI org). WDYT?<|||||>@mrm8488 Sounds good! I will perform some tokenizer experiments and then I can upload the model -> maybe @patrickvonplaten can invite me to the [Meta AI](https://huggingface.co/facebook) organization on the model hub (for a short time period), when the model is ready to be... tested on downstream tasks :hugs: <|||||>Hey @stefan-it,
For sure! Invited you :-) <|||||>Thanks @patrickvonplaten !
I wrote a script that compares XLM-V tokenizer and HF tokenizer (which is basically a `XLMRobertaTokenizer` using the provided `sentencepiece.bpe.model` model):
https://gist.github.com/stefan-it/14295d37880bfb6329fe1db9d3e6a14c
It uses the WikiANN NER dataset that contains 176 languages, tokenizes each training sentence and compares the output of the original XLM-V tokenizer and the HF one. Some differences can be seen in the GIST mentioned above, e.g.:
```txt
Mismatch for ar sentence:
أبى أيوب الأنصارى .
XLM-V ids: [0, 6, 482745, 6, 529250, 478338, 382485, 6, 5, 2]
HF ids: [0, 6, 482745, 6, 529250, 478338, 382485, 6, 5, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for az sentence:
O , nəinki Çexiyada , eləcə də bütün dünyada antifaşist ədəbiyyatının ən görkəmli nümayəndələrindən biridir .
XLM-V ids: [0, 122, 6, 4, 78808, 2376, 4377, 25427, 6, 4, 17739, 523, 1174, 14374, 214304, 162, 4193, 3386, 1358, 1105, 1221, 89755, 345, 1825, 63822, 19671, 8914, 280, 214304, 499, 162, 381, 6, 5, 2]
HF ids: [0, 122, 6, 4, 78808, 2376, 4377, 25427, 6, 4, 17739, 523, 1174, 14374, 162, 214304, 4193, 3386, 1358, 1105, 1221, 89755, 345, 1825, 63822, 19671, 8914, 280, 214304, 499, 162, 381, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for az sentence:
Filmin bəstəkarı Roberto Rossellininin qardaşı Renzo Rossellinidir .
XLM-V ids: [0, 70066, 93154, 309, 77404, 862785, 1639, 43, 49187, 872558, 862785, 43, 14803, 6, 5, 2]
HF ids: [0, 70066, 93154, 309, 77404, 862785, 43, 1639, 49187, 872558, 862785, 43, 14803, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for be sentence:
некаторыя аленяводы з верхняй Калымы ўжо качавалі на чукоцкіх землях .
XLM-V ids: [0, 212747, 187222, 187276, 231515, 186902, 245172, 186910, 191873, 187211, 186906, 190574, 202645, 197768, 186882, 190562, 187180, 217232, 212793, 6, 5, 2]
HF ids: [0, 212747, 187222, 187276, 231515, 186902, 245172, 186910, 191873, 187211, 186906, 190574, 217400, 192302, 186882, 190562, 187180, 217232, 212793, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for bn sentence:
আব্রাআম দ্য মোয়াভ্র্
XLM-V ids: [0, 450078, 447452, 391401, 383767, 442939, 388008, 392002, 500283, 388127, 2]
HF ids: [0, 450078, 447452, 391401, 383767, 442939, 388008, 392002, 500283, 388127, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for ckb sentence:
شەڕی ناوخۆییی لیبیا ( ٢٠١١ )
XLM-V ids: [0, 448384, 3, 382407, 424947, 383163, 395213, 390588, 382407, 481417, 18, 430460, 396007, 1057, 2]
HF ids: [0, 448384, 3, 382407, 424947, 383163, 395213, 382407, 390588, 481417, 18, 430460, 396007, 1057, 2]
------------------------------------------------------------------------------------------
Mismatch for el sentence:
το λιμάνι του Μαρσασλόκκκ ήταν Φοινικική αποικία .
XLM-V ids: [0, 51, 33074, 54, 20175, 4103, 2207, 21516, 180155, 2263, 702, 1764, 179092, 1457, 127312, 1100, 6, 5, 2]
HF ids: [0, 51, 33074, 54, 20175, 4103, 2207, 21516, 2263, 180155, 702, 1764, 179092, 1457, 127312, 1100, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for eu sentence:
Þjóðólfur úr Hvini
XLM-V ids: [0, 576603, 584875, 704, 7755, 272, 110340, 2]
HF ids: [0, 576603, 584875, 704, 7755, 272, 110340, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for fi sentence:
ohjaus British Wind Energy Association
XLM-V ids: [0, 18196, 82236, 60938, 48570, 71969, 2]
HF ids: [0, 18196, 82236, 60938, 48570, 71969, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for fr sentence:
***************************** '' Charles de Bourbon-Siciles ''
XLM-V ids: [0, 541, 519880, 736484, 519880, 3426, 17736, 59, 648141, 13, 238, 676633, 11, 3426, 2]
HF ids: [0, 541, 736484, 519880, 519880, 3426, 17736, 59, 648141, 13, 238, 676633, 11, 3426, 2]
------------------------------------------------------------------------------------------
Mismatch for hr sentence:
*KKK Varteks ( Varaždin )
XLM-V ids: [0, 541, 13108, 379, 2056, 11962, 18, 794202, 1057, 2]
HF ids: [0, 541, 379, 13108, 2056, 11962, 18, 794202, 1057, 2]
------------------------------------------------------------------------------------------
Mismatch for ja sentence:
漳 州 訛 り 、 ' ' ' 泉 ' ' ' は 泉 州 訛 り を 表 す ) ] ]
XLM-V ids: [0, 6, 381875, 6, 284214, 6, 371882, 6, 283722, 6, 283381, 536, 536, 536, 6, 287298, 536, 536, 536, 6, 283385, 6, 287298, 6, 284214, 6, 371882, 6, 283722, 6, 283391, 6, 284061, 6, 284248, 1057, 6305, 6305, 2]
HF ids: [0, 6, 381875, 6, 284214, 6, 371882, 6, 283722, 6, 283381, 536, 536, 536, 6, 287298, 536, 536, 536, 6, 283385, 6, 287298, 6, 284214, 6, 371882, 6, 283722, 6, 283391, 6, 284061, 6, 284248, 1057, 6305, 6305, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for km sentence:
' '' ក្រមង៉ុយ '' 'គឺជាកវីម្នាក់ដែលមិនសរសេរនូវកំណាព្យកាព្យឃ្លោងដែលលោកច្រៀងនោះ ឡើយ ។ ស្នាដៃរបស់លោកដែលគង់វង្សមកដល់សព្វថ្ងៃនេះកើតមានឡើងដោយការអញ្ជើញ ភ្នំពេញ ហើយធ្វើការកត់ត្រាទុក ។
XLM-V ids: [0, 536, 3426, 6, 436488, 414054, 470537, 406071, 3426, 536, 417648, 388584, 417615, 398401, 383964, 386188, 484094, 413545, 430365, 392709, 443000, 401931, 443000, 513438, 424986, 383964, 383825, 6, 470313, 392431, 445340, 383824, 6, 527700, 384224, 383825, 383964, 6, 486458, 486640, 6, 454853, 6, 504066, 459752, 423127, 386428, 410408, 385471, 383363, 510944, 394566, 386849, 388469, 383363, 384712, 398013, 438262, 423820, 383824, 2]
HF ids: [0, 536, 3426, 6, 436488, 414054, 470537, 406071, 3426, 536, 417648, 388584, 417615, 398401, 383964, 386188, 484094, 413545, 430365, 392709, 443000, 401931, 443000, 513438, 424986, 383964, 383825, 6, 470313, 392431, 445340, 383824, 6, 527700, 384224, 383825, 383964, 6, 486458, 486640, 6, 454853, 6, 504066, 459752, 423127, 386428, 410408, 385471, 383363, 510944, 394566, 386849, 388469, 383363, 384712, 398013, 438262, 423820, 383824, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for ko sentence:
북쪽으로는 사바 구 , 서쪽으로는 소피아 구 , 남서쪽으로는 알라오트라망고로 구 , 남쪽으로는 아치나나나 구와 접한다 .
XLM-V ids: [0, 460610, 402460, 383267, 384648, 384084, 6, 4, 464357, 402460, 383973, 408125, 384084, 6, 4, 384737, 497040, 402460, 384068, 382873, 383469, 420080, 387243, 382503, 382498, 384084, 6, 4, 445962, 402460, 383309, 383375, 459065, 382738, 384084, 382541, 390528, 383229, 6, 5, 2]
HF ids: [0, 460610, 402460, 383267, 384648, 384084, 6, 4, 464357, 402460, 383973, 408125, 384084, 6, 4, 384737, 497040, 402460, 384068, 382873, 383469, 420080, 387243, 382503, 382498, 384084, 6, 4, 445962, 402460, 383309, 383375, 382738, 459065, 384084, 382541, 390528, 383229, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for lv sentence:
Eiropas autoceļš E77
XLM-V ids: [0, 3477, 121549, 619, 181, 6697, 2]
HF ids: [0, 3477, 121549, 619, 181, 6697, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for mk sentence:
Поретко , на пример во делови од Пиринска Македонија и Егејска Македонија некои од горните женски облеки – ‘’’саите’’’ се кроеле од домашно ткаено платно во сина боја .
XLM-V ids: [0, 186970, 192733, 187180, 6, 4, 186882, 188182, 186930, 201221, 186939, 221926, 187217, 187685, 186883, 248608, 211453, 187685, 193651, 186939, 240530, 198728, 186987, 187184, 186991, 39, 14464, 42, 187373, 186961, 11099, 42, 186894, 203637, 197766, 186939, 210461, 6, 189541, 188031, 212555, 186930, 194795, 199817, 6, 5, 2]
HF ids: [0, 186970, 192733, 187180, 6, 4, 186882, 188182, 186930, 201221, 186939, 221926, 187217, 187685, 186883, 248608, 211453, 187685, 193651, 186939, 240530, 198728, 186987, 187184, 186991, 39, 14464, 42, 187373, 186961, 42, 11099, 186894, 203637, 197766, 186939, 210461, 6, 189541, 188031, 212555, 186930, 194795, 199817, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for ml sentence:
അനു എലിസബത്ത് ജോസ്
XLM-V ids: [0, 397569, 385011, 528343, 388795, 385776, 481383, 2]
HF ids: [0, 397569, 385011, 528343, 388795, 385776, 481383, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for ms sentence:
███ Sidang Kemuncak Asia Timur
XLM-V ids: [0, 6, 369908, 377468, 593458, 3944, 664695, 8451, 551742, 2]
HF ids: [0, 6, 377468, 369908, 593458, 3944, 664695, 8451, 551742, 2]
------------------------------------------------------------------------------------------
Mismatch for no sentence:
De siste tre semestre var han i Grenoble i Frankrike , der mye av fritiden ble tilbrakt i Les2alpes og LaGrave .
XLM-V ids: [0, 447, 550187, 17752, 611647, 246, 25684, 28, 657552, 28, 557692, 6, 4, 2860, 549299, 15446, 617530, 117029, 664714, 28, 17112, 430, 460, 10083, 6995, 1079, 29815, 383, 6, 5, 2]
HF ids: [0, 447, 550187, 17752, 611647, 246, 25684, 28, 657552, 28, 557692, 6, 4, 2860, 549299, 15446, 617530, 117029, 664714, 28, 17112, 430, 460, 10083, 6995, 1079, 597, 573563, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for or sentence:
ଲେଉଟାଣି ଜୋହାନ୍ ଅଗଷ୍ଟସ ଆର୍ଫୱେଡ଼ସନ୍
XLM-V ids: [0, 6, 387665, 391689, 393963, 403921, 393333, 392380, 395060, 388377, 522433, 387310, 6, 476299, 398439, 432754, 392919, 424507, 2]
HF ids: [0, 6, 387665, 391689, 393963, 403921, 393333, 392380, 395060, 388377, 522433, 387310, 6, 476299, 398439, 432754, 392919, 424507, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for sh sentence:
Kefej ( kralj Tegeje )
XLM-V ids: [0, 3944, 12705, 18, 793761, 96767, 382, 1057, 2]
HF ids: [0, 3944, 12705, 18, 793761, 96767, 382, 1057, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for sl sentence:
__________10__________ Eugenio Siena Alfa Romeo
XLM-V ids: [0, 272238, 1741, 666448, 12002, 848378, 836660, 26591, 72466, 2]
HF ids: [0, 272238, 1741, 12002, 666448, 848378, 836660, 26591, 72466, 2]
------------------------------------------------------------------------------------------
Mismatch for sr sentence:
Прерасподела доходка , Економски факултет Београд USJF - Preraspodela dohotka.ppt
XLM-V ids: [0, 188107, 189047, 187172, 192298, 190169, 186948, 6, 4, 228329, 186887, 192995, 190449, 15373, 662660, 20, 1182, 120, 793095, 567795, 656994, 90130, 5, 457258, 2]
HF ids: [0, 188107, 189047, 187172, 192298, 190169, 186948, 6, 4, 228329, 186887, 192995, 190449, 15373, 662660, 20, 1182, 120, 793095, 567795, 656994, 90130, 5, 457258, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for te sentence:
దారిమార్పు ఇండియన్ ఇన్స్టిట్యూట్ ఆఫ్ టెక్నాలజీ మద్రాస్
XLM-V ids: [0, 436137, 464065, 387183, 460474, 400919, 520935, 493353, 384438, 397587, 466836, 385426, 480198, 383019, 2]
HF ids: [0, 436137, 464065, 387183, 460474, 400919, 520935, 493353, 384438, 397587, 466836, 385426, 480198, 383019, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for ur sentence:
جاوید شیخ - جاوید
XLM-V ids: [0, 408290, 389645, 20, 408290, 2]
HF ids: [0, 408290, 389645, 20, 408290, 6, 2]
------------------------------------------------------------------------------------------
Mismatch for uz sentence:
Dastlab Oltin Oʻrdattt asosiy siyosiy markazi hisoblangan .
XLM-V ids: [0, 61568, 14, 3181, 586435, 43, 122, 1476, 47569, 211172, 14, 15966, 43523, 22564, 42030, 7050, 6, 5, 2]
HF ids: [0, 61568, 14, 3181, 586435, 43, 122, 1476, 47569, 14, 211172, 15966, 43523, 22564, 42030, 7050, 6, 5, 2]
------------------------------------------------------------------------------------------
Mismatch for zh-yue sentence:
R E D I R E C T # 巴 菲 特
XLM-V ids: [0, 266, 181, 205, 168, 266, 181, 232, 157, 524, 335519, 6, 286994, 6, 283738, 2]
HF ids: [0, 266, 181, 205, 168, 266, 181, 232, 157, 524, 335519, 6, 286994, 6, 283738, 6, 2]
------------------------------------------------------------------------------------------
```
<|||||>Can we tolerate these mismatches :thinking: <|||||>Model is up now on the model hub:
https://huggingface.co/stefan-it/xlm-v-base
-> I would like to conduct some experiments on downstream tasks (mainly NER) to measure performance.
Maybe e.g. @mrm8488 also wants to fine-tune models so that we can try to reproduce some of the paper results :)
After some experiments I can transfer the model to the Meta AI organization. The MLM performance is really good, so the model *should* work:
```python
In [3]: unmasker("Paris is the <mask> of France.")
Out[3]:
[{'score': 0.9286897778511047,
'token': 133852,
'token_str': 'capital',
'sequence': 'Paris is the capital of France.'},
{'score': 0.018073994666337967,
'token': 46562,
'token_str': 'Capital',
'sequence': 'Paris is the Capital of France.'},
{'score': 0.013238662853837013,
'token': 8696,
'token_str': 'centre',
'sequence': 'Paris is the centre of France.'},
{'score': 0.010450296103954315,
'token': 550136,
'token_str': 'heart',
'sequence': 'Paris is the heart of France.'},
{'score': 0.005028395913541317,
'token': 60041,
'token_str': 'center',
'sequence': 'Paris is the center of France.'}]
```
<|||||>Thank you so much @stefan-it. Ofc, I will try to reproduce some of the reported results.<|||||>I've replicated the MasakhaNER v1 results from the paper:
I fine-tuned 5 models (with different seeds) on the English WikiANN (Rahimi split) and evaluated them on MasakhaNER v1. Note: `DATE` entities do not exist in WikiANN, so they were replaced with `O` for zero-shot evaluation. I averaged the F1-Score over the 5 models to get the final score. Models were fine-tuned with a sequence length of 512 (the paper uses 128; I noticed this only after the fine-tuning experiments), but the other hyper-parameters are the same as in the XLM-V paper: batch size 32, learning rate 2e-05 and 10 epochs.
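For readers who want to replicate this setup, a minimal sketch of the corresponding `TrainingArguments` (the output directory and any remaining arguments are assumptions on my side, not the actual script):
```python
from transformers import TrainingArguments

# Hedged sketch of the fine-tuning hyper-parameters listed above.
training_args = TrainingArguments(
    output_dir="xlm-v-base-wikiann-en",  # hypothetical output directory
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    num_train_epochs=10,
    seed=1,  # five runs with different seeds were averaged
)
```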
Putting it all together (see Table 11 in XLM-V paper):
| Model | amh | hau | ibo | kin | lug | luo | pcm | swa | wol | yor | Avg.
| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----- | ---- | ---- | ----
| XLM-R (Paper) | 25.1 | 43.5 | 11.6 | 9.4 | 9.5 | 8.4 | 36.8 | 48.9 | 5.3 | 10.0 | 20.9
| XLM-R (Reproduced) | 27.1 | 42.4 | 14.2 | 12.4 | 14.3 | 10.0 | 40.6 | 50.2 | 6.3 | 11.5 | 22.9
| XLM-V (Paper) | 20.6 | 35.9 | 45.9 | 25.0 | 48.7 | 10.4 | 38.2 | 44.0 | 16.7 | 35.8 | 32.1
| XLM-V (Reproduced) | 25.3 | 45.7 | 55.6 | 33.2 | 56.1 | 16.5 | 40.7 | 50.8 | 26.3 | 47.2 | 39.7
Performance diff for WikiANN between XLM-R and XLM-V in the paper is 11.2%. Reproduced experiments gave a performance diff of 16.8%.
So I think these experiments show that the model is working and achieves great results on MasakhaNER v1!
I will set up a repository for all these results and conduct more experiments on WikiANN (the second NER downstream task mentioned in the paper).
@patrickvonplaten Do you think the model is then ready to be moved to the Meta AI org? I've also written an initial model card.<|||||>Here's the comparison on WikiANN zero-shot (see Table 10 in the XLM-V paper):
| Model | ro | gu | pa | lt | az | uk | pl | qu | hu | fi | et | tr | kk | zh | my | yo | sw
| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----
| XLM-R (Paper) | 73.5 | 62.9 | 53.6 | 72.7 | 61.0 | 72.4 | 77.5 | 60.4 | 75.8 | 74.4 | 71.2 | 75.4 | 42.2 | 25.3 | 48.9 | 33.6 | 66.3
| XLM-R (Reproduced) | 73.8 | 65.5 | 50.6 | 74.3 | 64.0 | 76.5 | 78.4 | 60.8 | 77.7 | 75.9 | 73.0 | 76.4 | 45.2 | 29.8 | 52.3 | 37.6 | 67.0
| XLM-V (Paper) | 73.8 | 66.4 | 48.7 | 75.6 | 66.7 | 65.7 | 79.5 | 70.0 | 79.5 | 78.7 | 75.0 | 77.3 | 50.4 | 30.2 | 61.5 | 54.2 | 72.4
| XLM-V (Reproduced) | 77.2 | 65.4 | 53.6 | 74.9 | 66.0 | 69.4 | 79.8 | 66.9 | 79.0 | 77.9 | 76.2 | 76.8 | 48.5 | 28.1 | 58.4 | 62.6 | 71.6
| Model | th | ko | ka | ja | ru | bg | es | pt | it | fr | fa | ur | mr | hi | bn | el | de
| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----
| XLM-R (Paper) | 5.2 | 49.4 | 65.4 | 21.0 | 63.1 | 76.1 | 70.2 | 77.0 | 76.9 | 76.5 | 44.6 | 51.4 | 61.5 | 67.2 | 69.0 | 73.8 | 74.4
| XLM-R (Reproduced) | 4.7 | 49.4 | 67.5 | 21.9 | 65.2 | 77.5 | 76.7 | 79.0 | 77.7 | 77.9 | 49.0 | 55.1 | 61.3 | 67.8 | 69.6 | 74.1 | 75.4
| XLM-V (Paper) | 3.3 | 53.0 | 69.5 | 22.4 | 68.1 | 79.8 | 74.5 | 80.5 | 78.7 | 77.6 | 50.6 | 48.9 | 59.8 | 67.3 | 72.6 | 76.7 | 76.8
| XLM-V (Reproduced) | 2.6 | 51.6 | 71.2 | 20.6 | 67.8 | 79.4 | 76.2 | 79.9 | 79.5 | 77.5 | 51.7 | 51.5 | 61.9 | 69.2 | 73.2 | 75.9 | 77.1
| Model | en | nl | af | te | ta | ml | eu | tl | ms | jv | id | vi | he | ar | Avg.
| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----
| XLM-R (Paper) | 83.0 | 80.0 | 75.8 | 49.2 | 56.3 | 61.9 | 57.2 | 69.8 | 68.3 | 59.4 | 48.6 | 67.7 | 53.2 | 43.8 | 61.3
| XLM-R (Reproduced) | 83.4 | 80.8 | 75.8 | 49.3 | 56.8 | 62.2 | 59.1 | 72.2 | 62.3 | 58.3 | 50.0 | 67.9 | 52.6 | 47.8 | 62.6
| XLM-V (Paper) | 83.4 | 81.4 | 78.3 | 51.8 | 54.9 | 63.1 | 67.1 | 75.6 | 70.0 | 67.5 | 52.6 | 67.1 | 60.1 | 45.8 | 64.7
| XLM-V (Reproduced) | 84.1 | 81.3 | 78.9 | 50.9 | 55.9 | 63.0 | 65.7 | 75.9 | 70.8 | 64.8 | 53.9 | 69.6 | 61.1 | 47.2 | 65.0
Diff. between XLM-V and XLM-R in the paper: (64.7 - 61.3) = 3.4%.
Diff. between reproduced XLM-V and XLM-R: (65.0 - 62.6) = 2.4%.
Same conclusion: the converted/integrated XLM-V works great :hugs: <|||||>Great job @stefan-it !!! 🔥<|||||>Thanks @mrm8488 !
Btw, the repo is up here: https://github.com/stefan-it/xlm-v-experiments :)<|||||>Thanks a lot for your contribution @stefan-it 🙏
Just transferred the checkpoint to the appropriate organization: https://huggingface.co/facebook/xlm-v-base
However, I feel like it could be beneficial to have a separate model_doc for XLM-V (similar to how we did this for T5v1.1 etc.).
Do you mind opening a PR for that?<|||||>Thanks! Closing this issue as the model is now available: https://huggingface.co/docs/transformers/main/en/model_doc/xlm-v.<|||||>Amazing work @stefan-it - thanks a lot! <|||||>Amazing @stefan-it . Should I add some ft metric @patrickvonplaten as done for other models? I fine-tuned it on XNLI: https://huggingface.co/mrm8488/xlm-v-base-finetuned-xglue-xnli |
transformers | 21,329 | closed | Add VQGAN-CLIP research project | # What does this PR do?
Implements VQGAN-CLIP using huggingface CLIP models
Related to #21064
This Research Project allows users to generate or edit images with a single line of code. It wraps the huggingface CLIPProcessor class, allowing images to be processed as torch tensors in order to preserve gradient flow through the transformations.
Features:
- Positive and negative prompts
- Multiple prompts
- Prompt Weights
- Creating GIF animations of the transformations
- Wandb logging
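To illustrate the gradient-flow point above, here is a minimal sketch of CLIP-style preprocessing done with differentiable torch ops (the function name is hypothetical and this is not the project's actual code, only the idea):
```python
import torch

def clip_preprocess_differentiable(image: torch.Tensor) -> torch.Tensor:
    # image: (batch, 3, H, W) in [0, 1]; gradients can flow back through these ops
    image = torch.nn.functional.interpolate(
        image, size=(224, 224), mode="bicubic", align_corners=False
    )
    mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=image.device)
    std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=image.device)
    return (image - mean[None, :, None, None]) / std[None, :, None, None]
```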
Tagging @amyeroberts for review :)
| 01-26-2023 20:06:17 | 01-26-2023 20:06:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi there,
Thanks for the feedback, I fixed the style issues and removed the face image.
Also refactored the code to use more accurate function names, used just the tokenizer, changed the assertions to exceptions, and removed some extraneous code (e.g. double crop, freeze_module).
Have a great day :)
Erwann<|||||>@ErwannMillon - thanks for the updates. It's looking good 😎 !
Just two last things before I think we're ready to merge:
* Removing the `face.jpg` file and instead pointing to a place where it can be downloaded
* Could you also remove the notebook? Like the image, we want to avoid adding large files as much as possible. Happy for you to link to e.g. a colab which shows this demo in the README. |
transformers | 21,328 | closed | Fix M2M100 positional embedding creation for ONNX | # What does this PR do?
This PR changes the reshape step when computing the sinusoidal positional embeddings in M2M100 to make it work with ONNX.
Shape inference is incorrect before:

You can see that ONNX sets the last axis of the shape of `last_hidden_state` to be dynamic, with some auto-generated name, while it should be static.
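As a side note, the static/dynamic axes can be checked directly on the exported graph; a small sketch (the file name is illustrative):
```python
import onnx

model = onnx.load("m2m100.onnx")  # path to the exported model is an assumption
for output in model.graph.output:
    # a non-empty dim_param means the axis was inferred as dynamic
    dims = [d.dim_param or d.dim_value for d in output.type.tensor_type.shape.dim]
    print(output.name, dims)
```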
After the fix:
 | 01-26-2023 17:19:33 | 01-26-2023 17:19:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,327 | closed | Remove more unused attributes in config classes | # What does this PR do?
Remove more unused attributes in config classes.
There are more changes needed than I expected previously. I have to adopt the new test (currently only in my branch) in a progressive way and make the changes to pass that test at the same time.
I tried to change more places in single PR to avoid too many PRs related to this topic. But there will still be a few PRs in the future 🙏 . | 01-26-2023 17:19:16 | 01-26-2023 17:19:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,326 | closed | Deepspeed with Trainer RecursionError: maximum recursion depth exceeded while calling a Python object | ### System Info
System Info:
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Stack trace:
```
File "myPythonScript.py", line 230, in train
trainer.train()
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1527, in train
return inner_training_loop(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1597, in _inner_training_loop
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/deepspeed.py", line 344, in deepspeed_init
deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/__init__.py", line 125, in initialize
return inner_training_loop(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1597, in _inner_training_loop
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/deepspeed.py", line 344, in deepspeed_init
engine = DeepSpeedEngine(args=args,
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 348, in wrapper
deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/__init__.py", line 125, in initialize
engine = DeepSpeedEngine(args=args,
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 348, in wrapper
if not hasattr(module, "_ds_child_entered"):
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if not hasattr(module, "_ds_child_entered"):
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
File "/home/ballin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2022, in __dir__
parameters = list(self._parameters.keys())
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
....
... multiple hundred lines of the same two function calls ....
....
File "/miniconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2028, in __dir__
parameters = list(self._parameters.keys())
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2028, in __dir__
parameters = list(self._parameters.keys())
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2026, in __dir__
module_attrs = dir(self.__class__)
RecursionError: maximum recursion depth exceeded while calling a Python object
```
The code used in `myPythonScript.py`:
```
import os

import deepspeed
import torch
import torch.distributed as dist
from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# `ds` used below is the tokenized train/test DatasetDict prepared elsewhere in the script
MODEL = "microsoft/bloom-deepspeed-inference-fp16"
TOKENIZER = AutoTokenizer.from_pretrained(MODEL)
training_args = TrainingArguments(
do_train=True,
do_eval=True,
fp16=True,
load_best_model_at_end=True,
evaluation_strategy="epoch",
save_strategy="epoch",
deepspeed="ds_config.json",
local_rank=os.environ.get("LOCAL_RANK")
)
config = AutoConfig.from_pretrained(MODEL)
with deepspeed.zero.Init(dtype=torch.float16):
model = AutoModelForSequenceClassification.from_config(config=config,torch_dtype=torch.float16)
model = model.eval()
dist.barrier()
trainer = Trainer(
model=model,
args=training_args,
train_dataset=ds["train"],
eval_dataset=ds["test"],
tokenizer=TOKENIZER
)
trainer.train()
```
I am using the deepspeed configuration file from https://huggingface.co/docs/transformers/main/en/main_classes/deepspeed#zero3-config and call deepspeed with ```deepspeed --num_gpus=4 --master_addr="myIP" --master_port=1234 --hostfile=job/hostfile myPythonScript.py``` on 2 nodes using 4x NVIDIA A100 with 80 GB each.
### Expected behavior
The training should run without an error. | 01-26-2023 16:57:17 | 01-26-2023 16:57:17 | In general this belongs to https://github.com/microsoft/DeepSpeed/issues as this is a deepspeed issue. I see you file it here https://github.com/microsoft/DeepSpeedExamples/issues/84#issuecomment-1405311822 but I think this is the wrong place.
I have run into this problem myself recently - it was triggered by `zero.Init` - and I had some nested `from_pretrained` calls and multiple `zero.Init` calls. Once I recoded to have only a single `zero.Init` call the problem went away.
So in your code sample you shouldn't do:
```
with deepspeed.zero.Init(dtype=torch.float16):
model = AutoModelForSequenceClassification.from_config(config=config,torch_dtype=torch.float16)
```
because `from_config` already uses `zero.Init` internally! so you end up with nested `zero.Init` and it breaks.
It should be just:
```
model = AutoModelForSequenceClassification.from_config(config=config,torch_dtype=torch.float16)
```<|||||>Thank you @stas00 , this resolved the issue! |
transformers | 21,325 | closed | Token batching | Hello,
Many frameworks support _token batching_, in which batches are constructed not so that they contain the same number of sequences, but rather so that they contain approximately the same number of tokens (so a batch could consist either of a large number of short sequences or a small number of long sequences). One motivation for this is so that memory use is roughly constant from batch to batch, which makes it easier to use a very large batch size without risking an out-of-memory error.
For example, this is the behavior when using `--max-tokens` instead of `--batch-size` in fairseq.
I found a previous issue (https://github.com/huggingface/transformers/issues/14767) where this was asked. At the time, someone claimed that the feature existed and posted a video. However, the examples presented in that video do **not** actually implement this feature. Subsequent comments pointed out that the issue remained unresolved, but they were ignored.
So my question is, does token batching already exist in transformers? If so, how can I make use of it?
Thank you for your help! i wasn't sure if I should have made this a feature request because it's not actually clear to me whether the feature has already been implemented or not. | 01-26-2023 16:45:42 | 01-26-2023 16:45:42 | Hi there, what you are asking for is not supported. Note that Transformers is primarily a library of models. You can adapt the data preprocessing part of any of our existing examples to suit your needs, but we won't support every feature out of the box as it's not the goal of the library.<|||||>Hello,
Thank you for your quick reply. I'll admit I'm a bit surprised that this is considered out of scope. It is a models library, yes, but the main ways people interact with models are through training (including finetuning) and inference. In either case, inputs need to be batched. This is a very mainstream technique for doing it, especially for self-attention-based models because of the popularity of very large batches (at least when training from scratch, I'm fairly new to finetuning so perhaps the situation is different).
Thank you again for your help.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Token batching is a necessary feature for some tasks like machine translation, as it is a recognized setting in the field. When you want to make sure that your experimental setup is consistent with other frameworks, you need this feature.
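For anyone landing here, a minimal sketch of what token batching can look like on top of a tokenized dataset (the class name and the `max_tokens` budget are illustrative, not an existing transformers API):
```python
from torch.utils.data import DataLoader

class TokenBatchSampler:
    """Yields lists of example indices whose padded size stays under `max_tokens`."""

    def __init__(self, lengths, max_tokens=4096):
        self.lengths = lengths  # tokenized length of each example
        self.max_tokens = max_tokens

    def __iter__(self):
        # Sort by length so each batch mixes similarly sized sequences,
        # which keeps the amount of padding (and memory use) roughly constant.
        order = sorted(range(len(self.lengths)), key=lambda i: self.lengths[i])
        batch, longest = [], 0
        for idx in order:
            longest = max(longest, self.lengths[idx])
            # padded batch size would be (num sequences) * (longest sequence)
            if batch and longest * (len(batch) + 1) > self.max_tokens:
                yield batch
                batch, longest = [], self.lengths[idx]
            batch.append(idx)
        if batch:
            yield batch

# usage sketch: DataLoader(tokenized_dataset, batch_sampler=TokenBatchSampler(lengths), collate_fn=my_pad_collate)
```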
transformers | 21,324 | closed | [Whisper] another patch | # What does this PR do?
This fixes some issues I ran into when benchmarking the model, plus the CI tests that were not passing.
We updated the configs online, which changed a lot of things.
Also TF's forced decoder logit processor was wrong | 01-26-2023 15:56:02 | 01-26-2023 15:56:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,323 | closed | Generate: better `compute_transition_scores` examples | # What does this PR do?
Adds further notes and details to the examples in `compute_transition_scores`, so it can be used out of the box with encoder-decoder models.
Inspired from the interaction in #21321 | 01-26-2023 15:41:19 | 01-26-2023 15:41:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,321 | closed | `compute_transition_scores` becomes erroneous when setting the minimal length of generation | ### System Info
transformers==4.27.0.dev0 (latest from master)
### Who can help?
@gante @pat
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import numpy as np
checkpoint = 'facebook/bart-large-cnn'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
text = "Liverpool football club was established in 1892. It is one of the largest and the most famous football clubs in the world with six Champions League titles, 19 premier league titles and others. In 2015 Jurgen Klopp became a Liverpool manager and since that time led the team to another Champions league and Premier League trophies."
inputs = tokenizer([text], return_tensors="pt")
# Example 1: Print the scores for each token generated with Greedy Search
outputs = model.generate(
**inputs,
min_new_tokens=10,
max_new_tokens=256,
return_dict_in_generate=True,
output_scores=True
)
transition_scores = model.compute_transition_scores(
outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=True
)
generated_tokens = outputs.sequences[:, 1:]
for tok, score in zip(generated_tokens[0], transition_scores[0]):
print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
print(outputs.sequences_scores[0].item(), transition_scores.mean().item())
```
### Expected behavior
Hi. A recently added `compute_transition_scores` method for text generation behaves erroneously when the minimal length of generated text is set. The last line in my code prints the average sequence log score obtained throughout generation, and it clearly differs from the one obtained via the function (0.02 vs 0.19). I observe the same behavior for other sequences/models whenever the `min_length` parameter is not at its default in the `model.generate` method.
P.s. Unsure what the argument `normalize_logits` in the function does, however, setting it to False does not solve the problem. | 01-26-2023 12:54:30 | 01-26-2023 12:54:30 | Hey @Aktsvigun 👋 Thank you for raising this issue, it is great to iron out usage difficulties 🤗
There is no bug, you forgot to account for `length_penalty` -- see the other example in `compute_transition_scores`'s docstring. I'm pasting the corrected snippet below.
Two further notes:
- `normalize_logits` normalizes the logits, such that `sum(exp(logits)) = 1` at each generated token. Our models do not perform this normalization by default, and it is very helpful to evaluate the generate output.
- To get `outputs.sequences_scores` back, we need to make sure we operate on the scores in the same conditions as in `.generate()` -- with `normalize_logits=False` and applying `length_penalty` as in the example below.
___________________________
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import numpy as np
checkpoint = 'facebook/bart-large-cnn'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
text = "Liverpool football club was established in 1892. It is one of the largest and the most famous football clubs in the world with six Champions League titles, 19 premier league titles and others. In 2015 Jurgen Klopp became a Liverpool manager and since that time led the team to another Champions league and Premier League trophies."
inputs = tokenizer([text], return_tensors="pt")
# Example 1: Print the scores for each token generated with Greedy Search
outputs = model.generate(
**inputs,
min_new_tokens=10,
max_new_tokens=256,
return_dict_in_generate=True,
output_scores=True
)
transition_scores = model.compute_transition_scores(
outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
)
generated_tokens = outputs.sequences[:, 1:]
for tok, score in zip(generated_tokens[0], transition_scores[0]):
print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
# output_length -> 1 from forced BOS token, np.sum(transition_scores.numpy() < 0, axis=1) from the other tokens
output_length = 1 + np.sum(transition_scores.numpy() < 0, axis=1)
length_penalty = model.generation_config.length_penalty
reconstructed_scores = transition_scores.sum(axis=1) / (output_length**length_penalty)
print(np.allclose(outputs.sequences_scores, reconstructed_scores))
```<|||||>I'm closing this issue as it seems to be solved, but feel free to reopen it if you have other related issues :)<|||||>Hi @gante!
Thanks a lot for the quick response!
Sure, my bad omitting the length penalty parameter.
Thanks again for the amazing function!
|
transformers | 21,320 | closed | [`Vision-Encoder- Decoder`] Add `vision-encoder-decoder` on `AutoProcessor` | # What does this PR do?
Similarly as https://github.com/huggingface/transformers/pull/21319 & #21299 a doctest was failing because the correct processor was not mapped in the `AutoProcessor` mapping for `vision-encoder-decoder`
This fixes also a doctest that was failing, link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
cc @ydshieh @sgugger | 01-26-2023 12:27:57 | 01-26-2023 12:27:57 | See https://github.com/huggingface/transformers/pull/21319#issuecomment-1404938821<|||||>Closing in favor of opening PRs on the Hub as described in the comment! <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21320). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,319 | closed | [`Speech-Encoder-Decoder`] Add `speech-encoder-decoder` to `AutoProcessor` | # What does this PR do?
Currently loading the correct processor for `speech-encoder-decoder` using `AutoProcessor` is broken on `main`.
The issue and the fix are essentially identical to https://github.com/huggingface/transformers/pull/21299, as a doctest was also failing. Link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
cc @ydshieh @sgugger
| 01-26-2023 12:17:54 | 01-26-2023 12:17:54 | Hi @younesbelkada ! Thank you for the PR(s). In my opinion, (generic) composite models like (text/vision/speech) encoder-decoder models are not meant to use `Auto` mappings, as their design is to compose any pair of models (whenever it works).
The fix should be updating the config file on the hub for these checkpoints, in this case by adding `processor_class`, for which I have opened Hub PRs.<|||||>Makes sense! Thanks for explaining! <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers | 21,318 | closed | [Doctest] Fix `Perceiver` doctest | # What does this PR do?
This PR fixes a failing doctest for `PerceiverModel`. Link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
With #21225 being merged, the snippet [here](https://github.com/huggingface/transformers/blob/31336dcf3f93dee19cd13c981f16982d612040d2/src/transformers/models/perceiver/modeling_perceiver.py#L796):
```python
...
model = PerceiverModel(config, input_preprocessor=preprocessor, decoder=decoder)
# you can then do a forward pass as follows:
tokenizer = PerceiverTokenizer()
```
has been modified by:
```python
...
model = PerceiverModel(config, input_preprocessor=preprocessor, decoder=decoder)
# you can then do a forward pass as follows:
tokenizer = AutoTokenizer()
```
As there is no canonical way to automatically load a tokenizer from something other than a path or model id, a default tokenizer should be loaded by instantiating the child class directly rather than via `AutoTokenizer`.
This PR reverts this change and fixes the doctest
cc @ydshieh 💯 | 01-26-2023 12:08:51 | 01-26-2023 12:08:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,317 | closed | trainer get_optimizer_cls_and_kwargs doesn't seem to use optim_args | Hello,
I was reading the code for the trainer and I noticed that the optimizer arguments passed to the trainer via `TrainingArguments` don't seem to be applied for any optimizer aside from AnyPrecisionAdamW. Is this intended?
I don't think this question really fits into any of the templates so I didn't use them.
@sgugger
https://github.com/huggingface/transformers/blob/4e41b87e3d13af0d1d7d3d27d101e60c33c92100/src/transformers/trainer.py#L1077
| 01-26-2023 11:13:02 | 01-26-2023 11:13:02 | Yes, this is intended as the regular adam kwargs have their own training argument.<|||||>Oh okay I see, is there anything speaking against using optim_args to pass kwargs to adafactor?<|||||>Adafactor in the library is deprecated and not maintained, you should rely on another implementation.<|||||>Thanks for the info but if that's the case, is there a deprecation warning somewhere for this? Because I don't recall seeing one.
In any case your info closes this issue, thank you.<|||||>We haven't officially deprecated it since we are waiting for someone to add support for another integration of it (like for AnyPrecisionAdam and the others).
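For readers who land on this issue, a short sketch of the distinction described above (assuming a transformers version where `optim_args` exists; the values are illustrative):
```python
from transformers import TrainingArguments

# Regular AdamW hyper-parameters have their own dedicated fields:
args = TrainingArguments(output_dir="out", adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-8)

# `optim_args` is a free-form string that is only parsed for optimizers that
# accept it, e.g. AnyPrecisionAdamW:
args = TrainingArguments(
    output_dir="out",
    optim="adamw_anyprecision",
    optim_args="use_kahan_summation=true, momentum_dtype=bfloat16, variance_dtype=bfloat16",
)
```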
transformers | 21,316 | closed | Small QoL for qa. | # What does this PR do?
Adding a small QoL improvement to avoid panic exceptions down at the tokenizer level.
Fixes https://github.com/huggingface/tokenizers/issues/944
--> | 01-26-2023 09:22:13 | 01-26-2023 09:22:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,315 | closed | check paths in `utils/documentation_tests.txt` | # What does this PR do?
This PR adds a new check to ensure the paths in `utils/documentation_tests.txt` are all valid, so doctest CI won't fail from the beginning.
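For illustration, the check boils down to something like this (a sketch, not the exact implementation added in this PR):
```python
import os

with open("utils/documentation_tests.txt") as f:
    paths = [line.strip() for line in f if line.strip()]

missing = [path for path in paths if not os.path.isfile(path)]
if missing:
    raise ValueError(f"Non-existent paths in utils/documentation_tests.txt: {missing}")
```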
related PR: #21314 - ~~We need to wait for that PR to be merged before merging this one.~~ It's merged.
The effect of this PR
<img width="800" alt="Screenshot 2023-01-26 111139" src="https://user-images.githubusercontent.com/2521628/214810581-4367c7d8-94b7-4534-94f6-b51d3582a5cb.png">
| 01-26-2023 09:19:14 | 01-26-2023 09:19:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,314 | closed | Fix 2 paths in the doctest list | # What does this PR do?
doctest has 0 failures
```
🌞 There were no failures: all 0 tests passed. The suite ran in 0h0m0s.
```
which is caused by
```
ERROR: file or directory not found: src/transformers/models/maskformer/configuration_mask2former.py
collecting ... collected 0 items
```
😭😭😭
This PR makes the failures (if any) visible again. | 01-26-2023 08:49:15 | 01-26-2023 08:49:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,313 | closed | post_process_instance_segmentation does not resize outputs? | ### System Info
```
Python 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
import transformers
transformers.__version__
'4.25.1'
```
### Who can help?
```
results = processor.post_process_instance_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )
results[0]['segmentation'].cpu().numpy().shape
(128, 128)
```
Shouldn't the output be ```(1000, 1000)```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
results = processor.post_process_instance_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )
results[0]['segmentation'].cpu().numpy().shape
(128, 128)
```
Shouldn't the output be `(1000, 1000)`
### Expected behavior
Output masks should be same size as `target_sizes` | 01-26-2023 05:42:15 | 01-26-2023 05:42:15 | cc @alaradirik <|||||>@alaradirik @NielsRogge what's weird is `post_process_semantic_segmentation` works as expects but not `post_process_instance_segmentation`
```
results = processor.post_process_semantic_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )
results[0]['segmentation'].cpu().numpy().shape
(1000, 1000)
```
```
results = processor.post_process_instance_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )
results[0]['segmentation'].cpu().numpy().shape
(128, 128)
```<|||||>Hi @nickponline!
Are you using MaskFormer or Mask2Former? We are aware of the issue and it should be fixed with the latest release. Could you try upgrading to transformers 4.26.0?<|||||>MaskFormer
<|||||>@alaradirik @NielsRogge when I use 4.26.0 the resizing works and I get masks back at the `target_size`, but I do notice that the fidelity of the semantic segmentation results is better than that of the instance segmentation ones. Is something different in how the masks are upsampled? Examples below; both masks are 1000x1000.
instance segmentation (notice the low resolution)

semantic segmentation (right).

<|||||>@alaradirik these results are with Mask2Former ^<|||||>@nickponline that is a good point.
To answer your question, Mask2Former outputs mask logits of shape (96, 96) for efficiency purposes. The `post_process_semantic_segmentation` method directly interpolates the mask logits to the target size, whereas the `post_process_instance_segmentation` method first interpolates the mask logits to the preprocessed image size (384, 384), computes the final score of each binary mask proposal by multiplying the mask proposal score with the class score and resizes the final instance segmentation map (discrete instance id values instead of continuous logit values) .
The `post_process_instance_segmentation` method yields the same results as the original Mask2Former post-processing. However, you can smoothen the results by cloning the repo, editing line 990 of the [image_processing_mask2former.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/image_processing_mask2former.py) file such that the mask logits are interpolated to the target size (instead of (384, 384)) and building the library locally.
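For reference, the kind of change meant here is roughly the following (a sketch with illustrative names, not the actual library code):
```python
import torch

def upsample_mask_logits(masks_queries_logits: torch.Tensor, target_size) -> torch.Tensor:
    # masks_queries_logits: (batch_size, num_queries, height, width) low-resolution logits
    return torch.nn.functional.interpolate(
        masks_queries_logits,
        size=target_size,  # e.g. (1000, 1000) instead of the fixed (384, 384)
        mode="bilinear",
        align_corners=False,
    )
```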
<|||||>@alaradirik I also noticed the hard-coded resizing to (384, 384) in `post_process_instance_segmentation`. What if the user chooses to resize the input images to a different input size, e.g., (480, 640)? Shouldn't the post-processing adapt to the actual input size?<|||||>@Callidior you can pass `target_sizes` to the `post_process_instance_segmentation` method, which is a list containing the desired (height, width) as tuples<|||||>@NielsRogge As far as I understood, resizing to `target_sizes` happens after the hard-coded resize to 384x384. It does not replace it.
But what if I resize the input images, for example, to a non-quadratic size of 640x960. The post-processing would first resize the segmentation maps to 384x384 and then to 640x960. This would loose much more spatial precision along one dimension than the other.<|||||>Yes but as @alaradirik explains, this is to comply to the original implementation, which always first interpolates to (384x384). So as she suggest, if you really want to interpolate directly to the desired size, feel free to fork and edit [this line](https://github.com/huggingface/transformers/blob/2ea1ef909016484bee9d60c05582031464490f77/src/transformers/models/mask2former/image_processing_mask2former.py#L990) <|||||>Hi @Callidior, as @NielsRogge pointed out, we follow the original implementation in order to make it easier for users to benchmark official checkpoints or their fine-tuned model against other models.
In order to make changes, you can git clone the repo, change the relevant line and install the library locally with `pip install -e ".[dev]"`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,312 | open | [pure bf16 training] w/ `AnyPrecisionAdamW` and Kahan summation | This PR was prompted by [this discussion](https://github.com/pytorch/torchdistx/pull/52#discussion_r1082027732) with @lessw2020.
The PR works, just keeping it as Draft for now as I haven't polished it to be ready for merging.
# How to perform pure bf16 training (not mixed) running with `AnyPrecisionAdamW` also in bf16 w/ Kahan summation
I think it should require x8 bytes per param, instead of x18 for mixed precision training - i.e. 1/2 memory usage for everything but activations memory.
(also included a hack into loading `load_from_disk` to get saved datasets, but it's unrelated to the actual feature - will remove at the end)
To test checkout this branch:
```
git clone https://github.com/huggingface/transformers transformers-bf16
cd transformers-bf16
git checkout full-bf16-train
```
## getting `AnyPrecisionAdamW`
You can try to install the bleeding-edge [`torchdistx`](https://github.com/pytorch/torchdistx/), but it's very difficult to do. Since the optimizer is just Python code, we can simply hack-install it like this:
```
mkdir -p $CONDA_PREFIX/lib/python3.8/site-packages/torchdistx/optimizers
wget https://raw.githubusercontent.com/pytorch/torchdistx/main/src/python/torchdistx/optimizers/anyprecision_optimizer.py \
-O $CONDA_PREFIX/lib/python3.8/site-packages/torchdistx/optimizers/__init__.py
```
you will just need to update your destination path if you're not using CONDA or have a different python version. To be more specific adjust the location of your python's `site-packages` directory.
# Training
If you have an 80GB A100, you can do the `opt-1.3b` setup below; otherwise, for smaller cards, choose one of the smaller setups.
You can of course do this for any model, this PR is model invariant.
And you can do either finetuning or training from scratch
## opt-1.3b / bf16-pure training from scratch
First, prep an initialized opt-1.3 model:
```
cat << EOT > prep-bf16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-1.3b"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-1.3b-bf16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
python prep-bf16.py
```
Train from scratch:
```
rm -rf save_dir; PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=1 --nnode=1 --node_rank=0 \
--master_addr=127.0.0.1 --master_port=9901 \
examples/pytorch/language-modeling/run_clm.py --bf16 \
--half_precision_backend no_amp --seed 42 --model_name_or_path opt-1.3b-bf16 \
--dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 --optim \
adamw_anyprecision --optim_args \
'use_kahan_summation=true, momentum_dtype=bfloat16, variance_dtype=bfloat16, compensation_buffer_dtype=bfloat16' \
--per_device_train_batch_size 12 --per_device_eval_batch_size 12 \
--gradient_accumulation_steps 1 --do_train --do_eval --logging_steps 10 \
--save_steps 1000 --eval_steps 100 --weight_decay 0.1 --num_train_epochs 1 \
--adam_beta1 0.9 --adam_beta2 0.95 --learning_rate 0.0002 --lr_scheduler_type \
linear --warmup_steps 500 --report_to tensorboard --output_dir save_dir
```
Let's check that I got the math right for opt-1.3B
Theoretical memory allocation for optim states, weights, grads
```
breakdown: n_params*(optim + grad + weights)
bf16 mixed precision: 1.3*(8 + 2 + 4+2 ) = 1.3*16 = 20.8GB
bf16 pure: 1.3*(4+2 + 2 + 2 ) = 1.3*10 = 13.0GB
-----------------------------------------------------
diff: 7.8GB
```
Real memory allocation: (got by adding `--skip_memory_metrics 0` flag to get memory usage reports)
```
a. bf16 mixed precision:
before_init_mem_gpu = 0MB
init_mem_gpu_alloc_delta = 5019MB
init_mem_gpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 20076MB
train_mem_gpu_peaked_delta = 123MB
-----------------------------------------
total = 25218MB
b. bf16 pure:
before_init_mem_gpu = 0MB
init_mem_gpu_alloc_delta = 5019MB
init_mem_gpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 12548MB
train_mem_gpu_peaked_delta = 124MB
-----------------------------------------
total = 17691MB
diff: 7.53GB
```
So the theoretical and actual numbers check out memory wise.
## opt-125m / bf16-pure training from scratch
If you want to fit into a smaller card, let's do opt-125m
Then prep an empty opt-125m model:
```
cat << EOT > prep-bf16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-125m"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-125m-bf16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
python prep-bf16.py
```
Train from scratch in pure bf16:
```
rm -rf save_dir; PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=1 --nnode=1 --node_rank=0 \
--master_addr=127.0.0.1 --master_port=9901 \
examples/pytorch/language-modeling/run_clm.py --bf16 \
--half_precision_backend no_amp --seed 42 --model_name_or_path opt-125m-bf16 \
--dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 --optim \
adamw_anyprecision --optim_args \
'use_kahan_summation=true, momentum_dtype=bfloat16, variance_dtype=bfloat16, compensation_buffer_dtype=bfloat16' \
--per_device_train_batch_size 12 --per_device_eval_batch_size 12 \
--gradient_accumulation_steps 1 --do_train --do_eval --logging_steps 10 \
--save_steps 1000 --eval_steps 100 --weight_decay 0.1 --num_train_epochs 1 \
--adam_beta1 0.9 --adam_beta2 0.95 --learning_rate 0.0002 --lr_scheduler_type \
linear --warmup_steps 500 --report_to tensorboard --output_dir save_dir
```
## opt-125m / fp16-amp training from scratch
Same for mixed precision fp16 (we want bf16 to give us a similar loss curve when everything else is the same):
```
cat << EOT > prep-fp16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-125m"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-125m-fp16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
python prep-fp16.py
```
```
rm -rf save_dir; PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=1 --nnode=1 --node_rank=0 \
--master_addr=127.0.0.1 --master_port=9901 \
examples/pytorch/language-modeling/run_clm.py --fp16 \
--seed 42 --model_name_or_path opt-125m-fp16 \
--dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 \
--per_device_train_batch_size 12 --per_device_eval_batch_size 12 \
--gradient_accumulation_steps 1 --do_train --do_eval --logging_steps 10 \
--save_steps 1000 --eval_steps 100 --weight_decay 0.1 --num_train_epochs 1 \
--adam_beta1 0.9 --adam_beta2 0.95 --learning_rate 0.0002 --lr_scheduler_type \
linear --warmup_steps 500 --report_to tensorboard --output_dir save_dir
```
| 01-26-2023 05:31:25 | 01-26-2023 05:31:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21312). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,311 | closed | [WHISPER] Add language to whisper output | ### Feature request
Adding the translated language to the whisper output, in addition to the currently returned `text` and `chunks`.
Whisper outputs a language tag, and for auto-detection it is important to have this feature, as some use cases don't know the language of the translation.
### Motivation
One example usage: for both transcription and translation, if the detected language is `en`, we don't need to add an additional translation step.
### Your contribution
Tried, couldn't find where | 01-25-2023 23:44:07 | 01-25-2023 23:44:07 | We'll be adding a `tokenizer_kwargs`, to allow the `skip_special_tokens` to be overwritten. This should allow you to do something like
```
>>> out = pipeline(..., tokenizer_kwargs={"skip_special_tokens": False}, return_timestamps=True, max_length = 2)
"<startoftranscript><en>"
```
Then either you regex or encode with the tokenizer and that should do the trick. cc @Narsil as we talked about this offline
<|||||>Would that work for you ?<|||||>I... am not sure?
I can only come at this as a fairly clueless dev that barely understands tokenization.
In that case, compared to how whisper is built, the above seems very complex to do.
@ArthurZucker I think as we chatted, you guys have many limitations in keeping pipelines generic features.
Could there be an easier way to get the detected language?
Maybe exposing the `detect_language` feature of whisper via `pipe.model.detect_language(audio_file)` somehow?
<|||||>As a workaround I'm loading and running whisper base to just detect language, I would love to be at least able to use the loaded transformers whisper.
So far, no luck.
detect_language is not exposed .
and running pipe.model.generate() on the source file gives me :
`{AttributeError}'str' object has no attribute 'shape'`
Which I assume is because generate needs a numpy array of the audio? 🤔
But def. complex for the average user<|||||>For anyone getting here, I found a better workaround.
Thanks to @ArthurZucker notebook with examples:
https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=i5sKbZpsY9-J
It does still require a whisper dependency, but doesn't load the openai whisper model into memory at all; it just uses its utils and its dependency on ffmpeg-python.
```python
import logging

import whisper  # only needed for its audio-loading utilities
from whisper.tokenizer import LANGUAGES

log = logging.getLogger(__name__)

# `pipe` is an existing transformers ASR pipeline, `source_file` a path to an audio file
audio = whisper.load_audio(source_file)
short_audio_for_lang_detection = whisper.pad_or_trim(audio)
inputs = pipe.feature_extractor(short_audio_for_lang_detection,
return_tensors="pt",sampling_rate=16_000).input_features.to(pipe.device)
lang_token = pipe.model.generate(inputs, max_new_tokens=1)[0, 1]
detected_language_token = pipe.tokenizer.decode(lang_token)
detected_language = detected_language_token[2:-2]
language_title = LANGUAGES[detected_language]
log.info(f"Detected language: {language_title}")
```
<|||||>`pipe.feature_extractor(short_audio_for_lang_detection)` should by default give only the first 30 seconds, so `short_audio_for_lang_detection = whisper.pad_or_trim(audio)` is probably useless.
@Narsil how about we make
```
if isinstance(inputs, str):
if inputs.startswith("http://") or inputs.startswith("https://"):
# We need to actually check for a real protocol, otherwise it's impossible to use a local file
# like http_huggingface_co.png
inputs = requests.get(inputs).content
else:
with open(inputs, "rb") as f:
inputs = f.read()
if isinstance(inputs, bytes):
inputs = ffmpeg_read(inputs, self.feature_extractor.sampling_rate)
```
into a simple function that you call in the preprocess? This could remove all whisper dependencies in this example. WDYT? <|||||>could be awesome to have `model.detect_language` instead of all the mess above and dependencies on whisper! <|||||>> into a simple function that you call in the preprocess?
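Something along these lines (a sketch; the helper name is made up):
```python
import requests
from transformers.pipelines.audio_utils import ffmpeg_read

def load_audio(inputs, sampling_rate: int):
    """Turn a local path, URL or raw bytes into a float waveform at `sampling_rate`."""
    if isinstance(inputs, str):
        if inputs.startswith("http://") or inputs.startswith("https://"):
            inputs = requests.get(inputs).content
        else:
            with open(inputs, "rb") as f:
                inputs = f.read()
    if isinstance(inputs, bytes):
        inputs = ffmpeg_read(inputs, sampling_rate)
    return inputs
```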
Sure, I'm not sure I understand how that cleans up the audio trimming, but we can definitely abstract away.<|||||>> could be awesome to have model.detect_language instead of all the mess above and dependencies on whisper!
If you have some good ideas, please suggest them instead of waving them out.
Unfortunately, we can't just add `detect_language` whereever it may be. Whisper is not the final model for audio, when 3 months down the line and another model which works entirely differently comes into play, and we have specified `detect_language` for whisper, we're going to be in a bad shape to support this new shiny model in a seamless fashion. Making model specific code is trivial, this is what the snippet provided above is for. Making abstractions over many models which work very differently is much harder, and that's what we're trying to do. So that users can switch to new shiny model later, without rewriting their entire code.
`model` doesn't own and will never own `feature_extractor` which is required for the mel extraction of the audio for instance so `model.detect_language` doesn't work.
Then `pipeline` works on large audio, by chunking them into some length in seconds, so potentially a single file could have multiple languages being detected in each chunk, so we have to account for that.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This was fixed by #21427 closing it! |
transformers | 21,310 | closed | Update Hebrew language code to he per IANA registry | Here's my original PR into whisper that changes the same: https://github.com/openai/whisper/pull/401
# What does this PR do?
Changes the language code for the Hebrew language from `iw` to `he`
Per [IANA registry](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry), `iw` was deprecated as the code for Hebrew in 1989 and the preferred code is `he`
The correct subtag:
```
%%
Type: language
Subtag: he
Description: Hebrew
Added: 2005-10-16
Suppress-Script: Hebr
%%
```
And the deprecation
```
%%
Type: language
Subtag: iw
Description: Hebrew
Added: 2005-10-16
Deprecated: 1989-01-01
Preferred-Value: he
Suppress-Script: Hebr
%%
```
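For downstream code that still receives the deprecated tag, a tiny normalization shim (illustrative, not part of this PR) is enough:
```python
# IANA-preferred values for a few deprecated language subtags
DEPRECATED_TO_PREFERRED = {"iw": "he", "in": "id", "ji": "yi"}

def normalize_language_code(code: str) -> str:
    return DEPRECATED_TO_PREFERRED.get(code, code)
```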
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @sanchit-gandhi
| 01-25-2023 23:14:41 | 01-25-2023 23:14:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Cool! thanks for this, it would be best to also update the models online wdyt @sanchit-gandhi <|||||>Funny bug that happens now, I cannot process hebrew.
I use original whisper to detect language (because #21311 is not fixed yet) and it returns "he"
Then I use that to send to transformers whisper, and it fails as the token is not recognized (it expects iw) 🤭 <|||||>Added PRs for models (this.. was not easy 😅 )
Base PR - https://huggingface.co/openai/whisper-base/discussions/8
Tiny PR - https://huggingface.co/openai/whisper-tiny/discussions/6
Small PR - https://huggingface.co/openai/whisper-small/discussions/13
Medium PR - https://huggingface.co/openai/whisper-medium/discussions/8
Large PR - https://huggingface.co/openai/whisper-large/discussions/21
Large V2 PR - https://huggingface.co/openai/whisper-large-v2/discussions/18<|||||>@sgugger @ArthurZucker thanks for merging this in!
The model PRs need to be merged in for this to work, correct?
Otherwise there's a mismatch between this repo and loaded models in token names<|||||>Yes, they'll need to be merged.<|||||>They are all merged 😉 |
transformers | 21,309 | closed | Documentation example error for Train a TensorFlow model with Keras | ### System Info
- `transformers` version: 4.25.1
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <No>
- Using distributed or parallel set-up in script?: <No>
Note: I'm using tensorflow-metal since I'm running on an M1 chip
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried both versions of the documentation code; the produce the same error.
Version 1:
```python
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
dataset = dataset["train"]
from transformers import AutoTokenizer
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
labels = np.array(dataset["label"])
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model.compile(optimizer=Adam(3e-5))
tokenized_data = dict(tokenized_data)
model.fit(tokenized_data, labels)
```
Version 2:
```python
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
dataset = dataset["train"]
from transformers import AutoTokenizer
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_dataset(data):
# Keys of the returned dictionary will be added to the dataset as columns
return tokenizer(data["sentence"])
dataset = dataset.map(tokenize_dataset)
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam
# the model has to exist before `prepare_tf_dataset` can be called on it
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer)
model.compile(optimizer=Adam(3e-5))
model.fit(tf_dataset)
```
### Expected behavior
Every line works until the final one which produces an error. I would expect the model to be fit. | 01-25-2023 21:32:47 | 01-25-2023 21:32:47 | cc @Rocketknight1 <|||||>Hi @lexipalmer13 - that code runs fine for me locally, but we did have a lot of compatibility issues with TF 2.11. Version 4.26, which we released two days ago, should fix those issues. Can you try running `pip install --upgrade transformers` to see if it works for you with the newest version?<|||||>Hi @Rocketknight1 - thanks so much for getting back to me! It continues to throw the same error even with the updated transformers. I put the error below (again it's only the model.fit that's causing me issues so the initial packages/model loading/pre-processing is all running). It seems the main issue is this
`NotFoundError: Graph execution error:`
```python
2023-01-27 13:41:30.016811: W tensorflow/tsl/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2023-01-27 13:41:39.700272: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
2023-01-27 13:41:43.449299: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x127acaa60
2023-01-27 13:41:43.449332: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x127acaa60
....repeats a bunch of times
2023-01-27 13:41:47.628274: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x127acaa60
---------------------------------------------------------------------------
NotFoundError Traceback (most recent call last)
Cell In[19], line 1
----> 1 model.fit(tokenized_data, labels)
File ~/miniconda/lib/python3.10/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File ~/miniconda/lib/python3.10/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
50 try:
51 ctx.ensure_initialized()
---> 52 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
53 inputs, attrs, num_outputs)
54 except core._NotOkStatusException as e:
55 if name is not None:
NotFoundError: Graph execution error:
Detected at node 'StatefulPartitionedCall_199' defined at (most recent call last):
File "/Users/lexipalmer/miniconda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/lexipalmer/miniconda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel_launcher.py", line 17, in <module>
app.launch_new_instance()
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/traitlets/config/application.py", line 1041, in launch_instance
app.start()
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 724, in start
self.io_loop.start()
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 215, in start
self.asyncio_loop.run_forever()
File "/Users/lexipalmer/miniconda/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
self._run_once()
File "/Users/lexipalmer/miniconda/lib/python3.10/asyncio/base_events.py", line 1899, in _run_once
handle._run()
File "/Users/lexipalmer/miniconda/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 512, in dispatch_queue
await self.process_one()
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 501, in process_one
await dispatch(*args)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 408, in dispatch_shell
await result
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 731, in execute_request
reply_content = await reply_content
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/ipkernel.py", line 417, in do_execute
res = shell.run_cell(
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/zmqshell.py", line 540, in run_cell
return super().run_cell(*args, **kwargs)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 2945, in run_cell
result = self._run_cell(
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3000, in _run_cell
return runner(coro)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
coro.send(None)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3203, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3382, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3442, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/var/folders/ny/h_bygvy53h16kd57z4lsmsvh0000gn/T/ipykernel_6697/3344439326.py", line 1, in <module>
model.fit(tokenized_data, labels)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py", line 1650, in fit
tmp_logs = self.train_function(iterator)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in train_function
return step_function(self, iterator)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py", line 1233, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py", line 1222, in run_step
outputs = model.train_step(data)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 1572, in train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize
self.apply_gradients(grads_and_vars)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients
return super().apply_gradients(grads_and_vars, name=name)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 634, in apply_gradients
iteration = self._internal_apply_gradients(grads_and_vars)
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1166, in _internal_apply_gradients
return tf.__internal__.distribute.interim.maybe_merge_call(
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1216, in _distributed_apply_gradients_fn
distribution.extended.update(
File "/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1211, in apply_grad_to_update_var
return self._update_step_xla(grad, var, id(self._var_key(var)))
Node: 'StatefulPartitionedCall_199'
could not find registered platform with id: 0x127acaa60
[[{{node StatefulPartitionedCall_199}}]] [Op:__inference_train_function_34674]
```<|||||>Hi @lexipalmer13, thanks for the error traceback! I believe this error isn't related to `transformers` after all - the issue is an incompatibility specifically triggered by using XLA on TF 2.11 with Apple's M1's silicon. You can see a thread detailing the issue [here](https://developer.apple.com/forums/thread/721619).
The underlying cause is that TensorFlow moved to a new optimizer format in TF 2.11. This was the cause of the compatibility issues we experienced with `transformers` as well. The new optimizer format automatically compiles the update step with XLA, triggering the bug. As a workaround for now, you can replace the line
```py
from tensorflow.keras.optimizers import Adam
```
with
```py
from tensorflow.keras.optimizers.legacy import Adam
```
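In the snippets above, that amounts to swapping the optimizer you pass to `compile`, e.g. (this assumes the `model` defined in the earlier snippets):
```python
import tensorflow as tf

# the legacy optimizer avoids the new XLA-compiled update step that trips up tensorflow-metal
model.compile(optimizer=tf.keras.optimizers.legacy.Adam(3e-5))
```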
Hopefully this issue will be resolved in TF soon, and you won't need this workaround anymore!<|||||>Hi @Rocketknight1 Yes, that fixed it! Thanks so much for your help! |
transformers | 21,308 | closed | Small fix to ExponentialDecayLengthPenalty docstring | ## What does this PR do?
Currently, the `ExponentialDecayLengthPenalty` doc string incorrectly states that its `exponential_decay_length_penalty` tuple parameter is optional.
Also changed the corresponding type hint to be more specific.
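For context, this tuple is what callers pass to `generate` to enable the penalty, roughly like this (the checkpoint and values here are just illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox", return_tensors="pt")

# (start_index, decay_factor): after `start_index` generated tokens,
# the end-of-sequence score gets an exponentially increasing boost
outputs = model.generate(**inputs, max_length=64, exponential_decay_length_penalty=(16, 1.2))
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```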
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@gante @sgugger | 01-25-2023 19:10:33 | 01-25-2023 19:10:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,307 | closed | [WHISPER] Small patch | # What does this PR do?
A small nit might be causing failures in the CI. This PR patches it. | 01-25-2023 17:45:06 | 01-25-2023 17:45:06 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 21,306 | closed | Conversion Script Tatoeba | ### System Info
I followed this guide https://github.com/huggingface/transformers/tree/main/scripts/tatoeba to convert models from the Tatoeba Translation Challenge to Hugging Face, but when I ran
`python3 src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models kor-eng --save_dir converted`
it returned the error below. I found that the find_vocab_file function requires a vocab file with the extension .yml, but the model from Tatoeba doesn't have one.
```
0%| | 0/1 [00:01<?, ?it/s]Traceback (most recent call last):
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 1324, in <module>
resolver.convert_models(args.models[0])
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 90, in convert_models
convert(save_dir / model["_name"], dest_dir / f"opus-mt-{pair_name}")
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 663, in convert
opus_state = OpusState(source_dir)
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 494, in __init__
self.tokenizer = self.load_tokenizer()
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 592, in load_tokenizer
add_special_tokens_to_vocab(self.source_dir, not self.share_encoder_decoder_embeddings)
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 409, in add_special_tokens_to_vocab
vocab = load_yaml(find_vocab_file(model_dir))
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 385, in find_vocab_file
return list(model_dir.glob("*vocab.yml"))[0]
IndexError: list index out of range
```
@sgugger @stevhliu
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. clone and install https://github.com/huggingface/transformers/tree/main/scripts/tatoeba
2. run python3 src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models kor-eng --save_dir converted
### Expected behavior
it can convert successfully | 01-25-2023 17:26:22 | 01-25-2023 17:26:22 | cc @ArthurZucker <|||||>I'll have a look thanks for reporting<|||||>Sorry, but it seems that all the formats are messed up w.r.t. the old scripts. These are not maintained and thus we don't plan on fixing this. If you want to however, feel free to contribute! |
transformers | 21,305 | closed | [`Blenderbot`] Discrepancy between `BlenderbotTokenizer` and `BlenderbotTokenizerFast` | ### System Info
`main` branch
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The initial issue is that I didn't get the same generated output when using `BlenderbotTokenizer` and `BlenderbotTokenizerFast`. The initial script to reproduce is the following:
```python
import torch
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname, add_prefix_space=False)
tokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname, add_prefix_space=False)
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s> That's unfortunate. "
"Are they trying to lose weight or are they just trying to be healthier?</s> "
"<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
inputs_fast = tokenizer_fast([NEXT_UTTERANCE], return_tensors="pt")
# check that the fast tokenizer is the same as the slow one
assert torch.all(inputs.input_ids == inputs_fast.input_ids)
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
tokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname)
def generate(tokenizer):
UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. "
"Are they trying to lose weight or are they just trying to be healthier?</s> "
"<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
# print("decoded input : ", tokenizer.batch_decode(inputs.input_ids, skip_special_tokens=False)[0])
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=False)[0])
generate(tokenizer)
generate(tokenizer_fast)
>>> That's too bad. Have you tried encouraging them to change their eating habits?
>>> I see. Well, it's good that they're trying to change their eating habits.
```
Interestingly, this always passes:
```python
import torch
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
tokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname)
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s> That's unfortunate. "
"Are they trying to lose weight or are they just trying to be healthier?</s> "
"<s> I'm not sure."
)
UTTERANCE = "My friends are cool but they eat too many carbs."
_ = tokenizer([UTTERANCE], return_tensors="pt")
_ = tokenizer_fast([UTTERANCE], return_tensors="pt")
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
inputs_fast = tokenizer_fast([NEXT_UTTERANCE], return_tensors="pt")
# check that the fast tokenizer is the same as the slow one
assert torch.all(inputs.input_ids == inputs_fast.input_ids)
next_reply_ids = model.generate(**inputs)
next_reply_ids_fast = model.generate(**inputs_fast)
assert torch.all(inputs.input_ids == inputs_fast.input_ids)
print(tokenizer.batch_decode(next_reply_ids))
>>> I see. Well, it's good that they're trying to change their eating habits.
print(tokenizer_fast.batch_decode(next_reply_ids_fast))
>>> I see. Well, it's good that they're trying to change their eating habits.
```
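To narrow down where the two tokenizations diverge, comparing the token strings directly also helps (small sketch on the same checkpoint):
```python
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast

mname = "facebook/blenderbot-400M-distill"
slow = BlenderbotTokenizer.from_pretrained(mname)
fast = BlenderbotTokenizerFast.from_pretrained(mname)

text = "My friends are cool but they eat too many carbs.</s> <s>That's unfortunate."
print(slow.convert_ids_to_tokens(slow(text).input_ids))
print(fast.convert_ids_to_tokens(fast(text).input_ids))
```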
### Expected behavior
Both generations should be the same ideally!
cc @ydshieh @ArthurZucker | 01-25-2023 16:41:42 | 01-25-2023 16:41:42 | Hi @younesbelkada
It would be nice if you also show `inputs ` and `inputs_fast` (we can definitely check ourselves), or mention if this is the same or not :-)<|||||>Thanks a lot! I have updated the description with more details <|||||>I'll have a look but the fact that the second scripts works well is already good. Will check that all the inputs_ids and generated_ids are the same
<|||||>@ArthurZucker I wanted to work on this issue, I did little more digging and found out that this issue (difference in input_ids by the tokenizer) happens when <s\> is not followed by a space. The 2nd script works as the is space between \<s> and next character. <|||||>This could mean that the `clean_up_tokenization_space` or `spaces_between_special_tokens` args don't have the same values in the different models.<|||||>Okay, Let me dig further in this direction.<|||||>You can now control the `clean_up_tokenization_space` parameter when initialising a model (merged in #22341) which should have fixed this issue (need to update the param) |
transformers | 21,304 | closed | Use `model_class.__name__` and compare against `XXX_MAPPING_NAMES` | # What does this PR do?
Currently, in `tests/test_modeling_common.py`, there are a lot of conditions like
```python
if model_class in get_values(MODEL_MAPPING)
```
which implies that, in order to get this information, all models must be importable, or we have to rely on one of our mechanisms to make sure the execution won't fail the program.
In some rare cases, e.g. when `natten` is installed but its version is incompatible with `torch`, we will get
```bash
E RuntimeError: Failed to import transformers.models.dinat.modeling_dinat because of the following error (look up to see its traceback):
E Failed to import NATTEN's CPP backend. This could be due to an invalid/incomplete install. Please uninstall NATTEN (pip uninstall natten) and re-install with the correct torch build: natten.shi-labs.com.
```
even when running a single test with a model (for example, `efficientformer`) that doesn't need `natten`.
This PR changes the condition to
```python
if model_class.__name__ in get_values(MODEL_MAPPING_NAMES)
```
which gives the same results + avoid such confusing failures + potentially reduce some overhead | 01-25-2023 16:26:03 | 01-25-2023 16:26:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Is this kind of change OK for you ..? If so, I will apply the same change to other places.<|||||>The 2 failed tests are known to be flaky (for now) - Merge the PR without re-runing CI. |
transformers | 21,303 | closed | Image Classification Pipeline returns score= 1.0 | ### System Info
The Vision Transformer documentation mentions that regression is supported when num_labels == 1. However, this seems incompatible with Pipeline.
In this code, the logits are normalized into scores. However, when num_labels = 1, this effectively turns the score into `1`.
https://github.com/huggingface/transformers/blob/63b204eadd9829985ba13e7e4d51f905adfc2d5e/src/transformers/pipelines/image_classification.py#L116
### Who can help?
@amyeroberts @nielsr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Train a ViT on a regression (num_labels = 1)
2. Use pipeline

### Expected behavior
The model should return predictions | 01-25-2023 16:11:43 | 01-25-2023 16:11:43 | There is no pipeline available for regression tasks, you need to use the model directly and take its outputs.<|||||>Thanks @sgugger! Super fast answer!
As I found the pipelines to be very helpful, I'm sharing my solution below for folks that want to still use them.
One can just override the `postprocess` [function](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/image_classification.py#L116):
```
def postprocess(self, model_outputs, top_k=5):
if top_k > self.model.config.num_labels:
top_k = self.model.config.num_labels
if self.framework == "pt":
pred = model_outputs.logits
else:
raise ValueError(f"Unsupported framework: {self.framework}")
scores = pred.tolist()
return scores
```
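For completeness, one way to wire this up is a small subclass (the class name below is just an illustration; patching `ImageClassificationPipeline.postprocess` directly works the same way):
```python
from transformers import ImageClassificationPipeline

class ImageRegressionPipeline(ImageClassificationPipeline):
    def postprocess(self, model_outputs, top_k=None):
        # skip the softmax / top-k step and return the raw regression output
        return model_outputs.logits.tolist()
```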
You can then instantiate the pipeline of the overwritten class:
`pipe = ImageClassificationPipeline(model=model,feature_extractor=extractor,device='cuda:0')`
And run your inference:
```
def data():
for path in paths:
yield PILImage.open(path)
from tqdm import tqdm
scores = []
for out in tqdm(pipe(data())):
scores.append(out)
```<|||||>Yes that's why the pipeline is called classification, rather than regression. We would need an `ImageRegressionPipeline` for this use case ;)<|||||>Closing this issue as it seems resolved. |
transformers | 21,302 | closed | Documentation code sample fixes | Several code examples in the docs will fail if used as is. In many cases, it's a missing dependency, other times, this is due to naming inconsistency, one may be related to a change in API, and a few are due to the execution order (i.e., things used before defined in a tutorial).
This maintenance PR fixes these issues so that the code samples in the docs work as expected and do not cause unnecessary frustration.
| 01-25-2023 16:09:52 | 01-25-2023 16:09:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,301 | closed | Fix TF `generate` (probably) | # What does this PR do?
We have CI failures for `TFBertEncoderDecoderModelTest.test_bert2bert_summarization` and `TFGPT2EncoderDecoderModelTest.test_bert2gpt2_summarization`.
The error message is
```bash
> if generation_config.min_length is not None and generation_config.min_length > generation_config.max_length:
E TypeError: '>' not supported between instances of 'int' and 'NoneType'
```
These 2 tests pass `max_length=None` to `generate`:
```python
output_ids = model.generate(input_ids=input_dict["input_ids"], max_length=None).numpy().tolist()
```
and this line (in `generate`)
https://github.com/huggingface/transformers/blob/63b204eadd9829985ba13e7e4d51f905adfc2d5e/src/transformers/generation/tf_utils.py#L613
changes `generation_config.max_length` from `20` (the default value) to `None`, and finally we get the error at
https://github.com/huggingface/transformers/blob/63b204eadd9829985ba13e7e4d51f905adfc2d5e/src/transformers/generation/tf_utils.py#L719
This PR checks if `generation_config.max_length is not None` before doing the comparison - the 2 tests pass with this change.
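For reference, the guarded comparison looks roughly like this (a sketch of the idea, not the exact diff):
```python
def _validate_lengths(generation_config):
    # only compare the bounds when both are actually set
    if (
        generation_config.min_length is not None
        and generation_config.max_length is not None
        and generation_config.min_length > generation_config.max_length
    ):
        raise ValueError(
            f"min_length ({generation_config.min_length}) cannot be larger than "
            f"max_length ({generation_config.max_length})"
        )
```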
But we need @gante to see if this is the right fix. | 01-25-2023 15:43:01 | 01-25-2023 15:43:01 | @ArthurZucker The PR #20944 failed the test
```bash
tests/models/encoder_decoder/test_modeling_tf_encoder_decoder.py::TFBertEncoderDecoderModelTest::test_bert2bert_summarization
```
while one commit before `7cb596fa` works well - assuming the changes in this PR are applied.
With #20944, the outputs from the above are somehow gibberish.
**We can wait for this PR to be merged**, then could you take a look at this issue 🙏 ?
~~(Or if you want to look at it earlier - you just have to pull this branch)~~ Better to wait, as I am not sure if there are more recent commits that affect this test.
Here is the traceback
```bash
E AssertionError: Lists differ: ['sa sa sa university sa sa sigma sa sa th[501 chars] sa'] != ["sae was founded in 1856, five years befo[236 chars]hs."]
E
E First differing element 0:
E 'sa sa sa university sa sa sigma sa sa th[500 chars]a sa'
E "sae was founded in 1856, five years befo[235 chars]ths."
E
E Diff is 897 characters long. Set self.maxDiff to None to see it.
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Basically this is because there hast to be a `past` argument passed down instead of `pat_key_values`. I'll open another PR to fix these, but #21296 should be the fix <|||||>> Basically this is because there hast to be a `past` argument passed down instead of `pat_key_values`. I'll open another PR to fix these, but #21296 should be the fix
Thank you. We are going to have 0 test failures soon!<|||||>Instead of adding this extra if to handle `max_length=None`, I'd like to keep disallowing `max_length=None` -- enabling it may allow users to enter uncharted territory when current length > model's maximum input length 😅
The fix should be to remove `max_length=None` in the test -- the right value will be fetched from the config, like in the PT test.<|||||>@gante Thanks! Updated the PR :-)<|||||>@gante I merged this PR as it is. However, we could potentially improve the code in `generate` to validate (more) the arguments to avoid such failures. Will let you to decide as you know much more :-) |
transformers | 21,300 | closed | Adding NLLB-200 - MoE - 54.5B for no language left behind | ### System Info
Hello @LysandreJik,
Thanks a lot for your work on no language left behind.
Is there any plan to add the 54.5B model?
Kindest regards
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Improvement
### Expected behavior
Improvement | 01-25-2023 15:19:53 | 01-25-2023 15:19:53 | WDYT @ArthurZucker @younesbelkada given your work on MoEs?<|||||>Sure, we can add this to the to dos, @PierreColombo could you add the link to the open sourced checkpoints? <|||||>Hi Thanks for your positive answer.
Code is here: https://github.com/facebookresearch/fairseq/tree/nllb
Checkpoints are here : https://tinyurl.com/nllb200moe54bmodel
Thanks !
<|||||>Hi all,
This would be greatly appreciated!
Thanks<|||||>also cc @sheonhan re. NNLB<|||||>+1, would love to see it!<|||||>+1 here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale?<|||||>We went for the fairseq implementation :'(<|||||>Friendly ping @ArthurZucker <|||||>Yes! @sheonhan mentioned wanting to take this, otherwise will gladly sprint !<|||||>Since I'm working on the Image Completion Transformer at the moment, I might be blocking the folks who want to use it asap, so you should go ahead! @ArthurZucker |
transformers | 21,299 | closed | [Hubert] Fix Hubert processing auto | # What does this PR do?
Currently on the `main` branch, the script provided in the docstring of `Hubert` fails:
```
from transformers import AutoProcessor, HubertModel
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(ds["speech"][0], return_tensors="pt").input_values # Batch size 1
hidden_states = model(input_values).last_hidden_state
```
With the PR #21225 all custom `xxxProcessor` have been removed in favor of `AutoProcessor` in docstrings. Since `Hubert` was not included inside the processor automapping the script above lead into a bug, since if the model is not present in the auto mapping dictionary, the script will try to load a tokenizer: https://github.com/huggingface/transformers/blob/255257f3ea0862cbb92ea9fa1113cbee1898aadd/src/transformers/models/auto/processing_auto.py#L275 / Hence, `Wav2vec2CTCTokenizer` was loaded instead of `Wav2vec2Processor` that is supposed to be the target object to be loaded.
This PR fixes this by adding `hubert` inside the automapping class for `AutoProcessor`
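Concretely, the gist of the change is to register `hubert` in the processor auto-mapping so that `AutoProcessor` resolves to `Wav2Vec2Processor` (illustrative sketch, other entries omitted):
```python
from collections import OrderedDict

PROCESSOR_MAPPING_NAMES = OrderedDict(
    [
        ("wav2vec2", "Wav2Vec2Processor"),
        ("hubert", "Wav2Vec2Processor"),  # new entry added by this PR
    ]
)
```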
This PR also fixes 2 failing doctests for `HubertModel`, link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
cc @sgugger @ydshieh | 01-25-2023 13:44:47 | 01-25-2023 13:44:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! |
transformers | 21,298 | closed | [Whisper] Add SpecAugment | # What does this PR do?
Hi @sanchit-gandhi @ArthurZucker,
Thanks for pointing out the flaw in the other PR (https://github.com/huggingface/transformers/pull/21063)! Here I will add [SpecAugment](https://arxiv.org/abs/1904.08779) to [modeling_whisper.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py)
Several things have been done or to be done:
- [x] Return `attention_mask` by `WhisperFeatureExtractor`, which will be used to guide the mask function along the time axis
- [x] Rescale `attention_mask` from the sample level (48000) to the feature level (3000) by `hop_length` (160). It is done inside `WhisperFeatureExtractor` since the `hop_length` is defined there. But I'm not sure if returned `attention_mask` has other utilities
- [x] Copy `_compute_mask_indices` of wav2vec2, utility function to generate masks
- [x] Add `_mask_input_features` to mask `input_features`, referring to `_mask_hidden_states ` in wav2vec2
- [x] Add related parameters to the model config
- [x] Add test
- [ ] Update training script [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py), adding `attention_mask` to `prepare_dataset`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-25-2023 12:40:35 | 01-25-2023 12:40:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks very nice now that everything's living in the modelling file! Great that you've leveraged very closely from Wav2Vec2 as well.
I see the problem that you're currently working around with the padded audio inputs! Here you're passing an attention mask from the feature extractor to the model which tells the model where the audio inputs have been padded to 30s. When we mask using SpecAug, we don't want to mask any of these padded features, only the real audio inputs.
With regards to whether we should use an attention mask for SpecAug, it's hard to know because we don't have a reference implementation. However, my feeling is that we **should** use an attention mask and only mask the real audio inputs, not any of the padded zeros. It makes little sense to pass audio of length 10s to the model and then mask the spectrogram from 20-25s (which would just be silence...)
WDYT here @bofenghuang @ArthurZucker? Pass an attention mask and only compute SpecAug on the real audio inputs? Or ditch the attention mask and compute SpecAug uniformly across the 30s input (whether that input be audio or padded silence)?<|||||>Hi @sanchit-gandhi, thanks for the explantation! And I'm agree with you, here I tried to mask only real values using `attention_mask`<|||||>@bofenghuang is it ready for review?
<|||||>@ArthurZucker yes please ! One validated, we could add this option to run_speech_recognition_seq2seq.py<|||||>Yes! Let's go for numpy! Especially given that current users would have a breaking change if they do not have librosa. <|||||>@sanchit-gandhi @ArthurZucker thanks for the review ! Just add the test !<|||||>Can you make sure to fix the conflicts, and rebase on main to use the latest linter? (you need to do another `pip install -e ".[dev]"`<|||||>@ArthurZucker thanks for the tips ! Think it's done<|||||>Will review again!<|||||>Done ! Thanks to all the reviews and the discussions @ArthurZucker @sanchit-gandhi @sgugger !<|||||>Thanks a lot for your contributions! And congrats on the PR 😉 <|||||>@bofenghuang thanks for PR
looking forward to this (is it already available)
> Update training script [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py), adding attention_mask to prepare_dataset
any tips / hint ,how to apply it during training also very helpful
<|||||>Hey @acul3! You just need to set these config parameters according to your spec aug requirements:
https://github.com/huggingface/transformers/blob/8c40ba73d8091ebe0bdc8da5b634bf7951d18f99/src/transformers/models/whisper/configuration_whisper.py#L139-L167
The rest will be taken care for you in the training script!
The easiest way of doing this is probably by first downloading the config to your local device and setting the SpecAug params:
```python
from transformers import WhisperConfig
config = WhisperConfig.from_pretrained("openai/whisper-tiny") # update to the checkpoint you want to fine-tune from
config.apply_spec_augment = True
... # set all the other spec aug params as required
config.save_pretrained("/some/local/path/to/save")
```
Then in the training script, either add the argument to your bash script:
```
--config_name="/some/local/path/to/save" \
```
Or load the config from the place you saved it if you're using a notebook:
```python
config = WhisperConfig.from_pretrained("/some/local/path/to/save")
```<|||||>@acul3 please see this PR https://github.com/huggingface/transformers/pull/21942 :) |
transformers | 21,297 | closed | [Doctest] Fix `Blenderbot` doctest | # What does this PR do?
This PR fixes the doctest `transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.forward` . Link to failing job is here: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
Updating the prediction with the correct result seems to be the right fix. I am unsure whether this was tested before, so I cannot compare for now.
One thing that I suspect is that we get different results across different PT versions, but not sure
cc @ydshieh
| 01-25-2023 12:35:09 | 01-25-2023 12:35:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Actually, as discussed with @ydshieh offline, there seems to be a discrepency between `BlenderbotTokenizer` & `BlenderbotTokenizerFast`.
The PR #21225 changed the docstring to use `AutoTokenizer` instead of `BlenderbotTokenizer`. This lead to loading `BlenderbotTokenizerFast`. You can reproduce the discrepency with the script below:
```python
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
tokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname)
def generate(tokenizer):
UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. "
"Are they trying to lose weight or are they just trying to be healthier?</s> "
"<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
generate(tokenizer)
>>> Bot: That's too bad. Have you tried encouraging them to change their eating habits?
generate(tokenizer_fast)
>>> Bot: I see. Well, it's good that they're trying to change their eating habits.
```
I am not sure if this is a known bug or intended. Maybe the changes I proposed is not the correct fix here<|||||>Thanks for digging deeper @younesbelkada! Could you check what's the difference between the fast and slow tokenizer from the checkpoint used in this doc example? And compare the difference of `inputs = tokenizer([UTTERANCE], return_tensors="pt") between these 2 tokenizers.
`
Another similar issue (but not related to this one)
https://github.com/huggingface/transformers/pull/21254<|||||>I think it's good as we want to default to fast tokenizers (which is the reason we switched to AutoTokenizer) so the fix is the right one in my opinion.<|||||>I agree - but just thinking if we should find out what's going wrong and potentially fix the inconsistency between these 2 tokenizers (or something in our codebase).
The fix is good for me, and you can merge @younesbelkada !<|||||>Thanks everyone!
I will open an issue to describe the bug
EDIT: https://github.com/huggingface/transformers/issues/21305 |
transformers | 21,296 | closed | [CI-Daily] replace `past` in prepare inputs for generation | # What does this PR do?
This will fix the failing test. It is a little nit that escaped during #20944
cc @ydshieh | 01-25-2023 12:29:05 | 01-25-2023 12:29:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,295 | closed | Update `OneFormerModelIntegrationTest` expected values | # What does this PR do?
The test failures are likely a hardware/environment difference between the contributor and our CI runners. | 01-25-2023 10:10:07 | 01-25-2023 10:10:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,294 | closed | Fix `EfficientFormer` | # What does this PR do?
Fix `EfficientFormer`:
- correct checkpoints
- fix a device issue regarding `EfficientFormerSelfAttention.ab`, see [failed job run page](https://github.com/huggingface/transformers/actions/runs/3992425421/jobs/6848316487)
The error:
```bash
(line 136) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
``` | 01-25-2023 09:39:08 | 01-25-2023 09:39:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,293 | closed | from transformers import T5Model -> No module named 'torch._C' | ### System Info
Prompt says to use "transformers-cli env", but it's not clear where is the documentation for installing transformers-cli on Ubuntu...
python version: 3.10.6
system: ubuntu 20 (no gpu, laptop)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import T5Model
### Expected behavior
Should give no errors, but for me it gives:
```
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1101, in __getattr__
value = getattr(module, name)
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1100, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1112, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
No module named 'torch._C'
Process finished with exit code 1
``` | 01-25-2023 09:29:51 | 01-25-2023 09:29:51 | Solved it by:
```
pip3 uninstall torch
pip3 install torch
```
Very weird. |
transformers | 21,292 | closed | Moving to cleaner tokenizer version of `oneformer`. | # What does this PR do?
Enables `oneformer` models on `image-segmentation` pipeline.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
--> | 01-25-2023 09:18:18 | 01-25-2023 09:18:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,291 | closed | add GPTSAN model (reopen) | # Model description
**Before PR was automatically closed as a result of sync and pull, so it will be reopened.**
GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and works with both Test Generation and Masked Language Model.
To add this model to the transformer, I did the following:
Porting GPTSAN to PyTorch. Model conversion. Creating model cards in HuggingFace Hub. Porting generation code.
The model card has already been uploaded. (https://huggingface.co/Tanrei/GPTSAN-japanese/)
Tokenizer uses GPT-NeoX-Japanese, and only new vocabulary files are uploaded to the model card. Minor differences are absorbed within the generation algorithm in the model's source code.
GPTSAN repository is:
https://github.com/tanreinama/GPTSAN
Discussion of HuggingFace integration is:
https://github.com/tanreinama/GPTSAN/issues/2
Thanks to: @ArthurZucker and @younesbelkada
| 01-25-2023 01:16:49 | 01-25-2023 01:16:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>oh... I still get an error that I don't understand. Do you know what is wrong? I pulled and merged from the latest main.<|||||>I will sync and pull main again.<|||||>Do you want a review? <|||||>@ArthurZucker
yes. this is ok.<|||||>@ArthurZucker can you review it or will you be late?<|||||>Reviewing now 😉 <|||||>Still on the way: I have a few questions.<|||||>Feel free to ask! <|||||>thanks.
I was separated GPTSANJapaneseModel and GPTSANJapaneseForConditionalGeneration.
Regarding the return value of GPTSANJapaneseForConditionalGeneration, using Seq2SeqMoEOutput like switch_transformers does not work.
Well, this is not the encode_decode model.
```
return Seq2SeqMoEOutput(
loss=loss,
logits=lm_logits,
encoder_z_loss=z_loss,
encoder_aux_loss=aux_loss,
past_key_values=outputs.past_key_values,
encoder_last_hidden_state=outputs.last_hidden_state,
encoder_hidden_states=outputs.hidden_states,
encoder_attentions=outputs.attentions,
encoder_router_logits=outputs.router_probs,
)
```
↑ fails the unit test with "there is no attentions in the output".
Using CausalLMOutputWithPast works.
```
return CausalLMOutputWithPast(
loss=loss,
logits=lm_logits,
past_key_values=outputs.past_key_values,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
```
But CausalLMOutputWithPast doesn't have z_loss or other switch transformer outputs.
I can't seem to find a good fit in modeling_outputs.py.
Is it ok without switch transformer outputs?
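One option I'm considering is a small custom output class that carries the extra MoE terms on top of the usual causal-LM fields (just a sketch of the idea; the class name is made up):
```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch

from transformers.utils import ModelOutput


@dataclass
class MoECausalOutputWithPast(ModelOutput):
    # CausalLMOutputWithPast-style fields plus the Switch Transformer losses
    loss: Optional[torch.FloatTensor] = None
    logits: torch.FloatTensor = None
    past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
    hidden_states: Optional[Tuple[torch.FloatTensor]] = None
    attentions: Optional[Tuple[torch.FloatTensor]] = None
    z_loss: Optional[torch.FloatTensor] = None
    aux_loss: Optional[torch.FloatTensor] = None
    router_probs: Optional[Tuple[torch.FloatTensor]] = None
```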
<|||||>ready to review.<|||||>Due to the time difference, the continuation will be tomorrow<|||||>Absolutely no problem! 😉 <|||||>can review it.<|||||>@tanreinama the code looks much more cleaner now 🔥
Let's see the next review of @ArthurZucker but we wanted to thank you on your great efforts!
I really like this model and would like to communicate about it on Twitter, can you share with us your social media handle? Thanks!<|||||>Oh... I don't do SNS. I don't have a Twitter or Instagram account (yeah, I'm a weirdo)
I have only facebook. https://www.facebook.com/toshiyuki.sakamoto.75/<|||||>I found a few typo in comment. so I fixed it.<|||||>Reviewing again now<|||||>Ok, it's reviewable.<|||||>@ArthurZucker @sgugger
I fixed the point in the comment. It's ready if checks are passed.<|||||>Congratulations! 🚀 This was a big model addition and the codebase is very clean now!
Will try to share this new model on tweeter and see if we can reach our Japanese community! <|||||>good timing<|||||>@ArthurZucker @sgugger
ok. I fixed it.<|||||>Congrats again on this work! and thanks for being a valuable contributor! 😉 🚀 <|||||>Wow! I'm very happy! And thanks to the HuggingFace team.
I couldn't have done it without your amazing and persistent support. It was my first experience committing to such a large repository, so I learned a lot.
And I'm so excited. It's already night in Japan, but I might not be able to sleep😘 |
transformers | 21,290 | closed | [`bnb`] Fine-tuning HF 8-bit models | # What does this PR do?
This PR attempts to add the official support of fine-tuning 8-bit models using `transformers`, `bitsandbytes` and adaptors (such as LoRA), supported by `peft`.
With this PR, it will be possible to fine-tune large models with no cost, for e.g. it will be possible to fine-tune `opt-6.7b` in a single Google Colab instance. This would also enable fine-tuning Whisper and large flan-t5 in 8bit.
In order to perform this fine-tuning, a user needs to load the model with the flag `enable_memory_efficient_backward=True`, freeze the parameters of the model and use `peft` to inject adaptators inside the model.
The PR comes also with `Trainer` integration of this feature, that is supported at least in a single GPU setup.
Here is a script (based on an old notebook from @justheuristic) a user can try to run with this PR:
```
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"facebook/opt-6.7b",
load_in_8bit=True,
device_map='auto',
torch_dtype=torch.float16,
enable_memory_efficient_backward=True
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
for param in model.parameters():
param.requires_grad = False # freeze the model - train adapters later
if param.ndim == 1:
# cast the small parameters (e.g. layernorm) to fp32 for stability
param.data = param.data.to(torch.float32)
model.gradient_checkpointing_enable() # reduce number of stored activations
model.model.decoder.project_in = lambda x: x.requires_grad_(True)
class CastOutputToFloat(nn.Sequential):
def forward(self, x): return super().forward(x).to(torch.float32)
model.lm_head = CastOutputToFloat(model.lm_head)
class LoRALayer(nn.Module):
"""Wraps a linear layer with LoRA-like adapter"""
def __init__(self, module: nn.Module, rank: int):
super().__init__()
self.module = module
self.adapter = nn.Sequential(nn.Linear(module.in_features, rank, bias=False),
nn.Linear(rank, module.out_features, bias=False))
small_std = (2. / (5 * min(module.in_features, module.out_features))) ** 0.5
nn.init.normal_(self.adapter[0].weight, std=small_std)
nn.init.zeros_(self.adapter[1].weight)
self.adapter.to(module.weight.device)
def forward(self, input, *args, **kwargs):
return self.module(input, *args, **kwargs) + self.adapter(input)
for name, module in model.named_modules():
if 'OPTAttention' in repr(type(module)):
module.q_proj = LoRALayer(module.q_proj, rank=16)
module.k_proj = LoRALayer(module.k_proj, rank=16)
module.v_proj = LoRALayer(module.v_proj, rank=16)
assert sum(isinstance(module, LoRALayer) for module in model.modules()) == 96
import transformers
from datasets import load_dataset
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda samples: tokenizer(samples['quote']), batched=True)
trainer = transformers.Trainer(
model=model, train_dataset=data['train'],
args=transformers.TrainingArguments(
per_device_train_batch_size=4, gradient_accumulation_steps=4,
warmup_steps=250, max_steps=1000, learning_rate=2e-4, fp16=True,
logging_steps=1, output_dir='outputs'),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
```
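The adapter-injection part of the script above can also be written with `peft` directly, roughly like this (the exact argument names may differ slightly across `peft` versions):
```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```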
## TODOs:
- [x] clear notebook
- [x] how to share adapters weights using `peft`
- [x] tests
cc @pacman100 @TimDettmers @sgugger
| 01-24-2023 21:38:56 | 01-24-2023 21:38:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>As the training support has just been added in `bitsandbytes==0.37.0` I proposed new changes, I also added new tests
This PR is now ready for review |
transformers | 21,289 | closed | convert fast tokenizers to slow | ### Feature request
Recently I noticed that the models being uploaded now are only their fast versions and the sentencepiece model (that's included in the slow version) is missing. I need the sentencepiece model of some tokenizers for a personal project and wanted to know what's the best way to go about that. After I looked through the current code in the repository I saw that there are a lot of methods for handling conversion from slow to fast tokenization, so I think it should be possible the other way around too. After a bit of research, the only quick and dirty way I could think of was creating a utility script for converting the json files of the fast tokenizer to the spm model format of a slow tokenizer, because I think the information in both is the same so the mechanics should be similar too.
### Motivation
I looked through the tokenizers and saw that most of them getting uploaded don't have slow tokenizers.
### Your contribution
If there is any way I can help I would love to know , just need some guaidence on how to implement this! | 01-24-2023 20:40:24 | 01-24-2023 20:40:24 | I don't think it's possible to get the sentencepiece model from the `tokenizer.json` file but maybe @Narsil knows a way.<|||||>hey @Narsil can you please give some insight on this?<|||||>You could try and create inverse scripts for the conversion you found. But it's not going to be trivial.
You need to create the protobuf sentencepiece expects.
Not sure I can provide much more guidance.
Why do you want slow tokenizers if I may ask? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>hey @Narsil Thanks for the reply but I found a fix for my issue :) <|||||>Awesome. Do you mind explaining a little more or giving links for potential readers that would want to do the same? <|||||>For Sure!
I noticed that you guys have code for converting a spm model ( A slow tokenizer ) to a tokenizer.json (fast tokenizer). I also noticed for some models you guys did not upload the SPM model even though it was an SPM based tokenizer. To get the SPM model from the tokenizer.json that was uploaded I had to figure out how to manually create an SPM model that had identical information as what's stored in the tokenizer.json
For example, I had to copy the vocabulary, precompiled_charsmap, and other special tokens and manually edit a blank SPM file (it already had the correct architecture and some dummy data that I removed while editing). Once all the information was copied over to the SPM file it was working as expected.
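The first step, pulling the pieces out of tokenizer.json, is plain JSON (sketch; this assumes a Unigram-style fast tokenizer where `model.vocab` is a list of `[piece, score]` pairs, and the normalizer layout can vary):
```python
import json

with open("tokenizer.json", encoding="utf-8") as f:
    tok = json.load(f)

pieces = tok["model"]["vocab"]                     # [[piece, score], ...] for Unigram models
normalizer = tok.get("normalizer") or {}
charsmap = normalizer.get("precompiled_charsmap")  # may be nested under a "Sequence" normalizer
print(len(pieces), pieces[:3])
```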
here is a notebook demonstrating the process
https://colab.research.google.com/drive/1kfC_iEuU0upVQ5Y3rnnl5VSngSPuiSQI?usp=sharing
|
transformers | 21,288 | closed | Fix `TrainingArguments.label_names` docs to reflect the correct default value behaviour | Fixes #21287
@sgugger
| 01-24-2023 19:04:43 | 01-24-2023 19:04:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This was never flagged as a breaking change (indeed I only found this because it broke one of my scripts). I wonder if I should add "🚨 🚨 🚨" to the PR name to indicate a breaking change<|||||>The breaking change is in the PR that changed the default to `label_names` a while ago, not in this one :-) |
transformers | 21,287 | closed | [docs] TrainingArguments default label_names is not what is described in the documentation | ### System Info
- `transformers` version: 4.25.1
- Platform: macOS-12.6.1-arm64-arm-64bit
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger, @stevhliu and @MKhalusova
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Create a model with a `forward` that has more than one label. For example:
```
def forward(
self,
input_ids,
bbox,
attention_mask,
token_type_ids,
labels,
reference_labels
):
    ...
```
2. Create a trainer for your model with `trainer = Trainer(model, ...)`. Make sure to not set `label_names` and let it default.
3. Check `trainer.label_names` and see that it returns `["labels", "reference_labels"]`
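In code, roughly (a minimal sketch; `MyModel` stands in for the model above and the `output_dir` is arbitrary):
```
from transformers import Trainer, TrainingArguments

model = MyModel()  # hypothetical model whose forward takes `labels` and `reference_labels`
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tmp"),
    # note: label_names is deliberately left unset
)
print(trainer.label_names)  # -> ['labels', 'reference_labels'] rather than ['labels']
```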
### Expected behavior
[The documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.label_names) states that:
> Will eventually default to ["labels"] except if the model used is one of the XxxForQuestionAnswering in which case it will default to ["start_positions", "end_positions"].
[This PR](https://github.com/huggingface/transformers/pull/16526) changed the behaviour that the documentation describes. | 01-24-2023 18:24:47 | 01-24-2023 18:24:47 | Indeed. Do you want to open a PR to fix the documentation? |
transformers | 21,286 | closed | Add metric_key_prefix from training_args | ### Feature request
Today, if we create a `Trainer` as in
```
trainer = Trainer(
model=self.model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=data_cls["train"],
eval_dataset=data_cls["develop"],
compute_metrics=partial(
compute_metrics,
fbeta_beta=self.config.early_stopping.fbeta_beta,
),
data_collator=collate_chunks, # type: ignore
callbacks=callbacks, # type: ignore
)
```
and do `trainer.train()`:
1. There's no way to change the prefixes for the `train` dataset metrics (at least that I'm aware of)
2. One can change the prefix for the evaluation dataset from the default `eval/` into anything by changing the above into
```
prefix = "other"
Trainer(
....
eval_dataset={prefix: data_cls["develop"]},
...
)
```
However, doing this creates `eval/other_accuracy` due to the way [rewrite_logs](https://github.com/huggingface/transformers/blob/e2e393c6f25205739b5dc9fddd460d7bfab85150/src/transformers/integrations.py#L540) works. Ideally I'd like it to be `other/accuracy`.
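For context, a rough sketch of the kind of workaround that seems possible today: a custom callback that renames keys in `on_log` before they reach the logging integrations (`MetricPrefixCallback` is just an illustrative name, and whether the rename takes effect for W&B depends on callback ordering):
```
from transformers import TrainerCallback

class MetricPrefixCallback(TrainerCallback):
    """Illustrative only: rename metric keys before they are logged."""

    def __init__(self, old_prefix="eval_other_", new_prefix="other_"):
        self.old_prefix = old_prefix
        self.new_prefix = new_prefix

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is None:
            return
        for key in list(logs.keys()):
            if key.startswith(self.old_prefix):
                logs[self.new_prefix + key[len(self.old_prefix):]] = logs.pop(key)
```
It would be passed via `callbacks=[MetricPrefixCallback()]` when constructing the `Trainer`, but this is exactly the kind of plumbing I'd prefer not to maintain.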
My request is to have a clear way in the training_args to add any prefixes to the metrics, either for train or eval datasets.
### Motivation
I want to train multiple models within the same wandb run. As things stand right now, the metrics clash
### Your contribution
I've contributed to other OS projects but I'm not very familiar with the codebase to do the contribution directly. If someone guides me around with a high-level description, happy to do it myself | 01-24-2023 18:08:19 | 01-24-2023 18:08:19 | cc @sgugger <|||||>This seems like a very niche feature which can be achieved by customizing your callback (you can use your own instead of the default ones).<|||||>@sgugger could you elaborate a bit more about what callback(s)? I guess I have to remove one and add mine?
(oh sorry, I just found https://huggingface.co/docs/transformers/main_classes/callback) |
transformers | 21,285 | closed | `trainer.predict(dataset)` drops samples when used with multi-gpu setup | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When running on a multi-gpu setup, the output of `trainer.predict()` misses samples even if the parameter `dataloader_drop_last` is set to `False` in the `TrainingArguments`.
[Here is an example colab notebook](https://drive.google.com/file/d/1pvnDkAhkMjLkUZVvindsFqhA_DRLveUk/view?usp=sharing) that could be run on a multi-gpu instance to reproduce the issue.
As an example, I had 2 GPUs on my machine, my dataset had a size of 25000 samples, and the batch size per device was set to 256 (so 512 in total). When calling `trainer.predict()`, the output I got had a size of 24951.
The number of missing samples (25000 - 24951 = 49) corresponds to the size of the dataloader (the output of `len(trainer.get_test_dataloader())`).
### Expected behavior
Output of `trainer.predict(dataset)` should be the same length as dataset | 01-24-2023 18:07:26 | 01-24-2023 18:07:26 | The model used in this notebook (`Roberta0`) does not follow the requirements of the `Trainer` as described [here](https://huggingface.co/docs/transformers/main_classes/trainer) (see the big square in red). The output of the model should be a tuple, a dictionary or a `ModelOutput`, but it can't be a simple tensor. This is the reason for the problem seen.<|||||>Thanks for your quick response @sgugger |
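To illustrate that requirement, a minimal sketch of a `forward` the `Trainer` can work with (hypothetical; `WrappedModel` and `backbone` stand in for the notebook's custom `Roberta0`):
```python
import torch.nn as nn

class WrappedModel(nn.Module):  # hypothetical wrapper, not the notebook's exact class
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone  # any module that returns a logits tensor

    def forward(self, input_ids, attention_mask=None):
        logits = self.backbone(input_ids, attention_mask=attention_mask)
        # Return a dict (or a tuple / ModelOutput) instead of the bare tensor so that
        # Trainer.predict can gather and pad predictions correctly across processes.
        return {"logits": logits}
```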
transformers | 21,284 | closed | Update expected values for doctest | Manually update expected values from hardware/environment differences for doctest on `task_summary.mdx`. | 01-24-2023 17:43:15 | 01-24-2023 17:43:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,283 | closed | [examples/deepspeed] fix renamed api | Fixing the breakages caused by https://github.com/huggingface/transformers/pull/21155
| 01-24-2023 17:12:46 | 01-24-2023 17:12:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,282 | closed | [GIT] Add test for batched generation | # What does this PR do?
Related to #21087, I've added a test for batched generation with GIT. | 01-24-2023 16:54:01 | 01-24-2023 16:54:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is unrelated, merging. |
transformers | 21,281 | closed | [`t5`] Fix T5 inference in `float16` + `bnb` error | # What does this PR do?
Currently on the `main` branch, the inference of `t5` is broken in half-precision.
With the introduction of the `_keep_in_fp32_modules` attribute in https://github.com/huggingface/transformers/pull/20683, `wo` layers need to be upcast to `float32` for more accurate inference.
It appears that in the aforementioned PR we forgot to apply the same fix to `T5DenseActDense` layers, leading to a broken inference API when running inference in fp16 for models that use `T5DenseActDense` layers instead of `T5DenseGatedActDense` layers. This can be reproduced for example with `t5-small`:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
model_id = "t5-small"
model = T5ForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
tokenizer = T5Tokenizer.from_pretrained(model_id)
input_tokens = tokenizer.encode("Translate the following in German: My name is Younes.", return_tensors="pt").to("cuda")
output = model.generate(input_tokens, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
This is not the case for `flan` family since `flan-t5` uses `T5DenseGatedActDense` that has been correctly fixed in https://github.com/huggingface/transformers/pull/20760
This PR also adds a fix for 8-bit models. If a user wants to run inference in 8-bit without `_keep_in_fp32_modules`, for backward compatibility reasons:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
T5ForConditionalGeneration._keep_in_fp32_modules = None
model_id = "google/flan-t5-small"
model = T5ForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
tokenizer = T5Tokenizer.from_pretrained(model_id)
input_tokens = tokenizer.encode("Translate the following in German: My name is Younes.", return_tensors="pt").to("cuda")
output = model.generate(input_tokens, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
They face an issue that is hard to interpret:
```
"addmm_cuda" not implemented for 'Char'
```
See for example: https://github.com/TimDettmers/bitsandbytes/issues/111#issuecomment-1368952450
This is because the `hidden_states` are converted to `int8` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L313-L314), which means entering a linear layer with an 8-bit input, which leads to the error. Therefore one should cast to `self.wo.weight.dtype` only if `dtype != torch.int8` (also pointed out [here](https://github.com/TimDettmers/bitsandbytes/issues/111#issuecomment-1402113167)).
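Concretely, the guarded cast inside `T5DenseActDense.forward` looks roughly like this (simplified sketch of the diff, not the full method):
```python
# hidden_states comes out of the activation / dropout above
if (
    isinstance(self.wo.weight, torch.Tensor)
    and hidden_states.dtype != self.wo.weight.dtype
    and self.wo.weight.dtype != torch.int8  # skip the cast for 8-bit quantized weights
):
    hidden_states = hidden_states.to(self.wo.weight.dtype)
hidden_states = self.wo(hidden_states)
```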
This PR also applies `make fix-copies`, introducing the fix to other architectures too; happy to revert that since this issue is only relevant for `T5` (LongT5 etc. do not have `_keep_in_fp32_modules`).
This PR also tests everything, making sure this will never happen again!
cc @sgugger | 01-24-2023 16:48:04 | 01-24-2023 16:48:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,280 | closed | Does "kwargs" actually work in from_pretrained of Tokenizer? | ### System Info
Colab
transformers==4.25.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Description of the parameters for the method **from_pretrained** states:
```
kwargs (additional keyword arguments, *optional*):
Will be passed to the Tokenizer `__init__()` method. Can be used to set special tokens like
`bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`,
`additional_special_tokens`. See parameters in the `__init__()` for more details.
```
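(i.e., presumably something like the following, with the options passed directly as keyword arguments:)
```
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained(
    'bert-base-uncased',
    model_max_length=777,
    cls_token='[CLASS]',
)
```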
However, the following code does not change parameters of the tokenizer:
```
tokenizer = transformers.AutoTokenizer.from_pretrained(
'bert-base-uncased',
kwargs={'model_max_length': 777, 'cls_token': '[CLASS]'}
)
print(tokenizer.model_max_length)
print(tokenizer.cls_token)
```
Output:
```
512
[CLS]
```
Why is that?
### Expected behavior
The output I want to see is:
```
777
[CLASS]
```
| 01-24-2023 13:50:46 | 01-24-2023 13:50:46 | |
transformers | 21,279 | closed | How to use pre-trained BERT or GPT transformers for CNN based video captioning | Hello, How to use pre-trained BERT or GPT transformers for video captioning task using CNN features not vision transformer | 01-24-2023 13:43:34 | 01-24-2023 13:43:34 | Hi,
I'd recommend checking out the [GIT](https://huggingface.co/docs/transformers/main/en/model_doc/git) model which was just added to the library, as it's the first one in this library that can be used for video captioning. Check out the demo notebook [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Inference_with_GIT_for_image_video_captioning_and_image_video_QA.ipynb).
The model is a GPT-like model conditioned on both images and text to predict the next text tokens.<|||||>Closing this as it seems resolved.<|||||>Thank you so much |
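For reference, captioning with GIT in `transformers` looks roughly like this (a hedged sketch; `microsoft/git-base-coco` is just one example checkpoint, and `frame.jpg` is a placeholder image):
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

image = Image.open("frame.jpg")  # a single frame; the video checkpoints take stacks of frames
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```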
transformers | 21,278 | closed | Hotifx remove tuple for git config image processor. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 01-24-2023 12:23:04 | 01-24-2023 12:23:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Just a question, for [this model](https://huggingface.co/microsoft/git-base-vatex), which uses `VideoMAEImageProcessor`, will the following then always instantiate a `CLIPImageProcessor`?
```
from transformers import AutoProcessor, VideoMAEImageProcessor
processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex")
assert isinstance(processor.image_processor, VideoMAEImageProcessor)
``` <|||||>> Just a question, for [this model](https://huggingface.co/microsoft/git-base-vatex), which uses `VideoMAEImageProcessor`, will the following then always instantiate a `CLIPImageProcessor`?
>
> ```
> from transformers import AutoProcessor, VideoMAEImageProcessor
>
> processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex")
> assert isinstance(processor.image_processor, VideoMAEImageProcessor)
> ```
Why not try it out ?
```
gh pr checkout 21278
pip install -e .
python your_script.py
``` |
transformers | 21,277 | closed | [W2V2 with LM] Fix decoder test with params | # What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/21226. This test started failing due to a PyPI update of pyctcdecode, which incorporated a number of bug fixes to the LM decode method (see https://github.com/kensho-technologies/pyctcdecode/issues/107#issuecomment-1400757049).
This PR modifies the decoding params, such that the same outputs are obtained for pyctcdecode v0.4.0 and v0.5.0 (latest) - hence the test should pass irrespective of the package version while still verifying correctness of the outputs. The PR also adds tests for the LM and logit scores.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-24-2023 09:55:21 | 01-24-2023 09:55:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Indeed the var names are a bit confusing @ydshieh! They follow the same convention throughout the test file, where `decoded_processor` refers to the LM outputs decoded by the HF **processor** class, and `decoded_decoder` refers to the LM outputs decoded by the pyctc **decoder** class.
I'll update these var names in a follow-up PR to make them a bit more intuitive 👍 |
transformers | 21,276 | closed | [Doc] fix broken link | Fixes https://github.com/huggingface/transformers/issues/21275
I can confirm the link at least works in https://moon-ci-docs.huggingface.co/docs/transformers/pr_21276/en/main_classes/text_generation
cc @ydshieh 💯
| 01-24-2023 09:22:07 | 01-24-2023 09:22:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,275 | closed | Text generation link not working | I'm not sure if this is the right place to notify such issues.
On [this page](https://huggingface.co/docs/transformers/main_classes/text_generation) the link in the second paragraph to "text generation strategies guide" does not work. It should point to: https://huggingface.co/docs/transformers/main/generation_strategies | 01-24-2023 08:27:50 | 01-24-2023 08:27:50 | will move this to `transformers`<|||||>Thanks for noticing! should be addressed in #21276 |
transformers | 21,274 | closed | Output `past_key_values` from `TextGenerationPipeline`. | ### Feature request
Currently, `TextGenerationPipeline` does not allow users to extract the `past_key_values` object from its output. It would be nice for us to be able to do so, so that we could then stream the intermediate text in chunks, whilst not having to recalculate the `past_key_values` after every time we yield.
### Motivation
Runtime seems to skyrocket when streaming the results in pipeline using chunks. I believe this is due to the fact that we waste time having to recalculate `past_key_values` every time we make a call to `pipeline()`.
### Your contribution
Would be happy to help review code! | 01-24-2023 05:28:53 | 01-24-2023 05:28:53 | That's interesting, the current pipeline does not support chunking indeed. However, I think adding this would not be really hard cc @Narsil, would go in the `generate_kwargs`, only issue is that it is not going out. <|||||>That would be nice, but requires pretty much changing `generate` upside down and inside out.
This is what we have done here: https://github.com/huggingface/text-generation-inference which was required to get max performance out of bloom.
However, this is a pretty large endeavor which would mean the pipeline would basically redo the entire `generate` 's job.
Since `generate` is already quite complex, I'm hesitant to start such a thing.
> Runtime seems to skyrocket when streaming the results in pipeline using chunks. I believe this is due to the fact that we waste time having to recalculate past_key_values every time we make a call to pipeline().
When you're generating, you shouldn't have to care about the leftmost part of the text: it will be ignored anyway, and text generation setups usually just truncate the leftmost part of the text.
Isn't that doable in your case? Do you mind showing a script of what you're attempting to do? This might help us better understand what you're trying to achieve and what the possible options are.
<|||||>@Narsil thanks for the response! Here is an example of what I'd like to be able to do:
```python
def stream_inference(input_dict):
text = input_dict["text_inputs"]
chunk_size = input_dict.pop("chunk_size", 10)
for _ in range(10):
generated_text = pipeline(text, max_new_tokens=chunk_size, use_cache=True)[0]["generated_text"]
yield generated_text
text += generated_text
```
What I've observed is that although we set `use_cache=True`, there is still the overhead of re-calculating the past_key_values every time we call `pipeline()` since it has been exited. Ideally, if we could extract `past_key_values` from the output of pipeline, then we could feed that back in the successive calls to address this issue.
Thoughts?
<|||||>Pipeline is stateless, so it cannot keep the `past_key_values` and for you to send it again and again kind of defeats the purpose of a pipeline imo (since you can't batch anymore for starters, in general you're introducing some kind of state).
I can provide a script which *kind* of mimic what you want to do, it is pretty hacky, but the "clean" version is exactly how I said, it would need a major rewrite of some components.
https://github.com/huggingface/transformers/issues/17365#issuecomment-1152192715
Here is the adapted version without threading (which you should avoid if possible):
```python
from transformers import pipeline
import torch
import threading
from transformers.generation.stopping_criteria import StoppingCriteria, StoppingCriteriaList
from queue import Queue
pipe = pipeline(model="gpt2", task="text-generation", device=0)
class Stream(StoppingCriteria):
def __init__(self):
self.prev_string = ""
def __call__(self, input_ids, scores) -> bool:
string = pipe.tokenizer.decode(input_ids[0])
# print(f"Total: {repr(string)}")
print(f"New: {repr(string[len(self.prev_string):])}")
self.prev_string = string
return False
for out in pipe("My initial text", max_new_tokens=10, stopping_criteria=[Stream()]):
print("Final result", out)
```
Does this work for you ?<|||||>@OlivierDehaene Tagging just because we were talking about the stream process in `text-generation-inference` :)<|||||>@Narsil Hmm, this does not address the issue of having to re-calculate `past_key_values` though between successive calls of `pipe()`, no?<|||||>Oh no that cannot change. But the idea, is that you can call it for a very long range (like `max_new_tokens=100`) which will use the past_key_values over and over without you having to deal with it. And you can still capture tokens as they are produced to send them to a viewer (here the stdout).
Doing anything with `past_key_values` at the pipeline level is IMO too advanced for what pipelines are supposed to be, as it will break batching (which you most likely don't care about since you seem to be generating things live, but it's still a constraint on the `pipeline` itself).
The main goal of pipelines is to be usable by non-ML software engineers; `past_key_values` requires you to understand in quite a lot of detail how things work internally. That's why IMO it's out of scope for `pipeline`.
If you really want full control, for instance to get resumable inference, you have to go at a lower level than the pipeline IMO.
The code is not going to be so bad if you don't have batching to deal with
A gist:
```python
input_ids = tokenizer.encode("initial string")
stopping_criteria = StoppingCriteriaList([EOSToken, etc...])
logits_processor = LogitsProcessorList([...]) # <--- For both of these, check out `generate` for what the options are and how to create them.
past_key_values = None
scores = None
while not stopping_criteria(input_ids, scores):
outputs = model.forward(input_ids, past_key_values)
past_key_values = outputs.past_key_values
logits = outputs.logits.softmax(dim=-1)
scores = logits_processor(logits)
input_ids = logits.argmax(dim=-1) # <---- choose whatever sampling strategy makes most sense
```
The code is not meant to be functional, but the end result should look something like it.
Since your problem space is likely to be simpler than the general `transformers` one, you can probably get rid of a sizeable chunk of the complexity that we have to deal with for beam search, specific models, legacy code and batching, which doesn't really matter as much for you.
<|||||>@Narsil Nice, I see what you are saying. Just for my own understanding -- is Stopping Criteria called per token produced?<|||||>Yes, its intended goal is to decide when to stop generating tokens (hence the return type: False means continue generating, True means stop; iteration will stop when ANY criterion wants to stop).<|||||>@Narsil Thanks so much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,273 | closed | Use `logger.info` instead of `print` to emit a logging message in `hub.py` | # What does this PR do?
I found a line that uses the `print` method to emit a logging message in `transformers/utils/hub.py`. When I was developing a CLI app that outputs results in a specific format to stdout, this line emitted a logging message to stdout, resulting in an error. This PR fixes the line to use the `logger.info` method to emit the message instead.
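The shape of the change (illustrative only; the placeholder message below is not the actual text in `hub.py`):
```python
from transformers.utils import logging

logger = logging.get_logger(__name__)

# before: print("<informational message>")  # goes straight to stdout
# after:
logger.info("<informational message>")  # respects the library's logging configuration
```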
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-24-2023 02:37:56 | 01-24-2023 02:37:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,272 | closed | Poor CPU Inference Scalability Possibly Due to Disk IO | ### System Info
Environment:
Python: 3.10.8
Pytorch: 1.12.1
Transformers: 4.24.0
Model Used: cardiffnlp/twitter-roberta-base-sentiment
Tests Performed:
I created a simple docker image based on this [github repo](https://github.com/jrsperry/transformers-sentiment-test) which would make predictions on sentences and print out the time to make said predictions. I did this in a kubernetes cluster on aws with node types of c6i.8xlarge, m6a.4xlarge, and several others.
When running with only 1 pod replica with a limit of 4 CPUs I got the expected performance. When I run multiple pods per node (more than 2) with the same limits set, the performance per pod absolutely falls off a cliff. All the pods are still getting the same amount of CPU as in the first test with 1 pod, and there's no clear RAM pressure, but some pods are 10 times slower than before. I've confirmed that all pods were getting the same amount of CPU (4) as well.
I ended up adding a higher class of storage to the kubernetes node to see if it was IO bound and my performance improved dramatically and was much closer to being inline with my expectations given what I observed with the single pod test.
In my test image I don't write to disk so I'm a bit perplexed as to where the disk pressure is coming from, whether it's expected, and if there's any way to optimize around it.
I've run many of these tests with different base images and slightly different transformers and pytorch versions and all of them suffered the same kind of performance drop off. If I provide the same amount of cpu to each pod, I would expect a similar amount of performance. It's possible this isn't IO based, it's just the only thing that brought performance back closer to expectations, although confusingly not in every situation, and varied in its impact. The environment that responded best to the ssd drive had python 3.10.8, torch 1.12.1 and transformers 4.24.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
- Run multiple instances of the sperry1996/transformers-sentiment-test:0.0.1 image on a kubernetes node and compare the results vs running 1 instance per node (with equal requests and limits set). Alternatively you can run multiple containers locally with the command `docker run --cpus="0.75" sperry1996/transformers-sentiment-test:0.0.1` (on an x86 machine) or `docker run --cpus="0.75" sperry1996/transformers-sentiment-test:0.0.1-CPU-MULTI` (on an arm machine). I experienced slowdowns of around 4x (when running more than 1 container) on an m1 mac with docker desktop and around 25-30% on a windows with docker desktop (mac docker desktop has worse disk io performance than windows, there may be other factors as well).
### Expected behavior
I would expect fairly linear scaling of the instances of the pods given they have equal requests and limits set. | 01-23-2023 21:24:25 | 01-23-2023 21:24:25 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,271 | closed | issue warning about different batch size being used for --resume_from_checkpoint | ### Feature request
According to #7198 pretraining must always use the same batch size:
> [15 Dec 21]
> [bowen-n] When I resume training from a checkpoint, I use a new batch size different from the previous training and it seems that the number of the skipped epoch is wrong.
> ...
> [sgugger] That feature is not supported, you should resume training with the exact same hyperparameters or start a new training if you want to change them.
This should be made explicit in a warning. As is, the system seems to just hang up.
This occurred specifically with the following script:
> ./examples/pytorch/language-modeling/run_mlm_no_trainer.py
The following is a sketch of the new behavior. Batch size is not recorded in the checkpoint, so additional support is required:
```
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}")
accelerator.load_state(args.resume_from_checkpoint)
path = os.path.basename(args.resume_from_checkpoint)
    # Make sure the same batch size is used
# TODO: add support for storing batch size in checkpoint
    if accelerator...batch_size != args.per_device_eval_batch_size:
        logger.warning("Different batch size specified: previous %d vs. current %d; result unpredictable.",
                       accelerator...batch_size, args.per_device_eval_batch_size)
```
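One possible shape for that missing support (hypothetical; `run_config.json` is just an illustrative file name saved next to the accelerate checkpoint):
```
import json, os

def save_run_config(output_dir, args):
    with open(os.path.join(output_dir, "run_config.json"), "w") as f:
        json.dump({"per_device_train_batch_size": args.per_device_train_batch_size}, f)

def load_run_config(checkpoint_dir):
    path = os.path.join(checkpoint_dir, "run_config.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return None
```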
Note that restarting is not a practical solution given the time involved in pretraining. Given such consequences, it would also be good for the argument description for --resume_from_checkpoint to warn about this limitation.
```
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder". \
" (n.b., using same hyperparameters in particular batch size).",
)
```
----------
Environment information:
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Motivation
Again, restarting is not a practical solution given the time involved in pretraining.
Thus, the system should detect when two incompatible batch sizes are being used, as well as mention this current restriction in the usage documentation. (It might not be apparent to users who have used other pretraining packages such as Google [BERT](https://github.com/google-research/bert).)
Note that I had reviewed the script --help mentions of batch size before running the pretraining to make sure it was specified correctly. Thus I would have been alerted much sooner about this limitation, in particular before a costly pretraining run was made.
### Your contribution
I am more comfortable with the no_trainer-style scripts, having customized both the MLM and CLM ones as well as the code parrot example (i.e., under ./examples/research_projects/codeparrot). Therefore I could do the change outlined above as a PR once the support for recording the batch size is implemented.
I could also add a test case for this, but there only seems existing tests for the Trainer style script (e.g., run_mlm.py) as in tests/trainer/test_trainer.py. | 01-23-2023 21:22:58 | 01-23-2023 21:22:58 | The examples presented in this repo are not feature-complete apps with ironclad error messages, but just that... examples. We keep them with as little code as possible so they can be easily understood and customized. That's why they won't contain warnings for everything that could go wrong.<|||||>OK, so this turns out to be a two part feature request:
1. Issue warning
2. Include more informative usage for --resume_from_checkpoint
The first can be written off as won't-fix in the interest of keeping things simple, but the second should be addressed because it is straightforward to do so (i.e., minimal risk).
These examples are critical for doing non-trivial tasks with Hugging Face (e.g., pretraining and fine-tuning). The accelerator-based scripts are particularly non-trivial, so fleshing them out a little will be beneficial.<|||||>Sylvain: can you provide some tips (e.g., relatively safe batch size adjustments)?
Can you also un-hide the forum post I just made?
> https://discuss.huggingface.co/t/resuming-accelerate-based-pretraining-with-different-batch-size/30845
I put in a few links to the issue and code samples, so it got flagged as potential spam.
Best,
Tom<|||||>@tohara-PandoLogic Your post is un-hidden since yesterday, I rejected the spam flag. I still don't understand why you are using `--resume_from_checkpoint` for two different training with different hyperparameters. You can start a new training from any checkpoint by passing the model folder in `--model_name_or_path`.
<|||||>OK, thanks. I take it the seed needs to be changed as well.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |