repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 23,286 | closed | `offset_alibi` is not used | What's the meaning of `offset_alibi`? It's not used in Bloom's source code. Does it serve any purpose? | 05-11-2023 08:52:10 | 05-11-2023 08:52:10 | Hi @JaheimLee, thanks for raising an issue!
This is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports. Another good place to ask would be opening a discussion [on the model page on the hub](https://huggingface.co/bigscience/bloom-7b1/discussions), as it directly relates to the configuration file there. <|||||>> Hi @JaheimLee, thanks for raising an issue!
>
> This is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports. Another good place to ask would be opening a discussion [on the model page on the hub](https://huggingface.co/bigscience/bloom-7b1/discussions), as it directly relates to the configuration file there.
Oh, sorry. I created an issue on the Hub. I'll close this one now. |
transformers | 23,285 | closed | AutoModelFromCausalLLM of Bloom not releasing GPU memory after each inference batch | ### System Info
Hi there, I have set torch.no_grad() and torch.cuda.empty_cache(), but I still hit out-of-memory (OOM) errors on the GPU after a few inference batches. My torch version is 1.13.1, deepspeed version is 0.9, and transformers version is 4.28.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have set torch.no_grad() and torch.cuda.empty_cache() for AutoModelFromCausalLLM
### Expected behavior
GPU memory should be released automatically after each inference batch. | 05-11-2023 08:38:10 | 05-11-2023 08:38:10 | Hi @chuckhope, thanks for raising this issue.
So that we can best help, could you provide the following information:
* a minimal code snippet to reproduce the error
* Information about the hardware - are you running on a single GPU?
* Information about the model - are you using a single checkpoint? If so, which one? If not, is this observed for all checkpoints? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,284 | closed | [WIP]/[DRAFT] Add ImageBind model | # What does this PR do?
Adding ImageBind model (DRAFT/ WORK IN PROGRESS - not ready for review yet)
Fixes https://github.com/huggingface/transformers/issues/23240
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/23240
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @ArthurZucker
| 05-11-2023 06:09:59 | 05-11-2023 06:09:59 | @shehanmunasinghe Awesome work with all these model PRs 🤗 let me know when it's ready for review! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,282 | closed | T5-Flan Resuming Int-8 / LoRA / Deepspeed Checkpoint | ### System Info
transformers 4.29
accelerate 0.19
peft 0.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My training ended unexpectedly and I want to resume my T5-Flan training from a checkpoint. Inside my checkpoint I have:
```
global_step25000
latest
pytorch_model.bin
rng_state.pth
trainer_state.json
training_args.bin
zero_to_fp32.py
```
I am unable to load the checkpoint in the following ways:
```python
from accelerate import infer_auto_device_map, init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForSeq2SeqLM

# try 1
model = AutoModelForSeq2SeqLM.from_pretrained("/checkpoint-25000",
load_in_8bit=True,
device_map='auto',
)
# ValueError: weight is on the meta device, we need a `value` to put in on 0.
# try 2
model = AutoModelForSeq2SeqLM.from_pretrained("/checkpoint-25000",
)
# ValueError: weight is on the meta device, we need a `value` to put in on 0.
# try 3
config = AutoConfig.from_pretrained("google/flan-t5-large")
with init_empty_weights():
model = AutoModelForSeq2SeqLM.from_config(config)
model.tie_weights()
device_map = infer_auto_device_map(model)
model = load_checkpoint_and_dispatch(model, "checkpoint-25000/pytorch_model.bin", device_map=device_map)
# AttributeError: 'T5ForConditionalGeneration' object has no attribute 'model'
```
### Expected behavior
The checkpoint should load so I can resume training. | 05-11-2023 02:40:28 | 05-11-2023 02:40:28 | Totally didn't try the trainer at first, but it works!
`trainer.train('checkpoint-xxxx')` |
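For reference, a minimal sketch of the same fix with the explicit keyword argument (this assumes `trainer` is rebuilt with the same model, args, and datasets as the interrupted run; the checkpoint path is illustrative):

```python
# Resume from a specific Trainer checkpoint directory
trainer.train(resume_from_checkpoint="results/checkpoint-25000")

# Or let Trainer pick up the latest checkpoint found in `output_dir`
trainer.train(resume_from_checkpoint=True)
```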
transformers | 23,280 | closed | [bloomz] attn_mask return bool, but Deepspeed softmax input needs int | ### System Info
- `transformers` version: 4.27.1
- Platform: Linux-4.18.0-240.el8.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.12
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: true
- Using distributed or parallel set-up in script?: true
### Who can help?
@thomasw21 @patrickvonplaten @sgugger
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
# using the DeepSpeed-Chat example, but changing the OPT model to bloomz-1b7
# deepspeedChat github: https://github.com/microsoft/DeepSpeedExamples/blob/master/applications/DeepSpeed-Chat/README.md
#!/bin/bash
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0
# DeepSpeed Team
ACTOR_MODEL_PATH="bigscience/bloomz-1b7"
CRITIC_MODEL_PATH="bigscience/bloomz-1b7"
ACTOR_ZERO_STAGE=${3:-2}
CRITIC_ZERO_STAGE=${4:-2}
OUTPUT=${5:-'./output'}
NUM_GPUS=${6:-8}
NUM_NODES=${7:-1}
mkdir -p $OUTPUT
Num_Padding_at_Beginning=0 # this is model related
Actor_Lr=9.65e-6
Critic_Lr=5e-6
hostname='localhost'
export NCCL_SOCKET_IFNAME=eth
export NCCL_DEBUG=INFO
export TOKENIZERS_PARALLELISM=false
deepspeed --master_port 25303 --master_addr ${hostname} --num_gpus ${NUM_GPUS} --num_nodes ${NUM_NODES} --hostfile 'deepspeed_hostfile' main.py \
--data_path Dahoas/rm-static \
--data_split 2,4,4 \
--actor_model_name_or_path $ACTOR_MODEL_PATH \
--critic_model_name_or_path $CRITIC_MODEL_PATH \
--num_padding_at_beginning 1 \
--per_device_train_batch_size 1 \
--per_device_mini_train_batch_size 1 \
--generation_batch_numbers 1 \
--ppo_epochs 1 \
--max_answer_seq_len 256 \
--max_prompt_seq_len 256 \
--actor_learning_rate ${Actor_Lr} \
--critic_learning_rate ${Critic_Lr} \
--disable_actor_dropout \
--num_train_epochs 1 \
--lr_scheduler_type cosine \
--gradient_accumulation_steps 1 \
--num_warmup_steps 100 \
--deepspeed --seed 1234 \
--enable_hybrid_engine \
--inference_tp_size ${NUM_NODES} \
--tp_gather_partition_size ${NUM_GPUS} \
--actor_zero_stage $ACTOR_ZERO_STAGE \
--critic_zero_stage $CRITIC_ZERO_STAGE \
--actor_gradient_checkpointing \
--critic_gradient_checkpointing \
--output_dir $OUTPUT |&
tee $OUTPUT/training.log
```
the error is:
```
Traceback (most recent call last):
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/main.py", line 562, in <module>
main()
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/main.py", line 471, in main
out = trainer.generate_experience(prompts)
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/ppo_trainer.py", line 97, in generate_experience
seq = self._generate_sequence(prompts)
File "DeepSpeedExamples/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/ppo_trainer.py", line 73, in _generate_sequence
seq = self.actor_model.module.generate(prompts,
File "/dcv/lib/python3.9/site-packages/deepspeed/runtime/hybrid_engine.py", line 245, in generate
generate_ret_vals = self._generate(*inputs, **kwargs)
File "/dcv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/dcv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1437, in generate
return self.greedy_search(
File "/dcv/lib/python3.9/site-packages/transformers/generation/utils.py", line 2248, in greedy_search
outputs = self(
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1208, in _call_impl
result = forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 913, in forward
transformer_outputs = self.transformer(
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1208, in _call_impl
result = forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/transformers/models/bloom/modeling_bloom.py", line 786, in forward
outputs = block(
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1208, in _call_impl
result = forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py", line 147, in forward
self.attention(input,
File "/dcv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/dcv/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 160, in forward
context_layer, key_layer, value_layer = self.compute_attention(qkv_out=qkv_out,
File "/dcv/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 253, in compute_attention
attn_mask=((1 - input_mask).half() * minus_inf),
File "/dcv/lib/python3.9/site-packages/torch/_tensor.py", line 39, in wrapped
return f(*args, **kwargs)
File "/dcv/lib/python3.9/site-packages/torch/_tensor.py", line 833, in __rsub__
return _C._VariableFunctions.rsub(self, other)
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
```
I want to know why this PR (https://github.com/huggingface/transformers/pull/18141/files) changed the following code:
`expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask`
to:
`expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask | combined_attention_mask`
Because of this change, `causal_mask` is now a torch.bool tensor rather than torch.int64.
### Expected behavior
`causal_mask` should be a torch.int64 tensor, not torch.bool | 05-11-2023 02:06:11 | 05-11-2023 02:06:11 | Hi @shenzhuo,
The linked PR was closed and the commits not added - the PR introducing the change was #18344. From the PR description, it seems converting `causal_mask` to `bool` was intentional and not a side-effect. I'll let @thomasw21 explain why this change was made :) <|||||>Yeah so there's no reason to pass `attention_mask` to be int64 since basically it stored boolean values. I think the reason why this is breaking is because of `deepspeed`, the forward function is overriden by custom operations on `deepspeed` side: https://github.com/microsoft/DeepSpeed/blame/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L168
I would suggest to fix this in DS side, ie probable changing `(1 - input_mask).to(target_dtype) * minus_inf)` to something like `(~input_mask).to(target_type) * minus_inf`<|||||>> Yeah so there's no reason to pass `attention_mask` to be int64 since basically it stored boolean values. I think the reason why this is breaking is because of `deepspeed`, the forward function is overriden by custom operations on `deepspeed` side: https://github.com/microsoft/DeepSpeed/blame/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L168
>
> I would suggest to fix this in DS side, ie probable changing `(1 - input_mask).to(target_dtype) * minus_inf)` to something like `(~input_mask).to(target_type) * minus_inf`
I think DeepSpeed uses `(1 - input_mask).to(target_dtype) * minus_inf)` because their framework is tested against the OPT model. At the same time, many modeling_x.py files in transformers return int64.
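To make the suggested change concrete, a toy sketch with a boolean mask (illustrative only, not the actual DeepSpeed code, and `minus_inf` is just a large negative constant here):

```python
import torch

input_mask = torch.tensor([[True, True, False]])  # bool attention mask as produced by transformers
minus_inf = -10000.0

# (1 - input_mask) raises: "Subtraction, the `-` operator, with a bool tensor is not supported"
attn_bias = (~input_mask).half() * minus_inf  # logical not works: 0.0 where attended, -10000.0 where masked
```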
<|||||>Hum the specific module is called `BloomSelfAttention` https://github.com/microsoft/DeepSpeed/blob/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L171<|||||>> Hum the specific module is called `BloomSelfAttention` https://github.com/microsoft/DeepSpeed/blob/194053bd58947ac6a45363ba780c9dfb127d3064/deepspeed/ops/transformer/inference/ds_attention.py#L171
It's a bug. I think...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,279 | closed | xlm-roberta-xlarge doesn't exist | ### System Info
from https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl
`model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge")`
I can't find xlm-roberta-xlarge on https://huggingface.co/models
there's facebook/xlm-roberta-xl, but this is the raw masked-LM model and doesn't seem to work with XLMRobertaXLForSequenceClassification; it's missing the classifier head.
model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-large")
works just fine for me
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification
model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge")
### Expected behavior
I guess xlm-roberta-xlarge should be available, or the docs should be amended. | 05-10-2023 22:08:08 | 05-10-2023 22:08:08 | Hi @Jack000, thanks for reporting this issue.
This is indeed odd - the checkpoint being referenced doesn't seem to exist. In the original PR adding the model to the library - it seems the [checkpoints were added under the facebook org](https://github.com/huggingface/transformers/pull/13727#pullrequestreview-866391378). It's OK if the checkpoint used in the example doesn't have the weights for the specific head e.g. for [multiple choice for bert we use `bert-base-uncased`](https://huggingface.co/docs/transformers/v4.29.1/en/model_doc/bert#transformers.BertForMultipleChoice). Would you like to open a PR to update the checkpoint? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
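For reference, a sketch of what the doc example should presumably look like with the checkpoint that does exist on the Hub (the classification head is randomly initialised, which is expected for a base checkpoint):

```python
from transformers import XLMRobertaXLForSequenceClassification

# The checkpoint lives under the facebook org; only the base model weights are pretrained.
model = XLMRobertaXLForSequenceClassification.from_pretrained("facebook/xlm-roberta-xl")
```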
transformers | 23,278 | closed | AttributeError: module transformers.tools has no attribute DocumentQuestionAnsweringTool keeps appearing in transformers version 4.29.0 | ### System Info
- `transformers` version: 4.29.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi everyone, I stumbled across the following error while trying the new Transformers Agent:
```python
from huggingface_hub import login
login('my_token')
```
...
Token is valid.
...
Login successful.
```python
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
```
As soon as I try to instantiate the agent, the following error appears:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/tools/agents.py", line 469, in __init__
super().__init__(
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/tools/agents.py", line 199, in __init__
_setup_default_tools()
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/tools/agents.py", line 97, in _setup_default_tools
tool_class = getattr(tools_module, tool_class_name)
File "/Users/happydata/miniforge3/envs/huggingface-env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1165, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers.tools has no attribute DocumentQuestionAnsweringTool
```
In my case it didn't matter whether I tried BigCode, OpenAssistant or the OpenAiAgent.
### Expected behavior
I tried to follow the quickstart guide from https://huggingface.co/docs/transformers/transformers_agents, but it seems that the AttributeError with the transformers.tools module keeps appearing again and again.
Thank you for your help in advance! | 05-10-2023 21:28:39 | 05-10-2023 21:28:39 | In my case, re-installation solved the problem.
I got an error:
> Failed to import transformers.tools.agents ...`
Try this:
`$ pip install huggingface_hub>=0.14.1 git+https://github.com/huggingface/[email protected] diffusers accelerate datasets torch soundfile sentencepiece opencv-python openai`
And restart the connected ipykernel.<|||||>I had the same issue and installed these packages one by one, it seems that the "torch" lib missing is what causes this exact error.
> In my case, re-installation solved the problem.
>
> I got an error:
>
> > Failed to import transformers.tools.agents ...`
>
> Try this: `$ pip install huggingface_hub>=0.14.1 git+https://github.com/huggingface/[email protected] diffusers accelerate datasets torch soundfile sentencepiece opencv-python openai`
>
> And restart the connected ipykernel.
<|||||>@yerimJu @vitorrm Thank you for your answers, it looks like it works with the one-by-one reinstallation of the packages! |
transformers | 23,277 | closed | Fix doctest files fetch issue | Reverts huggingface/transformers#23271
Embarrassingly and unfortunately, the new job `tests_pr_documentation_tests` fails at the step `Get files to test` when the job is run on the `main` branch.
https://app.circleci.com/pipelines/github/huggingface/transformers/64235/workflows/54a99003-258e-4c2a-8366-b4461b3ec33f/jobs/794628/parallel-runs/0/steps/0-113
I will have to take a look - the log is not informative. | 05-10-2023 20:42:41 | 05-10-2023 20:42:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>As said offline I don't think we need to revert urgently, we can just ignore the red check on main.<|||||>@sgugger FYI The doctest PR also has some problem running on our GitHub Actions runner. See error below.
I will take a look, but this PR could be merged (without fixing the following issue) once you think the changes are good.
```bash
_____ ERROR collecting src/transformers/generation/configuration_utils.py ______
import file mismatch:
imported module 'transformers.generation.configuration_utils' has this __file__ attribute:
/transformers/src/transformers/generation/configuration_utils.py
which is not the same as the test file we want to collect:
/__w/transformers/transformers/src/transformers/generation/configuration_utils.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
```<|||||>> @sgugger FYI The doctest PR also has some problem running on our GitHub Actions runner. See error below. I will take a look, but this PR could be merged (without fixing the following issue) once you think the changes are good.
>
> ```shell
> _____ ERROR collecting src/transformers/generation/configuration_utils.py ______
> import file mismatch:
> imported module 'transformers.generation.configuration_utils' has this __file__ attribute:
> /transformers/src/transformers/generation/configuration_utils.py
> which is not the same as the test file we want to collect:
> /__w/transformers/transformers/src/transformers/generation/configuration_utils.py
> HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
> ```
@sgugger A short fix for this issue is given in [the last commit](https://github.com/huggingface/transformers/pull/23277/commits/bada5b3d616bff32f7440408428a4b9ed13c503b).
The reason is that the file `conftest.py` has the line `from transformers.testing_utils import HfDoctestModule, HfDocTestParser` added in #23271. However, `transformers` is installed during the docker image build, which is different from the checkout used when the CI is run.
This change should be applied to other workflow file, but it's rare that we have such imports in the codebase. I will do it in a separate PR.<|||||>Thanks for all the explanation!<|||||>Going to merge as the failing tests are irrelevant and I have tried to re-run for a few times. |
transformers | 23,276 | closed | `transformers-cli` -> `huggingface-cli` | Leftover from the last PR - `transformers-cli` should be `huggingface-cli` now. | 05-10-2023 18:26:07 | 05-10-2023 18:26:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>lgtm! |
transformers | 23,275 | closed | Remove missplaced test file | # What does this PR do?
I just stumbled upon this test file living in `src/transformers/data` which is never executed. Upon verification with @gante, nothing inside it is necessary as it predates the logits processors, so we can safely remove it. | 05-10-2023 18:17:05 | 05-10-2023 18:17:05 |
transformers | 23,274 | closed | Fix link displayed for custom tools | # What does this PR do?
This fixes the link displayed when a custom tool downloads code files from the Hub. | 05-10-2023 18:12:21 | 05-10-2023 18:12:21 | |
transformers | 23,273 | closed | replaced assert with raise ValueError for t5, switch_transformers, pix2struct, mt5, longt5, gptsan_japanese. | # What does this PR do?
As suggested by @sanchit-gandhi [here](https://github.com/huggingface/transformers/pull/21785#discussion_r1184787328), this PR replaces `assert` with `raise ValueError` for the following models: t5, switch_transformers, pix2struct, mt5, longt5, gptsan_japanese.
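The general shape of the change, sketched on a made-up check (the actual conditions touched in the PR differ per model):

```python
num_heads_expected = 12
heads = list(range(8))

# before: assert len(heads) == num_heads_expected, "mismatched number of heads"

# after: an explicit, informative exception
if len(heads) != num_heads_expected:
    raise ValueError(f"Expected {num_heads_expected} attention heads, but got {len(heads)}")
```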
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sanchit-gandhi | 05-10-2023 17:35:35 | 05-10-2023 17:35:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sanchit-gandhi just pushed the change that you requested.<|||||>Awesome thanks - requesting a final review!<|||||>Hi @amyeroberts, just pushed the changes you requested!
Let me know if any more changes are needed.<|||||>Merging as the errors are unrelated to this PR and have been resolved on main |
transformers | 23,271 | closed | Bring back the PR `Refactor doctests + add CI` to `main` | Reverts huggingface/transformers#23245
So we can keep the PR #22987 regarding the new doctest way, but without exposing `doctest_utils` to `src/transformers`.
@sgugger Let me know if you prefer to move this `doctest_utils.py` to `tests` folder. | 05-10-2023 16:38:50 | 05-10-2023 16:38:50 | > Can you put the new testing utils (from `doctest_utils`) in `testing_utils`, so it all goes in the same place?
In this case, am I allowed to put the import of `pytest` and `_pytest` on the top level of `testing_utils`? I am asking because I see in that file there is
```python
try:
import pytest # We don't need a hard dependency on pytest in the main library
except ImportError:
return test_case
```
<|||||>There is no direct import into `testing_utils.py` so this should be fine to remove the try except (we will have until next release to make sure it doesn't create a new core dep of Transformers :sweat_smile: )<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23271). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for taking care of this! Think the filtered list could be obtain in a cleaner way with some bash commands but otherwise great 👍🏻 <|||||>@ArthurZucker It was finally going to a `tests_fetcher.py`
https://github.com/huggingface/transformers/pull/23277/files
The bash command was just getting too complex ...<|||||>Nice! Thanks for following up |
transformers | 23,270 | closed | OPT/BioGPT: Improved attention mask shape exception | # What does this PR do?
Related exception: #23197
Currently, in OPT/BioGPT, if we don't pass an attention mask or if we pass an attention mask with a wrong shape, an exception will be printed in the attention layer: `Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}`. This exception has the following problems:
1. It checks the expanded attention mask, not the attention mask as set by the user (or the default). Even as a maintainer, I can't immediately decode the error message, as I would need to know how the mask is expanded for the model in question.
2. If there is a bug computing `bsz`, `tgt_len`, or `src_len`, the exception will be misleading.
In #23197 we found that when the length of `past_key_values` is equal to the length of the `attention_mask`, `tgt_len` and `src_len` will be wrong (in at least these 2 models), triggering the exception with an incorrect message. This PR solves both issues: it prevents the incorrect computation of `tgt_len` and `src_len` by checking the shape of `attention_mask` in the main model class, printing a user-friendly message.
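A rough sketch of the kind of up-front check described (names are illustrative and the exact message in the PR may differ):

```python
def check_attention_mask(attention_mask, batch_size, seq_length, past_key_values_length):
    expected_length = past_key_values_length + seq_length
    if attention_mask.shape != (batch_size, expected_length):
        raise ValueError(
            f"The provided attention mask has length {attention_mask.shape[-1]}, but its length should be "
            f"{expected_length} (sum of the lengths of the current and past inputs)"
        )
```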
If this PR gets approved, I will add a similar check to the other models. | 05-10-2023 16:15:01 | 05-10-2023 16:15:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,269 | closed | Render custom tool docs a bit better | # What does this PR do?
This PR disables syntax highlighting on the blocks that shouldn't have it. | 05-10-2023 15:23:38 | 05-10-2023 15:23:38 | |
transformers | 23,268 | closed | Convert numpy arrays to lists before saving the evaluation metrics as json | eval_metrics contains:
- mean_iou: float
- mean_accuracy: float
- overall_accuracy: float
- per_category_iou: ndarray of shape (num_labels,)
- per_category_accuracy: ndarray of shape (num_labels,)
The ndarrays need to be converted to lists for JSON serialization.
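A minimal sketch of the conversion (field values are toy numbers and the output file name is illustrative):

```python
import json

import numpy as np

eval_metrics = {
    "mean_iou": 0.47,
    "mean_accuracy": 0.61,
    "overall_accuracy": 0.83,
    "per_category_iou": np.array([0.9, 0.2, 0.3]),
    "per_category_accuracy": np.array([0.95, 0.4, 0.5]),
}

# ndarray values are not JSON serializable, so convert them to plain lists first
serializable = {k: v.tolist() if isinstance(v, np.ndarray) else v for k, v in eval_metrics.items()}

with open("eval_results.json", "w") as f:
    json.dump(serializable, f, indent=2)
```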
| 05-10-2023 15:20:23 | 05-10-2023 15:20:23 | @sgugger Can you please review this small update.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> LGTM! Can you just run `make style` on your branch to fix the styling issue?
yes, the styling issue is now fixed. |
transformers | 23,267 | closed | Fix new line bug in chat mode for agents | # What does this PR do?
Depending on the agent used we might get too many new lines here. Just stripping everything and adding the right amount fixes this. | 05-10-2023 15:07:31 | 05-10-2023 15:07:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23267). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,266 | closed | Refine documentation for Tools | # What does this PR do?
Refine a bit the documentation of agents and tools. | 05-10-2023 14:39:39 | 05-10-2023 14:39:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,262 | closed | agent fail | ### System Info
transformers==4.30.0.dev0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
from PIL import Image
x = Image.open('test.jpg')
agent.run("here is a image named `image`, what does it belong to?", image=x, remote=True)
==Explanation from the agent==
I will use the following tool: `image_classifier` to classify the image.
==Code generated by the agent==
label = image_classifier(image)
print(f"The label is {label}.")
==Result==
Evaluation of the code stopped at line 0 before the end because of the following error:
It is not permitted to evaluate other functions than the provided tools (tried to execute image_classifier).
agent.run("here is a image named `image`, what does it belong to?", image='test.jpg', remote=True)
==Explanation from the agent==
I will use the following tool: `image_classifier` to classify the image.
==Code generated by the agent==
label = image_classifier(image)
print(f"The label is {label}.")
==Result==
Evaluation of the code stopped at line 0 before the end because of the following error:
It is not permitted to evaluate other functions than the provided tools (tried to execute image_classifier).
### Expected behavior
run normally | 05-10-2023 14:25:02 | 05-10-2023 14:25:02 | cc @sgugger <|||||>Run normally meaning? I'm not even completely sure what you want the agent to run so I don't see how the LLM could find out too ;-)
Make sure to:
1. Use openAI, sadly it's better than the opensource alternatives
2. refine your prompt input, we have a great guide for that [here](https://huggingface.co/docs/transformers/custom_tools#writing-good-user-inputs)<|||||>hi @sgugger ,
I am sorry for my bad description.
There are two problems when I run the command `agent.run("here is a image named image, what does it belong to?", image=x, remote=True)`
1. The agent returned the tool `image_classifier`, which is not in the toolbox. According to the base run prompt, it should only return tools from the toolbox. Is this just a limitation of the LLM's capability?
2. For tools like `image_classifier` or `image_caption`, the input is `image`. What type should `image` be? PIL, NumPy, or str (a local path)?
thanks!
<|||||>it seems that the param `remote=True` does not work, what tools can be loaded remotely?<|||||>The agent can return whatever the hell it wants. If it decides to use tools that don't exist, there is nothing we can do (again use openAI to get better results).
There is no image classifier tool. For tools working on images, the input type required is a standard PIL Image.
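A minimal sketch of a call that should work as intended, using the image-captioning tool that is in the default toolbox (prompt wording is illustrative; the available tool names can be inspected via `agent.toolbox`):

```python
from PIL import Image
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
image = Image.open("test.jpg")  # pass a PIL Image, not a path string
caption = agent.run("Caption the following `image`", image=image)
```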
As for your last comment `remote=True` on all tools.<|||||>@sgugger
thanks

for the param `remote=True`, it seems it does not work. Which tools support remote execution, and where can I look them up?<|||||>@ltm920716 there is a dataset that lists all the standard tools that use an endpoint for demonstration purposes and that you can use remotely: https://huggingface.co/datasets/huggingface-tools/default-endpoints
<|||||>@ltm920716 there is a dataset that lists all the standard tools that use an endpoint for demonstration purpose, and then that you can use remotely : https://huggingface.co/datasets/huggingface-tools/default-endpoints |
transformers | 23,261 | closed | Update Image segmentation description | null | 05-10-2023 13:27:13 | 05-10-2023 13:27:13 | |
transformers | 23,260 | closed | pin TF prob in docker files | # What does this PR do?
Same as in #23220 but for docker file | 05-10-2023 13:15:36 | 05-10-2023 13:15:36 | |
transformers | 23,259 | closed | Metadata update | Automatically updates the metadata to contain the `tool` tag. | 05-10-2023 12:49:30 | 05-10-2023 12:49:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23259). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,258 | open | Flaky Whisper PT-TF & PT-Flax Equivalence Test | ### System Info
transformers 4.29.0 dev
### Who can help?
@ArthurZucker @sanchit-gandhi @ydshieh
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Flaky test, so not reproducible. Example run where the error occurred:
* https://app.circleci.com/pipelines/github/huggingface/transformers/64100/workflows/b4463c5d-b3dc-4b00-a7cd-19acd096cb07/jobs/792381
* https://app.circleci.com/pipelines/github/huggingface/transformers/64111/workflows/dc9092c4-0673-46c7-b89f-f805bc20128c/jobs/792557
* https://app.circleci.com/pipelines/github/huggingface/transformers/64230/workflows/e2d42ca4-f367-4a85-9054-a0ea99e49849/jobs/794534
Occasionally, the PT-TF and PT-Flax whisper equivalence test fails. The tolerance was increased in #23257 and #23288 but the reason for recent failures has not yet been found.
### Expected behaviour
Equivalence tests reliably pass. | 05-10-2023 11:35:00 | 05-10-2023 11:35:00 | If this started happening recently, it might be related to https://github.com/huggingface/transformers/pull/21998
It's possible the feature extraction for PyTorch now gives different results than the TF / Flax versions. It shouldn't, but it's possible that a small difference in the preprocessed inputs is causing this. |
transformers | 23,257 | closed | Temporary tolerance fix for flaky whisper PT-TF equiv. test | # What does this PR do?
The PT-TF whisper tests have recently become flaky e.g. for [this CI run](https://app.circleci.com/pipelines/github/huggingface/transformers/64100/workflows/b4463c5d-b3dc-4b00-a7cd-19acd096cb07/jobs/792381).
Although the differences are still relatively small, they represent roughly a 2x increase in the largest absolute difference.
This PR temporarily increases the tolerance until the root cause is found. An issue will be opened and linked here for reference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 05-10-2023 11:33:54 | 05-10-2023 11:33:54 | Issue: #23258 <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,256 | closed | [`gpt`] Gpt2 fix half precision causal mask | # What does this PR do?
Applies a similar fix to https://github.com/huggingface/transformers/issues/23136 but for GPT2.
To reproduce:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto", load_in_8bit=True)
inputs = torch.LongTensor([[1, 1, 1], [1, 2, 1]]).to(0)
print(model(inputs))
```
The explanation is the same as the tagged PR:
> When going for low_cpu_mem_usage each parameter is force-casted to the expected dtype, which is force-set to torch.float16 for 8bit models.
> Therefore, for 8bit models (and also half-precision models) the causal mask is always force casted to float16 as it is part of the model's state dict, hence expected to be loaded from the Hub if the mask is available on the state dict.
> The fix is to add persistent=False and add a field _keys_to_ignore_on_unexpected (for removing the warnings) to avoid loading that causal mask from the state dict and assigning it to the buffer, and all causal masks that are saved as buffers should do the same to avoid unexpected behaviors.
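For illustration, roughly what a causal-mask buffer registration looks like with the fix applied (a toy module, not the actual GPT-2 code):

```python
import torch
from torch import nn


class ToyAttention(nn.Module):
    def __init__(self, max_positions: int = 1024):
        super().__init__()
        # persistent=False keeps the causal mask out of the state dict,
        # so loading a checkpoint never force-casts or overwrites it.
        self.register_buffer(
            "bias",
            torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view(
                1, 1, max_positions, max_positions
            ),
            persistent=False,
        )
```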
Some users reported that they were also able to reproduce this on the PyTorch main branch without load_in_8bit. I haven't managed to reproduce it that way and will take a deeper look.
cc @amyeroberts
| 05-10-2023 11:19:36 | 05-10-2023 11:19:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,255 | closed | Improve Docs of Custom Tools and Agents | # What does this PR do?
This PR improves the docs explaining how to customize prompts and corrects some grammar, spelling, and code snippets in both `transformers_agent.mdx` and `custom_tools.mdx`. Also, `agent.toolbox` is made a getter property, which should help with documentation and prevent the user from overwriting the attribute completely.
| 05-10-2023 11:03:00 | 05-10-2023 11:03:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23255). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,254 | closed | Making `safetensors` a core dependency. | # What does this PR do?
Making `safetensors` a core dependency.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-10-2023 10:58:53 | 05-10-2023 10:58:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> If it is in the `install_requires` it doesn't need to be anywhere else.
`tokenizers` is in the `install_requires` too, yet in a bunch of other places (I merely copied it). Isn't `tokenizers` a core dependency ? <|||||>Yes it is. It was not added by me and way before I was asked to review PRs for Transformers ;-)<|||||>Linked blogpost : https://github.com/huggingface/blog/pull/1096<|||||>This means that weights are now always loaded in safetensors format but still saved in PyTorch format no? Think this is a good first step. Don't see a problem with having `safetensors` as a core dependency<|||||>> This means that weights are now always loaded in safetensors format but still saved in PyTorch format no? Think this is a good first step. Don't see a problem with having safetensors as a core dependency
Indeed.
The next step will be saving in `safetensors` first. But we need to let time pass so that we can ensure a vast majority of users have `safetensors`, so that users on a somewhat older transformers version can still load new models (fine-tuned versions of existing models)<|||||>Merging!
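For context, a minimal sketch of the two sides of this: loading picks up safetensors weights automatically when a repo provides them (and `safetensors` is installed), while saving in the new format is still an explicit opt-in:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # loads safetensors weights if the repo provides them
model.save_pretrained("my-gpt2", safe_serialization=True)  # opt in to writing a safetensors checkpoint
```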
transformers | 23,253 | closed | KeyError: 'num_special_tokens_to_add' | ### System Info
transformers==4.28.1
M2 MBP
OSX 13.2
Python 3.10.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments, TextDataset
# Step 1: Load the pre-trained GPT-2 model
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Step 2: Tokenize the training data
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
train_file_path = 'shakespeare.txt'
train_encodings = tokenizer(train_file_path)
# Step 3: Prepare the training data
train_dataset = TextDataset(train_encodings, file_path=train_file_path, block_size=512)
# Step 4: Create a TrainingArguments object
training_args = TrainingArguments(
output_dir='./results',
num_train_epochs=3,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
logging_steps=1000,
save_steps=5000,
evaluation_strategy='steps',
eval_steps=5000,
load_best_model_at_end=True
)
# Step 5: Instantiate a Trainer object
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset
)
# Step 6: Train the model
trainer.train()
```
[shakespeare.txt](https://github.com/huggingface/transformers/files/11440814/shakespeare.txt)
### Expected behavior
Successfully fine-tune the model.
**As is:**<br>
```bash
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:248 │
│ in __getattr__ │
│ │
│ 245 │ │
│ 246 │ def __getattr__(self, item: str): │
│ 247 │ │ try: │
│ ❱ 248 │ │ │ return self.data[item] │
│ 249 │ │ except KeyError: │
│ 250 │ │ │ raise AttributeError │
│ 251 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'num_special_tokens_to_add'
During handling of the above exception, another exception occurred:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/sarit/study/gpt4all/gpt2_fine_tune.py:12 in <module> │
│ │
│ 9 train_encodings = tokenizer(train_file_path) │
│ 10 │
│ 11 # Step 3: Prepare the training data │
│ ❱ 12 train_dataset = TextDataset(train_encodings, file_path=train_file_path, block_size=512) │
│ 13 │
│ 14 # Step 4: Create a TrainingArguments object │
│ 15 training_args = TrainingArguments( │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/data/datasets/language_modelin │
│ g.py:62 in __init__ │
│ │
│ 59 │ │ if os.path.isfile(file_path) is False: │
│ 60 │ │ │ raise ValueError(f"Input file path {file_path} not found") │
│ 61 │ │ │
│ ❱ 62 │ │ block_size = block_size - tokenizer.num_special_tokens_to_add(pair=False) │
│ 63 │ │ │
│ 64 │ │ directory, filename = os.path.split(file_path) │
│ 65 │ │ cached_features_file = os.path.join( │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:250 │
│ in __getattr__ │
│ │
│ 247 │ │ try: │
│ 248 │ │ │ return self.data[item] │
│ 249 │ │ except KeyError: │
│ ❱ 250 │ │ │ raise AttributeError │
│ 251 │ │
│ 252 │ def __getstate__(self): │
│ 253 │ │ return {"data": self.data, "encodings": self._encodings} │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
``` | 05-10-2023 10:32:03 | 05-10-2023 10:32:03 | Hi @elcolie,
The error is arising because `TextDataset` takes a `tokenizer` object as its first argument for instantiation. `train_encodings` is a dictionary containing the input ids and attention mask for the text input "shakespeare.txt" to be fed to the model. This is what you want:
```
train_dataset = TextDataset(tokenizer, file_path=train_file_path, block_size=512)
```
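Since `TextDataset` is on its way out (see the note just below), a rough sketch of the equivalent 🤗 Datasets preprocessing for a plain-text file, reusing the `tokenizer` from the snippet above (column names are illustrative):

```python
from datasets import load_dataset

raw = load_dataset("text", data_files={"train": "shakespeare.txt"})
tokenized = raw.map(lambda batch: tokenizer(batch["text"]), batched=True, remove_columns=["text"])
```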
Note, TextDataset is deprecated and will soon be removed from the library. Preprocessing of datasets should be handled with the 🤗 Datasets library. You can see examples of how to use it in our [example scripts](https://github.com/huggingface/transformers/tree/main/examples) e.g. [this one for language modeling](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).<|||||>@amyeroberts
I got a new error. Thank you :)
```bash
Traceback (most recent call last):
File "/Users/sarit/study/gpt4all/gpt2_fine_tune.py", line 37, in <module>
trainer.train()
File "/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py", line 2692, in training_step
inputs = self._prepare_inputs(inputs)
File "/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py", line 2639, in _prepare_inputs
raise ValueError(
ValueError: The batch received was empty, your model won't be able to train on it. Double-check that your training dataset contains keys expected by the model: input_ids,past_key_values,attention_mask,token_type_ids,position_ids,head_mask,inputs_embeds,encoder_hidden_states,encoder_attention_mask,labels,use_cache,output_attentions,output_hidden_states,return_dict,label,label_ids,labels.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/sarit/study/gpt4all/gpt2_fine_tune.py:37 in <module> │
│ │
│ 34 ) │
│ 35 │
│ 36 # Step 6: Train the model │
│ ❱ 37 trainer.train() │
│ 38 │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:1662 in train │
│ │
│ 1659 │ │ inner_training_loop = find_executable_batch_size( │
│ 1660 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1661 │ │ ) │
│ ❱ 1662 │ │ return inner_training_loop( │
│ 1663 │ │ │ args=args, │
│ 1664 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1665 │ │ │ trial=trial, │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:1929 in │
│ _inner_training_loop │
│ │
│ 1926 │ │ │ │ │ with model.no_sync(): │
│ 1927 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1928 │ │ │ │ else: │
│ ❱ 1929 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1930 │ │ │ │ │
│ 1931 │ │ │ │ if ( │
│ 1932 │ │ │ │ │ args.logging_nan_inf_filter │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:2692 in │
│ training_step │
│ │
│ 2689 │ │ │ `torch.Tensor`: The tensor with training loss on this batch. │
│ 2690 │ │ """ │
│ 2691 │ │ model.train() │
│ ❱ 2692 │ │ inputs = self._prepare_inputs(inputs) │
│ 2693 │ │ │
│ 2694 │ │ if is_sagemaker_mp_enabled(): │
│ 2695 │ │ │ loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulatio │
│ │
│ /Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py:2639 in │
│ _prepare_inputs │
│ │
│ 2636 │ │ """ │
│ 2637 │ │ inputs = self._prepare_input(inputs) │
│ 2638 │ │ if len(inputs) == 0: │
│ ❱ 2639 │ │ │ raise ValueError( │
│ 2640 │ │ │ │ "The batch received was empty, your model won't be able to train on it. │
│ 2641 │ │ │ │ f"training dataset contains keys expected by the model: {','.join(self._ │
│ 2642 │ │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: The batch received was empty, your model won't be able to train on it. Double-check that your training dataset contains keys expected by the model:
input_ids,past_key_values,attention_mask,token_type_ids,position_ids,head_mask,inputs_embeds,encoder_hidden_states,encoder_attention_mask,labels,use_cache,output_attentions,output_hidden_st
ates,return_dict,label,label_ids,labels.
```
|
transformers | 23,252 | closed | Add document-question-answering in task_summary | # What does this PR do?
From issue #18926
This PR adds Document Question Answering summary to task_summary.mdx
It also provides an example using pipeline and this [model](https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa)
## Who can review?
@stevhliu
| 05-10-2023 09:46:03 | 05-10-2023 09:46:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@stevhliu I only changed the docs. I don't know why the checks keep failing. It says it is having problems with ffmpeg?<|||||>Thank you so much for you help @stevhliu I think I should open a new PR. I made a mess here<|||||>I created a new pull request #23318 closing this PR |
transformers | 23,251 | closed | Check for Bool instead of Optionals | Hey guys,
forgive me if this note is naive (I'm not a Python professional) but during debugging the code I got thrown off by this line:
```python
if return_dict_in_generate and output_scores:
    beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices))))
```
https://github.com/huggingface/transformers/blame/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/utils.py#L2976
It seems like you're simply checking whether `return_dict_in_generate` and `output_scores` are not `None` instead of checking the underlying `Bool`s. I assume this is intended, correct? I'm asking because I passed `False` for both values and was wondering why it would still run the `beam_indices = tuple...` line. | 05-10-2023 09:09:06 | 05-10-2023 09:09:06 | Hi @seboslaw, thanks for raising this issue.
I don't believe the logic there is checking for `None` values. In [L2850](https://github.com/huggingface/transformers/blame/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/utils.py#L2850), `return_dict_in_generate` is set to either the bool value passed in, or defaults to the bool value in the config if unset / is None. The same happens to `output_scores` in [L2843](https://github.com/huggingface/transformers/blame/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/utils.py#LL2843C18-L2843C18).
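For context, the resolution described above can be paraphrased like this (a sketch, not the literal source):
```python
# Paraphrase of how generate() falls back to the (generation) config when a flag is None:
def resolve(flag, config_default):
    return flag if flag is not None else config_default

output_scores = resolve(None, False)              # nothing passed -> config default
return_dict_in_generate = resolve(False, True)    # an explicit False wins over the config default
print(output_scores and return_dict_in_generate)  # False -> the beam_indices line is skipped
```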
However, even if it's less clear than an explicit `None` check e.g. `if x is None`, the `beam_indices` logic should only ever be executed if `return_dict_in_generate` and `output_scores` both evaluate to `True`, e.g. the following will only print out `True, True`.
```
for a, b in (
    (None, None), (None, False), (False, None), (True, None), (None, True), (True, False), (False, True), (True, True)
):
    if a and b:
        print(a, b)
```
i.e. if the `beam_indices` line is still executing when both values are set to `False`, there's probably a bug somewhere. Could you follow the issue template and provide details so that we can help debug this? Specifically:
* The running environment: run `transformers-cli env` in the terminal and copy-paste the output
* A minimal reproducible code snippet we can copy and run to replicate the issue? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,250 | open | skip_special_tokens has different behavior between slow and fast tokenizer | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, recently I found some subtle differences between the slow and fast tokenizers. Here is an example:
```python
from transformers import AutoTokenizer, T5Tokenizer
path = "t5-small"
text = "this is a ஐ apple"
fast_tokenizer = AutoTokenizer.from_pretrained(path)
num = fast_tokenizer.add_tokens(["ஐ"], special_tokens=True)
assert num == 1
ids = fast_tokenizer(text)["input_ids"]
fast_tokenizer.decode(ids, skip_special_tokens=True) # 'this is a apple'
slow_tokenizer = T5Tokenizer.from_pretrained(path)
num = slow_tokenizer.add_tokens(["ஐ"], special_tokens=True)
assert num == 1
ids = slow_tokenizer(text)["input_ids"]
slow_tokenizer.decode(ids, skip_special_tokens=True) # 'this is a ஐ apple'
```
Here is more information about the issue. I'm not a native English speaker, so I hope this is understandable.
- I know that in the first situation the fast tokenizer uses 🤗 Tokenizers, which invokes `tokenizers.Tokenizer.add_special_tokens(tokens)`; the token `ஐ` is therefore added to the vocabulary, viewed as a "special token", and [never processed by tokenizer.model](https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.Tokenizer.add_special_tokens).
- In the second situation, when decoding, the slow tokenizer treats the added token `ஐ` as a "normal token", so it is not skipped. By the way, I read the related source code: when `skip_special_tokens=True`, the slow tokenizer only skips ids in `self.all_special_ids`, but `ஐ` is not stored there; it is only in `self.added_tokens_encoder`.
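To make the second bullet concrete, a paraphrase (not the literal source) of the slow decode behaviour described there:
```python
def slow_decode_paraphrase(tokenizer, ids, skip_special_tokens=True):
    # Only ids in `all_special_ids` are dropped; an id that lives only in
    # `added_tokens_encoder` (like the added "ஐ") survives, which matches the output above.
    kept = [i for i in ids if not (skip_special_tokens and i in tokenizer.all_special_ids)]
    return tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(kept))
```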
I read some 🤗 official documents and struggled to figure out the meaning of the so-called "special token"; it turns out to be a subtle concept. Here is my understanding: tokens can be divided into these categories:
- normal tokens: these tokens can be split
- control tokens (the name is inspired by [SentencePiece](https://github.com/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb)): `bos_token`, `eos_token`, ..., `additional_special_tokens`. The major purpose of these tokens is in the encode **[post-processing](https://huggingface.co/docs/tokenizers/pipeline)** pipeline. When these tokens appear in the input text, the slow tokenizer **in most cases** also includes them in `self.unique_no_split_tokens`, so they **will not be split**; I don't know how the fast tokenizer handles this case.
- user-added tokens:
  - If the token is already in the vocab, it can still be marked as a "special token", and this token will never be split afterwards (but it cannot be treated exactly the same as control tokens in some subtle situations).
  - If the token is not in the vocab, it will be added (a new token_id is allocated to it), and this token will also never be split.
  So, in both cases, these user-added tokens will never be split.
Please let me know if there are any misunderstandings.
Several weeks ago, I submitted [issue 23001](https://github.com/huggingface/transformers/issues/23001) related to the `return_overflowing_tokens` behavior, which was considered a specific feature of the fast tokenizer, so it's a feature, not a bug. Generally, I want to know whether the differences between the slow and fast tokenizers should be viewed as features or as bugs.
### Expected behavior
The slow tokenizer should behave the same as the fast tokenizer. | 05-10-2023 06:40:21 | 05-10-2023 06:40:21 | I'd like to confirm my understanding of the concept, since [PR 23312](https://github.com/huggingface/transformers/pull/23312) is in progress:
In 🤗 Transformers, for both slow and fast tokenizers, there are only two types of tokens:
- ***normal tokens***: these tokens can be split. They cannot be added, but when `add_tokens(tokens, special_tokens=True)` is called and the `tokens` to be added are already among the ***normal tokens***, they will be marked as ***special tokens*** and will not be split.
- ***special tokens***: these tokens cannot be split; they include:
  - `eos_token`, `bos_token`, ..., `additional_special_tokens`, which are defined in `SpecialTokensMixin`
  - user-added tokens via `add_tokens(tokens)`. (1) When the parameter `special_tokens=False` is set and a token in `tokens` is already among the ***normal tokens***, nothing is done to that token; (2) when `special_tokens=True` is set and a token in `tokens` is already among the ***normal tokens***, it is marked as a ***special token*** and will not be split;
In both slow and fast tokenizer, `tokenizer.decode(ids, skip_special_tokens=True)` will skip all ***special tokens***.
Please let me know if there are any misunderstandings.<|||||>cc @younesbelkada <|||||>Hey! Thanks for reporting this!
- Differences between fast and slow are sometimes bugs, sometimes features, which is what makes it a bit complicated.
Now about the core of the issue, you have a good grasp of what is going on, good job! 🤗 And thanks for taking the time to dig in. T5 is a bit of a special case because it uses a hack in the `_convert_token_to_ids` method.
The core issue is that the `additional_special_tokens` list and the `added_special_tokens` encoder and decoder are not perfectly linked. Updating one does not update the other, which is a bug. Documentation is also rather scarce on how we use the `additional_special_tokens`; I am trying to regroup issues linked to that to create a proper fix. Will have a look at the PR!
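To see the mismatch being described, one can inspect the relevant attributes on a slow tokenizer (illustrative only; attribute names as used elsewhere in this thread):
```python
from transformers import T5Tokenizer

slow = T5Tokenizer.from_pretrained("t5-small")
slow.add_tokens(["ஐ"], special_tokens=True)

print(slow.added_tokens_encoder)       # the added token gets an id here...
print(slow.additional_special_tokens)  # ...but is not registered in this list
print(slow.all_special_ids)            # and decode(skip_special_tokens=True) only skips these ids
```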
<|||||>One thing is that some of the added tokens can be `non special tokens`, which is why you have:
- normal tokens (from the original vocab file of the SPModel, for example)
- special tokens (which can be added in the additional special tokens, or control tokens which are class attributes) that behave the same
- added normal tokens, which should not be split, and have their own index. These are useful when a token is missing from the SPModel, which you can never touch. <|||||>Thanks for your reply. For the example with the slow and fast tokenizers, which behavior is expected?
<|||||>In this case, the `fast` is correct: when we ask to skip special tokens when decoding, we expect all the special tokens to be skipped. <|||||>It will be addressed in the linked PR. This is mostly due to the fact that the slow tokenizer was not properly added to the list of `additional_special_tokens` when being added using `add_tokens`. The refactoring will prevent this from happening! |
transformers | 23,249 | closed | Every call to the generate method will repeatedly print "Generate config {config}" on the console | https://github.com/huggingface/transformers/blob/3335724376319a0c453049d0cd883504f530ff52/src/transformers/generation/configuration_utils.py#L577
Is it feasible to delete this line of code? Or is there a better way?
| 05-10-2023 04:12:21 | 05-10-2023 04:12:21 | cc @gante <|||||>Hey @Silypie 👋
Consider the following points:
1. `logger.info` messages are not printed by default (a quick way to check or silence this is sketched right after this list)
2. Logging with `info` on `from_dict` configuration methods is also standard across the library
3. This line should only be reached from `.generate()` in a legacy setup
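Regarding point 1, this can be checked or silenced with the standard `transformers` logging helpers:
```python
from transformers.utils import logging

print(logging.get_verbosity())   # 20 (INFO) or lower means `logger.info` messages are shown
logging.set_verbosity_warning()  # hides the "Generate config ..." info message
```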
Because of these 3 points, I'm biased toward not changing this behavior. Nevertheless, it may be a bug -- can you share a short stand-alone script so I can reproduce the issue? |
transformers | 23,248 | closed | Incorrect preprocessing in run_t5_mlm_flax.py | ### System Info
Hi there!
I am running the run_t5_mlm_flax.py as is and noticed this error
`ValueError: `input_ids` are incorrectly preprocessed. `input_ids` length is 922, but should be 1024.`
while running
```
for step, batch_idx in enumerate(tqdm(train_batch_idx, desc="Training...", position=1)):
    samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
    model_inputs = data_collator(samples)
```
I see the following output from a demo e.g.
`print(tokenizer.batch_decode(input_ids, skip_special_tokens=False)[0])`
gives
`<unk> file_gif></s> Stacy: OK Stacy: let's go over the list one more time! Angelica: haha, you don't have to do it... Stacy:<extra_id_0>? of course I do!<extra_id_1> acy: I'm your maid of honor<extra_id_2> OK ;* <unk> <extra_id_3> of songs for the DJ? Angelica: sent Stacy: Flower arrangements and your bouquet? Angelica: done Stacy: comfortable shoes for after midnight? Angelica: bought and packed Stacy: Make-up and hair? Angelica: scheduled Angelica: they'll be at my place at 10 am St<extra_id_4> How much time<extra_id_5> we need? Angelica: around 3 hours the both of us? Stacy: OK, that gives us enough time to go get dressed and get to the church Angelica: Yeah, my mom wants to have her hair done as well, but we'll go seperate<extra_id_6> OK Angelica: anything else? Stacy<extra_id_7> ica: Nick's got them Stacy: Remind him to BRING them :D Angelica: maybe you're right... ;)<extra_id_8> in: Have u watched<extra_id_9>? Izayah: No what's it about? Westin: I don't know yet Izayah: So why are you asking? Westin: Haha I wanna watch this but it's a fantasy movie Izayah<extra_id_10> Hmm not into such movies Westin: Neither me but this one<extra_id_11> yah: Enjoy Westin<extra_id_12> Thanks anyway I<extra_id_13> </s> Macy: when is the deadline for our project? Mac<extra_id_14> or <extra_id_15> day Sloane: next monday :) Macy: oh shit, i better start working faster Veronica: yeah, monday - please make sure you have your part ready Veronica: Monica will be mad if we don<extra_id_16> it on time</s> Francesca: girls, I<extra_id_17> your advice Blake: what's up? Vivienne: yes? Francesca<extra_id_18> wants us to go on a dancing course<extra_id_19>'s not that i don't like it but I'm stressed<extra_id_20> can't dance,<extra_id_21> can barely walk :/ Blake: you know, the courses are to go and<extra_id_22> them...s<extra_id_23> you're a perfect candidate to try it Vivienne: that'<extra_id_24>, nobody who can dance would pay to go on a dancing course Francesca: I get it, but I'm so im<extra_id_25> doesn't work out as I want :/ Francesca: and Brian is 100% sure and is<extra_id_26> to stop freaking out Blake: oh come on, maybe it'<extra_id_27> something you will love? you won't know until you try Blake: sometimes you just have<extra_id_28> the deep end and see<extra_id_29> happens Vivienne: Blake's right Vivienne: c'mon Francesca: I'll reconsider it...maybe <extra_id_30> the parties where you can be the couple of the night Blake: exactly!!!!! Blake:<extra_id_31> re scared then Viv and I<extra_id_32> a couple<extra_id_33> hahaha<extra_id_34> this is absolutely fantastic, I'm in XDDDD Francesca:<extra_id_35> mg, really? xddd Blake: why not Blake: Viv, will you<extra_id_36> gf during that course? xd Viv: of course DARLING hahahahahaah Francesca: I can't believe it X<extra_id_37> already see myself introducing<extra_id_38> as the lesbian couple<extra_id_39> </s> Elizabeth: How about the cathedral? Kathleen: Eh probably there’<extra_id_40> <extra_id_41> tower... Elizabeth: Yes, there<extra_id_42> ;] Kathleen: No way, I’m not climbing some stupid stairs Elizabeth: You<extra_id_43>, it’ll not take long... Kathleen: Great, standing there alone<extra_id_44> organization! Elizabeth: How on earth am I<extra_id_45> are against anything I come up with!! 
Kathleen: Maybe you just have bad ideas ;/ Elizabeth: The rest of the group is not complaining, only you Kathleen:<extra_id_46> you<extra_id_47> about it Elizabeth: Listen, I’m done, I<extra_id_48> ask you about anything, you’ll see the program in a few days and tell me if you want to go<extra_id_49> It’s even worse, you promised everyone will have a chance to express their opinions! Elizabeth: But<extra_id_50> </s>`
and
`print(tokenizer.batch_decode(labels, skip_special_tokens=False)[0])`
gives
`<extra_id_0> are you kidding me<extra_id_1> St<extra_id_2>! Angelica:<extra_id_3> 3 Stacy: List<extra_id_4> acy:<extra_id_5> do<extra_id_6> ly Stacy:<extra_id_7> : the rings? Angel<extra_id_8> </s> West<extra_id_9> beasts of the southern wild<extra_id_10> :<extra_id_11> seems to be interesting Iza<extra_id_12> :<extra_id_13> zayah: Haha ok<extra_id_14> y: next monday<extra_id_15> thurs<extra_id_16> 't deliver<extra_id_17> need<extra_id_18> : Brian<extra_id_19>, it<extra_id_20> out...i<extra_id_21> I<extra_id_22> learn from<extra_id_23> o<extra_id_24> s right<extra_id_25> patient when something<extra_id_26> almost bullying me<extra_id_27> s<extra_id_28> to jump in<extra_id_29> what<extra_id_30> Vivienne: just think about all<extra_id_31> If you'<extra_id_32> can go as<extra_id_33> on that course ha<extra_id_34> Vivienne: Blake....<extra_id_35> o<extra_id_36> be my <extra_id_37> DDDD I<extra_id_38> you to my boyfriend<extra_id_39> I know and respect XD<extra_id_40> s<extra_id_41> a<extra_id_42> is <extra_id_43> can wait outside<extra_id_44>, nice<extra_id_45> supposed to organize anything when you<extra_id_46> Maybe<extra_id_47> just don’t know<extra_id_48> will not<extra_id_49> or not Kathleen:<extra_id_50> I</s>`
What am I missing here? Is there any helper script to run the data preprocessing ad hoc?
Thank you!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Occurs while running the following (adapted from `run_t5_mlm_flax.py`):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
from datasets import load_dataset
from itertools import chain
from dataclasses import dataclass
from transformers import (
    BatchEncoding,
    PreTrainedTokenizerBase
)
from typing import Dict, List, Optional
import numpy as np
import math  # used by generate_batch_splits when drop_last=False
from tqdm import tqdm

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Load dataset from the hub
datasets = load_dataset("samsum")

print(f"Train dataset size: {len(datasets['train'])}")
print(f"Test dataset size: {len(datasets['test'])}")


def tokenize_function(examples):
    return tokenizer(examples["dialogue"], return_attention_mask=False)


tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=4,
    load_from_cache_file=False,
)


def compute_input_and_target_lengths(inputs_length, noise_density=0.15, mean_noise_span_length=1.0):
    """This function is copy of `random_spans_helper <https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/t5/data/preprocessors.py#L2466>`__ .

    Training parameters to avoid padding with random_spans_noise_mask.
    When training a model with random_spans_noise_mask, we would like to set the other
    training hyperparameters in a way that avoids padding.
    This function helps us compute these hyperparameters.
    We assume that each noise span in the input is replaced by extra_tokens_per_span_inputs sentinel tokens,
    and each non-noise span in the targets is replaced by extra_tokens_per_span_targets sentinel tokens.
    This function tells us the required number of tokens in the raw example (for split_tokens())
    as well as the length of the encoded targets. Note that this function assumes
    the inputs and targets will have EOS appended and includes that in the reported length.

    Args:
        inputs_length: an integer - desired length of the tokenized inputs sequence
        noise_density: a float
        mean_noise_span_length: a float
    Returns:
        tokens_length: length of original text in tokens
        targets_length: an integer - length in tokens of encoded targets sequence
    """

    def _tokens_length_to_inputs_length_targets_length(tokens_length):
        num_noise_tokens = int(round(tokens_length * noise_density))
        num_nonnoise_tokens = tokens_length - num_noise_tokens
        num_noise_spans = int(round(num_noise_tokens / mean_noise_span_length))
        # inputs contain all nonnoise tokens, sentinels for all noise spans
        # and one EOS token.
        _input_length = num_nonnoise_tokens + num_noise_spans + 1
        _output_length = num_noise_tokens + num_noise_spans + 1
        return _input_length, _output_length

    tokens_length = inputs_length

    while _tokens_length_to_inputs_length_targets_length(tokens_length + 1)[0] <= inputs_length:
        tokens_length += 1

    inputs_length, targets_length = _tokens_length_to_inputs_length_targets_length(tokens_length)

    # minor hack to get the targets length to be equal to inputs length
    # which is more likely to have been set to a nice round number.
    if noise_density == 0.5 and targets_length > inputs_length:
        tokens_length -= 1
        targets_length -= 1
    return tokens_length, targets_length


expanded_inputs_length, targets_length = compute_input_and_target_lengths(
    inputs_length=1024,
    noise_density=0.15,
    mean_noise_span_length=1.0,
)


def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
    # customize this part to your needs.
    if total_length >= expanded_inputs_length:
        total_length = (total_length // expanded_inputs_length) * expanded_inputs_length
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + expanded_inputs_length] for i in range(0, total_length, expanded_inputs_length)]
        for k, t in concatenated_examples.items()
    }
    return result


tokenized_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    num_proc=4,
    load_from_cache_file=False,
)


@dataclass
class FlaxDataCollatorForT5MLM:
    """
    Data collator used for T5 span-masked language modeling.
    It is made sure that after masking the inputs are of length `data_args.max_seq_length` and targets are also of fixed length.
    For more information on how T5 span-masked language modeling works, one can take a look
    at the `official paper <https://arxiv.org/pdf/1910.10683.pdf>`__
    or the `official code for preprocessing <https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py>`__ .

    Args:
        tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`):
            The tokenizer used for encoding the data.
        noise_density (:obj:`float`):
            The probability with which to (randomly) mask tokens in the input.
        mean_noise_span_length (:obj:`float`):
            The average span length of the masked tokens.
        input_length (:obj:`int`):
            The expected input length after masking.
        target_length (:obj:`int`):
            The expected target length after masking.
        pad_token_id: (:obj:`int`):
            The pad token id of the model
        decoder_start_token_id: (:obj:`int):
            The decoder start token id of the model
    """

    tokenizer: PreTrainedTokenizerBase
    noise_density: float
    mean_noise_span_length: float
    input_length: int
    target_length: int
    pad_token_id: int
    decoder_start_token_id: int

    def __call__(self, examples: List[Dict[str, np.ndarray]]) -> BatchEncoding:
        # convert list to dict and tensorize input
        batch = BatchEncoding(
            {k: np.array([examples[i][k] for i in range(len(examples))]) for k, v in examples[0].items()}
        )

        input_ids = batch["input_ids"]
        batch_size, expandend_input_length = input_ids.shape

        mask_indices = np.asarray([self.random_spans_noise_mask(expandend_input_length) for i in range(batch_size)])
        labels_mask = ~mask_indices

        input_ids_sentinel = self.create_sentinel_ids(mask_indices.astype(np.int8))
        labels_sentinel = self.create_sentinel_ids(labels_mask.astype(np.int8))

        batch["input_ids"] = self.filter_input_ids(input_ids, input_ids_sentinel)
        batch["labels"] = self.filter_input_ids(input_ids, labels_sentinel)

        print(">>>>> sanity check <<<<<<<<<<")
        print("\n")
        print(">>>>> inputs <<<<<<<<<<")
        print(self.tokenizer.batch_decode(batch["input_ids"])[0])
        print("\n")
        print(">>>>> masks <<<<<<<<<<")
        print(self.tokenizer.batch_decode(batch["labels"])[0])
        print("\n")
        print(">>>>> sanity check <<<<<<<<<<")

        if batch["input_ids"].shape[-1] != self.input_length:
            raise ValueError(
                f"`input_ids` are incorrectly preprocessed. `input_ids` length is {batch['input_ids'].shape[-1]}, but"
                f" should be {self.input_length}."
            )

        if batch["labels"].shape[-1] != self.target_length:
            raise ValueError(
                f"`labels` are incorrectly preprocessed. `labels` length is {batch['labels'].shape[-1]}, but should be"
                f" {self.target_length}."
            )

        # to check that tokens are correctly preprocessed, one can run `self.tokenizer.batch_decode(input_ids)` and `self.tokenizer.batch_decode(labels)` here...
        return batch

    def create_sentinel_ids(self, mask_indices):
        """
        Sentinel ids creation given the indices that should be masked.
        The start indices of each mask are replaced by the sentinel ids in increasing
        order. Consecutive mask indices to be deleted are replaced with `-1`.
        """
        start_indices = mask_indices - np.roll(mask_indices, 1, axis=-1) * mask_indices
        start_indices[:, 0] = mask_indices[:, 0]

        sentinel_ids = np.where(start_indices != 0, np.cumsum(start_indices, axis=-1), start_indices)
        sentinel_ids = np.where(sentinel_ids != 0, (len(self.tokenizer) - sentinel_ids), 0)
        sentinel_ids -= mask_indices - start_indices

        return sentinel_ids

    def filter_input_ids(self, input_ids, sentinel_ids):
        """
        Puts sentinel mask on `input_ids` and fuse consecutive mask tokens into a single mask token by deleting.
        This will reduce the sequence length from `expanded_inputs_length` to `input_length`.
        """
        batch_size = input_ids.shape[0]

        input_ids_full = np.where(sentinel_ids != 0, sentinel_ids, input_ids)
        # input_ids tokens and sentinel tokens are >= 0, tokens < 0 are
        # masked tokens coming after sentinel tokens and should be removed
        input_ids = input_ids_full[input_ids_full >= 0].reshape((batch_size, -1))
        input_ids = np.concatenate(
            [input_ids, np.full((batch_size, 1), self.tokenizer.eos_token_id, dtype=np.int32)], axis=-1
        )
        return input_ids

    def random_spans_noise_mask(self, length):
        """This function is copy of `random_spans_helper <https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/t5/data/preprocessors.py#L2682>`__ .

        Noise mask consisting of random spans of noise tokens.
        The number of noise tokens and the number of noise spans and non-noise spans
        are determined deterministically as follows:
        num_noise_tokens = round(length * noise_density)
        num_nonnoise_spans = num_noise_spans = round(num_noise_tokens / mean_noise_span_length)
        Spans alternate between non-noise and noise, beginning with non-noise.
        Subject to the above restrictions, all masks are equally likely.

        Args:
            length: an int32 scalar (length of the incoming token sequence)
            noise_density: a float - approximate density of output mask
            mean_noise_span_length: a number

        Returns:
            a boolean tensor with shape [length]
        """
        orig_length = length

        num_noise_tokens = int(np.round(length * self.noise_density))
        num_nonnoise_tokens = length - num_noise_tokens
        # avoid degeneracy by ensuring positive numbers of noise and nonnoise tokens.
        num_noise_tokens = min(max(num_noise_tokens, 1), length - 1)
        # num_noise_tokens should be less than num_noise_tokens and num_nonnoise_tokens
        num_noise_spans = int(np.round(min(num_noise_tokens, num_nonnoise_tokens) / self.mean_noise_span_length))

        # avoid degeneracy by ensuring positive number of noise spans
        num_noise_spans = max(num_noise_spans, 1)

        # pick the lengths of the noise spans and the non-noise spans
        def _random_segmentation(num_items, num_segments):
            """Partition a sequence of items randomly into non-empty segments.

            Args:
                num_items: an integer scalar > 0
                num_segments: an integer scalar in [1, num_items]
            Returns:
                a Tensor with shape [num_segments] containing positive integers that add
                up to num_items
            """
            mask_indices = np.arange(num_items - 1) < (num_segments - 1)
            np.random.shuffle(mask_indices)
            first_in_segment = np.pad(mask_indices, [[1, 0]])
            segment_id = np.cumsum(first_in_segment)
            # count length of sub segments assuming that list is sorted
            _, segment_length = np.unique(segment_id, return_counts=True)
            return segment_length

        noise_span_lengths = _random_segmentation(num_noise_tokens, num_noise_spans)
        nonnoise_span_lengths = _random_segmentation(num_nonnoise_tokens, num_noise_spans)

        interleaved_span_lengths = np.reshape(
            np.stack([nonnoise_span_lengths, noise_span_lengths], axis=1), [num_noise_spans * 2]
        )
        span_starts = np.cumsum(interleaved_span_lengths)[:-1]
        span_start_indicator = np.zeros((length,), dtype=np.int8)
        span_start_indicator[span_starts] = True
        span_num = np.cumsum(span_start_indicator)
        is_noise = np.equal(span_num % 2, 1)

        return is_noise[:orig_length]


data_collator = FlaxDataCollatorForT5MLM(
    tokenizer=tokenizer,
    noise_density=0.15,
    mean_noise_span_length=3.0,
    input_length=1024,
    target_length=309,
    pad_token_id=0,
    decoder_start_token_id=0,
)


def generate_batch_splits(samples_idx: np.ndarray, batch_size: int, drop_last=True) -> np.ndarray:
    """Generate batches of data for a specified batch size from sample indices. If the dataset size is not divisible by
    the batch size and `drop_last` is `True`, the last incomplete batch is dropped. Else, it is returned."""
    num_samples = len(samples_idx)
    if drop_last:
        samples_to_remove = num_samples % batch_size
        if samples_to_remove != 0:
            samples_idx = samples_idx[:-samples_to_remove]
        sections_split = num_samples // batch_size
        samples_idx = samples_idx.reshape((sections_split, batch_size))
    else:
        sections_split = math.ceil(num_samples / batch_size)
        samples_idx = np.array_split(samples_idx, sections_split)
    return samples_idx


# Generate an epoch by shuffling sampling indices from the train dataset
num_train_samples = len(tokenized_datasets["train"])
# Avoid using jax.numpy here in case of TPU training
train_samples_idx = np.random.permutation(np.arange(num_train_samples))
train_batch_idx = generate_batch_splits(train_samples_idx, 4)

for step, batch_idx in enumerate(tqdm(train_batch_idx, desc="Training...", position=1)):
    samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
    model_inputs = data_collator(samples)
```
### Expected behavior
I expected to run the run_t5_mlm_flax.py script without an error | 05-10-2023 02:54:10 | 05-10-2023 02:54:10 | Hi @BSharmi,
The example code here isn't exactly the same as the `run_t5_mlm_flax.py` script and is missing some important lines. For example, [this one](https://github.com/huggingface/transformers/blob/291c5e9b256ad3ae970f8ef47d1693f3ae976a6e/examples/flax/language-modeling/run_t5_mlm_flax.py#LL660C10-L660C10), which enforces the length of the returned sequences from the tokenizer. Additionally, the model being imported is a PyTorch model - `T5ForConditionalGeneration` - whereas the flax equivalent would be required for the flax script: `FlaxT5ForConditionalGeneration`.
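One more thing that may be worth checking in the snippet above: `expanded_inputs_length`/`targets_length` are computed with `mean_noise_span_length=1.0`, while the collator is built with `mean_noise_span_length=3.0` and a hard-coded `target_length=309`. In `run_t5_mlm_flax.py` these stay consistent; a sketch reusing the names from the snippet above (`max_seq_length` is an assumed value, e.g. 512 as in the README example):
```python
expanded_inputs_length, targets_length = compute_input_and_target_lengths(
    inputs_length=max_seq_length,
    noise_density=0.15,
    mean_noise_span_length=3.0,
)

data_collator = FlaxDataCollatorForT5MLM(
    tokenizer=tokenizer,
    noise_density=0.15,
    mean_noise_span_length=3.0,    # same value as used above
    input_length=max_seq_length,
    target_length=targets_length,  # derived, not hard-coded
    pad_token_id=model.config.pad_token_id,
    decoder_start_token_id=model.config.decoder_start_token_id,
)
```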
I would first make sure you can run the script with the [example snippet from the README](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#train-model-2) and then start to adapt to the new use case. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,246 | closed | Fix `from_config` | # What does this PR do?
Resolves https://github.com/huggingface/transformers/issues/23241
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 05-09-2023 19:27:01 | 05-09-2023 19:27:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,245 | closed | Revert "[Doctests] Refactor doctests + add CI" | Reverts huggingface/transformers#22987
This PR created a hard dependency on `pytest` which we don't want in Transformers. Looking a bit more it would be better if the whole `doctest_utils.py` module lived outside of the Transformers library, so it should be structured differently. | 05-09-2023 19:24:00 | 05-09-2023 19:24:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23245). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,244 | closed | Hot Fix | # What does this PR do?
Fix the failing GitHub Action `Update Transformers metadata` due to the missing `pytest` after PR #22987. But it's kind of strange that a simple `from transformers.utils import direct_transformers_import` needs `pytest`. Maybe we need to rethink whether to have `from .doctest_utils import HfDocTestParser` inside the file `transformers/utils/__init__.py`. | 05-09-2023 19:02:32 | 05-09-2023 19:02:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>No need to have this PR anymore after #23245 and the decision made there.
transformers | 23,243 | closed | CTC example: updated trainer parameters to save tokenizer | The current example only passes `feature_extractor` to `Trainer` and thus `tokenizer` is not saved and won't be pushed to Hub. This PR fixes this by passing the `processor` to `Trainer`. It can probably be refactored further to get the tokenizer and feature_extractor from the instantiated processor, but with regard to behavior, this small fix seems to address the problem. | 05-09-2023 18:51:07 | 05-09-2023 18:51:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @MKhalusova! |
transformers | 23,242 | closed | CTC example: updated trainer parameters to save tokenizer | The current example only passes `feature_extractor` to `Trainer` and thus `tokenizer` is not saved and won't be pushed to Hub. This PR fixes this by passing the `processor` to `Trainer`. It can probably be refactored further to get the tokenizer and feature_extractor from the instantiated processor, but with regard to behavior, this small fix seems to address the problem.
| 05-09-2023 18:47:10 | 05-09-2023 18:47:10 | Sorry, wrong branch, will open a new PR<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23242). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,241 | closed | `from_config` errors for `bigcode/santacoder` | ### System Info
transformers commit (current main branch): c34a525d2
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoConfig, AutoModelForCausalLM
model_name = "bigcode/santacoder"
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
```
gives error:
```
Exception has occurred: ValueError
not enough values to unpack (expected 2, got 1)
File "/fsx/kunhao/transformers/src/transformers/dynamic_module_utils.py", line 408, in get_class_from_dynamic_module
module_file, class_name = class_reference.split(".")
File "/fsx/kunhao/transformers/src/transformers/models/auto/auto_factory.py", line 411, in from_config
model_class = get_class_from_dynamic_module(repo_id, module_file + ".py", class_name, **kwargs)
ValueError: not enough values to unpack (expected 2, got 1)
```
However, directly calling `model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)` works fine.
### Expected behavior
No error happens | 05-09-2023 17:55:39 | 05-09-2023 17:55:39 | Not an expert but I feel like we could do something here https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L410-L411
```diff
        class_ref = config.auto_map[cls.__name__]
        if "--" in class_ref:
            repo_id, class_ref = class_ref.split("--")
        else:
            repo_id = config.name_or_path
-       module_file, class_name = class_ref.split(".")
-       model_class = get_class_from_dynamic_module(repo_id, module_file + ".py", class_name, **kwargs)
+       model_class = get_class_from_dynamic_module(class_ref, repo_id, **kwargs)
```
<|||||>Sounds like the right fix if you want to make a quick PR!
transformers | 23,240 | open | [New model] ImageBind: One Embedding Space To Bind Them All | ### Model description
As stated in their [blog post](https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/),
> "[ImageBind is] the first AI model capable of binding information from six modalities. The [model](https://github.com/facebookresearch/ImageBind) learns a single embedding, or shared representation space, not just for text, image/video, and audio, but also for sensors that record depth (3D), thermal (infrared radiation), and inertial measurement units (IMU), which calculate motion and position."
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
GitHub repo: https://github.com/facebookresearch/ImageBind
Paper: https://facebookresearch.github.io/ImageBind/paper
Blog: https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/
Demo: https://imagebind.metademolab.com/
Video: https://dl.fbaipublicfiles.com/imagebind/imagebind_video.mp4
Weights: https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth (currently only 1 that I can see) | 05-09-2023 17:35:41 | 05-09-2023 17:35:41 | Hi @xenova , I would like to work on implementing this model.<|||||>> Hi @xenova , I would like to work on implementing this model.
Sweet! |
transformers | 23,239 | closed | [docs] Audio task guides fixes | Related to https://github.com/huggingface/transformers/issues/23188 and https://github.com/huggingface/transformers/issues/23222
In the guide examples, only `feature_extractor` is passed to `Trainer`, so that's the only part of the processor that gets pushed to Hub. This PR fixes the docs to pass `processor` to Trainer as the `tokenizer` parameter, so both `feature_extractor` and `tokenizer` are saved.
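A minimal sketch of the documented change (variable names assumed from the guide, not literal):
```python
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=processor,        # previously `tokenizer=feature_extractor`; the processor wraps both parts
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
```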
The behavior is confirmed with the ASR task guide example. We may also need to fix the example scripts. I'll look into it, and if a fix is needed, I'll create a separate PR. | 05-09-2023 17:18:31 | 05-09-2023 17:18:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,238 | open | [Bug] Failiure to generate Diffusion images / AI language responses when upgrading past 4.19.2 | ### System Info
I've posted this to both Auto1x4 and Opinionated, but I don't think it's an issue on their end. So here I am.
For some reason, I am completely unable to generate images or use AI language models if I upgrade my transformers past 4.19.2.
This ticket details my entire [install/diagnostic workflow](https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7351) and other examples of the issue.
In a nutshell, I can git-pull any of my AI-based generators, and they will all exhibit this issue until I pin transformers to 4.19.2 in requirements.txt.
My Auto1x4 & Opinionated Environ and prompts are as follow:
1.5 ema-only safetensor from [runwayml](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main)
On vae-ft-mse-840000-ema-pruned.ckpt from [stabilityai](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main)
ETA weight: 31337, CFG: 7
30 Step Euler A
"3d rendering of a small metal cube sitting on a glass table"
4.19.2

4.25.1

4.26.1

4.28.1

The strangest thing is that after 4.19.2 all versions are wrong, but they're all CONSISTENTLY wrong. I don't really know where else to turn.
```
- `transformers` version: 4.29.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.13.3
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Presumably
- Using distributed or parallel set-up in script?: No idea
```
Launch details from Opinionated
```
13:48:03-071493 INFO Starting SD.Next
13:48:03-092427 INFO Python 3.10.6 on Windows
13:48:03-350737 INFO Version: f6898c9a Fri May 5 13:40:53 2023 -0400
13:48:03-741693 INFO Setting environment tuning
13:48:03-745682 INFO nVidia CUDA toolkit detected
13:48:05-379334 INFO Torch 2.0.0+cu118
13:48:05-397286 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8700
13:48:05-400261 INFO Torch detected GPU: NVIDIA GeForce RTX 3080 VRAM 10240 Arch (8, 6) Cores 68
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Personally reproducible by changing requirements.txt in any project to any version of transformers higher than 4.19.2
I have not gotten confirmation of anyone else having or being able to reproduce this issue.
### Expected behavior
Parity or near parity of model generation. | 05-09-2023 17:13:39 | 05-09-2023 17:13:39 | |
transformers | 23,237 | closed | Cannot Convert CLIP to TensorRT | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is my code for exporting CLIP image encoder ( openai/clip-vit-large-patch14-336 ) as ONNX
```python
import os
from transformers import AutoConfig, AutoProcessor, CLIPModel, CLIPVisionModel
from transformers.modeling_outputs import BaseModelOutputWithPooling
from pathlib import Path
from transformers.onnx import export
from typing import Mapping, OrderedDict
from transformers.onnx import OnnxConfig, validate_model_outputs
from functools import partial
import torch


class CLIPImageEncoder(CLIPModel):
    def forward(self,
        pixel_values: torch.FloatTensor
    ):
        outputs = self.get_image_features(
            pixel_values=pixel_values
        )
        return BaseModelOutputWithPooling(
            pooler_output=outputs.reshape(-1, 768)
        )


class EncoderOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})
            ]
        )

    @property
    def outputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("pooler_output", {0: "batch", 1: "dim"})
            ]
        )


config = AutoConfig.from_pretrained("openai/clip-vit-large-patch14-336")
onnx_config = EncoderOnnxConfig(config)

model = CLIPImageEncoder.from_pretrained("openai/clip-vit-large-patch14-336")
del model.text_model
del model.text_projection

processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

onnx_path = Path("tmp_onnx/model.onnx")
onnx_inputs, onnx_outputs = export(processor.image_processor, model, onnx_config, onnx_config.default_onnx_opset, onnx_path)

validate_model_outputs(
    onnx_config, processor.image_processor, model, onnx_path, onnx_outputs, 1e-4
)
```
It successfully outputs `model.onnx`. I then try to convert it to TensorRT within Triton Inference Server by adding the following to `config.pbtxt`:
```
optimization {
graph { level: 3 }
execution_accelerators {
gpu_execution_accelerator : [ {
name : "tensorrt"
parameters { key: "precision_mode" value: "FP16" }
parameters { key: "max_workspace_size_bytes" value: "1073741824" }
}]
}
}
```
but it outputs the following error
```
2023-05-09 15:12:36.763934678 [E:onnxruntime:log, tensorrt_execution_provider.h:58 log] [2023-05-09 15:12:36 ERROR] 10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[onnx::MatMul_3130 + (Unnamed Layer* 135) [Shuffle].../visual_projection/MatMul]}.)
Segmentation fault (core dumped)
```
Given the large number of users of CLIP, and since Hugging Face has already made the ONNX conversion step really smooth, being able to easily convert the exported ONNX to TensorRT as well would add a lot of value.
I wonder if the error is due to the specific implementation of CLIP in the Hugging Face repo, e.g. the use of one operator instead of another even though the outcome is the same.
### Expected behavior
I follow the same ONNX conversion script for many other models such as MiniLM, T5, DistilBert, and the resulting ONNX can be easily converted to TensorRT inside Triton Inference Server. This is not the case for CLIP (ViT) model. Ideally, all ONNXs exported by Huggingface can be easily converted to TensorRT inside Triton Inference Server. | 05-09-2023 15:23:42 | 05-09-2023 15:23:42 | When it comes to production and deployment, you should use TensorFlow. This repo already supports TFCLIPModel and Triton Inference Server supports Tensorflow as well. I was able to convert some TF models in this repo into TensorRT without any bugs (including CLIP), and the success rate is 100%. For CLIP model, I recommend you use TF or `torch_tensorrt` (https://github.com/pytorch/TensorRT) to convert the model rather than ONNX path.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
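Following up on the TensorFlow suggestion earlier in this thread, a rough, untested sketch of that route (whether this checkpoint ships TF weights is an assumption, hence `from_pt=True`):
```python
from transformers import TFCLIPVisionModel

tf_model = TFCLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336", from_pt=True)
# Export a SavedModel that Triton's TensorFlow backend (and TF-TRT) can load
tf_model.save_pretrained("clip_vision_savedmodel", saved_model=True)
```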
transformers | 23,236 | closed | accelerate deepspeed and gradient accumulation integrate | ### What does this PR do?
1. Shift deepspeed integration to accelerate
2. Shift Gradient Accumulation to Accelerate
3. Merge after #23168
4. No user-facing change. Users can now use `accelerate launch` with the Trainer for DeepSpeed, e.g.:
```
accelerate launch --num_processes=2 --mixed_precision=bf16 --use_deepspeed --gradient_accumulation_steps=1 --gradient_clipping=1 --zero3_init_flag=True --zero3_save_16bit_model=False --zero_stage=3 --offload_optimizer_device=none --offload_param_device=none ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --bf16
```
Usual run using `torchrun` and trainer args is unimpacted:
```
torchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ~/transformers/tests/deepspeed/ds_config_zero2.json
```
5. Save and load utils are changed accordingly | 05-09-2023 15:11:27 | 05-09-2023 15:11:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for working on this. Is the diff longer than expected because of other PRs to be merged before?
Due to updating from main, it is not showing the diff wrt previous branches. Weird.
> Might be cool to Have Stas have a look (not pinging him here too early) once this is ready to merge and tests are confirmed to all pass.
Yes, definitely. All tests are passing already. Checked the slow tests offline.
<|||||>@sgugger, now the diff is only specific to DeepSpeed changes + gradient accumulation changes + saving/loading changes wrt previous PR.<|||||>Hello @stas00, please review this PR which aims to shift the accelerate handling in Trainer to Accelerate. Thank you! |
transformers | 23,235 | closed | Support ratios for `logging_steps`, `eval_steps`, and `save_steps` | # What does this PR do?
Fixes #23171
Adds support for *ratios* to the `logging_steps`, `eval_steps`, and `save_steps` arguments, i.e. if they are a float in range `[0,1)`, the steps are calculated as a ratio of total training steps.
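Illustrative usage of the new semantics (the argument names are existing `TrainingArguments` fields; only the ratio interpretation is new):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    logging_steps=0.05,  # log every 5% of the total training steps
    eval_steps=0.1,      # evaluate every 10% of the total training steps
    save_steps=0.1,      # save a checkpoint every 10% of the total training steps
)
```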
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 05-09-2023 15:02:13 | 05-09-2023 15:02:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,234 | closed | Overhaul TF serving signatures + dummy inputs | Right now, our default TF serving signature is only really appropriate for BERT-like models, which means it needs to be overridden in most cases. This PR does inspection of `self.call` to figure out what to actually use, but can still be overridden if required. It also moves the definition of the serving signature to the `__init__()`, which allows it to use values from the `config` to set parts of the shape (e.g. `num_channels`)
I might also explore doing something similar with `dummy_inputs` in this PR and build models via the serving signature, without needing to explicitly define `dummy_inputs`. Ideally, we could eliminate a lot of that boilerplate, which would make it much easier for users to contribute models and reduce the amount of work needed to turn a LLM translation from PyTorch into a working TF model. | 05-09-2023 14:33:26 | 05-09-2023 14:33:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This should be ready to review now - it shouldn't affect any existing models, as all our existing models override `serving`, `serving_output` and `dummy_inputs`. However, it should hopefully be a default that "just works" for a lot of future models, and means we can stop specifying the same information in three different places.<|||||>@gante Agreed! I can make that cleanup part of this PR, WDYT?<|||||>@Rocketknight1 Sounds good!<|||||>@Rocketknight1 plz ping when this PR is ready for a re-review 🔥 <|||||>@gante @amyeroberts This should be ready for re-review now, but no rush because it's late on a Friday! The core idea is that I collapsed all of the redundant information we had down to a single source of truth. That source of truth is a new property on our models: `input_signature`. **All** serving methods and decorators have been removed from our models - serving on all models is now set in the `__init__` with
`self.serving = tf.function(self.eager_serving, input_signature=[self.input_signature])`.
This fixes a major problem we had: As well as being a huge source of repetitive boilerplate, the serving signatures were incorrect in several places, and because they were compiled with a decorator, the decorator could not access `self.config`, which meant the serving signature could not include shape constraints that are defined in the config (such as `config.image_size`). This meant we just used `None` dimensions for dimensions that were actually not variable!
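As an illustration only (not the code in this PR), a config-aware signature for an image model could look roughly like the sketch below, with the shapes pinned from the config instead of left as `None`:

```python
import tensorflow as tf

def make_input_signature(config):
    # sketch: pin the channel and spatial dims from the model config
    return {
        "pixel_values": tf.TensorSpec(
            shape=(None, config.num_channels, config.image_size, config.image_size),
            dtype=tf.float32,
            name="pixel_values",
        )
    }
```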
Additionally, `dummy_inputs` is now inferred from `self.input_signature` as well. Specifically, the `dummy_inputs` property fills in `None` dimensions in the `input_signature` with `2` and then just generates tensors with that shape and dtype, then builds the model with those.
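Roughly, the idea is something like the following sketch (not the exact implementation in the PR):

```python
import tensorflow as tf

def dummy_inputs_from_signature(input_signature):
    # fill every `None` dimension with 2 and build a tensor matching each spec
    return {
        name: tf.ones(
            [dim if dim is not None else 2 for dim in spec.shape], dtype=spec.dtype
        )
        for name, spec in input_signature.items()
    }
```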
`dummy_inputs` can still be overridden, and this is used in a few models when they need particular dummy inputs to build without errors. The vast majority of `dummy_inputs` have been removed, though. `serving` can in theory be overridden too, but there was no need to do this in any of our models.
Finally, the new base `serving_output` code covers most cases, and I'd estimate about 75% of `serving_outputs` in the codebase are gone. I expect there are going to be a few issues, but I'll keep an eye on the tests and make sure it's all okay!<|||||>It seems like there's a few issues caused by the default dummy input values triggering assertions or issues - I'll add dummy_inputs overrides to those models.<|||||>@gante @amyeroberts I think everything should pass now - ready for final review!<|||||>This PR reduces the size of our TF codebase (files matching `*_tf_*.py`) by a little under 5%, lol |
transformers | 23,233 | closed | 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased | ### System Info
- `transformers` version: 4.9.1
- Platform: Linux-4.15.0-210-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada @Narsil @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `from transformers import BertTokenizer, BertModel`
2. `tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`
As discussed [here](https://huggingface.co/bert-large-cased#:~:text=from%20transformers%20import%20BertTokenizer%2C%20BertModel%0Atokenizer%20%3D%20BertTokenizer.from_pretrained(%27bert%2Dlarge%2Dcased%27))
Leads to the following `HTTPError`
```
HTTPError Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
1647 fast_tokenizer_file = get_fast_tokenizer_file(
-> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token
1649 )
1650 additional_files_names = {
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)
3406 """
3407 # Inspect all files from the repo/folder.
-> 3408 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)
3409 tokenizer_files_map = {}
3410 for file_name in all_files:
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)
1685 token = None
1686 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
-> 1687 path_or_repo, revision=revision, token=token
1688 )
1689 return [f.rfilename for f in model_info.siblings]
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token)
246 )
247 r = requests.get(path, headers=headers)
--> 248 r.raise_for_status()
249 d = r.json()
250 return ModelInfo(**d)
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
951
952 if http_error_msg:
--> 953 raise HTTPError(http_error_msg, response=self)
954
955 def close(self):
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
```
### Expected behavior
Should run without `HTTPError` | 05-09-2023 14:24:19 | 05-09-2023 14:24:19 | Same here.
It seems to have something to do with [this](https://twitter.com/huggingface/status/1655760648926642178)<|||||>This is a duplicate of #23228 and #23229. The HuggingFace Hub is undergoing some problems, you can follow progress on resolution on the [HF status twitter](https://twitter.com/hf_status) account or the [status page](https://status.huggingface.co/). |
transformers | 23,232 | closed | Add Japanese translation to accelerate.mdx | # What does this PR do?
Adds Japanese translation to accelerate.mdx
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18413
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-09-2023 14:20:56 | 05-09-2023 14:20:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot! |
transformers | 23,231 | closed | Whisper is inconsistent with returning last segment | ### Feature request
When the input audio is cut off in the middle of a word, Whisper may not predict an ending timestamp. How we handle this differs between decoding using the tokenizer or using a pipeline. It's also different from how OpenAI handles this.
To see what happens, let's run Whisper:
```python
# load model
from transformers import AutoProcessor, WhisperForConditionalGeneration
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
tokenizer = processor.tokenizer
# load data
from datasets import load_dataset
dataset = load_dataset(
"hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
)
dataset = dataset.sort("id")
# create example that reproduces the issue
import numpy as np
example1 = dataset[0]["audio"]["array"]
example2 = dataset[1]["audio"]["array"]
example3 = dataset[1]["audio"]["array"]
example = np.concatenate([example1, example2, example3]).astype(np.float32)
example = example[:200000]
# get input spectrogram
inputs = processor(example, sampling_rate=16000, return_tensors="pt")
input_features = inputs.input_features
# make prediction including timestamps
predicted_ids = model.generate(input_features, return_timestamps=True)
processor.decode(predicted_ids[0], decode_with_timestamps=True, output_offsets=True)
```
This outputs:
```python
{'text': "<|startoftranscript|><|en|><|transcribe|><|0.00|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|6.00|><|6.00|> Nor is Mr. Quilter's manner less interesting than his matter.<|11.00|><|11.00|> Nor is Mr. Quilter's<|endoftext|>",
'offsets': [{'text': ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.',
'timestamp': (0.0, 6.0)},
{'text': " Nor is Mr. Quilter's manner less interesting than his matter.",
'timestamp': (6.0, 11.0)}]}
```
Notice that the last segment is: ` Nor is Mr. Quilter's<|endoftext|>`. However, there is no entry for this in the `"offsets"` array. This happens because in `tokenization_whisper.py` in `_compute_offsets` the last segment is skipped if it does not end with a timestamp token. The `"text"` output, however, does include that last segment.
OpenAI does the following:
```python
# load model
import whisper
model = whisper.load_model("tiny")
# load example as above...
# make prediction
result = model.transcribe(
example,
verbose=True,
condition_on_previous_text=False,
)
```
This does include the final segment:
```text
[00:00.000 --> 00:06.000] Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.
[00:06.000 --> 00:11.000] Nor is Mr. Quilter's manner less interesting than his matter.
[00:11.000 --> 00:13.000] Norris, Mr. Quilters.
```
The reason the text is different than with our Whisper (`Nor is Mr. Quilter's` vs `Norris, Mr. Quilters.`), is that OpenAI detects that the last segment does not end with a timestamp and is therefore incomplete. It now "rewinds" to the last timestamp token and makes a new prediction from there. This new prediction can be different since the input spectrogram has essentially been shifted in time. Since now only one segment is returned, the OpenAI logic uses the start and end time from the audio as the timestamps for this final segment.
We can also use a `pipeline` to run Whisper:
```python
from transformers import pipeline
pipe = pipeline(task="automatic-speech-recognition", model="openai/whisper-tiny")
pipe(example, return_timestamps=True)
```
This outputs the following:
```python
{'text': " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. Quilter's manner less interesting than his matter. Nor is Mr. Quilter's",
'chunks': [{'timestamp': (0.0, 6.0),
'text': ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'},
{'timestamp': (6.0, 11.0),
'text': " Nor is Mr. Quilter's manner less interesting than his matter."},
{'timestamp': (11.0, None), 'text': " Nor is Mr. Quilter's"}]}
```
Here the list containing the timestamps is named `"chunks"` instead of `"offsets"` but otherwise contains the same information. But now it does include the final segment. The ending timestamp is None.
It also outputs a warning because the last segment doesn't have an ending timestamp (yes, WhisperTimeStampLogitsProcessor was used):
> "There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?"
Long story short:
- The behavior of `tokenizer.decode(..., with_offsets=True)` is different from the pipeline with `return_timestamps=True`, and both are different from what OpenAI does (`None` instead of the actual ending timestamp, no rewinding). In addition, the pipeline does not include the timestamps in the text but `tokenizer.decode()` does.
- Is the `tokenizer.decode(..., with_offsets=True)` behavior a bug?
- The implementation of how the pipeline implements this "split up segments by timestamps" (`tokenizer._decode_asr`) is different from how the tokenizer implements it (`tokenizer._compute_offsets`). So we have two different implementations doing the same thing but with different results.
- Could we perhaps make this a bit more consistent? The pipeline calls the returned segments "chunks", but also uses this same term for splitting up the audio into partially overlapping 30-second slices. Very confusing.
### Motivation
I'm currently adding word-level timestamps to Whisper: https://github.com/huggingface/transformers/pull/23205
In the OpenAI implementation these timestamps are added to the returned segments. Obviously if the last segment isn't being included, we can't add the word-level timestamps there. The word-level timestamps should also work in the pipeline.
### Your contribution
Rather than just submitting a PR to fix this issue, I'm opening this up for discussion to decide how we want to handle this, as it affects multiple pieces of an already complex system. | 05-09-2023 13:12:29 | 05-09-2023 13:12:29 | Thanks for the super comprehensive write-up! For sure, let's discuss here what the best approach to fixing this issue is. Here are my thoughts:
1. It's better to return the last segment with an incomplete timestamp rather than dropping it completely - when decoding with timestamps, many users will look just at `"offsets"` to get the transcription + timestamps bundled together, and ignore the overall `"text"` (since they assume the text in `"offsets"` will be the same as in `"text"`). If we drop the last segment because it doesn't have a timestamp, we effectively miss a transcription chunk (as well as a timestamp). IMO it's better to return the transcription for this last offset even if it has a missing timestamp, so at least we return the correct transcription overall. So here I do agree that there is a 'bug' in the tokenizer and we should change its behaviour to mirror that of the processor. Happy to remove `tokenizer._compute_offsets` in favour of `tokenizer._decode_asr` (and if this is not allowed because of breaking changes, then have it call `tokenizer._decode_asr` under the hood).
2. I'm not sure there's a clean way of doing the 'rewind' trick that OpenAI do: `transformers` is very distinct in its three sequential stages of inference: feature extractor -> model -> tokenizer. OpenAI are a bit more liberal in going between model and tokenizer (e.g. with this 'rewind' trick and their decode with temperature fallback algorithm). Adding the 'rewind' trick to the pipeline method would add a lot of custom complexity that we probably want to avoid. What we can quite easily do is update the warning message to say something along the lines of `"audio is cut off in the middle of a word, Whisper did not predict an ending timestamp"` to at least inform users of why the last timestamp is missing.
Also cc @Narsil - would be interested in hearing your thoughts on this!<|||||>Agreed on uniformity in handling those "incomplete" things
1. `(start, None)` is the easiest imo. (I agree with @sanchit-gandhi basically).
2. The `rewind` trick is dirty and cannot be done in pipelines. `pipelines` are stateless, and this is what enables orthogonal batching. OpenAI cannot do batching. We could have exactly their code somewhere else too, but not in the aforementioned locations.
Having predictable runtime is really important imo, and the rewind trick kills that. Also, if the model is super bad (which can happen with random or badly finetuned models), then you'll still have incomplete chunks.
For `chunks`, we cannot change the output format because of backward compatibility.
<|||||>Agreed that we should fix this to keep the final segment. I can add it to my list.
Also agree that the rewinding trick isn't something we should do, as it interferes with the batching approach. Plus it's kind of a hack anyway.
Keeping a timestamp of `None` to mean "end of the input" is simple on our end, but it might be less convenient for users to interpret what time this actually corresponds to (since the input may have padding and so it's not necessarily the length of the input, and the user may not be able to easily figure out where the padding occurs).
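For what it's worth, a caller can resolve a trailing `None` fairly easily when they know how much audio they passed in; here is a minimal sketch reusing `pipe` and `example` from the reproduction above (assuming a 16 kHz waveform):

```python
duration = len(example) / 16000  # seconds

out = pipe(example, return_timestamps=True)
for chunk in out["chunks"]:
    start, end = chunk["timestamp"]
    if end is None:
        end = duration  # fall back to the end of the provided audio
    print(f"[{start:.2f} --> {end:.2f}] {chunk['text']}")
```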
<|||||>> it might be less convenient for users to interpret
Indeed, but the timestamp is supposed to be emitted by the model and correspond to the actual end of speech.
There can be padding, but also just silent audio. Ideally it would be nice to output a sensible default, but here if the model doesn't give us a timestamp, then... well we cannot really do anything about it, and we just don't have the information, trying to recreate something is IMHO lying to the user.
For instance, there is nothing preventing the model from outputting timestamps that are out of order, even though it wouldn't make sense. But if the model is doing it, I think we should just translate what the model is saying, even if nonsensical (thankfully this doesn't seem to be actually occurring in the real world).<|||||>> if the model doesn't give us a timestamp, then... well we cannot really do anything about it, and we just don't have the information, trying to recreate something is IMHO lying to the user.
Fair point. 😄 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,230 | closed | llama model can't generate EOS | ### System Info
python 3.8.16
torch 1.13.1
transformers 4.28.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
# both model have same behavior
model_path = "luodian/llama-7b-hf"
# model_path = "huggyllama/llama-7b"
model = LlamaForCausalLM.from_pretrained(model_path)
tokenizer = LlamaTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
text = ["Translate english to chinese: I love you.", "What is your name:"]
a = tokenizer(text, return_tensors='pt',padding="longest")
print(model.generate(**a, max_new_tokens=64))
```
### Expected behavior
The llama model's generate method doesn't generate any EOS token under any circumstances | 05-09-2023 10:33:00 | 05-09-2023 10:33:00 | cc @ArthurZucker <|||||>Hey!
This seems to be a bit similar to #23175.
When the `generate` function is called, it should stop once the `eos_token` (which is `2`).
If the model does not predict it, then the generate function will not stop. This can come from the training, but is most probably not an issue with the `generate` function.
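As a quick check (this only makes `generate` stop *if* the model ever predicts EOS; it cannot force a model that never predicts it to emit one), something like the following, reusing `model`, `tokenizer` and `a` from the reproduction above:
```python
print(tokenizer.eos_token_id)                # 2 for LLaMA
print(model.generation_config.eos_token_id)  # should match
out = model.generate(**a, max_new_tokens=64, eos_token_id=tokenizer.eos_token_id)
```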
You can check the original behaviour here: https://github.com/facebookresearch/llama/blob/main/llama/generation.py you'll see that it does not stop on the `eos` token. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>what was the soln? |
transformers | 23,229 | closed | OSError: sentence-transformers/all-distilroberta-v1 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' | ### System Info
- `transformers` version: 4.28.1
- Platform: macOS-13.3-x86_64-i386-64bit
- Python version: 3.9.6
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("sentence-transformers/all-distilroberta-v1")
```
This fails with the OSError in the issue title. I would normally raise this issue on the hub, but then to my amazement, `sentence-transformers/all-distilroberta-v1` does not exist on the hub. Our code reliably worked for months, so I presume that this model, quite well known actually, was in the space at some point. I wonder why model loading no longer works. @younesbelkada , any idea?
### Expected behavior
Model is loaded. | 05-09-2023 10:25:24 | 05-09-2023 10:25:24 | Experiencing the same issue with a different model. Looks like [the entire sentence-transformers page](https://huggingface.co/sentence-transformers) is down. Hopefully a fix is on the way.
UPDATE: Indeed, it looks like HuggingFace is aware of the issues and working on it: https://twitter.com/huggingface/status/1655760648926642178<|||||>Experiencing the same issue with a model from `cross-encoder`... Looks like the entire [cross-encoder page](https://huggingface.co/cross-encoder) is down. <|||||>Hi @alexcoca, thanks for raising this issue!
We're unfortunately experiencing a bug which means some popular organisations like sentence-transformers have had their model temporarily disappear from the Hub.
They will come back; we're working hard on getting this fixed ASAP! Apologies for the disruption.<|||||>@amyeroberts thanks for your hard work, everything is back to normal, I think. For future reference, users should know that `from_pretrained` methods have a `local_files_only` flag that can be passed to load a model that has been cached locally before. This can help in situations like this, thanks @Wauplin for pointing this out. |
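For example, assuming the checkpoint is already in the local cache from an earlier download:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "sentence-transformers/all-distilroberta-v1", local_files_only=True
)
```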
transformers | 23,228 | closed | 504 Server Error: Gateway Time-out for BertTokenizer | `File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_errors.py:259, in hf_raise_for_status(response, endpoint_name)
258 try:
--> 259 response.raise_for_status()
260 except HTTPError as e:
File /usr/local/lib/python3.9/dist-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/bert-base-uncased/tree/main?recursive=True
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
Cell In[6], line 2
1 from transformers import BertTokenizer
----> 2 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
File ~/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1654, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1651 vocab_files[file_id] = pretrained_model_name_or_path
1652 else:
1653 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
-> 1654 fast_tokenizer_file = get_fast_tokenizer_file(
1655 pretrained_model_name_or_path,
1656 revision=revision,
1657 use_auth_token=use_auth_token,
1658 local_files_only=local_files_only,
1659 )
1660 additional_files_names = {
1661 "added_tokens_file": ADDED_TOKENS_FILE,
1662 "special_tokens_map_file": SPECIAL_TOKENS_MAP_FILE,
1663 "tokenizer_config_file": TOKENIZER_CONFIG_FILE,
1664 "tokenizer_file": fast_tokenizer_file,
1665 }
1666 # Look for the tokenizer files
File ~/.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:3486, in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token, local_files_only)
3466 """
3467 Get the tokenizer file to use for this version of transformers.
3468
(...)
3483 `str`: The tokenizer file to use.
3484 """
3485 # Inspect all files from the repo/folder.
-> 3486 all_files = get_list_of_files(
3487 path_or_repo, revision=revision, use_auth_token=use_auth_token, local_files_only=local_files_only
3488 )
3489 tokenizer_files_map = {}
3490 for file_name in all_files:
File ~/.local/lib/python3.9/site-packages/transformers/file_utils.py:2103, in get_list_of_files(path_or_repo, revision, use_auth_token, local_files_only)
2101 else:
2102 token = None
-> 2103 return list_repo_files(path_or_repo, revision=revision, token=token)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_deprecation.py:103, in _deprecate_arguments.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
101 message += "\n\n" + custom_message
102 warnings.warn(message, FutureWarning)
--> 103 return f(*args, **kwargs)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/hf_api.py:1966, in HfApi.list_repo_files(self, repo_id, revision, repo_type, timeout, token)
1936 @_deprecate_arguments(version="0.17", deprecated_args=["timeout"], custom_message="timeout is not used anymore.")
1937 @validate_hf_hub_args
1938 def list_repo_files(
(...)
1945 token: Optional[Union[bool, str]] = None,
1946 ) -> List[str]:
1947 """
1948 Get the list of files in a given repo.
1949
(...)
1964 `List[str]`: the list of files in a given repository.
1965 """
-> 1966 return [
1967 f.rfilename
1968 for f in self.list_files_info(
1969 repo_id=repo_id, paths=None, revision=revision, repo_type=repo_type, token=token
1970 )
1971 ]
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/hf_api.py:1966, in <listcomp>(.0)
1936 @_deprecate_arguments(version="0.17", deprecated_args=["timeout"], custom_message="timeout is not used anymore.")
1937 @validate_hf_hub_args
1938 def list_repo_files(
(...)
1945 token: Optional[Union[bool, str]] = None,
1946 ) -> List[str]:
1947 """
1948 Get the list of files in a given repo.
1949
(...)
1964 `List[str]`: the list of files in a given repository.
1965 """
-> 1966 return [
1967 f.rfilename
1968 for f in self.list_files_info(
1969 repo_id=repo_id, paths=None, revision=revision, repo_type=repo_type, token=token
1970 )
1971 ]
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/hf_api.py:1932, in HfApi.list_files_info(self, repo_id, paths, revision, repo_type, token)
1930 encoded_path = "/" + quote(path, safe="") if path else ""
1931 tree_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tree/{revision}{encoded_path}"
-> 1932 for subpath_info in paginate(path=tree_url, headers=headers, params={"recursive": True}):
1933 if subpath_info["type"] == "file":
1934 yield _format_as_repo_file(subpath_info)
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_pagination.py:36, in paginate(path, params, headers)
34 session = get_session()
35 r = session.get(path, params=params, headers=headers)
---> 36 hf_raise_for_status(r)
37 yield from r.json()
39 # Follow pages
40 # Next link already contains query params
File /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
297 raise BadRequestError(message, response=response) from e
299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/bert-base-uncased/tree/main?recursive=True` | 05-09-2023 09:54:59 | 05-09-2023 09:54:59 | Hi @ZhengMengbin, thanks for raising this issue.
We had a short outage of the hub website and API earlier today, which is likely the result of the 504 error. I'm able to load the `bert-base-uncased` checkpoint locally on the dev branch of transformers.
If the error persists on your end, could you reply with a reproducible code snippet and information about your running environment (run `transformers-cli env` in your terminal)?<|||||>> Hi @ZhengMengbin, thanks for raising this issue.
>
> We had a short outage of the hub website and API earlier today, which is likely the result of the 504 error. I'm able to load the `bert-base-uncased` checkpoint locally on the dev branch of transformers.
>
> If the error persists on your end, could you reply with a reproducible code snippet and information about your running environment (run `transformers-cli env` in your terminal)?
I have a similar issue: huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/gpt2.
Here's what I got after run transformers-cli env:
- `transformers` version: 4.10.3
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.31
- Python version: 3.9.15
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:no
<|||||>Same issue here, requesting bert-base-multilingual-cased
```
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/bert-base-multilingual-cased
```
Accessing this URL via browser grants a CloudFront error.

<|||||>This seems to be working again since approx 10 Minutes.
The model can be loaded (although slow) via code and the api is answering when accessed via browser.
<|||||>@mkuf @YiranHuangIrene Thanks for the additional information.
Unfortunately we're still experiencing some issues with the hub which we're actively trying to resolve. Some of the features have come back online but we haven't returned to a full service yet. Apologies for the disruption.
I'll reply here when I hear everything should be back to normal. Our [HF status twitter](https://twitter.com/hf_status) account is the best place to find the most up to date info on progress, and [status page](https://status.huggingface.co/) to see the current status.<|||||>Autotokenizer doesn't seem to work for any of the pretrained models: roberta, bert or distilled versions. Reason being Bad Gateway error:
`OSError: There was a specific connection error when trying to load distilbert-base-uncased:
504 Server Error: Gateway Time-out for url: [https://huggingface.co/distilbert-base-uncased/resolve/main/config.json`](https://huggingface.co/distilbert-base-uncased/resolve/main/config.json%60)
I am using `requests==2.27.1` and no certificate validation `os.environ['CURL_CA_BUNDLE'] = ''`<|||||>Yes the website is currently experiencing some issues. Should come back in a few minutes, you can check the status [here](https://status.huggingface.co/). |
transformers | 23,227 | closed | Fix typo ; Update output.mdx | # What does this PR do?
Fixes a typo.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-09-2023 09:27:47 | 05-09-2023 09:27:47 | |
transformers | 23,226 | open | NSP Support for Zero-shot Text Classification Pipeline | ### Feature request
Zero-shot classification can be solved with NextSentencePrediction task of BERT, and it has shown competitive results to NLI-based zero-shot classification in some cases. There could be a parameter where we choose the type of submethod that we are going to use for the pipeline like `pipeline(task="zero-shot-classification", type_="nsp")` or we could just simply add a task named "nsp-zeroshot-classification". This is also possible for MLM, which is a more widely used pretraining task across LMs.
### Motivation
Like I said, NSP has proven to be useful especially for languages that do not have access to an NLI dataset, since pre-training alone is enough. Although multilingual NLI models can also be used, they have been shown to perform worse than smaller monolingual models on this task, as one would expect. Even if this is a small detail that would be unnecessary to put into the codebase, I wanted to share this implementation so that anyone who's interested can take a look and try different methods.
Here are some references, one of which is my study, that use NSP for zero-shot classification.
Sun, Y., Zheng, Y., Hao, C., & Qiu, H. (2021). NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction. arXiv preprint arXiv:2109.03564.
Çelik, E., & Dalyan, T. (2023). Unified benchmark for zero-shot Turkish text classification. Information Processing & Management, 60(3), 103298.
### Your contribution
I can open a PR; here's the implementation I did based on Sun et al. 2021. It is heavily based on the current NLI zero-shot pipeline class, but also adds a `reverse` argument which changes the order of the sentences for NSP.
```python
import numpy as np
from typing import List, Union
from transformers.utils import logging
from transformers.pipelines.base import ChunkPipeline, ArgumentHandler
from transformers.tokenization_utils import TruncationStrategy
from transformers.pipelines import ZeroShotClassificationArgumentHandler
logger = logging.get_logger(__name__)
class ZeroShotClassificationArgumentHandler(ArgumentHandler):
def _parse_labels(self, labels):
if isinstance(labels, str):
labels = [label.strip() for label in labels.split(",") if label.strip()]
return labels
def __call__(self, sequences, labels, hypothesis_template, reverse):
if len(labels) == 0 or len(sequences) == 0:
raise ValueError(
"You must include at least one label and at least one sequence."
)
if hypothesis_template.format(labels[0]) == hypothesis_template:
raise ValueError(
(
'The provided hypothesis_template "{}" was not able to be formatted with the target labels. '
"Make sure the passed template includes formatting syntax such as {{}} where the label should go."
).format(hypothesis_template)
)
if isinstance(sequences, str):
sequences = [sequences]
sequence_pairs = []
for sequence in sequences:
if reverse:
sequence_pairs.extend(
[[hypothesis_template.format(label), sequence] for label in labels]
)
else:
sequence_pairs.extend(
[[sequence, hypothesis_template.format(label)] for label in labels]
)
return sequence_pairs, sequences
class NSPZeroShotClassificationPipeline(ChunkPipeline):
def __init__(
self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs
):
self._args_parser = args_parser
super().__init__(*args, **kwargs)
@property
def isNext_id(self):
return 0
def _parse_and_tokenize(
self,
sequence_pairs,
padding=True,
add_special_tokens=True,
truncation=TruncationStrategy.ONLY_FIRST,
**kwargs,
):
return_tensors = self.framework
if self.tokenizer.pad_token is None:
logger.error(
"Tokenizer was not supporting padding necessary for zero-shot, attempting to use "
" `pad_token=eos_token`"
)
self.tokenizer.pad_token = self.tokenizer.eos_token
try:
inputs = self.tokenizer(
sequence_pairs,
add_special_tokens=add_special_tokens,
return_tensors=return_tensors,
padding=padding,
truncation=truncation,
)
except Exception as e:
if "too short" in str(e):
inputs = self.tokenizer(
sequence_pairs,
add_special_tokens=add_special_tokens,
return_tensors=return_tensors,
padding=padding,
truncation=TruncationStrategy.DO_NOT_TRUNCATE,
)
else:
raise e
return inputs
def _sanitize_parameters(self, **kwargs):
if kwargs.get("multi_class", None) is not None:
kwargs["multi_label"] = kwargs["multi_class"]
logger.warning(
"The `multi_class` argument has been deprecated and renamed to `multi_label`. "
"`multi_class` will be removed in a future version of Transformers."
)
preprocess_params = {}
if "candidate_labels" in kwargs:
preprocess_params["candidate_labels"] = self._args_parser._parse_labels(
kwargs["candidate_labels"]
)
if "hypothesis_template" in kwargs:
preprocess_params["hypothesis_template"] = kwargs["hypothesis_template"]
if "reverse" in kwargs:
preprocess_params["reverse"] = kwargs["reverse"]
postprocess_params = {}
if "multi_label" in kwargs:
postprocess_params["multi_label"] = kwargs["multi_label"]
return preprocess_params, {}, postprocess_params
def __call__(
self,
sequences: Union[str, List[str]],
*args,
**kwargs,
):
if len(args) == 0:
pass
elif len(args) == 1 and "candidate_labels" not in kwargs:
kwargs["candidate_labels"] = args[0]
else:
raise ValueError(f"Unable to understand extra arguments {args}")
return super().__call__(sequences, **kwargs)
def preprocess(
self,
inputs,
candidate_labels=None,
hypothesis_template="This example is {}.",
reverse=False,
):
sequence_pairs, sequences = self._args_parser(
inputs, candidate_labels, hypothesis_template, reverse
)
for i, (candidate_label, sequence_pair) in enumerate(
zip(candidate_labels, sequence_pairs)
):
model_input = self._parse_and_tokenize([sequence_pair])
yield {
"candidate_label": candidate_label,
"sequence": sequences[0],
"is_last": i == len(candidate_labels) - 1,
**model_input,
}
def _forward(self, inputs):
candidate_label = inputs["candidate_label"]
sequence = inputs["sequence"]
model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names}
outputs = self.model(**model_inputs)
model_outputs = {
"candidate_label": candidate_label,
"sequence": sequence,
"is_last": inputs["is_last"],
**outputs,
}
return model_outputs
def postprocess(self, model_outputs, multi_label=False):
candidate_labels = [outputs["candidate_label"] for outputs in model_outputs]
sequences = [outputs["sequence"] for outputs in model_outputs]
logits = np.concatenate([output["logits"].numpy() for output in model_outputs])
N = logits.shape[0]
n = len(candidate_labels)
num_sequences = N // n
reshaped_outputs = logits.reshape((num_sequences, n, -1))
if multi_label or len(candidate_labels) == 1:
isNext_id = self.isNext_id
notNext_id = 1
isNext_contr_logits = reshaped_outputs[..., [notNext_id, isNext_id]]
scores = np.exp(isNext_contr_logits) / np.exp(isNext_contr_logits).sum(
-1, keepdims=True
)
scores = scores[..., 1]
else:
isNext_logits = reshaped_outputs[..., self.isNext_id]
scores = np.exp(isNext_logits) / np.exp(isNext_logits).sum(
-1, keepdims=True
)
top_inds = list(reversed(scores[0].argsort()))
return {
"sequence": sequences[0],
"labels": [candidate_labels[i] for i in top_inds],
"scores": scores[0, top_inds].tolist(),
}
```
This task can be used by registering it to the tasks, shown in example below:
```python
from nsp import NSPZeroShotClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import BertForNextSentencePrediction, TFBertForNextSentencePrediction
PIPELINES = [
dict(
task="nsp-zeroshot-classification",
pipeline_class=NSPZeroShotClassificationPipeline,
pt_model=BertForNextSentencePrediction,
tf_model=TFBertForNextSentencePrediction,
default={"pt": ("bert-base-uncased")},
type="text",
)
]
for p in PIPELINES:
PIPELINE_REGISTRY.register_pipeline(**p)
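
# Hypothetical usage sketch (the checkpoint below is an assumption; any BERT
# model pretrained with the NSP head should work):
from transformers import pipeline

classifier = pipeline("nsp-zeroshot-classification", model="bert-base-uncased")
print(classifier(
    "Last night's match ended in a dramatic penalty shootout.",
    candidate_labels=["sports", "politics", "cooking"],
))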
``` | 05-09-2023 07:46:40 | 05-09-2023 07:46:40 | cc @Narsil |
transformers | 23,225 | closed | fix: Update run_qa.py to work with deepset/germanquad | # What does this PR do?
This updates the `run_qa.py` script in the `examples/pytorch/question-answering` folder to work with the `deepset/germanquad` dataset. The script expects the ID field for each example in the dataset to be a string type, but the previously mentioned dataset stores the ID field as an int. So I added a `str` call on the id field to make sure any integer IDs are converted into the expected string format for squad datasets when using the `evaluate` library. This is relevant for https://huggingface.co/datasets/deepset/germanquad which stores the IDs of each example with an integer ID.
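For context, this is the kind of cast described, sketched with illustrative names rather than the exact variables in `run_qa.py`:

```python
references = [{"id": str(ex["id"]), "answers": ex["answers"]} for ex in examples]
```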
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #N/A
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Hey @sgugger I would appreciate your review on this PR! I tagged you based on the recommendation of the PR template since you are listed as the maintainer of the pytorch examples.
| 05-09-2023 07:19:36 | 05-09-2023 07:19:36 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23225). All of your documentation changes will be reflected on that endpoint.<|||||>Could you explain what fails when leaving the code as is?<|||||>Sure! I get a `pyarrow` error that the format of `predictions` does not match the defined schema for `predictions` which expects the ID field to be of type string. I'm a bit busy at the moment, but later I can reproduce the error and copy paste the message here. <|||||>That's is a bit weird as we only use pyarrow through `dataset` but this is after the dataset creation.<|||||>Is pyarrow also used in the Evaluation library for computing the squad_v2 metrics? It seemed the schema enforcement was for the predictions format because germanquad has its own schema in its dataset repo. <|||||>Ah good catch! Yes I get the issue now. |
transformers | 23,224 | closed | [SAM] Add resources | # What does this PR do?
This PR adds links to 2 demo notebooks I made regarding SAM.
It also fixes a hyperlink which didn't render properly in the docs.
cc @younesbelkada | 05-09-2023 07:00:05 | 05-09-2023 07:00:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23224). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,223 | closed | Fix wav2vec2 is_batched check to include 2-D numpy arrays | # What does this PR do?
Fixes #22175 so it treats 2-D numpy arrays as being batched.
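For reference, the intent is roughly the following check (illustrative only, not the exact library code):

```python
import numpy as np

def looks_batched(raw_speech) -> bool:
    # a 2-D numpy array is a batch of 1-D waveforms, so treat it as batched
    if isinstance(raw_speech, np.ndarray):
        return raw_speech.ndim > 1
    return bool(raw_speech) and isinstance(raw_speech[0], (np.ndarray, tuple, list))
```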
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
| 05-09-2023 03:36:11 | 05-09-2023 03:36:11 | Note: I was having some trouble running the relevant test(s). For instance, `pytest tests/models/wav2vec2/test_feature_extraction_wav2vec2.py` fails with
```
File ... path/to/file/transformers/src/transformers/training_args.py", line 67, in <module>
from accelerate import PartialState
ImportError: cannot import name 'PartialState' from 'accelerate' (/Users/leonwu/opt/anaconda3/lib/python3.9/site-packages/accelerate/__init__.py)
```
I suspect this might be related to #22816, but wasn't sure if I should downgrade `transformers` itself if I'm trying to make a PR.
```
$ transformers-cli env
- `transformers` version: 4.29.0.dev0
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
and my virtual environment's `accelerate` library is `0.19.0`.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Regarding the venv issue you're facing, could you try to isolate it with:
```python
from transformers import training_args
```
I can't reproduce this error using `transformers` from main - could you try rebasing onto main to make sure it's not already been fixed?<|||||>> Regarding the venv issue you're facing, could you try to isolate it with:
>
> ```python
> from transformers import training_args
> ```
>
> I can't reproduce this error using `transformers` from main - could you try rebasing onto main to make sure it's not already been fixed?
that one's fixed, but I ran into a different issue which looks quite a bit like https://github.com/huggingface/transformers/issues/18355#issuecomment-1200940810. I'm going to try the instructions there-- installation probably going to take a while :))
EDIT this kind of works potentially https://github.com/huggingface/transformers/issues/18355#issuecomment-1543277694<|||||>Noting here that I needed to separately `pip install parameterized` for some reason, but I've added the tests and confirmed they work now!<|||||>d'oh, I gotta be more careful with copilot generations! fixed<|||||>Cool! This looks ready to me @LWprogramming 👍 Would you mind just running the quality fix up:
```
make style
```
And then pushing the change? This should fix the failing code quality test and re-trigger the CI<|||||>Is there a way to try running tests non-locally besides Circle CI? The `examples_torch` is failing on a wav2vec thing but I'm unsure if the bf16 unexpected result is a problem with my code, and when I run it locally with `pytest --make-reports=examples_torch ./examples/pytorch/ | tee tests_output.txt` it looks extremely slow.<|||||>Failing test looks unrelated! |
transformers | 23,222 | open | ASR example doesn't save tokenizer settings | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run training using [run_speech_recognition_ctc.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) and the included json file.
[train.json.zip](https://github.com/huggingface/transformers/files/11425889/train.json.zip)
Next, attempt to infer using the trained model:
```py
import os.path
from datasets import load_dataset
from datasets import Audio
from transformers import pipeline, AutomaticSpeechRecognitionPipeline
cv13 = load_dataset(
"mozilla-foundation/common_voice_13_0",
"eo",
split="train[:10]",
)
print(cv13[0])
cv13 = cv13.cast_column("audio", Audio(sampling_rate=16000))
sampling_rate = cv13.features["audio"].sampling_rate
audio_file = cv13[0]["audio"]["path"]
d, n = os.path.split(audio_file)
audio_file = os.path.join(d, "eo_train_0", n)
print(audio_file)
transcriber: AutomaticSpeechRecognitionPipeline = pipeline(
"automatic-speech-recognition",
model="xekri/wav2vec2-common_voice_13_0-eo-demo2",
)
print(transcriber(audio_file))
```
Output:
```
Found cached dataset common_voice_13_0 (C:/Users/rober/.cache/huggingface/datasets/mozilla-foundation___common_voice_13_0/eo/13.0.0/22809012aac1fc9803eaffc44122e4149043748e93933935d5ea19898587e4d7)
{'client_id': 'b8c51543fe043c8f27d0de0428e060e309d9d824ac9ad33e40aba7062dafd99e2e87bbedc671007e31973afb599b1c290dbd922637b79132727b5f37bc1ee88e', 'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\1dea8f044902d398c6cb09bfb5629dc2fbd80a6309ddd435c4554fa38f730472\\common_voice_eo_20453647.mp3', 'audio': {'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\1dea8f044902d398c6cb09bfb5629dc2fbd80a6309ddd435c4554fa38f730472\\common_voice_eo_20453647.mp3', 'array': array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-1.16407300e-11, 1.07661449e-12, -1.71219774e-11]), 'sampling_rate': 48000}, 'sentence': 'Ĉu ili tiel plaĉas al vi?', 'up_votes': 2, 'down_votes': 0, 'age': 'twenties', 'gender': 'male', 'accent': 'Internacia', 'locale': 'eo', 'segment': '', 'variant': ''}
C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\1dea8f044902d398c6cb09bfb5629dc2fbd80a6309ddd435c4554fa38f730472\eo_train_0\common_voice_eo_20453647.mp3
Downloading (…)lve/main/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.27k/2.27k [00:00<?, ?B/s]
F:\eo-reco\.env\Lib\site-packages\huggingface_hub\file_download.py:133: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\rober\.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
Downloading pytorch_model.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.26G/1.26G [01:56<00:00, 10.8MB/s]
Traceback (most recent call last):
File "F:\eo-reco\infer.py", line 20, in <module>
transcriber: AutomaticSpeechRecognitionPipeline = pipeline(
^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\pipelines\__init__.py", line 876, in pipeline
tokenizer = AutoTokenizer.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 723, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\tokenization_utils_base.py", line 1795, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'xekri/wav2vec2-common_voice_13_0-eo-demo2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'xekri/wav2vec2-common_voice_13_0-eo-demo2' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.
```
Checking the uploaded repo, it seems that no tokenizer-related files (e.g. `vocab.json`, `tokenizer_config.json`, etc) were pushed.
I added some debug to `run_speech_recognition_ctc.py` and found that these files were generated locally, but got deleted locally during step 7 when `Trainer` was initialized (line 701).
The output from `run_speech_recognition_ctc.py` at that point was:
```
loading file vocab.json
loading file tokenizer_config.json
loading file added_tokens.json
loading file special_tokens_map.json
Adding <s> to the vocabulary
Adding </s> to the vocabulary
Cloning https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-demo into local empty directory.
05/08/2023 15:06:23 - WARNING - huggingface_hub.repository - Cloning https://huggingface.co/xekri/wav2vec2-common_voice_13_0-eo-demo into local empty directory.
max_steps is given, it will override any value given in num_train_epochs
```
It seems that instantiating `Training` with `push_to_hub=true` creates a new repo and then empties anything in the local directory so that it can clone the repo into it. This deletes any files written to the local directory, which includes the tokenizer configs.
### Expected behavior
No error. | 05-08-2023 23:45:16 | 05-08-2023 23:45:16 | The comment on `Trainer.push_to_hub` does say `Upload *self.model* and *self.tokenizer* to the 🤗 model hub`. And in fact, it does call the trainer's `tokenizer.save_pretrained` function. However, in `run_speech_recognition_ctc.py`, `tokenizer` is set to `feature_extractor` in the initialization, and `Wav2Vec2FeatureExtractor.save_pretrained` does not save tokenizer settings.<|||||>When I replace these lines at the end of `run_speech_recognition_ctc` from this:
```py
if training_args.push_to_hub:
trainer.push_to_hub(**kwargs)
else:
trainer.create_model_card(**kwargs)
```
to this:
```py
tokenizer.save_pretrained(training_args.output_dir)
trainer.create_model_card(**kwargs)
if training_args.push_to_hub:
trainer.push_to_hub(**kwargs)
```
we do get tokenizer files. Also, may as well write the model card in any case.<|||||>cc @sanchit-gandhi <|||||>The code in the `run_speech_recognition_ctc.py` script as well as the instructions from the [ASR guide](https://huggingface.co/docs/transformers/tasks/asr) that you used in issue https://github.com/huggingface/transformers/issues/23188 do the following:
```python
trainer = Trainer(
...
tokenizer=processor.feature_extractor,
...
)
```
The "processor" combines the feature extractor and tokenizer into a single class, but because we only pass the feature extractor to the Trainer, the tokenizer doesn't get saved. So that's clearly a mistake on our end.
The following fix should work:
```python
trainer = Trainer(
...
tokenizer=processor,
...
)
```
We're updating the docs to fix this. (It's a bit confusing that this argument from Trainer is called `tokenizer` but that's what's responsible for saving the non-model stuff.)
<|||||>Probably we can directly add a new argument to the `Trainer` for the processor @hollance? This would stop all confusion IMO:
```python
trainer = Trainer(
...
processor=processor,
...
)
```
Here we could expect the user to pass either one of `tokenizer` or `processor` to the `Trainer`. Within the `Trainer` we only use the `tokenizer` to get the model input name, which after #20117 we can now get directly from the `processor`.<|||||>Can confirm, setting `tokenizer=processor` in `run_speech_recognition_ctc.py` works. Agree that `tokenizer` is a bit of a misleading keyword then.<|||||>Keeping this open since we really should update the Trainer to take `processor` as an argument over `tokenizer=processor`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,221 | closed | T5 working on cpu but not gpu | ### System Info
transformers 4.16.2
ubuntu 22.04
python 3.10.6
gpu = amd radeon vii
torch 2.0.1 + rocm 5.4.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am playing around with summarization, and the following code works fine when `device = torch.device("cpu")`, but when I try on cuda I get the error below.
```
model = T5ForConditionalGeneration.from_pretrained("t5-large")
device = torch.device("cuda")
model = model.to(device)
tokenizer = T5Tokenizer.from_pretrained("t5-large")
inputs = tokenizer.encode("summarize: " + text, return_tensors="pt", max_length=512, truncation=True).to(device)
outputs = model.generate(
inputs,
max_length=150,
min_length=40,
length_penalty=2.0,
num_beams=4,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
```
Traceback (most recent call last):
File "/home/user/testing/summary.py", line 81, in <module>
outputs = model.generate(
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 1234, in generate
return self.beam_search(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 2026, in beam_search
beam_outputs = beam_scorer.process(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_beam_search.py", line 257, in process
input_ids[batch_beam_idx].clone(),
IndexError: index -18014394218708992 is out of bounds for dimension 0 with size 4
```
### Expected behavior
When running on cpu, the code runs without errors and prints the output. I am trying to get the same results with gpu. | 05-08-2023 22:49:19 | 05-08-2023 22:49:19 | Hi @mystsec, thanks for raising this issue!
Version 4.16.2 is over a year old and since then there have been a lot of updates to our generation code. I'm able to run the example provide on the most recent release of transformers - v4.28.1
<|||||>@amyeroberts I updated transformers to 4.28.1, and now I get the following warning + similar error to earlier:
```
/home/user/.local/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:163: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.
- Be aware that you SHOULD NOT rely on t5-large automatically truncating your input to 512 when padding/encoding.
- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.
- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.
warnings.warn(
Traceback (most recent call last):
File "/home/user/testing/summary.py", line 81, in <module>
outputs = model.generate(
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py", line 1524, in generate
return self.beam_search(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation/utils.py", line 2897, in beam_search
sequence_outputs = beam_scorer.finalize(
File "/home/user/.local/lib/python3.10/site-packages/transformers/generation/beam_search.py", line 360, in finalize
decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len)
RuntimeError: Trying to create tensor with negative dimension -36028792732385279: [1, -36028792732385279]
```<|||||>@mystsec Could you try doing a fresh install of transformers in your environment? I'm unable to replicate the error with transformers 4.28.1 on both cpu and gpu.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,220 | closed | Pin tensorflow-probability | # What does this PR do?
All is said in the title. Latest release requires TensorFlow>=2.12 which we don't support (not sure why, it's been a month and a half). | 05-08-2023 21:32:10 | 05-08-2023 21:32:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,219 | closed | ValueError: DistilBertModel does not support gradient checkpointing. | How to enable the "gradient_checkpointing" for DistilBert model ? However, it's working fine for the Bert model, I've followed the steps given on this page to enable it
https://huggingface.co/docs/transformers/v4.18.0/en/performance
I've gone through the huggingface code of respective classes and found that the feature is present only for the Bert model and not the DistilBert.
https://github.com/huggingface/transformers/blob/188a8bfcccc6b862fe7ccc2859d977c01dd98136/src/transformers/models/bert/modeling_bert.py#L593
https://github.com/huggingface/transformers/blob/188a8bfcccc6b862fe7ccc2859d977c01dd98136/src/transformers/models/distilbert/modeling_distilbert.py#L470
| 05-08-2023 20:26:54 | 05-08-2023 20:26:54 | Yes DistilBert does not support gradient checkpointing. DistilBERT is a small model, so that feature is not needed for it.<|||||>I want to run this model across large batch sizes to see how much I can benefit from this. Is there any way I can enable for this model as well using torch.utils.checkpoint.checkpoint, but not sure where to apply checkpointing for this. <|||||>I want to try out DeepSpeed’s activation checkpointing but can't use this on above model as it requires to enable the "gradient_checkpointing" flag in the HF trainer.
I was going through the DeepSpeed details on the below page and it mentions that we've to enable the "gradient_checkpointing" flag in HF trainer to use this
"**HF Transformers models don’t know anything about DeepSpeed’s activation checkpointing,** so if you try to enable that feature in the DeepSpeed config file, nothing will happen."
(https://huggingface.co/docs/transformers/main_classes/deepspeed)
What code changes required I need to do to replace with the Deepspeed API or enable the model.gradient_checkpointing_enable() for distilbert
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Yes DistilBert does not support gradient checkpointing. DistilBERT is a small model, so that feature is not needed for it.
I don't think this feature is redundant if we want to train it with extremely large batch size. To my knowledge, minilm, which has fewer parameters than distilbert, supports gradient checkpointing.<|||||>Yeah, I agree jordan. When I tried to compare some transformer models, I could not train DistilBert because of the large batch size while i could train bert/roberta. |
transformers | 23,218 | open | Model outputs are impacted by the aspect ratios of other images in a batch | ### System Info
- `transformers` version: 4.27.4
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have been experimenting with `DetrForObjectDetection` and discovered an issue where the model output for a given image depends on the aspect ratio of the other images in the batch.
A reproducible example is given below:
``` python
import io
import requests
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor
def main():
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
print(f"{url = }")
with requests.Session() as session:
image_bytes = session.get(url).content
image = Image.open(io.BytesIO(image_bytes))
print(f"{image.size = }")
pretrained_model_name = "facebook/detr-resnet-50"
print(f"{pretrained_model_name = }")
image_processor = DetrImageProcessor.from_pretrained(pretrained_model_name)
assert isinstance(image_processor, DetrImageProcessor)
model = DetrForObjectDetection.from_pretrained(pretrained_model_name)
assert isinstance(model, DetrForObjectDetection)
for images_expr, images in [
(
"[image]",
[image],
),
(
"[image, image]",
[image, image],
),
(
"[image, image.resize((image.width, image.height * 2))]",
[image, image.resize((image.width, image.height * 2))],
),
]:
print(f"images = {images_expr}")
inputs = image_processor(images=images, return_tensors="pt")
assert sorted(inputs) == ["pixel_mask", "pixel_values"]
pixel_mask, pixel_values = inputs["pixel_mask"], inputs["pixel_values"]
print(f" {pixel_mask.shape = }, {pixel_values.shape = }")
with torch.no_grad():
outputs = model(
pixel_mask=pixel_mask,
pixel_values=pixel_values,
)
print(f" {outputs.encoder_last_hidden_state.shape = }")
print(f" {outputs.encoder_last_hidden_state[0, 0, :8] = }")
if __name__ == "__main__":
main()
```
``` text
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image.size = (640, 480)
pretrained_model_name = 'facebook/detr-resnet-50'
images = [image]
pixel_mask.shape = torch.Size([1, 800, 1066]), pixel_values.shape = torch.Size([1, 3, 800, 1066])
outputs.encoder_last_hidden_state.shape = torch.Size([1, 850, 256])
outputs.encoder_last_hidden_state[0, 0, :8] = tensor([-0.0544, -0.0425, -0.0307, -0.0107, 0.0201, -0.1194, 0.0373, 0.0250])
images = [image, image]
pixel_mask.shape = torch.Size([2, 800, 1066]), pixel_values.shape = torch.Size([2, 3, 800, 1066])
outputs.encoder_last_hidden_state.shape = torch.Size([2, 850, 256])
outputs.encoder_last_hidden_state[0, 0, :8] = tensor([-0.0544, -0.0425, -0.0307, -0.0107, 0.0201, -0.1194, 0.0373, 0.0250])
images = [image, image.resize((image.width, image.height * 2))]
pixel_mask.shape = torch.Size([2, 1200, 1066]), pixel_values.shape = torch.Size([2, 3, 1200, 1066])
outputs.encoder_last_hidden_state.shape = torch.Size([2, 1292, 256])
outputs.encoder_last_hidden_state[0, 0, :8] = tensor([-0.0399, -0.0472, -0.0268, -0.0136, 0.0196, -0.1215, 0.0678, 0.0230])
```
The issue is the last line: the output of the last layer of the encoder is different for the first image in the batch.
Here is my understanding so far of how the issue arises:
- The `image_processor` resizes all images to be as large as possible, subject to the shortest edge being less than or equal to `800` and the longest edge being less than or equal to `1333`.
- To combine images of different aspect ratios in the same batch, images are padded with zeros at the bottom and right.
- The pixel values and pixel mask are forwarded through `DetrForObjectDetection` and all the way to the `DetrEncoder`, which then forwards _only_ the pixel values to the backbone (see [here](https://github.com/huggingface/transformers/blob/94056b57beb4499f4f74d5d88a41e8266cc01778/src/transformers/models/detr/modeling_detr.py#L372)).
- If an image is padded with zeros then it is OK to omit the pixel mask if zeros are preserved by the layers (e.g. a `Conv2D` layer). However, in this case, the backbone has batch normalization layers that add values too. The result of this is that the padding pixels get non-zero values which then influence downstream convolutions.
### Expected behavior
If two images are included in a single batch, the model output should be identical to as if the two images were evaluated in separate batches of size one. | 05-08-2023 19:50:57 | 05-08-2023 19:50:57 | Hi @rstebbing,
Indeed, this is a pretty tricky issue. You're understanding of the image processor and model matches mine :)
It seems that the effect of batch size is something the authors were aware of: https://github.com/facebookresearch/detr#evaluation, although they don't specify why e.g. the influence of layer norm.
cc @rafaelpadilla Who has also been investing some of the influences of batch size on object detection metrics and came across the same issue. |
transformers | 23,217 | closed | Paged Optimizer + Lion Optimizer for Trainer | # What does this PR do?
This PR introduces one new optimizer (Lion) and one new feature from bitsandbytes (paged optimizers) for the trainer `--optim` variable.
Paged optimizers is an idea that will be published in an upcoming paper where we fine-tune 65B models on a single GPU. Paged optimizers use as much GPU memory as is available, but if less GPU memory is available they automatically switch to a page-by-page transfer mode between the CPU and GPU and transfer just the optimizer states that are needed right now to perform the parameter updates. As such, they are similar to optimizers that are offloaded but Paged optimizer work well with as little as 2 MB of GPU memory and require no user interaction, no extra code, and are failsafe (you cannot do the allocation wrong). If more memory is available, they behave just like any other optimizer -- there is no difference in behavior or performance.
Paged optimizers are particularly useful for training with variable length mini-batches/sequences: if the model fits in the GPU RAM for most mini-batches and hits a mini-batch with very large context/sequence size, then the optimizer will be evicted to the CPU temporarily. Normal optimization resumes after the large mini-batch.
Since these transfers happen page-by-page and the entire system is automatic, the user does not need to do anything for memory benefits and performance considerations.
The only thing that is necessary to use paged optimizer is to pass the specific argument to the trainer: `--optim paged_adamw_32bit` or `--optim paged_lion_32bit` use standard 32-bit AdamW or Lion that are paged.
More details on the algorithm. Paged optimizers work like this:
1. Optimizer states are allocated on the CPU and mapped to a certain GPU device.
2. When an `optimizer.step()` is performed bitsandbytes prefetches the GPU memory page-by-page from the CPU buffer, thus only needing 2MB of GPU memory to perform the optimizer update. If more memory is available, then the swapped-in pages will stay in memory until ...
3. In the case new GPU memory is allocated which exceeds the total GPU RAM capacity, for example, your GPU has 11 GB of RAM and 10.5 GB are used already, and PyTorch allocates another 2 GB of tensors, then the GPU pages for the optimizer are evicted unto the CPU. This happens automatically without user interaction. As such, an out-of-memory event is prevented automatically.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## About the implementation / Discussion
I added tests similar to those from 8-bit Adam. I refactored all bnb optimizers into one section of the trainer to reduce bloat.
Reviewers: @sgugger @younesbelkada
| 05-08-2023 19:24:20 | 05-08-2023 19:24:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This feature truely needed, does there any timeline on when will new release of bitesandbytes? |
transformers | 23,216 | closed | docs: Fix broken link in 'How to add a model...' | # What does this PR do?
See https://huggingface.co/docs/transformers/add_new_model#run-a-pretrained-checkpoint-using-the-original-repository:~:text=Get%20familiar%20with%20the%20original%20repository
No issue filed
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger | 05-08-2023 18:32:38 | 05-08-2023 18:32:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,215 | closed | transformers.set_seed seems to do nothing | ### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. install transformes with support for alpaca model
2. run this code with one seed
3. run this code with any other seed
4. see that the results are the same
```
from transformers import GenerationConfig, LlamaTokenizer, LlamaForCausalLM, set_seed
from torch import float16, compile, no_grad
set_seed(621)
# Enhances prompt
def enhance_prompt(prompt, input=None):
if input:
return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:"""
else:
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:"""
# Gets response from Alpaca
def get_response(prompt):
with no_grad():
outputs = alpaca.generate(input_ids=tokenizer(prompt, return_tensors="pt").input_ids.to("cuda"), generation_config=generation_config, return_dict_in_generate=True, output_scores=True)
outputs = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
return outputs.split("### Response:")[1]
# Sets up Alpaca
tokenizer = LlamaTokenizer.from_pretrained("chainyo/alpaca-lora-7b")
alpaca = LlamaForCausalLM.from_pretrained("chainyo/alpaca-lora-7b", load_in_8bit=True, torch_dtype=float16, device_map="auto")
generation_config = GenerationConfig(temperature=0.2, top_p=0.75, top_k=40, num_beams=4, max_new_tokens=64)
alpaca.eval()
compile(alpaca)
# Gets output from Alpaca
prompt = enhance_prompt("Write a simple poem about flowers.")
out = get_response(prompt)
# Prints Alpaca's output
print(out)
```
### Expected behavior
Model will output two different answers but now it gives the same every seed I try. | 05-08-2023 18:30:22 | 05-08-2023 18:30:22 | @mojejmenojehonza 👋
Two notes:
1. You should pass `do_sample=True` in your generation config or in your `.generate()` call. Most models have it off by default, causing the generation to be deterministic (and ignoring parameters like `temperature`, `top_k`, etc).
2. With `temperature=0.2`, the relative weight of the most likely logits is massively increased, making generation almost deterministic. Even if there are no bugs in your script, it's far from guaranteed that two different seeds produce different outputs with such low temperature :)<|||||>@gante
Thanks worked like a charm :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,214 | closed | Transformers Agents | # Introducing Transformers Agents
This PR adds a new API called Transformers Agents. Agents allow you to use Transformers with zero code experience, directly talking to Transformers or Diffusers via natural language. It is based on `Agent`s and `Tool`s. The agent is an LLM prompted to generate code using the tools, which are simple functions performing a single task.
Tools can live in Transformers or on the Hub, this PR introduces both. You can read more about this in the added documentation but here is an example:
Define an agent using the starcoder model:
```py
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
```
Use the command `run` to execute a given problem:
```py
agent.run("Draw me a picture of rivers and lakes")
```

Use the command `chat` to chat with the agent and execute instructions one after the other:
```py
agent.chat("Draw me a picture of rivers and lakes")
```

```py
agent.chat("Transform the picture so that there is a rock in there")
```
 | 05-08-2023 17:53:20 | 05-08-2023 17:53:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>picture = agent.run("Draw me a picture of rivers and lakes")
==Explanation from the agent==
I will use the following tool: `image_segmenter` to generate a segmentation mask for the image.
==Code generated by the agent==
prompt = "rivers and lakes"
mask = image_segmenter(image, prompt)
==Result==
Evaluation of the code stopped at line 1 before the end because of the following error:
The variable `image` is not defined.<|||||>Ah yes we did that example with openAI. Will fine-tune the prompt so that example works before the release, thanks for the pointer! |
transformers | 23,213 | open | Question about resum_from_checkpoint in run_translation_no_trainer.py | ### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
-
### Expected behavior
Hi, i stumbled across this function, while i was debugging my own code which is a changed version of the no_trainer version.
While my code crashes with a cuda error, i found this part interesting as it would help me in getting faster to the evaluation process (where my error occurs).
However it seems a bit weird as i was reading this function
[https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#LL594C4-L614C66](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#LL594C4-L614C66)
What seems a bit wird to me is, that first "resume_from_checkpoint" could be either None (default) or a checkpoint like "epoch_5" or "step_1000". The parameter Definition says it should be of Type String or daults to None [https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#L266-L271](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py#L266-L271)
The first check is if the argument was set, to go into the resume block.
```
if args.resume_from_checkpoint:
```
From my understanding the following values would work:
"step_1234" or any other string,
"" an empty string.
True
Now the next check will look if it is not none or not empty
```
if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "":
```
So everything in the if block will be executed in these instances:
is: True/False
is: "step_1000" or any other non empty string.
But if it is True, the later part can not work here as it searches either for "epoch_*" or "step_*", in one of my testruns it whowed when using True it will search for the folder "True" which then fails. While it might be perfectly finde to set it to "" i also found some examples about using the normal trainer, where resume was simply set to true like so:
```trainer.train(resume_from_checkpoint = True)```
My suggestion here would be to first check if it is not None, then if it is not True:
```
if args.resume_from_checkpoint:
args.resume_from_checkpoint != "" and args.resume_from_checkpoint is not True
```
So if it is set to True or an empty string, it will search for a folder with either "step_" or "epoch_" as a name and use this folder.
Another difficulty i found in my project was, that i wanted to save the checkpoints into a subfolder, saving it was not the problem but loading it here, because the script will only replace "epoch_" or "step_" to fetch the step or epoch number. In Order to find the folders in the subfolder i changed the check like so:
```
dirs = [os.path.join(args.checkpoint_dir, f.name) for f in os.scandir(f"{args.checkpoint_dir}") if f.is_dir()]
...
import re
# First the correct epoch is detected here, later the process will skip training until reaching the correct step.
if "epoch" in training_difference:
repl = re.search(r'epoch_(\d+)', training_difference).group()
starting_epoch = int(repl.replace("epoch_", "")) + 1
```
Same check would have to be done in the "step_" part.
Maybe the args.checkpoint_dir can be omitted, so the user can specify the path when setting the argument (True or "" would then not work anymore)
At last i have a question about this part:
```
accelerator.load_state(checkpoint)
```
This is only defined in the block when ```args.resume_from_checkpoint``` is not empty, is it not needed to let the accelerator always load the appropriate checkpoint? In the example it would not use accelerator.load_state when loading it from the latest chekpoint which it discovered by itself.
Edit:
fixed file links | 05-08-2023 17:00:11 | 05-08-2023 17:00:11 | cc @muellerzr <|||||>@danielDigitalArt it's not needed unless we resume from a checkpoint, yes. If you would like to open a PR with your suggestions, that would be welcome as they do make sense to me as an inclusion. Very keen observation<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,212 | closed | Unit tests for hf_argparser | # What does this PR do?
Adds unit tests for hf_argparser.py
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-08-2023 16:15:44 | 05-08-2023 16:15:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Oh, I didn't see that file. Will retract request.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23212). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,211 | closed | Fix remote tool | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-08-2023 15:56:34 | 05-08-2023 15:56:34 | |
transformers | 23,210 | closed | Help with using gpt-neo models correctly | ### System Info
TPU v2 (I am not using the `run_clm_mp.py`, so I do not really need TPU v3-8)
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.13.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?[/GPU](https://vscode-remote+ssh-002dremote-002b35-002e238-002e156-002e174.vscode-resource.vscode-cdn.net/GPU)?[/TPU](https://vscode-remote+ssh-002dremote-002b35-002e238-002e156-002e174.vscode-resource.vscode-cdn.net/TPU)?): 0.6.9 (tpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: <fill in> No
- Using distributed or parallel set-up in script?: <fill in> NA
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Please just use https://gist.github.com/buttercutter/df275bc0f26180cb0f77479482855b83/27ded5e8d0b416664bc2f396886510b857050679
### Expected behavior
There should not be any illegal characters like `)]` in the model output
Please advise on how the numerical values for `emb.at[:50257, :]` and `vocab_size=50264` are being derived. | 05-08-2023 15:51:13 | 05-08-2023 15:51:13 | Why `remove_columns=column_names` which will end up feeding nothing to the [nlp model training process](https://github.com/huggingface/transformers/blob/3335724376319a0c453049d0cd883504f530ff52/examples/research_projects/jax-projects/model_parallel/run_clm_mp.py#L356) ?
Feel free to correct me if I miss anything or wrong.

Edit: It seems that `remove_columns` is to remove tokenizer inputs away from the tokenized outputs.<|||||>```python
input_ids[0] = [ 82 6442 25 ... 50256 50256 50256]
0%| | 0[/104](https://vscode-remote+ssh-002dremote-002b35-002e238-002e156-002e174.vscode-resource.vscode-cdn.net/104) [00:01<?, ?it[/s](https://vscode-remote+ssh-002dremote-002b35-002e238-002e156-002e174.vscode-resource.vscode-cdn.net/s)]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:94 │
│ │
│ 91 │ │
│ 92 │ # Generate the answer │
│ 93 │ #Changing temperature, top_k and top_p does not seem to change the outcome │
│ ❱ 94 │ outputs = model.generate( │
│ 95 │ │ input_ids = eval_tokenized_dataset["input_ids"][index][None, :], │
│ 96 │ │ max_new_tokens=generated_max_length, │
│ 97 │ │ pad_token_id = model.config.eos_token_id, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/generation/flax_utils.py:429 in │
│ generate │
│ │
│ 426 │ │ │ ) │
│ 427 │ │ elif generation_config.do_sample and generation_config.num_beams == 1: │
│ 428 │ │ │ logits_warper = self._get_logits_warper(generation_config=generation_config) │
│ ❱ 429 │ │ │ return self._sample( │
│ 430 │ │ │ │ input_ids, │
│ 431 │ │ │ │ generation_config.max_length, │
│ 432 │ │ │ │ generation_config.pad_token_id, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/generation/flax_utils.py:682 in │
│ _sample │
│ │
│ 679 │ │ model = self.decode if self.config.is_encoder_decoder else self │
│ 680 │ │ │
│ 681 │ │ # initialize model specific kwargs │
│ ❱ 682 │ │ model_kwargs = self.prepare_inputs_for_generation(input_ids, max_length, **model │
│ 683 │ │ │
│ 684 │ │ # initialize state │
│ 685 │ │ state = SampleState( │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:661 in prepare_inputs_for_generation │
│ │
│ 658 │ │ # initializing the cache │
│ 659 │ │ batch_size, seq_length = input_ids.shape │
│ 660 │ │ │
│ ❱ 661 │ │ past_key_values = self.init_cache(batch_size, max_length) │
│ 662 │ │ # Note that usually one would have to put 0's in the attention_mask for x > inpu │
│ 663 │ │ # But since GPTNeo uses a causal mask, those positions are masked anyways. │
│ 664 │ │ # Thus we can create a single static attention_mask here, which is more efficien │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:396 in init_cache │
│ │
│ 393 │ │ attention_mask = jnp.ones_like(input_ids) │
│ 394 │ │ position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), │
│ 395 │ │ │
│ ❱ 396 │ │ init_variables = self.module.init( │
│ 397 │ │ │ jax.random.PRNGKey(0), input_ids, attention_mask, position_ids, return_dict= │
│ 398 │ │ ) │
│ 399 │ │ return unfreeze(init_variables["cache"]) │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/traceback_util.py:166 in │
│ reraise_with_filtered_traceback │
│ │
│ 163 def reraise_with_filtered_traceback(*args, **kwargs): │
│ 164 │ __tracebackhide__ = True │
│ 165 │ try: │
│ ❱ 166 │ return fun(*args, **kwargs) │
│ 167 │ except Exception as e: │
│ 168 │ mode = _filtering_mode() │
│ 169 │ if _is_under_reraiser(e) or mode == "off": │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:1640 in init │
│ │
│ 1637 │ """ │
│ 1638 │ Module._module_checks(self) │
│ 1639 │ │
│ ❱ 1640 │ _, v_out = self.init_with_output( │
│ 1641 │ │ rngs, │
│ 1642 │ │ *args, │
│ 1643 │ │ method=method, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/traceback_util.py:166 in │
│ reraise_with_filtered_traceback │
│ │
│ 163 def reraise_with_filtered_traceback(*args, **kwargs): │
│ 164 │ __tracebackhide__ = True │
│ 165 │ try: │
│ ❱ 166 │ return fun(*args, **kwargs) │
│ 167 │ except Exception as e: │
│ 168 │ mode = _filtering_mode() │
│ 169 │ if _is_under_reraiser(e) or mode == "off": │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:1545 in init_with_output │
│ │
│ 1542 │ elif method is None: │
│ 1543 │ method = self.__call__ │
│ 1544 │ method = _get_unbound_fn(method) │
│ ❱ 1545 │ return init_with_output( │
│ 1546 │ │ method, │
│ 1547 │ │ self, │
│ 1548 │ │ mutable=mutable, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/core/scope.py:965 in wrapper │
│ │
│ 962 │ if not isinstance(rngs, dict): │
│ 963 │ rngs = {'params': rngs} │
│ 964 │ init_flags = {**(flags if flags is not None else {}), 'initializing': True} │
│ ❱ 965 │ return apply(fn, mutable=mutable, flags=init_flags)({}, *args, rngs=rngs, │
│ 966 │ │ │ │ │ │ │ │ │ │ │ │ │ │ **kwargs) │
│ 967 │
│ 968 return wrapper │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/core/scope.py:933 in wrapper │
│ │
│ 930 │ │
│ 931 │ with bind(variables, rngs=rngs, mutable=mutable, │
│ 932 │ │ │ flags=flags).temporary() as root: │
│ ❱ 933 │ y = fn(root, *args, **kwargs) │
│ 934 │ if mutable is not False: │
│ 935 │ return y, root.mutable_variables() │
│ 936 │ else: │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:2121 in scope_fn │
│ │
│ 2118 def scope_fn(scope, *args, **kwargs): │
│ 2119 │ _context.capture_stack.append(capture_intermediates) │
│ 2120 │ try: │
│ ❱ 2121 │ return fn(module.clone(parent=scope), *args, **kwargs) │
│ 2122 │ finally: │
│ 2123 │ _context.capture_stack.pop() │
│ 2124 │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │
│ │
│ 429 │ # otherwise call the wrapped function as is. │
│ 430 │ if args and isinstance(args[0], Module): │
│ 431 │ self, args = args[0], args[1:] │
│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │
│ 433 │ else: │
│ 434 │ return fun(*args, **kwargs) │
│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │
│ │
│ 861 │ # call method │
│ 862 │ if _use_named_call: │
│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │
│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │
│ 865 │ else: │
│ 866 │ │ y = fun(self, *args, **kwargs) │
│ 867 │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:622 in __call__ │
│ │
│ 619 │ │ output_hidden_states: bool = False, │
│ 620 │ │ return_dict: bool = True, │
│ 621 │ ): │
│ ❱ 622 │ │ outputs = self.transformer( │
│ 623 │ │ │ input_ids, │
│ 624 │ │ │ attention_mask, │
│ 625 │ │ │ position_ids, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │
│ │
│ 429 │ # otherwise call the wrapped function as is. │
│ 430 │ if args and isinstance(args[0], Module): │
│ 431 │ self, args = args[0], args[1:] │
│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │
│ 433 │ else: │
│ 434 │ return fun(*args, **kwargs) │
│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │
│ │
│ 861 │ # call method │
│ 862 │ if _use_named_call: │
│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │
│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │
│ 865 │ else: │
│ 866 │ │ y = fun(self, *args, **kwargs) │
│ 867 │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:555 in __call__ │
│ │
│ 552 │ │ hidden_states = input_embeds + position_embeds │
│ 553 │ │ hidden_states = self.dropout(hidden_states, deterministic=deterministic) │
│ 554 │ │ │
│ ❱ 555 │ │ outputs = self.h( │
│ 556 │ │ │ hidden_states, │
│ 557 │ │ │ attention_mask, │
│ 558 │ │ │ deterministic=deterministic, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │
│ │
│ 429 │ # otherwise call the wrapped function as is. │
│ 430 │ if args and isinstance(args[0], Module): │
│ 431 │ self, args = args[0], args[1:] │
│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │
│ 433 │ else: │
│ 434 │ return fun(*args, **kwargs) │
│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │
│ │
│ 861 │ # call method │
│ 862 │ if _use_named_call: │
│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │
│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │
│ 865 │ else: │
│ 866 │ │ y = fun(self, *args, **kwargs) │
│ 867 │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:499 in __call__ │
│ │
│ 496 │ │ │ if output_hidden_states: │
│ 497 │ │ │ │ all_hidden_states += (hidden_states,) │
│ 498 │ │ │ │
│ ❱ 499 │ │ │ layer_outputs = block( │
│ 500 │ │ │ │ hidden_states, │
│ 501 │ │ │ │ attention_mask, │
│ 502 │ │ │ │ deterministic=deterministic, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │
│ │
│ 429 │ # otherwise call the wrapped function as is. │
│ 430 │ if args and isinstance(args[0], Module): │
│ 431 │ self, args = args[0], args[1:] │
│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │
│ 433 │ else: │
│ 434 │ return fun(*args, **kwargs) │
│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │
│ │
│ 861 │ # call method │
│ 862 │ if _use_named_call: │
│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │
│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │
│ 865 │ else: │
│ 866 │ │ y = fun(self, *args, **kwargs) │
│ 867 │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:320 in __call__ │
│ │
│ 317 │ ): │
│ 318 │ │ residual = hidden_states │
│ 319 │ │ hidden_states = self.ln_1(hidden_states) │
│ ❱ 320 │ │ outputs = self.attn( │
│ 321 │ │ │ hidden_states, │
│ 322 │ │ │ attention_mask=attention_mask, │
│ 323 │ │ │ deterministic=deterministic, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │
│ │
│ 429 │ # otherwise call the wrapped function as is. │
│ 430 │ if args and isinstance(args[0], Module): │
│ 431 │ self, args = args[0], args[1:] │
│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │
│ 433 │ else: │
│ 434 │ return fun(*args, **kwargs) │
│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │
│ │
│ 861 │ # call method │
│ 862 │ if _use_named_call: │
│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │
│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │
│ 865 │ else: │
│ 866 │ │ y = fun(self, *args, **kwargs) │
│ 867 │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:266 in __call__ │
│ │
│ 263 │ │ init_cache: bool = False, │
│ 264 │ │ output_attentions: bool = False, │
│ 265 │ ): │
│ ❱ 266 │ │ return self.attention( │
│ 267 │ │ │ hidden_states, │
│ 268 │ │ │ attention_mask=attention_mask, │
│ 269 │ │ │ deterministic=deterministic, │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:432 in wrapped_module_method │
│ │
│ 429 │ # otherwise call the wrapped function as is. │
│ 430 │ if args and isinstance(args[0], Module): │
│ 431 │ self, args = args[0], args[1:] │
│ ❱ 432 │ return self._call_wrapped_method(fun, args, kwargs) │
│ 433 │ else: │
│ 434 │ return fun(*args, **kwargs) │
│ 435 wrapped_module_method.method_handler_wrapped = True # type: ignore[attr-defined] │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/flax/linen/module.py:864 in _call_wrapped_method │
│ │
│ 861 │ # call method │
│ 862 │ if _use_named_call: │
│ 863 │ │ with jax.named_scope(_derive_profiling_name(self, fun)): │
│ ❱ 864 │ │ y = fun(self, *args, **kwargs) │
│ 865 │ else: │
│ 866 │ │ y = fun(self, *args, **kwargs) │
│ 867 │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/transformers/models/gpt_neo/modeling_flax_gpt_neo. │
│ py:209 in __call__ │
│ │
│ 206 │ │ batch_size = hidden_states.shape[0] │
│ 207 │ │ causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1: │
│ 208 │ │ │
│ ❱ 209 │ │ attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)) │
│ 210 │ │ attention_mask = combine_masks(attention_mask, causal_mask) │
│ 211 │ │ │
│ 212 │ │ dropout_rng = None │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py:1117 in broadcast_to │
│ │
│ 1114 The JAX version does not necessarily return a view of the input. │
│ 1115 """) │
│ 1116 def broadcast_to(array: ArrayLike, shape: Shape) -> Array: │
│ ❱ 1117 return util._broadcast_to(array, shape) │
│ 1118 │
│ 1119 │
│ 1120 def _split(op: str, ary: ArrayLike, indices_or_sections: Union[int, ArrayLike], │
│ │
│ /home/moe/.local/lib/python3.11/site-packages/jax/_src/numpy/util.py:418 in _broadcast_to │
│ │
│ 415 │ │ │ │ │ for arr_d, shape_d in safe_zip(arr_shape, shape_tail)) │
│ 416 │ if nlead < 0 or not compatible: │
│ 417 │ msg = "Incompatible shapes for broadcasting: {} and requested shape {}" │
│ ❱ 418 │ raise ValueError(msg.format(arr_shape, shape)) │
│ 419 │ diff, = np.where(tuple(not core.symbolic_equal_dim(arr_d, shape_d) │
│ 420 │ │ │ │ │ │ for arr_d, shape_d in safe_zip(arr_shape, shape_tail))) │
│ 421 │ new_dims = tuple(range(nlead)) + tuple(nlead + diff) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Incompatible shapes for broadcasting: (1, 1, 1, 2296) and requested shape (1, 1, 2048, 2048)
```
If I use the following code with
[eval_arc_test_dataset_solve_prefix.csv](https://github.com/huggingface/transformers/files/11471190/eval_arc_test_dataset_solve_prefix.csv) and `expected_length = config.max_position_embeddings` and `generated_max_length = len(prompt) + len(correct_answer)` after the [embedding resizing/sharding code](https://github.com/huggingface/transformers/tree/3335724376319a0c453049d0cd883504f530ff52/examples/research_projects/jax-projects/model_parallel#model-parallel-language-model-training-example) , I have the above error.
```python
import pandas as pd
from tqdm import tqdm
import jax
import numpy as np
from transformers import FlaxGPTNeoForCausalLM, AutoTokenizer, AutoConfig
model_name = './gpt-neo-125M'
model = FlaxGPTNeoForCausalLM.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained("gpt2", use_fast=True)
tokenizer.pad_token = tokenizer.eos_token
# Function to calculate character accuracy
def character_accuracy(predicted, correct):
matching_chars = sum(c1 == c2 for c1, c2 in zip(predicted, correct))
return matching_chars / max(len(predicted), len(correct))
# Initialize counters and lists for results
total_correct = 0
total_rows = len(df)
correct = []
char_accuracy = []
predictions = []
# Read the CSV file
df = pd.read_csv("./eval_arc_test_dataset_solve_prefix.csv")
def tokenize_function(examples):
    # strip leading and trailing spaces from examples["correct_answer"]
    # print("examples[\"correct_answer\"] = ", examples["correct_answer"])
    examples["correct_answer"] = [x.strip() for x in examples["correct_answer"]]
    # empty resultant list
    prompt_correct_answer = []
    # choose the smaller list to iterate
    small_list = len(examples["prompt"]) < len(examples["correct_answer"]) and examples["prompt"] or examples["correct_answer"]
    prompt_correct_answer = [examples["prompt"][i] + examples["correct_answer"][i] for i in range(len(small_list))]
    expected_length = config.max_position_embeddings  # data_args.block_size
    # tokenized_prompt = tokenizer(prompt_correct_answer, padding="longest", truncation=True, max_length=None)
    tokenized_prompt = tokenizer(prompt_correct_answer, padding="max_length", truncation=True, max_length=expected_length)
    # Force the length of the input_ids and attention_mask to match the expected length
    tokenized_prompt["input_ids"] = [seq[:expected_length] + [tokenizer.pad_token_id] * (expected_length - len(seq)) for seq in tokenized_prompt["input_ids"]]
    tokenized_prompt["attention_mask"] = [mask[:expected_length] + [0] * (expected_length - len(mask)) for mask in tokenized_prompt["attention_mask"]]
    # Convert tokenized sequences to arrays of integers
    input_ids = np.array(tokenized_prompt["input_ids"], dtype=np.int32)
    attention_mask = np.array(tokenized_prompt["attention_mask"], dtype=np.int32)
    print("input_ids[0] = ", input_ids[0])
    return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": input_ids}
'''
eval_tokenized_dataset = df.map(
tokenize_function,
batched=True,
num_proc=2,
remove_columns=['prompt', 'correct_answer'],
load_from_cache_file=False,
)
'''
def process_dataframe(df):
    # Convert the DataFrame to a dictionary
    examples = df.to_dict(orient='list')
    # Call the tokenize_function with the examples dictionary
    return tokenize_function(examples)
# Process the DataFrame using the process_dataframe function
eval_tokenized_dataset = process_dataframe(df)
# Iterate over rows in the DataFrame
for index, row in tqdm(df.iterrows(), total=total_rows):
    prompt = row['prompt']
    correct_answer = row['correct_answer']
    generated_max_length = len(prompt) + len(correct_answer)
    # print("generated_max_length = ", generated_max_length)
    # Changing the seed and thus the prng_key value below does seem to change the outcome.
    seed = 1000
    model.seed = seed
    # inputs = tokenizer(prompt, return_tensors="np")
    # Generate the answer
    # Changing temperature, top_k and top_p does not seem to change the outcome
    outputs = model.generate(
        input_ids=eval_tokenized_dataset["input_ids"][index][None, :],
        max_new_tokens=generated_max_length,
        pad_token_id=model.config.eos_token_id,
        prng_key=jax.random.PRNGKey(seed),
        temperature=0.8,
        early_stopping=True,
        top_k=50,
        top_p=0.95,
        do_sample=True,
        no_repeat_ngram_size=2,
    )
    output_sequence = outputs['sequences'].squeeze(0)
    generated_answer = tokenizer.decode(output_sequence, clean_up_tokenization_spaces=True)
    predictions.append(generated_answer)
    # Calculate correctness and character accuracy
    is_correct = int(generated_answer == correct_answer)
    char_acc = character_accuracy(generated_answer, correct_answer)
    # Update counters and lists
    total_correct += is_correct
    correct.append(is_correct)
    char_accuracy.append(char_acc)
# Add the new columns to the DataFrame
df['predictions'] = predictions
df['correct'] = correct
df['character_accuracy'] = char_accuracy
# Calculate and print the statistics
percentage_correct = total_correct / total_rows * 100
avg_char_accuracy = sum(char_accuracy) / total_rows * 100
print(f"Total correct answers: {total_correct}")
print(f"Percentage correct: {percentage_correct:.2f}%")
print(f"avg_char_accuracy: {avg_char_accuracy:.2f}%")
# Save the updated DataFrame to a new CSV file
df.to_csv("eval_arc_test_dataset_solve_prefix.csv", index=False)
```<|||||>Hey! Thanks for opening an issue.
For more help on how to use the model, I would recommend you to ask and check out [forum](https://discuss.huggingface.co/).
However, if you want help for this particular problem, we would need a *minimal* reproducing script where you remove everything that is not necessary to reproduce the particular bug! That would help us a lot<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,209 | closed | Test composition remote tool | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-08-2023 15:48:26 | 05-08-2023 15:48:26 | |
transformers | 23,208 | closed | Proposed fix for TF example now running on safetensors. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-08-2023 14:54:41 | 05-08-2023 14:54:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>CI seems back to normal, can you just rebase on main to get the pin on `tensorflow_probability`? |
transformers | 23,206 | closed | NER Pipeline: Entities group with multiple hyphens | ### System Info
- `transformers` version: 4.27.4
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have a BERT model that has labels such as `B-tax-percent` or `I-tax-amount`.
When I run inference with my model and group the entities, I only get the last part of each entity name, for example `percent` or `amount` instead of `tax-percent` or `tax-amount`.
Here is an example:
**config.json**:
```json
{
"_name_or_path": "Geotrend/distilbert-base-en-fr-cased",
"activation": "gelu",
"architectures": [
"DistilBertForTokenClassification"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"id2label": {
"0": "O",
"1": "B-curr",
"10": "B-date",
"11": "B-payment-date",
"12": "B-tax_^-percent",
// [...]
```
**inference.py:**
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-cased")
model = AutoModelForTokenClassification.from_pretrained("./data/models")
nerpipeline = pipeline('ner', model=model, tokenizer=tokenizer, device=0, aggregation_strategy="average")
print(nerpipeline("""37.29%"""))
```
**Output**
```python
[{'entity_group': 'percent', 'score': 0.4425447, 'word': '37', 'start': 0, 'end': 2}, {'entity_group': 'percent', 'score': 0.462865, 'word': '.', 'start': 2, 'end': 3}, {'entity_group': 'percent', 'score': 0.5241904, 'word': '29', 'start': 3, 'end': 5}, {'entity_group': 'percent', 'score': 0.3016571, 'word': '%', 'start': 5, 'end': 6}]
```
### Expected behavior
I would expect the entity group to be named with the full name except the "B" or "I" prefix.
```python
[{'entity_group': 'tax-percent', 'score': 0.4425447, 'word': '37', 'start': 0, 'end': 2}, {'entity_group': 'tax-percent', 'score': 0.462865, 'word': '.', 'start': 2, 'end': 3}, {'entity_group': 'tax-percent', 'score': 0.5241904, 'word': '29', 'start': 3, 'end': 5}, {'entity_group': 'tax-percent', 'score': 0.3016571, 'word': '%', 'start': 5, 'end': 6}]
```
It seems that the issue comes from this line, but maybe there is a reasoning that I am not aware of, or maybe multiple hyphens in entity group names are a bad practice?
https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/pipelines/token_classification.py#L500 | 05-08-2023 13:09:57 | 05-08-2023 13:09:57 | Can you use `B-tax_percent` instead ? It allows preserving the logic and should work out of the box, no ?<|||||>Thank you for your quick response. Yes I could rename them, and at the end that is what I did, but isn't it still a bug? I did not expect the names of my entities to be truncated.
In my case it would not quite work out of the box because I would need to pre-process the label names from my dataset to change the hyphens to underscores for training, and during inference post-process them to put the hyphens again, but of course it is not a big deal and that would be mostly a one-time thing.<|||||>> Yes I could rename them, and at the end that is what I did, but isn't it still a bug? I did not expect the names of my entities to be truncated.
Sort of, `B-` and `I-` are respected conventions and we could definitely split only once to preserve the other, but there's always going to be some specification there (like why `B` and `I`).
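For illustration, a split-once approach (a hypothetical sketch, not necessarily the exact code at the linked line) would keep the full group name:

```python
def get_tag(entity_name: str) -> str:
    # strip only the leading B-/I- prefix, keeping any further hyphens intact
    if entity_name.startswith("B-") or entity_name.startswith("I-"):
        return entity_name.split("-", 1)[1]
    return entity_name


assert get_tag("B-tax-percent") == "tax-percent"
assert get_tag("I-tax-amount") == "tax-amount"
```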
PRs are welcome if you want to fix ! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,205 | closed | add word-level timestamps to Whisper | # What does this PR do?
Our implementation of Whisper currently can return timestamps but these cover long-ish segments of text and are not necessarily very accurate. This PR adds a method of predicting timestamps at the word (or even token) level, by analyzing the cross-attentions and applying dynamic time warping. This is also the method that OpenAI uses for their `word_timestamps` option, and the implementation in this PR is heavily based on their code.
For a preliminary exploration of how to do this with HF Transformers, [see this Colab notebook](https://colab.research.google.com/drive/1VWbAgzKWQsStdAA1hcumBU2uyFQX7zAB?usp=sharing).
Fixes https://github.com/huggingface/transformers/issues/21412 and https://github.com/huggingface/transformers/issues/22590
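For intuition, here is a minimal dynamic time warping sketch over a token-by-frame cost matrix (illustrative only; the actual implementation in this PR follows OpenAI's code and operates on cross-attention weights):

```python
import numpy as np

def dtw_path(cost):
    # cost[i, j] = mismatch between output token i and audio frame j
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    # backtrack from the bottom-right corner to recover the monotonic alignment
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]  # list of (token_index, frame_index) pairs

# toy usage: align 3 tokens against 5 audio frames
print(dtw_path(np.random.rand(3, 5)))
```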
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-08-2023 13:05:57 | 05-08-2023 13:05:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Design question:
I added a new `return_word_timestamps` argument to `model.generate()`. This allows us to combine `return_timestamps` with `return_word_timestamps`. Note that `return_timestamps` is not required for `return_word_timestamps` to work; they use different methods of computing the timestamps.
When `return_timestamps=True` and `return_word_timestamps=True`, the words and associated timestamps are added to the segment they belong to (this is what OpenAI does), like so:
```python
[
{
'text': "<|startoftranscript|><|en|><|transcribe|><|0.00|> Henry 5, Act 4, Scene 3.<|8.64|> ...",
'offsets': [
{
'text': ' Henry 5, Act 4, Scene 3.',
'timestamp': (0.0, 8.64),
'words': [ # this is the new bit
(' Henry', (0.0, 1.2), 0.98),
(' 5,', (1.3, 1.5), 0.95),
(' Act', (2.2, 2.9), 0.57),
...
]
},
...
```
When `return_timestamps=False` and `return_word_timestamps=True`, there are no segments and the word timestamps would look something like this:
```python
[
{
'text': "<|startoftranscript|><|en|><|transcribe|><|0.00|> Henry 5, Act 4, Scene 3.<|8.64|> ...",
'words': [
(' Henry', (0.0, 1.2), 0.98),
(' 5,', (1.3, 1.5), 0.95),
(' Act', (2.2, 2.9), 0.57),
...
]
},
...
```
For CTC models, the ASR pipeline lets you do `return_timestamps="words"`. So instead of having a separate argument, it might be better to overload `return_timestamps` for this in Whisper as well. Then the question is: if using `"words"` do we also do the regular timestamp prediction or not?
Also interesting: The [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) repo uses the word-level timestamps to generate more accurate segment timestamps.
So the options are:
1. have separate `return_timestamps` and `return_word_timestamps` arguments
2. allow `return_timestamps="words"`, which implies we also do the regular timestamps
3. allow `return_timestamps="words"`, but don't do regular timestamps
4. allow `return_timestamps="words"` and also use this to improve the regular timestamps as in the whisper-timestamped repo
I'm leaning towards option 4 but curious to hear other opinions.
<|||||>Thanks for the explanation! Sounds good to me adding the new argument for word-level timestamps in `model.generate`. I would be in favour of computing the segment-level timestamps when using pipeline with `return_timestamps="word"` (option 2), since computing segment-level timestamps has been shown to greatly reduce Whisper's propensity to hallucinate with our chunking+batching algorithm. It adds extra decoding steps (since we generate more tokens overall), but in general users seem happy with this if it means the transcriptions are returned with greater accuracy.
I'm a bit tentative about option 4 since it's a bit of an unofficial implementation that doesn't really quantify the performance gain compared to OpenAI's baseline method. Most users using transformers with word-level timestamps will expect vanilla DTW that matches the official implementation, but if there's a way we can guarantee a performance gain with little added complexity / overhead then it could make sense to add this. Do you have an example that demonstrates the gain we get from using `whisper-timestamped` and a good feel for how it would boost performance?<|||||>> Do you have an example that demonstrates the gain we get from using `whisper-timestamped` and a good feel for how it would boost performance?
Various papers, such as the WhisperX one, claim that Whisper's timestamps are often inaccurate. The word-level timestamps are more accurate because they map the words directly to a position in the input audio. However, I don't have any actual data on this. I'm also fine with leaving the original Whisper timestamps alone. ;-)<|||||>> Sounds good to me adding the new argument for word-level timestamps in `model.generate`. I would in favour of computing the segment-level timestamps when using pipeline with `return_timestamps="word"`
What do you think of also using `return_timestamps="word"` in `model.generate()` instead of `return_word_timestamps=True`? This would then do both regular and word-level timestamps.
<|||||>While doing the prep work for this new feature, I found that `tokenizer.decode(..., output_offsets=True)` doesn't include the last segment if no final timestamp is predicted. I made a separate issue for discussing that: https://github.com/huggingface/transformers/issues/23231
With that in mind, maybe we should not report the word-level timestamps per segment but like this:
```python
[
{
'text': "<|startoftranscript|><|en|><|transcribe|><|0.00|> Henry 5, Act 4, Scene 3.<|8.64|> ...",
'offsets': [
{
'text': ' Henry 5, Act 4, Scene 3.',
'timestamp': (0.0, 8.64),
}, ...
]
'words': [ # this is the new bit
{ 'text': ' Henry', 'timestamp': (0.0, 1.2), 'probability': 0.98 },
{ 'text': ' 5,', 'timestamp': (1.3, 1.5), 'probability': 0.95 },
{ 'text': ' Act', 'timestamp': (2.2, 2.9), 'probability': 0.57 },
...
]
},
```
This way we can keep it independent from the segment decoding and you just get one big list of word timestamps for the entire input file (also when using the pipeline). This is actually somewhat simpler to implement.
The question is: would users prefer it this way or per segment? (They can always write code to look at the timestamps for the segments to figure out which segment a particular word belongs to.)
(OpenAI puts the word-level timestamps inside the segments.)
<|||||>Did an initial implementation in `model.generate()`. The argument is `return_token_timestamps` instead of `return_word_timestamps` because `generate()` doesn't know what words are, only tokens.
Besides a tensor of predicted `token_ids`, this now also returns a tensor with the probability for each token, and a list with `(start time, end time)` tuples. Although since we're working with just tokens, I could change this to just the starting time (since the end time is always the starting time of the next token).
The code uses a simplified version of the OpenAI implementation. In particular, it doesn't filter out the special tokens such as `<|startoftranscription|>`. I was curious if that would matter — it seems that it actually does give somewhat worse results than OpenAI, so I'll have to change this to filter out the special tokens after all. 😅
<|||||>> What do you think of also using return_timestamps="word" in model.generate() instead of return_word_timestamps=True?
I would say "yes" if that makes the integration with `pipeline` easier (which I think it should if we already expect the argument `return_timestamps="word"` in `pipeline`)<|||||>IMO returning the individual words and their timestamps **separately** to the segments is fine - I can't think of an obvious use case where you'd want segment-level timestamps and then refine to word-level. You'd probably always go straight for word-level if you needed them. Also cc @Narsil here - some interesting discussions regarding how we can fit word-level timestamps into the Whisper modelling code + ASR pipeline<|||||>> Although since we're working with just tokens, I could change this to just the starting time (since the end time is always the starting time of the next token).
This is because the "space" token accounts for the time between word `i` and word `(i-1)`, which we don't return a timestamp for when we decode tokens to words?<|||||>> This is because the "space" token accounts for the time between word `i` and word `(i-1)`, which we don't return a timestamp for when we decode tokens to words?
No, it's because the alignment that is derived from the cross-attentions simply assigns a timestamp to each token. If we know which tokens should be grouped together to form a word, then the timestamp for the first token in the word is the start time of the word and the timestamp for the last token of the word is its end time.
But here we're only working with tokens, not words. Of course there are many tokens that do correspond to a whole word, but there are also words that are comprised of multiple tokens. There is no start time or end time for a given token, only "this token happens at around this time". So there is always a certain amount of imprecision, which is worse for longer tokens.<|||||>> some interesting discussions regarding how we can fit word-level timestamps into the Whisper modelling code + ASR pipeline
`model.generate()` will return a list of timestamps, one for every predicted token. This should be straightforward to integrate into the pipeline, since in `_decode_asr` it will already filter the overlapping parts of the 30-second audio chunks at the token level. Since we know which tokens will be kept / dropped, we can simply keep / drop the corresponding timestamps.
Once we have timestamps at the token level, we can use some basic rules (copied from OpenAI) to fuse these tokens and their timestamps into actual words, including punctuation.
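As a rough illustration of that fusion step (a hypothetical simplification; the real rules also handle punctuation and languages that don't use spaces):

```python
def tokens_to_words(token_texts, token_timestamps):
    # token_timestamps: one (start, end) pair per decoded token
    words = []
    for text, (start, end) in zip(token_texts, token_timestamps):
        if words and not text.startswith(" "):
            # continuation of the previous word: extend its text and its end time
            prev_text, (prev_start, _) = words[-1]
            words[-1] = (prev_text + text, (prev_start, end))
        else:
            words.append((text, (start, end)))
    return words

print(tokens_to_words([" Hen", "ry", " 5,"], [(0.0, 0.6), (0.6, 1.2), (1.3, 1.5)]))
# [(' Henry', (0.0, 1.2)), (' 5,', (1.3, 1.5))]
```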
So the integration with the pipeline should be less tricky than I initially thought. 😄 <|||||>Another design question:
OpenAI's implementation returns the probability of each word. This is the mean over the probabilities of the tokens making up the word. Seems useful enough to include this.
I changed `model.generate()` to return the probability of each token along with the token timestamps. However, these probabilities aren't actually used to derive the timestamps.
So maybe we don't need to include code for this at all. Right now, if the user wants to get probabilities, they can simply pass in `return_scores=True` and that gives them the logits. Then they can apply softmax etc themselves to get the probabilities. The pipeline would also use `return_scores` to get those probabilities.
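A rough sketch of what that user-side derivation could look like with the standard `generate` flags (`output_scores` / `return_dict_in_generate`); the random tensor here just stands in for real log-mel features:

```python
import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
input_features = torch.randn(1, 80, 3000)  # placeholder for processor(...).input_features

outputs = model.generate(input_features, return_dict_in_generate=True, output_scores=True)
step_probs = torch.stack(outputs.scores, dim=0).softmax(dim=-1)   # (steps, batch, vocab)
generated = outputs.sequences[:, -len(outputs.scores):]            # tokens the scores refer to
token_probs = step_probs.gather(-1, generated.T.unsqueeze(-1)).squeeze(-1).T
print(token_probs)  # probability of each generated token
```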
I'm thinking that this is the cleanest solution. Having `model.generate()` return the token probabilities is like giving it the same functionality twice, since `return_scores=True` also returns this information already (just not as probabilities but as logits).<|||||>Here is a notebook that shows how to use the new functionality: https://colab.research.google.com/drive/10QS37Z3-5HNuiEubpb59n5GbC8-m5dQS?usp=sharing
<|||||>The current implementation of `model.generate()` works and gives results similar to OpenAI (although not exactly the same, as we grab the cross-attentions while generating, whereas they run the model again on the generated sequence).
Some more food for thought:
* Larger Whisper variants give better results. However, each model needs its own `config.attention_heads`. So we'll need to update the `config.json` files on the Hub (and users who fine-tuned their models need to patch their config.json files if they want to use token-level timestamps).
* I have experimented with two methods: 1) keep the special tokens when doing DTW on the cross-attention weights, 2) ignore the special tokens. OpenAI does the latter. I'm not sure which results I like better, the timing is always a bit off either way.
Pros of ignoring the special tokens:
- this is essentially what OpenAI does, so we get similar results (but again, not exactly the same)
Cons of ignoring the special tokens:
- not as batch-friendly (might not matter since the DTW implementation doesn't work on batches anyway)
- we need a placeholder timestamp (currently using `-1.0`) for these special tokens in the output tensor, which may make the results more awkward to parse for the user
EDIT: I did some more tests and with / without special tokens predicts pretty much the same timestamps, with only small differences. Keeping the special tokens in there seems to give slightly better results overall, so I'm going to revert to that.
The downside is that for padding tokens (`<|endoftext|>` at the end of sequence) it may predict nonsense timestamps, but the user will likely filter out these tokens afterwards anyway.<|||||>Hi @sanchit-gandhi and @Narsil, I'd like your feedback on the following:
I've added `return_timestamps="word"` to the ASR pipeline for Whisper. This calls `model.generate(..., return_token_timestamps=True)` to grab the token-level timestamps (see above for how that works) and then `_decode_asr()` turns this into word-level timestamps.
We now get this kind of output from the ASR pipeline:
```python
"chunks": [
{
"text": "hi there",
"timestamp": (0.5, 1.6),
"words": [
{"text": "hi", "timestamp": (0.5, 0.9)},
{"text": "there", "timestamp": (1.0, 1.6)}]
]
},
... next chunks ...
]
```
In other words, the word-level timestamps are organized per chunk. This is also how OpenAI does it. Doing it this way is a natural fit for the logic in `_decode_asr()` and `_find_longest_common_sequence()`, as the `token_timestamps` get split up exactly the same way as the regular `token_ids`.
However, it's not what the ASR pipeline docs promise. When return_timestamps="word", the expected output is:
```python
"chunks": [
{"text": "hi", "timestamp": (0.5, 0.9)},
{"text": "there", "timestamp": (1.0, 1.6)}
]
```
We no longer have sentence fragments (such as `"hi there"`) and their timestamps, but only individual words. I could do it like this with a bit of a hack, but note that the word-level timestamps are not always reliable (sometimes the cross-attention weights get confused near the end of the audio segment) and so it might be useful to keep the other timestamps as well.
Question 1: Which of the above outputs should we use?
Question 2: Should this output also include word probabilities? The OpenAI version does this. We could do it too, but it's not going to make the code any prettier.
Question 3: What do you think of my modifications to `_decode_asr()` and the tokenizer, is this the right way to go?
(Note: There are a few more details for me to implement, so the code isn't 100% ready yet, but most of the logic is there.)<|||||>Made some modifications. The output now includes a list of tokens for every word:
```python
"chunks": [
{
"text": "hi there",
"timestamp": (0.5, 1.6),
"words": [
{"text": "hi", "timestamp": (0.5, 0.9), "tokens": [123]},
{"text": "there", "timestamp": (1.0, 1.6), "tokens: [456, 789]},
]
},
... next chunks ...
]
```
If we also decide to add the probabilities (see above), then the output would look like this:
```python
"chunks": [
{
"text": "hi there",
"timestamp": (0.5, 1.6),
"words": [
{"text": "hi", "timestamp": (0.5, 0.9), "tokens": [123], "probability": 0.98},
{"text": "there", "timestamp": (1.0, 1.6), "tokens: [456, 789], "probability": 0.87},
]
},
... next chunks ...
]
```
<|||||>Link to a new Colab notebook demonstrating the pipeline with word-level timestamps: https://colab.research.google.com/drive/1hwTlVlkATbyXZCZ0XY5aSP7qZcBfHC-W?usp=sharing<|||||>With regards to your question about the output format for pipeline - the one that you've settled on sounds sensible to me given that the timestamp prediction is inherently different for words vs segments.
I wonder if having the probabilities would be more useful than the tokens though? I can see use cases where you default back to the segment-level timestamps if the word-level ones are low confidence. Not sure when the tokens would necessarily be useful?<|||||>> I wonder if having the probabilities would be more useful than the tokens though? I can see use cases where you default back to the segment-level timestamps if the word-level ones are low confidence. Not sure when the tokens would necessarily be useful?
Just to clarify: the probabilities are for the tokens / words, not the timestamps.
I also can't think off the top of my head what you'd want to have the tokens for. ;-)
<|||||>Kindly requesting a second review from @ArthurZucker in @Narsil's absence 🤗<|||||>Hi @amyeroberts, I think this PR is ready for a final core maintainer review. Thanks!<|||||>Made the requested changes, so this should be ready to go.<|||||>OK, I think that should be everything then. Feel free to merge! |
transformers | 23,204 | closed | New version of Accelerate for the Trainer | # What does this PR do?
All is said in the title, the ongoing efforts to migrate the Trainer to Accelerate require the new version of Accelerate. | 05-08-2023 12:46:32 | 05-08-2023 12:46:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,203 | closed | Whisper feature extraction: tiny condition check error | Hi
The `frame_width` below should probably be compared against `n_fft`; otherwise this `if` always executes but does not necessarily apply any padding, since the frame width is already fine.
https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/whisper/feature_extraction_whisper.py#L163 | 05-08-2023 10:13:46 | 05-08-2023 10:13:46 | cc @sanchit-gandhi <|||||>Should be fixed by https://github.com/huggingface/transformers/pull/21998<|||||>Closed via #21998 |
transformers | 23,202 | closed | ImportError: cannot import name 'OpenLlamaForCausalLM' from 'transformers' | ### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.15.0-1035-aws-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): 2.11.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. pip install git+https://github.com/huggingface/transformers.git#egg=transformers or clone & pip install -e .
2. `from transformers import OpenLlamaForCausalLM`
### Expected behavior
The module `OpenLlamaForCausalLM` should be imported successfully, since I am using the latest version.
Am I doing something wrong here? | 05-08-2023 10:10:49 | 05-08-2023 10:10:49 | This works for me on main. Are you sure you have pulled the latest changes if installing from the repo, or that you execute the code in the same environment you installed Transformers from source in?<|||||>It seems like it was an issue with pip failing to replace the currently installed version with the one from main. Doing the following solved everything:
1- `pip uninstall transformers`
2- Cloning the repo & `pip install -e .`
Thanks for the help 🙌 |
transformers | 23,201 | closed | torch.jit support | ### Feature request
Hi there, is torch.jit used by default for model inference in the transformers library, for example, in the Auto series APIs? If not, why isn't it used by default? Thank you.
### Motivation
None
### Your contribution
None | 05-08-2023 06:21:14 | 05-08-2023 06:21:14 | No it's not used by default since the Transformers library can't know whether you are going to use your model for training or inference. It also comes with constraints (like duplicating shared weights) so it's up to the user to activate it if the situation suits their needs. You can pass `torchscript=True` when loading your model to have the jit-compilation done for you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,200 | closed | Update language_modeling.py | Title: Add truncate_seq_pair function to TextDatasetForNextSentencePrediction
Description:
This PR adds a `truncate_seq_pair()` function to the `TextDatasetForNextSentencePrediction` class, inside the `create_examples_from_document()` function. This function truncates the sequences if they exceed the maximum number of tokens, which is useful when dealing with very long input texts. By including this function, the generated examples will adhere to the maximum sequence length constraint, which is important for the proper functioning of the model during pre-training.
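For reference, pair truncation of this kind typically looks something like the sketch below (illustrative only, not necessarily the exact code added in this PR):

```python
def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens):
    """Trim the longer of the two token lists in place until the pair fits."""
    while len(tokens_a) + len(tokens_b) > max_num_tokens:
        trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
        assert len(trunc_tokens) >= 1
        trunc_tokens.pop()
```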
Fixes # (issue)
## Before submitting
- [x] I read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section.
- [x] I wrote any new necessary tests.
## Who can review?
- text models: @ArthurZucker and @younesbelkada
- tokenizers: @ArthurZucker
- trainer: @sgugger
Please let me know if there are any changes or improvements needed. | 05-08-2023 06:16:18 | 05-08-2023 06:16:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23200). All of your documentation changes will be reflected on that endpoint.<|||||>This code is deprecated and not maintained anymore. To preprocess your data, we recommend you use the `datasets` library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,199 | closed | Mismatch between config.vocab_size and len(tokenizer) in Flan-T5 | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-1023-azure-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoConfig

models = [
    "google/flan-t5-small",
    "google/flan-t5-base",
    "google/flan-t5-large",
    "google/flan-t5-xl",
    "google/flan-t5-xxl",
]

for model in models:
    config = AutoConfig.from_pretrained(model)
    tokenizer = AutoTokenizer.from_pretrained(model)
    print(f"{model}\n\tlen(tokenizer)={len(tokenizer)},tokenizer.vocab_size={tokenizer.vocab_size},config.vocab_size={config.vocab_size}")
```

### Expected behavior
The two are matched. | 05-08-2023 05:41:56 | 05-08-2023 05:41:56 | See #4875.<|||||>Got it. Thanks! Voting for clarification.<|||||>I gather from the thread that it shouldn't be a problem. The size was increased for the ease of GPU usage. Is this creating any issues in inferencing or training the model? Just want to know for better understanding!<|||||>Ok, here is my usecase.
I usually calculate loss myself rather than pass `labels` into a model for automatic loss calculation.
For example, when calculating the commonly used **Cross Entropy Loss**, I would have to figure out the vocab size myself.
```python
loss = F.cross_entropy(
    outputs.logits.view(-1, VOCAB_SIZE),
    labels.view(-1),
    label_smoothing=cfg.trainer.label_smoothing_factor,
)
```
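(As an aside, the hard-coded size can also be avoided by reading it off the logits themselves — a small sketch, assuming `outputs`, `labels`, `cfg`, and `F` as in the snippet above:)

```python
# vocab size read directly from the logits, so no manual constant is needed
vocab_size = outputs.logits.size(-1)
loss = F.cross_entropy(
    outputs.logits.view(-1, vocab_size),
    labels.view(-1),
    label_smoothing=cfg.trainer.label_smoothing_factor,
)
```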
So I think in cases where `vocab_size` matters, the value from `config.vocab_size` and `len(tokenizer)` should be consistent.<|||||>The pre-trained model which is provided by google had its vocab_size manually set to 32128 by them. Here's what I found in their github:
<img width="809" alt="Screenshot 2023-05-20 at 9 54 23 AM" src="https://github.com/huggingface/transformers/assets/118152679/86b3bbf1-8d72-4b9f-807b-3ab6dfb3a2ef">
Did you try setting VOCAB_SIZE manually?<|||||>Hi, thanks for the response! I just mean when i need to know the value of `vocab_size`, imo it should be consistent with `len(tokenzier)` and `config.vocab_size` in huggingface. |
transformers | 23,198 | closed | Should group_text in run_clm.py separate documents with special tokens? | ### System Info
- transformers version: 4.28.1
- platform: OSX Ventura 13.3.1 (M1)
- python version: 3.11.3
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I observe that when running `run_clm.py` with the GPT-J tokenizer, `group_texts()` doesn't separate different documents with a special token (for the GPT-J tokenizer, eos = bos = padding). Is this something I need to handle myself?
Snippet from `run_clm.py`:
```python
from datasets import load_dataset
def tokenize_function(examples, text_column_name="text"):
    ...

# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
    ...

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:5]")
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    remove_columns=list(raw_datasets.features),
)
block_size = 8
lm_datasets = tokenized_datasets.map(group_texts, batched=True)
```
Inspecting `lm_datasets` shows the follows:
```python
>>> print(raw_datasets['text'])
['', ' = Valkyria Chronicles III = \n', '', ' Senjō no Valkyria ...', ...]
>>> print(tokenized_datasets['input_ids'])
[[], [796, 569, 18354, 7496, 17740, 6711, 796, 220, 198], [], [2311, 73, 13090, 645, 569, 18354, 7496, ...], ...]
>>> print(lm_datasets['input_ids'])
[[796, 569, 18354, 7496, 17740, 6711, 796, 220], [198, 2311, 73, 13090, 645, 569, 18354, 7496], ...]
```
As shown above, there is no eos or sep token (the GPT-J tokenizer uses `<|endoftext|>`, aka 50256, for both) in `lm_datasets`.
### Expected behavior
My understanding from the official tutorial ([link](https://www.youtube.com/watch?v=8PmhEIXhBvI&t=103s)) is that different documents should be separated with a special token.
| 05-08-2023 02:51:42 | 05-08-2023 02:51:42 | It shows one basic data preprocessing. It's up to you to customize it to your dataset and your needs :-)<|||||>Got it. Thank you @sgugger for the explanation.<|||||>I had similar confusion till I found this post.
This is how I address the issue
```
def tokenize_function(examples):
    assert tokenizer.pad_token is not None
    with CaptureLogger(tok_logger) as cl:
        output = tokenizer(
            examples[text_column_name],
            truncation=True,
            max_length=block_size,
            padding="max_length",
        )
    # clm input could be much much longer than block_size
    if "Token indices sequence length is longer than the" in cl.out:
        tok_logger.warning(
            "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits"
            " before being passed to the model."
        )
    return output
```
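Another option, if you want each document explicitly terminated before `group_texts` concatenates everything, is to append the EOS token during tokenization — a sketch along the lines of the script's `tokenize_function` (the `tokenizer` and `text_column_name` names are assumed from `run_clm.py`):

```python
def tokenize_function(examples):
    # Append the EOS token to every document so concatenated blocks keep a separator
    # (for GPT-2/GPT-J style tokenizers, eos doubles as the document boundary marker).
    texts = [text + tokenizer.eos_token for text in examples[text_column_name]]
    return tokenizer(texts)
```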
|
transformers | 23,197 | closed | BioGPT causal language model with unexpected error | ### System Info
transformers==4.28.0
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
input_sequence = "Hello, I'm a language model,"
inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)
past_key_values = None
count = 0
complete_token = []
with torch.no_grad():
while count<10:
count += 1
print("Iteration no.: " + str(count))
if count > 1:
inputs = input_token
model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values)
logits = model_out.logits[:, -1, :]
past_key_values = model_out.past_key_values
topk_values, topk_indices = torch.topk(logits, 5)
log_probs = F.softmax(topk_values, dim=-1)
inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)
input_token = torch.gather(topk_indices, 1, inputs_in_topk)
complete_token.append(input_token)
```
### Expected behavior
I am trying to use a Causal Language Model from BioGPT. However, I got a strange error.
Here are my steps:
First, I installed `transformers` and `sacremoses`:
```
!pip install transformers sacremoses -q
```
Then I executed the code from above.
And here is the error I got:
```
Iteration no.: 1
Iteration no.: 2
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_18990/2689790310.py in <cell line: 8>()
13 inputs = input_token
14
---> 15 model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values)
16 logits = model_out.logits[:, -1, :]
17 past_key_values = model_out.past_key_values
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict)
677 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
678
--> 679 outputs = self.biogpt(
680 input_ids,
681 attention_mask=attention_mask,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
589 )
590 else:
--> 591 layer_outputs = decoder_layer(
592 hidden_states,
593 attention_mask=attention_mask,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, attention_mask, layer_head_mask, past_key_value, output_attentions, use_cache)
313 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
314 # add present self-attn cache to positions 1,2 of present_key_value tuple
--> 315 hidden_states, self_attn_weights, present_key_value = self.self_attn(
316 hidden_states=hidden_states,
317 past_key_value=self_attn_past_key_value,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
211 if attention_mask is not None:
212 if attention_mask.size() != (bsz, 1, tgt_len, src_len):
--> 213 raise ValueError(
214 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
215 )
ValueError: Attention mask should be of size (1, 1, 0, 12), but is torch.Size([1, 1, 1, 1])
```
So apparently, everything went fine in the first execution, but in the second model call this error came up.
Do you know how to fix this? 🙂 | 05-07-2023 22:19:47 | 05-07-2023 22:19:47 | Hey @junoriosity 👋
Two notes:
1. It seems like you are trying to generate text using BioGPT. Have you seen our `.generate()` function? ([guide](https://huggingface.co/docs/transformers/generation_strategies), [blog post](https://huggingface.co/blog/how-to-generate)); If you still want to do it manually, you need to configure the attention mask, which is why you see the exception. The attention mask is expected to have the shape `[batch_size, seq_len]`, where `seq_len` is the number of all input tokens so far (including the ones in `past_key_values`).
2. When you share a script in an open-source repository for the contributors to help and/or debug, ensure it is self-contained (including imports and, if needed, all the data). We have many requests for help, which we can only attend at a decent pace if you help us too 🤗 See the script below for an example of a complete stand-alone reproducer:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt").to(device)

input_sequence = "Hello, I'm a language model,"
inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)
past_key_values = None
count = 0
complete_token = []

with torch.no_grad():
    while count < 10:
        count += 1
        print("Iteration no.: " + str(count))
        if count > 1:
            inputs = input_token

        model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values)
        logits = model_out.logits[:, -1, :]
        past_key_values = model_out.past_key_values

        topk_values, topk_indices = torch.topk(logits, 5)
        log_probs = F.softmax(topk_values, dim=-1)
        inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)
        input_token = torch.gather(topk_indices, 1, inputs_in_topk)
        complete_token.append(input_token)
```<|||||>@gante Sorry, I have used a Jupyter notebook and so the initial loading of libraries etc. is something I usually do in the first paragraphs - and I have overlooked it. My bad ... sorry. 🙂
Regarding the attention mask, this is very new to me. For instance, when I make use of the OPT models, I just have to enter the `past_key_values` like before (essentially `model_out.past_key_values`) and everything is fine.
I have no clue right now where I can get the `attention_mask` from, or how I can create such an object from scratch.
If you could help me here, that would be awesome. 🙂<|||||>@junoriosity The attention mask is simply an integer tensor with the same shape as the inputs, with `1` on real tokens and `0` on padding. In particular for the attention mask update at generation time, see [this line](https://github.com/huggingface/transformers/blob/006da469dd5a465f4551f4245f780e3b1e92b76c/src/transformers/generation/utils.py#L766). However, it assumes that a starting attention mask exists, which you can obtain from `tokenizer(input_sequence).attention_mask`.
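A minimal sketch of that bookkeeping (greedy next-token choice is used here just for brevity — the point is only how the mask grows together with the cache):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")

encoded = tokenizer("Hello, I'm a language model,", return_tensors="pt")
input_ids = encoded.input_ids
attention_mask = encoded.attention_mask  # shape [batch_size, seq_len], all ones here
past_key_values = None

with torch.no_grad():
    for _ in range(10):
        out = model(input_ids=input_ids, attention_mask=attention_mask, past_key_values=past_key_values)
        past_key_values = out.past_key_values
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        # only the new token is fed in next step, but the mask must cover past + new tokens
        input_ids = next_token
        attention_mask = torch.cat(
            [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
        )
```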
BTW, I would highly recommend using `.generate()` unless you are experimenting with new decoding strategies. There are many corner cases handled in there :)<|||||>@gante Many thanks for your suggestion. 🙂 Here is what I did now :
```
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt").to(device)

input_sequence = "Hello, I'm a language model,"
inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)
attention_mask = torch.as_tensor(tokenizer(input_sequence).attention_mask).unsqueeze(0).to(device)
past_key_values = None
count = 0
complete_token = []

with torch.no_grad():
    while count < 10:
        count += 1
        print("Iteration no.: " + str(count))
        if count > 1:
            inputs = input_token

        print(inputs.to(device))
        print(attention_mask)

        model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)
        logits = model_out.logits[:, -1, :]
        past_key_values = model_out.past_key_values

        topk_values, topk_indices = torch.topk(logits, 5)
        log_probs = F.softmax(topk_values, dim=-1)
        inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)
        input_token = torch.gather(topk_indices, 1, inputs_in_topk)
        attention_mask = torch.as_tensor([1]).unsqueeze(0).to(device)
        complete_token.append(input_token)
```
and the output is
```
Iteration no.: 1
tensor([[ 2, 313, 3666, 399, 7, 174, 4617, 659, 14, 2545, 144, 7]],
device='cuda:0')
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')
Iteration no.: 2
tensor([[8]], device='cuda:0')
tensor([[1]], device='cuda:0')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_11426/272587798.py in <cell line: 3>()
11 print(attention_mask)
12
---> 13 model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)
14 logits = model_out.logits[:, -1, :]
15 past_key_values = model_out.past_key_values
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict)
677 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
678
--> 679 outputs = self.biogpt(
680 input_ids,
681 attention_mask=attention_mask,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
589 )
590 else:
--> 591 layer_outputs = decoder_layer(
592 hidden_states,
593 attention_mask=attention_mask,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, attention_mask, layer_head_mask, past_key_value, output_attentions, use_cache)
313 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
314 # add present self-attn cache to positions 1,2 of present_key_value tuple
--> 315 hidden_states, self_attn_weights, present_key_value = self.self_attn(
316 hidden_states=hidden_states,
317 past_key_value=self_attn_past_key_value,
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
211 if attention_mask is not None:
212 if attention_mask.size() != (bsz, 1, tgt_len, src_len):
--> 213 raise ValueError(
214 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
215 )
ValueError: Attention mask should be of size (1, 1, 0, 12), but is torch.Size([1, 1, 1, 1])
```
Do you know how I would have to set the `attention_mask` in the iteration step?
<|||||>@junoriosity
We reserve these issues for bugs, as we don't have the capacity to provide hands-on support. I'm afraid you'll have to dig deeper in the code I shared, the answer is there :)<|||||>@gante I understand you well. However, from my understanding the `attention_mask` in the second run should not reflect anything related to the 12 tokens and instead just have length 1, if I just enter one token.<|||||>That is not correct -- you are passing 1 new token, and ~11~ N past tokens (cached in `past_key_values`) :)<|||||>@gante But the initial input is already 12 tokens
<img width="1103" alt="Bildschirmfoto 2023-05-10 um 16 20 06" src="https://github.com/huggingface/transformers/assets/5286536/42e9537b-dc28-47e3-9cee-551f23dbdbe6">
Hence, we would be at 13 tokens then ... or am I confusing something?<|||||>Updated the message above. You are right, not 11 tokens, but the exact number of tokens is not the important part here.
One issue remains, though: the exception is not correct in the presence of `past_key_values` and is very misleading 👀 The attention mask must be of shape `[batch_size, new+cached tokens]`, so the answer consists of concatenating `[[1]]` at the end of each iteration. I'll open a PR to fix the exception message.
In the end, there was indeed a bug, so here's a working solution. And pardon me for my pushback -- it's the only way we can keep replying to blocking issues at a good pace :)
```py
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt").to(device)
input_sequence = "Hello, I'm a language model,"
inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)
attention_mask = torch.as_tensor(tokenizer(input_sequence).attention_mask).unsqueeze(0).to(device)
past_key_values = None
count = 0
complete_token = []
with torch.no_grad():
while count < 10:
count += 1
print("Iteration no.: " + str(count))
if count > 1:
inputs = input_token
print(inputs.to(device))
print(attention_mask)
print(past_key_values[0][0].shape if past_key_values else None)
model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)
logits = model_out.logits[:, -1, :]
past_key_values = model_out.past_key_values
topk_values, topk_indices = torch.topk(logits, 5)
log_probs = F.softmax(topk_values, dim=-1)
inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)
input_token = torch.gather(topk_indices, 1, inputs_in_topk)
attention_mask = torch.concat((attention_mask, torch.tensor([[1]]).to(attention_mask.device)), dim=1)
complete_token.append(input_token)
```<|||||>@gante Many thanks for all your effort! 🤗
What is quite interesting is that my initial approach works with
```
from transformers import BloomTokenizerFast, BloomForCausalLM
from transformers.models.opt import OPTForCausalLM
from transformers import AutoTokenizer
```
i.e., if I use
```
tokenizer = AutoTokenizer.from_pretrained('facebook/opt-13b')
model = OPTForCausalLM.from_pretrained("facebook/opt-13b").to(device)
```
resp.
```
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m").to(device)
```
Perhaps it might be sensible to standardize it - but, of course, there might be some reasons that are a hindrance, that I am not aware of. 🙂<|||||>Some models (like OPT) have "better" default behavior in the absence of attention masks. We will probably move in that direction in the future.
However, the easier defaults come at a price -- at best, they require creating it from scratch at every forward pass (and at worst, you may get incorrect results). I'd recommend creating and manually manipulating the attention mask whenever possible, to avoid nasty surprises :) <|||||>@gante In any case, many thanks for all your kind support. 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,196 | closed | Update convert_dialogpt_original_pytorch_checkpoint_to_pytorch.py | The improvements include the addition of a main function and better variable naming for readability.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-07-2023 21:19:44 | 05-07-2023 21:19:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,195 | closed | Update video_classification.py | The improvements include better handling of the libraries and more straightforward code.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-07-2023 21:14:37 | 05-07-2023 21:14:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23195). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,194 | closed | Fix hf_argparser.parse_json_file to open file with utf-8 encoding, close file when finished | # What does this PR do?
Fixes #23193
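For context, a minimal sketch of the kind of change the title describes, opening the file with an explicit UTF-8 encoding inside a context manager so it is always closed (standalone sketch; the exact code in `hf_argparser.py` may differ):
```python
import json

def read_args_json(json_file: str) -> dict:
    # Hypothetical standalone helper, not the actual HfArgumentParser method.
    # The explicit encoding avoids the cp1252 fallback seen on Windows in #23193,
    # and the context manager guarantees the file handle is closed.
    with open(json_file, encoding="utf-8") as open_json_file:
        return json.loads(open_json_file.read())
```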
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-07-2023 17:54:38 | 05-07-2023 17:54:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,193 | closed | examples/run_speech_recognition_ctc: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 725: character maps to <undefined> | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Create a json file corresponding to the [first example in speech recognition for pytorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#single-gpu-ctc). See attached.
Run `python run_speech_recognition_ctc.py train.json`
Get error:
```
Traceback (most recent call last):
File "F:\eo-reco\run_speech_recognition_ctc.py", line 775, in <module>
main()
File "F:\eo-reco\run_speech_recognition_ctc.py", line 378, in main
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\hf_argparser.py", line 391, in parse_json_file
data = json.loads(open_json_file.read())
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 725: character maps to <undefined>
```
[train.json.zip](https://github.com/huggingface/transformers/files/11415631/train.json.zip)
### Expected behavior
No error. | 05-07-2023 17:53:14 | 05-07-2023 17:53:14 | |
transformers | 23,192 | closed | update flax_utils.py | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-07-2023 14:01:13 | 05-07-2023 14:01:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23192). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,191 | closed | Update LLaMA docs with arxiv link | # What does this PR do?
Fixes #23186
Adds arxiv link for "LLaMA: Open and Efficient Foundation Language Models" (https://arxiv.org/abs/2302.13971) to LLaMA model docs.
## Who can review?
@sgugger, @stevhliu and @MKhalusova | 05-07-2023 12:28:49 | 05-07-2023 12:28:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,190 | open | [WIP] Add BROS | # What does this PR do?
Add [BROS(BERT Relying On Spatiality)](https://arxiv.org/abs/2108.04539) to 🤗 Transformers
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a [Github issue](https://github.com/huggingface/transformers/issues/23181) or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge | 05-07-2023 10:27:50 | 05-07-2023 10:27:50 | @jinhopark8345 Awesome work - looking forward to having this model added! Feel free to ping us when the PR is ready for review or you have any implementation questions in the meantime. <|||||>@amyeroberts
I am confused about what needs to be done.
According to the [How to add a new model](https://huggingface.co/docs/transformers/add_new_model#514-port-brandnewbert-to-transformers) guideline, a big part of it is porting pretrained models (from the original repo) into Huggingface transformers and making sure they are correctly ported by checking the outputs of each layer's forward step.
However, it seems like the authors of the Bros model used `transformers-cli` to create the boilerplate code, and I don't think there is much to change from the [original code](https://github.com/clovaai/bros/blob/master/bros/modeling_bros.py).
Do I need to write a conversion script? Or can I skip this step and move to the step where I add model test codes?
Thanks for the help in advance!<|||||>@jinhopark8345 Interesting - that will definitely make things easier! In this case, if the files are already on the hub and in the correct format, there's no need for the conversion script. It's possible there might be additional arguments required in the config files or additional files needed in the hub repo, in which case, I'd suggest writing a script to add these. You probably won't be able to write directly to the org's repo, but can open a PR with any necessary changes. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23190). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still working on it! Writing tutorial/demo notebooks of how to use BROS <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@amyeroberts Is it possible to reopen this PR because I have been working on [forked repo](https://github.com/jinhopark8345/bros/tree/feature/update-data-loading) of original bros code |
transformers | 23,189 | open | Regression Models | ### Feature request
I am working on a regression problem and I am looking forward to using Transformers for it, but before jumping into the implementation I am curious whether you can use transformers for a regression problem. I have around 90 features (floating point) and one target. I couldn't find any paper on transformers for regression problems, so please let me know if any of you have used transformers for this purpose.
I am working on a problem with tabular data that has more than 90 features and one target, and all the features are integers (continuous). I want to use pre-trained BERT or GPT-2, but the tokenizer expects the input in text format. I can convert the integer data into text format like this:
original_data = [1,2,3,4,5,…,94]
transformed_data = ["1,2,3,4,5,…,94"]
Now if I pass the transformed_data to the tokenizer it will surely work, but I want to know whether someone has tried to use transformers for this purpose and, if so, what the outcome was and what the results looked like.
How can I use the transformers library for this purpose? All the tokenizers are trained on text data, so I am a bit lost. Any help will be appreciated.
### Motivation
The purpose of regression models is to predict a continuous output variable based on one or more input variables. Regression models are widely used in many fields such as finance, economics, engineering, and social sciences, where the goal is to understand the relationship between the input variables and the output variable and to make predictions based on that understanding.
In regression analysis, the focus is on building a model that captures the relationship between the input variables and the output variable. This model is then used to predict the values of the output variable for new input data. The model can also be used to identify the important input variables that have a significant impact on the output variable.
Regression models come in various types, such as linear regression, logistic regression, polynomial regression, and others. The choice of the regression model depends on the type of data, the type of relationship between the input and output variables, and the purpose of the analysis.
### Your contribution
I can implement some of the code given in this [Post:](https://lajavaness.medium.com/regression-with-text-input-using-bert-and-transformers-71c155034b13) | 05-07-2023 07:04:52 | 05-07-2023 07:04:52 | Hi @vrunm,
I think you can use the forums for this sort of discussion.
Some helpful links would be https://discuss.huggingface.co/t/tabular-classification-regression-pipeline/22030/2
and https://discuss.huggingface.co/t/how-to-set-up-trainer-for-a-regression/12994 (related to your Post).
You can check the model documentation for [informer](https://huggingface.co/docs/transformers/model_doc/informer) and [time series transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer).
Also, this blog, [Probabilistic Time Series Forecasting with 🤗 Transformers](https://huggingface.co/blog/time-series-transformers) might be helpful as well.
There are multiple papers and repo as well that use transformers for regression which you can easily find by searching on google.<|||||>@hsuyab I did go through these links and search online but I want something very specific and customizable. I want something that can be used from Huggingface as a core function. <|||||>Well you can check these, but I don't understand what you mean exactly by core functionality, https://pytorch-tabular.readthedocs.io/en/latest/
and https://pytorch-forecasting.readthedocs.io/<|||||>@hsuyab I want to build a multivariate regression model and want to use a Hugging Face class specifically designed for that, not a pipeline, which does not allow you to train and fine-tune your model.
<|||||>@vrunm okay, you can try loading in the modules and modifying the class functions by yourself however creating this functionality separately wouldn't make sense imo. It's still better to use some other libraries that are focused on this task or best use something like xgbosst/lightgbm.<|||||>@hsuyab Sure I will try that but do you have the code to modify the class functions or should I create a PR for this?<|||||>It's best you create a PR and use that.<|||||>@hsuyab can you share with me the outline of the classes to change to implement this functionality. I think asking the contributors will be a better choice.<|||||>@pvl @vanpelt @arfon Is it possible to implement regression from a specific class of huggingface transformers? What should the outline of the classes to change to implement this as a PR?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@hsuyab were you able to open a PR to add this functionality?<|||||>no, imo performing regression is not something that's needed as a feature in transformers as of now as there are other libraries that are focused on implementing this in a better way.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@hsuyab Do you think it is necessary to implement this functionality for now? Would really like your comments on the classes to implemented for this?<|||||>@vrunm @hsuyab Thanks for discussing and raising this issue.
Questions about how to solve problems using transformers are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
One thing to note is that regression is already possible to do with models like BERT if `num_labels` is set to 1 in the config e.g. see this line in the code: https://github.com/huggingface/transformers/blob/33aafc26ee68df65c7d9457259fc3d59f79eef4f/src/transformers/models/bert/modeling_bert.py#L1583C26-L1583C26 |
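For anyone landing here later, a minimal illustration of that approach (checkpoint and input text are placeholders):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 turns the sequence-classification head into a regression head
# (the linked code path uses an MSE loss when labels are provided).
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

inputs = tokenizer("1, 2, 3, 4, 5", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # one continuous value per example
```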
transformers | 23,188 | closed | Running inference from ASR documentation, pipeline errors with "Can't load tokenizer" | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@Narsil
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Put together the script from [Automatic Speech Recognition](https://huggingface.co/docs/transformers/tasks/asr) into a file `main.py`, up to but not including Inference.
Run under Windows. Training succeeds.
Put together the Inference section into a file `infer.py`.
Run under Windows.
Output:
```
Downloading pytorch_model.bin: 100%|██████████████████████████████████████████████████████████████████████████████████| 378M/378M [00:35<00:00, 10.6MB/s]
Traceback (most recent call last):
File "f:\eo-reco\infer.py", line 10, in <module>
transcriber = pipeline("automatic-speech-recognition", model="xekri/my_awesome_asr_model")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "f:\eo-reco\.env\Lib\site-packages\transformers\pipelines\__init__.py", line 876, in pipeline
tokenizer = AutoTokenizer.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "f:\eo-reco\.env\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 723, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "f:\eo-reco\.env\Lib\site-packages\transformers\tokenization_utils_base.py", line 1795, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'xekri/my_awesome_asr_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'xekri/my_awesome_asr_model' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.
```
[main.py.zip](https://github.com/huggingface/transformers/files/11413782/main.py.zip)
[infer.py.zip](https://github.com/huggingface/transformers/files/11413784/infer.py.zip)
### Expected behavior
No error.
| 05-07-2023 03:02:08 | 05-07-2023 03:02:08 | Your code works fine for me on macOS (I tried with the main branch of Transformers, which is version 4.29.0.dev0). It also looks like the `tokenizer_config.json` is present in your model repo, so all the required files are present.
Are you sure you don't have a `F:\eo-reco\xekri\my_awesome_asr_model` directory that would be interfering with this?
<|||||>The problem happens even if I delete the local directory.
So the problem appears to be that there is a missing step in the docs:
`processor.save_pretrained(save_directory="my_awesome_asr_mind_model")`
Without this, there is no `tokenizer_config.json`.
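In context, a sketch of where that call fits in the tutorial script (checkpoint and directory names follow the tutorial; adjust as needed):
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
# Saving the processor writes the tokenizer files (tokenizer_config.json, vocab.json)
# and the feature-extractor config into the output directory, so the pushed repo
# contains everything pipeline() needs to load a tokenizer.
processor.save_pretrained("my_awesome_asr_mind_model")
```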
The reason `tokenizer_config.json` was present in my repo is that I added the line and then ran the program again.
If you look at `main.py.zip` above, you can see where I had the line commented out. With that line commented out, the error happens.<|||||>It does look like those instructions are missing from the docs, I'll ping someone from the docs team to have a look. Thanks for reporting!
<|||||>Possibly related: #23222<|||||>Thanks for reporting this! If you pass `processor` to the Trainer, it will save both `tokenizer` and `feature_extractor`, and push them both to hub. I'll update the docs. https://github.com/huggingface/transformers/pull/23239<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,187 | closed | push_to_hub fails with "cannot lock ref" and "failed to push some refs" | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Put together the script from [Automatic Speech Recognition](https://huggingface.co/docs/transformers/tasks/asr) into a file `main.py`, up to but not including Inference.
Run under Windows.
Training succeeds, but then:
```
Pushing to hub...
Several commits (2) will be pushed upstream.
The progress bars may be unreliable.
Upload file pytorch_model.bin: 366MB [02:42, 3.62MB/s] remote: error: cannot lock ref 'refs/heads/main': is at aa0f87dd56de1a36e17cffb07b4a50a0d0f530f4 but expected 52d558ffa06199a1340c979ed1fbbc0e98c862c8
To https://huggingface.co/xekri/my_awesome_asr_model
! [remote rejected] main -> main (failed to update ref)
error: failed to push some refs to 'https://huggingface.co/xekri/my_awesome_asr_model'
Upload file pytorch_model.bin: 100%|██████████████████████████████████████████████████████████████████████████████████| 360M/360M [02:43<00:00, 2.32MB/s]
Traceback (most recent call last):
File "f:\eo-reco\.env\Lib\site-packages\huggingface_hub\repository.py", line 1099, in git_push
raise subprocess.CalledProcessError(return_code, process.args, output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'push', '--set-upstream', 'origin', 'main']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "f:\eo-reco\main.py", line 147, in <module>
trainer.push_to_hub()
File "f:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 3661, in push_to_hub
git_head_commit_url = self.repo.push_to_hub(
^^^^^^^^^^^^^^^^^^^^^^
File "f:\eo-reco\.env\Lib\site-packages\huggingface_hub\repository.py", line 1307, in push_to_hub
return self.git_push(
^^^^^^^^^^^^^^
File "f:\eo-reco\.env\Lib\site-packages\huggingface_hub\repository.py", line 1102, in git_push
raise EnvironmentError(exc.stderr)
OSError: remote: error: cannot lock ref 'refs/heads/main': is at aa0f87dd56de1a36e17cffb07b4a50a0d0f530f4 but expected 52d558ffa06199a1340c979ed1fbbc0e98c862c8
To https://huggingface.co/xekri/my_awesome_asr_model
! [remote rejected] main -> main (failed to update ref)
error: failed to push some refs to 'https://huggingface.co/xekri/my_awesome_asr_model'
The push command with PID 7788 failed.
To https://huggingface.co/xekri/my_awesome_asr_model
52d558f..381666e main -> main
```
Checking the repo on the hub shows that all files were seemingly committed.
The four commits to the hub were:
* `aa0f87dd56de1a36e17cffb07b4a50a0d0f530f4 `: "End of training"
* `381666e4197a0e5ce2b4d8a9b0c3f426cd2b2348`: "Training in progress, step 200"
* `52d558ffa06199a1340c979ed1fbbc0e98c862c8`: "Training in progress, step 100"
* `09c2ba5a5066b5b24e8fd2ddf333eda61f6c85da`: "initial commit"
In the attached file, the changes from the documented script are:
1. Loading dataset `mozilla-foundation/common_voice_13_0` (since dropbox is rejecting requests to download `PolyAI/minds14`, see [discussion](https://huggingface.co/datasets/PolyAI/minds14/discussions/6))
2. Modifications for columns present in that dataset.
[main.py.zip](https://github.com/huggingface/transformers/files/11413763/main.py.zip)
### Expected behavior
No scary git errors
 | 05-07-2023 02:40:38 | 05-07-2023 02:40:38 | cc @Wauplin <|||||>Hi @RobertBaruch, I'm sorry you're facing this issue. I'm not sure what happened here. It can be either a wrong git setup or a temporary server issue on git operations, or maybe a concurrent push to the Hub which made one push fail while the others uploaded correctly. Just to be sure, is the final state of the repo as you want it, or are you missing something? If something is still missing, I would advise you to save the data locally (with `.save_pretrained`) and then upload the folder using [`huggingface_hub.upload_folder`](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder).
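For reference, a minimal sketch of that manual fallback (paths and repo id here are placeholders based on this issue):
```python
from huggingface_hub import upload_folder

# Assumes the model/processor were already written locally, e.g. with
# model.save_pretrained("my_awesome_asr_model") and processor.save_pretrained("my_awesome_asr_model").
upload_folder(
    folder_path="my_awesome_asr_model",
    repo_id="xekri/my_awesome_asr_model",  # placeholder repo id taken from the logs above
    commit_message="Upload model files",
)
```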
@sgugger I hope this is the type of unclear error that we could get rid of when switching to an HTTP-based approach (once https://github.com/huggingface/huggingface_hub/pull/1458 is merged) :)<|||||>This happens every time -- even with the [example for speech recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#single-gpu-ctc). I'm pretty sure this has something to do with Windows.
The final state of the repo appears to be correct. However, the problem is that an error is raised, which means that anything in the program after pushing to the repo will not be executed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>hi @Wauplin, I faced the same issue several times. Can you clarify how to replicate what the --push_to_hub does from the command line? Is there a command to create the model card and upload only the files that --push_to_hub (without all checkpoints)? <|||||>how do I know what caused the issue of not uploading to the hub? Can it be that I don't have space left in my account? <|||||>> Can you clarify how to replicate what the --push_to_hub does from the command line?
Don't know about a command line equivalent but
@sgugger mentioned in https://github.com/huggingface/huggingface_hub/issues/1560#issuecomment-1634053167 that you can use `trainer.create_model_card()` to create a model card from your trainer.
> Is there a command (...) upload only the files that --push_to_hub (without all checkpoints)?
Once you have files saved locally, uploading them to the Hub can be quickly done using `huggingface_hub`. Here is a guide on [how to upload files to the Hub.](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-files-to-the-hub). It is not a command line tool but rather a few lines of scripts to write. But that's only once you have files saved locally and know which ones you want to upload.
> how do I know what caused the issue of not uploading to the hub? Can it be that I don't have space left in my account?
If the upload fails, it is probably due to some network issues (see https://github.com/huggingface/huggingface_hub/issues/1560#issuecomment-1635878401). In any case, it is not a problem of not having space left on your Hugging Face account, since it's unlimited.
|
transformers | 23,186 | closed | [Documentation] Possible mistake in model_doc LLaMA | ### System Info
Maybe a small mistake in the documentation. Here: https://github.com/huggingface/transformers/blob/ef42c2c487260c2a0111fa9d17f2507d84ddedea/docs/source/en/model_doc/llama.mdx?plain=1#L17
The title "LLaMA: Open and Efficient Foundation Language Models" is repeated. Does it mean [this arxiv link](https://arxiv.org/pdf/2302.13971.pdf)?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Open docs https://huggingface.co/docs/transformers/main/en/model_doc/llama#overview
2. See the first line
### Expected behavior
It should be a link to https://arxiv.org/pdf/2302.13971.pdf | 05-07-2023 02:39:38 | 05-07-2023 02:39:38 | @habaneraa You're right! This seems to be a mistake in the docs. I will open a PR to fix this. |
transformers | 23,185 | closed | Code in the documentation on fine-tuning mBART-50 for machine translation doesn't seem to perform a backward pass | (Assignee: @patrickvonplaten)
I am trying to fine-tune mBART-50 ([paper](https://arxiv.org/pdf/2008.00401), [pre-trained model on Hugging Face](https://huggingface.co/facebook/mbart-large-50)) for machine translation with the transformers Python library. To test the fine-tuning, I simply try to teach mBART-50 a new word that I made up (the made-up French "billozarion", whose made-up English translation is "plorization").
I use the following code. Over 95% of the code is from the [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50):
```
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
print('Model loading started')
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="fr_XX", tgt_lang="en_XX")
print('Model loading done')
src_text = " billozarion "
tgt_text = " plorization "
model_inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
labels = tokenizer(tgt_text, return_tensors="pt").input_ids
print('Fine-tuning started')
for i in range(1000):
#pass
model(**model_inputs, labels=labels) # forward pass
print('Fine-tuning ended')
# Testing whether the model learned the new word. Translate French to English
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer.src_lang = "fr_XX"
article_fr = src_text
encoded_fr = tokenizer(article_fr, return_tensors="pt")
generated_tokens = model.generate(**encoded_fr, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
translation = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(translation)
```
However, the new word wasn't learned. The output is "billozarion" instead of "plorization".
I'm strictly following the Hugging Face documentation, unless I missed something. The `# forward pass` comment does concern me, as one would need a backward pass and an optimizer step to actually update the model. Maybe this means that the documentation is incorrect; however, I can't test that hypothesis as I don't know how to add the backward pass. Anyway, it seems there is an issue with the documentation on fine-tuning mBART-50 for machine translation: either the comment `# forward pass` is incorrect, or the code itself is missing the backward pass.
---
Environment that I used to run the code: Ubuntu 20.04.5 LTS with an NVIDIA A100 40GB GPU (I also tested with an NVIDIA T4 Tensor Core GPU) and CUDA 12.0 with the following conda environment:
```
conda create --name mbart-python39 python=3.9
conda activate mbart-python39
pip install transformers==4.28.1
pip install chardet==5.1.0
pip install sentencepiece==0.1.99
pip install protobuf==3.20
``` | 05-07-2023 01:47:21 | 05-07-2023 01:47:21 | The documentation shows how to do a forward pass of the model, it's not an example of full training. Those examples can be found in the [examples folder](https://github.com/huggingface/transformers/tree/main/examples/pytorch). You do need to define an optimizer, call `loss.backward()` etc. for a full training loop, like for any other PyTorch model.<|||||>Thanks @sgugger ! The documentation incorrectly claims it performs fine-tuning. That should be fixed, either by removing the fine-tuning claim or preferably by actually providing a fine-tuning code example. <|||||>I'm sorry but I don't see the words fine-tuning in the link you provided. Could you point out to me where the claim is? I see a snippet of code showing how to use the model for training, which you should plugin your actual training loop with your own data.<|||||>Thanks @sgugger , good idea, I should have provided the quote. Here is the quote from the [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50):

The presence of "Training of MBart-50" and "Supervised training" heavily implies that the code trains the model.
<|||||>One could add the following to fine-tune mBART-50:
```
from transformers.optimization import AdamW
# Set up the optimizer and training settings
optimizer = AdamW(model.parameters(), lr=1e-4)
model.train()
print('Fine-tuning started')
for i in range(100):
optimizer.zero_grad()
output = model(**model_inputs, labels=labels) # forward pass
loss = output.loss
loss.backward()
optimizer.step()
print('Fine-tuning ended')
```
Full code:
```
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
from transformers.optimization import AdamW
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
print('Model loading started')
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="fr_XX", tgt_lang="en_XX")
print('Model loading done')
src_text = " billozarion "
tgt_text = " plorizatizzzon "
model_inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
labels = tokenizer(tgt_text, return_tensors="pt").input_ids
# Set up the optimizer and training settings
optimizer = AdamW(model.parameters(), lr=1e-4)
model.train()
print('Fine-tuning started')
for i in range(100):
optimizer.zero_grad()
output = model(**model_inputs, labels=labels) # forward pass
loss = output.loss
loss.backward()
optimizer.step()
print('Fine-tuning ended')
# translate French to English
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer.src_lang = "fr_XX"
article_fr = src_text
encoded_fr = tokenizer(article_fr, return_tensors="pt")
generated_tokens = model.generate(**encoded_fr, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
translation =tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(translation)
```
It outputs the correct made up translation "plorizatizzzon". I'd suggest that the code in the documentation be updated accordingly to truly perform fine-tuning.
https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation contains two more advanced scripts to fine-tune mBART (thanks [sgugger](https://github.com/sgugger) for [pointing](https://github.com/huggingface/transformers/issues/23185#issuecomment-1537564079) me to it).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,184 | closed | Update feature_extraction_deit.py | Updated copyright year to 2023
Rearranged import statements for better readability. Replaced the warnings import with a more specific import. Minor formatting improvements.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-06-2023 21:52:37 | 05-06-2023 21:52:37 | Updated copyright year to 2023
Rearranged import statements for better readability
Replaced the warnings import with a more specific import
Minor formatting improvements<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,183 | closed | Allow unneeded labels in forward | ### Feature request
It would be nice to leave in the `labels` part of my batch while passing it through `AutoModel`, and not have it throw the error `AutoModel doesn't expect keyword argument labels`
### Motivation
Sometimes I want to leave metadata in my batch; it would be nice for the model to use what it needs and leave the rest for downstream analysis.
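A possible user-side workaround, shown below as a sketch (this is not an existing transformers feature), is to filter the batch down to the arguments the model's forward signature accepts before calling it:
```python
import inspect
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = dict(tokenizer("hello world", return_tensors="pt"))
batch["labels"] = torch.tensor([1])   # extra keys the bare model does not accept
batch["example_id"] = "abc-123"

accepted = set(inspect.signature(model.forward).parameters)
model_inputs = {k: v for k, v in batch.items() if k in accepted}
outputs = model(**model_inputs)       # the extra keys stay in `batch` for downstream analysis
```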
### Your contribution
Happy to discuss my needs and use case, and a PR if I can :) | 05-06-2023 20:29:56 | 05-06-2023 20:29:56 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,182 | closed | Generate: starcoder 🤜 🤛 assisted generation | # What does this PR do?
Starcoder (GPTBigCode) has a unique cache format, and assisted generation is heavy on cache-related ops. This PR adds the GPTBigCode special case.
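For context, a rough sketch of why the cache handling needs a special case (illustrative only, not the exact code in this PR): most models cache a `(key, value)` tuple per layer, while GPTBigCode with multi-query attention stores a single fused key/value tensor per layer, so cropping the cache back to the last validated token has to index it differently.
```python
def crop_cache(past_key_values, max_length, gptbigcode_multi_query=False):
    # Assisted generation can reject candidate tokens, so the cache must be
    # cropped back to the last accepted position before the next iteration.
    if gptbigcode_multi_query:
        # one fused tensor per layer, roughly (batch, seq_len, 2 * head_dim)
        return tuple(layer[:, :max_length, :] for layer in past_key_values)
    # standard layout: (key, value) per layer, each (batch, heads, seq_len, head_dim)
    return tuple(
        (key[:, :, :max_length, :], value[:, :, :max_length, :])
        for key, value in past_key_values
    )
```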
All slow tests for assisted generation are passing after these changes. | 05-06-2023 16:06:09 | 05-06-2023 16:06:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Arg. Would love to be able to change those parts directly in the model files in the future, instead of hacking special keys like this :-/
@sgugger I'm going to add a common test for the cache format, to ensure we don't do this again for future models :) |
transformers | 23,181 | open | Add BROS | ### Model description
[BROS (BERT Relying On Spatiality)](https://arxiv.org/abs/2108.04539) is a pre-trained multimodal transformer for Document Understanding that uses OCR results of document images (text and bounding box pairs).
I would like to add this model to Hugging Face as my first contribution!
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/clovaai/bros | 05-06-2023 15:11:38 | 05-06-2023 15:11:38 | Would be great if you could add it, it should be very straightforward :)
I have a few demo notebooks on fine-tuning BROS, let me share them here:
- https://colab.research.google.com/drive/1PUpiKcSdXjBYM6a300ayC9TaYwzxdMus?usp=sharing
- https://colab.research.google.com/drive/1pTjxx4_46Sk1Zs0W_yzceP_bstmu4vfz?usp=sharing
The first one fine-tunes BROS on the FUNSD dataset; the second one is the same, but adds support for creating more training examples using the `return_overflowing_tokens` feature, roughly as sketched below.
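Roughly speaking, that second notebook leans on the standard overflow mechanism of fast tokenizers; a minimal sketch with a generic tokenizer (not the exact notebook code) looks like this:
```python
from transformers import AutoTokenizer

# any fast tokenizer works for the sketch; the BROS notebook uses its own tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

encoding = tokenizer(
    ["a very long OCR'ed document ..."],
    max_length=512,
    stride=128,                      # overlap between consecutive chunks
    truncation=True,
    padding="max_length",
    return_overflowing_tokens=True,  # emit one training example per chunk
    return_tensors="pt",
)

# overflow_to_sample_mapping says which original document each chunk came from,
# which is what you need to align labels and bounding boxes per chunk.
print(encoding["input_ids"].shape)
print(encoding["overflow_to_sample_mapping"])
```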
Let me know if you need any help to start contributing; feel free to open a draft PR |
transformers | 23,180 | closed | Improvements Over `enable_progress_bar` in `transformers.utils.logging` | ### Feature request
Would it be possible to make the `tqdm` bar used in the [logging module](https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/utils/logging.py#L351) more configurable, for example by allowing a custom `tqdm` class or custom `tqdm` keyword arguments?
### Motivation
At present, the only way to track model download progress is the `tf_logging.enable_progress_bar` method, which does not support a custom `tqdm`. Moreover, the built-in `tqdm` does not flush its output, which is a problem in my use case, where my script runs as a child process of a Node server: the progress output does not reach the Node process until the download has completed, which makes the progress bar useless. Hence this request for more control over the `tqdm` bar used in the logging module.
### Your contribution
The fix would be to add a `tqdm` or `tqdm_kwargs` argument to the following method.
```python
def enable_progress_bar():
"""Enable tqdm progress bar."""
global _tqdm_active
_tqdm_active = True
hf_hub_utils.enable_progress_bars()
```
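For illustration only, the change I have in mind could look roughly like this (the `tqdm_cls`/`tqdm_kwargs` parameters and the extra module-level variables are hypothetical, not an existing API):
```python
def enable_progress_bar(tqdm_cls=None, **tqdm_kwargs):
    """Enable tqdm progress bar, optionally with a custom tqdm class and options."""
    global _tqdm_active, _custom_tqdm_cls, _custom_tqdm_kwargs
    _tqdm_active = True
    # hypothetical: let callers swap in e.g. a tqdm subclass that flushes its stream
    _custom_tqdm_cls = tqdm_cls
    # hypothetical: e.g. file=sys.stdout, mininterval=0 so output reaches a parent process
    _custom_tqdm_kwargs = tqdm_kwargs
    hf_hub_utils.enable_progress_bars()
```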
Note: I have tried to override the tqdm class with `tf_logging.tqdm = new_tqdm`, but this also seems to affect messages that are not download progress bars, which is odd... | 05-06-2023 12:37:42 | 05-06-2023 12:37:42 | We do not plan on offering more than the ability to turn progress bars on and off.<|||||>What is the rationale behind that?
Not offering the ability to control at least the stream that the tqdm writes to hinders integration with other technologies, e.g. a Node server calling a Python script as a subprocess.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,179 | closed | How does the decoder handle pad encodings without an encoder attention mask? | I am trying to generate a sequence using the T5 decoder (not using the `generate` function) by passing only the encodings and decoder input ids. However, my encoding contains pad tokens as well. How is the decoder able to handle those pad tokens without being passed an encoder attention mask?
Here is my code:

```python
# `seq`, `model`, `tokenizer`, `device` and `max_len` are defined earlier in my script
input_ids = tokenizer(seq, max_length=1024, padding='max_length', truncation=True, return_tensors="pt")
input_ids = input_ids.to(device)

# run the encoder once on the padded input
encoder_output_vectors = model.base_model.encoder(input_ids['input_ids'], return_dict=True)
encodings = encoder_output_vectors.last_hidden_state

# "recon:" is my prompt
decoder_input_ids = tokenizer("recon:", add_special_tokens=False, return_tensors="pt").input_ids
decoder_input_ids = decoder_input_ids.to(device)
decoder_hidden_state = None

# greedy decoding, one token at a time
for i in range(max_len):
    with torch.no_grad():
        # note: no encoder attention mask is passed here, so cross-attention
        # can also attend to the pad positions in `encodings`
        outputs = model.decoder(input_ids=decoder_input_ids, encoder_hidden_states=encodings)
        logits = model.lm_head(outputs[0])
    next_decoder_input_ids = torch.argmax(logits[:, -1:], axis=-1)
    decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1)
    if next_decoder_input_ids == tokenizer.eos_token_id:
        break

rec = tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
```
 | 05-06-2023 10:57:43 | 05-06-2023 10:57:43 | cc @gante @younesbelkada <|||||>Hey @drkhan107 👋
The only alternative I see is to fine-tune the model using padded data as input and unpadded data as output, so the model learns to ignore the padding (and that may not work).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |