repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 24,403 | closed | Update activations.py with nn.GELU | use nn.GELU
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-21-2023 14:04:28 | 06-21-2023 14:04:28 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,402 | closed | Clean up dist import | # What does this PR do?
Cleans up the `torch.distributed.X` imports in `training_args` to use the already imported `dist` module, which helps clean up our logic and cases quite a bit.
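A rough illustration of the style of cleanup (not the actual diff from this PR; the helper function name below is made up):
```python
import torch.distributed as dist

def wait_for_everyone():
    # before: fully qualified calls such as torch.distributed.barrier() scattered through training_args
    # after: reuse the module alias that the file already imports
    if dist.is_available() and dist.is_initialized():
        dist.barrier()

wait_for_everyone()  # no-op outside a distributed run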
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts (cc @sgugger )
| 06-21-2023 13:57:00 | 06-21-2023 13:57:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,401 | closed | Remove redundant code from TrainingArgs | # What does this PR do?
Removes some more redundant code that Accelerate can handle directly. Namely:
- World size
- Process index
- `main_process_first`
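A rough sketch of the Accelerate state that replaces the removed helpers above (attribute names are from Accelerate's public `Accelerator` API; the usage shown is illustrative, not the Trainer's actual code):
```python
from accelerate import Accelerator

accelerator = Accelerator()
world_size = accelerator.num_processes     # replaces the hand-rolled world-size logic
process_index = accelerator.process_index  # replaces the hand-rolled process-index logic

with accelerator.main_process_first():     # replaces the custom main_process_first context manager
    pass  # e.g. dataset preprocessing that should run on the main process first
```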
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @pacman100 | 06-21-2023 13:49:19 | 06-21-2023 13:49:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,400 | closed | Check auto mappings could be imported via `from transformers` | # What does this PR do?
As shown in #24364, we easily forget to add model mappings like `TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING` to some `__init__` files.
Let's add a check so we can detect such issues as early as possible.
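A hypothetical sketch of what such a check could look like (this is not the script added in this PR, just an illustration of the idea):
```python
import transformers
from transformers.models.auto import modeling_auto, modeling_tf_auto

missing = []
for module in (modeling_auto, modeling_tf_auto):
    for name in dir(module):
        if name.endswith("_MAPPING") and not name.startswith("_") and not hasattr(transformers, name):
            missing.append(name)

if missing:
    raise ValueError(f"These auto mappings cannot be imported via `from transformers`: {missing}")
```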
Along this new check, also add some missing mappings to `__init__` files. | 06-21-2023 12:00:49 | 06-21-2023 12:00:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,399 | closed | byebye Hub connection timeout - Recast | # What does this PR do?
It's a bit hard to break up with the timeout failure, but @Wauplin has been working on raising it to 60 seconds instead:
https://github.com/huggingface/huggingface_hub/pull/1523
We need to change the commit hash to that one in our CircleCI job though. | 06-21-2023 09:10:25 | 06-21-2023 09:10:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,398 | closed | feat: add support for protobuf 4 |
# What does this PR do?
add support for protobuf 4
| 06-21-2023 08:51:14 | 06-21-2023 08:51:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>As you can see from all the red crosses above, this sadly requires more work than just unpinning protobuf.<|||||>@sgugger yes, thanks for taking the time to point that out. Currently trying to identify the scope of needed changes and viability. If I can't work on providing right/complete support I'll close the PR. |
transformers | 24,397 | closed | Add `ffmpeg` for `doc_test_job` on CircleCI | # What does this PR do?
Need this at least for `docs/source/en/task_summary.md`.
Otherwise, [job](https://app.circleci.com/pipelines/github/huggingface/transformers/66845/workflows/ae6bcd25-5071-4f48-a9ba-d446ae6e060f/jobs/833148) fails | 06-21-2023 08:45:05 | 06-21-2023 08:45:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,396 | closed | [`pipeline`] Fix str device issue | # What does this PR do?
Addresses: https://github.com/huggingface/transformers/pull/24140#issuecomment-1584617146
Currently passing `device="cuda"` is not supported when creating a pipeline.
This is because `torch.cuda.set_device(self.device)` expects the device to have an explicit index. The fix is to create an indexed device when initializing a pipeline with a str device
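A minimal illustration of the normalization described here (a sketch, not the exact patch; the helper name is made up):
```python
import torch

def normalize_pipeline_device(device):
    d = torch.device(device)
    if d.type == "cuda" and d.index is None:
        # give the bare "cuda" string an explicit index so torch.cuda.set_device accepts it
        d = torch.device(f"cuda:{torch.cuda.current_device()}")
    return d
```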
Handy reproducible snippet:
```python
from transformers import pipeline
# this works
pipe = pipeline("text-generation", device=0)
pipe("Hello")
# this works
pipe = pipeline("text-generation", device="cuda:0")
pipe("Hello")
# this fails
pipe = pipeline("text-generation", device="cuda")
pipe("Hello")
```
cc @amyeroberts @Narsil | 06-21-2023 07:51:03 | 06-21-2023 07:51:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also
```
python -c 'from transformers import pipeline; pipe = pipeline(model="gpt2", device="cuda")'
```
Works on `main`... so I'm not sure what the issue is.
<|||||>@Narsil what you shared works on main but it should throw an error if you try to run an example with it (I attached a reproducible snippet above)
Alternatively, this fails on main and this PR fixes it
```bash
python -c 'from transformers import pipeline; pipe = pipeline(model="gpt2", device="cuda"); pipe("hello")'
```<|||||>Can we remove the `set_device` instead then ? Seems better:
```patch
diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
index 510c07cf5..b5975d081 100644
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -901,10 +901,8 @@ class Pipeline(_ScikitCompat):
with tf.device("/CPU:0" if self.device == -1 else f"/device:GPU:{self.device}"):
yield
else:
- if self.device.type == "cuda":
- torch.cuda.set_device(self.device)
-
- yield
+ with torch.cuda.device(self.device):
+ yield
The initial thing fails indeed, and it seems to be linked to the fact that there are multiple `set_device` calls happening, causing issues.
By removing it the issue is indeed removed (but the test you added in the test suite isn't failing on main, and since this is what is supposed to catch the regression, this is what I tried :) )
<|||||>I am happy to revert some of the changes I proposed and add yours, it looks much better. However, I have a few questions:
1- Is it OK to call that context manager if `self.device` is CPU? I think we need a check on top of that to make sure we're not on CPU (similarly to what we had before)
```python
import torch
device = torch.device("cpu")
with torch.cuda.device(device):
    print(torch.randn(1))
```
Throws:
```bash
raise ValueError('Expected a cuda device, but got: {}'.format(device))
ValueError: Expected a cuda device, but got: cpu
```
EDIT: just `with torch.device(self.device)` seems to work
2- I am not sure but I think the `with device` context manager is only available since PT2.0 no?
<|||||>> 2- I am not sure but I think the with device context manager is only available since PT2.0 no?
I don't know; those are all very good questions that I don't have the answer to. I just know that `set_device` is now strongly discouraged, so it's probably the source of our issues.<|||||>Thanks !
I can confirm the context manager doesn't work for PT==1.9, which [should be supported by us](https://github.com/huggingface/transformers/blob/4c6e42958951ca66a6b498b1afce8d8ad4ac2274/setup.py#L178):
```python
Traceback (most recent call last):
File "scratch.py", line 203, in <module>
with torch.device(device):
AttributeError: __enter__
```
Therefore I just added some changes to ensure backward compatibility with older PT versions. WDYT?<|||||>Hi @Narsil
Let me know if the changes look all good to you, happy to address any additional comments you have <|||||>May I attempt a different thing ?
I think the fix is correct, but I'm wondering if simply relying on the `torch.cuda.device` context manager could remove the need for the compat layer.<|||||>Sure yes! <|||||>Cannot push
```patch
diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
index 626d33a3d..ee117e62a 100644
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -50,7 +50,6 @@ if is_torch_available():
from torch.utils.data import DataLoader, Dataset
from ..models.auto.modeling_auto import AutoModel
- from ..pytorch_utils import is_torch_greater_or_equal_than_2_0
# Re-export for backward compatibility
from .pt_utils import KeyDataset
@@ -794,16 +793,11 @@ class Pipeline(_ScikitCompat):
if isinstance(device, torch.device):
self.device = device
elif isinstance(device, str):
- if device == "cuda" and not is_torch_greater_or_equal_than_2_0:
- # for backward compatiblity if using `set_device` and `cuda`
- device = f"cuda:{torch.cuda.current_device()}"
self.device = torch.device(device)
elif device < 0:
self.device = torch.device("cpu")
- elif isinstance(device, int):
- self.device = torch.device(f"cuda:{device}")
else:
- raise ValueError(f"Device type not supported. Got {device}")
+ self.device = torch.device(f"cuda:{device}")
else:
self.device = device if device is not None else -1
self.torch_dtype = torch_dtype
@@ -908,13 +902,10 @@ class Pipeline(_ScikitCompat):
with tf.device("/CPU:0" if self.device == -1 else f"/device:GPU:{self.device}"):
yield
else:
- if is_torch_greater_or_equal_than_2_0:
- with torch.device(self.device):
+ if self.device.type == "cuda":
+ with torch.cuda.device(self.device):
yield
- # for backward compatibility
else:
- if self.device.type == "cuda":
- torch.cuda.set_device(self.device)
yield
```<|||||>`torch.cuda.device` is defined for torch==1.9 so it should work.
And `torch.device("cpu")` ... well it's the default there's no need to context manage it.<|||||>Hi @Narsil
I am not sure if `with torch.cuda.device(self.device):` is supported for torch<2.0
https://pytorch.org/tutorials/recipes/recipes/changing_default_device.html
Maybe we should merge this PR for now to also unblock @thomasw21 & @NouamaneTazi. What do you think?<|||||>I don't think we're blocked by this.
> And torch.device("cpu") ... well it's the default there's no need to context manage it.
Not sure of the context of this sentence, but we're overriding the default to `cuda`, so having a context manager to switch back to `cpu` makes sense to me.<|||||>
https://pytorch.org/docs/1.9.0/generated/torch.cuda.device.html?highlight=torch%20cuda%20device#torch.cuda.device
It is supported from 1.9.0+, at least in the docs.<|||||>Great ! agreed with those changes |
transformers | 24,395 | closed | load_in_4bit doesn't seem to work as expected and actually increases GPU memory usage when using ZeRO-3 via accelerate | ### System Info
transformers==4.31.0
deepspeed==0.9.2
peft==0.4.0
bitsandbytes==0.39.0
torch==1.13.0
CUDA Version 11.8
GPUS 8x A100 80gb
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I compared the from_pretrained method with and without load_in_4bit and found that after passing load_in_4bit to from_pretrained I can't load the model with the same hardware and the same accelerate config.
accelerate config:
ds_zero3.yaml
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
gradient_accumulation_steps: 1
gradient_clipping: 1.0
offload_optimizer_device: 'none'
offload_param_device: 'cpu'
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
load in normal way:
test.py
```python
import torch
from accelerate import Accelerator
from transformers import (
    AutoModelForCausalLM,
)


def main():
    accelerator = Accelerator()
    model_name_or_path = "local path of sambanovasystems/BLOOMChat-176B-v1"
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True)


if __name__ == "__main__":
    main()
```
run command:
```
accelerate launch --config_file ds_zero3.yaml test.py
```
it works just fine,
GPU memory usage: 22GB x 8
peak CPU memory usage: ~500GB
then, I try to use load_in_4bit=True
test_4bit.py
```python
import torch
from accelerate import Accelerator
from transformers import (
    AutoModelForCausalLM,
)


def main():
    accelerator = Accelerator()
    model_name_or_path = "local path of sambanovasystems/BLOOMChat-176B-v1"
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True,
        load_in_4bit=True)


if __name__ == "__main__":
    main()
```
run command:
```
accelerate launch --config_file ds_zero3.yaml test_4bit.py
```
OOM error; it seems neither ZeRO-3 nor parameter offload is working as expected. Peak CPU usage is ~500GB in this case.
I also tried:
```
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
```
OOM
### Expected behavior
I am trying to fine-tune BLOOMChat-176B using QLoRA, and QLoRA should use fewer hardware resources. Thanks for the help. | 06-21-2023 07:11:22 | 06-21-2023 07:11:22 | Hello, QLoRA and DeepSpeed can't be used together. DeepSpeed doesn't work with quantized parameters.<|||||>> Hello, QLoRA and DeepSpeed can't be used together. DeepSpeed doesn't work with quantized parameters.
Does that mean I can't do ZeRO optimization while using QLoRA, at least for now? Is DDP the only parallelism method compatible with QLoRA?<|||||>Hello, yes<|||||>cc @younesbelkada for adding more context just in case<|||||>Hi there!
Indeed, 4-bit and 8-bit are not supported with DS; let's maybe add that check in `accelerate.prepare` (I can work on that)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,394 | closed | [WIP] Add SPTSv2 | # What does this PR do?
This PR adds SPTSv2. Per the docs, I am opening this PR immediately after generating the boilerplate.
Fixes #24235
| 06-21-2023 04:36:03 | 06-21-2023 04:36:03 | cc @alaradirik for information. Please let us know when your model is ready for review or if you need any help :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,393 | closed | Increased peak memory usage when upgrading to `transformers` v4.30 and inclusion of `safetensors` | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante @Rocketknight1
From June 8, after upgrading to `transformers` version 4.30 which automatically installs `safetensors`, peak memory usage in BertLarge and T5Large (and possibly other models that we have not measured) increased to what appears to be a fixed value for smaller batch sizes.

### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the commands below on a CUDA enabled Linux machine:
```
# Clone benchmarking repo
git clone https://github.com/iree-org/iree-samples.git
cd iree-samples/iree-tf/benchmark
# Setup output file.
OUTPUT_PATH=/tmp/tf.json
echo "{\"trigger\": { \"timestamp\": \"$(date +'%s')\" }, \"benchmarks\": []}" > "${OUTPUT_PATH}"
# Setup virtual environment.
TENSORFLOW_VERSION=2.12.0 VENV_DIR=tf.venv ./setup_venv.sh
source tf.venv/bin/activate
# Run benchmark.
BENCHMARK_ID=47cb0d3a-5eb7-41c7-9d7c-97aae7023ecf-MODEL_BERT_LARGE-fp32-TF-384xi32-batch1
python benchmark_model.py --benchmark_id="${BENCHMARK_ID}" --device=gpu --output_path="${OUTPUT_PATH}" --iterations=5
```
Benchmark output will show `"device_memory_peak_mb": 4157.236992`
Now remove `safetensors` (which was installed with v4.30):
```
pip uninstall safetensors
python benchmark_model.py --benchmark_id="${BENCHMARK_ID}" --device=gpu --output_path="${OUTPUT_PATH}" --iterations=5
```
Benchmark output will show `"device_memory_peak_mb": 1591.090432`
### Expected behavior
Device peak memory usage should not have increased by 2.5x. | 06-21-2023 04:23:10 | 06-21-2023 04:23:10 | Hey @mariecwhite 👋
The script you shared with us is quite large -- any chance you could help us narrow it down? Ideally, we'd start from a short stand-alone script, unless we are unable to reproduce the issue :)
BTW, my immediate suspicion goes towards the `.from_pretrained()` function, as (de)serialization functions should be the only ones using `safetensors`.<|||||>Investigating and trying to make a minimal reproducer - in preliminary testing I do see some differences and I'd also guess that `safetensors` is the cause.<|||||>Confirmed that the issue is caused by loading `safetensors` weights with `from_pretrained` as @gante suspected. The difference in memory usage appears before training has even begun. It is transient and only occurs during weight loading - so unless your GPU goes OOM during weight loading itself, the rest of training will not be affected by this.
My guess is that the `safetensors` loading is creating two (or more?) copies of the weights on the GPU during the loading process before eventually cleaning them up. The most likely cause is that tensors in TF are created on-device by default, whereas in torch they are created on CPU and must be moved, so probably some of the code is accidentally creating some temporary variables on GPU?
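A minimal illustration of the default-placement difference described here (the array shape is arbitrary; this is just a sketch of the behaviour, not the library code):
```python
import numpy as np
import tensorflow as tf

arr = np.zeros((1024, 1024), dtype=np.float32)

t_default = tf.convert_to_tensor(arr)  # lands on GPU:0 when a GPU is visible
print(t_default.device)

with tf.device("/CPU:0"):  # pinning the conversion keeps the copy in host memory
    t_cpu = tf.convert_to_tensor(arr)
print(t_cpu.device)
```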
To test memory usage for loading in TF I used the following:
```python
import tensorflow as tf
from transformers import TFAutoModel
model = TFAutoModel.from_pretrained(repo_dir)
print(tf.config.experimental.get_memory_info("GPU:0"))
```
In my testing, peak GPU memory usage when loading `bert-large-cased` was 1.5GB when loading from TF `.h5` weights and 4.1GB when loading from `safetensors`, which matches @mariecwhite's benchmark.
cc @Narsil <|||||>Further investigation: I think the cause is in the `safetensors` code [here](https://github.com/huggingface/safetensors/blob/main/bindings/python/py_src/safetensors/tensorflow.py#L130) - `tf.convert_to_tensor()` creates the tensor on the GPU by default if one is present, so the entire state dict is materialized on the GPU alongside the randomly initialized weights during loading.<|||||>Hi @mariecwhite, thanks again for the bug report! This is a significant issue and we really appreciate the warning. The PR to fix it is open at #24404 and will hopefully be merged soon. If you'd like to try using the PR branch before then, you can install it with
```python
pip install git+https://github.com/huggingface/transformers.git@tf_safetensors_reduced_mem_usage
```<|||||>Thank you for the quick follow-up!<|||||>No probs - it's our fault for missing this issue! The PR has now been merged, so you can just install from `main` to use it.
```
pip install git+https://github.com/huggingface/transformers.git
```
It'll be included in the next patch or full release of `transformers`, at which point you can go back to just `pip install transformers`. Thanks again for the clear bug report and the work you did tracing memory usage in different scenarios! |
transformers | 24,392 | open | Allow `TextClassificationPipeline` to handle input longer than `model_max_length` tokens | ### Problem
Running a `TextClassificationPipeline` on a text with more tokens than its model's maximum position embeddings (e.g. 512 for BERT) like so:
```python
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
classifier("Hello, world! " * 1000)
```
will lead to this error:
```
RuntimeError: The size of tensor a (4002) must match the size of tensor b (512) at non-singleton dimension 1
```
Note: the numbers (`4002` and `512`, above) will vary depending on the max length of the model in use and the length (in tokens) of the text that triggered the error.
(_**If you found this issue through web-searching for the above error or some other means, look at the linked PR for an implemented code fix to this problem, and consider giving a thumbs-up to this comment if you think it should be merged into the main codebase**_)
### Feature request
We should add "chunking"/"sliding window" functionality to `TextClassificationPipeline`, allowing it to process documents longer than the `model_max_length` of its `.model`. Specifically, this would run an instance of the model on each of several "sliding window" views of each input sequence, then take the mean, similar to (but somewhat simpler than) how [`TokenClassificationPipeline`](https://github.com/huggingface/transformers/blob/ad78d9597b224443e9fe65a94acc8c0bc48cd039/src/transformers/pipelines/token_classification.py#L96) does so in part by subclassing from `ChunkPipeline`.
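For anyone who wants this behaviour today, here is a rough outside-the-pipeline sketch of the chunk-then-average idea (the model name, window and stride values are only illustrative, not what the pipeline would necessarily use):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Hello, world! " * 1000
enc = tokenizer(
    text,
    truncation=True,
    max_length=512,
    stride=128,                      # overlap between consecutive windows
    return_overflowing_tokens=True,  # one row per sliding window
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]).logits

mean_probs = logits.softmax(-1).mean(dim=0)  # average the per-window predictions
print({model.config.id2label[i]: round(float(p), 4) for i, p in enumerate(mean_probs)})
```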
### Motivation
It would be nice to easily do, e.g., sentiment analysis on documents longer than the `model_max_length` of the given model/tokenizer. I have in the past tried to do this in a time-sensitive context and was unable to do so.
### Your contribution
I have already opened a draft PR: #24312. I would be happy to finish the missing parts (e.g. documentation) if someone on the Huggingface team (I believe @Narsil is the appropriate person to tag) can confirm that they would accept this feature as I plan to implement it. | 06-21-2023 03:51:47 | 06-21-2023 03:51:47 | I don't have a lot of time to review this atm.
@amyeroberts Do you know someone that could ?
Overall I'm hesitant to think it's a good idea. Maintenance is much higher for those `ChunkPipeline` and splitting a document into bits is relatively easy to do outside of the pipeline.
The merge strategies are also not entirely obvious to me.
That being said, it's definitely very convenient if implemented directly in the pipeline.<|||||>@boyleconnor Thanks for opening this feature request and for opening an example PR! As @Narsil mentions, there's a maintenance cost to adding this and the set of people who could review this are all pretty busy.
What I suggest is leaving the PR as an example for anyone who might wish to see how to implement this. If this issue gets a lot of attention (we'll measure with 👍 on the feature description) then we can revisit.
|
transformers | 24,391 | closed | Bug on Gather all remaining tensors and put them back on the CPU | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @muellerzr @ArthurZucker
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Colab Link
- https://colab.research.google.com/drive/1afT8O5OrUaTaZ07xvi_3AMW07P2nnRuC?usp=sharing
### Expected behavior
### What is the problem?
In the trainer's `evaluation_loop`, the huggingface trainer collects all tensors and passes them to the `compute_metrics` method all at once. If the collected tensor is too large, a CUDA OOM error occurs, so this is prevented by sending it to the CPU in advance (using the `nested_numpify` method) every `eval_accumulation_steps` steps.
For the remaining samples that don't fill a full `eval_accumulation_steps` window, metrics were calculated after an additional gather/concatenation using the code below.
https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/trainer.py#L3304-L3318
However, since the code was changed as below in PR #24028, this problem has appeared.
https://github.com/huggingface/transformers/blob/ebd94b0f6f215f6bc0f70e61eba075eb9196f9ef/src/transformers/trainer.py#L3184-L3192
The code above doesn't merge the remaining tensors into the final container, it just allocates them. In fact, in the example code I ran, even though `len(eval_dataset)` was `7224`, with `per_device_eval_batch_size=16`, `eval_accumulation_steps=100`, gpu-cpu communication was performed 4 times, and only `824` eval samples remained.
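A tiny illustration of the difference (the arrays below just stand in for the accumulated and leftover predictions; the shapes are taken from the DST example further down, and this is not the actual Trainer code):
```python
import numpy as np

accumulated = np.zeros((6400, 9, 71))  # predictions already offloaded to CPU in earlier accumulation steps
leftover = np.zeros((824, 9, 71))      # tensors gathered after the last accumulation boundary

buggy = leftover                                         # overwrite: earlier samples are lost
fixed = np.concatenate((accumulated, leftover), axis=0)  # append: all samples are kept

print(buggy.shape, fixed.shape)  # (824, 9, 71) (7224, 9, 71)
```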
Please check it out and hope the PR will be corrected. Thank you! | 06-21-2023 03:16:04 | 06-21-2023 03:16:04 | @jinmang2 can you provide the example code you ran? Would be an excellent way for us to write a test-case around it :)<|||||>@muellerzr sure!
### dialogue state tracking results (my own code)
- colab link: https://colab.research.google.com/drive/1afT8O5OrUaTaZ07xvi_3AMW07P2nnRuC?usp=sharing
- code link: https://github.com/jinmang2/KLUE-DST/blob/main/run.py
- results
```python
...
eval_results = trainer.evaluation_loop(
trainer.get_eval_dataloader(),
description="Evaluation",
prediction_loss_only=False,
ignore_keys=None,
metric_key_prefix="eval",
)
len(trainer.eval_dataset), eval_results.predictions[0].shape
```
```
(7224, (824, 9, 71))
```
- expected shape: `(7224, 9, 71)`
### glue mnli results (huggingface's example code)
- colab link: https://colab.research.google.com/drive/1Yfoh4-Pl5LqGUWBZZqbc3OGN1R3x3O_w?usp=sharing
- code link: https://github.com/huggingface/transformers/blob/ba695c1efd55091e394eb59c90fb33ac3f9f0d41/examples/pytorch/text-classification/run_glue.py
- results
```python
...
eval_results = trainer.evaluation_loop(
trainer.get_eval_dataloader(),
description="Evaluation",
prediction_loss_only=False,
ignore_keys=None,
metric_key_prefix="eval",
)
# The total number of samples in the eval example is 9815.
# However, it can be seen that only 3 samples of prediction used for evaluation remain.
len(trainer.eval_dataset), eval_results.predictions[0].shape
```
```
(9815, (3,))
```
- expected shape: `(9815,)`<|||||>Since the `evaluate` method does not know how many eval samples have been evaluated (internally, only necessary values are loaded into the metrics dictionary), the `evaluation_loop` method directly receives `eval_results` and checks the eval samples.<|||||>Thanks @jinmang2, this indeed was a big from the integration and the original logic should have been maintained. A PR will be opened shortly with the solution (and also solves a failing test!) thanks again for your solution and thorough analysis<|||||>Thanks for fixing it! :-) |
transformers | 24,390 | closed | AttributeError: 'AutoformerModel' object has no attribute 'embedder' | ### System Info
Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. declare the model:
```python
config = AutoformerConfig()
config.prediction_length = 5
config.context_length = 55
config.lags_sequence = [1, 2, 3, 4, 5]
model = AutoformerModel(config)
```
2. invoke the forward method of the model by calling it:
```python
outputs = model(
    past_values=batches["past_values"][:16],
    past_time_features=batches["past_time_features"][:16],
    past_observed_mask=batches["past_observed_mask"][:16],
    static_categorical_features=batches["static_categorical_features"][:16],
    future_values=batches["future_values"][:16],
    future_time_features=batches["future_time_features"][:16],
)
```
### Expected behavior
I'd expect the model to run the forward method successfully. Instead, I get the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/2n/p53ld4x51l1_yhdkj7q4y3t00000gr/T/ipykernel_49714/3104772270.py in <module>
----> 1 outputs = model(
2 past_values=batches["past_values"][:16],
3 past_time_features=batches["past_time_features"][:16],
4 past_observed_mask=batches["past_observed_mask"][:16],
5 static_categorical_features=batches["static_categorical_features"][:16],
~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
~/opt/anaconda3/lib/python3.9/site-packages/transformers/models/autoformer/modeling_autoformer.py in forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1725 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1726
-> 1727 transformer_inputs, temporal_features, loc, scale, static_feat = self.create_network_inputs(
1728 past_values=past_values,
1729 past_time_features=past_time_features,
~/opt/anaconda3/lib/python3.9/site-packages/transformers/models/autoformer/modeling_autoformer.py in create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features)
1637 static_feat = torch.cat((static_real_features, static_feat), dim=1)
1638 if static_categorical_features is not None:
-> 1639 embedded_cat = self.embedder(static_categorical_features)
1640 static_feat = torch.cat((embedded_cat, static_feat), dim=1)
1641 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
1612 if name in modules:
1613 return modules[name]
-> 1614 raise AttributeError("'{}' object has no attribute '{}'".format(
1615 type(self).__name__, name))
1616
AttributeError: 'AutoformerModel' object has no attribute 'embedder'
``` | 06-21-2023 00:25:46 | 06-21-2023 00:25:46 | cc @kashif <|||||>While waiting our expert @kashif , the code in
```python
if config.num_static_categorical_features > 0:
    self.embedder = AutoformerFeatureEmbedder(
        cardinalities=config.cardinality, embedding_dims=config.embedding_dimension
    )
```
together with
```
if static_categorical_features is not None:
    embedded_cat = self.embedder(static_categorical_features)
```
Since you pass `static_categorical_features` to the model's forward, you can check your config's `num_static_categorical_features ` attribute. Probably it is 0 and `self.embedder` is not created. In this case, I think we should not pass `static_categorical_features` to the model.
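A hedged sketch of the two consistent options (the config values below are illustrative):
```python
from transformers import AutoformerConfig, AutoformerModel

# Option 1: the config declares no static categorical features, so simply
# do not pass `static_categorical_features` to the forward call.
config = AutoformerConfig(prediction_length=5, context_length=55, lags_sequence=[1, 2, 3, 4, 5])
assert config.num_static_categorical_features == 0

# Option 2: declare the static categorical features so that `self.embedder` is created.
config_with_cat = AutoformerConfig(
    prediction_length=5,
    context_length=55,
    lags_sequence=[1, 2, 3, 4, 5],
    num_static_categorical_features=1,
    cardinality=[2],           # number of categories of the single static feature
    embedding_dimension=[8],   # embedding size for that feature
)
model = AutoformerModel(config_with_cat)
```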
<|||||>thanks @ydshieh having a look!<|||||>@pourmatin that is correct, if you do not specify any categorical features, then you should not pass the model a list of categorical features... I believe we had a check for this, let me confirm!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,389 | closed | [Trainer] Fix optimizer step on PyTorch TPU | # What does this PR do?
Update the optimizer step for TPUs to use `self.optimizer.step()` instead of `xm.optimizer_step(self.optimizer)`.
AcceleratedOptimizer properly calls `xm.optimizer_step` on the optimizer (https://github.com/huggingface/accelerate/blob/main/src/accelerate/optimizer.py#L129).
This fixes a bug in transformers/trainer.py when using Pytorch on TPUs:
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 471, in _fetch_gradients for param_group in optimizer.getstate()['param_groups']: KeyError: 'param_groups'
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 06-20-2023 23:21:51 | 06-20-2023 23:21:51 | cc @muellerzr and @pacman100 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @cowanmeg for the fix! |
transformers | 24,388 | closed | [docs] Fix NLLB-MoE links | Fixes the broken links raised in #24382. | 06-20-2023 23:11:25 | 06-20-2023 23:11:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,387 | closed | Update deprecated torch.ger | ` torch.ger` was deprecated long time ago and `torch.outer` is a direct replacement: https://pytorch.org/docs/stable/generated/torch.ger.html | 06-20-2023 21:57:35 | 06-20-2023 21:57:35 | @sgugger please take a look.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>A test (I believe unrelated to the change) timed out https://app.circleci.com/pipelines/github/huggingface/transformers/66846/workflows/ae63fb76-7260-490e-8301-7c6cf986e693/jobs/833163<|||||>Yes it's unrelated, merging :-) |
transformers | 24,386 | closed | Loading Trained RAG Model | ### System Info
Python 3.9.16
Transformers 4.13.0
WSL
### Who can help?
@ArthurZucker @younesbelkada @shamanez
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
After finetuning RAG, I'm left with the following directory, and I'm not sure how to load the resulting checkpoint.

I should note the checkpoint is ~6 GB while the [original huggingface checkpoint](https://huggingface.co/facebook/rag-token-base/tree/main) is 2 GB. I suspect this is because I used the [`finetune_rag_ray_end2end.sh`](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag-end2end-retriever) script, so it includes all 3 models (reader, retriever, generator).
Below are my attempts to load the checkpoint
**Attempt 1**
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
rag_tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
rag_retriever = RagRetriever.from_pretrained(
"facebook/rag-token-base",
use_dummy_dataset=False,
indexed_dataset=ds,
index_name="embeddings",
)
rag_model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=rag_retriever)
checkpoint_path = "/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/val_avg_em=0.0026-step_count=601.0.ckpt"
rag_model.load_state_dict(torch.load(checkpoint_path))
```
The program runs forever with the following traceback when I interrupt it:
```
Some weights of RagTokenForGeneration were not initialized from the model checkpoint at facebook/rag-token-base and are newly initialized: ['rag.generator.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/_private/services.py:238: UserWarning: Not all Ray Dashboard dependencies were found. To use the dashboard please install Ray using `pip install ray[default]`. To disable this message, set RAY_DISABLE_IMPORT_WARNING env var to '1'.
warnings.warn(warning_message)
^CTraceback (most recent call last):
File "/nfshomes/yzhang42/rag/notebooks/rag_eval.py", line 37, in <module>
rag_model.load_state_dict(torch.load(checkpoint_path))
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/torch/serialization.py", line 712, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/torch/serialization.py", line 1049, in _load
result = unpickler.load()
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/actor.py", line 1005, in _deserialization_helper
return worker.core_worker.deserialize_and_register_actor_handle(
File "python/ray/_raylet.pyx", line 1594, in ray._raylet.CoreWorker.deserialize_and_register_actor_handle
File "python/ray/_raylet.pyx", line 1563, in ray._raylet.CoreWorker.make_actor_handle
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/_private/function_manager.py", line 402, in load_actor_class
actor_class = self._load_actor_class_from_gcs(
File "/fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/ray/_private/function_manager.py", line 487, in _load_actor_class_from_gcs
time.sleep(0.001)
KeyboardInterrupt
```
**Attempt 2**
```py
from transformers import AutoConfig, AutoModel, PretrainedConfig, RagTokenizer, RagRetriever, BartForConditionalGeneration, RagTokenForGeneration, RagSequenceForGeneration, RagConfig
from transformers import BartModel
qe_config = PretrainedConfig(
name_or_path=\
"/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/generator_tokenizer/tokenizer_config.json")
gen_config = PretrainedConfig(
name_or_path=\
"/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/question_encoder_tokenizer/tokenizer_config.json")
RagConfig.from_question_encoder_generator_configs(
question_encoder_config=qe_config,
generator_config=gen_config
)
```
Gives the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 11
4 qe_config = PretrainedConfig(
5 name_or_path=\
6 "/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/generator_tokenizer/tokenizer_config.json")
7 gen_config = PretrainedConfig(
8 name_or_path=\
9 "/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint601/question_encoder_tokenizer/tokenizer_config.json")
---> 11 RagConfig.from_question_encoder_generator_configs(
12 question_encoder_config=qe_config,
13 generator_config=gen_config
14 )
File /fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/configuration_rag.py:183, in RagConfig.from_question_encoder_generator_configs(cls, question_encoder_config, generator_config, **kwargs)
172 @classmethod
173 def from_question_encoder_generator_configs(
174 cls, question_encoder_config: PretrainedConfig, generator_config: PretrainedConfig, **kwargs
175 ) -> PretrainedConfig:
176 r"""
177 Instantiate a :class:`~transformers.EncoderDecoderConfig` (or a derived class) from a pre-trained encoder model
178 configuration and decoder model configuration.
(...)
181 :class:`EncoderDecoderConfig`: An instance of a configuration object
182 """
--> 183 return cls(question_encoder=question_encoder_config.to_dict(), generator=generator_config.to_dict(), **kwargs)
File /fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/rag/configuration_rag.py:140, in RagConfig.__init__(self, vocab_size, is_encoder_decoder, prefix, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, title_sep, doc_sep, n_docs, max_combined_length, retrieval_vector_size, retrieval_batch_size, dataset, dataset_split, index_name, index_path, passages_path, use_dummy_dataset, reduce_loss, label_smoothing, do_deduplication, exclude_bos_score, do_marginalize, output_retrieved, use_cache, forced_eos_token_id, **kwargs)
136 decoder_model_type = decoder_config.pop("model_type")
138 from ..auto.configuration_auto import AutoConfig
--> 140 self.question_encoder = AutoConfig.for_model(question_encoder_model_type, **question_encoder_config)
141 self.generator = AutoConfig.for_model(decoder_model_type, **decoder_config)
143 self.reduce_loss = reduce_loss
File /fs/nexus-scratch/yzhang42/miniconda3/envs/qa3/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:492, in AutoConfig.for_model(cls, model_type, *args, **kwargs)
490 config_class = CONFIG_MAPPING[model_type]
491 return config_class(*args, **kwargs)
--> 492 raise ValueError(
493 f"Unrecognized model identifier: {model_type}. Should contain one of {', '.join(CONFIG_MAPPING.keys())}"
494 )
ValueError: Unrecognized model identifier: . Should contain one of imagegpt, qdqbert, vision-encoder-decoder, trocr, fnet, segformer, vision-text-dual-encoder, perceiver, gptj, layoutlmv2, beit, rembert, visual_bert, canine, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text_2, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron-bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, speech-encoder-decoder, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas, splinter, sew-d, sew, unispeech-sat, unispeech
```
### Expected behavior
I'm not sure what expected behavior is supposed to be. | 06-20-2023 21:01:58 | 06-20-2023 21:01:58 | Hi @YichiRockyZhang
Thanks for the issue; looking at your environment (transformers == 4.13.0), I would probably give it a try with one of the newest versions of transformers. It seems the config didn't save the model identifier properly for some reason. Would it be possible for you to use a recent version of the lib? <|||||>Hi @YichiRockyZhang
If @younesbelkada's above suggestion is still not working, it would help a lot if you could provide a short but a bit more complete code example that shows how you:
- **create/load the initialize model(s)**
- **save it to the checkpoint (without the need of training/fine-tuning)**
- _(you already provided this part)_ the way you try to load the saved checkpoint
This way, it's easier and faster for us to reproduce and look into the issue. Thank you in advance.<|||||>Hi @younesbelkada. Thanks for the response! This did help, as running the finetuning script now results in a more sensible saved checkpoint.

I can now load the model with the following:
```py
path = "/fs/nexus-scratch/yzhang42/rag_end2end/model_checkpoints_MS/checkpoint31"
rag_tokenizer = RagTokenizer.from_pretrained(path)
rag_retriever = RagRetriever.from_pretrained(
path,
use_dummy_dataset=False,
indexed_dataset=ds,
index_name="compressed",
)
rag_model = RagTokenForGeneration.from_pretrained(path, retriever=rag_retriever)
```
Hi @ydshieh ! Unfortunately, I believe my problem is specific to fine-tuning. I'm using the only fine-tuning script for this model that I can find (in huggingface documentation and even on the internet). The script uses pytorch lightning to train and save the model. The below snippet from [`finetune_rag.py`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag-end2end-retriever/finetune_rag.py) details how the model is saved.
```py
@pl.utilities.rank_zero_only
def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
    save_path = self.output_dir.joinpath("checkpoint{}".format(self.step_count))
    self.model.config.save_step = self.step_count
    # self.model.save_pretrained(save_path)
    self.tokenizer.save_pretrained(save_path)
    if self.custom_config.end2end:
        modified_state_dict = self.model.state_dict()
        for key in self.model.state_dict().keys():
            if key.split(".")[1] == "ctx_encoder":
                del modified_state_dict[key]
        self.model.save_pretrained(save_directory=save_path, state_dict=modified_state_dict)
        save_path_dpr = os.path.join(self.dpr_ctx_check_dir, "checkpoint{}".format(self.step_count))
        self.model.rag.ctx_encoder.save_pretrained(save_path_dpr)
        self.context_tokenizer.save_pretrained(save_path_dpr)
```
I understand HF does not maintain these scripts, but for what it's worth, I think retrieval-augmented models are very important and should have a bit more support!<|||||>@YichiRockyZhang
Thanks for sharing more details. What I mean is that you can still make a **self-complete** code snippet:
- how you (or the script) create the model
- then save that model using the logic in the method `on_save_checkpoint` you provided
You don't need to go through the training part in the script, just the create/save part. By `self-complete`, it means we can just run it directly to see the failure you have. Of course, **you will have to wrap things up in your own way** (not just show us the definition of `on_save_checkpoint`). I hope this makes my previous comment a bit clearer, and I look forward to seeing a reproducible code snippet 🤗 <|||||>@ydshieh Hi, thank you for the quick responses! I've edited my above reply to reflect the fact that upgrading to transformers==4.30.2 seems to have worked after making sure my data was ASCII encoded. Though it does seem that the fine-tuning script only saves the whole model after the first epoch. I've adjusted the code to be:
```py
@pl.utilities.rank_zero_only
def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
    save_path = self.output_dir.joinpath("checkpoint{}".format(self.step_count))
    self.model.config.save_step = self.step_count
    # self.model.save_pretrained(save_path)
    self.tokenizer.save_pretrained(save_path)
    if self.custom_config.end2end:
        modified_state_dict = self.model.state_dict()
        for key in self.model.state_dict().keys():
            if key.split(".")[1] == "ctx_encoder":
                del modified_state_dict[key]
        self.model.save_pretrained(save_directory=save_path, state_dict=modified_state_dict)
        save_path_dpr = os.path.join(self.dpr_ctx_check_dir, "checkpoint{}".format(self.step_count))
        self.model.rag.ctx_encoder.save_pretrained(save_path_dpr)
        self.context_tokenizer.save_pretrained(save_path_dpr)
    else:  # NEW
        state_dict = self.model.state_dict()
        self.model.save_pretrained(save_directory=save_path, state_dict=state_dict)
```
I will update this thread in the morning once fine-tuning is finished. If my fix doesn't work out, I'll try to put together a more minimal and self-complete script for debugging purposes! 🤗<|||||>Nice and good luck :-) !<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,385 | closed | How to unwrap after auto_wrap in FSDP? | I am currently fine-tuning a LLM (LLaMA) and would like to retrieve the gradients of each weight (parameter) after every gradient update. However, I notice that weights are (auto) wrapped into stuff like “_fsdp_wrapped_module._flat_param” during training. I need to map these wrapped weights to the original LLaMA architecture such as “self_attn.v_proj”. Any code examples?
I guess “summon_full_params()” might be the function that I look for, but I am not sure if that is correct. I also have difficulty using this function. Thanks a lot for any help! | 06-20-2023 20:56:45 | 06-20-2023 20:56:45 | This is more a question for the PyTorch forums as it's purely related to FSDP. Still cc-ing @pacman100 in case he has any idea.<|||||>Hello, as Sylvain mentioned this question is for the PyTorch forums. `summon_full_params` usage can be found in these tests: https://github.com/pytorch/pytorch/blob/main/test/distributed/checkpoint/test_fsdp_optim_state.py#L56-L59
I am not sure if it contains the information related to the gradients of a given parameter. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,384 | closed | Refactor hyperparameter search backends | Fixes https://github.com/huggingface/transformers/issues/24379
The goal here is to clearly group the essential info/functionality about each backend together to make reading/changing things easier. For example, if another backend integration is added, it should be less likely for something to be forgotten, as apparently happened with wandb.
@sgugger sorry I didn't get a full confirmation to go ahead with this, it just seemed easier to show what I meant with code rather than continue explaining in the issue. There are many other ways this could be done and I can change the approach, but I hope that the general direction at least is clear from this PR.
I also think this would help move towards improving the user facing API since as mentioned in https://github.com/huggingface/transformers/issues/24278#issuecomment-1599189018 (cc @hugocool) the kwargs have no type hints and are not very easy to use. So maybe instead of:
```python
best_run = trainer.hyperparameter_search(
direction="maximize",
backend="ray",
# this is just **kwargs, not so clear what's possible...
storage_path="...",
callbacks=...,
)
```
one could write:
```python
best_run = trainer.hyperparameter_search(
direction="maximize",
backend=RayTuneBackend(
# now more assistance is possible
storage_path="...",
callbacks=...,
),
)
```
| 06-20-2023 20:50:49 | 06-20-2023 20:50:49 | > Using abstract classes like this is not really the way the Transformers library is designed
That's fine, I was very unsure which approach to take. `abc` offers additional safety and IDE assistance as the standard way to ensure that all abstract methods are implemented, but it's probably overkill here and I also didn't like how heavy it was. I've pushed a much simpler strategy.
> I recommended to just complete the error message to include wandb.
The point of this is that it's difficult to see all the missing bits. The current code isn't just missing wandb in the error message, it's also missing from `default_hp_space` (fixed in this PR) and the docstring (not enforced in this PR, although it could be, I just didn't want to jump there just yet).<|||||>> I also don't see how you would have benefits for IDE as you show in the PR description.
Sorry for the confusion, that's not part of this PR to keep the scope focused, but if this is merged I can follow it up with another which adds constructors to each backend class which accept the precise kwargs that the backend `run` supports.<|||||>Opened https://github.com/huggingface/huggingface_hub/issues/1526 in regards to the unrelated test failure.<|||||>Thanks a lot for your contribution!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24384). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,383 | closed | How BERT 512 limit works? | ### System Info
I passed a long text of 3000 tokens and it did not give me any error. Does BERT not have a 512-token limit? Why is it not giving any error? This is the code I used. You can pass any input with more than 512 tokens.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Long input text with more than 512 tokens
text = "This is a very long text with more than 512 tokens..."
tokens = tokenizer.tokenize(text)
print(len(tokens))
### Expected behavior
an error when processing more than 512 tokens | 06-20-2023 20:08:02 | 06-20-2023 20:08:02 | Please use the [forums](https://discuss.huggingface.co/) for such questions. You did not pass your input to the model, just the tokenizer.<|||||>Sure! I will take care of that. I am a little new to this. Thanks for the info |
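To make the answer above concrete, here is a small sketch (my own example): the tokenizer alone happily returns far more than 512 ids, and the limit only matters once the ids reach the model, where `truncation=True` is the usual fix:
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

long_text = "word " * 3000

# Tokenizing alone raises no error, even for thousands of tokens.
ids = tokenizer(long_text, return_tensors="pt")["input_ids"]
print(ids.shape)  # well over 512

# The 512 limit comes from the model's position embeddings, so truncate before the forward pass.
inputs = tokenizer(long_text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
print(out.last_hidden_state.shape)  # (1, 512, 768)
```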
transformers | 24,381 | closed | fixing layer indexing error when pipeline parallel > 1 | When applying pipeline parallel, the index of layers converted from transformers to Megatron is wrong, since the offset is not applied.
For example, with 4 layers and pipeline parallel 2, we want the result to look like `layers.0 + layers.1` and `layers.2 + layers.3`, but currently the result is `layers.0 + layers.1` and `layers.0 + layers.1`, because the code should use `pp_layer_id`, calculated as `layer + offset`, instead of `layer`, which is only the index of the range loop. | 06-20-2023 17:24:31 | 06-20-2023 17:24:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24381). All of your documentation changes will be reflected on that endpoint. |
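A rough sketch of the indexing described above; `copy_layer` is a hypothetical stub of mine, not a function from the conversion script:
```python
def copy_layer(source, target_rank, target_layer):
    # Hypothetical stand-in for the real weight-copying logic.
    print(f"{source} -> pp_rank {target_rank}, local layer {target_layer}")

num_layers, pp_size = 4, 2
layers_per_stage = num_layers // pp_size  # 2 layers per pipeline stage

for pp_rank in range(pp_size):
    offset = pp_rank * layers_per_stage
    for layer in range(layers_per_stage):
        pp_layer_id = layer + offset  # 0, 1 on stage 0 and 2, 3 on stage 1, instead of 0, 1 twice
        copy_layer(source=f"layers.{pp_layer_id}", target_rank=pp_rank, target_layer=layer)
```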
transformers | 24,380 | closed | WavLM error when running forward | ### System Info
**Relevant Libraries**
transformers==4.26.1
torchaudio==2.0.2
torch==2.0.1
OS: Ubuntu 20.04
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```python
import torchaudio
from transformers import AutoModel, AutoFeatureExtractor
model = AutoModel.from_pretrained("microsoft/wavlm-base-plus")
fe = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
audio_path = "..."
audio, sr = torchaudio.load(audio_path)
input = fe(audio, return_tensor="pt")
model(input_values=input["input_values"])
```
---
When I run the previous code, I get the following error:
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "PATH_TO_MY_ENV_SITE_PACKAGES/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "PATH_TO_MY_ENV_SITE_PACKAGES/transformers/models/wavlm/modeling_wavlm.py", line 1229, in forward
extract_features = self.feature_extractor(input_values)
File "PATH_TO_MY_ENV_SITE_PACKAGES/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "PATH_TO_MY_ENV_SITE_PACKAGES/transformers/models/wavlm/modeling_wavlm.py", line 346, in forward
hidden_states = input_values[:, None]
TypeError: list indices must be integers or slices, not tuple
```
### Expected behavior
get the output with last_hidden_state and others. This is not happening with HuBERT or Wav2Vec2. | 06-20-2023 17:01:21 | 06-20-2023 17:01:21 | I think there's a small typo with your codesnippet:
```diff
- input = fe(audio, return_tensor="pt")
+ input = fe(audio, return_tensors="pt")
```
E.g. running the following works for me:
```python
from transformers import AutoModel, AutoFeatureExtractor
import torch
import numpy as np
model = AutoModel.from_pretrained("microsoft/wavlm-base-plus")
fe = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
audio = np.random.randn(16000) # random 1 second input audio
input = fe(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
model(input_values=input["input_values"])
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @MorenoLaQuatra - did the above suggestion fix your issue? Feel free to close this thread if so<|||||>Well, actually my problem is more linked with the hidden_states when using output_hidden_states=True (the typo was my fault when reporting the snippet here on GitHub). However, I cannot reproduce it at the moment, so I will close for now.
Thanks @sanchit-gandhi !<|||||>Thanks for clarifying @MorenoLaQuatra! Feel free to open the issue again with a code repro if you find the model isn't working and we can take a deeper dive into it |
transformers | 24,379 | closed | `Trainer.hyperparameter_search` doesn't document `wandb` or offer it as a default backend | `Trainer.hyperparameter_search` seems to prioritise the `optuna/ray/sigopt` backends, while `wandb` almost seems like a second-class citizen in the code. Specifically, the docstring explicitly mentions the first three backends multiple times in different contexts but not `wandb`, and `default_hp_search_backend` won't return `wandb` even if it's available. Is this intentional or accidental? | 06-20-2023 16:52:32 | 06-20-2023 16:52:32 | Those are all integrations maintained by the authors of those libraries, we do not maintain them ourselves. It might be a bug, but it's up to the wandb folks to fix it in this case :-) <|||||>Even the glue code in `trainer.py` that ties the various backends together?
Would you accept a PR to refactor this stuff? For example this code:
```python
if backend is None:
backend = default_hp_search_backend()
if backend is None:
raise RuntimeError(
"At least one of optuna or ray should be installed. "
"To install optuna run `pip install optuna`. "
"To install ray run `pip install ray[tune]`. "
"To install sigopt run `pip install sigopt`."
)
backend = HPSearchBackend(backend)
if backend == HPSearchBackend.OPTUNA and not is_optuna_available():
raise RuntimeError("You picked the optuna backend, but it is not installed. Use `pip install optuna`.")
if backend == HPSearchBackend.RAY and not is_ray_tune_available():
raise RuntimeError(
"You picked the Ray Tune backend, but it is not installed. Use `pip install 'ray[tune]'`."
)
if backend == HPSearchBackend.SIGOPT and not is_sigopt_available():
raise RuntimeError("You picked the sigopt backend, but it is not installed. Use `pip install sigopt`.")
if backend == HPSearchBackend.WANDB and not is_wandb_available():
raise RuntimeError("You picked the wandb backend, but it is not installed. Use `pip install wandb`.")
```
contains a lot of repetition that I'd be happy to clean up, and it's easy to see how the wandb integration author missed a place to add a reference to wandb.<|||||>The first bit with the runtime error is fine (though missing wandb). For the rest, it should be done in each integration, which normally errors very fast if the corresponding lib is not installed. |
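One possible shape for that cleanup, sketched only to illustrate the idea (the dict layout and names below are placeholders of mine, not the final design; `is_*_available` and `HPSearchBackend` are the existing names used in the block quoted above):
```python
# Grouping per-backend availability checks and install hints in one place (illustrative sketch).
BACKENDS = {
    "optuna": (is_optuna_available, "pip install optuna"),
    "ray": (is_ray_tune_available, "pip install 'ray[tune]'"),
    "sigopt": (is_sigopt_available, "pip install sigopt"),
    "wandb": (is_wandb_available, "pip install wandb"),
}

if backend is None:
    backend = next((name for name, (available, _) in BACKENDS.items() if available()), None)
    if backend is None:
        hints = " ".join(f"To install {name} run `{cmd}`." for name, (_, cmd) in BACKENDS.items())
        raise RuntimeError(f"At least one of optuna, ray, sigopt or wandb should be installed. {hints}")
backend = HPSearchBackend(backend)
available, cmd = BACKENDS[backend.value]
if not available():
    raise RuntimeError(f"You picked the {backend.value} backend, but it is not installed. Use `{cmd}`.")
```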
transformers | 24,382 | closed | Some links in NLLB page are broken. |
Hi, this is just a simple notification about broken links.
https://huggingface.co/docs/transformers/v4.30.0/model_doc/nllb-moe
On this page, two links in the "documentation resources" are broken.
Thank you.
| 06-20-2023 16:51:59 | 06-20-2023 16:51:59 | cc @stevhliu <|||||>Doc in version 4.30.0 still has problems, not only the doc section but other links :( @stevhliu <|||||>The fix is on main, so it will only be reflected on the main version of the documentation. And if you have found other links to fix, please do tell us or open directly PRs to fix them :-) |
transformers | 24,378 | closed | Skip a tapas (tokenization) test in past CI | # What does this PR do?
Same as in #24251 where 1 test (from the tokenization test file) is missed. | 06-20-2023 16:24:31 | 06-20-2023 16:24:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24378). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,377 | closed | Better test name and enable pipeline test for `pix2struct` | # What does this PR do?
In #24364, the `pix2struct` test file didn't get its `pipeline_model_mapping` updated, because the heuristic for finding a test class (picking the shortest test class name) didn't work well here.
Let's give the test class a better/shorter/clearer name (even though we don't really have a base model): `Pix2StructModelTest` instead of `Pix2StructTextImageModelTest`.
This enables the script `add_pipeline_model_mapping_to_test.py` works for `pix2struct`, and then we get the pipeline test being run. | 06-20-2023 16:01:34 | 06-20-2023 16:01:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,376 | closed | Migrate doc files to Markdown. | # What does this PR do?
The new UI in GitHub makes MDX pretty hard to read for diffs, so this PR migrates the doc files from mdx to md. This shouldn't break anything in the doc-builder. | 06-20-2023 15:40:55 | 06-20-2023 15:40:55 | I'd add a :warning: emoji as a prefix for the disclaimer, but other than that it looks good to me!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,375 | closed | TF LLaMA Port | This is an autoconversion of the LLaMA code to TF by GPT-4. As always, expect things to be broken until I finish debugging it!
TODO list:
- [ ] Get tests to pass
- [ ] No `MainLayer` - we shouldn't need it! Make sure weight naming can still be controlled.
- [ ] Explore full `float16` weights
- [ ] Explore passing `DTensor` layouts | 06-20-2023 15:36:37 | 06-20-2023 15:36:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24375). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,374 | closed | Rename test to be more accurate | # What does this PR do?
Tiny fix but this integration test actually tests Finn to English so let's name it accordingly. | 06-20-2023 15:35:33 | 06-20-2023 15:35:33 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24374). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,373 | closed | Add a check in `ImageToTextPipeline._forward` | # What does this PR do?
Inside `ImageToTextPipeline.preprocess`, we have
```python
if self.model.config.model_type == "git" and prompt is None:
model_inputs["input_ids"] = None
```
So we may end up with a list of `None` values (for the Git model), and `_forward` fails.
This PR adds a check and changes the above case to a single `None` value to avoid the failure. | 06-20-2023 15:17:53 | 06-20-2023 15:17:53 | _The documentation is not available anymore as the PR was closed or merged._ |
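Roughly, the added guard amounts to something like this (a sketch of the idea, not the exact diff in the PR):
```python
# In ImageToTextPipeline._forward: collapse a batch of all-None prompts into a single None.
input_ids = model_inputs.get("input_ids")
if isinstance(input_ids, list) and all(x is None for x in input_ids):
    model_inputs["input_ids"] = None
```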
transformers | 24,372 | closed | `resize_token_embeddings` breaks `gpt2` generation | ### System Info
```
- `transformers` version: 4.30.1
- Platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.2.5
- Python version: 3.8.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
Maybe @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM, AutoTokenizer
pretrained_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
device = "cpu"
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
greedy_output = pretrained_model.generate(
input_ids=input_ids.to(device),
max_new_tokens=50,
temperature=0.7,
pad_token_id=tokenizer.pad_token_id,
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0]))
###
pretrained_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
device = "cpu"
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
greedy_output = pretrained_model.generate(
input_ids=input_ids.to(device),
max_new_tokens=50,
temperature=0.7,
pad_token_id=tokenizer.pad_token_id,
)
print("Output2:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0]))
###
pretrained_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
device = "cpu"
pretrained_model.resize_token_embeddings(len(tokenizer))
# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
greedy_output = pretrained_model.generate(
input_ids=input_ids.to(device),
max_new_tokens=50,
temperature=0.7,
)
print("Output3:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0]))
```
```
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Output:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog.
I'm not sure if I'll ever be able to walk with my
Output2:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog.
I'm not sure if I'll ever be able to walk with my
Output3:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog[PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD][PAD]
```
### Expected behavior
According to https://stackoverflow.com/a/69194717/6611317,
```
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
pretrained_model.resize_token_embeddings(len(tokenizer))
```
Should work as expected and not completely break generation. | 06-20-2023 14:09:43 | 06-20-2023 14:09:43 | You are adding a randomly initialized embedding row to the model; without fine-tuning it, there is no reason for generation to keep working. |
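A common workaround (my own suggestion, not something stated in this thread) is to give the new row a sensible starting point instead of leaving it randomly initialized, for example the mean of the existing embeddings; fine-tuning is still the proper fix:
```python
import torch

pretrained_model.resize_token_embeddings(len(tokenizer))
with torch.no_grad():
    emb = pretrained_model.get_input_embeddings().weight
    emb[-1] = emb[:-1].mean(dim=0)  # start the new [PAD] row at the mean embedding
# gpt2 ties its input and output embeddings, so the LM head sees the same initialization
```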
transformers | 24,371 | closed | Future compatibility with LangChain | Is there a specific timeline for future LLM agents compatibility with LangChain?
What other current compatibility solutions with LangChain are there?
| 06-20-2023 13:43:28 | 06-20-2023 13:43:28 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,369 | closed | Additional option for text generation when setting num_beam_groups | ### Feature request
I propose to add `num_return_sequences_per_groups` as an argument to the `generate` function of `transformers.GenerationMixin`. Setting this will output `num_return_sequences_per_groups` sentences per group. Use cases are as follows:
code:
```python
outputs = model.generate(
tokenizer.encode(text, return_tensors="pt", max_length=512),
num_beam_groups=3,
num_beams=12,
diversity_penalty=1.0,
num_return_sequences_per_groups=2,
)
for output in outputs:
print(tokenizer.decode(output, skip_special_tokens=True))
```
output:
```
A flock of birds flying over the ocean.
A flock of birds flying over a beach.
Birds flying over the water in the sun.
Birds flying the water near a mountain.
Several birds are flying over a body of water.
Several birds flying over a body of water.
```
The example referred to https://arxiv.org/abs/1610.02424 .
### Motivation
As shown below, the output may have little difference when `num_beam_groups` and `num_beams` have the same values.
code:
```python
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration."
outputs = model.generate(
tokenizer.encode(text, return_tensors="pt", max_length=512),
num_beam_groups=2,
num_beams=2,
diversity_penalty=1000000.0,
num_return_sequences=2,
)
for output in outputs:
print(tokenizer.decode(output, skip_special_tokens=True))
```
output:
```
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences.
A number Of research projects have investigated the role of the brain's encoder and decoder in the control of the encoded sequences..
```
This problem occurs because the beam search implementation internally searches over `num_beams * 2` candidates. Such output is undesirable.
This example is only for clarity. Even in the general case, the current implementation does not guarantee diversity because the output is ordered by score. Therefore, I would like to enable more diverse outputs with this option.
### Your contribution
If it looks good, I will implement it. | 06-20-2023 09:58:49 | 06-20-2023 09:58:49 | cc @gante <|||||>Hey @hukuda222
`generate` is already a configuration behemoth, and we would be adding one more flag. By default, we are reluctant to add more flags unless the benefits are large OR there is demand for the option. As such, I'm going to propose the same as I do in similar issues ([e.g.](https://github.com/huggingface/transformers/issues/22168#issuecomment-1477998997))!
If this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, I'll greenlight the suggestion, and let's add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me)
@hukuda222 does that sound good to you?<|||||>@gante Sounds good. Thank you for your quick response.<|||||>@gante
Sorry for the delay. I thought the output I presented earlier might be a bug in the current code. Diverse beam search is a method that generates `num_beams//num_beam_groups` sentences for each group independently. However, the current code uses one BeamHypotheses shared by all groups. Therefore, group A will generate two sentences before group B outputs a sentence.
https://github.com/huggingface/transformers/blob/ad78d9597b224443e9fe65a94acc8c0bc48cd039/src/transformers/generation/beam_search.py#L178-L186
This is a problem that can be solved by creating as many BeamHypotheses as there are groups. I would like to address this inconvenience in the form of a modification to the diverse beam search implementation, rather than adding an option. If you don't mind, could you give me your opinion?<|||||>Hey @hukuda222 👋
I've had a deeper look at group beam search, and it does not seem to be working properly. For instance, the snippet below produces the same sequence on all beams, and that should not happen (each beam should generate different continuations).
I don't have the bandwidth to fix it immediately, so if you're able to contribute we'd deeply appreciate it 🙌 Since it is broken (and thus no backward compatibility needs to be respected), feel free to also change the behavior of `num_return_sequences` in group beam search to prioritize returning from different beams, which makes more sense.
___________________________________
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer(["The full name of Donald is Donald"], return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, num_beam_groups=4, num_return_sequences=4)
print("\n".join(tokenizer.batch_decode(outputs, skip_special_tokens=True)))
# Outputs the following sequence 4 times. Each beam should return different sequences.
# The full name of Donald is Donald J. Trump Jr. The full name of Donald is Donald J
```<|||||>@gante
Thanks for doing the research. I will send a PR as soon as I can fix it. |
transformers | 24,368 | closed | [Tokenizer doc] Clarification about `add_prefix_space` | # What does this PR do?
Addresses #17391; updates the documentation that suggested using `add_prefix_space` when calling the tokenizer. | 06-20-2023 09:05:06 | 06-20-2023 09:05:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This blog post on constrained decoding also uses `add_prefix_space` in `__call__`: https://huggingface.co/blog/constrained-beam-search
<|||||>The blog is not hosted on `transformers` but on `blog`, will open a PR for that too later on, thanks for the catch 😉 |
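A minimal sketch of the recommended pattern (my own example, not taken from the PR):
```python
from transformers import AutoTokenizer

# Pass add_prefix_space when building the tokenizer, not on every __call__:
tok = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
print(tok.tokenize("Hello world"))  # the first token now carries the leading-space marker (Ġ)
```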
transformers | 24,367 | closed | [Whisper Docs] Nits | # What does this PR do?
Addresses #24342, where it is mentioned that the documentation is counter-intuitive. Indeed, after a lot of changes, the default value for the `bos_token` that we use is different, so no official models (hosted on the Hub) use `bos_token = "<startoftranscript>"`. | 06-20-2023 08:41:59 | 06-20-2023 08:41:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,366 | closed | Format of the documentation (documentation format and readability issues) | ### System Info
The Hugging Face documentation format really looks messy and does not read smoothly; I strongly suggest the team improve it:
For example: https://huggingface.co/docs/transformers/installation
1. There are almost no separators between paragraphs, there are no headings, and the font size, line spacing, and so on are also poor;
2. Hyperlinks are formatted almost the same as ordinary text, so they are easy to click by mistake;
3. The code blocks are not very readable either; they look as if they were written in an interactive IDE, which makes them inconvenient for beginners to copy.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I hope this can be improved.
### Expected behavior
I hope this can be improved. | 06-20-2023 07:38:24 | 06-20-2023 07:38:24 | Sorry, it was a browser compatibility issue on my end.
My bad!!!! |
transformers | 24,365 | open | ValueError: Unexpected result of `train_function` (Empty logs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`. | ### System Info
I am trying to train the CLIP model with my custom dataset, but I am facing the above issue.
My current version of TensorFlow is 2.12.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

### Expected behavior
Please provide guidance on how I can use the code to train on a custom dataset. | 06-20-2023 07:31:37 | 06-20-2023 07:31:37 | Hi @ErHimani, thanks for raising an issue.
Could you follow the issue template and provide:
* The running environment: run `transformers-cli env` in the terminal and copy-paste the output
* A minimal code snippet so that we can reproduce the error
Without these, we're unable to help.<|||||>Hi @amyeroberts, I am using the following command:
`python examples\tensorflow\contrastive-image-text\run_clip.py --output_dir .\clip-roberta-finetuned --vision_model_name_or_path openai/clip-vit-base-patch32 --text_model_name_or_path roberta-base --train_file descriptions.json --image_column image_path --caption_column text --remove_unused_columns=False --do_train --per_device_train_batch_size="64" --per_device_eval_batch_size="64" --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1`<|||||>@ErHimani Thanks for providing this. In order for us to be able to reproduce, we'll need `descriptions.json`, or an example sample from the dataset to be able to reproduce. We also require the running environment information, as noted above. <|||||>@amyeroberts Please find Link of the custom dataset
description.json:[https://drive.google.com/file/d/14FGJwXRsxns679-ILGlLcBRpqe8UUmGu/view?usp=sharing](url)
Images:[https://drive.google.com/drive/folders/1yr8zapcCPdxlN-5ZSczOIiIeIyS3K_Vt?usp=sharing](url) |
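While a full reproduction is pending, the quickest way to surface the real failure behind this "Empty logs" message is usually the error's own suggestion, i.e. running the training step eagerly (a small sketch, to be placed before `model.fit`):
```python
import tensorflow as tf

# Surfaces the underlying Python exception inside train_step instead of the generic "Empty logs" error.
tf.config.run_functions_eagerly(True)
```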
transformers | 24,364 | closed | Update tiny models for pipeline testing. | # What does this PR do?
Update tiny models for pipeline testing. | 06-19-2023 18:31:22 | 06-19-2023 18:31:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,363 | closed | [modelcard] add audio classification to task list | # What does this PR do?
Adds audio classification to the modelcard tasks lists, thus enabling model cards to be created for this task (required for https://github.com/huggingface/audio-transformers-course/pull/46) | 06-19-2023 17:40:18 | 06-19-2023 17:40:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,362 | closed | For loop support in python interpreter of Transformer agent | ### Feature request
Hello, I would like to add for loop support in https://github.com/huggingface/transformers/blame/c2393cad085e3875ee2206d917d46d15e50602a3/src/transformers/tools/python_interpreter.py
Any idea about how to implement this?
### Motivation
For loops are quite common in generated code, and they usually will not cause an infinite loop.
### Your contribution
An additional 'elif' for 'ast.For' before the 'else' will be nice: https://github.com/huggingface/transformers/blame/c2393cad085e3875ee2206d917d46d15e50602a3/src/transformers/tools/python_interpreter.py#L132 | 06-19-2023 16:58:25 | 06-19-2023 16:58:25 | cc @sgugger <|||||>Would you like to open a PR for this?<|||||>I would like to, but I don't know how to implement it yet. Do you have any suggestions?<|||||>Should be added with the PR mentioned above :-) |
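For readers of this thread, a rough sketch of what such a branch could look like; the recursive helper is assumed to be `evaluate_ast(node, state, tools)` based on the file's style, and this is not the implementation that was eventually merged:
```python
# Inside the interpreter's if/elif chain, before the final `else`:
elif isinstance(expression, ast.For):
    # Evaluate the iterable once, bind the loop variable, and run the body for each value.
    result = None
    for value in evaluate_ast(expression.iter, state, tools):
        state[expression.target.id] = value  # plain `for x in ...` targets only; tuples would need unpacking
        for node in expression.body:
            result = evaluate_ast(node, state, tools)
    return result
```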
transformers | 24,361 | closed | Accelerate preprocessing crashing due to non-tensor input | ### System Info
I believe a recent update has caused Accelerate to try and concatenate all tensor data in the input dictionary.
This is a problem, because my inputs contain non-tensor data, due to the fact that such data is intermittent and not always provided in the batch.
Rather than skipping over this information, it instead tries to torch.cat the data, which results in crashes.
```
Traceback (most recent call last):
File "/home/lily/Desktop/Project/finetune_dynamic.py", line 304, in <module>
fire.Fire(train)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/lily/Desktop/Emme (copy)/finetune_dynamic.py", line 295, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/transformers/trainer.py", line 1779, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/data_loader.py", line 553, in __iter__
next_batch, next_batch_info = self._fetch_batches(main_iterator)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/data_loader.py", line 521, in _fetch_batches
batch = concatenate(batches, dim=0)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 413, in concatenate
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 413, in <dictcomp>
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in concatenate
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 84, in honor_type
return type(obj)(generator)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in <genexpr>
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in concatenate
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 84, in honor_type
return type(obj)(generator)
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 411, in <genexpr>
return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))
File "/home/lily/anaconda3/envs/emme/lib/python3.10/site-packages/accelerate/utils/operations.py", line 415, in concatenate
raise TypeError(f"Can only concatenate tensors but got {type(data[0])}")
TypeError: Can only concatenate tensors but got <class 'str'>
```
I'd like to know if there's a way to turn off this feature in Accelerate. I can handle batching my own data myself (and have been doing so up until now). It's only now that it has become a problem.
Issue raised on accelerate as well:
https://github.com/huggingface/accelerate/issues/1611
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Reproduce:
1: Create model that accepts non-tensor inputs as a nested list
2: Feed input to model via huggingface trainer
3: Observe crash
### Expected behavior
Accelerate should ignore the data that it can't process, and pass it to the model as normal. | 06-19-2023 15:42:06 | 06-19-2023 15:42:06 | Hi @ElleLeonne,
So that we can best help, could you share a minimal code snippet so that we can reproduce the error and information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output?<|||||>Out of town today, but I'll have something shortly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,360 | closed | TensorFlow CI fixes | I made a lot of changes to the TF tests, and this exposed a few issues. This PR fixes all the exposed issues, so hopefully after this the only remaining CI issues should be related to generation or the `SharedEmbeddings` refactor. | 06-19-2023 15:26:17 | 06-19-2023 15:26:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,359 | open | ValueError: Found `optimizer` configured in the DeepSpeed config, but no `scheduler`. Please configure a scheduler in the DeepSpeed config. | ValueError: Found `optimizer` configured in the DeepSpeed config, but no `scheduler`. Please configure a scheduler in the DeepSpeed config.
Am using `--warmup_ratio 0.03 --lr_scheduler_type "cosine" \`
Here, and I didn't found a properly shceduler in deepspeed ssame as cosine, what should to set? | 06-19-2023 15:17:55 | 06-19-2023 15:17:55 | Hi @luohao123,
So that we can help you, could you follow the issue template and provide a minimal code snippet to reproduce the error and the running environment: run `transformers-cli env` in the terminal and copy-paste the output?
cc @pacman100 <|||||>**TLDR;** if you're in a rush, downgrading to version `<4.30` (4.29.2) worked for me
**I've had the same issue 👇**
I believe the previous behaviour allowed you to not include any DeepSpeed configuration `scheduler` key and the one specified in your `TrainerArguments` would be used. Now it seems you have to include the corresponding scheduler between DeepSpeed and Hugging Face `Trainer`.
i.e.
| DeepSpeed scheduler | Trainer scheduler | Resulting scheduler |
| ----------- | ----------- | ----------- |
| WarmupLR | constant_with_warmup | constant_with_warmup |
| WarmupDecayLR | linear | linear |
whereas before you could just ignore the first column and leave it blank to get the same result
| DeepSpeed scheduler | Trainer scheduler | Resulting scheduler |
| ----------- | ----------- | ----------- |
| | constant_with_warmup | constant_with_warmup |
| | linear | linear |
personally, I found it handier before where I only had to specify the scheduler in one place rather than tracking this over a DeepSpeed config and a Trainer config which are generally separate objects.<|||||>Hello, the supported combinations now are:
1. Trainer optimizer + Trainer scheduler - Don't specify these in the DS config and use trainer args
2. DeepSpeed optimizer + DeeepSpeed Scheduler - Specify both in DeepSpeed config and no need to use/specify them via Trainer args (@jackapbutler, please note this as you happen to be doing both)
3. Trainer optimizer + DeepSpeed Scheduler - Don't specify optimizer in DS config; only set the scheduler there. Don't specify the scheduler via Trainer args.
@luohao123, the case you want is DeepSpeed Optimizer + Trainer Scheduler which isn't supported now. The suggested approach in your case would be to use `Trainer optimizer + Trainer scheduler` (Settting 1. above).
Hope this helps.
<|||||>@pacman100 I actually got some errors when specifci via trainingargs with cosine scheduler while not specific in deepspeed config:
```
│ ❱ 485 │ │ self.initialize_optimizer_states() │
│ 486 │ │ see_memory_usage("After initializing optimizer states", force=True) │
│ 487 │ │ │
│ 488 │ │ if dist.get_rank() == 0: │
│ │
│ /root/anaconda3/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:620 in │
│ initialize_optimizer_states │
│ │
│ 617 │ │ if isinstance(self.optimizer, torch.optim.Adagrad): │
│ 618 │ │ │ self.optimizer = torch.optim.Adagrad(self.single_partition_of_fp32_groups, * │
│ 619 │ │ else: │
│ ❱ 620 │ │ │ self.optimizer.step() │
│ 621 │ │ │
│ 622 │ │ if not self.cpu_offload: │
│ 623 │ │ │ for group in self.single_partition_of_fp32_groups: │
│ │
│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:69 in wrapper │
│ │
│ 66 │ │ │ │ instance = instance_ref() │
│ 67 │ │ │ │ instance._step_count += 1 │
│ 68 │ │ │ │ wrapped = func.__get__(instance, cls) │
│ ❱ 69 │ │ │ │ return wrapped(*args, **kwargs) │
│ 70 │ │ │ │
│ 71 │ │ │ # Note that the returned function here is no longer a bound method, │
│ 72 │ │ │ # so attributes like `__func__` and `__self__` no longer exist. │
│ │
│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/optimizer.py:280 in wrapper │
│ │
│ 277 │ │ │ │ │ │ │ raise RuntimeError(f"{func} must return None or a tuple of ( │
│ 278 │ │ │ │ │ │ │ │ │ │ │ f"but got {result}.") │
│ 279 │ │ │ │ │
│ ❱ 280 │ │ │ │ out = func(*args, **kwargs) │
│ 281 │ │ │ │ self._optimizer_step_code() │
│ 282 │ │ │ │ │
│ 283 │ │ │ │ # call optimizer step post hooks │
│ │
│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/optimizer.py:33 in _use_grad │
│ │
│ 30 │ │ prev_grad = torch.is_grad_enabled() │
│ 31 │ │ try: │
│ 32 │ │ │ torch.set_grad_enabled(self.defaults['differentiable']) │
│ ❱ 33 │ │ │ ret = func(self, *args, **kwargs) │
│ 34 │ │ finally: │
│ 35 │ │ │ torch.set_grad_enabled(prev_grad) │
│ 36 │ │ return ret │
│ │
│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/adamw.py:171 in step │
│ │
│ 168 │ │ │ │ state_steps, │
│ 169 │ │ │ ) │
│ 170 │ │ │ │
│ ❱ 171 │ │ │ adamw( │
│ 172 │ │ │ │ params_with_grad, │
│ 173 │ │ │ │ grads, │
│ 174 │ │ │ │ exp_avgs, │
│ │
│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/adamw.py:321 in adamw │
│ │
│ 318 │ else: │
│ 319 │ │ func = _single_tensor_adamw │
│ 320 │ │
│ ❱ 321 │ func( │
│ 322 │ │ params, │
│ 323 │ │ grads, │
│ 324 │ │ exp_avgs, │
│ │
│ /root/anaconda3/lib/python3.10/site-packages/torch/optim/adamw.py:564 in _multi_tensor_adamw │
│ │
│ 561 │ │ │ │ torch._foreach_div_(max_exp_avg_sq_sqrt, bias_correction2_sqrt) │
│ 562 │ │ │ │ denom = torch._foreach_add(max_exp_avg_sq_sqrt, eps) │
│ 563 │ │ │ else: │
│ ❱ 564 │ │ │ │ exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs) │
│ 565 │ │ │ │ torch._foreach_div_(exp_avg_sq_sqrt, bias_correction2_sqrt) │
│ 566 │ │ │ │ denom = torch._foreach_add(exp_avg_sq_sqrt, eps) │
│ 567 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
This is not right (on an A100); can you take a look?
this is my ds config:
```
{
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": "auto",
"opt_level": "O2",
"initial_scale_power": 16,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1,
"loss_scale": 0
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": false,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
},
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto"
}
```
this is my training args:
```
CUDA_VISIBLE_DEVICES=2,3 deepspeed --master_port 61000 train_full.py \
--data_path ./data/train_data.json \
--model_name_or_path ./checkpoints/baichuan-7B/ \
--per_device_train_batch_size 4 --output_dir out/bc_full \
--bf16 --num_train_epochs 3 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 16 \
--learning_rate 2e-5 --weight_decay 0. \
--warmup_ratio 0.03 --lr_scheduler_type "cosine" \
--model_max_length 1024 \
--logging_steps 50 \
--lazy_preprocess True \
--deepspeed configs/ds_s2_fschat.json
```
what did wrong???<|||||>Hello @luohao123, please provide minimal reproducible example for further deep dive. Things work fine for me with official example:
ds config:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Command:
```
cd transformers
export TASK_NAME=mrpc
CUDA_VISIBLE_DEVICES=2,3 deepspeed ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ds_config_zero2.json --lr_scheduler_type "cosine"
```
output logs:
```
[2023-06-22 09:47:48,765] [INFO] [config.py:964:print] zero_enabled ................. True
[2023-06-22 09:47:48,765] [INFO] [config.py:964:print] zero_force_ds_cpu_optimizer .. True
[2023-06-22 09:47:48,765] [INFO] [config.py:964:print] zero_optimization_stage ...... 2
[2023-06-22 09:47:48,765] [INFO] [config.py:950:print_user_config] json = {
"fp16": {
"enabled": false,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": false
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2.000000e+08,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2.000000e+08,
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": 1.0,
"steps_per_print": inf,
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 16,
"wall_clock_breakdown": false,
"zero_allow_untested_optimizer": true
}
Using /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.00022840499877929688 seconds
[INFO|trainer.py:1680] 2023-06-22 09:47:48,766 >> ***** Running training *****
[INFO|trainer.py:1681] 2023-06-22 09:47:48,766 >> Num examples = 3,668
[INFO|trainer.py:1682] 2023-06-22 09:47:48,766 >> Num Epochs = 3
[INFO|trainer.py:1683] 2023-06-22 09:47:48,766 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1684] 2023-06-22 09:47:48,766 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1685] 2023-06-22 09:47:48,766 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1686] 2023-06-22 09:47:48,766 >> Total optimization steps = 345
[INFO|trainer.py:1687] 2023-06-22 09:47:48,766 >> Number of trainable parameters = 108,311,810
[INFO|integrations.py:727] 2023-06-22 09:47:48,767 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.15.4
wandb: Run data is saved locally in /home/sourab/transformers/wandb/run-20230622_094749-h2mion2e
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run rose-vortex-320
wandb: ⭐️ View project at https://wandb.ai/smangrul/huggingface
wandb: 🚀 View run at https://wandb.ai/smangrul/huggingface/runs/h2mion2e
0%| | 0/345 [00:00<?, ?it/s]/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:1829: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.)
overflow_gpu = get_accelerator().ByteTensor([overflow])
/home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage_1_and_2.py:1829: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.)
overflow_gpu = get_accelerator().ByteTensor([overflow])
100%|████████████████████████████████████████████████████████████████████████████████████████| 345/345 [00:57<00:00, 6.13it/s][INFO|trainer.py:1924] 2023-06-22 09:48:49,820 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 61.0539, 'train_samples_per_second': 180.234, 'train_steps_per_second': 5.651, 'train_loss': 0.4465487715126812, 'epoch': 3.0}
100%|████████████████████████████████████████████████████████████████████████████████████████| 345/345 [00:57<00:00, 6.03it/s]
[INFO|trainer.py:2832] 2023-06-22 09:48:49,823 >> Saving model checkpoint to /tmp/mrpc/
[INFO|configuration_utils.py:458] 2023-06-22 09:48:49,824 >> Configuration saved in /tmp/mrpc/config.json
[INFO|modeling_utils.py:1845] 2023-06-22 09:48:50,616 >> Model weights saved in /tmp/mrpc/pytorch_model.bin
[INFO|tokenization_utils_base.py:2215] 2023-06-22 09:48:50,617 >> tokenizer config file saved in /tmp/mrpc/tokenizer_config.json
[INFO|tokenization_utils_base.py:2222] 2023-06-22 09:48:50,617 >> Special tokens file saved in /tmp/mrpc/special_tokens_map.json
***** train metrics *****
epoch = 3.0
train_loss = 0.4465
train_runtime = 0:01:01.05
train_samples = 3668
train_samples_per_second = 180.234
train_steps_per_second = 5.651
06/22/2023 09:48:50 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:769] 2023-06-22 09:48:50,645 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:3106] 2023-06-22 09:48:50,646 >> ***** Running Evaluation *****
[INFO|trainer.py:3108] 2023-06-22 09:48:50,646 >> Num examples = 408
[INFO|trainer.py:3111] 2023-06-22 09:48:50,646 >> Batch size = 8
100%|██████████████████████████████████████████████████████████████████████████████████████████| 26/26 [00:00<00:00, 52.94it/s]
***** eval metrics *****
epoch = 3.0
eval_accuracy = 0.8431
eval_combined_score = 0.8664
eval_f1 = 0.8897
eval_loss = 0.3868
eval_runtime = 0:00:00.51
eval_samples = 408
eval_samples_per_second = 797.59
eval_steps_per_second = 50.827
wandb: Waiting for W&B process to finish... (success).
[2023-06-22 09:48:52,926] [INFO] [launch.py:347:main] Process 3002010 exits successfully.
wandb:
wandb: Run history:
wandb: eval/accuracy ▁
wandb: eval/combined_score ▁
wandb: eval/f1 ▁
wandb: eval/loss ▁
wandb: eval/runtime ▁
wandb: eval/samples_per_second ▁
wandb: eval/steps_per_second ▁
wandb: train/epoch ▁▁
wandb: train/global_step ▁▁
wandb: train/total_flos ▁
wandb: train/train_loss ▁
wandb: train/train_runtime ▁
wandb: train/train_samples_per_second ▁
wandb: train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb: eval/accuracy 0.84314
wandb: eval/combined_score 0.8664
wandb: eval/f1 0.88966
wandb: eval/loss 0.38684
wandb: eval/runtime 0.5115
wandb: eval/samples_per_second 797.59
wandb: eval/steps_per_second 50.827
wandb: train/epoch 3.0
wandb: train/global_step 345
wandb: train/total_flos 726186493739008.0
wandb: train/train_loss 0.44655
wandb: train/train_runtime 61.0539
wandb: train/train_samples_per_second 180.234
wandb: train/train_steps_per_second 5.651
wandb:
wandb: 🚀 View run rose-vortex-320 at: https://wandb.ai/smangrul/huggingface/runs/h2mion2e
wandb: Synced 6 W&B file(s), 0 media file(s), 2 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20230622_094749-h2mion2e/logs
[2023-06-22 09:49:01,927] [INFO] [launch.py:347:main] Process 3002009 exits successfully.
```<|||||>@pacman100 thank you, let me try your config and test again; I notice your config is not exactly the same as mine.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hello, the supported combinations now are:
>
> 1. Trainer optimizer + Trainer scheduler - Don't specify these in the DS config and use trainer args
> 2. DeepSpeed optimizer + DeeepSpeed Scheduler - Specify both in DeepSpeed config and no need to use/specify them via Trainer args (@jackapbutler, please note this as you happen to be doing both)
> 3. Trainer optimizer + DeepSpeed Scheduler - Don't specify optimizer in DS config; only set the scheduler there. Don't specify the scheduler via Trainer args.
>
> @luohao123, the case you want is DeepSpeed Optimizer + Trainer Scheduler which isn't supported now. The suggested approach in your case would be to use `Trainer optimizer + Trainer scheduler` (Settting 1. above).
>
> Hope this helps.
Hi, I want to know if I use setting 1, will the optimizer utilize DeepSpeed's cpuAdam? <|||||>> Hi, I want to know if I use setting 1, will the optimizer utilize DeepSpeed's cpuAdam?
Yes, by default `zero_force_ds_cpu_optimizer` is set to True if not explicitly specified in the ds_config. As such, it will leverage DeepSpeed's CPUAdam when offloading, as is strongly recommended by the DeepSpeed team. |
transformers | 24,358 | closed | Fix the order in `GPTNeo`'s docstring | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-19-2023 14:00:53 | 06-19-2023 14:00:53 | Et voilà :)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@qgallouedec To get the setup_and_quality tests passing, you'll need to run `make style` at the top level of the repo and push any changes to this branch. |
transformers | 24,357 | closed | Make `AutoFormer` work with previous torch version | # What does this PR do?
Without `import torch.utils.checkpoint` (which we have in other files, like `Bart`), with torch 1.13, we got an error
(running `RUN_SLOW=1 python3 -m pytest -v tests/models/autoformer/test_modeling_autoformer.py::AutoformerModelTest::test_training_gradient_checkpointing`)
```bash
> layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(encoder_layer),
hidden_states,
attention_mask,
(head_mask[idx] if head_mask is not None else None),
)
E AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
Let's make it work with previous torch version(s) ❤️ . | 06-19-2023 13:38:50 | 06-19-2023 13:38:50 | _The documentation is not available anymore as the PR was closed or merged._ |
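In other words, the gist of the fix is making the submodule import explicit at the top of the modeling file (sketch):
```python
import torch
import torch.utils.checkpoint  # without this, torch 1.13 raises the AttributeError shown above
```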
transformers | 24,356 | closed | Deepspeed OOM When training 7B model on V100 16GB (2) | ### System Info
- Python version 3.8
- transformers - installed from source latest
**Describe the bug**
OOM when training a 7B model on 2x V100 16GB with ZeRO stage 2 and CPU offloading, even though the memory estimation showed a far smaller per-GPU memory requirement.
```
-- memory estimation--
DEVICES ['Tesla V100-PCIE-16GB', 'Tesla V100-PCIE-16GB']
-------------ZERO 2------------
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 6650M total params.
per CPU | per GPU | Options
148.66GB | 12.39GB | offload_optimizer=cpu
74.33GB | 74.33GB | offload_optimizer=none
```
**Screenshots**
nvidia-smi during run
<img width="648" alt="Screenshot 2023-06-19 at 5 30 41 PM" src="https://github.com/microsoft/DeepSpeed/assets/25312635/47a512c6-b509-49c2-b111-8f7e9dac8532">
RAM usage
<img width="723" alt="Screenshot 2023-06-19 at 5 55 08 PM" src="https://github.com/microsoft/DeepSpeed/assets/25312635/e7e97e07-7203-42a7-8b89-468c2de35546">
Can see free RAM available.
**System info (please complete the following information):**
- OS: CentOS Linux
- GPU count and types : V100 16B X 2 single node
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py)
Run deepspeed funtuner/trainer.py
export PYTHONPATH="${PYTHONPATH}:/your-path/Funtuner"
please change the log_dir to your folder [here](https://github.com/explodinggradients/Funtuner/blob/c4e66209d5ee276a7eb8caf582435f1eaafbf18f/funtuner/config/config.yaml#L4) also you might want to set log_wandb=False
`dev-train` branch
### Expected behavior
Run w/o OOM error.
| 06-19-2023 13:30:56 | 06-19-2023 13:30:56 | |
transformers | 24,354 | closed | PEFT Models are not resuming from checkpoint as expected. | ### System Info
transformers : 4.30
### Who can help?
@llohann-speranca @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Please try below code snippet as per example:
```python
import os
from transformers import TrainingArguments
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
dataset = load_dataset("imdb", split="train")
output_dir = "test"
training_args = TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
max_steps=5,
save_steps=1,
save_strategy='steps'
)
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
trainer = SFTTrainer(
"EleutherAI/gpt-neo-125m",
train_dataset=dataset,
args=training_args,
dataset_text_field="text",
peft_config=peft_config
)
trainer.train()
trainer.save_model(os.path.join(output_dir, "checkpoint-1"))
trainer.train()
```
For the above code snippet I have pulled @llohann-speranca's resume from checkpoint repo then replaced the installed transformers repo.
Inital version of trainer.train() is working without any issues.
As mentioned that I have overridden the model by using trainer.save_model(path of saved model).
For resuming from checkpoint i have updated num of epochs much higher than previous one.
while passing as trainer.train(resume from checkpoint=True) then it is showing as can't find a valid checkpoint.
Also while passing as trainer.train(resume from checkpoint = path of saved model)then it is showing as can't find a valid checkpoint.
The same issue persists in the transformers source installed version as well.
### Expected behavior
The model should be resumed from checkpoint. | 06-19-2023 13:02:33 | 06-19-2023 13:02:33 | Hi @techthiyanes
Thank you very much for double checking, here are the snippets that I have ran and they work fine on my end using the branh you have mentioned:
<details><summary>Without `resume_from_checkpoint`</summary>
```python
import os
from transformers import TrainingArguments
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
dataset = load_dataset("imdb", split="train")
output_dir = "test"
training_args = TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
max_steps=5,
save_steps=1,
save_strategy='steps'
)
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
trainer = SFTTrainer(
"EleutherAI/gpt-neo-125m",
train_dataset=dataset,
args=training_args,
dataset_text_field="text",
peft_config=peft_config
)
trainer.train()
trainer.save_model(os.path.join(output_dir, "checkpoint-1"))
trainer.train(resume_from_checkpoint=True)
```
</details>
<details><summary>With `resume_from_checkpoint`</summary>
```python
import os
from transformers import TrainingArguments
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
dataset = load_dataset("imdb", split="train")
output_dir = "test"
training_args = TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
max_steps=5,
save_steps=1,
save_strategy='steps'
)
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
trainer = SFTTrainer(
"EleutherAI/gpt-neo-125m",
train_dataset=dataset,
args=training_args,
dataset_text_field="text",
peft_config=peft_config
)
trainer.train()
trainer.save_model(os.path.join(output_dir, "checkpoint-1"))
trainer.train()
```
</details>
Can you elaborate more on:
> For resuming from checkpoint i have updated num of epochs much higher than previous one.
while passing as trainer.train(resume from checkpoint=True) then it is showing as can't find a valid checkpoint.
Also while passing as trainer.train(resume from checkpoint = path of saved model)then it is showing as can't find a valid checkpoint.
Thanks! <|||||>> ```python
> ```python
> trainer.train(resume_from_checkpoint=True)
> ```
>
>
>
>
>
>
>
>
>
>
>
> ```
So far I'm able to replicate the issue.
Steps I have followed:
Libaries Installed:
! pip install datasets peft evaluate
!pip install git+https://github.com/huggingface/transformers
Clone PEFT resume from chekpoint branch:
!git clone https://github.com/llohann-speranca/transformers.git -b fix-resume-checkpoint-for-peftmodel
Replace this folder where the transformers library installed:
!cp -r /content/transformers /usr/local/lib/python3.10/dist-packages/transformers
Restart the run time.
Then below code snippet:
import os
from transformers import TrainingArguments
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
dataset = load_dataset("imdb", split="train")
output_dir = "test"
training_args = TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
max_steps=5,
save_steps=1,
save_strategy='steps'
)
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
trainer = SFTTrainer(
"EleutherAI/gpt-neo-125m",
train_dataset=dataset,
args=training_args,
dataset_text_field="text",
peft_config=peft_config
)
trainer.train()
trainer.save_model(os.path.join(output_dir, "checkpoint-1"))
trainer.train(resume_from_checkpoint=True)

@younesbelkada @@llohann-speranca
I guess you would have run the snippet via already from modified trainer code that resides internally.
Could you please try running the code that is downloaded from git on specific branch?
Thanks a lot on your effort on validating this.
<|||||>Hi @techthiyanes
Can you try to install `transformers` with the following command ?
```bash
pip install git+https://github.com/llohann-speranca/transformers.git@fix-resume-checkpoint-for-peftmodel
```
The line 1991 of your traceback doesn't match with the line 1991 of the fork: https://github.com/llohann-speranca/transformers/blob/e01a4aa77073b847b9451c92c2df718a67960df1/src/transformers/trainer.py#L1991 so I believe you did not installed correctly transformers from that branch<|||||>> ```shell
> pip install git+https://github.com/llohann-speranca/transformers.git@fix-resume-checkpoint-for-peftmodel
> ```
Thanks a lot on finding and fixing to help this issue.
Now I am able to resume from checkpoint. It's working for classification and seq2seq models as well. |
transformers | 24,353 | closed | Fix ImageGPT doctest | # What does this PR do?
#24317 Resolved the ImageGPT doc test failing issue, as `clusters` in the image processor were not stored as numpy arrays as expected. This was tested by running the code directly, but I didn't run using
` pytest --doctest-modules src/transformers/models/imagegpt/modeling_imagegpt.py ` 🙃
The tests were failing because some code produces an output e.g. model architecture when caling `model.to`, but no "expected" output is provided. We don't want to check these outputs, so this PR adds controls to ignore.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-19-2023 12:55:19 | 06-19-2023 12:55:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Running the doctests (properly this time :) ), the tests pass with the ignore statement on the for loop, and fail without (in the same way as on the CI). |
transformers | 24,352 | closed | Fix device issue in `SwitchTransformers` | # What does this PR do?
Need a tiny fix after #24300.
Currently, we have a failure
```bash
self = <tests.models.switch_transformers.test_modeling_switch_transformers.SwitchTransformersEncoderOnlyModelTest testMethod=test_multi_gpu_data_parallel_forward>
@staticmethod
def forward(ctx, target_device, dim, *inputs):
> assert all(i.device.type != 'cpu' for i in inputs), (
'Gather function not implemented for CPU tensors'
)
E AssertionError: Gather function not implemented for CPU tensors
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/_functions.py:56: AssertionError
``` | 06-19-2023 12:22:34 | 06-19-2023 12:22:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merge now. Don't hesitate to leave comments if any @ArthurZucker . |
transformers | 24,351 | closed | pin `apex` to a speicifc commit (for DeepSpeed CI docker image) | # What does this PR do?
The docker image build for DeepSpeed job in CI fails since ~ one week due to this [apex issue](https://github.com/NVIDIA/apex/issues/1679).
Let's pin to the previous commit until the above mentioned issue is resolved on `apex` side.
Currently, the DeepSpeed job fails as the above failure prevents to use newer images that include some fixes on `accelerate` side. | 06-19-2023 09:51:09 | 06-19-2023 09:51:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,350 | closed | byebye Hub connection timeout | # What does this PR do?
No more timeout for connection to Hub in CI, and everyone is happy with ✅ | 06-19-2023 09:35:47 | 06-19-2023 09:35:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,349 | closed | [GPTNeoX] Nit in config | # What does this PR do?
Fixes #23081: when the number of heads is not a divisor of the hidden size, the attention will not work. This is most probably from the design of GPTNeoX's attention. | 06-19-2023 09:27:45 | 06-19-2023 09:27:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,348 | closed | Add mul choice train script | - Modify train script all done tasks
- Add common libraries for environments in env.yaml | 06-19-2023 09:12:22 | 06-19-2023 09:12:22 | |
transformers | 24,346 | closed | Clean up disk sapce during docker image build for `transformers-pytorch-gpu` | # What does this PR do?
PyTorch pipeline CI job start to fail due to
```bash
ImportError: accelerate>=0.20.3 is required for a normal functioning of this module, but found accelerate==0.20.2.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
```
**The root cause is: docker image for this job failed to build due to disk space issue**
```bash
RROR: Could not install packages due to an OSError: [Errno 28] No space left on device
```
As usual, let's us save Space! | 06-19-2023 07:26:27 | 06-19-2023 07:26:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,345 | closed | Trainer reports batch size different from argument on multiple GPUs with DP | ### System Info
- `transformers` version: 4.30.1
- Platform: Linux-4.18.0-240.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Not sure why "PyTorch version (GPU?)" is False. I think this is because GPU not connected when I run this on report time. On actual training, GPU connected.
I'm pretty sure my pytorch environment is with GPU support, like I can use same conda environment to train normal, single GPU training exploiting GPU resource.
```
$ conda list | grep pytorch
pytorch 1.12.1 py3.10_cuda11.3_cudnn8.3.2_0 pytorch
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've observed `Instantaneous batch size per device` in trainer log reported as `per_device_train_batch_size` x GPU count, reproducible in multiple cases.
I can't give full reproduction detail, but pretty sure that scenario below can give idea of the situation.
For example, I tried to train with 2 GPUs in DP sense(DP as described in [:link:](https://github.com/huggingface/transformers/blob/v4.30.1/docs/source/en/perf_train_gpu_many.mdx#data-parallelism)), with following TrainingArgument:
```py
TrainingArgument(
auto_find_batch_size=False,
per_device_train_batch_size=1,
...
)
```
Then training log looks like this. Note on `Instantaneous batch size per device` value. I expected 1 from `per_device_train_batch_size`
```
***** Running training *****
Num examples = ...
Num Epochs = ...
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = ...
Gradient Accumulation steps = ...
Total optimization steps = ...
Number of trainable parameters = ...
...
```
(I've experienced some other logging bug, like `Total train batch size` especially when with `auto_find_batch_size=True` but let's only focus on batch size mismatch in this issue)
I could check `Instantaneous batch size per device` reported as `per_device_train_batch_size` x GPU count happens again in other cases, like
- 4 GPUs / `per_device_train_batch_size=128` -> `Instantaneous batch size per device = 512`
This maybe
- correct actual behavior but logging is not correct, or
- actual bug, or
- I may have misunderstanding about DP, in this case please blame me :smile:
### Expected behavior
I expected
- `Instantaneous batch size per device` reported as `per_device_train_batch_size`
not
- `Instantaneous batch size per device` reported as `per_device_train_batch_size` x GPU count | 06-19-2023 04:06:05 | 06-19-2023 04:06:05 | How are you launching your training script? If it's just with python (no distributed), the `Trainer` will use `DataParallel` which requires your batch size to be mulitiplied by the number of GPUs to work properly. I'm guessing that's why you see the "instanteneous batch size" at 4x what you put.
This is the only case it will happen (if you launch in distributed mode, the batch size per device will show up correctly) and is a good mean to track whether you are using Distributed training properly (you shouldn't use DataParallel as per PyTorch documentation) so you should launch your script with `torchrun` or `accelerate launch`.<|||||>I just begin to try training with multiple GPUs :smile: And everybody gives warning on using DP, and recommends to use DDP over DP. Okay I'll try.
But that is out of this issue topic. So let's not talk about it anymore here.
---
> How are you launching your training script? If it's just with python (no distributed), the `Trainer` will use `DataParallel`
Yes this is the case I meant. This issue is about DP not DDP.
I think in this communication, it is extremely important to use same terms for same concepts, especially about several 'batch' concepts.
Let me use term
- `batch size per update`: count of input that used for one model parameter update
- `device`: in this case, let's fix this to GPU
- And I think that is what term 'device' mean in training arg `per_device_train_batch_size` and log `Instanteneous batch size per device`.
https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/training_args.py#L193-L194
- `batch size per device`: count of input source for each device(i.e. GPU) for one model parameter update iteration
- Depending on documentations and communications, ambiguous terms used, like "mini-batch"([:link:](https://github.com/huggingface/transformers/blob/v4.30.2/docs/source/en/perf_train_gpu_many.mdx#L89) [:link:](https://github.com/huggingface/transformers/blob/v4.30.2/docs/source/en/perf_train_gpu_many.mdx#L95)) or "sub mini-batch" [:link:](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/). So let's fix to this term for this issue communication.
- I expect this is the same concept with training arg `per_device_train_batch_size` and log `Instanteneous batch size per device`
> `DataParallel` which requires your batch size to be mulitiplied by the number of GPUs to work properly. I'm guessing that's why you see the "instanteneous batch size" at 4x what you put.
In your comment, 'batch size' seems to mean `batch size per update`. And yes, that is true, it should be 'GPU count' x `batch size per device`.
But the log `Instantaneous batch size per device` means `batch size per device`, not `batch size per update`. That is what I'm pointing out as a bug, which can lead user to misunderstanding.
<|||||>(I will only use DDP, so this issue is not anymore important for me. But if I'm someone who cares about the project, like maintainer, I would leave this open before the bug fixed. Any maintainers can close this issue if they want so :smile:)<|||||>I made the PR linked above to clarify the logging a bit more. Let me know if it's better! |
transformers | 24,344 | closed | docs: add BentoML to awesome-transformers | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Kindly ask to add BentoML to the list of awesome projects that has transformers support
cc @parano
Signed-off-by: Aaron <[email protected]>
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-19-2023 01:13:03 | 06-19-2023 01:13:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @LysandreJik <|||||>I have updated the docs to the bottom of the page.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24344). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,343 | closed | Enable non-causal mask (to enable MLM) for VisionEncoderDecoder models | ### Feature request
Hello! The current (amazing!) VisionEncoderDecoder library supports text generation via a standard causal LM. Some recent work (linked [here](https://arxiv.org/abs/2306.07915)) has shown promise in having the text decoder be a MLM instead of a causal LM. I believe this is doable with the current VisionEncoderDecoder library by passing in [MASK] tokens for the decoder_input_ids and passing in the labels as usual, but this would still result in a causal mask. The code comment is as follows which makes me think this:
```
decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
be used by default.
```
Is there a way to turn off causal masking to predict multiple text tokens at once using a VisionEncoderDecoder model?
### Motivation
Masked language modeling on top of a Vision encoder appears to be a promising new approach for image captioning and pre-training of vision models according to [this recent work](https://arxiv.org/abs/2306.07915).
### Your contribution
Thank you! | 06-19-2023 01:01:56 | 06-19-2023 01:01:56 | Hi @metemadi, thanks for opening this issue!
This sounds like an interesting project! I believe there's a few places that would need to be adapted in order to enable this properly, such as not forcing `add_cross_attention` to the decoder config and not shifting tokens (cc @ydshieh). The VisionEncoderDecoder model is not intended to be compatible with all encoder-decoder pairs or use cases. This isn't something we'll add to the library at the moment, but feel free to share a fork branch with an implementation here if you'd like!<|||||>Thank you for the insanely fast reply - HuggingFace is amazing as always! This all makes sense. Thanks again.<|||||>Sorry, I forgot to reply:
There is however `class VisionTextDualEncoderModel`. One checkpoint on the Hub is [clip-italian](https://huggingface.co/clip-italian/clip-italian). If you look the config file, it uses `BertForMaskedLM` and `clip_vision_model`.
It might be helpful, but some slight modification might be necessary if the goal is to do what have been done in the paper you mentioned.
|
transformers | 24,342 | closed | Wrong pre-trained Whisper's BOS token? | ### System Info
- `transformers` version: 4.30.2
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import WhisperTokenizer
WhisperTokenizer.from_pretrained("openai/whisper-tiny").bos_token
>> '<|endoftext|>'
```
### Expected behavior
Dear Gandhi,
From the [documentation](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperTokenizer) and from what I expect from the Whisper tokenizer, `processor.tokenizer.bos_token` should be equal to `"<|startoftranscript|>"` when using one of the official vanilla Whisper model. Currently, it is equal to `"<|endoftext|>"`. Is it an intended behavior? What do you think?
On a different note, there is another weird behavior when encoding/decoding:
```python
tokenizer.encode(["<|startoftranscript|>"])
>> [50258, 50363, 50258, 50257]
processor.tokenizer.decode([50258, 50363, 50258, 50257])
>> '<|startoftranscript|><|notimestamps|><|startoftranscript|><|endoftext|>'
```
while I was expecting the last line to return `'<|startoftranscript|>'` only.
Yours sincerely,
Tony | 06-18-2023 10:43:37 | 06-18-2023 10:43:37 | Hi!
The first issue seems to be a feature of the whisper model. It has `<|endoftext|>` as token text for `bos`, `eos`, `pad` and `unk`. I see there are no dedicated tokens for `unk` and `pad`, so I think this is a feature of the model, and not a bug. If you look at the [original code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py), you can see that there is no dedicated token for `eos`, `bos`, `pad` or `unk`. This seems to indicate that these tokens are simply not used by the model.
The second issue is due to `add_special_tokens` being set to `True` by default. So this is not unexpected behavior.
```python
tokenizer.encode(["<|startoftranscript|>"], add_special_tokens=False)
>>> [50258]
tokenizer.decode([50258])
>>> '<|startoftranscript|>'
```
<|||||>cc @ArthurZucker <|||||>Hey, not entirely sure which part of the documentation you are referring to, but this is expected. The `bos_token` is not used to start a transcript. More details [here](https://huggingface.co/openai/whisper-base) about the starting tokens, and why we don't use this `bos`.<|||||>> Hi!
>
> The first issue seems to be a feature of the whisper model. It has `<|endoftext|>` as token text for `bos`, `eos`, `pad` and `unk`. I see there are no dedicated tokens for `unk` and `pad`, so I think this is a feature of the model, and not a bug. If you look at the [original code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py), you can see that there is no dedicated token for `eos`, `bos`, `pad` or `unk`. This seems to indicate that these tokens are simply not used by the model.
>
> The second issue is due to `add_special_tokens` being set to `True` by default. So this is not unexpected behavior.
>
> ```python
> tokenizer.encode(["<|startoftranscript|>"], add_special_tokens=False)
> >>> [50258]
> tokenizer.decode([50258])
> >>> '<|startoftranscript|>'
> ```
Thanks, makes sense!<|||||>> Hey, not entirely sure which part of the documentation you are referring to, but this is expected. The `bos_token` is not used to start a transcript. More details [here](https://huggingface.co/openai/whisper-base) about the starting tokens, and why we don't use this `bos`.
According to this [part](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperTokenizer.bos_token), we have:
> bos_token (str, optional, defaults to "<|startoftranscript|>") — The beginning of sequence token.
Which in my opinion is a bit confusing.
But I do understand your point and how I should handle the `<|startoftranscript|>` now. Thanks for the help!<|||||>I'll update the documentation to make it less confusing. The token used to store the ` "<|startoftranscript|>"` token is `decoder_start_token_id`. The `bos_token` is pretty much unused, which is why it was set to the same as `eos_token`. |
transformers | 24,341 | closed | Colab Translation notebook link not found | ### System Info
Hello There!
First and foremost, congrats for Transformers Translation [tutorial](https://huggingface.co/docs/transformers/tasks/translation). 👍
It serves as a Spark for building english-to-many translation languages models!
I´m following it along with TF mostly reproducing it in a jupyter Notebook with TF for mac with GPU enabled
At the end of the [Train](https://huggingface.co/docs/transformers/tasks/translation) section , it is showed
_For a more in-depth example of how to finetune a model for translation, take a look at the corresponding PyTorch notebook or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)._
Inside the notebook, at cell [4] , there shows a message
_**You** can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq)._
The link is broken .
## Potential fix.
Maybe it could point to Transformer [performance docs](https://huggingface.co/docs/transformers/performance) if you want to go for a more general overview or some specific part of [run_translation.py](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py) script facilitated by team member [here](https://github.com/huggingface/transformers/issues/24254#issuecomment-1594830054) during #24254 help? Please , don't hesitate to share the link as there could be a benefit in implementing it
Thanks so much for the time dedicated to this
Keep up the amazing work in the Open!
### Who can help?
@Rocket
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Follow tutorial in docs . Go to Notebook at the end of Train Section
2. Go to Tensorflow Notebook
3. Click link in cell [4] . It seems to go to /seq2seq examples
### Expected behavior
The link should point at a fine-tune script version of the notebook, or at least to docs | 06-18-2023 10:19:50 | 06-18-2023 10:19:50 | cc @Rocketknight1 <|||||>I opened a PR to the notebooks repo here to fix this: https://github.com/huggingface/notebooks/pull/398
Thanks for warning us about the issue - we appreciate the help to keep our docs up to date! |
transformers | 24,340 | closed | Fix TypeError: Object of type int64 is not JSON serializable | # What does this PR do?
Fixed that "TypeError: Object of type int64 is not JSON serializable"
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. | 06-18-2023 06:29:10 | 06-18-2023 06:29:10 | Hi @xiaoli, thanks for opening this PR.
Could you provide some more information about when the error occurs? Does this happen when running with the values from [the example readme](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification#pytorch-version-no-trainer)?<|||||>Hi @amyeroberts, it happened on executing [./run_no_trainer.sh](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_no_trainer.sh), and everything works smoothly but the last step of that saving results into JSON file.
I got this error:
`TypeError: Object of type int64 is not JSON serializable`, so this commit is trying to fix that.
This was happened on my Ubuntu 22.04 workstation.<|||||>```sh
(transformers) ➜ token-classification git:(main) ./run_no_trainer.sh && echo $(date +%d.%m.%y-%H:%M:%S)
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set to a value of `0`
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
06/20/2023 10:54:40 - INFO - __main__ - Distributed environment: DistributedType.NO
Num processes: 1
Process index: 0
Local process index: 0
Device: mps
Mixed precision type: no
Downloading builder script: 100%|████████████████████████████████████████████| 9.57k/9.57k [00:00<00:00, 8.80MB/s]
Downloading metadata: 100%|██████████████████████████████████████████████████| 3.73k/3.73k [00:00<00:00, 9.41MB/s]
Downloading readme: 100%|████████████████████████████████████████████████████| 12.3k/12.3k [00:00<00:00, 16.9MB/s]
Downloading and preparing dataset conll2003/conll2003 to /Users/xiaoliwang/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98...
Downloading data: 100%|████████████████████████████████████████████████████████| 983k/983k [00:00<00:00, 3.57MB/s]
Generating train split: 0%| | 0/14041 [00:00<?, ? examples/s]06/20/2023 10:54:47 - INFO - datasets_modules.datasets.conll2003.9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98.conll2003 - ⏳ Generating examples from = /Users/xiaoliwang/.cache/huggingface/datasets/downloads/extracted/31a52031f62b2a9281d3b6c2723006e2fa05b33157a4249729067b79f7aa068a/train.txt
Generating validation split: 0%| | 0/3250 [00:00<?, ? examples/s]06/20/2023 10:54:48 - INFO - datasets_modules.datasets.conll2003.9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98.conll2003 - ⏳ Generating examples from = /Users/xiaoliwang/.cache/huggingface/datasets/downloads/extracted/31a52031f62b2a9281d3b6c2723006e2fa05b33157a4249729067b79f7aa068a/valid.txt
Generating test split: 0%| | 0/3453 [00:00<?, ? examples/s]06/20/2023 10:54:48 - INFO - datasets_modules.datasets.conll2003.9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98.conll2003 - ⏳ Generating examples from = /Users/xiaoliwang/.cache/huggingface/datasets/downloads/extracted/31a52031f62b2a9281d3b6c2723006e2fa05b33157a4249729067b79f7aa068a/test.txt
Dataset conll2003 downloaded and prepared to /Users/xiaoliwang/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98. Subsequent calls will reuse this data.
100%|█████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1282.14it/s]
loading configuration file config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/config.json
Model config BertConfig {
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8
},
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.31.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
loading configuration file config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/config.json
Model config BertConfig {
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.31.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
loading file vocab.txt from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/vocab.txt
loading file tokenizer.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at None
loading file tokenizer_config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/tokenizer_config.json
loading configuration file config.json from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/config.json
Model config BertConfig {
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.31.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
Downloading model.safetensors: 100%|███████████████████████████████████████████| 440M/440M [00:22<00:00, 19.8MB/s]
loading weights file model.safetensors from cache at /Users/xiaoliwang/.cache/huggingface/hub/models--bert-base-uncased/snapshots/a265f773a47193eed794233aa2a0f0bb6d3eaa63/model.safetensors
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForTokenClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight']
- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForTokenClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
06/20/2023 10:55:15 - INFO - __main__ - Sample 622 of the training set: {'input_ids': [101, 2522, 6657, 15222, 6962, 1015, 19739, 20486, 2072, 1014, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': [-100, 3, -100, -100, -100, 0, 3, -100, -100, 0, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]}.
06/20/2023 10:55:15 - INFO - __main__ - Sample 12142 of the training set: {'input_ids': [101, 2019, 26354, 4861, 2056, 2008, 9779, 9048, 2015, 1010, 2007, 2095, 1011, 2203, 2727, 7045, 1997, 2149, 1002, 2184, 1012, 1023, 2454, 1998, 10067, 1997, 1002, 2184, 1012, 1019, 2454, 1010, 2052, 2022, 3205, 2006, 1996, 5548, 4518, 3863, 1010, 2021, 2106, 2025, 2360, 2043, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': [-100, 0, 3, 0, 0, 0, 3, -100, -100, 0, 0, 0, -100, -100, 0, 0, 0, 7, -100, 0, -100, -100, 0, 0, 0, 0, 0, 0, -100, -100, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]}.
06/20/2023 10:55:15 - INFO - __main__ - Sample 4570 of the training set: {'input_ids': [101, 2117, 2679, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': [-100, 0, 0, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]}.
Downloading builder script: 100%|████████████████████████████████████████████| 6.34k/6.34k [00:00<00:00, 9.02MB/s]
06/20/2023 10:55:18 - INFO - __main__ - ***** Running training *****
06/20/2023 10:55:18 - INFO - __main__ - Num examples = 14041
06/20/2023 10:55:18 - INFO - __main__ - Num Epochs = 3
06/20/2023 10:55:18 - INFO - __main__ - Instantaneous batch size per device = 8
06/20/2023 10:55:18 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8
06/20/2023 10:55:18 - INFO - __main__ - Gradient Accumulation steps = 1
06/20/2023 10:55:18 - INFO - __main__ - Total optimization steps = 5268
33%|███████████████████████▋ | 1756/5268 [24:08<1:29:30, 1.53s/it]epoch 0: {'LOC_precision': 0.9499192245557351, 'LOC_recall': 0.9602612955906369, 'LOC_f1': 0.9550622631293991, 'LOC_number': 1837, 'MISC_precision': 0.8572972972972973, 'MISC_recall': 0.8600867678958786, 'MISC_f1': 0.858689767190038, 'MISC_number': 922, 'ORG_precision': 0.8539482879105521, 'ORG_recall': 0.9112602535421327, 'ORG_f1': 0.8816738816738816, 'ORG_number': 1341, 'PER_precision': 0.9776810016330975, 'PER_recall': 0.9766177270255574, 'PER_f1': 0.9771490750816105, 'PER_number': 1839, 'overall_precision': 0.9214876033057852, 'overall_recall': 0.9387102205758545, 'overall_f1': 0.9300191842522312, 'overall_accuracy': 0.9868336482091035}
67%|████████████████████████████████████████████████▋ | 3512/5268 [50:27<18:04, 1.62it/s]epoch 1: {'LOC_precision': 0.9637760702524698, 'LOC_recall': 0.9559063690800218, 'LOC_f1': 0.9598250888220825, 'LOC_number': 1837, 'MISC_precision': 0.8524251805985552, 'MISC_recall': 0.89587852494577, 'MISC_f1': 0.8736118455843469, 'MISC_number': 922, 'ORG_precision': 0.892675852066715, 'ORG_recall': 0.9179716629381058, 'ORG_f1': 0.9051470588235293, 'ORG_number': 1341, 'PER_precision': 0.9721925133689839, 'PER_recall': 0.9885807504078303, 'PER_f1': 0.9803181450525748, 'PER_number': 1839, 'overall_precision': 0.9322847682119205, 'overall_recall': 0.9481394174103385, 'overall_f1': 0.940145254194841, 'overall_accuracy': 0.9880217361665661}
100%|███████████████████████████████████████████████████████████████████████| 5268/5268 [1:15:39<00:00, 1.44it/s]epoch 2: {'LOC_precision': 0.9538378958668814, 'LOC_recall': 0.9673380511703865, 'LOC_f1': 0.9605405405405405, 'LOC_number': 1837, 'MISC_precision': 0.8783351120597652, 'MISC_recall': 0.8926247288503254, 'MISC_f1': 0.8854222700376547, 'MISC_number': 922, 'ORG_precision': 0.9074759437453738, 'ORG_recall': 0.9142431021625652, 'ORG_f1': 0.9108469539375927, 'ORG_number': 1341, 'PER_precision': 0.9751619870410367, 'PER_recall': 0.9820554649265906, 'PER_f1': 0.978596586290978, 'PER_number': 1839, 'overall_precision': 0.9381975678827253, 'overall_recall': 0.94830779592524, 'overall_f1': 0.9432255903533747, 'overall_accuracy': 0.9891513935687436}
Configuration saved in /tmp/test-ner/config.json
Model weights saved in /tmp/test-ner/pytorch_model.bin
tokenizer config file saved in /tmp/test-ner/tokenizer_config.json
Special tokens file saved in /tmp/test-ner/special_tokens_map.json
Traceback (most recent call last):
File "/Users/xiaoliwang/repo/research/huggingface/transformers/examples/pytorch/token-classification/run_ner_no_trainer.py", line 784, in <module>
main()
File "/Users/xiaoliwang/repo/research/huggingface/transformers/examples/pytorch/token-classification/run_ner_no_trainer.py", line 780, in main
json.dump(all_results, f)
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py", line 432, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
yield from chunks
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py", line 439, in _iterencode
o = _default(o)
^^^^^^^^^^^
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type int64 is not JSON serializable
100%|███████████████████████████████████████████████████████████████████████| 5268/5268 [1:17:11<00:00, 1.14it/s]
Traceback (most recent call last):
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
args.func(args)
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/site-packages/accelerate/commands/launch.py", line 969, in launch_command
simple_launcher(args)
File "/Users/xiaoliwang/development/miniforge3/envs/transformers/lib/python3.11/site-packages/accelerate/commands/launch.py", line 625, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/Users/xiaoliwang/development/miniforge3/envs/transformers/bin/python3.11', 'run_ner_no_trainer.py', '--model_name_or_path', 'bert-base-uncased', '--dataset_name', 'conll2003', '--output_dir', '/tmp/test-ner', '--pad_to_max_length', '--task_name', 'ner', '--return_entity_level_metrics']' returned non-zero exit status 1.
```
I have reproduced this on my Macbook Air M1 with mps accleration enabled. The full error messages have been posted above here, same as on my Ubuntu workstation.<|||||>@amyeroberts Thanks for your comments!
I think your idea is good, and I understand that your intention is obviously to avoid that `int` convertment of everything.
But according to this page https://docs.python.org/3/library/json.html
```
If specified, default should be a function that gets called for objects that can’t otherwise be serialized.
It should return a JSON encodable version of the object or raise a [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError).
If not specified, [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) is raised.
```
From my understanding, this `default` parameter is just likely giving a new converter function, and in this case that function is a concise `int()`, yes, that's it. I think we don't need to write a new handler function to handling all different object types here, because we only cannot handle/serialize the `np.int64` here.
So in the future if we have something more than that, I could definitely to write a new hanlder to take good care of them, hence for the time being, I think `default=int` is a good enough solution :)<|||||>Hi @amyeroberts, I have changed that a little bit as you mentioned before :)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@xiaoli For the quality CI checks, you'll need to run `make style` at the top level of the repo and push any changes that are applied. Once this is done, CI should all be green and branch good to merge in 👍 <|||||>> @xiaoli For the quality CI checks, you'll need to run `make style` at the top level of the repo and push any changes that are applied. Once this is done, CI should all be green and branch good to merge in 👍
@amyeroberts Thanks for intructions, but I am afraid that so many files being changed after `make style` execution:
```
(transformers) ➜ transformers git:(main) ✗ git status
On branch main
Your branch is ahead of 'origin/main' by 8 commits.
(use "git push" to publish your local commits)
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: examples/research_projects/codeparrot/scripts/human_eval.py
modified: examples/research_projects/fsner/src/fsner/tokenizer_utils.py
modified: examples/research_projects/jax-projects/big_bird/prepare_natural_questions.py
modified: examples/research_projects/luke/run_luke_ner_no_trainer.py
modified: examples/research_projects/lxmert/modeling_frcnn.py
modified: examples/research_projects/visual_bert/modeling_frcnn.py
modified: src/transformers/generation/logits_process.py
modified: src/transformers/generation/tf_logits_process.py
modified: src/transformers/generation/tf_utils.py
modified: src/transformers/keras_callbacks.py
modified: src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py
modified: src/transformers/models/bigbird_pegasus/convert_bigbird_pegasus_tf_to_pytorch.py
modified: src/transformers/models/deta/modeling_deta.py
modified: src/transformers/models/dpr/tokenization_dpr.py
modified: src/transformers/models/dpr/tokenization_dpr_fast.py
modified: src/transformers/models/pegasus/convert_pegasus_tf_to_pytorch.py
modified: src/transformers/models/sam/processing_sam.py
modified: tests/generation/test_framework_agnostic.py
modified: tests/models/codegen/test_modeling_codegen.py
modified: tests/models/data2vec/test_modeling_data2vec_audio.py
modified: tests/models/encodec/test_modeling_encodec.py
modified: tests/models/gpt2/test_modeling_gpt2.py
modified: tests/models/gptj/test_modeling_gptj.py
modified: tests/models/hubert/test_modeling_hubert.py
modified: tests/models/mctct/test_modeling_mctct.py
modified: tests/models/rwkv/test_modeling_rwkv.py
modified: tests/models/sew/test_modeling_sew.py
modified: tests/models/sew_d/test_modeling_sew_d.py
modified: tests/models/speecht5/test_modeling_speecht5.py
modified: tests/models/unispeech/test_modeling_unispeech.py
modified: tests/models/unispeech_sat/test_modeling_unispeech_sat.py
modified: tests/models/wav2vec2/test_modeling_flax_wav2vec2.py
modified: tests/models/wav2vec2/test_modeling_wav2vec2.py
modified: tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py
modified: tests/models/wavlm/test_modeling_wavlm.py
modified: tests/models/whisper/test_modeling_whisper.py
modified: tests/onnx/test_onnx.py
modified: tests/test_modeling_tf_common.py
modified: tests/test_tokenization_common.py
modified: tests/trainer/test_trainer_seq2seq.py
modified: utils/check_copies.py
modified: utils/create_dummy_models.py
modified: utils/tests_fetcher.py
no changes added to commit (use "git add" and/or "git commit -a")
```<|||||>@amyeroberts `make style` changes are committed, thank you 😁 |
transformers | 24,339 | closed | feat: `agent.run(return_agent_types=True)` | ### Feature request
Currently, `agent.run` on main will run materializer from `AgentType` to return its corresponding type.
I think it would be a great addition to just return this `AgentType` directly for external libraries to build on top of!
```python
agent = transformers.HfAgent("inference-api-endpoint")
res: AgentType = agent.run(..., return_agent_types=True)
```
### Motivation
I'm currently playing around with the new agent API, and found that in cases where I don't want to return the decoded outputs immediately, it would be nice to get `AgentType` and manage the materialize myself.
### Your contribution
I can help to create PR, but I know that the Agent API are still very experimental and unstable
cc @LysandreJik on this | 06-18-2023 04:00:30 | 06-18-2023 04:00:30 | Hey @aarnphm, could you provide a code sample with the return you'd like to receive so that I can play with it and see if it makes sense to implement it? Thanks!<|||||>For example, I'm currently building [OpenLLM](https://github.com/bentoml/OpenLLM) and came across a use case where one can define an agent to generate an image and then caption it via a pipeline using [BentoML Runner](https://docs.bentoml.org/en/latest/concepts/runner.html#what-is-runner)
OpenLLM also provides support for HuggingFace Agents, where users can switch between the inference endpoint and hosting their own StarCoder.
Given the following segment to save a `captioning` pipeline
```python
import bentoml
import transformers

# image captioning is exposed as the "image-to-text" pipeline task in transformers
bentoml.transformers.save_model("captioning", transformers.pipeline("image-to-text"))
```
Runner is distributed by nature, and it can be defined in a service.py like so:
```python
import bentoml
import torch
import transformers

captioning_runner = bentoml.transformers.get("captioning").to_runner()
agent = transformers.HfAgent("http://283.23.22.1:3000/hf/agent")  # `openllm start starcoder`
svc = bentoml.Service("agent-with-runners", runners=[captioning_runner])

def preprocess(input_tensor: torch.Tensor) -> torch.Tensor:
    ...

# `ImageAgentType` and `return_agent_types=True` are the additions proposed in this issue
@svc.api(input=bentoml.io.Text(), output=bentoml.io.Text())
async def transcribe_audio_to_french(prompt: str):
    image_output: ImageAgentType = agent.run(prompt, ..., return_agent_types=True)
    # then I do some preprocessing with this tensor
    input_for_pipeline = preprocess(image_output.to_raw())
    return await captioning_runner.async_run(input_for_pipeline)
```
You can run this with `bentoml serve service.py:svc`
This is one use case where `AgentType` can be useful: one can access the tensor directly without having to convert from the `PIL.Image` output (which, if I understand correctly, is what `agent.run` currently returns when it produces an image).
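As a rough sketch of what I mean (the `return_agent_types` flag is only the proposal from this issue and does not exist yet; `AgentImage`/`AgentType` are the wrapper classes `transformers.tools` already uses internally before decoding):
```python
from transformers.tools.agent_types import AgentImage, AgentType

def materialize_later(output):
    # decide ourselves when/how to decode instead of agent.run doing it eagerly
    if isinstance(output, AgentImage):
        return output.to_raw()  # raw object (e.g. a PIL.Image), no eager conversion
    if isinstance(output, AgentType):
        return output.to_string()
    return output
```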
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,338 | closed | Add SophiaG. | # What does this PR do?
This is a scratch PR showing how to test Sophia with Transformers. It is in no way
production ready, and the licensing certainly needs to be looked at. But this is helpful if someone needs
to try this right away. I'm re-using **AdamW**'s `beta` values. Plus, if you look carefully,
there's an ugly hack where I'm using `eps` as `rho`.
This is code directly copy-pasta-ed from: @Liuhong99 's [Sophia](https://github.com/Liuhong99/Sophia);
I am putting it here so people can experiment with it and see how it compares to **AdamW**. If there
is sufficient interest in adding this and it can be licensed, would be happy to work on it here. Anyone is free to take
this and turn it into something of value. Please close this as necessary too.
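In case anyone wants to try it while this stays a scratch PR, here is a minimal sketch of wiring an external `SophiaG` implementation into the `Trainer` via its `optimizers` argument. `SophiaG` is an assumption here: a class copied from the linked repo that follows the standard `torch.optim.Optimizer` interface, not something shipped in transformers.
```python
import torch
from transformers import Trainer, TrainingArguments

# `SophiaG` is assumed to come from the copied Liuhong99/Sophia code.
def build_trainer(model, train_dataset, data_collator):
    args = TrainingArguments(output_dir="sophia-scratch", per_device_train_batch_size=8, max_steps=100)
    optimizer = SophiaG(model.parameters(), lr=2e-4, betas=(0.965, 0.99), rho=0.04, weight_decay=0.1)
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda _: 1.0)  # constant LR, just for the sketch
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=data_collator,
        optimizers=(optimizer, scheduler),  # bypasses the Trainer's default AdamW
    )
```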
## Before submitting
This PR does none of the above. It is too early to do this, but if there is sufficient interest would be happy to go through this process.
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Models:
- text models: @ArthurZucker and @younesbelkada
Common:
- trainer: @sgugger
| 06-17-2023 22:27:39 | 06-17-2023 22:27:39 | [Semi-related] also linking to Paper page on HF: https://huggingface.co/papers/2305.14342<|||||>Thank you all for encouraging this.
As a first cut, I am working with the authors to see if this can be a PyPi package if authors agree. License is MIT, so hopefully we can get this out soon. [Sophia on PyPi](https://github.com/Liuhong99/Sophia/issues/29)<|||||>As Younes said before, we won't merge this PR but can leave it for anyone who want to try this out: Transformers is a library of models, not optimizers (the optimizers inside the library are actually deprecated). Once there is a package supporting this optimizer we can add support for the Trainer like we did for the `bitsandbytes` optimizers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,337 | closed | past_key_values is not working as expected for falcon-7b | ### System Info
Hello,
I've been trying to use past_key_values to speed up text generation, but it doesn't seem to work: instead of generating coherent text as it does when I'm not using past_key_values, it seems to generate the same token over and over again. I've searched the web for usage guidelines and it seems to me like I'm doing everything correctly, but maybe I'm missing something.
Thank you!
### Who can help?
@ArthurZucker @younesbelkada - I think you're the relevant people for this.
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
import pandas as pd
import pickle
from transformers import pipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True).to(device)
# WITH past_key_values
def get_answer_from_model1(model, tokenizer, q, max_new_tokens=50):
    predicted_token_id = None
    prompt = q
    generated_text = ""
    n_new_tokens = 0
    past_key_values = None
    while (predicted_token_id != tokenizer.eos_token_id) and (n_new_tokens < max_new_tokens):
        if predicted_token_id is not None:
            model_input = tokenizer(predicted_token, return_tensors='pt').to(device)
        else:
            model_input = tokenizer(prompt, return_tensors='pt').to(device)
        with torch.no_grad():
            model_output = model(model_input['input_ids'], past_key_values=past_key_values)
        past_key_values = model_output['past_key_values']
        logits = model_output['logits']
        predicted_token_id = logits.argmax(-1)[0][-1]
        predicted_token = tokenizer.decode(predicted_token_id)
        if predicted_token_id != tokenizer.eos_token_id:
            prompt += predicted_token
            generated_text += predicted_token
        n_new_tokens += 1
    return generated_text

# WITHOUT past_key_values
def get_answer_from_model2(model, tokenizer, q, max_new_tokens=50):
    predicted_token_id = None
    prompt = q
    generated_text = ""
    n_new_tokens = 0
    past_key_values = None
    while (predicted_token_id != tokenizer.eos_token_id) and (n_new_tokens < max_new_tokens):
        model_input = tokenizer(prompt, return_tensors='pt').to(device)
        with torch.no_grad():
            model_output = model(model_input['input_ids'], past_key_values=past_key_values)
        logits = model_output['logits']
        predicted_token_id = logits.argmax(-1)[0][-1]
        predicted_token = tokenizer.decode(predicted_token_id)
        if predicted_token_id != tokenizer.eos_token_id:
            prompt += predicted_token
            generated_text += predicted_token
        n_new_tokens += 1
    return generated_text
q="hello"
answer1 = get_answer_from_model1(model, tokenizer, q)
print(answer1)
answer2 = get_answer_from_model2(model, tokenizer, q)
print(answer2)
```
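For comparison, here is a rough sketch of the built-in cached path that I'd expect to be equivalent (a sketch only, assuming the remote falcon code accepts the standard `use_cache` / `output_hidden_states` arguments):
```python
inputs = tokenizer("hello", return_tensors="pt").to(device)
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=50,
        use_cache=True,             # reuse past_key_values internally
        output_hidden_states=True,  # also keep hidden states around
        return_dict_in_generate=True,
    )
print(tokenizer.decode(out.sequences[0], skip_special_tokens=True))
```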
### Expected behavior
answer1 and answer2 should be the same | 06-17-2023 22:18:37 | 06-17-2023 22:18:37 | Hi @orgadhadas
Thanks for the issue, I think the canonical way to use past key values is to set `use_cache=True` when calling `model.generate`. I think the remote code supports that argument as you can see here: https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/modelling_RW.py#L699 Can you share with us why you want to define a custom past key value mechanism?<|||||>I need to use the model.forward method, to get access to the hidden states computed during inference (for research). I hope this answers the question.<|||||>You have the `output_hidden_states` argument which should output the hidden states no? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,336 | closed | Fix link to documentation in Install from Source | Fixes the link to the documentation _to install Transformers from Source_. Probably the title changed at some point from 'Installing' to 'Install' and the verbose message in utils broke.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes the link to install from source in the verbose message inside _utils_.
Context: found during exploration of the translation tutorial script and work related to #24254
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-17-2023 21:08:24 | 06-17-2023 21:08:24 | @amyeroberts You are welcome! Thanks for creating Transformers library ! :)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,335 | closed | [Wav2Vec2 - MMS] Correct directly loading adapters weights | # What does this PR do?
This PR corrects incorrect behavior when loading MMS with non-default adapter weights via `from_pretrained(...)`. The issue is explained well [here](https://github.com/huggingface/transformers/issues/24223#issuecomment-1595856093).
In a nutshell, we cannot load specific weights in the init because these loaded weights are later overwritten again in `from_pretrained`. To solve this I propose to add a new generic
```py
load_adaptive_weights()
```
call to `from_pretrained` that can be overridden by models that inherit from `PreTrainedModel`. This both solves the issue #24223
and is also cleaner IMO, since weights shouldn't really be loaded when calling the `__init__` method of a model anyway. It was weird before that:
```py
model = Wav2Vec2ForCTC(config, target_lang="fra")
```
would try to load weights into the model.
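A rough sketch of the pattern being proposed (names and bodies are illustrative only, not the merged implementation):
```python
class PreTrainedModel:
    def load_adaptive_weights(self):
        # no-op by default; called at the end of `from_pretrained`, after the state dict is loaded
        pass

class Wav2Vec2ForCTC(PreTrainedModel):
    def load_adaptive_weights(self):
        # adapter-aware models override the hook to pull in language-specific weights
        if self.target_lang is not None:
            self.load_adapter(self.target_lang)
```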
cc @sgugger @sanchit-gandhi @amyeroberts wdyt about the design? Happy to add some more tests if ok for you | 06-17-2023 20:45:36 | 06-17-2023 20:45:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry, I accidentally submitted the review without a saved comment. I realised in the `from_pretrained` call why you were using `pass`. I still think raising an exception would be good, as otherwise we can get silent behaviour. Would it be possible to reliably check if `load_adaptive_weights` should be implemented for a model?
p.s. ignoring the wandb diffs, as they're just from being out-of-date from main |
transformers | 24,334 | closed | Generate: add SequenceBiasLogitsProcessor | # What does this PR do?
Closes #22168
As per [popular demand](https://github.com/huggingface/transformers/issues/22168#issuecomment-1477998997), adds a logits processor that applies a bias to certain sequences -- `SequenceBiasLogitsProcessor`
This manipulation is a more general case of forbidding certain sequences --`NoBadWordsLogitsProcessor` corresponds to applying an infinite negative bias. As such, this PR makes `NoBadWordsLogitsProcessor` a subclass of the new processor. In the refactoring process, I've rewritten this class to a) be more readable (clear variable naming, comments, docstrings); and b) be faster (through some vectorization).
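A short usage sketch of the new argument as it surfaces through `generate` (the model and word choices are just placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokens_as_tuple(word):
    return tuple(tokenizer([word], add_special_tokens=False).input_ids[0])

inputs = tokenizer("The full name of Donald is Donald", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=5,
    # dict of token-id tuples -> float bias; -float("inf") reproduces bad_words_ids behaviour
    sequence_bias={tokens_as_tuple(" Trump"): -10.0},
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```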
| 06-17-2023 18:20:09 | 06-17-2023 18:20:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @amyeroberts
A further request for approval, as I've introduced a pattern that I'd like to repeat (assuming you're okay with it).
In the latest commit, you'll see:
1. An example in the logit processor class;
2. The new generation config field's docstring redirecting to the corresponding logit processor docs;
This allows me to write a clear explanation and example (or examples) for each configuration option, without creating a monster in the generation config docstring. The user gets high-level info in the generation config docstring, and details in each processor.
The examples are more useful if they are relative to common use cases, in this case different `.generate()` parameterization. However, the example is sitting in the logit processor class, and does not make **direct** reference to the class. The alternative, to create an example using the class directly, is not very desirable either -- very few people use the logit processor classes directly.
LMK if you have suggestions and/or if you agree 🤗
<|||||>@gante, the m4 eval code broke after this PR was merged:
```
stderr: File "/mnt/nvme0/code/huggingface/m4-master-3/m4/evaluation/launch.py", line 143, in <module>
stderr: main(args)
stderr: File "/mnt/nvme0/code/huggingface/m4-master-3/m4/evaluation/launch.py", line 97, in main
stderr: score = evaluator(task, accelerator, model, args)
stderr: File "/mnt/nvme0/code/huggingface/m4-master-3/m4/evaluation/evaluators/in_contexter.py", line 262, in in_contexter
stderr: metric = task.add_batch_metric(metric, **kwargs)
stderr: File "/mnt/nvme0/code/huggingface/m4-master-3/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py", line 338, in add_batch_metric
stderr: generated_tokens = self.generate_tokens(**kwargs)
stderr: File "/mnt/nvme0/code/huggingface/m4-master-3/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py", line 314, in generate_tokens
stderr: generated_tokens = unwrapped_model.generate(
stderr: File "/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
stderr: return func(*args, **kwargs)
stderr: File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/utils.py", line 1627, in generate
stderr: return self.beam_search(
stderr: File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/utils.py", line 2951, in beam_search
stderr: next_token_scores_processed = logits_processor(input_ids, next_token_scores)
stderr: File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/logits_process.py", line 92, in __call__
stderr: scores = processor(input_ids, scores)
stderr: File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/logits_process.py", line 618, in __call__
stderr: self._prepare_bias_variables(scores)
stderr: File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/generation/logits_process.py", line 678, in _prepare_bias_variables
stderr: raise ValueError(
stderr: ValueError: Setting a bias on sequences that share a common token termination is not yet supported. Please open an issue if you see this error message (after checking that it doesn't already exist).
```
what should we change?<|||||>@stas00 interesting, I thought no relevant use case would hit this issue. I will open a PR with a fix!
(meanwhile, the solutions are either to a) downgrade transformers; or b) remove this exception if you're using the `bad_words_ids` generate argument, which should be fine)<|||||>Thank you, Joao
We are going to port the m4-pretrained model into `transformers` shortly, so neither of these proposals is an option in the long run. But a PR with a fix is - it's not urgent urgent as we meanwhile can use the older transformers. |
transformers | 24,333 | closed | Fix `KerasMetricCallback`: pass `generate_kwargs` even if `use_xla_generation` is False | # What does this PR do?
Currently, `KerasMetricCallback` ignores the `generate_kwargs` argument if `use_xla_generation` is set to `False` (which is the default). This means that when not using XLA, the user can't pass arguments like `max_new_tokens` to the `generate` method being called in `on_epoch_end`. It's also in contradiction with the docstring for `generate_kwargs`, which states:
> Keyword arguments to pass to `model.generate()` when generating. Has no effect if `predict_with_generate` is `False`.
This PR fixes the issue by passing `generate_kwargs` to `model.generate()` in the branch of execution where `use_xla_generation` is `False`.
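A short sketch of the affected call pattern (the dataset, metric function and model below are placeholders):
```python
from transformers.keras_callbacks import KerasMetricCallback

callback = KerasMetricCallback(
    metric_fn=compute_metrics,        # placeholder: your metric function
    eval_dataset=tf_eval_dataset,     # placeholder: a tf.data.Dataset
    predict_with_generate=True,
    use_xla_generation=False,         # the non-XLA branch that previously dropped the kwargs
    generate_kwargs={"max_new_tokens": 64},
)
model.fit(tf_train_dataset, callbacks=[callback])  # placeholder model/dataset
```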
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-17-2023 16:56:33 | 06-17-2023 16:56:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,332 | closed | Why it always raise error like this? | ### System Info
I can promise that there is no network connection problem at all, but it still raised an error like this:
```bash
Traceback (most recent call last):
File "test.py", line 3, in <module>
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 444, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 928, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/configuration_utils.py", line 574, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/configuration_utils.py", line 629, in _get_config_dict
resolved_config_file = cached_file(
File "/home/elin/anaconda3/envs/nemo/lib/python3.8/site-packages/transformers/utils/hub.py", line 452, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like nomic-ai/gpt4all-j is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```
And here are the SDK versions below:
```
Python 3.8.10
transformers 4.29.2
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Not only gpt4all-j, but also Falcon, using the code they provided:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Expected behavior
Fix it! | 06-17-2023 10:03:06 | 06-17-2023 10:03:06 | Hi @RosterMouch, thanks for raising this issue.
A few questions on our side to try and help dig into the issue:
* Could you share which verion of `huggingface_hub` is beign run in your environment?
* When you say "I could promise that there is not a network connection problem at all", could you share how this was tested?
* Is this an error the consistently happens or sporadically?
* Is this issue only ever seen with this checkpoint or with other checkpoints too? |
transformers | 24,331 | closed | style: add BitsAndBytesConfig __repr__ function | # What does this PR do?
Add a `__repr__` to `transformers.BitsAndBytesConfig` to make it nice to print.
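A quick sketch of the intended effect (the exact output format may differ from the final implementation):
```python
from transformers import BitsAndBytesConfig

config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
print(config)
# e.g. BitsAndBytesConfig {"load_in_4bit": true, "bnb_4bit_quant_type": "nf4", ...}
```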
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-17-2023 04:42:28 | 06-17-2023 04:42:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @amyeroberts I have addressed all of the issue. PTAL when you are available. Thanks a bunch.<|||||>Can you rebase your branch on main? This should fix the failing test. Thanks!<|||||>done.<|||||>Thanks a lot for working on this @aarnphm ! Nice job! |
transformers | 24,330 | closed | Resuming / retraining the peft model | ### System Info
Although resume_from_checkpoint is now working after @llohann-speranca solved the issue, fine-tuning again with new data using train(resume_from_checkpoint) and then testing it makes the model forget the old data, i.e. it won't remember the things in the old dataset.
Attaching the code below:
```python
import json
import os

import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import load_dataset
from peft import (
    LoraConfig,
    PeftConfig,
    PeftModel,
    get_peft_model,
    prepare_model_for_kbit_training,
)
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

MODEL_NAME = "tiiuae/falcon-7b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map="auto",
    trust_remote_code=True,
    quantization_config=bnb_config
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token

def print_trainable_parameters(model):
    """
    Prints the number of trainable parameters in the model.
    """
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
    )

model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config)
print_trainable_parameters(model)

data = load_dataset("json", data_files="../localGPT/output.json")

def generate_prompt(data_point):
    return f"""
: {data_point["question"]}
: {data_point["answer"]}
""".strip()

def generate_and_tokenize_prompt(data_point):
    full_prompt = generate_prompt(data_point)
    tokenized_full_prompt = tokenizer(full_prompt, padding=True, truncation=True)
    return tokenized_full_prompt

data = data["train"].shuffle().map(generate_and_tokenize_prompt)

OUTPUT_DIR = "outputs"

training_args = transformers.TrainingArguments(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    warmup_ratio=0.05,
    max_steps=80,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=1,
    save_total_limit=3,
    output_dir=OUTPUT_DIR,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
)

trainer = transformers.Trainer(
    model=model,
    train_dataset=data,
    args=training_args,
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False
trainer.train(resume_from_checkpoint=True)
trainer.save_model(os.path.join(OUTPUT_DIR, "checkpoint-2"))

PEFT_MODEL = OUTPUT_DIR + "/checkpoint-2"

config = PeftConfig.from_pretrained(PEFT_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

model = PeftModel.from_pretrained(model, PEFT_MODEL)

generation_config = model.generation_config
generation_config.max_new_tokens = 20
generation_config.temperature = 0
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

DEVICE = "cuda:0"

prompt = """
:What is my cat's name?
:
""".strip()

encoding = tokenizer(prompt, return_tensors="pt").to(DEVICE)
with torch.inference_mode():
    outputs = model.generate(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        generation_config=generation_config,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I trained my model on my cat's name in the first iteration and saved it in checkpoint-1, then retrained it on my dog's name. Although it now knows my dog's name, it forgets my cat's name.
### Expected behavior
To remember my cats name | 06-16-2023 20:55:26 | 06-16-2023 20:55:26 | Hi @adityaaryan77, thanks for raising an issue!
It seems like this is a case of catastrophic forgetting, rather than a bug per se in the model or transformers library. As such question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
If you believe this behaviour is related to a bug in the code, could you produce:
* The running environment: run `transformers-cli env` in the terminal and copy-paste the output
* A _minimal_ code reproducer which we could run to replicate i.e. with data <|||||>`transformers-cli env `
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1040-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?Yes
- Using distributed or parallel set-up in script?: No
Sure here is the pastebin for the code snippet I am using : https://pastebin.pl/view/4e77a13d
And for example for data set: Here is a small example
First time fine tuning:
[
{"question":"What is my cats name?","answer":"Tom"}
]
Now using generate with "What is my cats name gives" response as "Tom"
Now saving this model and loading it with resume_from_checkpoint for further fine tuning with
[
{"question":"What is my dogs name?","answer":"Bob"}
]
And asking "What is my cats name?" gives response as "Bob" or sometimes repeats the question
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,329 | closed | [Doc Fix] Fix model name path in the transformers doc for AutoClasses | Fixes the model name path in the transformers doc for the AutoTokenizer step.
| 06-16-2023 19:41:21 | 06-16-2023 19:41:21 | R: @stevhliu<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,328 | closed | save full_osd in fsdp mode | Fixes a typo in which the variable full_osd is referenced before definition if run in fsdp mode. The fix allows model files to be saved when running in fsdp.
Links to issue: https://github.com/huggingface/transformers/issues/24057
Fixes # 24057
| 06-16-2023 19:00:14 | 06-16-2023 19:00:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24328). All of your documentation changes will be reflected on that endpoint.<|||||>Hello, PR #24446 addresses this issue. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,327 | closed | AutoModelForSequenceClassification.from_config doesn't support LlamaConfig | ### System Info
Calling:
```
from transformers import AutoModelForSequenceClassification
from transformers.models.llama.configuration_llama import LlamaConfig
config = LlamaConfig()
model = AutoModelForSequenceClassification.from_config(config)
```
gives:
```
________________________________ Traceback (most recent call last) _________________________________
_ in <module>:1 _
_ _
_ /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:413 in _
_ from_config _
_ _
_ 410 _ _ _ model_class = _get_model_class(config, cls._model_mapping) _
_ 411 _ _ _ return model_class._from_config(config, **kwargs) _
_ 412 _ _ _
_ _ 413 _ _ raise ValueError( _
_ 414 _ _ _ f"Unrecognized configuration class {config.__class__} for this kind of AutoM _
_ 415 _ _ _ f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapp _
_ 416 _ _ ) _
____________________________________________________________________________________________________
ValueError: Unrecognized configuration class <class 'type'> for this kind of AutoModel: AutoModelForSequenceClassification.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, CamembertConfig, CanineConfig, ConvBertConfig,
CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig,
FunnelConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTJConfig, IBertConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config,
LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LukeConfig, MarkupLMConfig, MBartConfig, MegaConfig, MegatronBertConfig, MobileBertConfig, MPNetConfig, MvpConfig,
NezhaConfig, NystromformerConfig, OpenAIGPTConfig, OPTConfig, PerceiverConfig, PLBartConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig,
RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, SqueezeBertConfig, TapasConfig, TransfoXLConfig, XLMConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig,
XmodConfig, YosoConfig.
```
### Who can help?
@ArthurZucker
### Expected behavior
Model should be able to be loaded from config with randomly initialized weights, preferably with bfloat16 and load_8bit support. | 06-16-2023 18:46:30 | 06-16-2023 18:46:30 | Hi @kungfu-eric,
This issue is arising because you need to pass an instance of the config, rather than the config class i.e.:
```python
from transformers import AutoModelForSequenceClassification
from transformers.models.llama.configuration_llama import LlamaConfig
config = LlamaConfig()
model = AutoModelForSequenceClassification.from_config(config)
```<|||||>What ended up fixing the issues was updating to transformers-4.30.2 from 4.28.1. @amyeroberts ah i wrote the simple example wrong. I did define the config in the actual full code. Thank you though. |
transformers | 24,326 | closed | Adding ddp_broadcast_buffers argument to Trainer | # What does this PR do?
In #22482, using the Trainer with `gpt2` and other similar models failed in naive distributed mode. Passing `ddp_broadcast_buffers=False` to Pytorch's DDP wrapper fixes the issue. This PR surfaces that argument to the Trainer user. | 06-16-2023 18:30:45 | 06-16-2023 18:30:45 | _The documentation is not available anymore as the PR was closed or merged._ |
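A minimal usage sketch of the surfaced option (assuming the `TrainingArguments` field mirrors the DDP keyword name):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    ddp_broadcast_buffers=False,  # forwarded to torch.nn.parallel.DistributedDataParallel
)
```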
transformers | 24,325 | closed | [Time-Series] Added link to the blog in Tips | @kashif @NielsRogge | 06-16-2023 16:34:14 | 06-16-2023 16:34:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24325). All of your documentation changes will be reflected on that endpoint.<|||||>file names changed, opened a new one here
https://github.com/huggingface/transformers/pull/24482 |
transformers | 24,324 | closed | Allow passing kwargs through to TFBertTokenizer | There are some kwargs like `preserve_unused_tokens` in the underlying TF tokenizer layers that might be useful to expose to users. This PR exposes them by passing through any unrecognized `kwargs` in the model `__init__` to the TF tokenizer layer.
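A usage-side sketch of what this enables. Treat the exact kwarg names as assumptions: whether an option such as `preserve_unused_tokens` is accepted depends on which underlying tensorflow-text layer ends up being used.
```python
from transformers import TFBertTokenizer

tf_tokenizer = TFBertTokenizer.from_pretrained(
    "bert-base-uncased",
    use_fast_bert_tokenizer=False,  # pick the layer that understands the option below
    preserve_unused_tokens=True,    # forwarded through **kwargs by this PR
)
```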
Fixes #23798 | 06-16-2023 16:28:11 | 06-16-2023 16:28:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ping @amyeroberts for core maintainer review now that the extra functionality is working fine (see issue #23798)<|||||>I was a bit wary about the kwargs thing too - `FastBertTokenizer` and `BertTokenizerLayer` actually have wildly different arguments, so depending on which one you're using the kwargs you need will be totally different. Still, I think for an advanced use case it's fine - we're just trying to enable some power user behaviours without forcing them to edit the library source, and I'd prefer something general like this over specifically exposing the options I think people need (because I didn't even realize in advance that the `preserve_unused` arg would be valuable!)
Anyway, merging! |
transformers | 24,323 | closed | Protobuf 4 support (again) | ### Feature request
Looking at https://github.com/huggingface/transformers/issues/21677#issuecomment-1435072007, I notice there are now new versions of tensorflow and tensorboard that may help with protobuf 4.x compatibility. It would be awesome to get this upgraded, thanks!
### Motivation
easier upgrade path to python 3.10 and above
### Your contribution
nope, sorry.
| 06-16-2023 16:21:21 | 06-16-2023 16:21:21 | Hi @dustyketchum,
Thanks for raising this issue. Managing the different dependencies in the library can be quite complex. As noted in the linked issue, the blocker for upgrading protobuf support was third-party libraries support of protobuf 4 rather than our own.
If this is something you or someone else in the community believes is very important, please feel free to open a PR. Note that it's not just necessary for the CI to be green and protobuf 4 be supported, we must also remain backwards compatible with previous versions.
cc @ydshieh <|||||>BTW, what's blocking when you try to use python 3.10 without protobuf 4.x ?<|||||>I am not blocked; I should have been more precise.
<|||||>Fixed in #24599 |
transformers | 24,322 | closed | Respect explicitly set framework parameter in pipeline | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24321
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-16-2023 13:56:16 | 06-16-2023 13:56:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Updated so that `infer_framework_load_model` is not called at all if `model` is already loaded and `framework` is defined.
However, it may still be worth keeping the check inside `infer_framework_load_model` so that if `framework` is defined but the `model` is a `str` and we do need to call `infer_framework_load_model`, at least we don't need to then call `infer_framework` inside of it.<|||||>Ok, this code breaks some tests. Can you fix them ?<|||||>Noticed a "typo", it is fixed now.<|||||>@denis-ismailaj it's ready to merge ! |
transformers | 24,321 | closed | [pipeline] Explicitly set framework is ignored | ### System Info
- `transformers` version: 4.30.2
Omitting the rest because they aren't really relevant. Can submit later if needed.
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
from transformers import WhisperProcessor, WhisperForConditionalGeneration
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
# Using this just to make sure an error is thrown if pipeline tries to check using the module instead of the specified framework.
class FakeWhisper:
    def __getattr__(self, item):
        return model.__getattr__(item)

pipe = pipeline(
    "automatic-speech-recognition",
    model=FakeWhisper(),
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    chunk_length_s=30,
    device=model.device,
    framework="pt",
)
```
The above code raises this error:
```
TypeError: Could not infer framework from class <class '__main__.FakeWhisper'>.
```
### Expected behavior
When specifying the framework explicitly, there is no need to infer it from the module of the model class, as mentioned here:
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L784-L796
But then, inside `infer_framework_load_model`, `infer_framework` is called regardless of the value of the `framework` parameter:
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L280 | 06-16-2023 13:50:21 | 06-16-2023 13:50:21 | |
transformers | 24,320 | closed | Add test for proper TF input signatures | This is a relatively simple PR intended to check that all TF models have proper input signatures that match their inputs. I was calling the function `self._prune_signature` in a few places to verify this, which felt a bit hacky. This test should let us get rid of `self._prune_signature` by enforcing valid signatures for all models.
Edit: I'm also slipping in a typo fix (fine-tine -> fine-tune) I saw while I was running the tests
Double-edit: I'm also slipping in a fix to an incorrect indentation in the `test_dataset_conversion` test that was causing some unnecessary repetition | 06-16-2023 13:39:24 | 06-16-2023 13:39:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,319 | closed | Fix ner average grouping with no groups | # What does this PR do?
Fixes #https://github.com/huggingface/transformers/issues/24314
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 06-16-2023 13:09:18 | 06-16-2023 13:09:18 | For bigger fixes I would add a test. This is small enough I think it's ok to skip. Let me know.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,318 | closed | Recursion error when creating AutoTokenizer | ### System Info
```
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1033-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1) Save `"Yhyu13/chimera-inst-chat-13b-hf"` tokenizer as `save_pretrained` to some folder
2) Try to create auto tokenizer from that folder
```bash
python -c "from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained('/local/path/to/tokenizer'); print(tok)"
```
And see recursion error when running `.from_pretrained` (last lines in stack trace):
```
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/home/alexander/llm_training/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1035, in unk_token
return str(self._unk_token)
RecursionError: maximum recursion depth exceeded while getting the str of an object
```
### Expected behavior
After running `pip install transformers==4.29` everything works fine:
```bash
❯ python -c "from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained('/local/path/to/tokenizer'); print(tok)"
LlamaTokenizerFast(name_or_path='/local/path/to/tokenizer', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '<pad>'}, clean_up_tokenization_spaces=False)
```
Working transformers-cli env:
```
- `transformers` version: 4.29.0
- Platform: Linux-5.15.0-1033-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
``` | 06-16-2023 10:26:42 | 06-16-2023 10:26:42 | Interesting, I can't even load the autotokenizer from remote, i.e., just loading `from_pretrained` using the remote identifier doesn't work and leads to a recursion error.<|||||>Hey! There seems to be something a bit strange with the `tokenizer_config.json`, since the `unk_token`, the `bos_token` as well as the `eos_token` are `""`, which means that they are empty. This is the root cause of the issue.
The following works:
```python
>>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token ="<s>")
```
While this does not work:
```python
>>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token =" <s>")
```
`""` is not part of the vocab, thus it cannot be used as an unknown token. You should update the tokenizer to have a `tok._unk_token = None`<|||||>There's something strange going on. I am facing the same problem and am unable to load a `LlamaTokenizer` on 4.30.2 that I have used previously (with 4.28.x). Here's my `tokenizer_config.json` in case that's relevant:
```json
{
"bos_token": "<s>",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"model_max_length": 1000000000000000019884624838656,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>"
}
```<|||||>You are giving way too little informations, no traceback and this is not related to the mentioned issue. If you still have a problem, feel free to open a new issue, add a full reproducer and make sure you have a correctly converted tokenizer<|||||>> Hey! There seems to be something a bit strange with the `tokenizer_config.json`, since the `unk_token`, the `bos_token` as well as the `eos_token` are `""`, which means that they are empty. This is the root cause of the issue. The following works:
>
> ```python
> >>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token ="<s>")
> ```
>
> While this does not work:
>
> ```python
> >>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained(path, unk_token =" <s>")
> ```
>
> `""` is not part of the vocab, thus it cannot be used as an unknown token. You should update the tokenizer to have a `tok._unk_token = None`
I don't think this is a solution. Why does a simple save/load not work then? Maybe at least add that information to the docs?<|||||>If you load using `from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained("Yhyu13/chimera-inst-chat-13b-hf", legacy=False, use_fast = False)`, you can load and save.
The reason why it does not work is because the tokenizer was saved using the `LlamaTokenizer` class. When doing the automatic conversion to a fast tokenizer, (`AutoTokenizer` automatically converts the slow to a fast tokenizer using the [`LlamaConverter` ](https://github.com/ArthurZucker/transformers/blob/1f2434777ecfc436aed40c282b074034f7232d6f/src/transformers/convert_slow_tokenizer.py#L1123).
The issue lies with `self.update_post_processor()`, which does not check if the `bos` and `eos` tokens are defined or if `add_bos_token` and `add_eos_token` are set to ` True`.
However the configuration files are still wrong, the `eos` and `bos` and `unk` tokens from the slow tokenizer are going to be different:
```python
>>> from transformers import AutoTokenizer; tok = AutoTokenizer.from_pretrained("Yhyu13/chimera-inst-chat-13b-hf", legacy=True, use_fast = False)
>>> tok
LlamaTokenizer(name_or_path='Yhyu13/chimera-inst-chat-13b-hf', vocab_size=32000, model_max_length=2048, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '<pad>'}, clean_up_tokenization_spaces=False)
```
the values in the `tokenizer_config` were not used because the [`special_tokens_map`](https://huggingface.co/Yhyu13/chimera-inst-chat-13b-hf/blob/main/special_tokens_map.json) was saved.
TLDR; the tokenizer was not properly saved, priority is given to the `tokenizer_config.json` when loading the tokenizer, which is wrong in this case. |
transformers | 24,317 | closed | Fix ImageGPT doc example | # What does this PR do?
There was a bug in the example, where the clusters in the image processor were stored as lists, but the example assumed they were numpy arrays.
At the moment, clusters are stored as a list of lists, but [converted to a numpy array during processing](https://github.com/huggingface/transformers/blob/0b7b4429c78de68acaf72224eb6dae43616d820c/src/transformers/models/imagegpt/image_processing_imagegpt.py#L230). This PR converts to a numpy array when setting the class attribute and if new `clusters` are passed into `preprocess`.
This:
* Maintains backwards compatibility with old configurations
* Saved configs aren't changed (`clusters` is still converted to a list of lists when serializing); see this [dummy image processor](https://huggingface.co/amyeroberts/dummy_imagegpt_image_processor_np_clusters) created from this branch.
* Is more efficient - we're not converting the same list of lists every batch.
A potential breaking change is if users were accessing the `clusters` attribute and using it as a list of lists. As this was caught because users were using the clusters as a numpy array (according to the example) I expect this impact to be low.
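A minimal, self-contained sketch of the idea (illustrative only; the class below is a stand-in, not the real `ImageGPTImageProcessor` or the actual diff):

```python
import numpy as np


class ClustersHolderSketch:
    """Stand-in illustrating the storage change, not the real image processor."""

    def __init__(self, clusters=None):
        # Store clusters as a numpy array once, instead of re-converting the
        # list of lists on every preprocessing call.
        self.clusters = np.asarray(clusters) if clusters is not None else None

    def preprocess(self, images, clusters=None):
        clusters = self.clusters if clusters is None else np.asarray(clusters)
        # ... color-quantize `images` against `clusters` here ...
        return images

    def to_serializable(self):
        # Serialize back to a list of lists so saved configs stay unchanged.
        return {"clusters": self.clusters.tolist() if self.clusters is not None else None}
```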
Fixes #24189
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 06-16-2023 10:16:16 | 06-16-2023 10:16:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,316 | open | [Tokenizer] `skip_special_tokens` not working as expected | # Reporting a failing API design
This is mostly to help me record some of the biggest issues with the current API for adding tokens.
This is linked to #23909. Here is a simple snippet:
```python
>>> from transformers import AutoTokenizer, AddedToken
>>> tokenizer = AutoTokenizer.from_pretrained("t5-base", use_fast = False)
>>> new_toks = [
...     AddedToken("[ABC]", normalized=False),
...     AddedToken("[DEF]", normalized=False),
...     AddedToken("GHI IHG", normalized=False),
... ]
>>> tokenizer.add_tokens(new_toks)
>>> tokenizer.add_tokens([AddedToken("[SAMPLE]", normalized=True)], special_tokens = True)
>>> print(tokenizer.added_tokens_encoder)
>>> print(tokenizer.all_special_ids)
```
This will show that the newly added token (`[SAMPLE]`) is not part of the `all_special_ids`. However, `all_special_ids` is used when decoding, to check whether the token should be skipped or not:
```python
for token in filtered_tokens:
if skip_special_tokens and token in self.all_special_ids:
continue
if token in self.added_tokens_encoder:
if current_sub_text:
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
current_sub_text = []
sub_texts.append(token)
else:
current_sub_text.append(token)
```
Thus
```python
>>> encoded = tokenizer.encode("[ABC] [DEF][SAMPLE]", add_special_tokens=False)
>>> tokenizer.decode(encoded, skip_special_tokens = True)
"[ABC] [DEF][SAMPLE]"
```
However, the token is in `added_tokens_encoder` but not in `additional_special_tokens`.
Now imagine you want `spaces_between_special_tokens`? This will add spaces between all added tokens, and thus checks whether a token is part of `tokenizer.added_tokens_encoder`.
```python
>>> encoded = tokenizer.encode("[ABC] [DEF][SAMPLE]", add_special_tokens=False)
>>> tokenizer.decode(encoded, spaces_between_special_tokens = True)
"[ABC] [DEF] [SAMPLE]"
>>> tokenizer.decode(encoded, spaces_between_special_tokens = False)
"[ABC][DEF][SAMPLE]"
```
| 06-16-2023 09:52:47 | 06-16-2023 09:52:47 | |
transformers | 24,315 | closed | CUDA out of memory when use DistillBert for inference and use hidden_state as input_embeds | ### System Info
transformer: 4.24.0
python: 3.8.13
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello, since the memory cost of fine-tuning the whole DistilBERT is still too large for me, I am trying to get the last hidden states of a modified DistilBERT that only keeps the first 5 layers, and then use them as input for a DistilBERT that only has the last transformer layer and fine-tune that (fine-tuning only the last transformer layer to save memory). **But when I try to get the hidden_states, it always runs out of memory, whatever batch size I use.** Since I can run even BERT sequence classification on my current GPU, I think the GPU memory should be sufficient. Could I have some help with this?
Besides, **I am not sure if I can use the hidden_state directly as `inputs_embeds` for another DistilBERT (in my case, a one-transformer-layer DistilBERT)** that I want to fine-tune. Here is my current code, please let me know if I am doing something wrong:
```
# imports and device setup shown here for completeness
import torch
from torch import nn
from transformers import DistilBertTokenizer, DistilBertModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

with open('/X.all.txt', "r") as fin:
    node_text_list = fin.readlines()

model_path = r'/distilbert-base-uncased'
tokenizer = DistilBertTokenizer.from_pretrained(model_path)
X = [tokenizer(text, padding='max_length', max_length=32,
               truncation=True, return_tensors="pt")
     for text in node_text_list]

input_ids = []
attention_mask = []
for i in range(len(X)):
    input_ids.append(X[i]['input_ids'])
    attention_mask.append(X[i]['attention_mask'])
input_ids = torch.stack(input_ids).squeeze(1)
attention_mask = torch.stack(attention_mask).squeeze(1)
data_set = torch.utils.data.TensorDataset(
    input_ids, attention_mask
)

# Load the first BERT model
pre_loader = torch.utils.data.DataLoader(
    data_set, batch_size=128, num_workers=0, pin_memory=True
)
model_pretrain = DistilBertModel.from_pretrained(args.pretrain)  # args.pretrain is defined elsewhere in my script
model_pretrain.transformer.layer = model_pretrain.transformer.layer[:5]
hidden_states_list = []
model_pretrain = model_pretrain.to(device)
for param in model_pretrain.parameters():
    param.requires_grad = False

# Pass the input through the first BERT model
for input_id, mask in pre_loader:
    with torch.no_grad():
        input_id = input_id.to(device)
        mask = mask.to(device)
        outputs = model_pretrain(input_id, mask, return_dict=False)  ### out of memory here
        hidden_states = outputs[0]
        # Remove the memory usage of the first BERT model
        del outputs
        # Append the hidden states to the list
        hidden_states_list.append(hidden_states)

# Concatenate the hidden states along the batch dimension
pretrain_hidden_states = torch.cat(hidden_states_list, dim=0)

model = DistilBertModel.from_pretrained(pretrain)  # pretrain is defined elsewhere in my script
# Remove unnecessary layers from BERT
num_removed_layers = 1  # Specify the number of layers to remove
encoder_layers = model.transformer.layer[-num_removed_layers:]
model.transformer.layer = nn.ModuleList(encoder_layers)

### new sampling
train_set = torch.utils.data.TensorDataset(
    pretrain_hidden_states, data.attention_mask  # data is defined elsewhere in my script
)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=128, num_workers=0, pin_memory=True
)
for hidden_state, mask in train_loader:
    distilbert_output = model(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)
    hidden_state = distilbert_output[0]
    pooled_output = hidden_state[:, 0]
    x = pooled_output
    ### classification layers
```
### Expected behavior
CUDA should not be out of memory | 06-16-2023 07:58:40 | 06-16-2023 07:58:40 | Hi @TOP-RX,
A few things from first looking at your script:
* tokenizers already work on batches. There's no need to pass line by line and then concatenate
```python
encoder_inputs = tokenizer(node_text_list, padding="max_length", max_length=32, truncation=True, return_tensors="pt")
input_ids = encoder_inputs["input_ids"]
attention_mask = encoder_inputs["attention_mask"]
```
* Creating the dataset like this means that ALL of your data is read into memory (although not GPU) and converted to pytorch arrays at once. Consider loading with `datasets` (some [info here](https://huggingface.co/docs/datasets/nlp_load)) and tokenizing with a map function applied to the dataset, e.g. [like here](https://huggingface.co/docs/datasets/nlp_process); a rough sketch is shown after this list. I highly recommend looking at the [scripts in `examples`](https://github.com/huggingface/transformers/tree/main/examples/pytorch) to see how best to structure training pipelines.
* Batch size
You mention it fails at any batch size - is this true for batch_size=1? The batch size in this script (128) is quite large
* Increasing memory usage
Does this happen if you just do one forward pass i.e. passing a single batch with no for loop?
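A rough sketch of the `datasets` suggestion from the second bullet above (the file name and the 32-token padding simply mirror your script):

```python
from datasets import load_dataset
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")

# Load the text file lazily instead of reading everything into Python lists
ds = load_dataset("text", data_files={"train": "X.all.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", max_length=32, truncation=True)

# Tokenize in batches via map, then expose torch tensors for the DataLoader
ds = ds.map(tokenize, batched=True, remove_columns=["text"])
ds.set_format("torch", columns=["input_ids", "attention_mask"])
```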
I'd guess some of the memory issues are coming from `hidden_states_list`, which increases in size and the tensors I believe are still on the cuda device. <|||||>Hi @amyeroberts ,
Thanks so much for your reply and advice! I was following your suggestions and found that the issue happens at `hidden_states_list.append(hidden_states)` even when I use batch size = 1; the GPU memory accumulates with each batch and causes the out-of-memory problem after several batches.
I am also wondering if I could get some suggestions from you about another issue (ignore some unrelated parts):
```
class Net(nn.Module):
    # __init__ omitted here (unrelated parts); it sets up self.distilbert
    def forward(self, hidden_state, mask):
        distilbert_output = self.distilbert(inputs_embeds=hidden_state, attention_mask=mask, return_dict=False)
        hidden_state = distilbert_output[0]
        pooled_output = hidden_state[:, 0]
        x = pooled_output
        ### classification layers

# Remove unnecessary layers from BERT
model = Net()
num_removed_layers = 1  # Specify the number of layers to remove
encoder_layers = model.distilbert.transformer.layer[-num_removed_layers:]
model.distilbert.transformer.layer = nn.ModuleList(encoder_layers)
```
1. Since I just want to use the earlier layers as a frozen encoder and only tune the last layer (as shown in the code, I remove the layers that were used to generate the hidden states) plus my own model to save memory: is this a correct way to directly use the hidden_state I got as `inputs_embeds` for a BERT/DistilBERT? It seems that using `input_ids` is fine, but when I use `inputs_embeds=hidden_state` with the same settings, it runs out of memory.
2. I found that no matter how many layers I remove, the GPU memory usage is almost the same. Is this normal?
Thanks so much!<|||||>@TOP-RX
> I was following your suggestion and found the issue happened with hidden_states_list.append(hidden_states) even I used batch size =1
OK, this indicates the issue is a result of the script and not code relating to transformers.
Questions about debugging custom training objectives or scripts are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
Regarding your questions:
1. Same as above - this is a question for our forums :)
2. Without more specifics about the model, GPU utilisation vs. layers and what "almost the same" means, it's not possible to help. If you suspect this is a bug, then please open another separate issue giving information about how to reproduce and expected behaviour. <|||||>You are collecting the outputs (here `hidden_states_list`) for a whole dataset (e.g. `for input_id, mask in pre_loader:`). This is not going to work well. You should find a way to save it to some storage like disk with some tools (probably there is some in torch).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,314 | closed | Device control characters lead to an error in average NER aggregation | ### System Info
Python 3.8.10, transformers versions tried: 4.30.1 and 4.30.2. Tried in a Docker container with Linux Ubuntu 20.04 and on Google Colab.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load a pretrained language model as NER pipeline with average aggregation
2. Run it on a sequence `"\x11\x11"`
Colab: https://colab.research.google.com/drive/1Xm2kFAIsb1vt8R8JvdiLitDZ2m6WszcP?usp=sharing
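A minimal sketch of the two steps above (the checkpoint name is an assumption; the bug is not specific to one model):

```python
from transformers import pipeline

# Any NER checkpoint should reproduce this; dslim/bert-base-NER is just an example
ner = pipeline("token-classification", model="dslim/bert-base-NER", aggregation_strategy="average")

# Input consisting only of device control characters
print(ner("\x11\x11"))  # expected: [], observed: an error raised from aggregate_word()
```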
The error happens on line 336 of `transformers/pipelines/token_classification.py`, in the `aggregate_word()` function: `word = self.tokenizer.convert_tokens_to_string([entity["word"] for entity in entities])`. There, `entities` remains `None` while it should be `[]`; this seems to happen because in the `aggregate()` function (same file), when `aggregation_strategy` is set to average, we call `aggregate_words()`, and there `entities` is `[]`, so `word_group` stays `None` and gets passed to `aggregate_word()`.
### Expected behavior
The aggregations runs without any errors and produces an empty list. | 06-16-2023 06:01:11 | 06-16-2023 06:01:11 | cc @Narsil @ArthurZucker <|||||>Created a fix PR. |
transformers | 24,313 | closed | Deepspeed ZeRO2 + Trainer does not resume training after evaluation | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Parallel
### Who can help?
@pacman, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Use the [pytorch container from Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch).
2. Pip install missing dependencies (IIRC: flash-attn, deepspeed, einops, transformers, accelerate). Note that for deepspeed, I had to use this [specific PR](https://github.com/microsoft/DeepSpeed/issues/3678)
3. Download ShareGPT dataset from huggingface [here](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered)
4. Run the script of my codebase [here](https://github.com/larrylawl/FastChat/blob/main/scripts/train_vicuna_13b_ds_debug.sh). You'll need to edit the filepaths. My codebase is a fork of FastChat.
### Expected behavior
As you can see from my [log file](https://github.com/huggingface/transformers/files/11766324/train.log), the training gets stuck after coming out of evaluation.
I expected the training script to continue. | 06-16-2023 05:42:03 | 06-16-2023 05:42:03 | When I killed the process, it gives the log
```
^[[A^[[A^[[A^[[A^[[A^C[2023-06-16 08:32:00,972] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1947071
Traceback (most recent call last):
File "/home/users/industry/dso/lannliat/.local/bin/deepspeed", line 6, in <module>
[2023-06-16 08:32:01,072] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1947071
main()
File "/home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/deepspeed/launcher/runner.py", line 565, in main
result.wait()
File "/usr/lib/python3.10/subprocess.py", line 1207, in wait
return self._wait(timeout=timeout)
File "/usr/lib/python3.10/subprocess.py", line 1941, in _wait
(pid, sts) = self._try_wait(0)
File "/usr/lib/python3.10/subprocess.py", line 1899, in _try_wait
(pid, sts) = os.waitpid(self.pid, wait_flags)
```
Seems like the process is waiting.<|||||>cc @pacman100 <|||||>Hello @larrylawl, can you provide a minimal reproducible example? The above example is very involved with a lot of dependencies like flash-attn ...
When I run the following official example, everything is working fine:
```
cd transformers
export TASK_NAME=mrpc
export CUDA_VISIBLE_DEVICES="0,1"
torchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ~/transformers/tests/deepspeed/ds_config_zero2.json --save_steps 10 --evaluation_strategy "steps" --eval_steps 10
```
output:
```
[INFO|trainer.py:1682] 2023-06-16 13:14:12,536 >> ***** Running training *****
[INFO|trainer.py:1683] 2023-06-16 13:14:12,536 >> Num examples = 3,668
[INFO|trainer.py:1684] 2023-06-16 13:14:12,536 >> Num Epochs = 3
[INFO|trainer.py:1685] 2023-06-16 13:14:12,536 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1686] 2023-06-16 13:14:12,536 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1687] 2023-06-16 13:14:12,536 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1688] 2023-06-16 13:14:12,536 >> Total optimization steps = 345
[INFO|trainer.py:1689] 2023-06-16 13:14:12,537 >> Number of trainable parameters = 108,311,810
[INFO|integrations.py:727] 2023-06-16 13:14:12,540 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin
wandb: wandb version 0.15.4 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.13.3
wandb: Run data is saved locally in /home/sourab/transformers/wandb/run-20230616_131413-2fg1dtqg
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run distinctive-puddle-305
wandb: ⭐️ View project at https://wandb.ai/smangrul/huggingface
wandb: 🚀 View run at https://wandb.ai/smangrul/huggingface/runs/2fg1dtqg
3%|█▊ | 10/345 [00:01<01:02, 5.40it/s][INFO|trainer.py:773] 2023-06-16 13:14:21,035 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:3079] 2023-06-16 13:14:21,037 >> ***** Running Evaluation *****
[INFO|trainer.py:3081] 2023-06-16 13:14:21,037 >> Num examples = 408
[INFO|trainer.py:3084] 2023-06-16 13:14:21,037 >> Batch size = 8
{'eval_loss': 0.6776408553123474, 'eval_accuracy': 0.6838235294117647, 'eval_f1': 0.8122270742358079, 'eval_combined_score': 0.7480253018237863, 'eval_runtime': 0.5202, 'eval_samples_per_second': 784.331, 'eval_steps_per_second': 49.982, 'epoch': 0.09}
3%|█▊ | 10/345 [00:02<01:02, 5.40it/s[INFO|trainer.py:2805] 2023-06-16 13:14:21,560 >> Saving model checkpoint to /tmp/mrpc/checkpoint-10
[INFO|configuration_utils.py:458] 2023-06-16 13:14:21,561 >> Configuration saved in /tmp/mrpc/checkpoint-10/config.json
[INFO|modeling_utils.py:1844] 2023-06-16 13:14:22,280 >> Model weights saved in /tmp/mrpc/checkpoint-10/pytorch_model.bin
[INFO|tokenization_utils_base.py:2194] 2023-06-16 13:14:22,280 >> tokenizer config file saved in /tmp/mrpc/checkpoint-10/tokenizer_config.json
[INFO|tokenization_utils_base.py:2201] 2023-06-16 13:14:22,281 >> Special tokens file saved in /tmp/mrpc/checkpoint-10/special_tokens_map.json
[2023-06-16 13:14:22,308] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step10 is about to be saved!
/home/sourab/miniconda3/envs/ml/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/home/sourab/miniconda3/envs/ml/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-06-16 13:14:22,314] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/mrpc/checkpoint-10/global_step10/mp_rank_00_model_states.pt
[2023-06-16 13:14:22,314] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/mrpc/checkpoint-10/global_step10/mp_rank_00_model_states.pt...
[2023-06-16 13:14:23,319] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/mrpc/checkpoint-10/global_step10/mp_rank_00_model_states.pt.
[2023-06-16 13:14:23,320] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/mrpc/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-06-16 13:14:26,180] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/mrpc/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-06-16 13:14:26,181] [INFO] [engine.py:3228:_save_zero_checkpoint] zero checkpoint saved /tmp/mrpc/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-06-16 13:14:26,181] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!
6%|███▌ | 20/345 [00:08<01:20, 4.06it/s][INFO|trainer.py:773] 2023-06-16 13:14:28,075 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:3079] 2023-06-16 13:14:28,077 >> ***** Running Evaluation *****
[INFO|trainer.py:3081] 2023-06-16 13:14:28,077 >> Num examples = 408
[INFO|trainer.py:3084] 2023-06-16 13:14:28,077 >> Batch size = 8
{'eval_loss': 0.6481188535690308, 'eval_accuracy': 0.6838235294117647, 'eval_f1': 0.8122270742358079, 'eval_combined_score': 0.7480253018237863, 'eval_runtime': 0.5179, 'eval_samples_per_second': 787.721, 'eval_steps_per_second': 50.198, 'epoch': 0.17}
...
52%|███████████████████████████████▊ | 180/345 [02:11<00:44, 3.72it/s][INFO|trainer.py:773] 2023-06-16 13:16:30,237 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:3079] 2023-06-16 13:16:30,239 >> ***** Running Evaluation *****
[INFO|trainer.py:3081] 2023-06-16 13:16:30,239 >> Num examples = 408
[INFO|trainer.py:3084] 2023-06-16 13:16:30,239 >> Batch size = 8
{'eval_loss': 0.36363303661346436, 'eval_accuracy': 0.8186274509803921, 'eval_f1': 0.8673835125448028, 'eval_combined_score': 0.8430054817625975, 'eval_runtime': 0.5171, 'eval_samples_per_second': 789.05, 'eval_steps_per_second': 50.283, 'epoch': 1.57}
```
<|||||>I encounter the same problem, even with DeepSpeed and FSDP.
It feels like it gets stuck when the model saves its weights.<|||||>Hello @Ricardokevins, please provide a minimal reproducible example. I clearly show above that things are working fine
<|||||>@Ricardokevins I hypothesise that it's a flash attention issue. It works fine with deepspeed only (for my case) and fsdp only (for @pacman100 )<|||||>> @Ricardokevins I hypothesise that it's a flash attention issue. It works fine with deepspeed only (for my case) and fsdp only (for @pacman100 )
This issue may be quite complex and unusual. Initially, when I was training the code in Alpaca-Lora using DeepSpeed, I encountered this problem (training got stuck, and the GPU utilization of some GPUs remained at 0%). This was before using Flash-attn.
Subsequently, I started training the code in FastChat using FSDP (which includes Flash-attn), and encountered similar issues.
Yesterday, I reinstalled all the environments, replaced the code with flash-attn from the FastChat issue, and started training using DeepSpeed. So far, I haven't encountered any problems.
Currently, I'm still unsure about the root cause of the issue, as I haven't faced it recently. If I encounter it again in the future, I will continue the discussion and seek your assistance. Thank you.<|||||>@Ricardokevins Oh nice that you fixed it! Can I ask for some advice since I'm still facing the issue:
- What do you mean by "replaced the code with flash-attn from the FastChat issue"? Did you mean [this patch from FastChat](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/llama_flash_attn_monkey_patch.py)?
- What's the cuda version of your system and environment? If you used a docker image, can you share which one worked for you?
- Do you mind sharing your training loss curves? I'm facing a strange issue where my deepspeed + flash attention setting yielded very volatile curves...

But FSDP + flash attention yielded smoother curves

<|||||>> @Ricardokevins Oh nice that you fixed it! Can I ask for some advice since I'm still facing the issue:
>
> * What do you mean by "replaced the code with flash-attn from the FastChat issue"? Did you mean [this patch from FastChat](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/llama_flash_attn_monkey_patch.py)?
> * What's the cuda version of your system and environment? If you used a docker image, can you share which one worked for you?
> * Do you mind sharing your training loss curves? I'm facing a strange issue where my deepspeed + flash attention setting yielded very volatile curves...
>
> 
>
> But FSDP + flash attention yielded smoother curves
>
> 
1. i use the code from here: https://github.com/lm-sys/FastChat/commit/3adc92d405038d316a3cb908886261231b058590?diff=split
2. cuda version 11.7
<img width="862" alt="image" src="https://github.com/huggingface/transformers/assets/43642508/7b9bf3a7-98ea-4972-9142-ee25325575c2">
<|||||>Thanks @Ricardokevins ! Btw I've fixed the issue by setting number of threads used for intraop parallelism to 1
```
torch.set_num_threads(1)
```
This [thread](https://discuss.pytorch.org/t/cpu-usage-far-too-high-and-training-inefficient/57228) explains why the above works.
Also, the Vicuna repo now supports [xformers](https://github.com/lm-sys/FastChat/pull/1255). FYI<|||||>> Thanks @Ricardokevins ! Btw I've fixed the issue by setting number of threads used for intraop parallelism to 1
>
> ```
> torch.set_num_threads(1)
> ```
>
> This [thread](https://discuss.pytorch.org/t/cpu-usage-far-too-high-and-training-inefficient/57228) explains why the above works.
>
> Also, the Vicuna repo now supports [xformers](https://github.com/lm-sys/FastChat/pull/1255). FYI
wow, i will try this if I encounter the problem again!<|||||>> Thanks @Ricardokevins ! Btw I've fixed the issue by setting number of threads used for intraop parallelism to 1
>
> ```
> torch.set_num_threads(1)
> ```
>
> This [thread](https://discuss.pytorch.org/t/cpu-usage-far-too-high-and-training-inefficient/57228) explains why the above works.
>
> Also, the Vicuna repo now supports [xformers](https://github.com/lm-sys/FastChat/pull/1255). FYI
Hey, I tried to use it, but it doesn't work
transformers | 24,312 | closed | Add stride/chunking to `TextClassificationPipeline` | # What does this PR do?
Adds sliding window/chunking functionality to `TextClassificationPipeline` (similar to what #21771 did for `TokenClassificationPipeline`).
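For context, here is a rough sketch of the kind of chunked classification this enables (this is not the PR's implementation; the checkpoint and the mean-over-chunks aggregation are illustrative assumptions):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def classify_long_text(text, max_length=512, stride=128):
    # Slide a window over the tokenized text, producing overlapping chunks
    enc = tokenizer(
        text,
        truncation=True,
        max_length=max_length,
        stride=stride,
        return_overflowing_tokens=True,
        padding=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]).logits
    probs = logits.softmax(-1).mean(dim=0)  # average the scores over chunks
    label_id = int(probs.argmax())
    return {"label": model.config.id2label[label_id], "score": float(probs[label_id])}
```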
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil
| 06-16-2023 00:15:22 | 06-16-2023 00:15:22 | I happened to implement this for another project, so I went straight to implementing a PR here (skipping opening an issue) to see if this would be a desirable feature for HF. I realized after opening this PR that I haven't finished updating the docstrings where appropriate.
I could also use some guidance on when & where to return the result as a list vs. a list-of-lists; I'll admit I worked out [this section of code](https://github.com/huggingface/transformers/blob/52253ed1b1722a8e60e2df29ce0b1339dce07d9f/src/transformers/pipelines/text_classification.py#L222) mainly through trial-and-error with the existing test suite--not an ideal way to program.
Much of this code is copied/adapted from `TokenClassificationPipeline`. It looks like the changes also broke the legacy functionality of text pairs as shown in tests such as [`tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_pipeline_text_classification`](https://app.circleci.com/pipelines/github/huggingface/transformers/66591/workflows/85501b27-19f8-4dfe-a24e-242592463a89/jobs/829459); I think I can fix that if this new functionality turns out to be something the HuggingFace team wants to add.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24312). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,311 | closed | Update MMS integration docs | # What does this PR do?
Current MMS documentation is only focused on ASR. Update the doc to show examples for TTS, LID.
cc. @patrickvonplaten @sanchit-gandhi
| 06-15-2023 23:41:45 | 06-15-2023 23:41:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the doc updates @vineelpratap! |
transformers | 24,310 | closed | Tied weights load | # What does this PR do?
This continues the cleanup of model loading a bit by:
1. Using the new `_tied_weight_keys` class variable when deleting weights without warning for safetensors serialization
2. Fixing the logic that deletes tied params from the missing keys and adding a test (which fails on main)
3. As discussed internally, using `logger.info` for the unexpected keys warning when the class used to load the model does not match the class in the config. | 06-15-2023 20:32:05 | 06-15-2023 20:32:05 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 24,309 | closed | saving model fails with deepspeed | ### System Info
System Info
transformers v4.30.0
python 3.8
There is a bug [here](https://github.com/huggingface/transformers/blob/0b7b4429c78de68acaf72224eb6dae43616d820c/src/transformers/trainer.py#LL2257C59-L2257C59): `PretrainedModel` does not have a `save_checkpoint` method.
Error trace
```
Traceback (most recent call last):
File "funtuner/trainer.py", line 98, in train
trainer.train()
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1540, in train
return inner_training_loop(
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1884, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2196, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/transformers/trainer.py", line 2257, in _save_checkpoint
self.model_wrapped.save_checkpoint(output_dir)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/peft/peft_model.py", line 289, in __getattr__
return getattr(self.base_model, name)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/peft/tuners/lora.py", line 206, in __getattr__
return getattr(self.model, name)
File "/nfshome/store03/users/c.scmse/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GPTNeoXForCausalLM' object has no attribute 'save_checkpoint'
```
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My code is [here](https://github.com/explodinggradients/Funtuner/blob/main/funtuner/trainer.py)
Run python3 funtuner/trainer.py
- export PYTHONPATH="${PYTHONPATH}:/your-path/Funtuner"
- please change the log_dir to your folder [here](https://github.com/explodinggradients/Funtuner/blob/c4e66209d5ee276a7eb8caf582435f1eaafbf18f/funtuner/config/config.yaml#L4) also you might want to set log_wandb=False
- `dev-train` branch
### Expected behavior
Please ensure that model training is running atleast 1000 steps without any errors. | 06-15-2023 18:24:03 | 06-15-2023 18:24:03 | Run it on 2 or more GPUs and it is working as expected.
<img width="1439" alt="Screenshot 2023-06-16 at 7 08 58 AM" src="https://github.com/huggingface/transformers/assets/13534540/d2ed6d49-886a-4798-a493-81d1984b1f39">
<|||||>So, the issue is that the model isn't getting wrapped in DeepSpeedEngine when run on a single GPU. Running on single GPU makes little sense to me with stage 2 because even with offloading, you don't get any considerable vram savings as the optimizer states and gradients in your case will be tiny as you are using PEFT.
As seen above optimizer state is 30MB compared to 11.2GB of Model<|||||>It is working for single GPU for the official example scripts. So, some issue with your codebase.
```
cd transformers
export TASK_NAME=mrpc
export CUDA_VISIBLE_DEVICES="0,1"
torchrun --nnodes 1 --nproc-per-node 1 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ~/transformers/tests/deepspeed/ds_config_zero2.json --save_steps 10 --evaluation_strategy "epoch"
```<|||||>Okay, it is also working with your code but you need to use distributed launcher `torchrun`/`accelerate launch`/`deepspeed` instead of just `python`
command I am running:
```
torchrun --nproc-per-node 1 funtuner/trainer.py
```<|||||>
<img width="1430" alt="Screenshot 2023-06-16 at 7 18 46 AM" src="https://github.com/huggingface/transformers/assets/13534540/91994c8c-4980-4155-97c6-17e05e04aff0">
<|||||>Marking this as solved. Feel free to close this. Thank you for giving a clear reproducer with correct steps detailed avoiding a lot of back and forth; helping us resolve the issue faster.<|||||>Thanks for your reply @pacman100 . But I'm still facing the same issue even with multiple GPUs. Doesn't Deepspeed automatically use all the available GPUs? I even tried with `torchrun` it gets errors on the same line. <|||||>No, it doesn't automatically use all the available GPUs<|||||>As seen above, I'm able to save ckpts properly even with a single GPU when launching via torchrun. <|||||>Hey @pacman100, It was a mistake from my side (disk space was full), but the error didn't show up properly. It works fine now. You're the best :) |
transformers | 24,308 | closed | TypeError: cannot pickle 'module' object | ### System Info
transformers: 4.30.0
platform: Ubuntu 20.04.6 LTS x86_64
python: 3.8.5
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello team,
I was following this [tutorial](https://huggingface.co/docs/transformers/tasks/object_detection) on Hugging Face for object detection using the DETR model on my custom dataset, which has the same dataset structure as `cpp5` (the one used in the tutorial).
I've used slightly different training arguments:
```
from transformers import TrainingArguments
from transformers.integrations import MLflowCallback, AzureMLCallback

training_args = TrainingArguments(
    output_dir="detr-resnet-50_finetuned_loss-run",
    per_device_train_batch_size=4,
    num_train_epochs=30,
    fp16=True,
    save_steps=200,
    logging_steps=50,
    learning_rate=1e-5,
    weight_decay=1e-4,
    save_total_limit=2,
    remove_unused_columns=False,
    push_to_hub=False,
    # dataloader_num_workers=4,  # Adjust the number of dataloader workers according to your system
    logging_dir="logs",
    report_to="mlflow",  # Report metrics to MLflow
    # load_best_model_at_end=True,
    metric_for_best_model="loss",
    greater_is_better=False,
)

# Create the MLflow callback
mlflow_callback = MLflowCallback()
azureml_callback = AzureMLCallback()

# Integrate the MLflow callback in TrainingArguments
training_args.callbacks = [mlflow_callback, azureml_callback]

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collate_fn,
    train_dataset=dataset["train"],
    eval_dataset=dataset["valid"],
    tokenizer=image_processor,
)
trainer.train()
```
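(For reference, a sketch of the pattern where callbacks are passed to `Trainer` itself rather than attached to `TrainingArguments`, which `Trainer` pickles with `torch.save` when checkpointing; the variable names reuse the objects from the script above:)

```python
from transformers import Trainer, TrainingArguments
from transformers.integrations import AzureMLCallback

training_args = TrainingArguments(
    output_dir="detr-resnet-50_finetuned_loss-run",
    report_to="mlflow",  # this already registers the MLflow callback
)
trainer = Trainer(
    model=model,                    # defined earlier, as in the script above
    args=training_args,
    data_collator=collate_fn,
    train_dataset=dataset["train"],
    eval_dataset=dataset["valid"],
    tokenizer=image_processor,
    callbacks=[AzureMLCallback()],  # extra callbacks go here
)
```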
With the script above, at the 200th step I am receiving:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:12 │
│ │
│ 9 │ tokenizer=image_processor, │
│ 10 ) │
│ 11 │
│ ❱ 12 trainer.train() │
│ 13 │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:1645 in train │
│ │
│ 1642 │ │ inner_training_loop = find_executable_batch_size( │
│ 1643 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1644 │ │ ) │
│ ❱ 1645 │ │ return inner_training_loop( │
│ 1646 │ │ │ args=args, │
│ 1647 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1648 │ │ │ trial=trial, │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2011 in │
│ _inner_training_loop │
│ │
│ 2008 │ │ │ │ │ self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epo │
│ 2009 │ │ │ │ │ self.control = self.callback_handler.on_step_end(args, self.state, s │
│ 2010 │ │ │ │ │ │
│ ❱ 2011 │ │ │ │ │ self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_k │
│ 2012 │ │ │ │ else: │
│ 2013 │ │ │ │ │ self.control = self.callback_handler.on_substep_end(args, self.state │
│ 2014 │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2323 in │
│ _maybe_log_save_evaluate │
│ │
│ 2320 │ │ │ │ self.lr_scheduler.step(metrics[metric_to_check]) │
│ 2321 │ │ │
│ 2322 │ │ if self.control.should_save: │
│ ❱ 2323 │ │ │ self._save_checkpoint(model, trial, metrics=metrics) │
│ 2324 │ │ │ self.control = self.callback_handler.on_save(self.args, self.state, self.con │
│ 2325 │ │
│ 2326 │ def _load_rng_state(self, checkpoint): │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2380 in │
│ _save_checkpoint │
│ │
│ 2377 │ │ │
│ 2378 │ │ run_dir = self._get_output_dir(trial=trial) │
│ 2379 │ │ output_dir = os.path.join(run_dir, checkpoint_folder) │
│ ❱ 2380 │ │ self.save_model(output_dir, _internal_call=True) │
│ 2381 │ │ if self.is_deepspeed_enabled: │
│ 2382 │ │ │ # under zero3 model file itself doesn't get saved since it's bogus! Unless d │
│ 2383 │ │ │ # config `stage3_gather_16bit_weights_on_model_save` is True │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2878 in │
│ save_model │
│ │
│ 2875 │ │ │ │ │ self.model_wrapped.save_checkpoint(output_dir) │
│ 2876 │ │ │
│ 2877 │ │ elif self.args.should_save: │
│ ❱ 2878 │ │ │ self._save(output_dir) │
│ 2879 │ │ │
│ 2880 │ │ # Push to the Hub when `save_model` is called by the user. │
│ 2881 │ │ if self.args.push_to_hub and not _internal_call: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2944 in _save │
│ │
│ 2941 │ │ │ self.tokenizer.save_pretrained(output_dir) │
│ 2942 │ │ │
│ 2943 │ │ # Good practice: save your training arguments together with the trained model │
│ ❱ 2944 │ │ torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME)) │
│ 2945 │ │
│ 2946 │ def store_flos(self): │
│ 2947 │ │ # Storing the number of floating-point operations that went into the model │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/serialization.py:441 in save │
│ │
│ 438 │ │
│ 439 │ if _use_new_zipfile_serialization: │
│ 440 │ │ with _open_zipfile_writer(f) as opened_zipfile: │
│ ❱ 441 │ │ │ _save(obj, opened_zipfile, pickle_module, pickle_protocol) │
│ 442 │ │ │ return │
│ 443 │ else: │
│ 444 │ │ with _open_file_like(f, 'wb') as opened_file: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/serialization.py:653 in _save │
│ │
│ 650 │ data_buf = io.BytesIO() │
│ 651 │ pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol) │
│ 652 │ pickler.persistent_id = persistent_id │
│ ❱ 653 │ pickler.dump(obj) │
│ 654 │ data_value = data_buf.getvalue() │
│ 655 │ zip_file.write_record('data.pkl', data_value, len(data_value)) │
│ 656 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: cannot pickle 'module' object
```
### Expected behavior
Artifact should've been saved at 200th step | 06-15-2023 16:23:31 | 06-15-2023 16:23:31 | cc @sgugger <|||||>It is possible that either of those callbacks (MlFlow or Azure) is inserting something in the state that cannot be serialized with `pickle`. We do not maintain those integrations ourselves, so I suggest you ping the author of the callback making your code fail (after trying to remove one or the other) :-)<|||||>Thanks @sgugger
I commented `MLFlowCallBack()` added by @noise-field in [#8016](https://github.com/huggingface/transformers/pull/8016)
and the code worked fine till 350 steps but I received a new error at the end due to `AzureMLCallback()` added by @davidefiocco in [#8062](https://github.com/huggingface/transformers/pull/8062#issue-729809630)
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:12 │
│ │
│ 9 │ tokenizer=image_processor, │
│ 10 ) │
│ 11 │
│ ❱ 12 trainer.train() │
│ 13 │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:1645 in train │
│ │
│ 1642 │ │ inner_training_loop = find_executable_batch_size( │
│ 1643 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1644 │ │ ) │
│ ❱ 1645 │ │ return inner_training_loop( │
│ 1646 │ │ │ args=args, │
│ 1647 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1648 │ │ │ trial=trial, │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2011 in │
│ _inner_training_loop │
│ │
│ 2008 │ │ │ │ │ self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epo │
│ 2009 │ │ │ │ │ self.control = self.callback_handler.on_step_end(args, self.state, s │
│ 2010 │ │ │ │ │ │
│ ❱ 2011 │ │ │ │ │ self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_k │
│ 2012 │ │ │ │ else: │
│ 2013 │ │ │ │ │ self.control = self.callback_handler.on_substep_end(args, self.state │
│ 2014 │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer.py:2324 in │
│ _maybe_log_save_evaluate │
│ │
│ 2321 │ │ │
│ 2322 │ │ if self.control.should_save: │
│ 2323 │ │ │ self._save_checkpoint(model, trial, metrics=metrics) │
│ ❱ 2324 │ │ │ self.control = self.callback_handler.on_save(self.args, self.state, self.con │
│ 2325 │ │
│ 2326 │ def _load_rng_state(self, checkpoint): │
│ 2327 │ │ # Load RNG states from `checkpoint` │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer_callback.py:386 in │
│ on_save │
│ │
│ 383 │ │
│ 384 │ def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerCont │
│ 385 │ │ control.should_save = False │
│ ❱ 386 │ │ return self.call_event("on_save", args, state, control) │
│ 387 │ │
│ 388 │ def on_log(self, args: TrainingArguments, state: TrainerState, control: TrainerContr │
│ 389 │ │ control.should_log = False │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/trainer_callback.py:397 in │
│ call_event │
│ │
│ 394 │ │
│ 395 │ def call_event(self, event, args, state, control, **kwargs): │
│ 396 │ │ for callback in self.callbacks: │
│ ❱ 397 │ │ │ result = getattr(callback, event)( │
│ 398 │ │ │ │ args, │
│ 399 │ │ │ │ state, │
│ 400 │ │ │ │ control, │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/integrations.py:1055 in │
│ on_save │
│ │
│ 1052 │ │ │ ckpt_dir = f"checkpoint-{state.global_step}" │
│ 1053 │ │ │ artifact_path = os.path.join(args.output_dir, ckpt_dir) │
│ 1054 │ │ │ logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time │
│ ❱ 1055 │ │ │ self._ml_flow.pyfunc.log_model( │
│ 1056 │ │ │ │ ckpt_dir, │
│ 1057 │ │ │ │ artifacts={"model_path": artifact_path}, │
│ 1058 │ │ │ │ python_model=self._ml_flow.pyfunc.PythonModel(), │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/pyfunc/__init__.py:1578 in │
│ log_model │
│ │
│ 1575 │ :return: A :py:class:`ModelInfo <mlflow.models.model.ModelInfo>` instance that conta │
│ 1576 │ │ │ metadata of the logged model. │
│ 1577 │ """ │
│ ❱ 1578 │ return Model.log( │
│ 1579 │ │ artifact_path=artifact_path, │
│ 1580 │ │ flavor=mlflow.pyfunc, │
│ 1581 │ │ loader_module=loader_module, │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/models/model.py:487 in log │
│ │
│ 484 │ │ │ run_id = mlflow.tracking.fluent._get_or_start_run().info.run_id │
│ 485 │ │ │ mlflow_model = cls(artifact_path=artifact_path, run_id=run_id, metadata=meta │
│ 486 │ │ │ flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs) │
│ ❱ 487 │ │ │ mlflow.tracking.fluent.log_artifacts(local_path, mlflow_model.artifact_path) │
│ 488 │ │ │ try: │
│ 489 │ │ │ │ mlflow.tracking.fluent._record_logged_model(mlflow_model) │
│ 490 │ │ │ except MlflowException: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/tracking/fluent.py:810 in │
│ log_artifacts │
│ │
│ 807 │ │ │ mlflow.log_artifacts("data", artifact_path="states") │
│ 808 │ """ │
│ 809 │ run_id = _get_or_start_run().info.run_id │
│ ❱ 810 │ MlflowClient().log_artifacts(run_id, local_dir, artifact_path) │
│ 811 │
│ 812 │
│ 813 def log_text(text: str, artifact_file: str) -> None: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/tracking/client.py:1048 in │
│ log_artifacts │
│ │
│ 1045 │ │ │ artifact: states │
│ 1046 │ │ │ is_dir: True │
│ 1047 │ │ """ │
│ ❱ 1048 │ │ self._tracking_client.log_artifacts(run_id, local_dir, artifact_path) │
│ 1049 │ │
│ 1050 │ @contextlib.contextmanager │
│ 1051 │ def _log_artifact_helper(self, run_id, artifact_file): │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/client │
│ .py:448 in log_artifacts │
│ │
│ 445 │ │ :param local_dir: Path to the directory of files to write. │
│ 446 │ │ :param artifact_path: If provided, the directory in ``artifact_uri`` to write to │
│ 447 │ │ """ │
│ ❱ 448 │ │ self._get_artifact_repo(run_id).log_artifacts(local_dir, artifact_path) │
│ 449 │ │
│ 450 │ def list_artifacts(self, run_id, path=None): │
│ 451 │ │ """ │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_store/artifact/artifact_ │
│ repo.py:88 in log_artifacts │
│ │
│ 85 │ │ if artifact_path is None: │
│ 86 │ │ │ dest_path = "" │
│ 87 │ │ │
│ ❱ 88 │ │ self.artifacts.upload_dir(local_dir, dest_path) │
│ 89 │ │
│ 90 │ def list_artifacts(self, path): │
│ 91 │ │ """ │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_client/artifact/run_arti │
│ fact_client.py:90 in upload_dir │
│ │
│ 87 │ │ │ │ local_paths.append(local_file_path) │
│ 88 │ │ │
│ 89 │ │ # Make batch request to create empty artifacts │
│ ❱ 90 │ │ empty_artifact_res = self._create_empty_artifacts(paths=remote_paths, run_id=sel │
│ 91 │ │ │
│ 92 │ │ result = self._upload_files( │
│ 93 │ │ │ local_paths=local_paths, remote_paths=remote_paths, empty_artifact_content=e │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_client/artifact/run_arti │
│ fact_client.py:146 in _create_empty_artifacts │
│ │
│ 143 │ │ │
│ 144 │ │ artifacts = [ArtifactPath(path=path) for path in paths] │
│ 145 │ │ │
│ ❱ 146 │ │ response = self._client.run_artifacts.batch_create_empty_artifacts( │
│ 147 │ │ │ subscription_id=self._service_context.subscription_id, │
│ 148 │ │ │ resource_group_name=self._service_context.resource_group_name, │
│ 149 │ │ │ workspace_name=self._service_context.workspace_name, │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/mlflow/_restclient/run_artifact/ │
│ operations/_run_artifacts_operations.py:1116 in batch_create_empty_artifacts │
│ │
│ 1113 │ │ │ body_content = None │
│ 1114 │ │ body_content_kwargs['content'] = body_content │
│ 1115 │ │ request = self._client.post(url, query_parameters, header_parameters, **body_con │
│ ❱ 1116 │ │ pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs) │
│ 1117 │ │ response = pipeline_response.http_response │
│ 1118 │ │ │
│ 1119 │ │ if response.status_code not in [200]: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:205 in run │
│ │
│ 202 │ │ │ if self._impl_policies │
│ 203 │ │ │ else _TransportRunner(self._transport) │
│ 204 │ │ ) │
│ ❱ 205 │ │ return first_node.send(pipeline_request) # type: ignore │
│ 206 │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/mgmt/core/policies/_base.py:47 in │
│ send │
│ │
│ 44 │ def send(self, request): │
│ 45 │ │ # type: (PipelineRequest[HTTPRequestType], Any) -> PipelineResponse[HTTPRequestT │
│ 46 │ │ http_request = request.http_request │
│ ❱ 47 │ │ response = self.next.send(request) │
│ 48 │ │ if response.http_response.status_code == 409: │
│ 49 │ │ │ rp_name = self._check_rp_not_registered_err(response) │
│ 50 │ │ │ if rp_name: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_redirect.p │
│ y:160 in send │
│ │
│ 157 │ │ retryable = True │
│ 158 │ │ redirect_settings = self.configure_redirects(request.context.options) │
│ 159 │ │ while retryable: │
│ ❱ 160 │ │ │ response = self.next.send(request) │
│ 161 │ │ │ redirect_location = self.get_redirect_location(response) │
│ 162 │ │ │ if redirect_location and redirect_settings["allow"]: │
│ 163 │ │ │ │ retryable = self.increment( │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_retry.py:5 │
│ 02 in send │
│ │
│ 499 │ │ │ │ │ │ else: │
│ 500 │ │ │ │ │ │ │ is_response_error = True │
│ 501 │ │ │ │ │ │ continue │
│ ❱ 502 │ │ │ │ raise err │
│ 503 │ │ │ finally: │
│ 504 │ │ │ │ end_time = time.time() │
│ 505 │ │ │ │ if absolute_timeout: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_retry.py:4 │
│ 74 in send │
│ │
│ 471 │ │ │ try: │
│ 472 │ │ │ │ start_time = time.time() │
│ 473 │ │ │ │ self._configure_timeout(request, absolute_timeout, is_response_error) │
│ ❱ 474 │ │ │ │ response = self.next.send(request) │
│ 475 │ │ │ │ if self.is_retry(retry_settings, response): │
│ 476 │ │ │ │ │ retry_active = self.increment(retry_settings, response=response) │
│ 477 │ │ │ │ │ if retry_active: │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/policies/_authentica │
│ tion.py:117 in send │
│ │
│ 114 │ │ """ │
│ 115 │ │ self.on_request(request) │
│ 116 │ │ try: │
│ ❱ 117 │ │ │ response = self.next.send(request) │
│ 118 │ │ │ self.on_response(request, response) │
│ 119 │ │ except Exception: # pylint:disable=broad-except │
│ 120 │ │ │ self.on_exception(request) │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:69 in send │
│ │
│ 66 │ │ """ │
│ 67 │ │ _await_result(self._policy.on_request, request) │
│ 68 │ │ try: │
│ ❱ 69 │ │ │ response = self.next.send(request) │
│ 70 │ │ except Exception: # pylint: disable=broad-except │
│ 71 │ │ │ _await_result(self._policy.on_exception, request) │
│ 72 │ │ │ raise │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/_base.py:100 in send │
│ │
│ 97 │ │ """ │
│ 98 │ │ return PipelineResponse( │
│ 99 │ │ │ request.http_request, │
│ ❱ 100 │ │ │ self._sender.send(request.http_request, **request.context.options), │
│ 101 │ │ │ context=request.context, │
│ 102 │ │ ) │
│ 103 │
│ │
│ /anaconda/envs/azureml_py38/lib/python3.8/site-packages/azure/core/pipeline/transport/_requests_ │
│ basic.py:376 in send │
│ │
│ 373 │ │ │ error = ServiceRequestError(err, error=err) │
│ 374 │ │ │
│ 375 │ │ if error: │
│ ❱ 376 │ │ │ raise error │
│ 377 │ │ if _is_rest(request): │
│ 378 │ │ │ from azure.core.rest._requests_basic import RestRequestsTransportResponse │
│ 379 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ServiceResponseError: HTTPSConnectionPool(host='centralindia.api.azureml.ms', port=443): Read timed out. (read
timeout=300)
```
I think this has something to do with AzureML not registering the request within the given time. In the past, I faced a similar issue in yolov5 while logging these artifacts to AzureML: I used to get this same error, so I added a retry loop with some wait time and it worked (a rough sketch is below).
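Roughly what I mean is a small wrapper like this (the helper name, wait time and retry count are placeholders, not my exact code):
```python
import time

def with_retries(log_fn, max_retries=3, wait_seconds=30):
    """Call an artifact-logging function, retrying on transient read timeouts."""
    for attempt in range(1, max_retries + 1):
        try:
            return log_fn()
        except Exception as err:  # e.g. ServiceResponseError / read timeouts
            if attempt == max_retries:
                raise
            print(f"Upload failed (attempt {attempt}): {err}, retrying in {wait_seconds}s")
            time.sleep(wait_seconds)

# usage, e.g.: with_retries(lambda: mlflow.log_artifact("outputs/model.bin"))
```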
So please let me know if you have found any resolution for this, @davidefiocco :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,307 | closed | Update test versions on README.md | # What does this PR do?
Hi @sgugger @amyeroberts ,
I have raised this PR to improve the docs by updating the test versions mentioned in the README.md file. I referred to the setup.py file to update them. It is related to issue #24263. I have followed the recommended documentation format as advised. Kindly check and advise.
Fix #24263 | 06-15-2023 16:10:07 | 06-15-2023 16:10:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, @sgugger. This is my first successful contribution to an open-source project. |
transformers | 24,306 | closed | Explicit arguments in `from_pretrained` | # What does this PR do?
[still incomplete]
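To illustrate the intent, here is a purely hypothetical before/after sketch (not the actual diff) of what making the arguments explicit means:
```python
class Before:
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
        # options are hidden inside **kwargs and popped one by one
        cache_dir = kwargs.pop("cache_dir", None)
        revision = kwargs.pop("revision", "main")
        return cls()


class After:
    @classmethod
    def from_pretrained(
        cls,
        pretrained_model_name_or_path,
        *model_args,
        cache_dir=None,
        revision="main",
        **kwargs,
    ):
        # the supported options are visible in the signature itself
        return cls()
```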
Need to apply the same changes to other files containing `from_pretrained` (other frameworks, other classes like config, processor, auto, etc.), but @sgugger let me know if I am already lost at this early stage. | 06-15-2023 14:48:57 | 06-15-2023 14:48:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>TODO:
- for TF/Flax model `from_pretrained`
- for tokenizer/processors
- for auto<|||||>@sgugger It would be nice if you could take a quick look 🙏. And do you want me to deal with all frameworks (TF/Flax), tokenizer/processor, and also `auto` in this PR, or am I allowed to separate them?<|||||>You have a lot of tests failing to fix 😅, are you sure you want a review yet?<|||||>@sgugger No, I didn't request a new review since the last time you had a look. But the changes I pushed triggered you 😆
transformers | 24,305 | closed | [AutoModel] Add AutoModelForTextEncoding | # What does this PR do?
Adds an AutoModel class for text encoding (used when you want to extract the text encoder from an encoder-decoder architecture).
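A minimal usage sketch of the intended API (the checkpoint name is just an example):
```python
from transformers import AutoModelForTextEncoding

# Loads just the text encoder from an encoder-decoder checkpoint such as T5.
text_encoder = AutoModelForTextEncoding.from_pretrained("t5-base")
```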
This facilitates loading a t5 encoder from t5 enc-dec model weights (as is done in Music Gen in #24109) | 06-15-2023 13:27:34 | 06-15-2023 13:27:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,304 | open | SpikeGPT | ### Feature request
Extract the spiking nature of the LLM and port that set of features over for training/inference.
https://github.com/ridgerchu/SpikeGPT
### Motivation
The benefit would be much lower computational cost (a roughly 22x reduction).
### Your contribution
I am willing to test, trace down bugs, and push. I'm still new in the world of llm backend coding. | 06-15-2023 13:26:35 | 06-15-2023 13:26:35 | Hi @thistleknot, thanks for opening this feature request.
Just skimming the repo, my understanding is that SpikeGPT already has a set of pretrained weights available.
If you (or someone else) would like to make this model available through the transformers API, the easiest and fastest way is to add it directly on the hub - here's a guide: https://huggingface.co/docs/transformers/custom_models.<|||||>They have a 200m model on the repo. Maybe I'm mistaken and there is nothing
that needs to be done. Wasn't sure if it's integrated into the ecosystem, but
I'll double back and check
<|||||>Some weights have already been uploaded to the Hub (a loading sketch follows the list):
* https://huggingface.co/ridger/SpikeGPT-OpenWebText-216M
* https://huggingface.co/ridger/SpikeGPT-BookCorpus
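For reference, once a custom modeling file exists on one of those repos, loading could look roughly like this (a sketch only; `trust_remote_code=True` will only work after the custom code has actually been added to the repo):
```python
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "ridger/SpikeGPT-OpenWebText-216M"
config = AutoConfig.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
```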
However, to be able to use them with the transformers API, e.g. `AutoModel.from_pretrained(checkpoint)`, a modeling file would also need to be created and added to the hub, e.g. like [this one for falcon](https://huggingface.co/tiiuae/falcon-7b/blob/main/modelling_RW.py). <|||||>Hi! If there is no API yet for this model, may I work on it?
If yes, is there a timeline for how soon one has to ship it and make it available through the `transformers` API? <|||||>This model is available online without the need for an API.
|
transformers | 24,303 | closed | RuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before using the logging utility. | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am using Google Colab to run the starchat-beta model from here: https://huggingface.co/HuggingFaceH4/starchat-beta
Google Colab Link: https://colab.research.google.com/drive/1I1-zAY3AYNEiZ9Lk-35yqNrOncLFUsSo#scrollTo=-vH0ityRx9u4
**Step 1**: Install the required libraries on Colab
```
!pip install transformers
!pip install accelerate
!pip install xformers
```
**Step 2:** Run the sample code below from the model card of the starchat-beta model
```
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16, device_map="auto")
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
```
**Issue:** The last line is producing the following error
> RuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before using the logging utility.
### Expected behavior
Text output similar to the one below (it may not match exactly):
```
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
``` | 06-15-2023 12:44:53 | 06-15-2023 12:44:53 | Could you share your accelerate version ? Might be outdated: https://github.com/huggingface/accelerate/issues/835<|||||>> Could you share your accelerate version ? Might be outdated: [huggingface/accelerate#835](https://github.com/huggingface/accelerate/issues/835)
0.20.3<|||||>@sgugger maybe for input ? Seems like `accelerate` issue but I cannot find anything relevant.<|||||>Could you provide us with the whole traceback? Also cc @muellerzr <|||||>A full traceback is definitely needed here to know what logger is being init'd wrong <|||||>> Could you provide us with the whole traceback? Also cc @muellerzr
```
in <cell line: 2>:2 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py:201 in │
│ __call__ │
│ │
│ 198 │ │ │ - **generated_token_ids** (`torch.Tensor` or `tf.Tensor`, present when `retu │
│ 199 │ │ │ ids of the generated text. │
│ 200 │ │ """ │
│ ❱ 201 │ │ return super().__call__(text_inputs, **kwargs) │
│ 202 │ │
│ 203 │ def preprocess(self, prompt_text, prefix="", handle_long_generation=None, **generate │
│ 204 │ │ inputs = self.tokenizer( │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py:1120 in __call__ │
│ │
│ 1117 │ │ │ │ ) │
│ 1118 │ │ │ ) │
│ 1119 │ │ else: │
│ ❱ 1120 │ │ │ return self.run_single(inputs, preprocess_params, forward_params, postproces │
│ 1121 │ │
│ 1122 │ def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params): │
│ 1123 │ │ return [self.run_single(item, preprocess_params, forward_params, postprocess_par │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py:1127 in run_single │
│ │
│ 1124 │ │
│ 1125 │ def run_single(self, inputs, preprocess_params, forward_params, postprocess_params): │
│ 1126 │ │ model_inputs = self.preprocess(inputs, **preprocess_params) │
│ ❱ 1127 │ │ model_outputs = self.forward(model_inputs, **forward_params) │
│ 1128 │ │ outputs = self.postprocess(model_outputs, **postprocess_params) │
│ 1129 │ │ return outputs │
│ 1130 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py:1026 in forward │
│ │
│ 1023 │ │ │ │ inference_context = self.get_inference_context() │
│ 1024 │ │ │ │ with inference_context(): │
│ 1025 │ │ │ │ │ model_inputs = self._ensure_tensor_on_device(model_inputs, device=se │
│ ❱ 1026 │ │ │ │ │ model_outputs = self._forward(model_inputs, **forward_params) │
│ 1027 │ │ │ │ │ model_outputs = self._ensure_tensor_on_device(model_outputs, device= │
│ 1028 │ │ │ else: │
│ 1029 │ │ │ │ raise ValueError(f"Framework {self.framework} is not supported") │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py:263 in │
│ _forward │
│ │
│ 260 │ │ │ │ generate_kwargs["min_length"] += prefix_length │
│ 261 │ │ │
│ 262 │ │ # BS x SL │
│ ❱ 263 │ │ generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=att │
│ 264 │ │ out_b = generated_sequence.shape[0] │
│ 265 │ │ if self.framework == "pt": │
│ 266 │ │ │ generated_sequence = generated_sequence.reshape(in_b, out_b // in_b, *genera │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115 in decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1572 in generate │
│ │
│ 1569 │ │ │ ) │
│ 1570 │ │ │ │
│ 1571 │ │ │ # 13. run sample │
│ ❱ 1572 │ │ │ return self.sample( │
│ 1573 │ │ │ │ input_ids, │
│ 1574 │ │ │ │ logits_processor=logits_processor, │
│ 1575 │ │ │ │ logits_warper=logits_warper, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2619 in sample │
│ │
│ 2616 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │
│ 2617 │ │ │ │
│ 2618 │ │ │ # forward pass to get next token │
│ ❱ 2619 │ │ │ outputs = self( │
│ 2620 │ │ │ │ **model_inputs, │
│ 2621 │ │ │ │ return_dict=True, │
│ 2622 │ │ │ │ output_attentions=output_attentions, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py: │
│ 809 in forward │
│ │
│ 806 │ │ """ │
│ 807 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 808 │ │ │
│ ❱ 809 │ │ transformer_outputs = self.transformer( │
│ 810 │ │ │ input_ids, │
│ 811 │ │ │ past_key_values=past_key_values, │
│ 812 │ │ │ attention_mask=attention_mask, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py: │
│ 674 in forward │
│ │
│ 671 │ │ │ │ │ encoder_attention_mask, │
│ 672 │ │ │ │ ) │
│ 673 │ │ │ else: │
│ ❱ 674 │ │ │ │ outputs = block( │
│ 675 │ │ │ │ │ hidden_states, │
│ 676 │ │ │ │ │ layer_past=layer_past, │
│ 677 │ │ │ │ │ attention_mask=attention_mask, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:165 in new_forward │
│ │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ 164 │ │ else: │
│ ❱ 165 │ │ │ output = old_forward(*args, **kwargs) │
│ 166 │ │ return module._hf_hook.post_forward(module, output) │
│ 167 │ │
│ 168 │ module.forward = new_forward │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py: │
│ 315 in forward │
│ │
│ 312 │ │ Tuple[torch.Tensor], Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor, torc │
│ 313 │ ]: │
│ 314 │ │ residual = hidden_states │
│ ❱ 315 │ │ hidden_states = self.ln_1(hidden_states) │
│ 316 │ │ attn_outputs = self.attn( │
│ 317 │ │ │ hidden_states, │
│ 318 │ │ │ layer_past=layer_past, │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:160 in new_forward │
│ │
│ 157 │ │
│ 158 │ @functools.wraps(old_forward) │
│ 159 │ def new_forward(*args, **kwargs): │
│ ❱ 160 │ │ args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs) │
│ 161 │ │ if module._hf_hook.no_grad: │
│ 162 │ │ │ with torch.no_grad(): │
│ 163 │ │ │ │ output = old_forward(*args, **kwargs) │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:282 in pre_forward │
│ │
│ 279 │ │ │ for name, _ in named_module_tensors( │
│ 280 │ │ │ │ module, include_buffers=self.offload_buffers, recurse=self.place_submodu │
│ 281 │ │ │ ): │
│ ❱ 282 │ │ │ │ set_module_tensor_to_device(module, name, self.execution_device, value=s │
│ 283 │ │ │
│ 284 │ │ return send_to_device(args, self.execution_device), send_to_device( │
│ 285 │ │ │ kwargs, self.execution_device, skip_keys=self.skip_keys │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/utils/offload.py:123 in __getitem__ │
│ │
│ 120 │ │ self.prefix = prefix │
│ 121 │ │
│ 122 │ def __getitem__(self, key): │
│ ❱ 123 │ │ return self.dataset[f"{self.prefix}{key}"] │
│ 124 │ │
│ 125 │ def __iter__(self): │
│ 126 │ │ return iter([key for key in self.dataset if key.startswith(self.prefix)]) │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/utils/offload.py:176 in __getitem__ │
│ │
│ 173 │ │ │ │ raise ImportError("These offloaded weights require the use of safetensor │
│ 174 │ │ │ │
│ 175 │ │ │ if "SAFETENSORS_FAST_GPU" not in os.environ: │
│ ❱ 176 │ │ │ │ logger.info("Enabling fast loading with safetensors by setting `SAFETENS │
│ 177 │ │ │ │ os.environ["SAFETENSORS_FAST_GPU"] = "1" │
│ 178 │ │ │ │
│ 179 │ │ │ from safetensors import safe_open │
│ │
│ /usr/lib/python3.10/logging/__init__.py:1841 in info │
│ │
│ 1838 │ │ """ │
│ 1839 │ │ Delegate an info call to the underlying logger. │
│ 1840 │ │ """ │
│ ❱ 1841 │ │ self.log(INFO, msg, *args, **kwargs) │
│ 1842 │ │
│ 1843 │ def warning(self, msg, *args, **kwargs): │
│ 1844 │ │ """ │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/logging.py:51 in log │
│ │
│ 48 │ │ `in_order` is ignored if `main_process_only` is passed. │
│ 49 │ │ """ │
│ 50 │ │ if PartialState._shared_state == {}: │
│ ❱ 51 │ │ │ raise RuntimeError( │
│ 52 │ │ │ │ "You must initialize the accelerate state by calling either `PartialStat │
│ 53 │ │ │ ) │
│ 54 │ │ main_process_only = kwargs.pop("main_process_only", True) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before
using the logging utility.
```<|||||>Thanks, I can see where this stems from. As a temporary workaround while we fix the issue, your can set `SAFETENSORS_FAST_GPU=1` in your environment to avoid this error.<|||||>@sgugger we can disable that flag for more recent versions, it's not used anymore btw. (safetensors>0.3.0)<|||||>> Thanks, I can see where this stems from. As a temporary workaround while we fix the issue, your can set `SAFETENSORS_FAST_GPU=1` in your environment to avoid this error.
I tried this and I am no longer getting the error. However, the line below from the sample code shared earlier takes forever to execute (I waited for 16 minutes before interrupting it).
`outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)`
Any suggestions on how I can debug it?<|||||>If you're trying to run this super large model on a small machine, everything will end up being offloaded to CPU + disk, making everything EXTREMELY slow. There's no easy solution on small hardware.
Try setting `max_new_tokens=1` and letting it run for several minutes. It should give you the next token, e.g. with the call sketched below.
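A sketch of that call (same pipeline and prompt as above, just a smaller generation budget):
```python
outputs = pipe(prompt, max_new_tokens=1, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
```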
Subsequent tokens are faster to get than the first one, however it should be of the same order of latency.<|||||>> If you're trying to run this super large model on a small maching, everything will end up being offloading to CPU + DISK, making everything EXTREMELY slow. There's no easy solution on small hardware
>
> Try setting `max_new_tokens=1` and letting it run several minutes It should give you the next token.
>
> Subsequent tokens are faster to get than the first one, however it should be of the same order of latency.
This worked and also helped understand the problem. Thank you so much. |
transformers | 24,302 | closed | [Docs] Fix the paper URL for MMS model | # What does this PR do?
Fixes the paper link for the MMS model: the wrong URL points to the paper `XLS-R: SELF-SUPERVISED CROSS-LINGUAL SPEECH REPRESENTATION LEARNING AT SCALE`, not `Scaling Speech Technology to 1,000+ Languages`. | 06-15-2023 12:19:58 | 06-15-2023 12:19:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the quick review! @amyeroberts Would you please merge it?
transformers | 24,301 | closed | Fix functional TF Whisper and modernize tests | There was a regression in 4.30 that affects functional construction of Whisper models in certain cases, my bad!
In an attempt to avoid this in future, I modified the `test_compile_tf_model` test. These tests were quite old and weren't that relevant for how we do things now, and were also quite slow. I pared the test down to the actual thing we want to test (functional construction with `tf.keras.Input` and potentially-unknown shape dimensions), which should make it fast enough to run in the live CI, as well as giving us more useful info about regressions like this in future.
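For context, the pattern the trimmed-down test exercises is roughly this kind of functional construction (a sketch, not the actual test code; the checkpoint and shapes are just illustrative):
```python
import tensorflow as tf
from transformers import TFWhisperModel

model = TFWhisperModel.from_pretrained("openai/whisper-tiny")

# Symbolic Keras inputs, with an unknown decoder sequence length.
input_features = tf.keras.Input(shape=(80, 3000), dtype=tf.float32, name="input_features")
decoder_input_ids = tf.keras.Input(shape=(None,), dtype=tf.int32, name="decoder_input_ids")

outputs = model(input_features=input_features, decoder_input_ids=decoder_input_ids)
functional_model = tf.keras.Model(
    inputs=[input_features, decoder_input_ids],
    outputs=outputs.last_hidden_state,
)
```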
Fixes #24291 | 06-15-2023 12:09:49 | 06-15-2023 12:09:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Update: The test is still very slow in some cases in the CI. I'm going to mark it as `@slow` before merging this PR after all, but I'll leave it as-is for now so I can fix any issues it raises before merging.<|||||>Quick note: I marked the test as slow (it was slow before too). I ran everything locally and all models passed, so hopefully it should still look good on the nightly CI after this is merged.<|||||>cc @amyeroberts for core maintainer review and we should be good to go! (Also sorry Amy if I'm spamming you with lots of PRs this week)<|||||>Thanks for the ping re-regression. This is way too big of a PR to be included in a patch though so I suggest making a separate small PR for the part that would need to go in a patch, or decide this won't go in a patch and tell users to wait for 4.31.<|||||>Ah, sorry! I didn't mean to imply there'd be a patch release - this doesn't affect too many people (only a very specific subset of Whisper users who are exporting models using the functional API), so it should be fine to wait until 4.31. I can just tell affected people to install from `main` until then. |