Dataset columns: repo (string, 1 distinct value) · number (int64, 1–25.3k) · state (string, 2 values) · title (string, 1–487 characters) · body (string, 0–234k characters, may be null) · created_at (string, 19 characters) · closed_at (string, 19 characters) · comments (string, 0–293k characters)
transformers
22,276
closed
run_summarization requires a dataset_name or train_file or validation_file in all cases
### System Info In the latest version of run_summarization on line 264 it requires dataset_name or train_file or validation_file. I was trying to perform a "do_predict" with the parameter "test_file" set, but the validation will not let me proceed. Looking at the code, do_predict only uses the test dataset anyway, so this appears to be a bug. See the specific file https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py ### Who can help? @sgugger Unable to "do_predict" due to validation not checking for "train_file". As a workaround I set "validation_file". ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction in a Colab notebook: %%shell /content/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path /content/gs/models/Clinical-T5-Large/ \ --do_predict \ --test_file "/content/gs/models/Clinical-T5-Large/validation.csv" \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /content/gs/models/Clinical-T5-Large \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate \ --max_source_length=1024 You get the error "Need either a dataset name or a training/validation file." ### Expected behavior Expected normal prediction output like below: Running tokenizer on prediction dataset: 0% 0/5040 [00:00<?, ? examples/s]03/20/2023 15:40:20 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/csv/default-73bac40c02a24f34/0.0.0/6b34fb8fcf56f7c8ba51dc895bfa2bfbe43546f190a60fcf74bb5e8afdcc2317/cache-1e88c7213311bd88.arrow Downloading builder script: 100% 6.27k/6.27k [00:00<00:00, 3.59MB/s] 03/20/2023 15:40:52 - INFO - __main__ - *** Predict *** [INFO|trainer.py:3066] 2023-03-20 15:40:52,611 >> ***** Running Prediction ***** [INFO|trainer.py:3068] 2023-03-20 15:40:52,611 >> Num examples = 5040 [INFO|trainer.py:3071] 2023-03-20 15:40:52,611 >> Batch size = 4 [WARNING|logging.py:280] 2023-03-20 15:40:52,624 >> You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. [INFO|configuration_utils.py:575] 2023-03-20 15:40:52,636 >> Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.28.0.dev0" }
03-20-2023 15:55:48
03-20-2023 15:55:48
Indeed. Would you like to open a PR to fix this?
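A minimal sketch of the relaxed check the issue asks for, assuming field names from the example script's data arguments; the dataclass below is a stripped-down stand-in, not the script's actual code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DataTrainingArguments:
    # Stand-in for the example script's data arguments (names assumed).
    dataset_name: Optional[str] = None
    train_file: Optional[str] = None
    validation_file: Optional[str] = None
    test_file: Optional[str] = None

    def __post_init__(self):
        # Accept a test_file on its own so --do_predict can run without a
        # train or validation file.
        if (
            self.dataset_name is None
            and self.train_file is None
            and self.validation_file is None
            and self.test_file is None
        ):
            raise ValueError("Need either a dataset name or a training, validation, or test file.")
```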
transformers
22,275
closed
[New GitHub Action Job] Automatically create/update tiny models
# What does this PR do? [New GitHub Action Job] Automatically create/update tiny models ## Goal **A scheduled job that creates/updates tiny models periodically** - so **we will have tiny versions of newly added models in `transformers` as soon as possible** ### Some properties - For a new model type: the Hub repo is created - For a new framework implementation of an existing model type: a Hub repo PR is opened - We keep track of the commit hash information for tiny models on the Hub - The pipeline tests will use the commit hash information stored in the `tiny_model_summary.json` file - To avoid sudden CI failures due to new commits - The CI job will produce a file `updated_tiny_model_summary.json` - We should open a PR in `transformers` to update `tiny_model_summary.json` - If all pipeline tests pass, we are good to merge and use the new/updated tiny models on the Hub.
03-20-2023 15:30:37
03-20-2023 15:30:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>Just a minor update, including the revision number. Will merge once the CI is green. Thank you for the reviews πŸš€
transformers
22,274
closed
Fix doc links
# What does this PR do? Resolves issue with some dead links in the documentation resulting from relative paths. Equivalent links were searched for in the translated docs but were not found. Hence only changes in files in `docs/source/en` Fixes # 21596 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-20-2023 15:13:27
03-20-2023 15:13:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,273
closed
Proper map location for optimizer load
# What does this PR do? I have been thinking more about #22159 and now remember why it might be better to load the optimizer state on the device directly: in multi-GPU training, the optimizer state is loaded in each process, so that would load it num_processes times on the CPU and risk a CPU RAM OOM. Therefore, this adjusts #22159 to load the optimizer state: - on CPU when there is only one process - on each device directly when there are multiple.
03-20-2023 15:00:21
03-20-2023 15:00:21
_The documentation is not available anymore as the PR was closed or merged._
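A minimal sketch of the loading rule described above, with placeholder values standing in for the Trainer's real process count, device, and checkpoint path:

```python
import torch

# Placeholder setup standing in for the real training state.
optimizer = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=0.1)
torch.save(optimizer.state_dict(), "optimizer.pt")

num_processes = 2
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

# Single process: load on CPU. Multiple processes: load directly on each
# process's device, so the state is not materialized num_processes times in CPU RAM.
map_location = "cpu" if num_processes == 1 else device
optimizer.load_state_dict(torch.load("optimizer.pt", map_location=map_location))
```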
transformers
22,272
closed
Fixed gradient checkpoint bug for TimeSeriesTransformer
# What does this PR do? Moved gradient checkpointing clause to above the decoder layer implementation. This should fix the bug this issue addresses. Fixes #21737 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [GitHub Issue](https://github.com/huggingface/transformers/pull/21733) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @younesbelkada
03-20-2023 14:58:05
03-20-2023 14:58:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>Strange, I will see if I can install Python for my environment then. Thank you for all your help, and thanks for running make for me!
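A toy sketch of the pattern the fix restores: the gradient-checkpointing branch has to wrap each decoder layer call inside the loop. The layers and flag below are stand-ins, not the TimeSeriesTransformer's real modules:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])  # stand-in decoder layers
hidden_states = torch.randn(2, 8, requires_grad=True)
gradient_checkpointing = True

for layer in layers:
    if gradient_checkpointing and layers.training:
        # Recompute this layer's activations in the backward pass instead of storing them.
        hidden_states = checkpoint(layer, hidden_states)
    else:
        hidden_states = layer(hidden_states)

hidden_states.sum().backward()
```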
transformers
22,271
closed
Fix balanced and auto device_map
# What does this PR do? In #22095 some of the arguments passed to `infer_auto_device_map` were grouped in kwargs. The problem is that one of those (`max_memory`) is not updated anymore after being changed (when device_map is `"auto"`, `"balanced"` or `"balanced_low_0"`). This PR fixes that. Note: this is a regression so this will need to go in a patch.
03-20-2023 14:53:28
03-20-2023 14:53:28
_The documentation is not available anymore as the PR was closed or merged._
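For context, a hedged illustration of the call whose `max_memory` handling the PR corrects; the model name and memory limits are placeholders:

```python
from accelerate import infer_auto_device_map
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder model
max_memory = {0: "10GiB", "cpu": "30GiB"}              # placeholder limits

# For "auto"/"balanced"/"balanced_low_0", any adjustment made to max_memory must
# be the version that actually reaches infer_auto_device_map.
device_map = infer_auto_device_map(model, max_memory=max_memory)
print(device_map)
```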
transformers
22,270
closed
Fix the gradient checkpointing bug of the llama model
# What does this PR do? Gradient checkpointing of the LLaMA model does not work. This PR fixes it following the [GPT-2 model's gradient checkpointing implementation](https://github.com/huggingface/transformers/blob/cf0af9a31beb84e8feec77af51f72d063ba905aa/src/transformers/models/gpt2/modeling_gpt2.py#L482). ### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 2.1.0.dev20230317+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Reproducing the Bug The bug is tested on 4 A100 40GB GPUs. Please first clone Stanford Alpaca's repo, which finetunes the LLaMA model: ```shell git clone [email protected]:tatsu-lab/stanford_alpaca.git cd stanford_alpaca pip install -r requirements.txt ``` A CUDA out-of-memory error is raised if we run the training script with `per_device_train_batch_size=1`: ```sh torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \ --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \ --data_path ./alpaca_data.json \ --bf16 True \ --output_dir <your_output_dir> \ --num_train_epochs 3 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 8 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \ --tf32 True --gradient_checkpointing ``` ### Test after this PR: After this PR, we can successfully train the LLaMA-7B model on 4 40GB GPUs with `per_device_train_batch_size=8`, using gradient checkpointing: ```sh torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \ --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \ --data_path ./alpaca_data.json \ --bf16 True \ --output_dir <your_output_dir> \ --num_train_epochs 3 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 8 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \ --tf32 True --gradient_checkpointing ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review?
@ArthurZucker @zphang @sgugger
03-20-2023 14:03:01
03-20-2023 14:03:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,269
closed
Batch elements interfere with each other with int8
### System Info - `transformers` version: [cf0af9a31beb84e8feec77af51f72d063ba905aa](https://github.com/huggingface/transformers/commit/cf0af9a31beb84e8feec77af51f72d063ba905aa) - `bitsandbytes` version: 0.37.1 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Using GPU in script?: yes: A100 in MIG mode - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @muell ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The outputs of a model for a given batch element depend on the other elements in the batch when using int8 inference. See minimal example below. I'm not sure whether this is expected? ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", load_in_8bit=True, device_map="auto") tokenizer = transformers.AutoTokenizer.from_pretrained("bigscience/bloom-560m") out1 = model(**tokenizer(["A"], return_tensors="pt").to("cuda")) out2 = model(**tokenizer(["A"], ["B"], return_tensors="pt").to("cuda")) print(out1['logits'][0][0]) print(out2['logits'][0][0]) print(out1['logits'][0][0] == out2['logits'][0][0]) > tensor([345.0000, 348.2500, 354.2500, ..., 206.2500, 206.2500, 206.2500], device='cuda:0', dtype=torch.float16, grad_fn=<SelectBackward0>) > tensor([344.7500, 347.7500, 353.7500, ..., 206.0000, 206.0000, 206.0000], device='cuda:0', dtype=torch.float16, grad_fn=<SelectBackward0>) > tensor([False, False, False, ..., False, False, False], device='cuda:0') ``` ### Expected behavior The computation should be independent of the other batch elements, as for fp32 (see below): ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", load_in_8bit=False, device_map="auto").to("cuda") tokenizer = transformers.AutoTokenizer.from_pretrained("bigscience/bloom-560m") out1 = model(**tokenizer(["A"], return_tensors="pt").to("cuda")) out2 = model(**tokenizer(["A"], ["B"], return_tensors="pt").to("cuda")) print(out1['logits'][0][0]) print(out2['logits'][0][0]) print(out1['logits'][0][0] == out2['logits'][0][0]) > tensor([343.6242, 346.4580, 352.7924, ..., 205.3806, 205.3800, 205.3746], grad_fn=<SelectBackward0>) > tensor([343.6242, 346.4580, 352.7924, ..., 205.3806, 205.3800, 205.3746], grad_fn=<SelectBackward0>) > tensor([ True, True, True, ..., True, True, False]) ``` *Edit 2023/03/22 Corrected the code for FP32.*
03-20-2023 13:39:08
03-20-2023 13:39:08
cc @younesbelkada <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,268
closed
More doctests
# What does this PR do? Add all files (tokenization / image processor / feature extractor / processor) to doctests. Currently the list is not sorted - it might be better to add a check to sort the list in order (in `utils/check_doctest_list.py`).
03-20-2023 13:28:46
03-20-2023 13:28:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>Currently 46 failed tests - error message to be sent to Slack is too long and failed the report sending. So I remove them for the list to be tested. Let's deal with them and add them back step by step, @amyeroberts For reference, here [the job run page](https://github.com/huggingface/transformers/actions/runs/4468541471/jobs/7849422399) ```bash src/transformers/models/auto/tokenization_auto.py src/transformers/models/bart/tokenization_bart.py src/transformers/models/bart/tokenization_bart_fast.py src/transformers/models/bertweet/tokenization_bertweet.py src/transformers/models/blenderbot/tokenization_blenderbot.py src/transformers/models/blenderbot/tokenization_blenderbot_fast.py src/transformers/models/bloom/tokenization_bloom_fast.py src/transformers/models/codegen/tokenization_codegen.py src/transformers/models/codegen/tokenization_codegen_fast.py src/transformers/models/deberta/tokenization_deberta.py src/transformers/models/deberta/tokenization_deberta_fast.py src/transformers/models/dpr/tokenization_dpr.py src/transformers/models/dpr/tokenization_dpr_fast.py src/transformers/models/gpt2/tokenization_gpt2.py src/transformers/models/gpt2/tokenization_gpt2_fast.py src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py src/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py src/transformers/models/led/tokenization_led.py src/transformers/models/led/tokenization_led_fast.py src/transformers/models/longformer/tokenization_longformer.py src/transformers/models/longformer/tokenization_longformer_fast.py src/transformers/models/luke/tokenization_luke.py src/transformers/models/m2m_100/tokenization_m2m_100.py src/transformers/models/marian/tokenization_marian.py src/transformers/models/mvp/tokenization_mvp.py src/transformers/models/mvp/tokenization_mvp_fast.py src/transformers/models/roberta/tokenization_roberta.py src/transformers/models/roberta/tokenization_roberta_fast.py src/transformers/models/roformer/tokenization_roformer.py src/transformers/models/roformer/tokenization_roformer_fast.py src/transformers/models/transfo_xl/tokenization_transfo_xl.py src/transformers/models/transfo_xl/tokenization_transfo_xl.py src/transformers/models/auto/image_processing_auto.py src/transformers/models/auto/feature_extraction_auto.py src/transformers/models/markuplm/feature_extraction_markuplm.py src/transformers/models/auto/processing_auto.py ```
transformers
22,267
closed
Fix error in mixed precision training of `TFCvtModel`
# What does this PR do? This PR fixes the issue that the `TFCvtModel` cannot be trained with `keras.fit` using `mixed-precision`. The issue was in this [line](https://github.com/huggingface/transformers/blob/c4bf6f38bda1de3798095515875a119298bf0611/src/transformers/models/cvt/modeling_tf_cvt.py#L96) when a random tensor is initialized without specifying the correct `dtype`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts
03-20-2023 13:18:28
03-20-2023 13:18:28
_The documentation is not available anymore as the PR was closed or merged._
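A minimal sketch of the idea behind the fix, not the actual `modeling_tf_cvt.py` code: under mixed precision the random tensor for the drop-path mask must be created in the layer's compute dtype rather than the float32 default. The shape and keep probability below are illustrative values.

```python
import tensorflow as tf

compute_dtype = tf.float16   # what a "mixed_float16" policy reports as compute dtype
keep_prob = 0.9

# Creating the random tensor in the compute dtype keeps float16 activations from
# being mixed with a float32 tensor during keras.fit.
random_tensor = keep_prob + tf.random.uniform(shape=(4, 1, 1), dtype=compute_dtype)
binary_mask = tf.floor(random_tensor)
```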
transformers
22,266
closed
Update training_args.py -- a nightly install is not required anymore for torch.compile
A nightly install is not required anymore for `torch.compile`. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-20-2023 11:05:47
03-20-2023 11:05:47
_The documentation is not available anymore as the PR was closed or merged._
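For context, a minimal example of what already works on a stable PyTorch 2.0 release; the toy module is a placeholder:

```python
import torch
import torch.nn as nn

model = torch.compile(nn.Linear(8, 8))   # no nightly build required on PyTorch >= 2.0
out = model(torch.rand(2, 8))
print(out.shape)
```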
transformers
22,265
closed
Enable traced model for text-generation task
@gante Hi, Gante. Refer to: https://github.com/huggingface/transformers/pull/22072 Thanks for your advice. This PR only changed the example, would you please help me to review it? Thanks!
03-20-2023 10:55:43
03-20-2023 10:55:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @gante<|||||>@gante Thanks for your attention. Would you please help me to merge it? Thanks! I think the demand for `jit trace` will grow, and I hope we can keep on working on it so it will be adapted to all models and all tasks in the future.
transformers
22,264
closed
Adding Llama FastTokenizer support.
- Requires https://github.com/huggingface/tokenizers/pull/1183 version - Only support byte_fallback for llama, raise otherwise (safety net). - Lots of questions are special tokens How to test: ```python #! pip install -e https://github.com/huggingface/tokenizers@byte_fallback#egg=tokenizers from transformers.convert_slow_tokenizer import convert_slow_tokenizer from transformers import AutoTokenizer from tokenizers import Tokenizer tokenizer = AutoTokenizer.from_pretrained("huggingface/llama-7b") if False: new_tokenizer = Tokenizer.from_file("tok.json") else: new_tokenizer = convert_slow_tokenizer(tokenizer) new_tokenizer.save("tok.json") strings = [ "This is a test", "η”Ÿζ΄»ηš„ηœŸθ°›ζ˜―", "η”Ÿζ΄»ηš„ηœŸθ°›ζ˜―[MASK]。", # XXX: This one is problematic because of special tokens # "<s> Something something", ] for string in strings: encoded = tokenizer(string)["input_ids"] encoded2 = new_tokenizer.encode(string).ids assert encoded == encoded2, f"{encoded} != {encoded2}" decoded = tokenizer.decode(encoded) decoded2 = new_tokenizer.decode(encoded2) assert decoded.strip() == decoded2, f"{repr(decoded)} != {repr(decoded2)}" ``` # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-20-2023 10:26:48
03-20-2023 10:26:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the ping. We'll need the actual fast tokenizer file to merge this though :sweat_smile: <|||||>True, I uncovered more issues around multiple space handling, I'm nailing down on the pre_tokenizer combo for it.<|||||>More troublesome than anticipated. When encoding `" Hello"` from a pure BPE perspectivve, `tokenizers` does `[259, 10994]` (`" "` + `Hello`) whereas spm does `[29871, 15043]` (`" "` + `" Hello"`) which from a pure ids & merges perspectives seems worse. I though of fixing that using a `pre_tokenizer` that splits words onto their own. However on encoding `" ird"` this time `spm` DOES do `[259, 1823]`. Seems this is where the score comes into play.<|||||>What is the status of this PR? <|||||>For the doc builder, we're going to need an update on the docker image so that it pulls 0.13.3 to generate the doc.<|||||>Hi @Narsil , the `warning.warn` to `raise RuntimeError` change in `src/transformers/convert_slow_tokenizer.py` breaks a lot of things: I wanted to fine-tune a mT5 model and it is now no longer possible (I'm using the PyTorch example from [documentation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering#fine-tuning-t5-on-squad20).) How is it possible to rubustify it -> also DeBERTa v3 has byte fallback vocab (but I didn't test it yet) :thinking: <|||||>> Hi @Narsil , > > the `warning.warn` to `raise RuntimeError` change in `src/transformers/convert_slow_tokenizer.py` breaks a lot of things: I wanted to fine-tune a mT5 model and it is now no longer possible (I'm using the PyTorch example from How is it possible to rubustify it -> also DeBERTa v3 has byte fallback vocab (but I didn't test it yet) thinking First of all we could revert by all means, but since now `tokenizers` has `ByteFallback` we could make it 1-1 for those, that was the idea behind upping to an error. It's a relatively sizeable issue if there are models deployed out there which have inconsistent behavior regarding this though (slow using byte fallback, fast not using it). I'm not sure why it was a warning in the first place. > DeBERTa v3 Let's have a look too. As a user, what's your opinion here, should we just fix the various conversion scripts, or would you rather keep the warning with the previous pitfalls ?<|||||>Both are using Unigram with ByteFallback which isn't supported yet. <|||||>@Narsil After this commit `AutoTokenizer.from_pretrained` is extremely slow, spending time in `convert_slow_tokenizer.py` at every call. Is it expected? Or I am doing something wrong?<|||||>Which repo are you using? We need to create the fast files on the repo. Converting from slow is super slow and there's nothing to be done about it (tokenizers needs to recreate a structure by doing O(n2) search over the vocab because spm does not store this information. <|||||>@ArthurZucker <|||||>I see thanks!
transformers
22,263
closed
AdamW implementation
### Feature request I'm getting the warning from optimization (https://github.com/huggingface/transformers/blob/main/src/transformers/optimization.py, lines 391 and on): ``` "This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch" " implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this" " warning" ``` How worried should I really be? Are there plans to use the torch AdamW version and eventually discarding your own implementation? ### Motivation Presumably the torch.optim.AdamW implementation is better and using it would make the whole library a bit leaner ### Your contribution Not sure
03-20-2023 09:55:51
03-20-2023 09:55:51
Hi @StrangeTcy, thanks for raising this issue! Please don't be worried :) The warning is there so that there aren't any unexpected changes for users when `AdamW` is eventually removed from the library and is part of the deprecation cycle. We advise that the torch implementation is used instead of the one in the transformers library, and making this switch now in the relevant places in your code will ensure that nothing breaks when the time comes. Until then, the `AdamW` class will remain in transformers. One thing this warning is missing is specific information about when, i.e. in which version, this will happen; that should be added! <|||||>Great, thanks. Looking forward to the next versions
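A short sketch of the recommended switch; the model and hyperparameters are placeholders, and with the `Trainer` the optimizer can be handed over through its `optimizers` argument:

```python
import torch
from torch.optim import AdamW   # PyTorch's implementation, the suggested replacement

model = torch.nn.Linear(8, 2)   # placeholder model
optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)

# With the Trainer, something along the lines of:
# trainer = Trainer(model=model, args=training_args, optimizers=(optimizer, None))
```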
transformers
22,262
closed
[WIP] Add H3
# What does this PR do? This PR adds the H3 model by Hazy Research (Stanford University). I've removed the Flash Attention dependency, and main author @DanFu09 has removed the einops dependency (πŸ™ ). I've kept an optional soft dependency on `pykeops`, to allow for speedups. The model runs fine if the user doesn't have this library installed.
03-20-2023 08:28:09
03-20-2023 08:28:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22262). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @NielsRogge βœ‹, Nice work! Do you know if this model would be integrated in the near future inside HuggingFace? Was this PR staled given complexities with custom ops? Could you give an overview of the missing steps needed in this PR to have a functional H3 model integrated into HF? πŸ™ Thanks for your work! πŸ™Œ <|||||>Hi @gaceladri the PR is actually totally ready, the only thing that needs to done is perhaps make [this function](https://github.com/NielsRogge/transformers/blob/5199d3d3a08264f1b17442504559c28304ce619c/src/transformers/models/h3/modeling_h3.py#L139) more like the other Attention classes in the library (like [this class](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/llama/modeling_llama.py#L158)).
transformers
22,261
closed
H
from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("icyGS/StockPredictor") model = AutoModelForSequenceClassification.from_pretrained("icyGS/StockPredictor")
03-20-2023 07:46:55
03-20-2023 07:46:55
Hi @lil-fahad, thanks for raising an issue. So that we can best help you, could you fill in the issue template including information such as the environment (run `transformers-cli env` in the terminal), the issue being encountered, the expected behaviour and a full traceback please? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,260
closed
How to load local code for model with `trust_remote_code=True`?
### Feature request When I use a model with `trust_remote_code=True`, I cannot directly change the remote code, because every time I load the model it requests fresh code from the remote hub. How can I avoid that? Can I customize this code locally? Example: ``` model = AutoModelForSeq2SeqLM.from_pretrained('THUDM/glm-large-chinese', trust_remote_code=True) model.forward(...) # the code I want to change ``` ### Motivation The remote code does not always fit user needs, so the user should have a way to change it. ### Your contribution If there is no other way, I can submit a PR.
03-20-2023 04:22:55
03-20-2023 04:22:55
Hi @LZY-the-boys, thanks for raising this issue. If I've understood correctly, the question being asked is how to load in a customized version of the model on the ['THUDM/glm-large-chinese' repo](https://huggingface.co/THUDM/glm-large-chinese). When running: ```python model = AutoModelForSeq2SeqLM.from_pretrained('THUDM/glm-large-chinese', trust_remote_code=True) ``` The model being downloaded will be the one defined in [THUDM/glm-large-chinese](https://huggingface.co/THUDM/glm-large-chinese). `trust_remote_code=True` is simply saying that it's OK for this model code to be downloaded and run from the hub. If you wish to load a local model, then this model should be saved out to either the hub or locally and the path to its location passed to `from_pretrained` e.g.: ``` model.save_pretrained('path/to/my/model') # Model with adapted methods model = ModelClass.from_pretrained('path/to/my/model', trust_remote_code=True) ``` There's more information about using models with [custom code here](https://huggingface.co/docs/transformers/v4.27.1/en/custom_models#using-a-model-with-custom-code).<|||||>OK, `model.save_pretrained` is indeed a way to customize the remote code in a local folder, though it will copy these local files to a `transformers/local` dir and run them. Earlier I changed the code in that temporary directory, which caused the confusion above.
transformers
22,259
closed
Different outputs of the official LLaMA repo and transformers' implementation
### System Info - `transformers` version: main - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Not ### Who can help? @zphang @ArthurZucker @gan ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The official LLaMA repo generates a coherent and meaningful response to the below prompt, while the Huggingface LLaMA generates multiple responses that are not relevant to the prompt. ## Official LLaMA Outputs ```shell git clone [email protected]:facebookresearch/llama.git cd llama pip install -r requirements.txt pip install -e . ``` Please first substitute the [prompt](https://github.com/facebookresearch/llama/blob/57b0eb62de0636e75af471e49e2f1862d908d9d8/example.py#L82) as: ```python prompts = ["I believe the meaning of life is"] ``` Run for inference with the 13B model: ```sh torchrun --nproc_per_node 2 example.py --ckpt_dir $TARGET_FOLDER/13B --tokenizer_path $TARGET_FOLDER/tokenizer.model ``` The output is: ``` I believe the meaning of life is to love others, love ourselves, and love our God. The way we do that is by showing compassion and acceptance. We have to love the people around us even when they are struggling. We have to love ourselves even when we are failing. We have to love God even when we are not certain. This is the meaning of life. 
``` ## Huggingface LLaMA The code to generate output with transformers' llama: ```py import transformers import torch torch.manual_seed(1) tokenizer = transformers.LlamaTokenizer.from_pretrained("$YOUR_CONVERTED_DIR/tokenizer/") model = transformers.LlamaForCausalLM.from_pretrained("$YOUR_CONVERTED_DIR/llama-13b/").half() model.cuda() prompt = "I believe the meaning of life is" inputs = tokenizer(prompt, return_tensors="pt") generated_ids = model.generate(inputs.input_ids.cuda(), max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.8) print(tokenizer.batch_decode(generated_ids)[0]) ``` The outputs seem to be more illogical (many sentences have nothing to do with `the meaning of life`): ``` I believe the meaning of life is to give life meaning I believe that we are here to be of service to others I believe that the purpose of life is to grow in wisdom and love I believe that life is not all about me I believe that what I give I receive and what I receive I give I believe that the journey is more important than the destination I believe that we have a gift to share and that that gift is not for ourselves I believe that I am the right person in the right place at the right time I believe that the only thing we have to be concerned about is the present moment I believe that God is in everyone and everything I believe that we are all connected I believe that we are all equal and unique I believe that we are all responsible for the world we live in I believe that we are all perfect and whole I believe that we are all worthy of love I believe that we are all on a journey of self-discovery I believe that we are all meant to do what we do I believe that we are all perfect in our own way I believe that we are all loved I believe that we are all loved by God I believe that there is only one God I believe that God ``` ## Analysis: In LLaMA's official repo, they set the [`temperature` to 0.8 and `top_p` to 0.95](https://github.com/facebookresearch/llama/blob/57b0eb62de0636e75af471e49e2f1862d908d9d8/example.py#L69) for generation. I have aligned this in the transformers' generation. One difference is that LLaMA's official repo uses FSDP and my transformers' code has no distributed set-up. But I think this will not affect the inference performance (not certain). ### Expected behavior A script to reproduce the official LLaMA repo's results is expected, which will be a great sanity check about the huggingface llama implementation. Thanks!
03-20-2023 03:23:31
03-20-2023 03:23:31
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I met the same problem.<|||||>cc @gante <|||||>Hey @yqy2001 @TempleX98 πŸ‘‹ Unless the code is exactly the same, it is impossible to compare `sample` implementations based on a few examples. Small things like the order of operations will produce very small logits differences and, unless the logits are exactly the same, the sampling step will pick different tokens for the same seed. The best way to compare implementations is with greedy approaches with long outputs (especially if the comparison is done at a logit level!). In `transformers`, that is done by passing `do_sample=False`, `return_dict=True`, and `output_scores=True`. EDIT: please note that since this issue was originally opened, a [few llama-specific fixes and performance improvements were merged](https://github.com/huggingface/transformers/commits/main/src/transformers/models/llama) :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It looks like this behavior depends on what model you are using, try to change to chat model like Llama-2-7b-chat-hf will solve this issue.
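A small sketch of the suggested greedy comparison; the checkpoint path is a placeholder. With sampling disabled and per-step scores returned, outputs can be compared token by token and at the logit level:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/converted/llama"   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("I believe the meaning of life is", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=False,               # greedy decoding, deterministic
    max_new_tokens=64,
    return_dict_in_generate=True,  # structured output
    output_scores=True,            # per-step logits for logit-level comparison
)
print(tokenizer.decode(out.sequences[0], skip_special_tokens=True))
```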
transformers
22,258
closed
HuggingFace Transformers Trainer._maybe_log_save_evaluate IndexError: invalid index to scalar variable
@sshleifer So, I'm working on fine tuning a BART model for question generation, and it seems to be going through training okay. Then all of a sudden, it stops at the end of the first validation with an `IndexError` which you can see below. The problem is occurring in the `Trainer._maybe_log_save_evaluate` method that is being called. ![IndexError: invalid index to scalar variable](https://user-images.githubusercontent.com/80857218/226214542-72cce7dd-6f09-4eaf-89e5-3d585fb07790.png) Here is my code for setting up the model, tokenizer, dataset, etc.: ```py from datasets import load_dataset from evaluate import load from accelerate import Accelerator from transformers import BartForConditionalGeneration, BartConfig, BartTokenizer from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer dataset = load_dataset("squad") metric = load("squad") accelerator = Accelerator() def model_init(): config = BartConfig() return accelerator.prepare(BartForConditionalGeneration(config).from_pretrained("facebook/bart-base").cuda()) tokenizer = accelerator.prepare(BartTokenizer.from_pretrained("facebook/bart-base")) def preprocess_function(data): inputs = tokenizer(data['context'], add_special_tokens=True, max_length=256, padding="max_length", truncation=True) targets = tokenizer(data['question'], add_special_tokens=True, max_length=32, padding="max_length", truncation=True) return {'input_ids': inputs['input_ids'], 'attention_mask': inputs['attention_mask'], 'labels': targets['input_ids']} dataset = dataset.map(preprocess_function, batched=True).shuffle(seed=777) training_args = Seq2SeqTrainingArguments( output_dir="./results", evaluation_strategy="steps", eval_steps=500, save_steps=50000, learning_rate=2e-5, per_device_train_batch_size=4, per_device_eval_batch_size=4, num_train_epochs=2, weight_decay=0.01, predict_with_generate=True, ) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) trainer = Seq2SeqTrainer( args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["validation"], tokenizer=tokenizer, model_init=model_init, compute_metrics=compute_metrics, ) trainer.train() ``` I can't seem to figure out why this is happening and nothing I've found online has helped.
03-19-2023 22:49:52
03-19-2023 22:49:52
I finally got an answer to my issue on StackOverflow. Here is the [link](https://stackoverflow.com/questions/75780103/huggingface-transformers-trainer-maybe-log-save-evaluate-indexerror-invalid-in/75792634#75792634) to the answer: > Your issue comes from your compute_metrics function as you're using a QA metric with a text-generation model. > > To fix it, replace metric = load("squad") with a text-generation metric, for example bleu: metric = load("bleu"). And adapt your compute_metrics function in consequence: > > ```py > def compute_metrics(eval_pred): > predictions, references = eval_pred > predictions = tokenizer.batch_decode(predictions) > references = tokenizer.batch_decode(references) > references = [[ref] for ref in references] > return metric.compute(predictions=predictions, references=references) > ```
transformers
22,257
open
Ernie-M for pretraining multilingual models
### Feature request Two things that might help in that regard: - To train TSDAE, one needs support as class ErnieMForPreTraining, just as for Ernie https://huggingface.co/docs/transformers/model_doc/ernie#transformers.ErnieForPreTraining - To train cross-encoders with contrastive loss, a bit like SimCSE, one needs standard support for getting the 'attention_mask' out of the tokenizer sbert uses. Sbert just expects those. Tried to hack it in into sbert, but failed. ### Motivation Suspect getting Ernie-M-large for pretraining multilingual sentence embeddings will yield close to sota results. According to mSimCSE, we can get top multilingual embeddings just on training on their 300k dataset of english pairs, alone (worked better than cross-lingual training). With a stronger base model (they used xlm-roberta), sota embeddings might just lie on the streets. https://github.com/yaushian/mSimCSE ### Your contribution Can't do it alone, plz help.
03-19-2023 18:03:55
03-19-2023 18:03:55
Hi @KnutJaegersberg, thanks for making this suggestion! Would you like to try and open a PR to add the model? We have guidance written on adding [models here](https://huggingface.co/docs/transformers/v4.27.2/en/add_new_model). As the modeling file already exists - adding this component is even easier than a whole new model. For example, see [this PR](https://github.com/huggingface/transformers/pull/21754) for adding `WhisperForAudioClassification`.<|||||>Currently in the middle of something, will try to look at it later! <|||||>I'd love to try this out! <|||||>Seems like this will require more than simply copy-pasting the BertForPretraining code, but actually implementing cross-attention Masked Language Modeling and Back-translation Masked Language Modeling.
transformers
22,256
closed
[Docs] fix typos in some tokenizer docs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fix the typos in tokenizer examples. It would be 4 tokens. Thx ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-19-2023 13:41:44
03-19-2023 13:41:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Note: the difference in documented output and true output was mentioned in a [previous LongFormer PR](https://github.com/huggingface/transformers/pull/19346/files#r988044763).
transformers
22,255
closed
Re
null
03-19-2023 12:52:18
03-19-2023 12:52:18
transformers
22,254
closed
Trying to save a model with TFT5ForConditionalGeneration
### System Info transformers-cli env ouput: - `transformers` version: 4.28.0.dev0 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no -------------- I've compiled a TensorFlow graph that uses a pre-trained [flan-t5-large](https://huggingface.co/google/flan-t5-large), which means one of the layers uses `TFT5ForConditionalGeneration` but there are more layers before and after and my goal is the export the graph for TF serving framework. When I'm trying to `.save` the model I get the following error from Tensorflow: ``` Traceback (most recent call last): Traceback (most recent call last): File "/Users/serlich/Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/223.8214.51/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec exec(exp, global_vars, local_vars) File "<string>", line 1, in <module> File "venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 955, in __bool__ self._disallow_bool_casting() File "venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 554, in _disallow_bool_casting self._disallow_when_autograph_enabled( File "venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 537, in _disallow_when_autograph_enabled raise errors.OperatorNotAllowedInGraphError( tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Traceback (most recent call last): File "venv/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 124, in __exit__ next(self.gen) File "/Users/serlich/Documents/case-wrap-up/t5_tensorflow_code.py", line 58, in call outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask) File "venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py", line 925, in generate return self.greedy_search( File "venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py", line 1728, in greedy_search if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs): tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Exception encountered when calling layer 'complete_sentence_transformer' (type CompleteSentenceTransformer). Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. 
Call arguments received by layer 'complete_sentence_transformer' (type CompleteSentenceTransformer): β€’ args=('tf.Tensor(shape=(None, 1), dtype=string)',) β€’ kwargs={'training': 'False'} Process finished with exit code 1 ``` the error resonates from the following[if statement](https://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/src/transformers/generation/tf_utils.py#L1728): ``` # 1st generation step has to be run before to initialize `past_key_values` generated, finished_sequences, cur_len, model_kwargs = greedy_search_body_fn( generated, finished_sequences, cur_len, model_kwargs ) # 2-to-n generation steps can then be run in autoregressive fashion # only in case 1st generation step does NOT yield EOS token though if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs): maximum_iterations = max_length - cur_len generated, _, cur_len, _ = tf.while_loop( greedy_search_cond_fn, greedy_search_body_fn, (generated, finished_sequences, cur_len, model_kwargs), maximum_iterations=maximum_iterations, ) ``` during saving `finished_sequences` is a symbolic tensor and so, TensorFlow prevents evaluating an if statement of a symbolic tensor. Commenting out `if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs) ` allow me to save the model and load it later, however, remove the safeguard if the model predicts EOS in the first generation (which is very not likely). @ArthurZucker @younesbelkada @gante ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import TFT5ForConditionalGeneration import tensorflow as tf import tensorflow_text as text from tensorflow.python.platform import gfile save_dir = '' class CompleteSentenceTransformer(tf.keras.layers.Layer): def __init__(self): super().__init__() self._pad_token = 1 self.tokenizer = text.SentencepieceTokenizer(model=gfile.GFile('test/flan-t5-large/spiece.model', 'rb').read()) self.model = TFT5ForConditionalGeneration.from_pretrained('test/flan-t5-large', from_pt=True) def call(self, inputs, *args, **kwargs): tokens = self.tokenizer.tokenize(inputs) input_ids, attention_mask = text.pad_model_inputs(tokens, max_seq_length=self._max_seq_length, pad_value=self._pad_token) outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask) return self.tokenizer.detokenize(outputs) complete_model = CompleteSentenceTransformer() inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name="inputs") outputs = complete_model(inputs) keras_model = tf.keras.Model(inputs, outputs) keras_model.save(save_dir) ``` Python 3.9.6 tensorflow 2.11.0 tensorflow-text 2.11.0 transformers 4.28.0.dev0 (from master) ### Expected behavior save model
03-19-2023 12:01:02
03-19-2023 12:01:02
cc @gante <|||||>Hey @erlichsefisalesforce πŸ‘‹ looking at the stack trace, we see that `inputs`'s first dimension, the batch size, is unknown (shape = `[None, 1]`). It is possible that our generate function is not fully serializable with a dynamic batch size, and may need some tweaks. I'm not sure when I'll be able to fix this problem in particular (it may be complex to solve). However, meanwhile, can you try exporting with a fixed batch size? In other words, define the input as `inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name="inputs", batch_size=<some integer>)`<|||||>Hi @gante, I'm still getting ``` File "/venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py", line 767, in generate return self.greedy_search( File "venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py", line 1452, in greedy_search if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs): tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Exception encountered when calling layer 'summarizer' (type CompleteSentenceTransformer). Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Call arguments received by layer 'summarizer' (type CompleteSentenceTransformer): β€’ args=('tf.Tensor(shape=(3, 1), dtype=string)',) β€’ kwargs={'training': 'False'} ```<|||||>Hey @erlichsefisalesforce -- in that case, I will need a reproducible example to debug. The example you shared above contains references to local files :)<|||||>@gante, the folder contains [flan-t5-large](https://huggingface.co/google/flan-t5-large), `save_dir` is can be populated with any path to your local machine, and I think, that it.<|||||>It seems like the root issue persists -- `text.SentencepieceTokenizer().tokenize()` returns a tensor with an unknown batch size, regardless of the input batch size being defined, causing the same problem. The fix should be straightforward, so I will have a go at it. 
_____________________________________________________________________ Script to reproduce it: ```py # run these commands in advance: # mkdir /tmp/test # cd /tmp/test # git clone https://huggingface.co/google/flan-t5-small from transformers import TFT5ForConditionalGeneration import tensorflow as tf import tensorflow_text as text from tensorflow.python.platform import gfile save_dir = '/tmp/test/flan-t5-small' class CompleteSentenceTransformer(tf.keras.layers.Layer): def __init__(self): super().__init__() self._pad_token = 1 self.tokenizer = text.SentencepieceTokenizer(model=gfile.GFile('/tmp/test/flan-t5-small/spiece.model', 'rb').read()) self.model = TFT5ForConditionalGeneration.from_pretrained('/tmp/test/flan-t5-small', from_pt=True) def call(self, inputs, *args, **kwargs): tokens = self.tokenizer.tokenize(inputs) breakpoint() input_ids, attention_mask = text.pad_model_inputs(tokens, max_seq_length=512, pad_value=self.model.config.pad_token_id) outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask) return self.tokenizer.detokenize(outputs) complete_model = CompleteSentenceTransformer() inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name="inputs", batch_size=4) outputs = complete_model(inputs) keras_model = tf.keras.Model(inputs, outputs) keras_model.save(save_dir) ```<|||||>@erlichsefisalesforce after #22310 gets merged, you should be able to run it on your end :) (you will need to install `transformers` from `main`)<|||||>Thank you @gante! will close the issue once I validate the solution on my end. :) <|||||>The solution was validated!
transformers
22,253
closed
Add `BioGPTForSequenceClassification`
# What does this PR do? Add Sequence Classification support for BioGPT. Fixes #21530 Fixes #21535 This PR completes the stalled PR #21535. <!--- ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? --> ## Who can review? @ArthurZucker @younesbelkada @NielsRogge @sgugger
03-19-2023 11:45:43
03-19-2023 11:45:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge @sgugger Is there a way to skip the check for specific lines when I run `make repo-consistency`. It gives an error when I add this: `# Copied from transformers.models.opt.modeling_opt.OPTForSequenceClassification with OPT->BioGpt`. There are some attributes like word_embed_proj_dim which do not exist for the BioGpt model. Also it changes the case of the docstring variable, which leads to a variable not found error. Should I drop the copy attribution comment? <|||||>If some attributes do not exist, let's just add the `# Adapted from` mention, and put the `# Copied from` only where it properly fits! <|||||>@younesbelkada You're right, I haven't figured out how to solve this failing test.<|||||>@ArthurZucker Any suggestions as to how to fix this failing test? I went through #18123. The code is extremely similar, but I still don't get why the test is failing. Maybe I am missing something. I need help to fix it. ```python _____________________________ BioGptModelTest.test_load_with_mismatched_shapes _____________________________ self = <tests.models.biogpt.test_modeling_biogpt.BioGptModelTest testMethod=test_load_with_mismatched_shapes> def test_load_with_mismatched_shapes(self): if not self.test_mismatched_shapes: return config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: if model_class.__name__ not in get_values(MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES): continue with self.subTest(msg=f"Testing {model_class}"): with tempfile.TemporaryDirectory() as tmp_dir: model = model_class(config) model.save_pretrained(tmp_dir) # Fails when we don't set ignore_mismatched_sizes=True with self.assertRaises(RuntimeError): new_model = AutoModelForSequenceClassification.from_pretrained(tmp_dir, num_labels=42) with self.assertRaises(RuntimeError): > new_model_without_prefix = AutoModel.from_pretrained(tmp_dir, vocab_size=10) E AssertionError: RuntimeError not raised tests/test_modeling_common.py:2640: AssertionError ``` <|||||>Hey! I'll try to have a look, it looks like setting the `vocab_size` does not change the shape of the model which means that it does not raise an error when it should! <|||||>@ArthurZucker Thanks! The `vocab_size` argument had my suspicion as well. Since we inherit from `BioGptModel`, I thought that already does the needful. I could not figure out what I was missing. Looking forward to your suggestions.<|||||>@ArthurZucker Those changes seemed to do the trick, all the CI tests pass now. Thanks for your help!
transformers
22,252
closed
clip loss
https://github.com/mlfoundations/open_clip/blob/37b729bc69068daa7e860fb7dbcf1ef1d03a4185/src/open_clip/loss.py#L49 In the implementation of open_clip, logits distributed across multiple gpus are gathered for calculating loss. However, I cannot find the code related to this feature in this repository. I think more negative samples are very important for contrastive learning. @younesbelkada @ydshieh
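For context, a minimal illustrative sketch (not transformers code) of the cross-GPU gathering idea referenced above, similar in spirit to open_clip's approach; it assumes `torch.distributed` has already been initialized:

```python
import torch
import torch.distributed as dist

def gather_with_local_grad(features: torch.Tensor) -> torch.Tensor:
    """Collect features from every rank so each GPU sees the full set of negatives.

    all_gather does not propagate gradients, so the local shard is swapped back in
    to keep autograd working for this rank's slice of the batch.
    """
    world_size = dist.get_world_size()
    gathered = [torch.zeros_like(features) for _ in range(world_size)]
    dist.all_gather(gathered, features)
    gathered[dist.get_rank()] = features  # keep the local tensor for gradient flow
    return torch.cat(gathered, dim=0)
```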
03-19-2023 10:44:59
03-19-2023 10:44:59
Hi @hljjjmssyh The loss computation management is in the `Trainer` class, see https://github.com/huggingface/transformers/blob/da005253b82395b6097623bcee44b819bfe72b87/src/transformers/trainer.py#L2649-L2650<|||||>That's only for models wrapped in DataParallel @ydshieh @hljjjmssyh We don't include code requiring torch.distributed as it then fails when the script is used on one GPU. However we could use the Accelerate library to have something that works in both situation. If you want to explore this and open a PR, I'll be happy to review!<|||||>I think I’m missing something, it looks like this could be done for CLIP today with accelerate’s implementation in `examples/pytorch/image-classification/run_image_classification_no_trainer.py` running it with the appropriate args? Or maybe accelerate would nonetheless be a welcome addition somewhere else for the above mentioned purpose? It also looks [here](https://github.com/huggingface/transformers/blob/5990743fddb4780b15b8af2ed7ab55145ab40455/src/transformers/trainer.py#L1386-L1388) like the model would in fact be wrapped in DataParallel when training on multiple gpus.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,251
closed
t5 mlm train example, label generation
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.12.1 ### Who can help? @sanchit-gandhi @sgugger @stevhliu @MKhalusova ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction In the [T5 doc](https://huggingface.co/docs/transformers/v4.27.1/en/model_doc/t5#training), there is the following example describing the input_ids and labels format to train a T5 model: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids # the forward function automatically creates the correct decoder_input_ids loss = model(input_ids=input_ids, labels=labels).loss loss.item() ``` And right after this piece of code, the following: > If you’re interested in pre-training T5 on a new corpus, check out the [run_t5_mlm_flax.py](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) script in the Examples directory. So I looked at the example code on line [330](https://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/examples/flax/language-modeling/run_t5_mlm_flax.py#L330) of that file, but the behavior is different than what is written in the doc. Indeed, the `batch["labels"]` has a different format. If for example the input string is `"Hello world."`, `batch["input_ids"]` is set to `"Hello world<extra_id_0>"` and `batch["labels"]` is set to `"<extra_id_-2><extra_id_-3><extra_id_-6>"`. According to the doc, shouldn't `batch["labels"]` be `"<extra_id_0> . <extra_id_1>"`? To reproduce, you can simply reuse the following 3 functions: `random_spans_noise_mask`, `create_sentinel_ids` and `filter_input_ids` that are right below the `__call__` function on line 330: ```python import numpy as np from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained('t5-small') tokenized_sample = tokenizer("Hello world.", add_special_tokens=False, return_tensors='pt').input_ids # we don't want </s> mask = random_spans_noise_mask(tokenized_sample.shape[1]) input_ids_sentinel = create_sentinel_ids(mask[None, :].astype(np.int8)) labels_sentinel = create_sentinel_ids(~mask[None, :].astype(np.int8)) input_ids = filter_input_ids(tokenized_sample, input_ids_sentinel) labels = filter_input_ids(tokenized_sample, labels_sentinel) print(tokenizer.batch_decode(input_ids, skip_special_tokens=False)[0]) print(tokenizer.batch_decode(labels, skip_special_tokens=False)[0]) ``` This of code of course follows the same structure as the example on line 330. ### Expected behavior Just wondering if the described behavior in the `run_t5_mlm_flax.py` script is intended or not, since the doc describes a different behavior. It is confusing as the doc refers to this example, but the behaviors are different. Thanks.
03-18-2023 23:13:30
03-18-2023 23:13:30
The issue was on my side: `~mask[None, :].astype(np.int8)` should be `(~mask[None, :]).astype(np.int8)`. However, the resulting labels are still missing the extra id at the end: `batch["labels"]` will be equal to `"<extra_id_0> ."` instead of `"<extra_id_0> . <extra_id_1>"`. There is also no check for whether the number of sentinel tokens used exceeds the number of available sentinel tokens (the default maximum is 100 sentinel tokens when using pretrained T5 models).
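For readers hitting the same operator-precedence pitfall, a tiny self-contained check (illustrative only) shows why the parentheses matter:

```python
import numpy as np

mask = np.array([True, False, True])

# invert the boolean mask first, then cast: this is what the data collator expects
print((~mask[None, :]).astype(np.int8))  # [[0 1 0]]

# cast to int8 first, then apply bitwise NOT: produces invalid sentinel ids
print(~mask[None, :].astype(np.int8))    # [[-2 -1 -2]]
```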
transformers
22,250
closed
Is there or will there be support for xformers?
### Feature request xformers (I couldn't find anything online or in the docs, but I suspect it's very likely I'm just missing something). ### Motivation Speed and memory improvements. ### Your contribution I am unsure, but willing to help.
03-18-2023 21:10:07
03-18-2023 21:10:07
cc @younesbelkada <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,249
closed
LLaMa tokenizer is labelled incorrectly when called.
### System Info Most of the transformers functions call for "LlamaTokenizer", but the actual classes (found under transformers/models/llama) are labelled as "LLaMaTokenizer" ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction There are no steps, changing the class names fixes the loading. ### Expected behavior Either fix the class names or fix the functions that call for them.
03-18-2023 19:55:41
03-18-2023 19:55:41
transformers
22,248
closed
[Trainer] Use of inspect for model.forward with torch.compile
## Issue In `trainer`, the `inspect` module is used to remove extraneous dataset columns. https://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/src/transformers/trainer.py#L722-L728 However, `torch.compile` modifies the signature of the forward function of the original model, so `inspect.signature` is unable to correctly identify input arguments. ## Possible Solution If there is a way to recover the original arguments, that would be the cleanest solution. Otherwise, we could check if the model is compiled and modify the logic of the `_set_signature_columns_if_needed` function appropriately, with perhaps added logging to the user that columns won't be dropped due to using `torch.compile`. ## System Information * Python 3.8 * PyTorch 2.0 * transformers 4.27.1 ### Who can help? @stas00 @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python >>> import inspect, torch; from transformers import AutoModel >>> model = AutoModel.from_pretrained("roberta-base") >>> inspect.signature(model.forward) <Signature (input_ids: Union[torch.Tensor, NoneType] = None, attention_mask: Union[torch.Tensor, NoneType] = None, token_type_ids: Union[torch.Tensor, NoneType] = None, position_ids: Union[torch.Tensor, NoneType] = None, head_mask: Union[torch.Tensor, NoneType] = None, inputs_embeds: Union[torch.Tensor, NoneType] = None, encoder_hidden_states: Union[torch.Tensor, NoneType] = None, encoder_attention_mask: Union[torch.Tensor, NoneType] = None, past_key_values: Union[List[torch.FloatTensor], NoneType] = None, use_cache: Union[bool, NoneType] = None, output_attentions: Union[bool, NoneType] = None, output_hidden_states: Union[bool, NoneType] = None, return_dict: Union[bool, NoneType] = None) -> Union[Tuple[torch.Tensor], transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions]> >>> opt_model = torch.compile(model) >>> inspect.signature(opt_model.forward) <Signature (*args, **kwargs)> ``` ### Expected behavior The trainer should only drop unused columns, not all of them (which is what happens when it incorrectly registers `args` and `kwargs` as input arguments).
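As a concrete illustration of the "recover the original arguments" idea above, one possible sketch unwraps the compiled module before inspecting it; this assumes the PyTorch 2.0 behavior where `torch.compile` returns an `OptimizedModule` that exposes the original module as `_orig_mod`:

```python
import inspect

import torch
from transformers import AutoModel

model = torch.compile(AutoModel.from_pretrained("roberta-base"))

# fall back to the model itself when it was not compiled
unwrapped = getattr(model, "_orig_mod", model)
signature = inspect.signature(unwrapped.forward)
print(list(signature.parameters))  # ['input_ids', 'attention_mask', ...] instead of ['args', 'kwargs']
```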
03-18-2023 17:45:35
03-18-2023 17:45:35
I realized that the proper way to use `torch.compile` with `Trainer` is through the `training_args.torch_compile` flag. Using the flag didn't cause the issue (I was manually compiling it outside the trainer). Closing, thanks!
transformers
22,247
closed
[Trainer] Add optional communication backends for torch.distributed when using GPU
# What does this PR do? Add optional backends for `torch.distributed` when using GPU. I want to use other communication backends according the [pytorch_distribution_tutorial](https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends), but I found Trainer only uses nccl when `self.no_cuda is False` . Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
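For context, picking a communication backend directly with the standard `torch.distributed` API from the linked tutorial looks like the sketch below; it is shown only to illustrate the motivation, and the exact argument this PR adds to the Trainer may be named differently:

```python
import torch.distributed as dist

# nccl is GPU-only; gloo also supports CPU tensors (mpi requires a matching PyTorch build).
# "env://" expects RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT, e.g. when launched via torchrun.
dist.init_process_group(backend="gloo", init_method="env://")
print(dist.get_backend())
```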
03-18-2023 17:14:04
03-18-2023 17:14:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,246
closed
FlaxDataCollatorForT5MLM :ValueError: all input arrays must have the same shape
### System Info - transformers version: 4.27.1 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 2.0.0.dev20230202+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help? @patil-suraj @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am following the script to reproduce the above https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py#L336-L346 If I give the `mean_noise_span_length ` > 1, for any value of noise_density, i get the ouput ``` prompt = "The cute dog walks in the green park" encoded = tokenizer(prompt, truncation=False, padding=False, return_tensors="pt").input_ids batch_size =1 input_length = encoded.shape[1] denoiser = FlaxDataCollatorForT5MLM(tokenizer,.35,3) mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) labels_mask = ~mask_indices input_ids_sentinel = denoiser.create_sentinel_ids(mask_indices.astype(np.int8)) labels_sentinel = denoiser.create_sentinel_ids(labels_mask.astype(np.int8)) input_ids = denoiser.filter_input_ids(encoded, input_ids_sentinel) labels = denoiser.filter_input_ids(encoded, labels_sentinel) ``` If I give the `mean_noise_span_length ` == 1, for many value of noise_density, i get the error ``` Traceback (most recent call last): File "/home/alex/coding/tranformer_learn/t5_denoising.py", line 133, in <module> mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) File "/home/alex/coding/tranformer_learn/t5_denoising.py", line 133, in <listcomp> mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) File "/home/alex/coding/tranformer_learn/t5_denoising.py", line 94, in random_spans_noise_mask np.stack([nonnoise_span_lengths, noise_span_lengths], axis=1), [num_noise_spans * 2] File "<__array_function__ internals>", line 200, in stack File "/home/alex/.local/lib/python3.10/site-packages/numpy/core/shape_base.py", line 464, in stack raise ValueError('all input arrays must have the same shape') ValueError: all input arrays must have the same shape ``` Basically, the two arrays are different lengths in numpy stack ``` interleaved_span_lengths = np.reshape( np.stack([nonnoise_span_lengths, noise_span_lengths], axis=1), [num_noise_spans * 2] ) ``` From what I could make out this happens when `num_noise_spans` == `num_noise_tokens` when `mean_noise_span_length == 1` ``` num_noise_spans = int(np.round(num_noise_tokens / self.mean_noise_span_length)) ``` Code that can be run https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8 ### Expected behavior There should not be exception
03-18-2023 15:57:08
03-18-2023 15:57:08
cc @sanchit-gandhi @ArthurZucker maybe<|||||>Hey @alexcpn - great job at digging into the issue and thanks for the gist! It does indeed look like we're hitting this error because of how we compute `num_noise_spans`: https://github.com/huggingface/transformers/blob/aec10d162f59d809ead3990ef78c51918b622f38/examples/flax/language-modeling/run_t5_mlm_flax.py#L274 Would you like to open a PR to fix this so that it's robust for `mean_noise_span_length == 1`? The code is largely ported from the original T5 pre-processing, which can be found here: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/preprocessors.py<|||||>Hi @sanchit-gandhi, I have tried to demonstrate the problem and a possible correction; please find the pull request here: https://github.com/huggingface/transformers/pull/22938
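For illustration, one way to make the span computation robust is to clamp the number of spans so that both segmentations can always produce the same number of segments; the sketch below is an assumption about a possible fix, not the contents of PR #22938:

```python
import numpy as np

def robust_num_noise_spans(length, noise_density, mean_noise_span_length):
    # hypothetical helper: never ask for more spans than either the noise or
    # the non-noise tokens can be split into, so np.stack always sees equal shapes
    num_noise_tokens = int(np.round(length * noise_density))
    num_noise_tokens = min(max(num_noise_tokens, 1), length - 1)
    num_nonnoise_tokens = length - num_noise_tokens
    num_noise_spans = int(np.round(num_noise_tokens / mean_noise_span_length))
    return max(1, min(num_noise_spans, num_noise_tokens, num_nonnoise_tokens))

print(robust_num_noise_spans(9, 0.35, 1))  # 3 noise spans for a 9-token input
```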
transformers
22,245
closed
ImportError: cannot import name 'AlignModel' from 'transformers
### System Info ImportError: cannot import name 'AlignModel' from 'transformers'. `transformers.__version__` = 4.22.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ImportError: cannot import name 'AlignModel' from 'transformers' ### Expected behavior I want to import AlignModel from transformers 4.22.1
03-18-2023 14:40:36
03-18-2023 14:40:36
ALIGN was only added in v4.27 of Transformers, so you'll need to do `pip install --upgrade transformers` to upgrade to the latest version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,244
closed
input_ids and labels do not match while using FlaxDataCollatorForT5MLM methods
### System Info - `transformers` version: 4.27.1 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 2.0.0.dev20230202+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help? @patil-suraj @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am following the documentation at https://huggingface.co/docs/transformers/main/model_doc/t5#training for Unsupervised denoising training with my dataset ``` prompt = "The <extra_id_0> walks in <extra_id_1> park" encoded_prompt = tokenizer(prompt, truncation=False, padding=False, return_tensors="pt").input_ids print(f"encoded_prompt ={encoded_prompt}") labels ="<extra_id_0> cute dog <extra_id_1> the <extra_id_2>" encoded_labels = tokenizer(labels, truncation=False, padding=False, return_tensors="pt").input_ids print(f"encoded_labels ={encoded_labels}") print(f"{encoded_prompt.shape} ={encoded_labels.shape}") ``` Output ``` encoded_prompt =tensor([[ 37, 32099, 10681, 16, 32098, 2447, 1]]) encoded_labels =tensor([[32099, 5295, 1782, 32098, 8, 32097, 1]]) torch.Size([1, 7]) =torch.Size([1, 7]) ```` I am following the script to reproduce the above https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py#L336-L346 ``` prompt = "The cute dog walks in the green park" encoded = tokenizer(prompt, truncation=False, padding=False, return_tensors="pt").input_ids batch_size =1 input_length = encoded.shape[1] denoiser = FlaxDataCollatorForT5MLM(tokenizer,.35,3) mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) labels_mask = ~mask_indices input_ids_sentinel = denoiser.create_sentinel_ids(mask_indices.astype(np.int8)) labels_sentinel = denoiser.create_sentinel_ids(labels_mask.astype(np.int8)) input_ids = denoiser.filter_input_ids(encoded, input_ids_sentinel) labels = denoiser.filter_input_ids(encoded, labels_sentinel) print(f"input_ids decoded = {tokenizer.decode(*input_ids,skip_special_tokens=False)}") print(f"labels decoded = {tokenizer.decode(*labels,skip_special_tokens=False)}") print(f"input_ids.shape {input_ids.shape} should be equal to labels.shape {labels.shape}") ``` This given the denoised output properly, but labels size '(1,5)' is not matching the input size '(1,8)' ``` input_ids decoded = The cute dog walks in the<extra_id_0></s> labels decoded = <extra_id_0> green park</s></s> input_ids.shape (1, 8) should be equal to labels.shape (1, 5) ``` Should I pad the labels with <extra-ids> to match the size of the input_ids, if not with what should I pad, as the `t5-base` or the transformer model needs the input_ids to be the same shape as the labels(targets) for training Code that can be run https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8 ### Expected behavior The input_ids and labels should be the same shape
03-18-2023 12:02:03
03-18-2023 12:02:03
For T5 training, `input_ids` and `labels` need not match in shape, unlike in GPT-2. I was thinking that the denoising training would help it memorise the text, and I guess it kind of does. From https://gist.github.com/alexcpn/e33a8b44e9774653d7492fb494fb1009 ``` After Training:'The cute dog walks in the'-->'cute dog walks in the cute cute dog' ``` But the idea in the T5 model seems to be to just train it with a specific target (like translation with a prefix)?
transformers
22,243
closed
Italian translation perf_infer_cpu
## What does this PR do? Italian translation of the doc page on CPU inference (`perf_infer_cpu`) for :hugs: Transformers. * updated _toctree.yml * added perf_infer_cpu.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) @sgugger, @stevhliu, @MKhalusova and @omarespejel
03-18-2023 06:37:05
03-18-2023 06:37:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,242
closed
[deepspeed zero3] need `generate(synced_gpus=True, ...)`
As discussed in https://github.com/huggingface/transformers/issues/22231, `generate` under DeepSpeed ZeRO-3 may hang when different GPUs receive different input streams. It's documented [here](https://huggingface.co/docs/transformers/main/main_classes/deepspeed#custom-deepspeed-zero-inference) and in the API docs that `synced_gpus=True` is required, but who reads the docs. So this PR automatically turns this flag on under ZeRO Stage-3, so everything works out of the box, and it warns the user once for awareness. Fixes: https://github.com/huggingface/transformers/issues/22231
03-18-2023 05:05:58
03-18-2023 05:05:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>So a potential issue I see is that deepspeed is potentially not enabled on all modele, & this would enable `synced_gpus` even for models where it's not enabled? depends on how `is_deepspeed_zero3_enabled` works, which you likely know better than me<|||||>- For Accelerate and HF Trainer everything is done automatically for you. - If you build your own trainer and follow [the instructions](https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration) it'll work as well. <|||||>1. Are you proposing: ``` def generate(..., synced_gpus=None) [...] if synced_gpus == None: if is_deepspeed_zero3_enabled() and dist.world_size() > 1: synced_gpus = True else: synced_gpus = False ``` which would preserve BC wrt current `synced_gpus=False` in the function definition. yes? 2. and no warning needed then? or still keeping it? 3. now docs will be mismatching so will need to adapt those to say that by default with multi-gpu it'll be set to `True`, but the user can choose to set it to `False` if they want to.<|||||>Yes, your code is exactly what I'm suggesting. I think it would be a better API since the user wouldn't have to look for warnings (no need for a warning indeed in this case) and would preserve backward compatibility as you mention.<|||||>That sounds good. Thank you for proposing it, Sylvain. So no warning needed, right? As this logic is really about dynamic default setting and it'll be documented as such.<|||||>Yup!<|||||>Thank you for suggesting a more elegant solution than my initial one, Sylvain.<|||||>thanks folks, this is great
transformers
22,241
closed
How to get T5 decoded logits using TFT5ForConditionalGeneration from encoded outputs?
### System Info - `transformers` version: 4.24.0 - Platform: Linux-6.1.11-76060111-generic-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): 2.10.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Rocketknight1 @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import numpy as np import tensorflow as tf from transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0) tf_model = TFT5ForConditionalGeneration(config=distill_config) tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) inputs = tokenizer("this is a random input", return_tensors="tf")['input_ids'] encoder_outputs = tf_model.encoder(inputs) decoder_input_ids = tf.convert_to_tensor(np.asarray([[0]]).astype(np.int32)) output = tf_model.decoder(decoder_input_ids = decoder_input_ids, encoder_outputs=encoder_outputs.last_hidden_state) ``` Error: ```python ValueError Traceback (most recent call last) <ipython-input-5-face8f4fd36f> in <module> 10 encoder_outputs = tf_model.encoder(inputs) 11 decoder_input_ids = tf.convert_to_tensor(np.asarray([[0]]).astype(np.int32)) ---> 12 output = tf_model.decoder(decoder_input_ids = decoder_input_ids, encoder_outputs=encoder_outputs.last_hidden_state) 1 frames /usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---> 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb /usr/local/lib/python3.9/dist-packages/keras/utils/layer_utils.py in split_out_first_arg(self, args, kwargs) 807 inputs = kwargs.pop(self._arg_names[0]) 808 else: --> 809 raise ValueError( 810 "The first argument to `Layer.call` must always be passed." 811 ) ValueError: The first argument to `Layer.call` must always be passed. ``` ### Expected behavior I am trying to convert a TFT5ForConditionalGeneration with custom config into a TFLite model, and as far as I see, implementing a greedy approach on my own seems faster, but if you know a more straightforward process, please let me know. I am currently trying to generate the decoder output using the encoder output, which I will generate only the first time when I pass the entire sentence. And then, I tried to reuse this encoded vector for the rest of the greedy search as input for the decoder.
03-18-2023 04:19:18
03-18-2023 04:19:18
Without using an encoded vector, this gives me the required output: ```python import tensorflow as tf from transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration, set_seed set_seed(0) tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) tf_model = TFT5ForConditionalGeneration.from_pretrained("t5-small") inputs = tokenizer("i got permission to begin a start up company by my own..</s>",return_tensors='tf') attn = inputs['attention_mask'] decoder_input = tf.zeros((1,1), dtype=tf.int64) output = tf_model(input_ids=inputs['input_ids'], attention_mask = attn, decoder_input_ids=decoder_input).logits print(tokenizer.batch_decode(output.numpy().argmax(-1).tolist()), output.numpy().argmax(-1).tolist()) ``` Output: `[''] [[3]]` But I get a different answer when I try to use the encoded vector as below. ```python import tensorflow as tf from transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration, set_seed set_seed(0) tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) tf_model = TFT5ForConditionalGeneration.from_pretrained("t5-small") inputs = tokenizer("i got permission to begin a start up company by my own..</s>",return_tensors='tf') attn = inputs['attention_mask'] encoder_outputs = tf_model.encoder(inputs['input_ids'], attention_mask = attn, return_dict = True) output = tf_model.decoder(decoder_input, encoder_hidden_states=encoder_outputs.last_hidden_state).last_hidden_state print(tokenizer.batch_decode(output.numpy().argmax(-1).tolist()), output.numpy().argmax(-1).tolist()) ``` Output: `['une'] [[245]]`<|||||>Hi @FrozenWolf-Cyber, thanks for raising this issue. This difference is arising because the two scripts are not equivalent. In the forward pass of the T5 model, the output of the decoder is passed to the language model head to produce the outputs - see the [relevant lines here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_tf_t5.py#L1429-L1433). <|||||>@amyeroberts Thanks for replying, I tried do: ```python tf_model.lm_head(output[0]) ``` But I seem to be getting the following error: ``` AttributeError Traceback (most recent call last) [<ipython-input-13-8324bea7f5ea>](https://localhost:8080/#) in <module> ----> 1 tf_model.lm_head(output[0]) AttributeError: 'TFT5ForConditionalGeneration' object has no attribute 'lm_head' ``` <|||||>This is because, for the `"t5-small"` checkpoint config, `tie_word_embeddings==True`. In this case, there isn't a `lm_head` layer, and instead the shared weights are used. 
The relevant lines [are here.](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_tf_t5.py#L1430-L1431) <|||||>```python import tensorflow as tf from transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration, set_seed set_seed(0) tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) tf_model = TFT5ForConditionalGeneration.from_pretrained("t5-small") inputs = tokenizer("i got permission to begin a start up company by my own..</s>",return_tensors='tf') attn = inputs['attention_mask'] encoder_outputs = tf_model.encoder(inputs['input_ids'], attention_mask = attn) decoder_input = tf.zeros((1,1), dtype=tf.int64) sequence_output = tf_model.decoder(decoder_input, encoder_hidden_states=encoder_outputs[0])[0] sequence_output = sequence_output * (tf_model.model_dim**-0.5) logits = tf.matmul(sequence_output, tf_model.shared.weights, transpose_b=True) print(tokenizer.batch_decode(logits.numpy().argmax(-1).tolist())) ``` @amyeroberts Thank you very much this code works now :)
transformers
22,240
open
Add InternImage
### Model description InternImage is a new large-scale CNN-based foundation model which, like ViTs, gains from increasing parameters and training data. Unlike recent CNNs that focus on large dense kernels, InternImage takes deformable convolution as its core operator, so the model not only has the large effective receptive field required for downstream tasks such as detection and segmentation, but also performs adaptive spatial aggregation conditioned on the input and task information. InternImage-H achieved a new record of 65.4 mAP on COCO test-dev and 62.9 mIoU on ADE20K, outperforming current leading CNNs and ViTs. It is worth noting that InternImage relies on a custom CUDA operator, so if this causes problems for adding the model, you can replace [the CUDA operator](https://github.com/OpenGVLab/InternImage/blob/master/classification/ops_dcnv3/modules/dcnv3.py#L218) with [a PyTorch implementation](https://github.com/OpenGVLab/InternImage/blob/master/classification/ops_dcnv3/modules/dcnv3.py#L91). In fact, we have already submitted [a version of the code on transformers](https://huggingface.co/OpenGVLab/internimage_t_1k_224/tree/main); however, for security reasons the code we submitted cannot be called by your web inference API, so we would like you to add InternImage to transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/OpenGVLab/InternImage
03-18-2023 02:24:56
03-18-2023 02:24:56
Can I take it up?<|||||>> Can I take it up? Of course, thank you!<|||||>@souravpy Are you currently working on this? If not, I would love to take a look to see if I could help in adding this model to HF Transformers!<|||||>The [modeling code and weights](https://huggingface.co/OpenGVLab/internimage_s_1k_224/blob/main/intern_image.py) for Intern Image are already on the hub, and so the model can already be used directly with the `AutoModel` API. cf. https://github.com/huggingface/transformers/pull/23782#issuecomment-1568459737
transformers
22,239
closed
bos_token and eos_token for Llama tokenizer
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @ArthurZucker @zphan ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` model = AutoModelForCausalLM.from_pretrained("./llama-7b-hf") tokenizer = AutoTokenizer.from_pretrained("./llama-7b-hf", use_fast=False) ``` `model.config.eos_token_id` shows 1, but `tokenizer.eos_token_id` shows 2. ### Expected behavior I wonder if they should be the same, or am I missing something?
03-18-2023 02:15:22
03-18-2023 02:15:22
Hey! There must be a typo in your `generation_config` as the `convert_llama_weights_to_hf.py` as well as `configuration_llama` both set it to `2`. Are you sure that you are using the latest scripts? The fix is just `model.config.eos_token_id = 2` in this case. <|||||>I see. The [config.json](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/config.json) and [generation_config.json](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/generation_config.json) both set it to 1. So I will change it to 2 for now.<|||||>Thank you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
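For completeness, a short sketch of the manual override discussed above, aligning the configs with the tokenizer's value; the `generation_config` attribute is assumed to be present, which it is on recent transformers versions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("./llama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("./llama-7b-hf", use_fast=False)

# align the model/generation configs with the tokenizer (eos_token_id == 2 here)
model.config.eos_token_id = tokenizer.eos_token_id
if getattr(model, "generation_config", None) is not None:
    model.generation_config.eos_token_id = tokenizer.eos_token_id
```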
transformers
22,238
closed
replace_8bit_linear modules_to_not_convert default value fix
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the default value of `modules_to_not_convert` of `utils.bitsandbytes.replace_8bit_linear`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-17-2023 23:13:00
03-17-2023 23:13:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,237
closed
Update vision docstring bool masked pos
# What does this PR do? Add the missing `bool_masked_pos` information in the docstring for vision models. Fixes #21484 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-17-2023 18:20:46
03-17-2023 18:20:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,236
closed
Rework a bit the LLaMA conversion script
# What does this PR do? This PR makes sure the LLaMA conversion script stays up to date with `save_pretrained` by having the checkpoint loaded into an actual model and then saved via that method. This avoids a lot of hard-coded values in JSON files. It keeps the old logic and merely re-loads the result into a Transformers model (after cleaning everything up to make sure we never go above the model size in CPU RAM). It also changes the API a bit to put everything in the output folder, like we usually have in repos on the Hugging Face Hub. cc @zphang so you are aware of this.
03-17-2023 18:10:56
03-17-2023 18:10:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>I don't see how you can reduce the memory requirement since the files provided by Meta each contain a part of all weights, so you need to have them all loaded to reconstruct just one of the weights. That's why I didn't bother implementing sharding on the fly.<|||||>Indeed, just realised you have to `cat` them 😞 my bad!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22236). All of your documentation changes will be reflected on that endpoint.
transformers
22,235
closed
Wav2Vec2ProcessorWithLM can return N best hypotheses now
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22150 , now the user can specify the number of hypotheses which will be returned after the decoding stage. If the specified number is higher than the actual number of hypotheses then all hypotheses will be returned. This is useful when the user wants to run the rescoring on the n-best hypotheses (check out the motivation in the linked issue). Wav2Vec2DecoderWithLMOutput class was already prepared for this feature and [this comment in the code](https://github.com/huggingface/transformers/blob/2355e463955a5392c1acf1964d89747e8b146a6f/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L571) said that this feature will be eventually added, so here it is. I tried not to break anything that relies on the current version of the decode function, the doc string is updated with a new parameter. All tests passed. The code was well-formatted. ## Before submitting - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? **Is this necessary for a such small feature?** @younesbelkada @ArthurZucker @sanchit-gandhi , does this make sense to you, guys? Is there anything else I should add? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-17-2023 16:32:53
03-17-2023 16:32:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>We might have more luck with @sanchit-gandhi ;-)<|||||>> Thanks a lot for the PR @vsokolovskii, > > Just to better understand what happens now in case we decoder a batch of logits with `n_best > 1` - > will we return a list of a list of text in this case? > > Wondering if that's the API that we want - @sanchit-gandhi wdyt? Take a look at the [description of the output class arguments](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L45) that you have, you've already prepared everything for this change and I just added the return statement. There should be the possibility to get more than one hypothesis from the ASR in order to rescore it with a larger model, take a look at the motivation section in the linked issue. πŸ€— <|||||>@sanchit-gandhi aha... got it. Check out the new changes, please. > Very cool feature @vsokolovskii! Regarding @patrickvonplaten's question about batch decoding, we don't actually have the argument `n_best` for the `batch_decode` method, it's only for the single-item, `decode` method. So currently, we'd never be returning batches of n-best hypothesis. > > WDYT about adding `n_best` to the `batch_decode` method as well @vsokolovskii? In this case, I think we should match the output format to generate's beam search method as `[batches * num_sequences, output_sequences]` (see https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation.BeamSearchDecoderOnlyOutput.sequences) <|||||>@ArthurZucker @amyeroberts , could you please rerun the tests once the pipeline is fixed, I believe that it's not caused by my changes.<|||||>The code quality check not passing is not due to your PR at first glance, but to make sure, could you rebase on main? It has been fixed on the main branch.<|||||>> The code quality check not passing is not due to your PR at first glance, but to make sure, could you rebase on main? It has been fixed on the main branch. thanks! forgot yo update my fork
transformers
22,234
closed
Fix Unnecessary move of tensors from CPU to GPU in LlamaRotaryEmbedding
# What does this PR do? The original implementation of LlamaRotaryEmbedding does not use `cos_cached` & `sin_cached` tensors as the PyTorch Parameter or Buffer, thus these tensors do not move to GPU when we use `model.to(gpu_id)` or `model.cuda()`. They will keep in the device CPU. This PR adjusts the `cos_cached` & `sin_cached` tensors to the Buffer with persistent=False. This keeps these tensors moving from CPU to GPU together with the model, while keeping them out of the model's state_dict as original. # Fixes: Fix unnecessary moves of tensors from CPU to GPU in LlamaRotaryEmbedding, for saving a large amount of CPU usage especially when we do inference. Code for Reproducing the issue: ```python import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" # Single card Generation from tqdm import tqdm import torch from transformers.models.llama.modeling_llama import LlamaForCausalLM from transformers.models.llama.tokenization_llama import LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf", torch_dtype=torch.float16) model = model.cuda() model.eval() # Batch generation inputs = [ "LLaMa is a large language model developed by Meta AI, for", ] * 32 batch = tokenizer(inputs, return_tensors="pt", add_special_tokens=False) batch = batch.to(model.device) # Here we do some high computational batched generation for i in tqdm(range(5000)): generated = model.generate(batch["input_ids"], temperature=0.7, top_p=0.9, do_sample=True, num_beams=1, max_new_tokens=600,) ``` Use `top` command in bash to watch the CPU usage. Here are the comparison before applying this PR and after this PR: Before: | Fix | USER | PR | NI | VIRT | RES | SHR | S | %CPU | %MEM | TIME+ | COMMAND | |--------|------|----|----|--------|------|--------|---|------|------|---------|----------| | Before | root | 20 | 0 | 108.6g | 1.9g | 411620 | R | 6263 | 0.2 | 40:28.1 | python | After: | Fix | USER | PR | NI | VIRT | RES | SHR | S | %CPU | %MEM | TIME+ | COMMAND | |-------|------|----|----|--------|------|--------|---|------|------|---------|----------| | After | root | 20 | 0 | 108.6g | 1.8g | 414360 | R | 98.3 | 0.2 | 03:21.6 | python | Here the CPU usage drops to a normal level because the `cos_cached` & `sin_cached` tensors can move to GPU correctly with the model. This helps avoid unnecessary moves of tensors from CPU to GPU in LlamaRotaryEmbedding. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada
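To illustrate the buffer pattern this PR describes, here is a simplified, self-contained sketch (not the exact transformers implementation): `persistent=False` keeps the cached tensors out of the `state_dict` while still letting them follow `model.to(device)` / `model.cuda()`:

```python
import torch
from torch import nn

class RotaryEmbeddingSketch(nn.Module):
    # simplified sketch of the pattern adopted in this PR: cos/sin caches are
    # registered as non-persistent buffers instead of plain tensor attributes
    def __init__(self, dim, max_position_embeddings=2048, base=10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq)
        t = torch.arange(max_position_embeddings, dtype=self.inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

    def forward(self, x, seq_len):
        # caches are already on x.device after model.cuda(), so no per-step host-to-device copy
        return self.cos_cached[:, :, :seq_len, ...], self.sin_cached[:, :, :seq_len, ...]
```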
03-17-2023 16:15:54
03-17-2023 16:15:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Did you accidentally break meta loading? `with init_empty_weights():` leaves `cos_cached` and `sin_cached` on meta device and they won't be initialized because they are not persistent. <|||||>Since `inv_freq` is a persistent buffer, it should be ok to also make harmonics persistent<|||||>Hi @BlackSamorez, would you like to open a PR with these suggested changes including details about the issue they resolve? <|||||>@BlackSamorez `init_empty_weights` ignores buffers by default, so this should not cause any problem. We have multiple instance of non-persistent buffers in the lib and this is not a problem. I've also run Llama without any issue after it being initialized on the meta device.<|||||>Hi, I test the following two codes on my device. It seems the meta device works correctly in this PR. ```python import pickle import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" import torch from transformers.models.llama.configuration_llama import LlamaConfig from transformers.models.llama.modeling_llama import LlamaForCausalLM model_name_or_path = "decapoda-research/llama-7b-hf" model1 = LlamaForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16) model1 = model1.to(torch.device("cuda:0")) # Save a initialized cos_cached tensor to `cos1.pt`, for comparasion with meta device loading cos1 = model1.model.layers[0].self_attn.rotary_emb.cos_cached.to(torch.device("cpu")) pickle.dump(cos1, open("cos1.pt", 'wb')) ``` ```python import pickle import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" import torch from transformers.models.llama.configuration_llama import LlamaConfig from transformers.models.llama.modeling_llama import LlamaForCausalLM from accelerate import init_empty_weights, infer_auto_device_map, load_checkpoint_and_dispatch model_name_or_path = "decapoda-research/llama-7b-hf" config = LlamaConfig.from_pretrained(model_name_or_path, torch_dtype=torch.float16) with init_empty_weights(): model0 = LlamaForCausalLM(config) model0 = load_checkpoint_and_dispatch( model0, model_name_or_path, device_map='auto', ) # Compare the `cos_cached` tensor cos0 = model0.model.layers[0].self_attn.rotary_emb.cos_cached.to(torch.device("cpu")) cos1 = pickle.load(open("cos1.pt", 'rb')) all((cos0==cos1).tolist()) # True ``` @BlackSamorez Maybe you can check the results on your device.<|||||>Yes, you're right and I was wrong. It works and the problem was in entirely different part of my program. Consider https://github.com/huggingface/transformers/pull/22234#discussion_r1143320748 and https://github.com/huggingface/transformers/pull/22234#discussion_r1143321732 invalid. Thank you!<|||||>I'm still facing this issue with latest deepspeed (0.9.5+1491e14e) and transformers (4.31.0.dev0). I feel this issue is more likely related to the LLaMA implementation here (LlamaRotaryEmbedding). 
``` RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] [the same RuntimeError is emitted by every rank, with the lines interleaved] ```<|||||>> I'm still facing this issue with latest deepspeed (0.9.5+1491e14e) and transformers (4.31.0.dev0). I feel this issue is more likely related to the LLaMA implementation here (LlamaRotaryEmbedding). > > ``` > RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) > cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim] > ``` I encountered exactly the same issue, training failed when using zero3<|||||>Thanks for the input, will investigate!
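The follow-up reports above fail at `cos[position_ids]` when the cached table and the position indices end up on different devices under ZeRO stage 3. A hedged, illustrative guard (not necessarily the fix the maintainers will ship) looks like this:

```python
import torch


def gather_cos_sin(cos, sin, position_ids):
    # Defensive device alignment before fancy indexing: with ZeRO-3 sharding it
    # is possible for the cached tables and the position ids to live on
    # different devices, which is exactly what the RuntimeError above reports.
    position_ids = position_ids.to(cos.device)
    cos = cos[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    sin = sin[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    return cos, sin
```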
transformers
22,233
closed
Revert "Use `dash==2.8.1` for now for daily CI"
Reverts huggingface/transformers#22227 The new version, [dash 2.9.1](https://github.com/plotly/dash/releases/tag/v2.9.1), has been tested and works with our CI, so we no longer need the change in #22227.
03-17-2023 15:48:00
03-17-2023 15:48:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22233). All of your documentation changes will be reflected on that endpoint.
transformers
22,232
closed
Fix llama_tokenizer
Fixes #22222 This PR fixes the LlamaTokenizer import by correcting the __init__.py file in src/transformers.
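A quick way to verify the fix locally; the checkpoint path below is a placeholder, not a real repo id:

```python
# If the __init__.py entry is correct, this import no longer raises.
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/local/llama/checkpoint")
print(tokenizer.tokenize("Hello world"))
```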
03-17-2023 15:04:23
03-17-2023 15:04:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22232). All of your documentation changes will be reflected on that endpoint.
transformers
22,231
closed
Detect Accelerate's DeepSpeed level 3 Env Vars and warn if synced_gpus is False
### Feature request If `ACCELERATE_DEEPSPEED_ZERO_STAGE` == 3 and generate is called without `synced_gpus`, it would be reasonable to warn the user that if they're doing a distributed call to generate with a deepspeed model, they need to give generate the `synced_gpus` arguments. ### Motivation ## Background Deepspeed level 3 shards the parameters, so it requires that `model.forward` be called the same amount of times on each process even at inference time, so the weights can be moved around in time. `model.forward` is called for each token generated at generation time. If a process stops generating before other processes, Deepspeed level 3 breaks because `model.forward` isn't called in processes where generation is over. That's why the `synced_gpus` argument is present in `model.generate`, the `model.forward` function keeps getting called until all processes are done generating. ## Accelerate Has Env Vars that Indicate Stage 3 When using Deepspeed, accelerate has an env var called `ACCELERATE_DEEPSPEED_ZERO_STAGE` that contains the level. While `ACCELERATE_DEEPSPEED_ZERO_STAGE` being set to 3 doesn't guarantee that the model is being called is distributed, it is a pretty big indication in practice, and it would be reasonable to give a warning if `model.generate` (and possibly `model.greedy_search` etc) are called without `synced_gpus`, as new users will probably not know about this. If there is a way for `model.generate` to know in a more reliable way if the model is distributed with Deepspeed level 3, then that could be used to warn the user as well ofc. ### Your contribution I can do it, but for these nuanced, low coding qty things, you folks are probably better placed than me.
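A sketch of the check this request describes; the environment variable name comes from the issue itself, while the helper name and the warning wording are only illustrative:

```python
import os
import warnings

import torch.distributed as dist


def maybe_warn_about_synced_gpus(synced_gpus: bool) -> None:
    zero_stage = os.environ.get("ACCELERATE_DEEPSPEED_ZERO_STAGE")
    world_size = dist.get_world_size() if dist.is_available() and dist.is_initialized() else 1
    if zero_stage == "3" and world_size > 1 and not synced_gpus:
        warnings.warn(
            "DeepSpeed ZeRO stage 3 seems active across multiple processes; "
            "pass synced_gpus=True to generate() so every rank keeps calling "
            "forward until all ranks have finished generating."
        )
```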
03-17-2023 14:41:21
03-17-2023 14:41:21
cc @stas00 and @pacman100 <|||||>Totally. Thank you for bringing it up, @JulesGM The API for checking this situation is already available and is being used in the HF Trainer: https://github.com/huggingface/transformers/blob/bec075612a293a66022937f65ba0c0df25224d29/src/transformers/trainer_seq2seq.py#L180-L188 For DIY integration we can 1. document it here: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration 2. and add an assert inside `generate` if it is called w/o this flag and WORLD_SIZE>1 and zero3. No warnings please - nobody sees those. (need to think how to check world_size inside `generate` but checking for deepspeed first will enable a definite use of `torch.distributed.get_world_size()` so should be easy). Would you like to work on that, @JulesGM? I'd be happy to support you or I might find time to do it myself some time later. Totally up to you.<|||||>That's great to hear Stas. Honestly I'm kind of working night and day for my thesis deadline right now, so if you want to do it, it would be much appreciated.<|||||>Thank you for letting me know your preference, please try this PR and let me know if it solves the problem for you, @JulesGM https://github.com/huggingface/transformers/pull/22242 I decided to just set it automatically if it wasn't set. The docs were already correct, so no need to change them.
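For DIY (non-Trainer) integration, the Trainer code linked above boils down to something like the sketch below; the `transformers.deepspeed` import path is the one used around this release and may move in later versions:

```python
from transformers.deepspeed import is_deepspeed_zero3_enabled


def generate_with_zero3_safety(model, inputs, **gen_kwargs):
    # Under ZeRO-3, every rank must keep running forward passes until all
    # ranks are done generating, hence synced_gpus=True when stage 3 is on.
    gen_kwargs.setdefault("synced_gpus", is_deepspeed_zero3_enabled())
    return model.generate(**inputs, **gen_kwargs)
```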
transformers
22,230
closed
Removed .mdx extension in two links
This PR fixes two links that incorrectly included the .mdx extension.
03-17-2023 14:11:32
03-17-2023 14:11:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,229
closed
Fix natten
# What does this PR do? The new NATTEN 0.14.5 supports PyTorch 2.0, but also adds an additional argument to the QK operation to allow optional RPBs. This ends up failing NATTEN tests. This commit adds NATTEN back to circleci and adds the arguments to get it working again. Reverts #22218. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger ## Misc Related issues on NATTEN: https://github.com/SHI-Labs/NATTEN/issues/23 https://github.com/SHI-Labs/NATTEN/issues/19
03-17-2023 13:58:02
03-17-2023 13:58:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>@alihassanijr Thanks for such a quick fix! Just double checking - is this version of `natten` compatible with later versions of PyTorch, or just >= 2.x.x?<|||||>My pleasure, sorry I didn't realize this earlier. Yes, `0.14.5` still supports torch >= 1.8 and comes with wheels for those: https://shi-labs.com/natten So the problem was that we had a pull request a couple of months ago that added an additional argument to one of the C functions. We didn't immediately rebuild and push out a new release at the time. We did however push out a new build to support PyTorch 2.0, and it included this change, which is why we had to open a PR here as well. On a different note, we only had to change this here in the first place because we explicitly use the C function calls in the models using NA: https://github.com/alihassanijr/transformers/blob/6125d62e05aba0bd1f6a53bd3bf44b4d86b58f25/src/transformers/models/dinat/modeling_dinat.py#L350 https://github.com/alihassanijr/transformers/blob/6125d62e05aba0bd1f6a53bd3bf44b4d86b58f25/src/transformers/models/nat/modeling_nat.py#L342 We could try and figure out how we would directly import the nn.Module we typically encourage everyone to use, that way any changes to the signatures would not affect `transformers`. NATTEN can in theory support future PyTorch versions without any change required (unless PyTorch changes anything in their ATEN backend like they did with the dispatchers in 1.13, which would require us to work those changes into our CPP backend as well.) The only slight hitch is that if users want to install NATTEN with wheels (and not have to wait for pip to build it locally), we have to build those on our end and upload them. I've been wanting to set up CircleCI or Travis so that I wouldn't have to set that up manually every time there's a new PyTorch release, but just haven't found the time to do so yet. But we will to the best of our abilities try to build them upon new PyTorch releases and push them out as soon as possible.
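As a rough illustration of the "use the public nn.Module instead of the raw C call" idea floated above, module-level NATTEN usage looks roughly like the sketch below; the constructor keyword names are recalled from NATTEN's README and should be treated as assumptions rather than a verified signature:

```python
import torch
from natten import NeighborhoodAttention2D

# dim / num_heads / kernel_size names are assumed from the NATTEN README.
na2d = NeighborhoodAttention2D(dim=64, num_heads=4, kernel_size=7)

x = torch.randn(2, 28, 28, 64)  # NATTEN's 2D modules expect channels-last (B, H, W, C)
out = na2d(x)
print(out.shape)  # expected: torch.Size([2, 28, 28, 64])
```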
transformers
22,228
closed
Fix state dict loading via symlink on windows
# What does this PR do? I ran into an issue trying to run this on Windows 10 (via Git Bash, in a python 3.9.12 Conda environment, deps installed via pip). My requirements.txt included below for completeness. I tried running an example of SD 2 from the docs ``` from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler import torch repo_id = "stabilityai/stable-diffusion-2-base" pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "High quality photo of an astronaut riding a horse in space" image = pipe(prompt, num_inference_steps=25).images[0] image.save("astronaut.png") ``` And kept getting output like this: ``` schmavery ~/git/sd-test $ python test.py A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton' Downloading pytorch_model.bin: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 681M/681M [00:16<00:00, 41.8MB/s] Fetching 12 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 12/12 [00:16<00:00, 1.38s/it] Traceback (most recent call last): File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 417, in load_state_dict return torch.load(checkpoint_file, map_location="cpu") File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 771, in load with _open_file_like(f, 'rb') as opened_file: File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 270, in _open_file_like return _open_file(name_or_buffer, mode) File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 251, in __init__ super(_open_file, self).__init__(open(name, mode)) FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\test.py", line 5, in <module> pipe = 
DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 2431, in from_pretrained state_dict = load_state_dict(resolved_archive_file) File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 420, in load_state_dict with open(checkpoint_file) as f: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' ``` I did some poking around and realized that `C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin` is a symlink to another file in `C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs`. Some searching online revealed some issues with python loading files via symlink in Windows, mostly due to Window's funny handling of symlinks. I tried adding a call to `os.path.realpath` to resolve the path before opening the file, and that solved the problem! I thought I'd post this here in case it helps anyone. requirements.txt: ``` accelerate==0.17.1 brotlipy==0.7.0 certifi @ file:///C:/b/abs_85o_6fm0se/croot/certifi_1671487778835/work/certifi cffi @ file:///C:/b/abs_49n3v2hyhr/croot/cffi_1670423218144/work charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work conda==23.1.0 conda-package-handling @ file:///C:/b/abs_fcga8w0uem/croot/conda-package-handling_1672865024290/work conda_package_streaming @ file:///C:/b/abs_0e5n5hdal3/croot/conda-package-streaming_1670508162902/work cryptography @ file:///C:/b/abs_8ecplyc3n2/croot/cryptography_1677533105000/work diffusers==0.14.0 filelock==3.10.0 huggingface-hub==0.13.2 idna @ file:///C:/b/abs_bdhbebrioa/croot/idna_1666125572046/work importlib-metadata==6.0.0 Jinja2==3.1.2 MarkupSafe==2.1.2 menuinst @ file:///C:/ci/menuinst_1631733438520/work mpmath==1.3.0 mypy-extensions==1.0.0 networkx==3.0 numpy==1.24.2 packaging==23.0 Pillow==9.4.0 pluggy @ file:///C:/ci/pluggy_1648024580010/work psutil==5.9.4 pycosat @ file:///C:/b/abs_4b1rrw8pn9/croot/pycosat_1666807711599/work pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work pyOpenSSL @ file:///C:/b/abs_552w85x1jz/croot/pyopenssl_1677607703691/work pyre-extensions==0.0.23 PySocks @ file:///C:/ci/pysocks_1605307512533/work pywin32==305.1 PyYAML==6.0 regex==2022.10.31 requests @ file:///C:/ci/requests_1657735342357/work ruamel.yaml @ file:///C:/b/abs_30ee5qbthd/croot/ruamel.yaml_1666304562000/work ruamel.yaml.clib @ file:///C:/b/abs_aarblxbilo/croot/ruamel.yaml.clib_1666302270884/work sympy==1.11.1 tokenizers==0.13.2 toolz @ file:///C:/b/abs_cfvk6rc40d/croot/toolz_1667464080130/work torch==1.13.1+cu117 torchaudio==0.13.1+cu117 torchvision==0.14.1+cu117 tqdm @ file:///C:/b/abs_0axbz66qik/croots/recipe/tqdm_1664392691071/work transformers==4.27.1 typing-inspect==0.8.0 typing_extensions==4.5.0 urllib3 @ 
file:///C:/b/abs_9bcwxczrvm/croot/urllib3_1673575521331/work win-inet-pton @ file:///C:/ci/win_inet_pton_1605306162074/work wincertstore==0.2 xformers==0.0.16 zipp==3.15.0 zstandard==0.19.0 ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ^^ This is such a small change that it shouldn't affect any docs/tests I think ## Who can review? Looks like @sgugger and @stas00 were the last to touch this area in the file, though it wasn't particularly recently. I wonder if some change was made in how the models are cached that could have caused this.. 🀷 My original local fix just changed the torch load to `torch.load(os.path.realpath(checkpoint_file_realpath), map_location="cpu")`, but this seems like it might catch a couple more cases. I considered just overriding the `checkpoint_file` variable to point to the realpath but I thought that might have made the error messages less clear.
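The workaround described above, as a self-contained sketch (a hypothetical wrapper, not the final upstream change; the discussion below ends up tackling the cache layout on the huggingface_hub side instead):

```python
import os

import torch


def load_state_dict_resolving_symlinks(checkpoint_file: str):
    # Resolve the cached symlink to the real blob path before torch.load,
    # which sidesteps Windows' trouble with following the link directly.
    resolved = os.path.realpath(checkpoint_file)
    return torch.load(resolved, map_location="cpu")
```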
03-17-2023 13:26:26
03-17-2023 13:26:26
cc @Wauplin looks like something that should be in huggingface_hub (it it's not already).<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Aie, this is a real problem I think. In `huggingface_hub` we return a path to the snapshots/ folder that is indeed a symlink to a file in the blobs/ folder. In the case of a `hf_hub_download`, I would be fine with doing a `os.path.realpath` before returning the path but that would still be an issue when doing `snapshot_download`. The point of having a `snapshots/` folder as we did is to provide the same file structure as in the repo for third-party libraries. But if Windows has a "funny way to handle symlinks" by not following them, I'm afraid `huggingface_hub` can't do anything about it except really changing the cache structure. What I'm wondering here is why is has not been discovered before. @Schmavery would it be possible that you first ran a script in developer mode/as admin that have cached files using symlinks and you are now re-running the script in "normal" mode which result in not being able to follow symlinks? (for the record, we already had some issues with symlinks on windows and [decided to duplicate files](https://github.com/huggingface/huggingface_hub/issues/1062#issuecomment-1256054899) for non-dev non-admin users)<|||||>cc @LysandreJik @julien-c about the cache-system design<|||||>@Wauplin thanks for the quick reply! I'm also curious why I'm the first to run into this, though at this point I'm used to things not working in Windows because of all the different ways things can be set up! I don't think I ran anything as admin. I'm happy to run whatever command you need to get more info about the setup, but from some basic `ls` it looks like the permissions/ownership is as I might have expected. ``` schmavery ~/git/sd-test $ ls -l ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/ total 4 drwxr-xr-x 1 schmavery 0 Mar 17 08:26 blobs drwxr-xr-x 1 schmavery 0 Mar 16 22:33 refs drwxr-xr-x 1 schmavery 0 Mar 16 22:33 snapshots schmavery ~/git/sd-test $ ls -l ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/snapshots/ total 4 drwxr-xr-x 1 schmavery 0 Mar 16 22:33 1cb61502fc8b634cdb04e7cd69e06051a728bedf schmavery ~/git/sd-test $ ls -lh ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs/ total 2.5G -rw-r--r-- 1 schmavery 160M Mar 16 22:33 11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 -rw-r--r-- 1 schmavery 607 Mar 16 22:33 14bcdff46ade71e94221b696cefbad2382223370 -rw-r--r-- 1 schmavery 1.7G Mar 16 22:35 34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 -rw-r--r-- 1 schmavery 1.1M Mar 16 22:33 469be27c5c010538f845f518c4f5e8574c78f7c8 -rw-r--r-- 1 schmavery 340 Mar 16 22:33 4a37db2129e08cb00670e652398a8f3960d97d0e -rw-r--r-- 1 schmavery 513K Mar 16 22:33 76e821f1b6f0a9709293c3b6b51ed90980b3166b -rw-r--r-- 1 schmavery 905 Mar 16 22:33 9e3e87514708d0a2b44abfa0096ec14802862f5d -rw-r--r-- 1 schmavery 511 Mar 16 22:33 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d -rw-r--r-- 1 schmavery 629 Mar 16 22:33 a08e9e082e6ab9044bdd2926092ce2e4f33d2272 -rw-r--r-- 1 schmavery 460 Mar 16 22:33 ae0c5be6f35217e51c4c000fd325d8de0294e99c -rw-r--r-- 1 schmavery 820 Mar 16 22:33 e966b0b8955e8c66a0717acb2ce5041274d7c60a -rw-r--r-- 1 schmavery 650M Mar 17 08:26 f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 ```<|||||>Hi @Schmavery, thanks for reporting this. I sorry that this bug has being introduced recently. 
It seems that Windows has issues following absolute symlinks in some cases. It has been reported in https://github.com/huggingface/huggingface_hub/issues/1398, https://github.com/huggingface/diffusers/issues/2729 and https://github.com/huggingface/transformers/pull/22228 (and mentioned in https://github.com/huggingface/huggingface_hub/issues/1396). I'll provide a quick ASAP.<|||||>@Schmavery could you please retry using [`huggingface_hub==0.13.3`](https://github.com/huggingface/huggingface_hub/releases/tag/v0.13.3)? It should fix your problem. Before that you need to delete your folder `"~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/snapshots/"` to delete the existing (non-working) symlinks. If the issue persists, please let me know.<|||||>@Wauplin I just tried your new version and something still doesn't seem to be working, though it seems like it's something else now. The relative symlink is being created, but the blob that it is supposed to be pointing to is missing from the blobs folder. More specifically, I get this error: ``` OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' ``` And then looking around on disk I see this: ``` schmavery ~/git/sd-test $ ls -lh C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin lrwxrwxrwx 1 schmavery 79 Mar 20 10:05 'C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin' -> ../../../blobs/f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 schmavery ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs $ ls 11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 469be27c5c010538f845f518c4f5e8574c78f7c8 9e3e87514708d0a2b44abfa0096ec14802862f5d ae0c5be6f35217e51c4c000fd325d8de0294e99c 14bcdff46ade71e94221b696cefbad2382223370 4a37db2129e08cb00670e652398a8f3960d97d0e 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d e966b0b8955e8c66a0717acb2ce5041274d7c60a 34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 76e821f1b6f0a9709293c3b6b51ed90980b3166b a08e9e082e6ab9044bdd2926092ce2e4f33d2272 ``` It seems the blob starting with `f2a06cf32c` is nowhere to be found. If you think this is an unrelated problem, I'm happy to open another issue (on the huggingface_hub repo, I'd imagine)<|||||>Hi @Schmavery, maybe let's continue here for now. Could you delete entirely the `~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base` folder and try again? I tested your script in a colab notebook using the latest version and it worked for me: https://colab.research.google.com/drive/1xYy-3Q5hXptZ4TKef8kP7EeeSYiUISpa?usp=sharing<|||||>@Wauplin with huggingface-hub==0.13.3 installed, I deleted the whole ~/.cache/huggingface folder and ran the script in the initial post and got this as the full output: ``` schmavery ~/git/sd-test $ python repro.py A matching Triton is not available, some optimizations will not be enabled. 
Error caught was: No module named 'triton' [progress bars for the 12 pipeline file downloads trimmed; every download reached 100%] Fetching 12 files: 100%| 12/12 [00:44<00:00, 3.73s/it] Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict return torch.load(checkpoint_file, map_location="cpu") File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 771, in load with _open_file_like(f, 'rb') as opened_file: File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 270, in _open_file_like return _open_file(name_or_buffer, mode) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 251, in __init__ super(_open_file, self).__init__(open(name, mode)) OSError: [Errno 22] Invalid argument:
'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\repro.py", line 5, in <module> pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 2429, in from_pretrained state_dict = load_state_dict(resolved_archive_file) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 418, in load_state_dict with open(checkpoint_file) as f: OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' ``` The snapshot file still points to a blob starting with `f2a06cf32cf585d03b` which doesn't exist in the blobs folder.<|||||>@Schmavery sorry that you are experiencing this. I'm making more tests on Windows on my side. Could you tell if you enabled developer mode on your laptop? And can you run `huggingface-cli env` and copy-paste this output here please? Just in case it gives me some hint on what is happening.<|||||>@Wauplin No problem, thanks for the help! The crazy thing is that this seemed to all be working last week (when using my realpath patch), but when I ran it this morning after the weekend, I had this issue, even after a clean reinstall of all the packages. I thought maybe there could have been some problematic update to the model itself but if it's running fine for you then I guess that's not it. Looks like developer mode is turned on ![image](https://user-images.githubusercontent.com/2154522/226381945-0a9a0e55-4143-4b30-932b-ec92607c3fb7.png) Here's the output: ``` schmavery ~/git/sd-test $ huggingface-cli env Copy-and-paste the text below in your GitHub issue. - huggingface_hub version: 0.13.3 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.12 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: C:\Users\schmavery\.cache\huggingface\token - Has saved token ?: False - Configured git credential helpers: manager-core - FastAI: N/A - Tensorflow: N/A - Torch: 1.13.1+cu117 - Jinja2: N/A - Graphviz: N/A - Pydot: N/A - Pillow: 9.4.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: C:\Users\schmavery\.cache\huggingface\hub - HUGGINGFACE_ASSETS_CACHE: C:\Users\schmavery\.cache\huggingface\assets - HF_TOKEN_PATH: C:\Users\schmavery\.cache\huggingface\token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ```<|||||>Thanks for the information @Schmavery . Unfortunately I'm still not able to reproduce your issue. It's good that you have developer mode activated btw (otherwise you wouldn't have symlinks at all and files would be duplicated in the cache). Can we try something else?: 1. 
Delete the `'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\'` folder (or `'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base'` if you want to keep other ones) 2. Install `huggingface_hub==0.12.1`. We had some issues with the 0.13 release and I'd like to be sure if the bug you are facing existed before or not. 3. Rerun the script with debug logging enabled i.e. ```py # Add those 2 lines at the beginning of your script: from huggingface_hub.utils.logging import set_verbosity_debug set_verbosity_debug() # Same script as before from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler import torch repo_id = "stabilityai/stable-diffusion-2-base" pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "High quality photo of an astronaut riding a horse in space" image = pipe(prompt, num_inference_steps=25).images[0] image.save("astronaut.png") ```<|||||>@Wauplin FWIW I just tried it with `runwayml/stable-diffusion-v1-5` to see if a different model might work, but got a very similar problem: ``` schmavery ~/git/sd-test $ python repro.py A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton' Downloading (…)ain/model_index.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 543/543 [00:00<00:00, 136kB/s] Downloading (…)rocessor_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 342/342 [00:00<00:00, 85.5kB/s] Downloading (…)cheduler_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 308/308 [00:00<00:00, 68.8kB/s] Downloading (…)_checker/config.json: 
100%| [progress bars for the remaining config, tokenizer, and weight downloads trimmed; all 15 files of runwayml/stable-diffusion-v1-5 reached 100%] Fetching 15 files: 100%| 15/15 [01:36<00:00, 6.46s/it] Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 101, in load_state_dict return torch.load(checkpoint_file, map_location="cpu") File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 771, in load with _open_file_like(f, 'rb') as opened_file: File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 270, in _open_file_like return _open_file(name_or_buffer, mode) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 251, in __init__ super(_open_file, self).__init__(open(name, mode)) OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--runwayml--stable-diffusion-v1-5\\snapshots\\39593d5650112b4cc580433f6b0435385882d819\\vae\\diffusion_pytorch_model.bin' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\repro.py", line 4, in
<module> pipe = StableDiffusionPipeline.from_pretrained( File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 563, in from_pretrained state_dict = load_state_dict(model_file, variant=variant) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 106, in load_state_dict with open(checkpoint_file) as f: OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--runwayml--stable-diffusion-v1-5\\snapshots\\39593d5650112b4cc580433f6b0435385882d819\\vae\\diffusion_pytorch_model.bin' (venv) (base) 11:54:56 schmavery@DESKTOP-ML11APV:~/git/sd-test $ ls -lh C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--runwayml--stable-diffusion-v1-5\\snapshots\\39593d5650112b4cc580433f6b0435385882d819\\vae\\diffusion_pytorch_model.bin lrwxrwxrwx 1 schmavery 79 Mar 20 11:53 'C:\Users\schmavery\.cache\huggingface\hub\models--runwayml--stable-diffusion-v1-5\snapshots\39593d5650112b4cc580433f6b0435385882d819\vae\diffusion_pytorch_model.bin' -> ../../../blobs/1b134cded8eb78b184aefb8805b6b572f36fa77b255c483665dda931fa0130c5 schmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/ models--runwayml--stable-diffusion-v1-5/ models--stabilityai--stable-diffusion-2-base/ version.txt version_diffusers_cache.txt schmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/blobs/ 193490b58ef62739077262e833bf091c66c29488058681ac25cf7df3d8190974 4d3e873ab5086ad989f407abd50fdce66db8d657 5dbd88952e7e521aa665e5052e6db7def3641d03 82d05b0e688d7ea94675678646c427907419346e 1a02ee8abc93e840ffbcb2d68b66ccbcb74b3ab3 5294955ff7801083f720b34b55d0f1f51313c5c5 6866dceb3a870b077eb970ecf702ce4e1a83b934 c7da0e21ba7ea50637bee26e81c220844defdf01aafca02b2c42ecdadb813de4 2c2130b544c0c5a72d5d00da071ba130a9800fb2 55d78924fee13e4220f24320127c5f16284e13b9 76e821f1b6f0a9709293c3b6b51ed90980b3166b 469be27c5c010538f845f518c4f5e8574c78f7c8 5ba7bf706515bc60487ad0e1816b4929b82542d6 770a47a9ffdcfda0b05506a7888ed714d06131d60267e6cf52765d61cf59fd67 ``` I wonder if it's possible that the hash used in the symlink could be wrong under some circumstances.<|||||>@Schmavery not sure you saw it but could you try my suggestion from https://github.com/huggingface/transformers/pull/22228#issuecomment-1476499630? Thanks in advance <|||||>Oops, missed your message, running that now. I assume you meant `huggingface_hub==0.12.1` rather than `huggingface==0.12.1` but lmk if that's wrong (the latter gave me an error when trying to pip install)<|||||>Ah, yes of course. `huggingface_hub==0.12.1` is the one I meant<|||||>@Wauplin ``` schmavery ~/git/sd-test $ rm -rf ~/.cache/huggingface/hub/ schmavery ~/git/sd-test $ huggingface-cli env Copy-and-paste the text below in your GitHub issue. 
- huggingface_hub version: 0.12.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.12 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: C:\Users\schmavery\.cache\huggingface\token - Has saved token ?: False - Configured git credential helpers: manager-core - FastAI: N/A - Tensorflow: N/A - Torch: 1.13.1+cu117 - Jinja2: N/A - Graphviz: N/A - Pydot: N/A - Pillow: 9.4.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: C:\Users\schmavery\.cache\huggingface\hub - HUGGINGFACE_ASSETS_CACHE: C:\Users\schmavery\.cache\huggingface\assets - HF_HUB_OFFLINE: False - HF_TOKEN_PATH: C:\Users\schmavery\.cache\huggingface\token - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False schmavery ~/git/sd-test $ python repro.py A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton' downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/model_index.json to C:\Users\schmavery\.cache\huggingface\hub\tmppiuhc6qi Downloading (…)p16/model_index.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 511/511 [00:00<00:00, 170kB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/model_index.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\model_index.json Fetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/scheduler/scheduler_config.json to C:\Users\schmavery\.cache\huggingface\hub\tmp376tmhv9 downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/special_tokens_map.json to C:\Users\schmavery\.cache\huggingface\hub\tmp9onvvyfj downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/config.json to C:\Users\schmavery\.cache\huggingface\hub\tmp88p6fmgk downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin to C:\Users\schmavery\.cache\huggingface\hub\tmpbiq7cjj2 downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/vocab.json to C:\Users\schmavery\.cache\huggingface\hub\tmp7skqyuqq downloading 
https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/config.json to C:\Users\schmavery\.cache\huggingface\hub\tmpvrthjk23 downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/merges.txt to C:\Users\schmavery\.cache\huggingface\hub\tmpyi6kdwbo downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/tokenizer_config.json to C:\Users\schmavery\.cache\huggingface\hub\tmpy9q44g25 Downloading (…)cial_tokens_map.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 460/460 [00:00<00:00, 115kB/s] Downloading (…)"pytorch_model.bin";: 0%| | 0.00/681M [00:00<?, ?B/s] Downloading (…)cial_tokens_map.json: 0%| | 0.00/460 [00:00<?, ?B/s] Downloading (…)cheduler_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 340/340 [00:00<00:00, 68.0kB/s]bDownloading (…)_encoder/config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 629/629 [00:00<00:00, 210kB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/scheduler/scheduler_config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\4a37db2129e08cb00670e652398a8f3960d97d0eson: 0%| | 0.00/629 [00:00<?, ?B/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\a08e9e082e6ab9044bdd2926092ce2e4f33d2272 creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\ae0c5be6f35217e51c4c000fd325d8de0294e99c from 
C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\special_tokens_map.json creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\4a37db2129e08cb00670e652398a8f3960d97d0e from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\scheduler\scheduler_config.json creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\a08e9e082e6ab9044bdd2926092ce2e4f33d2272 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\config.json Downloading (…)edf/unet/config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 905/905 [00:00<00:00, 226kB/s] Downloading (…)edf/unet/config.json: 0%| | 0.00/905 [00:00<?, ?B/s] Downloading (…)okenizer_config.json: 0%| | 0.00/820 [00:00<?, ?B/s]sDownloading (…)okenizer_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 820/820 [00:00<00:00, 164kB/s]0storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/tokenizer_config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\e966b0b8955e8c66a0717acb2ce5041274d7c60a creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\9e3e87514708d0a2b44abfa0096ec14802862f5d from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\unet\config.json creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\e966b0b8955e8c66a0717acb2ce5041274d7c60a from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\tokenizer_config.json downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/diffusion_pytorch_model.bin to C:\Users\schmavery\.cache\huggingface\hub\tmpzk2qle5p downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/diffusion_pytorch_model.bin 
to C:\Users\schmavery\.cache\huggingface\hub\tmpj5ly573o | 0.00/525k [00:00<?, ?B/s] downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/config.json to C:\Users\schmavery\.cache\huggingface\hub\tmp43prkgdv | 0.00/1.06M [00:00<?, ?B/s] Downloading (…)tokenizer/merges.txt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 525k/525k [00:00<00:00, 4.64MB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/merges.txt in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\76e821f1b6f0a9709293c3b6b51ed90980b3166bzer/merges.txt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 525k/525k [00:00<00:00, 4.69MB/s] creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\76e821f1b6f0a9709293c3b6b51ed90980b3166b from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\merges.txt Downloading (…)tokenizer/vocab.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.06M/1.06M [00:00<00:00, 6.97MB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/vocab.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\469be27c5c010538f845f518c4f5e8574c78f7c8 creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\469be27c5c010538f845f518c4f5e8574c78f7c8 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\tokenizer\vocab.json Downloading (…)bedf/vae/config.json: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 607/607 [00:00<00:00, 152kB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/config.json in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\14bcdff46ade71e94221b696cefbad2382223370edf/vae/config.json: 0%| | 0.00/607 [00:00<?, ?B/s] creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\14bcdff46ade71e94221b696cefbad2382223370 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\vae\config.json Downloading (…)"pytorch_model.bin";: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 681M/681M [00:07<00:00, 92.9MB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 | 73.4M/1.73G [00:06<02:07, 13.0MB/s] creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin Downloading (…)_pytorch_model.bin";: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 167M/167M [00:10<00:00, 15.8MB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/diffusion_pytorch_model.bin in cache at 
C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 189M/1.73G [00:10<00:44, 34.3MB/s] creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\vae\diffusion_pytorch_model.bin Downloading (…)_pytorch_model.bin";: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.73G/1.73G [00:51<00:00, 33.4MB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/diffusion_pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š| 1.73G/1.73G [00:51<00:00, 52.2MB/s] creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\unet\diffusion_pytorch_model.bin Fetching 12 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 12/12 [00:52<00:00, 4.38s/it] Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict return torch.load(checkpoint_file, map_location="cpu") File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 771, in load with _open_file_like(f, 'rb') as opened_file: File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 270, in _open_file_like 
return _open_file(name_or_buffer, mode) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\torch\serialization.py", line 251, in __init__ super(_open_file, self).__init__(open(name, mode)) OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\repro.py", line 24, in <module> pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 2429, in from_pretrained state_dict = load_state_dict(resolved_archive_file) File "C:\Users\schmavery\git\sd-test\venv\lib\site-packages\transformers\modeling_utils.py", line 418, in load_state_dict with open(checkpoint_file) as f: OSError: [Errno 22] Invalid argument: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' schmavery ~/git/sd-test $ ls -lh C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin lrwxrwxrwx 1 schmavery 79 Mar 20 12:14 'C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin' -> ../../../blobs/f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 schmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs/ 11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 469be27c5c010538f845f518c4f5e8574c78f7c8 9e3e87514708d0a2b44abfa0096ec14802862f5d ae0c5be6f35217e51c4c000fd325d8de0294e99c 14bcdff46ade71e94221b696cefbad2382223370 4a37db2129e08cb00670e652398a8f3960d97d0e 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d e966b0b8955e8c66a0717acb2ce5041274d7c60a 34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 76e821f1b6f0a9709293c3b6b51ed90980b3166b a08e9e082e6ab9044bdd2926092ce2e4f33d2272 ```<|||||>@Wauplin Ok, doing some more investigation. When watching my filesystem during the install, I see the offending f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 in the blobs folder after it gets to the ``` storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 ``` But then at some point it gets deleted/disappears. Any idea what might be triggering that? 
At this point I'm just running ```python from huggingface_hub.utils.logging import set_verbosity_debug set_verbosity_debug() from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler import torch repo_id = "stabilityai/stable-diffusion-2-base" pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") ```<|||||>@Schmavery thanks for trying the commands. The fact that it doesn't work on huggingface v0.12.1 makes me think that it's an issue specific to your setup, not something that was introduced recently. It's doesn't mean we should not find the root cause. Maybe let's try to keep the test as minimal as possible: ```py # tested with huggingface_hub==0.12.1 from huggingface_hub.utils.logging import set_verbosity_debug from huggingface_hub import hf_hub_download from huggingface_hub.constants import HUGGINGFACE_HUB_CACHE from pathlib import Path import shutil print("Deleting", HUGGINGFACE_HUB_CACHE) shutil.rmtree(HUGGINGFACE_HUB_CACHE) set_verbosity_debug() path = Path(hf_hub_download(repo_id="stabilityai/stable-diffusion-2-base", filename="text_encoder/pytorch_model.bin", revision="fp16")) print("hf_hub_download", path) print("is_file", path.is_file()) print("is_symlink", path.is_symlink()) print("resolved", path.resolve()) print("resolved size", path.resolve().stat().st_size) ``` should output ``` Deleting C:\Users\Administrator\.cache\huggingface\hub downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin to C:\Users\Administrator\.cache\huggingface\hub\tmp9rxs8yls Downloading (…)"pytorch_model.bin";: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 681M/681M [00:05<00:00, 115MB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin in cache at C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 creating pointer to C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin hf_hub_download C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin is_file True is_symlink True resolved C:\Users\Administrator\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 resolved size 680904225 ``` Can you confirm that or is the blob file missing already ? <|||||>@Wauplin ok I can confirm that much works! 
``` schmavery ~/git/sd-test $ python repro2.py Deleting C:\Users\schmavery\.cache\huggingface\hub downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin to C:\Users\schmavery\.cache\huggingface\hub\tmp8kmmcj6w Downloading (…)"pytorch_model.bin";: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 681M/681M [00:05<00:00, 114MB/s] storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin in cache at C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 creating pointer to C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin hf_hub_download C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin is_file True is_symlink True resolved C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 resolved size 680904225 ```<|||||>Ok, that's already a good news. Could you try to load this path from pytorch? (adding the following line to [the previous script](https://github.com/huggingface/transformers/pull/22228#issuecomment-1476649854)). ```py import torch # try to load from symlink directly state_dict = torch.load(path) # or try to load from resolved symlink state_dict = torch.load(path.resolve()) ``` and if that doesn't work, at least try to read the binary file: ```py with open(path, "rb") as f: print("content length", len(f.read()), "(read from file)") # or with open(path.resolve(), "rb") as f: print("content length", len(f.read()), "(read from file)") ```<|||||>Ok. I think I've finally figured out what's going on. @Wauplin Thank you so much for your help in debugging ![image](https://user-images.githubusercontent.com/2154522/226428085-e0203aef-b868-4009-8deb-dc83ed1c1677.png) It looks like somehow the model is triggering some trojan detector in Windows Defender. Looks like a couple other people have run into the issue too: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/8584 Probably just a false positive but I might try and figure out how to use the safetensor version of the `stabilityai/stable-diffusion-2-base` just in case. <|||||>Thanks again for all the help -- seems like this PR is probably not needed now that huggingface_hub is using relative symlinks.<|||||>@Schmavery Very glad that you finally figured out what's going on! Hope this will help other users switching to safetensors as well :+1: :)
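For anyone else hitting the same antivirus false positive, here is a minimal sketch of the safetensors-based workaround mentioned above — it assumes a `diffusers` version that exposes the `use_safetensors` flag and a checkpoint that actually ships `.safetensors` weights (neither was spelled out in the thread):

```python
import torch
from diffusers import DiffusionPipeline

# Prefer safetensors weights over pickled .bin files; some antivirus scanners
# (e.g. Windows Defender) can quarantine the .bin blobs as a false positive,
# which leaves a dangling symlink in the Hub cache.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",  # example repo from the thread
    torch_dtype=torch.float16,
    use_safetensors=True,  # assumption: your diffusers version supports this flag
)
pipe = pipe.to("cuda")
```

Safetensors files store raw tensors without pickled Python objects, which makes them a reasonable default here regardless of this particular false positive.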
transformers
22,227
closed
Use `dash==2.8.1` for now for daily CI
# What does this PR do?

Use `dash==2.8.1` for now for daily CI. Currently all daily CI jobs fail; see for example [this job run](https://github.com/huggingface/transformers/actions/runs/4443525103/jobs/7800913606). The upstream issue is reported [here](https://github.com/plotly/dash/issues/2460).
03-17-2023 10:14:05
03-17-2023 10:14:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,226
closed
fix(docs): fix task guide links in model docs
# What does this PR do?

Fixes broken links for task guides in model docs.

Fixes # (issue): see above

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
@MKhalusova
03-17-2023 10:02:05
03-17-2023 10:02:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,225
closed
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 8192, 1]], which is output 0 of AsStridedBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later.
### System Info tranformer version: 4.2.0 Huggingface hub:0.13.2 Python: Python 3.9.16 ![image](https://user-images.githubusercontent.com/90728105/225866607-921911db-b0b2-4a4e-8278-6a8326c36241.png) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I do trainer.train() to fine-tune pertained long former for text summarization, I get the following error: > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py:197: UserWarning: Error detected in BmmBackward0. Traceback of forward call that caused the error: > File "/usr/local/lib/python3.9/dist-packages/torch/autograd/function.py", line 267, in apply > return user_fn(self, *args) > File "/usr/local/lib/python3.9/dist-packages/torch/utils/checkpoint.py", line 141, in backward > outputs = ctx.run_function(*detached_inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 1701, in custom_forward > return module(*inputs, is_global_attn, output_attentions) > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 873, in forward > attn_outputs = self.self_attn( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 695, in forward > self_outputs = self.longformer_self_attn( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 268, in forward > attn_output = self._compute_attn_output_with_global_indices( > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 578, in _compute_attn_output_with_global_indices > attn_output_only_global = torch.matmul( > File "/usr/local/lib/python3.9/dist-packages/torch/fx/traceback.py", line 57, in format_stack > return traceback.format_stack() > (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:114.) > Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py:197: UserWarning: > > Previous calculation was induced by CheckpointFunctionBackward. 
Traceback of forward call that induced the previous calculation: > File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/usr/lib/python3.9/runpy.py", line 87, in _run_code > exec(code, run_globals) > File "/usr/local/lib/python3.9/dist-packages/ipykernel_launcher.py", line 16, in <module> > app.launch_new_instance() > File "/usr/local/lib/python3.9/dist-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelapp.py", line 612, in start > self.io_loop.start() > File "/usr/local/lib/python3.9/dist-packages/tornado/platform/asyncio.py", line 215, in start > self.asyncio_loop.run_forever() > File "/usr/lib/python3.9/asyncio/base_events.py", line 601, in run_forever > self._run_once() > File "/usr/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once > handle._run() > File "/usr/lib/python3.9/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 687, in <lambda> > lambda f: self._run_callback(functools.partial(callback, future)) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 740, in _run_callback > ret = callback() > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 821, in inner > self.ctx_run(self.run) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 782, in run > yielded = self.gen.send(value) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 365, in process_one > yield gen.maybe_future(dispatch(*args)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell > yield gen.maybe_future(handler(stream, idents, msg)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 543, in execute_request > self.do_execute( > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute > res = shell.run_cell(code, store_history=store_history, silent=silent) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell > return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2854, in run_cell > result = self._run_cell( > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell > return runner(coro) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner > coro.send(None) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3057, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes > if (await self.run_code(code, result, async_=asy)): > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File 
"<ipython-input-120-3435b262f1ae>", line 1, in <module> > trainer.train() > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 888, in train > tr_loss += self.training_step(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1248, in training_step > loss = self.compute_loss(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1277, in compute_loss > outputs = model(**inputs) > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2190, in forward > outputs = self.led( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2044, in forward > encoder_outputs = self.encoder( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 1705, in forward > layer_outputs = torch.utils.checkpoint.checkpoint( > File "/usr/local/lib/python3.9/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint > return CheckpointFunction.apply(function, preserve, *args) > File "/usr/local/lib/python3.9/dist-packages/torch/fx/traceback.py", line 57, in format_stack > return traceback.format_stack() > (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:121.) > Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py:197: UserWarning: Error detected in CheckpointFunctionBackward. 
Traceback of forward call that caused the error: > File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/usr/lib/python3.9/runpy.py", line 87, in _run_code > exec(code, run_globals) > File "/usr/local/lib/python3.9/dist-packages/ipykernel_launcher.py", line 16, in <module> > app.launch_new_instance() > File "/usr/local/lib/python3.9/dist-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelapp.py", line 612, in start > self.io_loop.start() > File "/usr/local/lib/python3.9/dist-packages/tornado/platform/asyncio.py", line 215, in start > self.asyncio_loop.run_forever() > File "/usr/lib/python3.9/asyncio/base_events.py", line 601, in run_forever > self._run_once() > File "/usr/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once > handle._run() > File "/usr/lib/python3.9/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 687, in <lambda> > lambda f: self._run_callback(functools.partial(callback, future)) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 740, in _run_callback > ret = callback() > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 821, in inner > self.ctx_run(self.run) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 782, in run > yielded = self.gen.send(value) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 365, in process_one > yield gen.maybe_future(dispatch(*args)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell > yield gen.maybe_future(handler(stream, idents, msg)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 543, in execute_request > self.do_execute( > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute > res = shell.run_cell(code, store_history=store_history, silent=silent) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell > return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2854, in run_cell > result = self._run_cell( > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell > return runner(coro) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner > coro.send(None) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3057, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes > if (await self.run_code(code, result, async_=asy)): > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File 
"<ipython-input-120-3435b262f1ae>", line 1, in <module> > trainer.train() > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 888, in train > tr_loss += self.training_step(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1248, in training_step > loss = self.compute_loss(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1277, in compute_loss > outputs = model(**inputs) > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2190, in forward > outputs = self.led( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2044, in forward > encoder_outputs = self.encoder( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 1705, in forward > layer_outputs = torch.utils.checkpoint.checkpoint( > File "/usr/local/lib/python3.9/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint > return CheckpointFunction.apply(function, preserve, *args) > File "/usr/local/lib/python3.9/dist-packages/torch/fx/traceback.py", line 57, in format_stack > return traceback.format_stack() > (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:114.) > Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > --------------------------------------------------------------------------- > RuntimeError Traceback (most recent call last) > <ipython-input-120-3435b262f1ae> in <module> > ----> 1 trainer.train() > > 6 frames > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) > 195 # some Python versions print out the first line of a multi-line function > 196 # calls in the traceback and some print out the last line > --> 197 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > 198 tensors, grad_tensors_, retain_graph, create_graph, inputs, > 199 allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass > > RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 8192, 1]], which is output 0 of AsStridedBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! code : https://github.com/Tanya-11/experiment/blob/main/lonformer_experiment.ipynb ### Expected behavior trainer.train() should run
03-17-2023 09:34:17
03-17-2023 09:34:17
Hi @Tanya-11, thanks for raising this issue! It's not possible to access the code in the link shared as it is private. Could you share a minimal code snippet to reproduce the error? <|||||>> Hi @Tanya-11, thanks for raising this issue! > > It's not possible to access the code in the link shared as it is private. Could you share a minimal code snippet to reproduce the error? Hi @amyeroberts Pls find the link to my public[ github repo](https://github.com/Tanya-11/experiment/blob/main/lonformer_experiment.ipynb). Thanks! <|||||>Hi @Tanya-11, thanks for sharing the link. I am able to run the example code if I set `gradient_checkpointing=False`. There have been recent updates to the LED model, including this one which resolves an [issue with gradient checkpointing](https://github.com/huggingface/transformers/pull/21840). Can you retry with the most recent release of transformers? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
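Following up on the `gradient_checkpointing=False` suggestion above, here is a minimal sketch of what that looks like in a fine-tuning setup. The checkpoint name and argument values are illustrative only, and it assumes a recent `transformers` release where `Seq2SeqTrainingArguments` exposes the `gradient_checkpointing` flag and models provide `gradient_checkpointing_disable()`:

```python
from transformers import LEDForConditionalGeneration, Seq2SeqTrainingArguments

model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

# Make sure gradient checkpointing stays off on the model itself ...
model.gradient_checkpointing_disable()

# ... and in the training arguments passed to Seq2SeqTrainer.
training_args = Seq2SeqTrainingArguments(
    output_dir="led-summarization",
    per_device_train_batch_size=2,
    gradient_checkpointing=False,
    predict_with_generate=True,
)
```

Alternatively, upgrading `transformers` picks up the LED gradient-checkpointing fix linked above, after which checkpointing can be re-enabled to save memory.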
transformers
22,224
closed
Flax Whisper uses a lot of GPU memory
I'm using Flax whisper-medium and it's now ~3x faster than the PyTorch deployment, but it is allocating ~10x more GPU memory: loading the PyTorch model takes ~3GB, while loading Flax whisper-medium takes >30GB of VRAM. Is this huge memory allocation normal? And is there a built-in way to cut it down? @andyehrenberg @ArthurZucker @sanchit-gandhi

The code for loading the Flax model:
```
with torch.no_grad():
    model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True)
jit_generate = jax.jit(model.generate, static_argnames=["max_length", "language"])
```
03-17-2023 09:29:32
03-17-2023 09:29:32
Hey @hannan72! Could you try disabling [`_do_init`](https://github.com/huggingface/transformers/pulls?q=is%3Apr+_do_init+is%3Aclosed)? This way we won't initialise a random version of the parameters. Note that this isn't compatible with `from_pt=True`, so you'll have to load a checkpoint where the Flax weights have already been saved: ```python model, params = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, _do_init=False) jit_generate = jax.jit(model.generate, static_argnames=["max_length", "language"]) input_features = jnp.array(input_features, dtype=jnp.float16) pred_ids = jit_generate(input_features, params=params, max_length=128, language='<|en|>') # we need to explicitly pass the params now since we're in Flax's stateless design ``` If you need to load a model where you only have PyTorch weights, you can first convert them to Flax on CPU: ```python import jax # Global flag to set a specific platform, must be used at startup. ONLY DO THIS FOR SAVING WEIGHTS ON CPU! jax.config.update('jax_platform_name', 'cpu') model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True) model.save_pretrained("save/path/to/ckpt/here") ``` Kill this window, and then open up a new one and load: ```python model, params = FlaxWhisperForConditionalGeneration.from_pretrained("save/path/to/ckpt/here", dtype=jnp.float16, _do_init=False) ```<|||||>Thanks for your response @sanchit-gandhi I've tested your proposed approach, save flax model by converting to cpu and then restart kernel and load `FlaxWhisperForConditionalGeneration` by try disabling `_do_init`. But Inference time increased a lot while GPU memory utilization didn't decreased significantly. results when use `from_pt=True` for whisper-medium on a 10 second audio on A100-40GB GPU: - GPU memory usage: ~33.1GB - Inference time: ~0.22 seconds results when use `_do_init=False` for flax saved whisper-medium on a 10 second audio on A100-40GB GPU: - GPU memory usage: ~31.1GB - Inference time: ~16.5 seconds Now Inference time is 80x larger! <|||||>Some of the extra GPU memory can probably be attributed to how the flax generation implements the kv cache. 
Check what happens when you set max new tokens to be smaller.<|||||>Also, it doesn't make sense to run the flax stuff within a `torch.no_grad()` context.<|||||>I also found that whisper_small checkpoint is also taking ~33GB of GPU RAM!<|||||>> For my fine-tuned whisper-medium, if I don't run inside the `torch.no_grad()`, I get an error and it is just fixed by adding `torch.no_grad()`: ``` RuntimeError Traceback (most recent call last) /s2t-test/client_notebook/Untitled1.ipynb Cell 25 in <cell line: 3>() 1 jax.config.update('jax_platform_name', 'cpu') ----> 2 model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id , dtype=jnp.float16, from_pt=True) 3 model.save_pretrained(model_id+ "/flax/") File /opt/conda/lib/python3.8/site-packages/transformers/modeling_flax_utils.py:810, in FlaxPreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs) 807 model = cls(config, *model_args, _do_init=_do_init, **model_kwargs) 809 if from_pt: --> 810 state = load_pytorch_checkpoint_in_flax_state_dict(model, resolved_archive_file, is_sharded) 811 else: 812 if is_sharded: File /opt/conda/lib/python3.8/site-packages/transformers/modeling_flax_pytorch_utils.py:62, in load_pytorch_checkpoint_in_flax_state_dict(flax_model, pytorch_checkpoint_path, is_sharded, allow_missing_keys) 59 pt_state_dict = torch.load(pt_path, map_location="cpu") 60 logger.info(f"PyTorch checkpoint contains {sum(t.numel() for t in pt_state_dict.values()):,} parameters.") ---> 62 flax_state_dict = convert_pytorch_state_dict_to_flax(pt_state_dict, flax_model) 63 else: 64 # model is sharded and pytorch_checkpoint_path already contains the list of .pt shard files 65 flax_state_dict = convert_pytorch_sharded_state_dict_to_flax(pytorch_checkpoint_path, flax_model) File /opt/conda/lib/python3.8/site-packages/transformers/modeling_flax_pytorch_utils.py:128, in convert_pytorch_state_dict_to_flax(pt_state_dict, flax_model) 126 def convert_pytorch_state_dict_to_flax(pt_state_dict, flax_model): ... --> 128 pt_state_dict = {k: v.numpy() for k, v in pt_state_dict.items()} 130 model_prefix = flax_model.base_model_prefix 132 # use params dict if the model contains batch norm layers RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. ``` (However pretrained models does not need to be loaded inside `torch.no_grad()` ) Albeit the results I mentioned after @sanchit-gandhi 's answer, was test with and without `torch.no_grad()` and it didn't make any change.<|||||>> Now Inference time is 80x larger! There shouldn't be any difference to inference time - are you certain you're running on GPU here? Make sure you **have not** set: ``` jax.config.update('jax_platform_name', 'cpu') ```<|||||>> > Now Inference time is 80x larger! > > There shouldn't be any difference to inference time - are you certain you're running on GPU here? Make sure you **have not** set: > > ``` > jax.config.update('jax_platform_name', 'cpu') > ``` Yes, I kill the window after saving the flax model and afterwards I don't move weights to CPU anymore. But it is so slow. Have you tested this approach @sanchit-gandhi ? <|||||>> Have you tested this approach @sanchit-gandhi ? Extensively! See my results for A100 (PyTorch) vs pmap (TPU v4-8 + JAX): ![Screenshot 2023-04-03 at 11 54 17](https://user-images.githubusercontent.com/93869735/229489827-56b52e7c-fab3-4c37-a02e-c5c34132ace6.png) Could you perhaps share your code @hannan72? 
There shouldn't be any performance difference between using / not using `_do_init`.<|||||>It could also be that we're recompiling each time - would be great to see your code here @hannan72 to verify!<|||||>> It could also be that we're recompiling each time - would be great to see your code here @hannan72 to verify! This is my full code: Firstly, PyTorch model is loaded and converted to Flax an then saved: ``` import jax import jax.numpy as jnp import torch from transformers import FlaxWhisperForConditionalGeneration, WhisperForConditionalGeneration, WhisperProcessor pt_model_path = "/client_notebook/whisper_model_chkp" model_id = "/client_notebook/flax_whisper_model" jax.config.update('jax_platform_name', 'cpu') with torch.no_grad(): model = FlaxWhisperForConditionalGeneration.from_pretrained(pt_model_path, dtype=jnp.float16, from_pt=True) model.save_pretrained(model_id) ``` For deploying the Flax model, following code is used: ``` import jax import jax.numpy as jnp import torch import flax from scipy.io import wavfile import time from transformers import FlaxWhisperForConditionalGeneration, WhisperForConditionalGeneration, WhisperProcessor model_id = "/client_notebook/flax_whisper_model" processor = WhisperProcessor.from_pretrained(model_id) with torch.no_grad(): model, params = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, _do_init=False) jit_generate = jax.jit(model.generate, static_argnames=["max_length", "language", "task"]) audio_file_path = "sample_audio_5s.wav" samplerate, data_waveform = wavfile.read(audio_file_path) ata_waveform = (data_waveform)/32768.0 input_features = processor(data_waveform, padding="max_length", sampling_rate=16000, return_tensors="pt").input_features runtime=[] for i in range(5): start_time = time.time() input_features = jnp.array(input_features, dtype=jnp.float16) pred_ids = jit_generate(input_features, params=params, max_length=128, language='<|de|>', task ="transcribe") runtime.append(time.time() - start_time) print("Inference time:\n", runtime) ``` And the output is as follows: ``` Inference time: [70.23309993743896, 14.300963640213013, 12.430477142333984, 13.643242120742798, 12.125237703323364] ``` GPU memory utilization: 31,127 MB GPU Type: 1x A100-40GB model checkpoint: whisper_medium * Note: GPU memory utilization when the model is directly imported from pt model (By passing `from_pt=True`) is 31,587MB. It is just 460MB larger. But this value (460MB) is exactly the same GPU memory utilization when I put the model to cpu by running `jax.config.update('jax_platform_name', 'cpu')` during the saving of Flax model. @sanchit-gandhi <|||||>Hey @hannan72 - thanks for the super detailed report and attaching your code. This is indeed a very strange phenomenon that we're seeing with such high memory utilisation for the Flax model. Based on what you've said, I think all of this is coming from when we load the model, rather than from when we do the forward pass. I also ran a few tests on an A100, where I was comfortably able to fit a batch size of 16 on a 40GB device. If we're getting 31GB memory in loading, there's no way that's persistent for then the forward pass, otherwise a batch size of 16 wouldn't be possible. I wonder whether we can trick JAX into using the CPU for the heavy weight loading, and then move the weights onto the GPU for the forward pass? 
Something along the lines of: ```python import jax import jax.numpy as jnp from transformers import FlaxWhisperForConditionalGeneration, WhisperForConditionalGeneration, WhisperProcessor model_id = "/client_notebook/flax_whisper_model" processor = WhisperProcessor.from_pretrained(model_id) # load weights on CPU jax.config.update('jax_platform_name', 'cpu') model, params = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, _do_init=False) # now move weights to GPU jax.config.update('jax_platform_name', 'gpu') params = jax.device_put(params, 'gpu') jit_generate = jax.jit(model.generate, static_argnames=["max_length", "language", "task"]) ... ``` This could be a workaround, but not a fix to the high memory usage we're seeing during initialisation<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,223
closed
fix typos in llama.mdx
## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Documentation: @sgugger, @stevhliu and @MKhalusova
03-17-2023 07:27:14
03-17-2023 07:27:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,222
closed
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.
### System Info 4.27.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction i test llama in colab here is my code and output: !pip install git+https://github.com/huggingface/transformers !pip install sentencepiece import torch from transformers import pipeline,LlamaTokenizer,LlamaForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" print(device) # tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") # model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf") generator = pipeline(model="decapoda-research/llama-7b-hf", device=device) generator("I can't believe you did such a ") ValueError Traceback (most recent call last) [<ipython-input-3-c1d71e177e5a>](https://localhost:8080/#) in <module> 7 # tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") 8 # model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf") ----> 9 generator = pipeline(model="decapoda-research/llama-7b-hf", device=device) 10 generator("I can't believe you did such a ") 1 frames [/usr/local/lib/python3.9/dist-packages/transformers/models/auto/tokenization_auto.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 675 676 if tokenizer_class is None: --> 677 raise ValueError( 678 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported." 679 ) ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported. ### Expected behavior expect output generated info
03-17-2023 07:23:16
03-17-2023 07:23:16
I face the same issue<|||||>Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`. This is likely due to the configuration files being created before the final PR was merged in. <|||||>I cloned the repo and changed the tokenizer in the config file to LlamaTokenizer but I got ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported. <|||||>For anybody interested I was able to load an earlier saved model with the same issue using my [fork](https://github.com/mbehm/transformers) with the capitalization restored. That being said for future it's probably better to try find or save a new model with the new naming.<|||||>@yhifny Are you able to import the tokenizer directly using `from transformers import LlamaTokenizer `? If not, can you make sure that you are working from the development branch in your environment using: `pip install git+https://github.com/huggingface/transformers` more details [here](https://huggingface.co/docs/transformers/installation#install-from-source).<|||||>I can import the `LlamaTokenizer` class, but getting error that `from_pretrained` method is None. Anyone else having this issue?<|||||>As the error message probably mentions, you need to install sentencepiece: `pip install sentencepiece`.<|||||>Working now. I swear I had sentencepiece, but probably forgot to reset the runtime 🀦 My bad!<|||||>> For anybody interested I was able to load an earlier saved model with the same issue using my [fork](https://github.com/mbehm/transformers) with the capitalization restored. That being said for future it's probably better to try find or save a new model with the new naming. Thanks, man, your link solved all the problem<|||||>> Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`. > > This is likely due to the configuration files being created before the final PR was merged in. Change the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm.<|||||>> For anybody interested I was able to load an earlier saved model with the same issue using my [fork](https://github.com/mbehm/transformers) with the capitalization restored. That being said for future it's probably better to try find or save a new model with the new naming. Thank you so much for this! Works!<|||||>> > Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`. > > This is likely due to the configuration files being created before the final PR was merged in. > > Change the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm. I assume this is applied to the llama-7b cloned repo from HuggingFace right? How can I instantiate the model and the tokenizer after doing that please?<|||||>> you are a life saver. There docs on the site should be updated for this reference. 
<|||||>Thank you so much for this! Works! That's amazing!<|||||>You can try this for a ather crazy way to find out what is the right casing for the module: ```python import transformers from itertools import product import importlib def find_variable_case(s, max_tries=1000): var_permutations = list(map("".join, product(*zip(s.upper(), s.lower())))) # Intuitively, any camel casing should minimize the no. of upper chars. # From https://stackoverflow.com/a/58789587/610569 var_permutations.sort(key=lambda ss: (sum(map(str.isupper, ss)), len(ss))) for i, v in enumerate(var_permutations): if i > max_tries: return try: dir(transformers).index(v) return v except: continue v = find_variable_case('LLaMatokenizer') exec(f"from transformers import {v}") vars()[v] ``` [out]: ``` transformers.utils.dummy_sentencepiece_objects.LlamaTokenizer ```<|||||>I encountered the same issue identified at the thread today 4/2/2023. The post https://github.com/huggingface/transformers/issues/22222#issuecomment-1477171703 fixed the problem for me. Thank you.<|||||>Hi! I am facing the same problem. I try to import LlamaTokenizer, But:--------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[27], line 1 ----> 1 from transformers import LlamaTokenizer ImportError: cannot import name 'LlamaTokenizer' from 'transformers' (/usr/local/anaconda3/envs/abc/lib/python3.10/site-packages/transformers/__init__.py) and the version of transformers is "transformers 4.28.0.dev0 pypi_0 pypi" plz tell me how to fix it.<|||||>You need to install the library from source to be able to use the LLaMA model.<|||||>> You need to install the library from source to be able to use the LLaMA model. Thanks! Where can I get it? And how to install it? Actually I have already installed transformers 4.28.0.dev0, I'm not sure about what you mean.<|||||>You can open the documentation at the [install page](https://huggingface.co/docs/transformers/installation#install-from-source).<|||||>Great! I restart my server and it works! thank you !!!<|||||>Hi I installed from source git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . pip list show: transformers 4.29.0.dev0 D:\myfolder\transformers but I still have ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported. <|||||>+1 on @thibaudart comment, I have the same issue.<|||||>> Hi > > I installed from source > > git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . > > pip list show: > > transformers 4.29.0.dev0 D:\myfolder\transformers > > but I still have > > ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported. hey, try this rep `pip install git+https://github.com/mbehm/transformers`, maybe it can work<|||||>Will this problem be fixed by updating to newest version of transformers, or must we modify the config file manually each time?<|||||>You should just using that checkpoint. The maintainers of that repo have made it clear that they are not interested in being compatible with Transformers by ignoring the 62 PRs trying to fix their checkpoints. The huggyllama checkpoints are confirmed to work if you are looking for an alternative (but you should still request the weights to Meta following their official form). There are now 903 checkpoints for llama on the Hub and only the 4 from decapoda-research do not work since they created them before the PR for Llama was merged into Transformers. 
We won't break the code for the other 899 checkpoints.<|||||> if( "LLaMATokenizer" == tokenizer_class_candidate ): ## add these 2 line to solve it. tokenizer_class_candidate = 'LlamaTokenizer' tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)<|||||>@MasterLivens hi, i am currently using colab, which file should i add this code? <|||||>@zhiyixu The code being referred to should go into .../site-packages/transformers/models/auto/tokenization_auto.py However, what worked for me was updating my transformers and tokenizers package. tokenization_auto.py has a mapping of tokenizers at the beginning and I realized that llama wasn't included in the version I had.<|||||>> > Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`. > > This is likely due to the configuration files being created before the final PR was merged in. > > Change the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm. Can you please enlighten me on how this could be achieved please? I'm new to this <|||||>Hi, @nameless0704. First, I would like to thank you for the insightful comment of `changing the LLaMATokenizer in tokenizer_config.json into lowercase LlamaTokenizer `. I am fairly new in this area. May I ask how to Change the LLaMATokenizer in tokenizer_config.json into lowercase LlamaTokenizer? I could not figure it out and would like to seek your help. Any information is appreciated. Thank you very much in advance!<|||||>I share a experiment, Just replace your llama model to https://huggingface.co/elinas/llama-7b-hf-transformers-4.29 will solve the error like ImportError: cannot import name 'LLaMATokenizer' from 'transformers'<|||||>Example of how to use LLaMA AutoTokenizer ``` !pip install tokenizers==0.13.3 !pip install sentencepiece from transformers import AutoTokenizer, AutoModelForCausalLM # model_name = "openlm-research/open_llama_3b" # model_name = "openlm-research/open_llama_7b" model_name = "openlm-research/open_llama_13b" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ```<|||||>> @MasterLivens hi, i am currently using colab, which file should i add this code? the specified error file in the error message.<|||||>> > Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`. > > This is likely due to the configuration files being created before the final PR was merged in. > > Change the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm. Where is the tokenizer_config.json?<|||||>> > > Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`. > > > This is likely due to the configuration files being created before the final PR was merged in. 
> > > > > > Change the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm. > > Where is the tokenizer_config.json? I think this is the location: .cache/huggingface/hub/models--decapoda-research--llama-65b-hf/snapshots/47d2b93e8c0a3d5d6582bdec13f233ca0527499a/tokenizer_config.json<|||||>Please. Im facing the same issue. Can anyone help ? I tried all the above methods. <|||||>> Please. Im facing the same issue. Can anyone help ? I tried all the above methods. I had the same issue and it was solved by: pip uninstall transformers pip install transformers<|||||>in my code, transformer==4.30.0 can fix it<|||||>looked at `tokenization_auto.py` in the transformers package that was installed via `pip install git+https://github.com/huggingface/transformers` ``` ( "llama", ( "LlamaTokenizer" if is_sentencepiece_available() else None, "LlamaTokenizerFast" if is_tokenizers_available() else None, ), ), ``` I had to install `sentencepiece` to bypass the not found error, running into other errors though :)
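For readers who want to apply the casing fix discussed above without editing the file by hand, here is a minimal sketch; the checkpoint path is a placeholder that you would point at your own local clone or cached snapshot directory:

```python
import json
from pathlib import Path

# Placeholder path: point this at the tokenizer_config.json of your local
# llama checkpoint (a cloned repo or a cached snapshot directory).
config_path = Path("llama-7b-hf/tokenizer_config.json")

config = json.loads(config_path.read_text())
if config.get("tokenizer_class") == "LLaMATokenizer":
    config["tokenizer_class"] = "LlamaTokenizer"  # match the class name shipped in transformers
    config_path.write_text(json.dumps(config, indent=2))
    print("Patched tokenizer_class to LlamaTokenizer")
```

After patching, `AutoTokenizer.from_pretrained(...)` should resolve the tokenizer class normally.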
transformers
22,221
open
Export CLIP into two ONNX models: a text encoder and an image encoder
### Model description I want to export CLIP as two separate ONNX models, a text encoder and an image encoder, but it seems the export can only convert the whole model. How can I separate CLIP into two ONNX models? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
03-17-2023 07:07:50
03-17-2023 07:07:50
cc @michaelbenayoun <|||||>Hi @susht3 , You mean that you want to export a `CLIPTextModel` and `CLIPVisionModel`? We support the CLIP export in `optimum`: ```bash optimum-cli export onnx -m openai/clip-vit-base-patch32 --task default clip ``` But as I understand here, you want to export two models?<|||||>> Hi @susht3 , You mean that you want to export a `CLIPTextModel` and `CLIPVisionModel`? > > We support the CLIP export in `optimum`: > > ```shell > optimum-cli export onnx -m openai/clip-vit-base-patch32 --task default clip > ``` > > But as I understand here, you want to export two models? yes,i try to convert by transformer.onnx but failed, my code like this: model = CLIPModel.from_pretrained(model_path) processor = CLIPProcessor.from_pretrained(model_path) text = processor.tokenizer("[UNK]”, return_tensors="np") image = processor.feature_extractor(Image.open("CLIP.png")) text_model = model.text_model image_model = model.vision_model onnx_inputs, onnx_outputs = export( preprocessor=tokenizer, model=text_model, config=onnx_config, opset=10, output=onnx_model_path ) <|||||>You want what kind of inputs? Anyways, you should use `optimum.exporters.onnx` for this. You should be able to export the text model easily because we have a [`CLIPTextOnnxConfig`](https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py#LL620C49-L620C49). For the rest we have `CLIPOnnxConfig` as well.<|||||>> [`CLIPTextOnnxConfig`](https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py#LL620C49-L620C49). thanks,and which is clip visual onxx config? i can't find it<|||||>I think we do not have it, but you can make a PR and add it if you are interested!
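As a rough illustration of exporting only the text tower with plain `torch.onnx` (separate from the official `optimum` exporter mentioned above), a small wrapper that returns a single tensor keeps the trace simple; the input/output names and opset below are arbitrary choices, not an official recipe:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

checkpoint = "openai/clip-vit-base-patch32"
tokenizer = CLIPTokenizer.from_pretrained(checkpoint)
text_model = CLIPTextModel.from_pretrained(checkpoint).eval()


class TextEncoder(torch.nn.Module):
    """Thin wrapper so the traced graph returns a single tensor."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        return self.model(input_ids=input_ids, attention_mask=attention_mask).pooler_output


dummy = tokenizer("a photo of a cat", return_tensors="pt")
torch.onnx.export(
    TextEncoder(text_model),
    (dummy["input_ids"], dummy["attention_mask"]),
    "clip_text_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["text_embeds"],
    dynamic_axes={"input_ids": {0: "batch"}, "attention_mask": {0: "batch"}},
    opset_version=14,
)
```

The vision tower could be exported analogously with `CLIPVisionModel` and `pixel_values` inputs.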
transformers
22,220
open
Positional Encoding for the T5 family of models
### Feature request Please create a hook to allow users to modify the T5 family of models and change the default positional embeddings to custom positional embeddings. ### Motivation I am trying to build a T5 version that takes non-text input, and the traditional positional encodings are getting in the way; there is no way to switch them off, make them learnable parameters, etc. BART gives limited access to positional encodings, but the T5 family gives nearly zero access. ### Your contribution If I knew where the positional encodings were calculated and added to the input_ids, I could create this hook myself
03-17-2023 06:50:42
03-17-2023 06:50:42
Hi @SreehariSankar, thanks for raising this issue. Questions on designing custom hook for modifying the models are better placed in the [forum](https://discuss.huggingface.co/). All of the code for the model, including producing the embeddings are in the modeling files e.g. [this one for T5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py). Note: the T5 family of models do not use the same positional embedding logic as in the traditional transformer i.e. there isn't a fixed embedding for each position, but instead a relative position embedding.
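To make that last point concrete, here is a purely illustrative way to see where the (relative) position information actually lives in a T5 checkpoint; it only reads the module and does not modify anything:

```python
from transformers import T5Model

model = T5Model.from_pretrained("t5-small")

# T5 does not add absolute position embeddings to the inputs; instead the first
# self-attention layer of each stack owns a learned relative position bias.
rel_bias = model.encoder.block[0].layer[0].SelfAttention.relative_attention_bias
print(rel_bias)  # Embedding(num_buckets, num_heads)
```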
transformers
22,219
closed
fix code example in mgp-str doc
# What does this PR do? Fix code example in mgp-str doc. ## Before submitting - [√] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [√] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [√] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [√] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [√] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-17-2023 06:20:16
03-17-2023 06:20:16
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,218
closed
Hotfix for natten on CircleCI
# What does this PR do? Hotfix for natten on CircleCI. The PR CI in #22204 ran with `natten` version `0.14.4`, which ran successfully. However, when I merged that PR into `main`, natten 0.14.5 was released and caused some issues.
03-16-2023 22:30:03
03-16-2023 22:30:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Here is the update regarding this issue https://github.com/SHI-Labs/NATTEN/issues/23#issuecomment-1473865224
transformers
22,217
closed
Fix LLaMATokenizer naming
# What does this PR do? Simple fix for the naming of LLaMATokenizer class
03-16-2023 21:24:17
03-16-2023 21:24:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22217). All of your documentation changes will be reflected on that endpoint.<|||||>Ah ok, it was preventing me from loading a saved model because of the capitalization change so thought it was a mistake. In that case I'll be closing this, for anyone coming across the same issue ("Tokenizer class LLaMATokenizer does not exist or is not currently imported.") they can use my fork to load them for now.
transformers
22,216
closed
LLaMA house-keeping
# What does this PR do? This PR just groups a couple of nits I had on the LLaMA model PR, but didn't want to add there to merge the PR quickly. I have tested the conversion scripts on all four models and it works fine. cc @zphang for information.
03-16-2023 20:51:37
03-16-2023 20:51:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,215
closed
torch.compile() and FSDP/DDP wrappers are called in the wrong order.
### System Info transformers main branch ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When training/fine-tuning a model, activate torch.compile() and FSDP with `torch_compile=True` and `fsdp="full_shard auto_wrap"` as training arguments. The model is compiled before the FSDP wrapping, preventing optimizations on the backwards passes. According to the PyTorch docs, both DDP and FSDP wrappers have special optimizations that run with torch.compile() to ensure model training doesn't end up slower instead of faster (see [here](https://dev-discuss.pytorch.org/t/torchdynamo-update-11-making-fsdp-and-dynamo-work-together/1037)). ### Expected behavior Therefore, the model would need to be torch.compile()'d after being wrapped in either FSDP or DDP. Right now, in `src/transformers/trainer.py` that is not the case, with compile() being the first call in `_wrap_model()`. Before making a PR with the change, I figured I'd make this bug report to ensure nothing prevents that change from happening.
03-16-2023 20:25:43
03-16-2023 20:25:43
Note that we haven't tested `torch.compile` with any kind of distributed training yet, so it's normal if there are issues. If you have the fix, we'd be happy to look at a PR!<|||||>Ok! I'll make the PR then, just figured I'd ask before.
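A minimal, single-process sketch of the ordering being proposed (gloo backend on CPU so it runs anywhere; in the Trainer the same idea would apply to the FSDP wrapper):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group just so DDP can be constructed for the demo.
dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500", rank=0, world_size=1)

model = torch.nn.Linear(16, 16)
model = DDP(model)            # wrap first, so the DDP/FSDP-aware compiler optimizations apply
model = torch.compile(model)  # then compile the wrapped module

out = model(torch.randn(4, 16)).sum()
out.backward()
dist.destroy_process_group()
```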
transformers
22,214
closed
whisper return_timestamp error
### System Info - `transformers` version: 4.27.1 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <no> ### Who can help? @ArthurZucker @younesbelkada @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` device = "cuda:0" if torch.cuda.is_available() else "cpu" pipe = pipeline( "automatic-speech-recognition", model="openai/whisper-tiny", chunk_length_s=30, device=device, ) audio, _ = librosa.load(mypath+ filename, sr = 16000) prediction = pipe(np.array(audio),return_timestamps=True)['chunks'] ``` Below is the full error message. ``` IndexError Traceback (most recent call last) [<ipython-input-20-17a93ca487ee>](https://localhost:8080/#) in <module> ----> 1 prediction = pipe(np.array(audio),return_timestamps=True,stride_length_s=(4, 2))['chunks'] 4 frames [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in __call__(self, inputs, **kwargs) 376 logger.warning( 377 "Using `chunk_length_s` is very experimental with seq2seq models. The results will not necessarily" --> 378 " be entirely accurate and will have caveats. More information:" 379 " https://github.com/huggingface/transformers/pull/20104. Ignore this warning with pipeline(...," 380 " ignore_warning=True)" [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs) 1074 ) 1075 -> 1076 is_dataset = Dataset is not None and isinstance(inputs, Dataset) 1077 is_generator = isinstance(inputs, types.GeneratorType) 1078 is_list = isinstance(inputs, list) [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/pt_utils.py](https://localhost:8080/#) in __next__(self) 123 # We're out of items within a batch 124 item = next(self.iterator) --> 125 processed = self.infer(item, **self.params) 126 # We now have a batch of "inferred things". 127 if self.loader_batch_size is not None: [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in postprocess(self, model_outputs, decoder_kwargs, return_timestamps) 625 if previous_sequence[0] < (timestamp_begin + offset - overlap_time) and idx != 0: 626 break # the previous sequence is too far in the past --> 627 if len(previous_tokens) > 0: 628 # find the longest common sequence between the overlapping parts 629 index_left, index_right, match_length = _fast_find_longest_common_sequence( [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in _find_timestamp_sequence(sequences, tokenizer, feature_extractor, max_source_positions) 174 <Tip> 175 --> 176 For more information on how to effectively use `stride_length_s`, please have a look at the [ASR chunking 177 blog post](https://huggingface.co/blog/asr-chunking). 178 IndexError: list index out of range ``` ### Expected behavior The tiny.en model returns a 'list out of index' error for some files. 
It works for all files when the return_timestamps=True argument is not passed. The tiny model also returns the same error for some different audio files when return_timestamps=True. The base.en and base models also return the same error for some (but different) files.
03-16-2023 20:05:47
03-16-2023 20:05:47
Hey! Thanks for reporting. Could you share the audio that you are using? We never stumbled upon something like this πŸ˜… Problem seems to come from `sequence[1:relevant_timestamp]` ,but the traceback is a bit messed up cc @Narsil <|||||>`prediction = pipe(np.array(audio), return_timestamps=True, stride_length_s=(4, 2))['chunks']` The "stride_length_s" parameter determines the length of the audio chunks to be processed at each time, as well as the length of the gaps between them. This parameter is different from the "chunk_length_s" parameter and is set by default to half of the "chunk_length_s" parameter.<|||||>The recommended parameters are: * `chunk_length_s=30.0` * `stride_length_s=(6, 0)` (or `stride_length_s=None`, and the pipeline will set this to `(chunk_length_s / 5, 0)` for you) See the following Colab for details: https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=Mh_e6rV62QUM<|||||>Any luck here @ataturkiyebmka changing the hyper parameters?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
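Putting the recommended parameters from this thread into a runnable snippet (the audio path is a placeholder for your own file):

```python
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",
    chunk_length_s=30.0,
)

# stride_length_s=(6, 0) is the setting suggested above; the left/right values
# control the overlap kept on each side of every chunk.
output = pipe("my_audio.wav", return_timestamps=True, stride_length_s=(6, 0))
print(output["chunks"])
```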
transformers
22,213
closed
LLAMA model won't release VRAM when deleted
### System Info latest git, windows (tested on WSL as well), pytorch 1.13 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run this code tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") model = LlamaForCausalLM.from_pretrained( "decapoda-research/llama-7b-hf", load_in_8bit=True, torch_dtype=torch.float16, device_map={'':0}, ) import time #try to unload the model from GPU memory del model torch.cuda.empty_cache() time.sleep(5) ### Expected behavior After del the memory should be freed from VRAM.
03-16-2023 19:52:45
03-16-2023 19:52:45
You also need to call the garbage collector: ```python import gc gc.collect() ```<|||||>It worked thanks!, I did try it before and it didn't but checked it again now and it did lol
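For completeness, the full clean-up sequence from this thread in one place (a toy module is used here so the snippet is self-contained, and it is guarded so it also runs on CPU-only machines):

```python
import gc
import torch

model = torch.nn.Linear(2048, 2048)
if torch.cuda.is_available():
    model = model.cuda()
    print("allocated before:", torch.cuda.memory_allocated())

del model          # drop the Python reference
gc.collect()       # let the garbage collector actually release the object
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # hand the cached blocks back to the driver
    print("allocated after:", torch.cuda.memory_allocated())
```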
transformers
22,212
closed
Add MaskedImageModelingOutput
# What does this PR do? - Adds `MaskedImageModelingOutput` and `TFMaskedImageModelingOutput` classes for masked image modeling / completion / in-painting models. - Replaces the inaccurate MaskedLMOutput used for ViT and DeiT MIM heads with the new output class - Ensures backward compatibility by adding `logits` as a property to the new output class ## Before submitting - [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-16-2023 18:18:23
03-16-2023 18:18:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>Pinging @NielsRogge for the final approval<|||||>@NielsRogge could you take another look? I think all comments are addressed
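A stripped-down sketch of the backward-compatibility pattern described in this PR (the class and field names here are illustrative, not necessarily identical to the merged implementation):

```python
from dataclasses import dataclass
from typing import Optional

import torch
from transformers.utils import ModelOutput


@dataclass
class MaskedImageOutputSketch(ModelOutput):
    loss: Optional[torch.FloatTensor] = None
    reconstruction: torch.FloatTensor = None

    @property
    def logits(self):
        # keep the old `logits` accessor working while `reconstruction` is the canonical field
        return self.reconstruction


out = MaskedImageOutputSketch(reconstruction=torch.zeros(1, 3, 224, 224))
print(out.logits.shape)  # torch.Size([1, 3, 224, 224])
```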
transformers
22,211
closed
Generate: Add assisted generation
# What does this PR do? Here it is, the PR for assisted generation πŸ™Œ In a nutshell, it uses an assistant model (which should be a smaller model with the same tokenizer) to speed up generation, taking advantage of the reduced need for memory transfers in the main model forward pass. It leverages the same property that makes batched inference faster per token. Since it is meant to be a reference implementation, the code is meant to be clear and well-commented. If you come across any non-obvious steps, let me know so I can clarify them! Follow-up steps after this PR: 1. Add support for a `sample` version of assisted generation (many cool apps rely on sampling, including chatbots/assistants) 2. Write a blog post a prepare strong communications about the feature _________________________________________________________________ To process the potential speedup visually, consider the following script and the two videos. They correspond to greedy search using a 6.9B GPTNeoX model on an nvidia 3090 πŸš€ <details> <summary>Script</summary> ```py from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer import torch import time model_id = "EleutherAI/pythia-6.9b-deduped" assistant_id = "EleutherAI/pythia-160m-deduped" tokenizer = AutoTokenizer.from_pretrained(model_id) assistant_model = AutoModelForCausalLM.from_pretrained(assistant_id) assistant_model = assistant_model.to("cuda") model_kwargs = { "pretrained_model_name_or_path": model_id, "device_map": "auto", "max_memory": {0: "20GiB", "cpu": "50GiB"}, "torch_dtype": torch.float16, } model = AutoModelForCausalLM.from_pretrained(**model_kwargs) inputs = tokenizer("Here's how to cook a good ramen:", return_tensors="pt").to("cuda") streamer = TextStreamer(tokenizer=tokenizer) print("Without assistance:") start = time.time() model.generate(**inputs, streamer=streamer, max_new_tokens=128) print(f"Elapsed time: {time.time() - start:.2f} seconds") print("With assistance:") start = time.time() model.generate(**inputs, assistant_model=assistant_model, streamer=streamer, max_new_tokens=128) print(f"Elapsed time: {time.time() - start:.2f} seconds") ``` </details> Without assistant | With assistant :-------------------------:|:-------------------------: <img src="https://user-images.githubusercontent.com/12240844/232580502-19965b8d-0f9e-45d8-b57b-86fad2d4681b.gif"/> | <img src="https://user-images.githubusercontent.com/12240844/232580535-30a27fd2-1338-4c71-a0ba-68055a825605.gif"/> (focus on the speed and the fact that the output is the same, not on the output itself)
03-16-2023 18:13:31
03-16-2023 18:13:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts @sgugger -- since this PR is a bit more complex than most, I've decided to request a review from you two 🤗 <|||||>@amyeroberts regarding splitting up, I totally agree! And not only on this method but on most parts of `GenerationMixin`. Not only are the functions long, but they reuse a significant part of the logic. I want to address that in the near future, by designing a `.generate()` that can be somehow composed of a sequence of smaller functional blocks. I haven't figured out the deets, but I'd expect that a good implementation would get us better readability, less code duplication, and higher flexibility for HW/model/decoding-specific implementations! 💅 Before merging, I'm going to double-check that the current code keeps the performance numbers I got a few weeks ago. If everything goes well, it will be merged today 🙏
transformers
22,210
closed
Rag-end2end
### System Info ```shell raise misconfigurationexception( pytorch_lightning.utilities.exceptions.misconfigurationexception: the provided lr scheduler `lambdalr` doesn't follow pytorch's lrscheduler api. you should override the `lightningmodule.lr_scheduler_step` hook with your own logic if you are using a custom lr scheduler. stopped all 7 ray processes. pytorch_lightning=1.6.4 ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction raise misconfigurationexception( pytorch_lightning.utilities.exceptions.misconfigurationexception: the provided lr scheduler `lambdalr` doesn't follow pytorch's lrscheduler api. you should override the `lightningmodule.lr_scheduler_step` hook with your own logic if you are using a custom lr scheduler. stopped all 7 ray processes. ### Expected behavior ```shell Its not working, it was working previously but now there is some misconfigurations error. ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
03-16-2023 16:12:12
03-16-2023 16:12:12
Hi @Rajdoshi99, thanks for raising this issue! It seems the issue is coming from pytorch lighnting. So that we can best help, could you give us more information about the error and how to reproduce. Specifically: * Your environment. Run `transformers-cli env` in the terminal to get the necessary info to share * A snippet of code that we can run to try and reproduce the error * A full trackback of the error that occurred <|||||>Following from my comment above ^ - this is likely an issue with the pytorch lightning version and its compatibility with the example. Pytorch Lighting 1.6.4 was released last June, whereas this example is three years old. We don't actively maintain the examples in the library. I would recommend downgrading the pytorch lighting version in your environment if you wish to run it. <|||||>HI @amyeroberts Traceback (most recent call last): File "/home/ec2-user/SageMaker/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 815, in <module> main(args) File "/home/ec2-user/SageMaker/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 780, in main trainer: pl.Trainer = generic_train( File "/home/ec2-user/SageMaker/transformers/examples/research_projects/rag-end2end-retriever/lightning_base.py", line 410, in generic_train trainer.fit(model) File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit self._call_and_handle_interrupt( File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl results = self._run(model, ckpt_path=self.ckpt_path) File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1217, in _run self.strategy.setup(self) File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 179, in setup self.setup_optimizers(trainer) File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 128, in setup_optimizers self.optimizers, self.lr_scheduler_configs, self.optimizer_frequencies = _init_optimizers_and_lr_schedulers( File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 195, in _init_optimizers_and_lr_schedulers _validate_scheduler_api(lr_scheduler_configs, model) File "/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 350, in _validate_scheduler_api raise MisconfigurationException( pytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler `LambdaLR` doesn't follow PyTorch's LRScheduler API. You should override the `LightningModule.lr_scheduler_step` hook with your own logic if you are using a custom LR scheduler. 
Rag-End2End Retriever <|||||>Transformer CLI ENV - `transformers` version: 4.27.1 - Platform: Linux-5.10.157-139.675.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <ray><|||||>Hi @Rajdoshi99, thanks for providing this information! Looking at the traceback, the issue is indeed arising from pytorch lightning itself and its compatibility with the script. We don't actively maintain the research examples. If you wish to run the script I would suggest downgrading the pytorch lighting version in your environment. As the script is old, I unfortunately can't guarantee that will be enough to make it work.
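If downgrading pytorch-lightning is not an option, the error message itself points at a possible workaround: overriding the `lr_scheduler_step` hook in the LightningModule. A hedged sketch only (hook signature as documented for pytorch-lightning 1.6.x; untested against this research example):

```python
import pytorch_lightning as pl


class PatchedModule(pl.LightningModule):
    # pytorch-lightning 1.6.x hook signature; simply step the LambdaLR scheduler manually
    def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
        scheduler.step()
```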
transformers
22,209
closed
Add LlamaForSequenceClassification
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds the `LlamaForSequenceClassification` class, which among standard applications can be used for reward modelling in the RLHF pipeline :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> cc @ArthurZucker and @younesbelkada
03-16-2023 16:09:30
03-16-2023 16:09:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>> I would also add potentially a test with single_label_classification to make sure everything works! Done in https://github.com/huggingface/transformers/pull/22209/commits/6737e380fc6a4cb73150da4fa821dd463f9a7204 :) <|||||>Awesome thanks a lot @lewtun ! πŸš€
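For reference, a tiny randomly-initialised example of the new head's interface (the config sizes are deliberately small and arbitrary, purely for illustration):

```python
import torch
from transformers import LlamaConfig, LlamaForSequenceClassification

config = LlamaConfig(
    vocab_size=1000,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_labels=2,
    pad_token_id=0,
)
model = LlamaForSequenceClassification(config)

input_ids = torch.randint(1, 1000, (1, 12))
logits = model(input_ids).logits  # shape: (batch_size, num_labels) == (1, 2)
print(logits.shape)
```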
transformers
22,208
closed
fixes a typo in WhisperFeatureExtractor docs.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes a typo. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-16-2023 15:25:32
03-16-2023 15:25:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,207
closed
[`XGLM`] Add `accelerate` support for XGLM
# What does this PR do? Fixes: https://github.com/huggingface/transformers/issues/22188 With this PR users will be able to load XGLM models in 8bit cc @amyeroberts
03-16-2023 14:52:40
03-16-2023 14:52:40
_The documentation is not available anymore as the PR was closed or merged._
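An example of what this unlocks (requires a CUDA GPU and `bitsandbytes`; the checkpoint below is the smallest public XGLM and is only an illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/xglm-564M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # accelerate dispatches the layers across available devices
    load_in_8bit=True,   # bitsandbytes int8 weights
)
```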
transformers
22,206
closed
Error with load_tf_weights_in_albert when transforming tf checkpoint to pytorch model
### System Info - `transformers` version: 4.24.0 - Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): 2.10.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction #### The error happens when : - Using transformer-cli convert script for ALBERT model ``` transformers-cli convert --model_type albert \ --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-best \ --config $ALBERT_BASE_DIR/albert_config.json \ --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin ``` - Or directly using convert_albert_original_tf_checkpoint_to_pytorch.py script ``` python3 compatibility.py --tf_checkpoint_path $ALBERT_BASE_DIR/model.ckpt-best --pytorch_dump_path $ALBERT_BASE_DIR/pytorch_model.bin --albert_config_file $ALBERT_BASE_DIR/albert_config.json ``` #### The error message : ``` Traceback (most recent call last): File "/path/transformers-cli", line 11, in <module> sys.exit(main()) File "/path/transformers/commands/transformers_cli.py", line 55, in main service.run() File "/path/transformers/commands/convert.py", line 94, in run convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output) File "/path/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_albert(model, config, tf_checkpoint_path) File "/path/transformers/models/albert/modeling_albert.py", line 164, in load_tf_weights_in_albert pointer = getattr(pointer, "bias") File "/path/torch/nn/modules/module.py", line 1207, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'AlbertEmbeddings' object has no attribute 'bias' ``` #### Code causing this error Inside load_tf_weights_in_albert function from modeling_albert.py file. More precisely it's the part with ```scope_names[0] == gamma```` and ```beta``` : ``` pointer = model for m_name in name: if re.fullmatch(r"[A-Za-z]+_\d+", m_name): scope_names = re.split(r"_(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] == "kernel" or scope_names[0] == "gamma": pointer = getattr(pointer, "weight") elif scope_names[0] == "output_bias" or scope_names[0] == "beta": pointer = getattr(pointer, "bias") elif scope_names[0] == "output_weights": pointer = getattr(pointer, "weight") elif scope_names[0] == "squad": pointer = getattr(pointer, "classifier") else: try: pointer = getattr(pointer, scope_names[0]) except AttributeError: logger.info(f"Skipping {'/'.join(name)}") continue if len(scope_names) >= 2: num = int(scope_names[1]) pointer = pointer[num] ``` #### What I suspect is happening : In this part of the code, a newly instantiated pytorch ```AlbertForPreTraining``` model (instantiated inside convert_tf_checkpoint_to_pytorch.py) is being filled with tensorflow variables' arrays. In order to achieve this, tensorflow variables are red, their names modified and corresponding arrays are copied to similar pytorch variables. 
In order to fill the correct pytorch variable/attribute, a pointer is moved to the corresponding element according to the variable name. This errors occurs when a tensorflow variable either contains a ```beta``` or ```gamma``` in its name (example of variable name : albert/embeddings/layer_normalization/beta). Because, in those cases, the class/object that the pointer is representing doesn't contains any ```bias``` or ```weight```, resulting in an error when the ```getattr``` function is trying to retrieve them. This seems to happen with every variable name corresponding to a normalization layer. #### Example of my reasoning : The current variable name is ```albert/embeddings/layer_normalization/beta```. It was split on ```/``` and we're now on the ```beta``` substring. ```pointer``` is currently pointing to ```AlbertEmbeddings``` object. We reach the condition ```if scope_names[0] == "output_bias" or scope_names[0] == "beta```. ```getattr``` function is trying to retrieve ```bias``` from ```AlbertEmbeddings``` but there is no corresponding attribute, resulting in the displayed error. #### What should ```pointer``` retrieve inside ```AlbertForPreTraining``` architecture when meeting a ```gamma``` or ```beta``` ? Normalization layer's weight and bias ? ### Expected behavior Obtaining a pytorch bin file from a tensorflow checkpoint, without errors occurring in the process. #### Note : I couldn't find any recent or opened issues on this subject, but similar closed ones are #2006 and #3779
03-16-2023 14:04:43
03-16-2023 14:04:43
Note that this command only works for the original TensorFlow checkpoints of Albert and we do not maintain it as those checkpoints have long been converted and are available on the Hub. To convert Hugging Face models from TensorFlow to PyTorch or vice versa, use [this guide](https://huggingface.co/docs/transformers/model_sharing#convert-a-model-for-all-frameworks).<|||||>Thanks @sgugger for your quick answer ! I tried the guide with : ``` pytorch_model = AlbertForPreTraining.from_pretrained("tf_checkpoint_folder", from_tf=True) pt_model.save_pretrained("generated_pytorch_model") ``` But the same error is occurring. What do you mean by "original Tensorflow checkpoints" ? <|||||>Where does your TensorFlow checkpoint come from?<|||||>From a pretraining with google-resarch official github code<|||||>We do not maintain a generic conversion command that works with all repos outside of Hugging Face. The command you are using is the one we used to convert the original ALBERT checkpoints three years ago, but we don't guarantee it will work with more recent ones. You will need to adapt the code a bit yourself to solve this error, I'm afraid.<|||||>I'm answering after trying to adapt a little bit the script. I managed to copy tensorflow variables to seemingly corresponding pytorch tensors. Only optimizers and the layer norm of AlbertAttention module keeps their originally instantiated values (as there is no need to copy optimizers variables and there is not equivalent for the layer norm of attention module). But for some reason, the model I obtained doesn't seems "right" as it doesn't perform learning when fine-tuned on a simple task. Maybe some others manipulations are needed, such as the transpose operation performed in line 181 of modeling_albert.py ? Sorry to bother you again with that, but is there someone which could have a slight idea of what could be done and give me some tips ? Thanks again for your attention <|||||>Hi @Ala-Na, thanks for raising an issue! For custom situations like this, the question is best placed in the [forums](https://discuss.huggingface.co/). We try to reserve issues for feature requests and bug reports specific to the transformers library. <|||||>Thank you @amyeroberts for the suggestion. I just created a post about it on the forum : https://discuss.huggingface.co/t/help-appreciated-modifying-load-tf-weights-in-albert-for-transforming-albert-tensorflow-checkpoint-to-pytorch-model/34415 For anyone who may have an idea of what need to be done, please don't hesitate to respond there. Thanks !<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
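For anyone adapting the converter to a newer google-research checkpoint, a quick first step is to list the variable names the loader will have to map; a small sketch (the checkpoint prefix is a placeholder for your own files):

```python
import tensorflow as tf

# Placeholder: point at your own checkpoint prefix, e.g. ".../model.ckpt-best"
ckpt = "albert_base/model.ckpt-best"

for name, shape in tf.train.list_variables(ckpt):
    # layer-norm variables show up as .../beta and .../gamma and are the ones
    # the stock mapping in load_tf_weights_in_albert trips over
    print(name, shape)
```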
transformers
22,205
closed
Depth estimation task guide
This PR adds a zero-shot depth estimation task guide that covers inference with a pipeline, as well as manually, for monocular depth estimation supported by DPT and GLPN.
03-16-2023 13:21:55
03-16-2023 13:21:55
PR with images: https://huggingface.co/datasets/huggingface/documentation-images/discussions/64<|||||>_The documentation is not available anymore as the PR was closed or merged._
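A minimal example of the task the guide covers (the checkpoint and image URL are illustrative choices):

```python
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")

print(result["predicted_depth"].shape)  # raw depth tensor
result["depth"].save("depth.png")       # PIL image of the normalised depth map
```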
transformers
22,204
closed
πŸ”₯py38 + torch 2 πŸ”₯πŸ”₯πŸ”₯πŸš€
# What does this PR do? Title is all we need. There is one line in `setup.py` that I don't know whether I need to change, and if so, how. ```python3 deps["importlib_metadata"] + ";python_version<'3.8'", # importlib_metadata for Python versions that don't have it ```
03-16-2023 12:22:06
03-16-2023 12:22:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>The whole suite of tests is run and passed in this [run](https://app.circleci.com/pipelines/github/huggingface/transformers/60009/workflows/73d754f8-017d-458f-8dc2-c6166d30e1de)
transformers
22,203
closed
GenerationConfig argument for Seq2SeqTrainer / Seq2SeqTrainingArgument
### Feature request πŸ‘‹ The request is for a way to pass a `GenerationConfig` to a `Seq2SeqTrainer` (through `Seq2SeqTrainingArguments`). ### Motivation At the time of writing, `Seq2SeqTrainer` only supports a few arguments for generation: `max_length` / `max_new_tokens`, `num_beams`. Being able to pass a `GenerationConfig` to `generate` would allow users to have much more control over the prediction step. I noticed that this is already possible: in `generate`, if no `GenerationConfig` arg is given, it is [retrieved from `self.generation_config`](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1195), [itself deduced from `model.config`](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/modeling_utils.py#L1036) at model init (but generation args in `PretrainedConfig` are legacy / will be removed, right?). Currently, overriding the `model.generation_config` attribute leads to the desired result; however, this does not seem to be documented. ### Your contribution I don't know if this has been discussed. Do you think this should be added ? If not, maybe edit the documentation to clarify the `model.generation_config` way ? I can help in both cases. cc @sgugger @gante
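For illustration, a minimal sketch of the currently-undocumented workaround (datasets and tokenizer setup omitted for brevity; the generation values below are arbitrary placeholders):

```python
from transformers import AutoModelForSeq2SeqLM, GenerationConfig, Seq2SeqTrainer, Seq2SeqTrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Override the generation parameters used during evaluation/prediction
model.generation_config = GenerationConfig(max_new_tokens=128, num_beams=4, length_penalty=0.8)

args = Seq2SeqTrainingArguments(output_dir="out", predict_with_generate=True)
trainer = Seq2SeqTrainer(model=model, args=args)  # pass train_dataset/eval_dataset as usual
```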
03-16-2023 12:00:29
03-16-2023 12:00:29
I'm not sure we can have a `generation_config` directly in the `Seq2SeqTrainingArguments` as it wouldn't work with the CLI. But maybe we can have a `generation_config_file` argument instead? Also yes to the `model.generation_config` way being better documented!<|||||>Good point (CLI)! In that case a JSON file could work, and alternatively the argument could maybe accept both paths to this file and a `GenerationConfig` object ?<|||||>Yes, that works for me!<|||||>@Natooz that sounds great! Would you like to have a go at it?<|||||>Hey @gante, yep, just clearing my backlog, it should be done by the weekend<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>(completed)<|||||>Hello! It's a great idea to be able to pass GenerationConfig to the Seq2SeqTrainer. However, it would be great to have a matching `GenerationArguments` class that allows parsing. Right now I see that the documentation of `GenerationConfig` describes the types of the attributes in words but these are not defined in the code. Am I missing where these are defined?<|||||>Hey @artidoro πŸ‘‹ I'm not sure if I got your question right -- were you asking for support to pass generation arguments directly to the trainer (e.g. `--top-k 50`), as opposed to the solution added as a result of this issue (passing a whole generation config file)?
transformers
22,202
closed
Update tiny model creation script
# What does this PR do? - Update `UNCONVERTIBLE_MODEL_ARCHITECTURES` with a few recent models: they don't have processor class (or not included in the `XXX_MAPPING_NAMES`) - Make the detection of model test class more robust.
03-16-2023 11:15:51
03-16-2023 11:15:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22202). All of your documentation changes will be reflected on that endpoint.
transformers
22,201
closed
not related summary
I passed a text about the food industry to the model, and the summarization it produced was about atoms and totally unrelated.
03-16-2023 10:18:05
03-16-2023 10:18:05
Hi @aylix, could you please follow the issue template, giving details about the model, your environment, a reproducible snippet, and the expected behaviour? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,200
closed
trainer.push_to_hub(**kwargs) requires "git pull" first
### System Info Hi, I am using the Segformer, following the tutorial https://huggingface.co/blog/fine-tune-segformer Every time after training, this conflict error comes up when executing the code "trainer.push_to_hub(**kwargs)". Error messages: ! [rejected] main -> main (fetch first) error: failed to push some refs to 'https://huggingface.co/yiming19/segformer-b0-finetuned-segments-construction-1' hint: Updates were rejected because the remote contains work that you do hint: not have locally. This is usually caused by another repository pushing hint: to the same ref. You may want to first integrate the remote changes hint: (e.g., 'git pull ...') before pushing again. hint: See the 'Note about fast-forwards' in 'git push --help' for details. I can use git pull and git push manually to push the model, but why does this error occur, and could a git pull be executed before pushing the model? @sgugger Thanks. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction kwargs = { "tags": ["vision", "image-segmentation"], "finetuned_from": pretrained_model_name, "dataset": hf_dataset_identifier, } feature_extractor.push_to_hub(hub_model_id) trainer.push_to_hub(**kwargs) ### Expected behavior I just followed the tutorial https://huggingface.co/blog/fine-tune-segformer No error should occur.
03-16-2023 09:58:04
03-16-2023 09:58:04
You either need to `--overwrite_output_dir` or make sure the `output_dir` you are using is a local clone of your repo that is up to date, yes.<|||||>Thanks for your fast reply. How could I change the code? I don't want to pull and push in the terminal manually; I want to do the "pull" first and then the "push" within the code example provided by the link.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
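A hedged sketch of how the pull could be done from code before pushing, assuming `output_dir` is (or becomes) a local clone of the Hub repo; the repo ids below are taken from the error message above, and the `trainer`/`kwargs` objects come from the tutorial:

```python
from huggingface_hub import Repository

# Make output_dir an up-to-date clone of the Hub repo before the Trainer pushes
repo = Repository(
    local_dir="segformer-b0-finetuned-segments-construction-1",
    clone_from="yiming19/segformer-b0-finetuned-segments-construction-1",
)
repo.git_pull()

# then, as in the tutorial:
# trainer.push_to_hub(**kwargs)
```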
transformers
22,199
closed
Fix typo in Align docs
# What does this PR do? Fixes a broken link to the blog post in the ALIGN docs ## Before submitting - [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
03-16-2023 09:23:14
03-16-2023 09:23:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,198
closed
Import "transformers" could not be resolved
### System Info **I have tried different python versions 3.7 and 3.11** ![Screenshot 2023-03-16 at 13 18 32](https://user-images.githubusercontent.com/2458760/225571262-7deb343d-9d76-4d14-8a24-1c75da0276fa.png) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import pipeline classifier = pipeline("sentiment-analysis") res = classifier ("I've been waiting for a Hugging Face course my whole life.") print (res) ### Expected behavior should work
03-16-2023 09:23:05
03-16-2023 09:23:05
Hi @givik, thanks for raising this issue. This isn't related to transformers - it's to do with vscode and the environment. The error shown is coming from PyLance, and is indicating that the environment it's looking in doesn't have `transformers` installed. Please make sure transformers [is installed](https://huggingface.co/docs/transformers/installation) and PyLance is looking in the [correct place](https://code.visualstudio.com/docs/python/environments).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>i get the same issue
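For anyone hitting this, a quick sanity check run inside the interpreter that VS Code / Pylance is configured to use will confirm whether that environment actually has `transformers` installed:

```python
import sys

print(sys.executable)  # should point at the environment selected in VS Code

import transformers

print(transformers.__version__)
```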
transformers
22,197
closed
[Pytorch 2.0] Cannot load `BERT` model `No module named 'torch._six'`
### System Info - `transformers` version: 4.27.1 - Platform: Linux-5.15.0-1031-aws-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduction 1. `!pip install "torch>=2.0" --extra-index-url https://download.pytorch.org/whl/cu117 --upgrade --quiet` 2. `!pip install "transformers==4.27.1" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" tensorboard scikit-learn --upgrade --quiet` 3. load model ```python from transformers import AutoModelForSequenceClassification # Model id to load the tokenizer model_id = "bert-base-uncased" model = AutoModelForSequenceClassification.from_pretrained( model_id, num_labels=2 ) ``` ### Expected behavior Can load model, below is the error ```bash Traceback (most recent call last): File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1126, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/opt/conda/envs/pytorch/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 42, in <module> from ...modeling_utils import PreTrainedModel File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/modeling_utils.py", line 37, in <module> from .deepspeed import deepspeed_config, is_deepspeed_zero3_enabled File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/deepspeed.py", line 38, in <module> from accelerate.utils.deepspeed import HfDeepSpeedConfig as DeepSpeedConfig File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/__init__.py", line 3, in <module> from .accelerator import Accelerator File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/accelerator.py", line 30, in <module> from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/checkpointing.py", line 24, in <module> from .utils import ( File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/utils/__init__.py", line 105, in <module> from .launch import ( File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/utils/launch.py", line 28, in <module> from ..utils.other import merge_dicts File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/utils/other.py", line 28, in <module> from deepspeed import DeepSpeedEngine File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/__init__.py", line 15, in <module> from . 
import module_inject File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/module_inject/__init__.py", line 1, in <module> from .replace_module import replace_transformer_layer, revert_transformer_layer, ReplaceWithTensorSlicing, GroupQuantizer, generic_injection File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/module_inject/replace_module.py", line 801, in <module> from ..pipe import PipelineModule File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/pipe/__init__.py", line 1, in <module> from ..runtime.pipe import PipelineModule, LayerSpec, TiedLayerSpec File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/runtime/pipe/__init__.py", line 1, in <module> from .module import PipelineModule, LayerSpec, TiedLayerSpec File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/runtime/pipe/module.py", line 13, in <module> from .. import utils as ds_utils File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/runtime/utils.py", line 19, in <module> from torch._six import inf ModuleNotFoundError: No module named 'torch._six' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 470, in from_pretrained model_class = _get_model_class(config, cls._model_mapping) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 360, in _get_model_class supported_models = model_mapping[type(config)] File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 602, in __getitem__ return self._load_attr_from_module(model_type, model_name) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 616, in _load_attr_from_module return getattribute_from_module(self._modules[module_name], attr) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 561, in getattribute_from_module if hasattr(module, attr): File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1116, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1128, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback): No module named 'torch._six' ```
03-16-2023 07:40:09
03-16-2023 07:40:09
Error was thrown since i had `deepspeed` installed which uses/used `torch._six` see: https://github.com/microsoft/DeepSpeed/pull/2863<|||||>Is DeepSpeed part of transformers 4.27.1? I get the same error message when using torch>=2.0 and transformers, but I am not installing deepspeed myself, at least not knowingly. How did you uninstall it or turn it off? I am not using the Trainer or Accelerate directly. Instead I use: ``` ... = AutoTokenizer.from_pretrained(tokenizer_variant,do_lower_case=True) ... ... = AutoModelForSequenceClassification.from_pretrained(variant, **config_params) ``` ``` Successfully installed absl-py-1.4.0 altair-4.2.2 cachetools-5.3.0 entrypoints-0.4 filelock-3.10.0 google-auth-2.16.2 google-auth-oauthlib-0.4.6 grpcio-1.51.3 huggingface-hub-0.13.2 jsonlines-3.1.0 jsonschema-4.17.3 lit-15.0.7 markdown-3.4.1 mpmath-1.3.0 nltk-3.8.1 nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 oauthlib-3.2.2 pyasn1-modules-0.2.8 pyrsistent-0.19.3 regex-2022.10.31 requests-oauthlib-1.3.1 sympy-1.11.1 tensorboard-2.12.0 tensorboard-data-server-0.7.0 tensorboard-plugin-wit-1.8.1 tokenizers-0.13.2 torch-2.0.0 transformers-4.27.1 triton-2.0.0 ``` ``` 2023-03-17 18:37:15,836 - models.transformers - INFO - Loader AutoModel from pre-trained. -- 2023-03-17 18:37:16,156 - models.transformers - ERROR - Giving up on: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback): No module named 'torch._six' 2023-03-17 18:37:16,156 - models.transformers - ERROR - Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback): No module named 'torch._six' Traceback (most recent call last): File "/opt/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1126, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) File "/opt/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/opt/conda/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 42, in <module> from ...modeling_utils import PreTrainedModel File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 83, in <module> from accelerate import __version__ as accelerate_version File "/opt/conda/lib/python3.9/site-packages/accelerate/__init__.py", line 7, in <module> from .accelerator import Accelerator File "/opt/conda/lib/python3.9/site-packages/accelerate/accelerator.py", line 29, in <module> from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state File "/opt/conda/lib/python3.9/site-packages/accelerate/checkpointing.py", line 24, in <module> from .utils import ( File "/opt/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py", line 124, in <module> from .other import ( File "/opt/conda/lib/python3.9/site-packages/accelerate/utils/other.py", line 27, in <module> from deepspeed import DeepSpeedEngine File "/opt/conda/lib/python3.9/site-packages/deepspeed/__init__.py", line 16, in <module> from .runtime.engine import DeepSpeedEngine, DeepSpeedOptimizerCallable, DeepSpeedSchedulerCallable File "/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 25, in <module> from deepspeed.runtime.utils import see_memory_usage, get_ma_status, DummyOptim File "/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/utils.py", line 18, in <module> from torch._six import inf ModuleNotFoundError: No module named 'torch._six' ```<|||||>I just upgraded deepspeed to the 0.8.2 and it worked ``` pip install deepspeed --upgrade ```<|||||>I don't think I need deepspeed for what I wrote above, but adding deepspeed>=0.8.2 to requirements.txt works for me as well. Thanks @maloyan!<|||||>hey! I have same error but I don't need to use deepspeed, any idea how to solve it? thx!<|||||>for me this was because of apex, had to do `pip uninstall -y apex` a couple of times since I wasn't using it anyway<|||||>It worked for me. > for me this was because of apex, had to do `pip uninstall -y apex` a couple of times since I wasn't using it anyway
transformers
22,196
closed
fix AutoTP in deepspeed could not work for bloom
# What does this PR do? Fixes # (issue) fix AutoTP in deepspeed could not work for bloom ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - deepspeed: HF Trainer: @stas00
03-16-2023 07:29:52
03-16-2023 07:29:52
should work with https://github.com/microsoft/DeepSpeed/pull/3035<|||||>@sgugger please help review <|||||>@yao-matrix<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Actually, just checked the modeling file and this function is only used in this class, so it would be cleaner to just make it a method. Could you update your PR in that direction?<|||||>@sgugger I see code like "from transformers.models.bloom.modeling_bloom import build_alibi_tensor" in petals; if we make this a method, the petals code needs to be changed as well. The same may happen to other repos that use bloom as well.<|||||>Ok so let's keep it as a function in that module. I'd still prefer a real method (that directly returns the result of the function) to setting a function attribute like this if you don't mind.<|||||>@sgugger I have updated the PR.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22196). All of your documentation changes will be reflected on that endpoint.
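For reference, a sketch of the compromise discussed above: keep the module-level function so external imports keep working, and expose a thin method that delegates to it. The signature is assumed from `modeling_bloom.py`, the implementation body is elided, and the class below is a simplified stand-in rather than the real module:

```python
import torch


def build_alibi_tensor(attention_mask: torch.Tensor, num_heads: int, dtype: torch.dtype) -> torch.Tensor:
    # The existing module-level implementation stays here, so imports such as
    # `from transformers.models.bloom.modeling_bloom import build_alibi_tensor`
    # (used by downstream projects like petals) keep working.
    raise NotImplementedError("full ALiBi implementation elided in this sketch")


class BloomModel:
    # Simplified stand-in for the real nn.Module subclass.
    def build_alibi_tensor(self, attention_mask, num_heads, dtype):
        # Thin method that simply delegates to the module-level function.
        return build_alibi_tensor(attention_mask, num_heads, dtype)
```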
transformers
22,195
closed
Update expected values in `MgpstrModelIntegrationTest`
# What does this PR do? [CI](https://github.com/huggingface/transformers/actions/runs/4422120632/jobs/7753698495) failed: the expected values provided by the contributor didn't match the ones produced on the CI runner, and we just need to update them.
03-16-2023 04:48:39
03-16-2023 04:48:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,194
closed
Fix DeepSpeed CI
# What does this PR do? For the past two days, the daily CI has been running with torch 2.0.0. In the DeepSpeed CI job, there is an issue regarding `torch-tensorrt` (currently `v1.3.0`). The installation was already disabled in #22135, but there was a version shipped with the base image, and I forgot to uninstall it in our docker image during building. Remark: The failure is our `undefined symbol` friend ```python E ImportError: /opt/conda/lib/python3.8/site-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZN2at11show_configB5cxx11Ev ```
03-16-2023 04:00:56
03-16-2023 04:00:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>ouch, i checked it took tensor-rt 1 month to release a version supporting pt-1.13 after the latter was released. so I won't expect any coming updates quickly. do we need tensor-rt for anything? Thank you for fixing this, @ydshieh! Glad to have you on top of things as always!<|||||>Hi @stas00 It's probably my bad. In #20758, it (the one shipped with the base image) was uninstalled (same reason as this PR) In #20758 (next day), I found a way to install it - and just updated the docker file without thinking if we need it. A quick search gives me ```python def is_torch_tensorrt_fx_available(): if importlib.util.find_spec("torch_tensorrt") is None: return False return importlib.util.find_spec("torch_tensorrt.fx") is not None ``` But I think it's irrelevant to DeepSpeed CI job. **I can actually remove these 2 lines** ```bash # This installation instruction will uninstall torch 2.0.0 # TODO: uncomment and update the following line once `torch-tensorrt` is ready for `torch 2.0.0` ``` ~~**if you are also OK with this.**~~ Well, let's remove the installation line, as it was never there originally - it was just shipped with the base image (BTW, thank you for checking the release time of `torch-tensorrt` ❀️ )
transformers
22,193
closed
[trainer] param count for deepspeed zero3
As reported in https://github.com/huggingface/transformers/issues/22179 the trainer code doesn't handle the sharded models correctly in reporting "the Number of trainable parameters" - I'm not sure if FSDP models have the same issue. This PR fixes this situation with Deepspeed ZeRO3 which otherwise reported a count of 0. Fixes: https://github.com/huggingface/transformers/issues/22179
03-16-2023 03:26:38
03-16-2023 03:26:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>An explanation is needed here. The Deepspeed team had to invent their own tensor substitute since 2 years ago nothing of a kind existed in pytorch. They had to replace tensors with placeholders to be able to support sharded tensors. The meta tensors were added just recently so they are looking at possibly switching to those. The API I used in this PR is not public per-se. And the "clean" way would be to gather tensors and then get their normal `t.numel()` - but this is extremely wasteful and expensive memory and time-wise. So I hacked it to get the internal equivalent to make it almost instant. I'm not planning on leaving it this way and asking for deepspeed to provide an efficient method to return the sizes w/o me using a non-public API. There are many other hidden issues wrt this tensor substitution that impacts only ZeRO stage 3 https://github.com/microsoft/DeepSpeed/issues/2650 - and yesterday I have discovered at least one bug in our examples because of that, while debugging the user report that lead to this PR. All examples resize the embedding under zero3 because their check if the vocab is larger than embedding size always returns True, since the embed size is reported to be of size 0, because it's not gathered :( I'm working on ensuring that the Deepspeed addresses this issue because it's subtle and very problematic. Please let me know if you're OK with merging this now that you know more details. I can also easily recode it to gather tensors first, but it'd be very inefficient.
transformers
22,192
closed
Modify electra loss calculation part
`loss_fct(logits.view(-1, self.num_labels), labels.view(-1))` and `loss_fct(logits, labels)` do the same thing; the latter is more efficient.
03-16-2023 01:48:25
03-16-2023 01:48:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,191
closed
run_qa.py on custom datasets raise TypeError: __init__() got an unexpected keyword argument 'field'
### System Info Hello, I'm trying to train the qa model on SageMaker following the instructions, but I got a ```TypeError: __init__() got an unexpected keyword argument 'field'``` when trying to use my own dataset. I used a SageMaker instance, so it already installs every dependency in requirements.txt. I checked the datasets code and it seems like it does not support "field" anymore? Please fix this issue or let me know if there's something I did wrong. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Running load_dataset directly raises the same error ### Expected behavior run_qa.py runs successfully in SageMaker
03-15-2023 19:36:27
03-15-2023 19:36:27
It may be installing old versions of the library so you have to pick up the corresponding version of the example (cc @philschmid for the exact versions)<|||||>> It may be installing old versions of the library so you have to pick up the corresponding version of the example (cc @philschmid for the exact versions) Thank you for your reply! That's also my assumption, I basically just used the train code from: https://huggingface.co/deepset/roberta-base-squad2 under train/SageMaker. Could be that the datasets version is too new in my instance, but in this case, which datasets version would you recommend? Thanks!<|||||>@TongJiL could you share the exact code snippet? <|||||>@philschmid ``` import sagemaker from sagemaker.huggingface import HuggingFace role = sagemaker.get_execution_role() hyperparameters = { 'model_name_or_path':'deepset/roberta-base-squad2', 'output_dir':'/opt/ml/model', 'train_file':'/opt/ml/input/data/train/qa_train_data.csv' } git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.17.0'} huggingface_estimator = HuggingFace( entry_point='run_qa.py', source_dir='./examples/pytorch/question-answering', instance_type='ml.p3.2xlarge', instance_count=1, role=role, git_config=git_config, transformers_version='4.17.0', pytorch_version='1.10.2', py_version='py38', hyperparameters = hyperparameters ) data = { 'train': "s3://my_s3_path/qa_train_data.csv" } huggingface_estimator.fit(data)```<|||||>Turns out the "field" argument works for JSON but not CSV.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
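For illustration, a minimal sketch of the `field` behaviour in `datasets` (the file names below are placeholders): `field` only applies to JSON files whose records are nested under a key, while CSV files are loaded flat:

```python
from datasets import load_dataset

# `field` selects the nested key holding the records in a JSON file, e.g. {"data": [...]}
json_ds = load_dataset("json", data_files="qa_train_data.json", field="data")

# CSV files are flat, so `field` is not accepted there
csv_ds = load_dataset("csv", data_files="qa_train_data.csv")
```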
transformers
22,190
closed
Regression pipeline device
# What does this PR do? Fixes the regression introduced by #21479. Basically doing ```py from transformers import pipeline classifier = pipeline("text-classification", device=-1) ``` now fails on v4.27.0 whereas it used to work on 4.26.1 This PR fixes this and adds a test to avoid future regression. Fixes #22189
03-15-2023 18:04:01
03-15-2023 18:04:01
Tests pipelines pass for both PyTorch and TF so merging!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22190). All of your documentation changes will be reflected on that endpoint.
transformers
22,189
closed
transformers-cli serve not working
### System Info System info ``` bash - `transformers` version: 4.27.0 - Platform: macOS-12.3.1-arm64-arm-64bit - Python version: 3.8.12 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The following command fails for `transformers[serving]==4.27.0` ```bash transformers-cli serve --task=fill-mask --model=bert-base-uncased ``` this is the traceback ```bash Traceback (most recent call last): File "venv/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "venv/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 54, in main service = args.func(args) File "venv/lib/python3.8/site-packages/transformers/commands/serving.py", line 49, in serve_command_factory nlp = pipeline( File "venv/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 976, in pipeline return pipeline_class(model=model, framework=framework, task=task, **kwargs) File "venv/lib/python3.8/site-packages/transformers/pipelines/base.py", line 773, in __init__ self.model = self.model.to(device=device) File "venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1811, in to return super().to(*args, **kwargs) File "venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1126, in to device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs) RuntimeError: Device index must not be negative ``` ### Expected behavior However, downgrading to `transformers[serving]==4.26.1` fixes the issue ```bash INFO: Started server process [22054] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://localhost:8888 (Press CTRL+C to quit) ```
03-15-2023 17:34:31
03-15-2023 17:34:31
cc @Narsil <|||||>This will be patched very soon, thanks for reporting!<|||||>> This will be patched very soon, thanks for reporting! Thank you for fixing it so quickly:)
transformers
22,188
closed
XGLMForCausalLM does not support `device_map='auto'` for load 8 bit
### System Info transformers: v4.27.0 ### Who can help? @sgugger @muellerzr ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was use this code. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "facebook/xglm-1.7B" tokenizer = AutoTokenizer.from_pretrained(model_name) model_8bit = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map='auto') ``` Error: ```python Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[5], line 3 1 model_name = "facebook/xglm-1.7B" 2 tokenizer = AutoTokenizer.from_pretrained(model_name) ----> 3 model_8bit = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map='auto') File /usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py:471, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 469 elif type(config) in cls._model_mapping.keys(): 470 model_class = _get_model_class(config, cls._model_mapping) --> 471 return model_class.from_pretrained( 472 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs 473 ) 474 raise ValueError( 475 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" 476 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." 477 ) File /usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py:2556, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2550 special_dtypes = { 2551 name: torch.float32 2552 for name, _ in model.named_parameters() 2553 if any(m in name for m in keep_in_fp32_modules) 2554 } 2555 if model._no_split_modules is None: -> 2556 raise ValueError(f"{model.__class__.__name__} does not support `device_map='{device_map}'` yet.") 2557 no_split_modules = model._no_split_modules 2558 if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: ValueError: XGLMForCausalLM does not support `device_map='auto'` yet. ``` ### Expected behavior XGLMForCausalLM should support `device_map='auto'`.
03-15-2023 17:31:54
03-15-2023 17:31:54
cc @younesbelkada <|||||>Should be addressed in #22207 ! <|||||>hi @tontan1998 You can now benefit from XGLM 8bit on the `main` branch of `transformers`: ```bash pip install git+https://github.com/huggingface/transformers.git ```<|||||>> hi @tontan1998 You can now benefit from XGLM 8bit on the `main` branch of `transformers`: > > ```shell > pip install git+https://github.com/huggingface/transformers.git > ``` Thank you!
transformers
22,187
closed
Revert 22152 MaskedImageCompletionOutput changes
# What does this PR do? Reverts the breaking changes introduced by #22152. Temporary fix until it's decided how to change the model output. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? cc @alaradirik
03-15-2023 17:00:35
03-15-2023 17:00:35
_The documentation is not available anymore as the PR was closed or merged._<|||||>Will do a patch asap
transformers
22,186
closed
Fix `ViTForMaskedImageModeling` doc example
# What does this PR do? Same as #22185.
03-15-2023 16:32:23
03-15-2023 16:32:23
@ydshieh @alaradirik @fxmarty The issue coming from #22152 was an oversight on my part about breaking changes. Perhaps we should revert that PR first and then agree how to introduce this change as it is intended to be added to other vision model? <|||||>Oh, I thought it was a new model head! Indeed a breaking change there. Good for me to revert that PR, but would be nice to talk to Sylvain or Lysandre first (if you feel necessary). I will leave you judge. Regarding a solution if we really want to have this new attribute and the new name `MaskedImageCompletionOutput`, adding a new property (named `logits`) to `MaskedImageCompletionOutput` might be a way, but I didn't think about this deeply.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Converted to draft for now<|||||>@ydshieh Yep - let's get @LysandreJik and @sgugger 's opinions. I think having the `logits` param is probably the best solution. As far as I know, it's very rare to check against the model output type itself. I believe `reconstruction` was chosen because of the `ImageSuperResolutionOutput` data class. As mentioned in the original PR - we probably do want a different model type to be returned as the documented shapes are incorrect. @alaradirik - could you open a PR to revert the changes? <|||||>Actually, it's late for @alaradirik. I'll open the PR now.<|||||>Yes we can't rename the parameter in the outputs like that for a model that has been around for a bit. What is even more annoying is that the commit was in the release, so we will need to make a patch with the fix.<|||||>Close this PR as it's clear we will and have to definitely use the original `logits`.<|||||>Sorry for being late to comment, I added the `MaskedImageCompletionOutput` to replace the inaccurate `MaskedLMOutput `class used by the masked image modeling heads (ViT and DeiT). Neither of these models have any checkpoints on the hub as mentioned in #22152 . Swin's MIM head has its own output class but no fine-tuned checkpoints for the MIM task either. With that said, ViT and Swin's MIM heads are implementations of [SimMIM](https://arxiv.org/abs/2111.09886) and SimMIM have recently released fine-tuned checkpoints for these two models (as opposed to the base model weights on the hub for Swin MIM head). I'm planning to convert these checkpoints and add a `masked-image-completion` pipeline after @sheonhan merges the ICT PR (a contemporary, better performing MIM model). It'd be great to add an output class and fix inaccurate class output (listed as a language model in the docs) before that. While logits is not an accurate output name in this case as the model returns full reconstructed images, I could replace `reconstruction` with `logits` and open a new PR. What do you think @amyeroberts @sgugger? CC @LysandreJik @ydshieh
transformers
22,185
closed
Fix ViTForMaskedImageModeling example in documentation
Following https://github.com/huggingface/transformers/pull/22152, `logits` is not the right output key name. cc @alaradirik @amyeroberts
03-15-2023 15:46:23
03-15-2023 15:46:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>@fxmarty I am just going to open a PR, but you are too fast! Thanks.<|||||>@ydshieh It could be I missed it elsewhere, so feel free to push here / do an other PR!<|||||>You have to check your CircleCI setting however. The CI has issue to run in this PR.<|||||>OK, nothing missed, but as the CI has some problem, I am going to open PR.<|||||>I will make sure you are a contributor in #22186 @fxmarty . Close this one as mentioned above.<|||||>Ah yes I did not configure SSO with CircleCI, maybe that's the issue.
transformers
22,184
closed
Fix: unfinished_sequences with correct device
The original code was causing errors when running torch.jit.trace due to the tensor options being incorrect. I fixed this by using torch.ones to create a tensor with the correct device and dtype. This should resolve the issue with running torch.jit.trace. # What does this PR do? This PR fixes a bug that causes errors when running torch.jit.trace in transformers. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a bug. - [x] I read the contributor guideline. - [ ] This was discussed in issue # (insert issue number here). - [ ] I updated the documentation. - [ ] I wrote new tests. ## Who can review? Please tag @ArthurZucker and @younesbelkada for text models review. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
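Roughly, the change amounts to the following simplified sketch of the relevant line in `generate` (a toy `input_ids` is included so it runs standalone; the exact surrounding code is elided):

```python
import torch

input_ids = torch.tensor([[101, 2009, 2003]])  # toy batch

# before: dtype/device are taken from `.new()`, which trips up torch.jit.trace
unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)

# after: explicit dtype and device, trace-friendly
unfinished_sequences = torch.ones(input_ids.shape[0], dtype=torch.long, device=input_ids.device)
```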
03-15-2023 15:18:33
03-15-2023 15:18:33
cc @gante <|||||>The following code will reproduce the error: ```python import torch from transformers import AutoTokenizer, AutoModel device = 'cuda' if torch.cuda.is_available() else 'cpu' class Wrapper(torch.nn.Module): """ Wrapper for the model to be traced """ def __init__(self): super().__init__() self.model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().to(device) self.tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) def forward(self, input_ids): self.model.eval() input_ids = input_ids.to(device) gen_kwargs = {"max_length": 2048, "num_beams": 1, "do_sample": True, "top_p": 0.7, "temperature": 0.95} outputs = self.model.generate(input_ids=input_ids, **gen_kwargs) return outputs[0, len(input_ids[0]) - 2:] tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) query = "hello" input = tokenizer([query], return_tensors="pt", padding=True) model = Wrapper() torch.jit.trace(model, (input.input_ids,)).save("chatglm-6b.pt") ```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Interestingly, I don't see `.new()` on pytorch's docs. Good thing we're removing it :D
transformers
22,183
closed
Italian Translation of migration.mdx
<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> See issue [#17459] Add italian translation of migration.mdx and update _toctree.yml. It's my first pull request, so i hope it's ok <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-15-2023 14:23:37
03-15-2023 14:23:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @Baelish03! Can you move your changes in toctree between *big_models* and *debugging*? ``` - local: big_models title: Istanziare un big model - local: migration title: Passaggio da pacchetti precedenti - local: debugging title: Debugging ``` The translation LGTM, only two sentences sound strange, I propose this modification: 192: Se chiamavi i modelli con nomi di parole chiave per argomenti di parole chiave, ad esempio `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, questo non dovrebbe causare alcun cambiamento. --> Se inizializzavi i modelli usando parole chiave per gli argomenti, ad esempio `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, questo non dovrebbe causare alcun cambiamento. 194: Se chiamavi i modelli con input posizionali per argomenti di parole chiave, ad esempio `model(inputs_ids, attention_mask, token_type_ids)`, potrebbe essere necessario ricontrollare l'ordine esatto degli argomenti di input. --> Se inizializzavi i modelli con input posizionali gli argomenti, ad esempio `model(inputs_ids, attention_mask, token_type_ids)`, potrebbe essere necessario ricontrollare l'ordine esatto degli argomenti di input.<|||||>Thanks for the advice, I re-upload files with corrections<|||||>Thanks @Baelish03 LGTM @sgugger, @stevhliu and @MKhalusova @omarespejel
transformers
22,182
closed
Add Video Mask2Former
### What does this PR do? This PR adds Video Mask2Former model. Original repo: https://github.com/facebookresearch/Mask2Former/ Mask2Former for Video Instance Segmentation Paper: https://arxiv.org/abs/2112.10764 Co-authored with: @alaradirik - [x] Update model checkpoints - [x] Update model cards - [ ] transfer model checkpoints to facebook organization ### Who can review? @alaradirik
03-15-2023 14:13:52
03-15-2023 14:13:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22182). All of your documentation changes will be reflected on that endpoint.<|||||>> I suggest that instead of adapting the logic within the current `Mask2Former` layers and image processor, a new model `Mask2FormerVideo` is added, with its own image processor and modeling file. This would enable simpler logic in the forward passes for the respective models, and an image processor which can take and return videos directly. > > This PR is in a good state, so it should be fairly simple to make this change but of course let us know if you have any questions about this implementation. Hey @amyeroberts , thanks a lot for the review :) Regarding the structure...Alara and I actually had a discussion regarding this earlier...whether to add this as a separate model or not but we felt that since video mask2former is not very different from original mask2former, it would be alright if we just modify the existing implementation to handle both video and image. But I'd be happy to convert this into a separate model. Just want to quickly double check with @alaradirik too before proceeding as this relates to our earlier discussion. <|||||>@shivalikasingh95 I'm fine with going either way since the video segmentation model only has minor differences but perhaps we could keep most of changes to the existing sub classes and add a new head class - `Mask2FormerForVideoSegmentation` and add it to the model docs. I think this would boost the visibility and usage of the model as well, what do you think? Just asking for future models @amyeroberts, would we have a video_processing_xxx.py file and `VideoProcessorXXX` class to process videos? We have talked about creating video processing utilities with @shivalikasingh95 before but I wasn't sure about the best way to handle it.<|||||>> @shivalikasingh95 I'm fine with going either way since the video segmentation model only has minor differences but perhaps we could keep most of changes to the existing sub classes and add a new head class - `Mask2FormerForVideoSegmentation` and add it to the model docs. I think this would boost the visibility and usage of the model as well, what do you think? @alaradirik I think adding a new head class - `Mask2FormerForVideoSegmentation` is a really good idea! > Just asking for future models @amyeroberts, would we have a video_processing_xxx.py file and `VideoProcessorXXX` class to process videos? If we can add something like `Mask2FormerVideoProcessor` which can handle videos directly then that would be perfect. ______________________________________________________________________________________________ I'm not fully sure if it would make sense to add a separate modeling file altogether for video-mask2former since the authors wanted to show how easily we can use mask2former for video segmentation too. Quote from the paper -: _"We find Mask2Former also achieves state-of-the-art performance on video instance segmentation without modifying the architecture, the loss or even the training pipeline"_ And hence, implementation wise there isn't really much difference. ______________________________________________________________________________________________ I think adding a new head and video processor class should help in making the implementation cleaner. But again, I'm not sure if a model can have both an ImageProcessor and VideoProcessor class. 
If you guys feel, this may not be the best approach then may be we can go for turning this into a separate model. <|||||>> I'm not fully sure if it would make sense to add a separate modeling file altogether for video-mask2former I agree that having a separate model implementation isn't in line with the spirit of the model. However, at the moment, supporting video inputs does require a modification of architecture, as shown by the need for the `is_video` flag throughout `modeling_mask2formers.py`. > If we can add something like Mask2FormerVideoProcessor which can handle videos directly then that would be perfect. I don't think a `VideoProcessor` class is necessary at the moment. We already have models with video inputs e.g. [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) which use image processors. If we want to use the same processing class as the image model, adding a method such as `preprocess_video` is a possibility. This does mean the processor won't be compatible with the usual API, i.e. it could not be directly called `image_processor(video_inputs)`. However, the current method `post_process_video_instance_segmentation` also breaks this. Having a separate modeling file resolves the issue of one class handling both images and videos. <|||||>> Having a separate modeling file resolves the issue of one class handling both images and videos. Sure @amyeroberts. I understand now why adding a separate modeling file would make more sense. Had a discussion with @alaradirik too regarding this change on Friday. I'll take care of adding the new modeling file. >I don't think a VideoProcessor class is necessary at the moment. We already have models with video inputs e.g. [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) which use image processors. Again makes sense, @amyeroberts. I only meant to suggest adding a VideoProcessor class for Mask2Former in case you guys were planning on introducing VideoProcessor classes in general as part of transformers library. >If we want to use the same processing class as the image model, adding a method such as preprocess_video is a possibility. This does mean the processor won't be compatible with the usual API, i.e. it could not be directly called image_processor(video_inputs) Would adding a separate image processor class, `VideoMask2FormerImageProcessor` make sense if we don't want to have the same image processing class for image and video models? This way the usual API behaviour would not be broken. In this case, we can directly use `image_processor(video_inputs)`. <|||||>>Would adding a separate image processor class, VideoMask2FormerImageProcessor make sense if we don't want to have the same image processing class for image and video models? This way the usual API behaviour would not be broken. In this case, we can directly use image_processor(video_inputs). @shivalikasingh95 - yep, I think that works! <|||||>@alaradirik and @amyeroberts please feel free to review this PR. I'm just getting a few failing CI checks due to [this error](https://app.circleci.com/pipelines/github/huggingface/transformers/62275/workflows/568ede06-a91a-4e5c-a02c-6bbafcbbcd64/jobs/766732). Would be great if I can get some help on how to fix it.<|||||>@amyeroberts could you do a final review, Video Mask2Former is a separate model now and the PR is in good shape :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
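A rough illustration of the shape handling discussed above: the class names `Mask2FormerForVideoSegmentation` and `VideoMask2FormerImageProcessor` come from the conversation and may not match what was ultimately merged, and the snippet below is only a toy shape walkthrough, not the actual implementation. It shows the core idea that a clip of frames can be run through the shared image encoder as a batch, after which the per-frame mask logits are regrouped so that each query carries one spatio-temporal mask.

```python
import torch

# Toy shapes only: a clip of T frames goes through the shared encoder as a
# batch of images, producing per-frame mask logits for each of the Q queries.
num_frames, num_queries, h, w = 4, 100, 96, 96
per_frame_logits = torch.randn(num_frames, num_queries, h, w)  # (T, Q, H, W)

# For video instance segmentation, each query should carry a (T, H, W) mask,
# so the frame axis is moved next to the query axis before post-processing.
video_logits = per_frame_logits.permute(1, 0, 2, 3)  # (Q, T, H, W)
print(video_logits.shape)  # torch.Size([100, 4, 96, 96])
```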
transformers
22,181
closed
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py)
## Environment info transformers version: '4.26.1' Platform: Databricks the command to import, return the error below ``` from transformers import pipeline ``` ``` RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py) ``` ## Who can help @Narsil full trace of error ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1109 try: -> 1110 return importlib.import_module("." + module_name, self.__name__) 1111 except Exception as e: /usr/lib/python3.8/importlib/__init__.py in import_module(name, package) 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) 128 /usr/lib/python3.8/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib/python3.8/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib/python3.8/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib/python3.8/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib/python3.8/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib/python3.8/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) /databricks/python/lib/python3.8/site-packages/transformers/pipelines/__init__.py in <module> 64 from .depth_estimation import DepthEstimationPipeline ---> 65 from .document_question_answering import DocumentQuestionAnsweringPipeline 66 from .feature_extraction import FeatureExtractionPipeline /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/pipelines/document_question_answering.py in <module> 28 from .base import PIPELINE_INIT_ARGS, ChunkPipeline ---> 29 from .question_answering import select_starts_ends 30 /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/pipelines/question_answering.py in <module> 7 ----> 8 from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features 9 from ..modelcard import ModelCard /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/data/__init__.py in <module> 29 ) ---> 30 from .metrics import glue_compute_metrics, xnli_compute_metrics 31 from .processors import ( /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. 
--> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/data/metrics/__init__.py in <module> 21 ---> 22 if is_sklearn_available(): 23 from sklearn.metrics import f1_score, matthews_corrcoef /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in is_sklearn_available() 563 return False --> 564 return is_scipy_available() and importlib.util.find_spec("sklearn.metrics") 565 /usr/lib/python3.8/importlib/util.py in find_spec(name, package) 93 if parent_name: ---> 94 parent = __import__(parent_name, fromlist=['__path__']) 95 try: /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/mlflow/utils/import_hooks/__init__.py in load_module(self, fullname) 234 try: --> 235 module = self.loader.load_module(fullname) 236 notify_module_loaded(module) /databricks/python_shell/dbruntime/PostImportHook.py in load_module(self, fullname) 215 try: --> 216 module = self.loader.load_module(fullname) 217 notify_module_loaded(module) /databricks/python/lib/python3.8/site-packages/sklearn/__init__.py in <module> 81 from . import __check_build # noqa: F401 ---> 82 from .base import clone 83 from .utils._show_versions import show_versions /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/base.py in <module> 16 from ._config import get_config ---> 17 from .utils import _IS_32BIT 18 from .utils._tags import ( /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/__init__.py in <module> 22 from .murmurhash import murmurhash3_32 ---> 23 from .class_weight import compute_class_weight, compute_sample_weight 24 from . import _joblib /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/class_weight.py in <module> 6 ----> 7 from .validation import _deprecate_positional_args 8 /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/validation.py in <module> 25 ---> 26 from .fixes import _object_dtype_isnan, parse_version 27 from .. 
import get_config as _get_config /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/fixes.py in <module> 19 import scipy ---> 20 import scipy.stats 21 from scipy.sparse.linalg import lsqr as sparse_lsqr # noqa /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/scipy/stats/__init__.py in <module> 484 DegenerateDataWarning, FitError) --> 485 from ._stats_py import * 486 from ._variation import variation /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/scipy/stats/_stats_py.py in <module> 40 from scipy.ndimage import _measurements ---> 41 from scipy._lib._util import (check_random_state, MapWrapper, 42 rng_integers, _rename_parameter, _contains_nan) ImportError: cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) <command-632418863495886> in <module> 3 import plotly.express as plx 4 ----> 5 from transformers import pipeline 6 # from transformers import AutoTokenizer /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 160 # Import the desired module. If you’re seeing this while debugging a failed import, 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 164 is_root_import = thread_local._nest_level == 1 /usr/lib/python3.8/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive) /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in __getattr__(self, name) 1098 value = self._get_module(name) 1099 elif name in self._class_to_module.keys(): -> 1100 module = self._get_module(self._class_to_module[name]) 1101 value = getattr(module, name) 1102 else: /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1110 return importlib.import_module("." + module_name, self.__name__) 1111 except Exception as e: -> 1112 raise RuntimeError( 1113 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" 1114 f" traceback):\n{e}" RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py) ``` ### Who can help? 
_No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import pipeline ``` ### Expected behavior Successful import of pipeline
03-15-2023 12:49:33
03-15-2023 12:49:33
Hi @k3ybladewielder, thanks for raising this issue. The error is coming from python not being able to import a specific scipy module and so isn't a `transformers` related bug per se. I would try reinstalling transformers with a package manager and the `sklearn` option as it should install all the necessary dependencies in your environment: `pip install transformers[sklearn] --force-reinstall`<|||||>@amyeroberts thanks for the answer. Its works<|||||>**can yall help me too:** ``` from transformers import pipeline ``` ``` Failed to import transformers.pipelines because of the following error (look up to see its traceback): Unable to convert function return value to a Python type! The signature was () -> handle ``` **full trace of error:** ``` TypeError Traceback (most recent call last) File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py:1099, in _LazyModule._get_module(self, module_name) 1098 try: -> 1099 return importlib.import_module("." + module_name, self.__name__) 1100 except Exception as e: File ~/anaconda3/lib/python3.9/importlib/__init__.py:127, in import_module(name, package) 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) File <frozen importlib._bootstrap>:1030, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:1007, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:986, in _find_and_load_unlocked(name, import_) File <frozen importlib._bootstrap>:680, in _load_unlocked(spec) File <frozen importlib._bootstrap_external>:850, in exec_module(self, module) File <frozen importlib._bootstrap>:228, in _call_with_frames_removed(f, *args, **kwds) File ~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/__init__.py:44, in <module> 35 from ..utils import ( 36 HUGGINGFACE_CO_RESOLVE_ENDPOINT, 37 is_kenlm_available, (...) 42 logging, 43 ) ---> 44 from .audio_classification import AudioClassificationPipeline 45 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline File ~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/audio_classification.py:21, in <module> 20 from ..utils import add_end_docstrings, is_torch_available, is_torchaudio_available, logging ---> 21 from .base import PIPELINE_INIT_ARGS, Pipeline 24 if is_torch_available(): File ~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/base.py:35, in <module> 34 from ..image_processing_utils import BaseImageProcessor ---> 35 from ..modelcard import ModelCard 36 from ..models.auto.configuration_auto import AutoConfig File ~/anaconda3/lib/python3.9/site-packages/transformers/modelcard.py:48, in <module> 32 from .models.auto.modeling_auto import ( 33 MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES, 34 MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, (...) 46 MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES, 47 ) ---> 48 from .training_args import ParallelMode 49 from .utils import ( 50 MODEL_CARD_NAME, 51 cached_file, (...) 57 logging, 58 ) File ~/anaconda3/lib/python3.9/site-packages/transformers/training_args.py:30, in <module> 29 from .debug_utils import DebugOption ---> 30 from .trainer_utils import ( 31 EvaluationStrategy, 32 FSDPOption, 33 HubStrategy, 34 IntervalStrategy, 35 SchedulerType, 36 ShardedDDPOption, 37 ) 38 from .utils import ( 39 ExplicitEnum, 40 cached_property, (...) 
53 requires_backends, 54 ) File ~/anaconda3/lib/python3.9/site-packages/transformers/trainer_utils.py:48, in <module> 47 if is_tf_available(): ---> 48 import tensorflow as tf 51 def seed_worker(_): File ~/anaconda3/lib/python3.9/site-packages/tensorflow/__init__.py:37, in <module> 35 import typing as _typing ---> 37 from tensorflow.python.tools import module_util as _module_util 38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/__init__.py:42, in <module> 39 # pylint: enable=wildcard-import 40 41 # Bring in subpackages. ---> 42 from tensorflow.python import data 43 from tensorflow.python import distribute File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/__init__.py:21, in <module> 20 # pylint: disable=unused-import ---> 21 from tensorflow.python.data import experimental 22 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/__init__.py:96, in <module> 95 # pylint: disable=unused-import ---> 96 from tensorflow.python.data.experimental import service 97 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/service/__init__.py:419, in <module> 15 """API for using the tf.data service. 16 17 This module contains: (...) 416 job of ParameterServerStrategy). 417 """ --> 419 from tensorflow.python.data.experimental.ops.data_service_ops import distribute 420 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/data_service_ops.py:22, in <module> 21 from tensorflow.python import tf2 ---> 22 from tensorflow.python.data.experimental.ops import compression_ops 23 from tensorflow.python.data.experimental.service import _pywrap_server_lib File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/compression_ops.py:16, in <module> 15 """Ops for compressing and uncompressing dataset elements.""" ---> 16 from tensorflow.python.data.util import structure 17 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/util/structure.py:22, in <module> 20 import wrapt ---> 22 from tensorflow.python.data.util import nest 23 from tensorflow.python.framework import composite_tensor File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/util/nest.py:34, in <module> 16 """## Functions for working with arbitrarily nested sequences of elements. 17 18 NOTE(mrry): This fork of the `tensorflow.python.util.nest` module (...) 31 arrays. 
32 """ ---> 34 from tensorflow.python.framework import sparse_tensor as _sparse_tensor 35 from tensorflow.python.util import _pywrap_utils File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/framework/sparse_tensor.py:24, in <module> 23 from tensorflow.python.framework import composite_tensor ---> 24 from tensorflow.python.framework import constant_op 25 from tensorflow.python.framework import dtypes File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py:25, in <module> 24 from tensorflow.python.eager import context ---> 25 from tensorflow.python.eager import execute 26 from tensorflow.python.framework import dtypes File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/eager/execute.py:21, in <module> 20 from tensorflow.python.eager import core ---> 21 from tensorflow.python.framework import dtypes 22 from tensorflow.python.framework import ops File ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/framework/dtypes.py:34, in <module> 32 from tensorflow.core.function import trace_type ---> 34 _np_bfloat16 = _pywrap_bfloat16.TF_bfloat16_type() 37 class DTypeMeta(type(_dtypes.DType), abc.ABCMeta): TypeError: Unable to convert function return value to a Python type! The signature was () -> handle The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) Input In [11], in <cell line: 2>() 1 import gradio as gr # UI library ----> 2 from transformers import pipeline File <frozen importlib._bootstrap>:1055, in _handle_fromlist(module, fromlist, import_, recursive) File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py:1089, in _LazyModule.__getattr__(self, name) 1087 value = self._get_module(name) 1088 elif name in self._class_to_module.keys(): -> 1089 module = self._get_module(self._class_to_module[name]) 1090 value = getattr(module, name) 1091 else: File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py:1101, in _LazyModule._get_module(self, module_name) 1099 return importlib.import_module("." + module_name, self.__name__) 1100 except Exception as e: -> 1101 raise RuntimeError( 1102 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" 1103 f" traceback):\n{e}" 1104 ) from e RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): Unable to convert function return value to a Python type! The signature was () -> handle ``` <|||||>@CliffLopes could you open a new issue, including all the information requested in the issue template e.g. running environment etc. ? Just from the traceback, it looks like the issue is coming from the tensorflow installed in the environment. In the issue, please make sure to include any relevant information about tensorflow e.g. version and how it was installed.
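As a general debugging aid for import failures like the two traced above, the first step is usually to confirm which of the scientific packages in the environment are importable at all and which versions are installed; mismatched scipy/scikit-learn builds, or a broken TensorFlow install, surface exactly as these errors. The snippet below is a generic environment check (the package list is illustrative), after which a forced reinstall such as `pip install "transformers[sklearn]" --force-reinstall`, as suggested earlier in the thread, can pull in consistent dependencies.

```python
# Quick sanity check of the environment: print the version of each package,
# or the exception raised when its import is broken.
import importlib

for name in ("scipy", "sklearn", "tensorflow"):
    try:
        mod = importlib.import_module(name)
        print(f"{name}: {getattr(mod, '__version__', 'unknown')}")
    except Exception as err:  # broken installs raise ImportError/TypeError here
        print(f"{name}: failed to import ({err})")
```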
transformers
22,180
closed
A script to add/update `pipeline_model_mapping` systematically
# What does this PR do?

This script will ease the process of adding and/or updating `pipeline_model_mapping` in test files in the future, in a systematic way (no manual editing). This is also one part of the process of [tiny model creation/upload + tiny model info update + **test file update**].

The basic idea, for a test file:

- find a test class
- check if `pipeline_model_mapping` is already defined
  - yes + overwrite is `True`: remove it (more precisely, mark it to be removed, and remove it before writing to the file)
- compute `pipeline_model_mapping` via the mappings defined in the `XXXPipelineTests` classes defined in the files `tests/pipelines/test_xxx.py`
- compute the position at which to add `pipeline_model_mapping`
- add `pipeline_model_mapping` and write to the file

Remark: there are a (very) few exceptional cases not handled in this PR, for example two test classes defined in a single test file, like `Blip2ForConditionalGenerationDecoderOnlyTest` and `Blip2ModelTest`.

Example usage / demo:

```bash
python utils\add_pipeline_model_mapping_to_test.py --test_file tests\models\bert\test_modeling_bert.py --overwrite
```
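A stripped-down sketch of the basic idea above; the function and variable names here are made up for illustration, and the real script in `utils/add_pipeline_model_mapping_to_test.py` is more involved (the mapping itself would be computed from the pipeline test classes and passed in as `mapping_line`).

```python
import re


def add_pipeline_model_mapping(test_file, mapping_line, overwrite=False):
    """Insert or refresh a `pipeline_model_mapping = ...` line in a model test file."""
    with open(test_file, encoding="utf-8") as f:
        lines = f.readlines()

    # Drop an existing definition if we are allowed to overwrite it.
    if overwrite:
        lines = [l for l in lines if not l.lstrip().startswith("pipeline_model_mapping")]
    elif any(l.lstrip().startswith("pipeline_model_mapping") for l in lines):
        return  # already present and we are not overwriting

    # Place the new attribute right after `all_model_classes` inside the test class.
    for i, line in enumerate(lines):
        if re.match(r"\s+all_model_classes\s*=", line):
            indent = line[: len(line) - len(line.lstrip())]
            lines.insert(i + 1, f"{indent}{mapping_line}\n")
            break

    with open(test_file, "w", encoding="utf-8") as f:
        f.writelines(lines)
```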
03-15-2023 11:52:26
03-15-2023 11:52:26
Request for @sgugger to review, as he was involved in a few related PRs before. If the core maintainers decide to let @amyeroberts review, I can provide more context (regarding the previous PRs) for her to ease the review process.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I think I should add a test for this new script, just like `tests/repo_utils/test_tests_fetcher.py` tests the script `utils/tests_fetcher.py`. However, this script needs file(s) in `tests/model/` and requires models to be in `transformers`. I might need to see if there are similar cases handled before in order to implement a test for this new script.<|||||>Hey @ydshieh, thanks for your PR! In which settings would you want to leverage such a script? Would it be when users contribute new models?<|||||>Hi @LysandreJik, I am thinking it is better to have a CI job that runs this script on a regular basis (as a check, or even to open a PR automatically), similar to #22275. On some occasions, we (`transformers` members) might want to run it for a particular test file to check/update something quickly. I don't expect contributors to run this - fewer steps, less friction and a happier experience for them :-) <|||||>If it makes your workflow simpler then why not, but I would make it very explicit what this script does and document it (even in the script directly). If I had trouble understanding what it was for/when and who should use it, I figure others will have the same problem :) Thanks!<|||||>> If it makes your workflow simpler then why not, but I would make it very explicit what this script does and document it (even in the script directly). If I had trouble understanding what it was for/when and who should use it, I figure others will have the same problem :) > > Thanks! Nice point ❀️. I will add some comments in the script. Thank you for the feedback.<|||||>@LysandreJik Hope you will ❀️ the comment added in https://github.com/huggingface/transformers/pull/22180/commits/87433ee115ba14b509df6386d2feff8e7ad41567 <|||||>(just rebased on `main`)<|||||>Hi @LysandreJik. A description was added in [this commit](https://github.com/huggingface/transformers/pull/22180/commits/87433ee115ba14b509df6386d2feff8e7ad41567). Let me know if you have any further comments/reviews :-). Thank you.<|||||>Just fixed a few things before merging, nothing really big. I have used this new script to update the attributes in #22606.
transformers
22,179
closed
When I use Trainer with Deepspeed, the Number of trainable parameters is 0
The version information is as follows:

- DeepSpeed: 0.8.1
- transformers: 4.26.1

## Problem

When I use Trainer with DeepSpeed, the number of trainable parameters is reported as 0, like this:

![image](https://user-images.githubusercontent.com/63763578/225277324-3650bbea-78f7-493a-97a2-3f9ef0bdcd5a.png)

This happens when using ZeRO-3. When I use ZeRO-2, the problem does not occur.
03-15-2023 10:13:06
03-15-2023 10:13:06
cc @stas00 <|||||>Thank you for the report, @noob-ctrl. Please let me know if this fix works for you: https://github.com/huggingface/transformers/pull/22193 <|||||>@stas00 Hi, it works now. Thank you!<|||||>Thank you for testing, @noob-ctrl - the PR has been merged.
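For context on why the count collapses to zero: under ZeRO-3 the weights are partitioned across ranks, so `p.numel()` on a given rank can return 0 even though the parameter is trainable, while DeepSpeed keeps the full size on the `ds_numel` attribute. The helper below is a sketch of that idea (not the exact patch from the linked PR).

```python
def count_trainable_parameters(model):
    """Count trainable parameters, accounting for ZeRO-3 partitioned weights."""
    total = 0
    for p in model.parameters():
        if not p.requires_grad:
            continue
        # DeepSpeed ZeRO-3 replaces the local storage with a partition and keeps
        # the full size on `ds_numel`; fall back to the regular numel otherwise.
        total += p.ds_numel if hasattr(p, "ds_numel") else p.numel()
    return total
```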
transformers
22,178
open
Add BEiTv3
### Model description

Microsoft just open-sourced BEiTv3: https://github.com/microsoft/unilm/tree/master/beit3

This is a very powerful vision-language model that can be used as a backbone for a variety of downstream tasks, from image classification to VQA to object detection. Time to add it to HF Transformers! :)

### Open source status

- [X] The model implementation is available
- [X] The model weights are available

### Provide useful links for the implementation

https://github.com/microsoft/unilm/tree/master/beit3
03-15-2023 10:01:21
03-15-2023 10:01:21
I will start working on adding the model!<|||||>If required, I can help as well.<|||||>Hello, I want to test the zero-shot image-text retrieval of the model on some images and texts. Can you help me?
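Since BEiTv3 is not available in `transformers` yet, the zero-shot image-text retrieval asked about above can be sketched with a CLIP-style model using the same recipe: encode images and texts, then compare embeddings by cosine similarity. Swapping in BEiTv3 later would mainly change the checkpoint and processor; the image file names below are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [Image.open("cat.jpg"), Image.open("dog.jpg")]   # your images
texts = ["a photo of a cat", "a photo of a dog"]          # your text queries

inputs = processor(text=texts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# L2-normalize the embeddings and compute a (texts x images) cosine-similarity matrix.
image_embeds = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
text_embeds = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
similarity = text_embeds @ image_embeds.T
best_image_per_text = similarity.argmax(dim=-1)
print(best_image_per_text)
```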
transformers
22,177
closed
[`bnb`] Let's make serialization of int8 models possible
# What does this PR do?

Before this PR, it was not possible to save an 8-bit model or to load an 8-bit model from the Hub. This PR makes this feature possible. If this PR gets merged, users can upload 8-bit models to the Hub and/or load 8-bit models from the Hub, hence saving 2x memory compared to half-precision models.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
>>> Hello my name is Nate, I am a professional photographer and I am a member of the

model.save_pretrained("./saved_int8")

model = AutoModelForCausalLM.from_pretrained("./saved_int8")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
>>> Hello my name is Nate, I am a professional photographer and I am a member of the
```

Depends on https://github.com/TimDettmers/bitsandbytes/pull/159

Let's keep it as a draft before I address the last TODOs and open questions & before https://github.com/TimDettmers/bitsandbytes/pull/159 gets merged.

## TODOs and open questions:

- Ability to push `BitsAndBytesConfig`
- Do we want to save the serialized model under the name `pytorch_model.bin`? I would say yes for simplicity reasons, but we need to make sure that a user calls `from_pretrained` with `load_in_8bit`, hence add a warning if there is a `quantization_config.json` on the Hub repo + the user is not passing `load_in_8bit=True`.
- Force `load_in_8bit=True` if there is a `quantization_config.json` on the Hub repo?
- Update docs
- Update warnings
- Safety checkers for `bnb` versions
- Add a test to check if it works using sharded fp16 weights

cc @sgugger I left a few open questions, and would love to hear your thoughts on these!
03-15-2023 08:46:09
03-15-2023 08:46:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>The design is not easy enough to use. If a user saves a quantized model and pushes it to the Hub, it should work directly with `from_pretrained`. This is why I insisted that the quantization config should be saved inside the model config. This way you won't need to have the user pass `load_in_8bit=True`, as you can read it from the config.<|||||>Awesome, OK, I'll work on that: if there is a quantization config on the repo, we should force-use `device_map="auto"` & `load_in_8bit` in this case.<|||||>The PR is ready for review @sgugger! This PR is not mergeable before the bnb release, of course.<|||||>Thanks for the heads up! :D It should be much better now! For me, the PR is ready for a review now.
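A rough sketch of the dispatch behaviour agreed on in the thread above, purely illustrative and not the actual `from_pretrained` internals: if the saved config carries a quantization section, loading transparently switches the 8-bit path back on. The `load_maybe_int8` helper name is made up for this example.

```python
from transformers import AutoConfig, AutoModelForCausalLM


def load_maybe_int8(path):
    # If the checkpoint was saved from an 8-bit model, its config records the
    # quantization settings; in that case reload it in 8-bit with a device map.
    config = AutoConfig.from_pretrained(path)
    if getattr(config, "quantization_config", None) is not None:
        return AutoModelForCausalLM.from_pretrained(path, load_in_8bit=True, device_map="auto")
    return AutoModelForCausalLM.from_pretrained(path)
```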