<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

<p align="center">
  <br>
  <img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400"/>
  <br>
</p>

# Diffusers

🤗 Diffusers is a library of state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you are looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. The library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over abstraction](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).

The library has three main components:

- State-of-the-art [diffusion pipelines](api/pipelines/overview) for inference with just a few lines of code.
- Interchangeable [noise schedulers](api/schedulers/overview) for balancing trade-offs between generation speed and quality.
- Pretrained [models](api/models) that can be used as building blocks and combined with schedulers to create your own end-to-end diffusion systems.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/tutorial_overview"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
      <p class="text-gray-700">Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you are using 🤗 Diffusers for the first time!</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./using-diffusers/loading_overview"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and use different training techniques.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual/philosophy"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">Understand why the library was designed the way it is, and learn more about the ethical guidelines and safety implementations for using the library.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./api/models"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Technical descriptions of how 🤗 Diffusers classes and methods work.</p>
    </a>
  </div>
</div>
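As a quick illustration of how the three components above fit together, here is a minimal, hedged sketch: the checkpoint and scheduler are only examples (any diffusers-compatible repository and compatible scheduler would work), not the only supported options.

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# Load a pretrained pipeline (example checkpoint used elsewhere in these docs).
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Schedulers are interchangeable: rebuild one from the pipeline's existing scheduler config.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# The pipeline wraps the pretrained model(s) and scheduler into one inference call.
image = pipe("an astronaut riding a horse on the moon", num_inference_steps=25).images[0]
image.save("astronaut.png")
```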
<!--Copyright 2024 Custom Diffusion authors The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Custom Diffusion training example

[Custom Diffusion](https://arxiv.org/abs/2212.04488) is a method for customizing text-to-image models like Stable Diffusion given just a few (4~5) images of a subject. The `train_custom_diffusion.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.

This training example was contributed by [Nupur Kumari](https://nupurkmr9.github.io/) (one of the authors of Custom Diffusion).

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important** To successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, since we update the example scripts frequently and they have some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion):

```bash
cd examples/custom_diffusion
```

Now run:

```bash
pip install -r requirements.txt
pip install clip-retrieval
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment:

```bash
accelerate config
```

Or, to use a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or, if your environment doesn't support an interactive shell (e.g., a Jupyter notebook):

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

### Cat example 😺

Now let's get the dataset. Download the dataset from [here](https://www.cs.cmu.edu/~custom-diffusion/assets/data.zip) and unzip it. To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide.

We also collect 200 real images using `clip-retrieval` and combine them with the target images in the training dataset as regularization. This prevents overfitting to the given target images. The following flags enable `prior_preservation` and `real_prior` regularization with `prior_loss_weight=1.`. `class_prompt` should be the category name, the same as the target images. The collected real images have text captions similar to `class_prompt`. The retrieved images are saved in `class_data_dir`. You can disable `real_prior` to use generated images as regularization instead.

To collect the real images, run this command first before training:

```bash
pip install clip-retrieval
python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200
```

**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**

The script creates and saves model checkpoints and a `pytorch_custom_diffusion_weights.bin` file in your repository.
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"
export INSTANCE_DIR="./data/cat"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="cat" --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --lr_warmup_steps=0 \
  --max_train_steps=250 \
  --scale_lr --hflip \
  --modifier_token "<new1>" \
  --push_to_hub
```

**Use `--enable_xformers_memory_efficient_attention` for faster training with lower VRAM requirements (16GB per GPU). Follow [this guide](https://github.com/facebookresearch/xformers) for installation instructions.**

To track your experiments with Weights and Biases (`wandb`) and to save intermediate results (which we strongly recommend), follow these steps:

* Install `wandb`: `pip install wandb`.
* Log in: `wandb login`.
* Then, when launching training, specify a `validation_prompt` and set `report_to` to `wandb`. You can also configure the following related arguments:
    * `num_validation_images`
    * `validation_steps`

```bash
accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="cat" --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --lr_warmup_steps=0 \
  --max_train_steps=250 \
  --scale_lr --hflip \
  --modifier_token "<new1>" \
  --validation_prompt="<new1> cat sitting in a bucket" \
  --report_to="wandb" \
  --push_to_hub
```

Here is an example [Weights and Biases page](https://wandb.ai/sayakpaul/custom-diffusion/runs/26ghrcau) where you can check out the intermediate results along with several other training details.

If you specify `--push_to_hub`, the learned parameters are pushed to a repository on the Hugging Face Hub. Here is an [example repository](https://huggingface.co/sayakpaul/custom-diffusion-cat).

### Training on multiple concepts 🐱🪵

Similar to [this](https://github.com/ShivamShrirao/diffusers/blob/main/examples/dreambooth/train_dreambooth.py), provide a [json](https://github.com/adobe-research/custom-diffusion/blob/main/assets/concept_list.json) file with the information about each concept.

To collect the real images, run this command for each concept in the json file:

```bash
pip install clip-retrieval
python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200
```

And then we're ready to start training!

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --output_dir=$OUTPUT_DIR \
  --concepts_list=./concept_list.json \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --num_class_images=200 \
  --scale_lr --hflip \
  --modifier_token "<new1>+<new2>" \
  --push_to_hub
```

Here is an example [Weights and Biases page](https://wandb.ai/sayakpaul/custom-diffusion/runs/3990tzkg) where you can check out the intermediate results along with other training details.

### Training on human faces

For fine-tuning on human faces, we found the following configuration to work better: `learning_rate=5e-6`, `max_train_steps=1000 to 2000`, and `freeze_model=crossattn`, with at least 15-20 images.

To collect the real images, run this command first before training:

```bash
pip install clip-retrieval
python retrieve.py --class_prompt person --class_data_dir real_reg/samples_person --num_class_images 200
```

Now launch training!
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"
export INSTANCE_DIR="path-to-images"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_person/ \
  --with_prior_preservation --real_prior --prior_loss_weight=1.0 \
  --class_prompt="person" --num_class_images=200 \
  --instance_prompt="photo of a <new1> person" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=5e-6 \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --scale_lr --hflip --noaug \
  --freeze_model crossattn \
  --modifier_token "<new1>" \
  --enable_xformers_memory_efficient_attention \
  --push_to_hub
```

## Inference

Once you have trained a model using the command above, you can run inference with the snippet below. Make sure to include the `modifier token` (e.g. \<new1\> in the example above) in your prompt.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")

image = pipe(
    "<new1> cat sitting in a bucket",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("cat.png")
```

You can directly load these parameters from a Hub repository:

```python
import torch
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline

model_id = "sayakpaul/custom-diffusion-cat"
card = RepoCard.load(model_id)
base_model_id = card.data.to_dict()["base_model"]

pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion(model_id, weight_name="<new1>.bin")

image = pipe(
    "<new1> cat sitting in a bucket",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("cat.png")
```

Here is an example of performing inference with multiple concepts:

```python
import torch
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline

model_id = "sayakpaul/custom-diffusion-cat-wooden-pot"
card = RepoCard.load(model_id)
base_model_id = card.data.to_dict()["base_model"]

pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion(model_id, weight_name="<new1>.bin")
pipe.load_textual_inversion(model_id, weight_name="<new2>.bin")

image = pipe(
    "the <new1> cat sculpture in the style of a <new2> wooden pot",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("multi-subject.png")
```

Here, "cat" and "wooden pot" refer to the multiple concepts.

### Inference from a training checkpoint

If you used the `--checkpointing_steps` argument, you can also perform inference from one of the complete checkpoints saved during the training process.

## Set grads to None

To save even more memory, pass the `--set_grads_to_none` argument to the script. This sets the gradients to None instead of zero. However, be aware that it changes certain behaviors, so if you run into any issues, remove this argument. A short sketch of the underlying PyTorch behavior is shown after this guide. More information: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html

## Experiment results

For details on our experiments, please refer to [our webpage](https://www.cs.cmu.edu/~custom-diffusion/).
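For reference on the `--set_grads_to_none` flag mentioned above, here is a minimal, hedged sketch (the linear model and optimizer are placeholders, not part of the training script) of what the flag maps to in PyTorch: `zero_grad(set_to_none=True)` drops the gradient tensors entirely instead of filling them with zeros, which is where the memory saving comes from.

```python
import torch

# Placeholder model/optimizer purely for illustration.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

loss = model(torch.randn(2, 4)).sum()
loss.backward()
print(model.weight.grad is None)   # False: backward() populated the gradients

# Roughly what training does *without* --set_grads_to_none: grads become zero tensors.
optimizer.zero_grad(set_to_none=False)
print(model.weight.grad)           # tensor of zeros, memory still allocated

# Roughly what training does *with* --set_grads_to_none: grad tensors are released.
optimizer.zero_grad(set_to_none=True)
print(model.weight.grad is None)   # True: memory is freed until the next backward()
```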
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-guided image-to-image generation

[[open-in-colab]]

The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images.

Before you begin, make sure you have all the necessary libraries installed:

```bash
!pip install diffusers transformers ftfy accelerate
```

Get started by creating a [`StableDiffusionImg2ImgPipeline`] with a pretrained Stable Diffusion model like [`nitrosocke/Ghibli-Diffusion`](https://huggingface.co/nitrosocke/Ghibli-Diffusion).

```python
import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16).to(
    device
)
```

Download and preprocess an initial image so you can pass it to the pipeline:

```python
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image.thumbnail((768, 768))
init_image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_8_output_0.jpeg"/>
</div>

<Tip>

💡 `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values approaching 1.0 allow for lots of variation but will also produce images that are not semantically consistent with the input.

</Tip>

Define the prompt (for this checkpoint finetuned on Ghibli-style art, you need to prefix the prompt with the `ghibli style` token) and run the pipeline:

```python
prompt = "ghibli style, a fantasy landscape with castles"
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ghibli-castles.png"/>
</div>

You can also experiment with a different scheduler to see how that affects the output:

```python
from diffusers import LMSDiscreteScheduler

lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lms-ghibli.png"/>
</div>

Check out the Space below and try generating images with different `strength` values. You'll notice that lower `strength` values produce images that are more similar to the original image. Feel free to also switch the scheduler to the [`LMSDiscreteScheduler`] and see how that affects the output.

<iframe
	src="https://stevhliu-ghibli-img2img.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe>
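To make the effect of `strength` concrete, here is a small, hedged sketch that continues the snippet above (it assumes `pipe`, `prompt`, `init_image`, and `device` are still in scope) and generates one image per `strength` value with a fixed seed, so the only thing that changes between outputs is how much noise is added to the input:

```python
# Sweep over a few strength values; a fixed seed keeps everything else constant.
for strength in (0.3, 0.5, 0.75, 0.9):
    generator = torch.Generator(device=device).manual_seed(1024)
    image = pipe(
        prompt=prompt,
        image=init_image,
        strength=strength,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    # Lower strength stays closer to init_image; higher strength drifts further from it.
    image.save(f"ghibli-strength-{strength}.png")
```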
#!/usr/bin/env python # coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and import argparse import copy import itertools import logging import math import os import random import re import shutil from contextlib import nullcontext from pathlib import Path from typing import List, Optional, Union import numpy as np import torch import torch.utils.checkpoint import transformers from accelerate import Accelerator from accelerate.logging import get_logger from accelerate.utils import DistributedDataParallelKwargs, ProjectConfiguration, set_seed from huggingface_hub import create_repo, upload_folder from huggingface_hub.utils import insecure_hashlib from peft import LoraConfig, set_peft_model_state_dict from peft.utils import get_peft_model_state_dict from PIL import Image from PIL.ImageOps import exif_transpose from safetensors.torch import save_file from torch.utils.data import Dataset from torchvision import transforms from torchvision.transforms.functional import crop from tqdm.auto import tqdm from transformers import CLIPTokenizer, PretrainedConfig, T5TokenizerFast import diffusers from diffusers import ( AutoencoderKL, FlowMatchEulerDiscreteScheduler, FluxPipeline, FluxTransformer2DModel, ) from diffusers.optimization import get_scheduler from diffusers.training_utils import ( _set_state_dict_into_text_encoder, cast_training_params, compute_density_for_timestep_sampling, compute_loss_weighting_for_sd3, free_memory, ) from diffusers.utils import ( check_min_version, convert_unet_state_dict_to_peft, is_wandb_available, ) from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card from diffusers.utils.torch_utils import is_compiled_module if is_wandb_available(): import wandb # Will error if the minimal version of diffusers is not installed. Remove at your own risks. check_min_version("0.33.0.dev0") logger = get_logger(__name__) def save_model_card( repo_id: str, images=None, base_model: str = None, train_text_encoder=False, train_text_encoder_ti=False, enable_t5_ti=False, pure_textual_inversion=False, token_abstraction_dict=None, instance_prompt=None, validation_prompt=None, repo_folder=None, ): widget_dict = [] trigger_str = f"You should use {instance_prompt} to trigger the image generation." 
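    # Collect any validation images into the model card's gallery widget and assemble the
    # "how to use" loading snippet below; which loading code gets emitted depends on whether
    # pure textual inversion and/or T5 pivotal tuning were used during training.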
if images is not None: for i, image in enumerate(images): image.save(os.path.join(repo_folder, f"image_{i}.png")) widget_dict.append( {"text": validation_prompt if validation_prompt else " ", "output": {"url": f"image_{i}.png"}} ) diffusers_load_lora = "" diffusers_imports_pivotal = "" diffusers_example_pivotal = "" if not pure_textual_inversion: diffusers_load_lora = ( f"""pipeline.load_lora_weights('{repo_id}', weight_name='pytorch_lora_weights.safetensors')""" ) if train_text_encoder_ti: embeddings_filename = f"{repo_folder}_emb" ti_keys = ", ".join(f'"{match}"' for match in re.findall(r"<s\d+>", instance_prompt)) trigger_str = ( "To trigger image generation of trained concept(or concepts) replace each concept identifier " "in you prompt with the new inserted tokens:\n" ) diffusers_imports_pivotal = """from huggingface_hub import hf_hub_download from safetensors.torch import load_file """ if enable_t5_ti: diffusers_example_pivotal = f"""embedding_path = hf_hub_download(repo_id='{repo_id}', filename='{embeddings_filename}.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=[{ti_keys}], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["t5"], token=[{ti_keys}], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) """ else: diffusers_example_pivotal = f"""embedding_path = hf_hub_download(repo_id='{repo_id}', filename='{embeddings_filename}.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=[{ti_keys}], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) """ if token_abstraction_dict: for key, value in token_abstraction_dict.items(): tokens = "".join(value) trigger_str += f""" to trigger concept `{key}` → use `{tokens}` in your prompt \n """ model_description = f""" # Flux DreamBooth LoRA - {repo_id} <Gallery /> ## Model description These are {repo_id} DreamBooth LoRA weights for {base_model}. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md). Was LoRA for the text encoder enabled? {train_text_encoder}. Pivotal tuning was enabled: {train_text_encoder_ti}. ## Trigger words {trigger_str} ## Download model [Download the *.safetensors LoRA]({repo_id}/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch {diffusers_imports_pivotal} pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda') {diffusers_load_lora} {diffusers_example_pivotal} image = pipeline('{validation_prompt if validation_prompt else instance_prompt}').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). 
""" model_card = load_or_create_model_card( repo_id_or_path=repo_id, from_training=True, license="other", base_model=base_model, prompt=instance_prompt, model_description=model_description, widget=widget_dict, ) tags = [ "text-to-image", "diffusers-training", "diffusers", "lora", "flux", "flux-diffusers", "template:sd-lora", ] model_card = populate_model_card(model_card, tags=tags) model_card.save(os.path.join(repo_folder, "README.md")) def load_text_encoders(class_one, class_two): text_encoder_one = class_one.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, variant=args.variant ) text_encoder_two = class_two.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision, variant=args.variant ) return text_encoder_one, text_encoder_two def log_validation( pipeline, args, accelerator, pipeline_args, epoch, torch_dtype, is_final_validation=False, ): logger.info( f"Running validation... \n Generating {args.num_validation_images} images with prompt:" f" {args.validation_prompt}." ) pipeline = pipeline.to(accelerator.device, dtype=torch_dtype) pipeline.set_progress_bar_config(disable=True) # run inference generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None autocast_ctx = nullcontext() with autocast_ctx: images = [pipeline(**pipeline_args, generator=generator).images[0] for _ in range(args.num_validation_images)] for tracker in accelerator.trackers: phase_name = "test" if is_final_validation else "validation" if tracker.name == "tensorboard": np_images = np.stack([np.asarray(img) for img in images]) tracker.writer.add_images(phase_name, np_images, epoch, dataformats="NHWC") if tracker.name == "wandb": tracker.log( { phase_name: [ wandb.Image(image, caption=f"{i}: {args.validation_prompt}") for i, image in enumerate(images) ] } ) del pipeline free_memory() return images def import_model_class_from_model_name_or_path( pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder" ): text_encoder_config = PretrainedConfig.from_pretrained( pretrained_model_name_or_path, subfolder=subfolder, revision=revision ) model_class = text_encoder_config.architectures[0] if model_class == "CLIPTextModel": from transformers import CLIPTextModel return CLIPTextModel elif model_class == "T5EncoderModel": from transformers import T5EncoderModel return T5EncoderModel else: raise ValueError(f"{model_class} is not supported.") def parse_args(input_args=None): parser = argparse.ArgumentParser(description="Simple example of a training script.") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--variant", type=str, default=None, help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16", ) parser.add_argument( "--dataset_name", type=str, default=None, help=( "The name of the Dataset (from the HuggingFace hub) containing the training data of instance images (could be your own, possibly private," " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," " or to a folder containing files that 🤗 Datasets can understand." 
), ) parser.add_argument( "--dataset_config_name", type=str, default=None, help="The config of the Dataset, leave as None if there's only one config.", ) parser.add_argument( "--instance_data_dir", type=str, default=None, help=("A folder containing the training data. "), ) parser.add_argument( "--cache_dir", type=str, default=None, help="The directory where the downloaded models and datasets will be stored.", ) parser.add_argument( "--image_column", type=str, default="image", help="The column of the dataset containing the target image. By " "default, the standard Image Dataset maps out 'file_name' " "to 'image'.", ) parser.add_argument( "--caption_column", type=str, default=None, help="The column of the dataset containing the instance prompt for each image", ) parser.add_argument("--repeats", type=int, default=1, help="How many times to repeat the training data.") parser.add_argument( "--class_data_dir", type=str, default=None, required=False, help="A folder containing the training data of class images.", ) parser.add_argument( "--instance_prompt", type=str, default=None, required=True, help="The prompt with identifier specifying the instance, e.g. 'photo of a TOK dog', 'in the style of TOK'", ) parser.add_argument( "--token_abstraction", type=str, default="TOK", help="identifier specifying the instance(or instances) as used in instance_prompt, validation prompt, " "captions - e.g. TOK. To use multiple identifiers, please specify them in a comma separated string - e.g. " "'TOK,TOK2,TOK3' etc.", ) parser.add_argument( "--num_new_tokens_per_abstraction", type=int, default=None, help="number of new tokens inserted to the tokenizers per token_abstraction identifier when " "--train_text_encoder_ti = True. By default, each --token_abstraction (e.g. TOK) is mapped to 2 new " "tokens - <si><si+1> ", ) parser.add_argument( "--initializer_concept", type=str, default=None, help="the concept to use to initialize the new inserted tokens when training with " "--train_text_encoder_ti = True. By default, new tokens (<si><si+1>) are initialized with random value. " "Alternatively, you could specify a different word/words whos value will be used as the starting point for the new inserted tokens. " "--num_new_tokens_per_abstraction is ignored when initializer_concept is provided", ) parser.add_argument( "--class_prompt", type=str, default=None, help="The prompt to specify images in the same class as provided instance images.", ) parser.add_argument( "--max_sequence_length", type=int, default=512, help="Maximum sequence length to use with with the T5 text encoder", ) parser.add_argument( "--validation_prompt", type=str, default=None, help="A prompt that is used during validation to verify that the model is learning.", ) parser.add_argument( "--num_validation_images", type=int, default=4, help="Number of images that should be generated during validation with `validation_prompt`.", ) parser.add_argument( "--validation_epochs", type=int, default=50, help=( "Run dreambooth validation every X epochs. Dreambooth validation consists of running the prompt" " `args.validation_prompt` multiple times: `args.num_validation_images`." 
), ) parser.add_argument( "--rank", type=int, default=4, help=("The dimension of the LoRA update matrices."), ) parser.add_argument( "--with_prior_preservation", default=False, action="store_true", help="Flag to add prior preservation loss.", ) parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") parser.add_argument( "--num_class_images", type=int, default=100, help=( "Minimal class images for prior preservation loss. If there are not enough images already present in" " class_data_dir, additional images will be sampled with class_prompt." ), ) parser.add_argument( "--output_dir", type=str, default="flux-dreambooth-lora", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=512, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--center_crop", default=False, action="store_true", help=( "Whether to center crop the input images to the resolution. If not set, the images will be randomly" " cropped. The images will be resized to the resolution first before cropping." ), ) parser.add_argument( "--random_flip", action="store_true", help="whether to randomly flip images horizontally", ) parser.add_argument( "--train_text_encoder", action="store_true", help="Whether to train the text encoder. If set, the text encoder should be float32 precision.", ) parser.add_argument( "--train_text_encoder_ti", action="store_true", help=("Whether to use pivotal tuning / textual inversion"), ) parser.add_argument( "--enable_t5_ti", action="store_true", help=( "Whether to use pivotal tuning / textual inversion for the T5 encoder as well (in addition to CLIP encoder)" ), ) parser.add_argument( "--train_text_encoder_ti_frac", type=float, default=0.5, help=("The percentage of epochs to perform textual inversion"), ) parser.add_argument( "--train_text_encoder_frac", type=float, default=1.0, help=("The percentage of epochs to perform text encoder tuning"), ) parser.add_argument( "--train_transformer_frac", type=float, default=1.0, help=("The percentage of epochs to perform transformer tuning"), ) parser.add_argument( "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." ) parser.add_argument( "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." ) parser.add_argument("--num_train_epochs", type=int, default=1) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--checkpointing_steps", type=int, default=500, help=( "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" " training using `--resume_from_checkpoint`." ), ) parser.add_argument( "--checkpoints_total_limit", type=int, default=None, help=("Max number of checkpoints to store."), ) parser.add_argument( "--resume_from_checkpoint", type=str, default=None, help=( "Whether training should be resumed from a previous checkpoint. Use a path saved by" ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' 
), ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--learning_rate", type=float, default=1e-4, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--guidance_scale", type=float, default=3.5, help="the FLUX.1 dev variant is a guidance distilled model", ) parser.add_argument( "--text_encoder_lr", type=float, default=5e-6, help="Text encoder learning rate to use.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--lr_num_cycles", type=int, default=1, help="Number of hard resets of the lr in cosine_with_restarts scheduler.", ) parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") parser.add_argument( "--dataloader_num_workers", type=int, default=0, help=( "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." ), ) parser.add_argument( "--weighting_scheme", type=str, default="none", choices=["sigma_sqrt", "logit_normal", "mode", "cosmap", "none"], help=('We default to the "none" weighting scheme for uniform sampling and uniform loss'), ) parser.add_argument( "--logit_mean", type=float, default=0.0, help="mean to use when using the `'logit_normal'` weighting scheme." ) parser.add_argument( "--logit_std", type=float, default=1.0, help="std to use when using the `'logit_normal'` weighting scheme." ) parser.add_argument( "--mode_scale", type=float, default=1.29, help="Scale of mode weighting scheme. Only effective when using the `'mode'` as the `weighting_scheme`.", ) parser.add_argument( "--optimizer", type=str, default="AdamW", help=('The optimizer type to use. Choose between ["AdamW", "prodigy"]'), ) parser.add_argument( "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes. Ignored if optimizer is not set to AdamW", ) parser.add_argument( "--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam and Prodigy optimizers." ) parser.add_argument( "--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam and Prodigy optimizers." ) parser.add_argument( "--prodigy_beta3", type=float, default=None, help="coefficients for computing the Prodigy stepsize using running averages. If set to None, " "uses the value of square root of beta2. 
Ignored if optimizer is adamW", ) parser.add_argument("--prodigy_decouple", type=bool, default=True, help="Use AdamW style decoupled weight decay") parser.add_argument( "--adam_weight_decay", type=float, default=1e-04, help="Weight decay to use for transformer params" ) parser.add_argument( "--adam_weight_decay_text_encoder", type=float, default=1e-03, help="Weight decay to use for text_encoder" ) parser.add_argument( "--lora_layers", type=str, default=None, help=( "The transformer modules to apply LoRA training on. Please specify the layers in a comma seperated. " 'E.g. - "to_k,to_q,to_v,to_out.0" will result in lora training of attention layers only. For more examples refer to https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/README_flux.md' ), ) parser.add_argument( "--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer and Prodigy optimizers.", ) parser.add_argument( "--prodigy_use_bias_correction", type=bool, default=True, help="Turn on Adam's bias correction. True by default. Ignored if optimizer is adamW", ) parser.add_argument( "--prodigy_safeguard_warmup", type=bool, default=True, help="Remove lr from the denominator of D estimate to avoid issues during warm-up stage. True by default. " "Ignored if optimizer is adamW", ) parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." ), ) parser.add_argument( "--allow_tf32", action="store_true", help=( "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" ), ) parser.add_argument( "--cache_latents", action="store_true", default=False, help="Cache the VAE latents", ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' ), ) parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." ), ) parser.add_argument( "--prior_generation_precision", type=str, default=None, choices=["no", "fp32", "fp16", "bf16"], help=( "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." 
), ) parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") if input_args is not None: args = parser.parse_args(input_args) else: args = parser.parse_args() if args.dataset_name is None and args.instance_data_dir is None: raise ValueError("Specify either `--dataset_name` or `--instance_data_dir`") if args.dataset_name is not None and args.instance_data_dir is not None: raise ValueError("Specify only one of `--dataset_name` or `--instance_data_dir`") if args.train_text_encoder and args.train_text_encoder_ti: raise ValueError( "Specify only one of `--train_text_encoder` or `--train_text_encoder_ti. " "For full LoRA text encoder training check --train_text_encoder, for textual " "inversion training check `--train_text_encoder_ti`" ) if args.train_transformer_frac < 1 and not args.train_text_encoder_ti: raise ValueError( "--train_transformer_frac must be == 1 if text_encoder training / textual inversion is not enabled." ) if args.train_transformer_frac < 1 and args.train_text_encoder_ti_frac < 1: raise ValueError( "--train_transformer_frac and --train_text_encoder_ti_frac are identical and smaller than 1. " "This contradicts with --max_train_steps, please specify different values or set both to 1." ) if args.enable_t5_ti and not args.train_text_encoder_ti: logger.warning("You need not use --enable_t5_ti without --train_text_encoder_ti.") if args.train_text_encoder_ti and args.initializer_concept and args.num_new_tokens_per_abstraction: logger.warning( "When specifying --initializer_concept, the number of tokens per abstraction is detrimned " "by the initializer token. --num_new_tokens_per_abstraction will be ignored" ) env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) if env_local_rank != -1 and env_local_rank != args.local_rank: args.local_rank = env_local_rank if args.with_prior_preservation: if args.class_data_dir is None: raise ValueError("You must specify a data directory for class images.") if args.class_prompt is None: raise ValueError("You must specify prompt for class images.") else: if args.class_data_dir is not None: logger.warning("You need not use --class_data_dir without --with_prior_preservation.") if args.class_prompt is not None: logger.warning("You need not use --class_prompt without --with_prior_preservation.") return args # Modified from https://github.com/replicate/cog-sdxl/blob/main/dataset_and_utils.py class TokenEmbeddingsHandler: def __init__(self, text_encoders, tokenizers): self.text_encoders = text_encoders self.tokenizers = tokenizers self.train_ids: Optional[torch.Tensor] = None self.train_ids_t5: Optional[torch.Tensor] = None self.inserting_toks: Optional[List[str]] = None self.embeddings_settings = {} def initialize_new_tokens(self, inserting_toks: List[str]): idx = 0 for tokenizer, text_encoder in zip(self.tokenizers, self.text_encoders): assert isinstance(inserting_toks, list), "inserting_toks should be a list of strings." assert all( isinstance(tok, str) for tok in inserting_toks ), "All elements in inserting_toks should be strings." 
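            # Register the new placeholder tokens with this tokenizer, grow the matching text
            # encoder's embedding matrix, and initialize the new rows (randomly, or from the
            # optional initializer concept), recording which embedding indices may be updated.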
self.inserting_toks = inserting_toks special_tokens_dict = {"additional_special_tokens": self.inserting_toks} tokenizer.add_special_tokens(special_tokens_dict) # Resize the token embeddings as we are adding new special tokens to the tokenizer text_encoder.resize_token_embeddings(len(tokenizer)) # Convert the token abstractions to ids if idx == 0: self.train_ids = tokenizer.convert_tokens_to_ids(self.inserting_toks) else: self.train_ids_t5 = tokenizer.convert_tokens_to_ids(self.inserting_toks) # random initialization of new tokens embeds = ( text_encoder.text_model.embeddings.token_embedding if idx == 0 else text_encoder.encoder.embed_tokens ) std_token_embedding = embeds.weight.data.std() logger.info(f"{idx} text encoder's std_token_embedding: {std_token_embedding}") train_ids = self.train_ids if idx == 0 else self.train_ids_t5 # if initializer_concept are not provided, token embeddings are initialized randomly if args.initializer_concept is None: hidden_size = ( text_encoder.text_model.config.hidden_size if idx == 0 else text_encoder.encoder.config.hidden_size ) embeds.weight.data[train_ids] = ( torch.randn(len(train_ids), hidden_size).to(device=self.device).to(dtype=self.dtype) * std_token_embedding ) else: # Convert the initializer_token, placeholder_token to ids initializer_token_ids = tokenizer.encode(args.initializer_concept, add_special_tokens=False) for token_idx, token_id in enumerate(train_ids): embeds.weight.data[token_id] = (embeds.weight.data)[ initializer_token_ids[token_idx % len(initializer_token_ids)] ].clone() self.embeddings_settings[f"original_embeddings_{idx}"] = embeds.weight.data.clone() self.embeddings_settings[f"std_token_embedding_{idx}"] = std_token_embedding # makes sure we don't update any embedding weights besides the newly added token index_no_updates = torch.ones((len(tokenizer),), dtype=torch.bool) index_no_updates[train_ids] = False self.embeddings_settings[f"index_no_updates_{idx}"] = index_no_updates logger.info(self.embeddings_settings[f"index_no_updates_{idx}"].shape) idx += 1 def save_embeddings(self, file_path: str): assert self.train_ids is not None, "Initialize new tokens before saving embeddings." tensors = {} # text_encoder_one, idx==0 - CLIP ViT-L/14, text_encoder_two, idx==1 - T5 xxl idx_to_text_encoder_name = {0: "clip_l", 1: "t5"} for idx, text_encoder in enumerate(self.text_encoders): train_ids = self.train_ids if idx == 0 else self.train_ids_t5 embeds = ( text_encoder.text_model.embeddings.token_embedding if idx == 0 else text_encoder.encoder.embed_tokens ) assert embeds.weight.data.shape[0] == len(self.tokenizers[idx]), "Tokenizers should be the same." 
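            # Keep only the rows for the newly inserted tokens; the rest of the embedding
            # matrix is left untouched and is not serialized.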
new_token_embeddings = embeds.weight.data[train_ids] # New tokens for each text encoder are saved under "clip_l" (for text_encoder 0), # Note: When loading with diffusers, any name can work - simply specify in inference tensors[idx_to_text_encoder_name[idx]] = new_token_embeddings # tensors[f"text_encoders_{idx}"] = new_token_embeddings save_file(tensors, file_path) @property def dtype(self): return self.text_encoders[0].dtype @property def device(self): return self.text_encoders[0].device @torch.no_grad() def retract_embeddings(self): for idx, text_encoder in enumerate(self.text_encoders): embeds = ( text_encoder.text_model.embeddings.token_embedding if idx == 0 else text_encoder.encoder.embed_tokens ) index_no_updates = self.embeddings_settings[f"index_no_updates_{idx}"] embeds.weight.data[index_no_updates] = ( self.embeddings_settings[f"original_embeddings_{idx}"][index_no_updates] .to(device=text_encoder.device) .to(dtype=text_encoder.dtype) ) # for the parts that were updated, we need to normalize them # to have the same std as before std_token_embedding = self.embeddings_settings[f"std_token_embedding_{idx}"] index_updates = ~index_no_updates new_embeddings = embeds.weight.data[index_updates] off_ratio = std_token_embedding / new_embeddings.std() new_embeddings = new_embeddings * (off_ratio**0.1) embeds.weight.data[index_updates] = new_embeddings class DreamBoothDataset(Dataset): """ A dataset to prepare the instance and class images with the prompts for fine-tuning the model. It pre-processes the images. """ def __init__( self, instance_data_root, instance_prompt, class_prompt, train_text_encoder_ti, token_abstraction_dict=None, # token mapping for textual inversion class_data_root=None, class_num=None, size=1024, repeats=1, center_crop=False, ): self.size = size self.center_crop = center_crop self.instance_prompt = instance_prompt self.custom_instance_prompts = None self.class_prompt = class_prompt self.token_abstraction_dict = token_abstraction_dict self.train_text_encoder_ti = train_text_encoder_ti # if --dataset_name is provided or a metadata jsonl file is provided in the local --instance_data directory, # we load the training data using load_dataset if args.dataset_name is not None: try: from datasets import load_dataset except ImportError: raise ImportError( "You are trying to load your data using the datasets library. If you wish to train using custom " "captions please install the datasets library: `pip install datasets`. If you wish to load a " "local folder containing images only, specify --instance_data_dir instead." ) # Downloading and loading a dataset from the hub. # See more about loading custom images at # https://huggingface.co/docs/datasets/v2.0.0/en/dataset_script dataset = load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, ) # Preprocessing the datasets. column_names = dataset["train"].column_names # 6. Get the column names for input/target. if args.image_column is None: image_column = column_names[0] logger.info(f"image column defaulting to {image_column}") else: image_column = args.image_column if image_column not in column_names: raise ValueError( f"`--image_column` value '{args.image_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" ) instance_images = dataset["train"][image_column] if args.caption_column is None: logger.info( "No caption column provided, defaulting to instance_prompt for all images. 
If your dataset " "contains captions/prompts for the images, make sure to specify the " "column as --caption_column" ) self.custom_instance_prompts = None else: if args.caption_column not in column_names: raise ValueError( f"`--caption_column` value '{args.caption_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" ) custom_instance_prompts = dataset["train"][args.caption_column] # create final list of captions according to --repeats self.custom_instance_prompts = [] for caption in custom_instance_prompts: self.custom_instance_prompts.extend(itertools.repeat(caption, repeats)) else: self.instance_data_root = Path(instance_data_root) if not self.instance_data_root.exists(): raise ValueError("Instance images root doesn't exists.") instance_images = [Image.open(path) for path in list(Path(instance_data_root).iterdir())] self.custom_instance_prompts = None self.instance_images = [] for img in instance_images: self.instance_images.extend(itertools.repeat(img, repeats)) self.pixel_values = [] train_resize = transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR) train_crop = transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size) train_flip = transforms.RandomHorizontalFlip(p=1.0) train_transforms = transforms.Compose( [ transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] ) for image in self.instance_images: image = exif_transpose(image) if not image.mode == "RGB": image = image.convert("RGB") image = train_resize(image) if args.random_flip and random.random() < 0.5: # flip image = train_flip(image) if args.center_crop: y1 = max(0, int(round((image.height - args.resolution) / 2.0))) x1 = max(0, int(round((image.width - args.resolution) / 2.0))) image = train_crop(image) else: y1, x1, h, w = train_crop.get_params(image, (args.resolution, args.resolution)) image = crop(image, y1, x1, h, w) image = train_transforms(image) self.pixel_values.append(image) self.num_instance_images = len(self.instance_images) self._length = self.num_instance_images if class_data_root is not None: self.class_data_root = Path(class_data_root) self.class_data_root.mkdir(parents=True, exist_ok=True) self.class_images_path = list(self.class_data_root.iterdir()) if class_num is not None: self.num_class_images = min(len(self.class_images_path), class_num) else: self.num_class_images = len(self.class_images_path) self._length = max(self.num_class_images, self.num_instance_images) else: self.class_data_root = None self.image_transforms = transforms.Compose( [ transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] ) def __len__(self): return self._length def __getitem__(self, index): example = {} instance_image = self.pixel_values[index % self.num_instance_images] example["instance_images"] = instance_image if self.custom_instance_prompts: caption = self.custom_instance_prompts[index % self.num_instance_images] if caption: if self.train_text_encoder_ti: # replace instances of --token_abstraction in caption with the new tokens: "<si><si+1>" etc. 
for token_abs, token_replacement in self.token_abstraction_dict.items(): caption = caption.replace(token_abs, "".join(token_replacement)) example["instance_prompt"] = caption else: example["instance_prompt"] = self.instance_prompt else: # the given instance prompt is used for all images example["instance_prompt"] = self.instance_prompt if self.class_data_root: class_image = Image.open(self.class_images_path[index % self.num_class_images]) class_image = exif_transpose(class_image) if not class_image.mode == "RGB": class_image = class_image.convert("RGB") example["class_images"] = self.image_transforms(class_image) example["class_prompt"] = self.class_prompt return example def collate_fn(examples, with_prior_preservation=False): pixel_values = [example["instance_images"] for example in examples] prompts = [example["instance_prompt"] for example in examples] # Concat class and instance examples for prior preservation. # We do this to avoid doing two forward passes. if with_prior_preservation: pixel_values += [example["class_images"] for example in examples] prompts += [example["class_prompt"] for example in examples] pixel_values = torch.stack(pixel_values) pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() batch = {"pixel_values": pixel_values, "prompts": prompts} return batch class PromptDataset(Dataset): "A simple dataset to prepare the prompts to generate class images on multiple GPUs." def __init__(self, prompt, num_samples): self.prompt = prompt self.num_samples = num_samples def __len__(self): return self.num_samples def __getitem__(self, index): example = {} example["prompt"] = self.prompt example["index"] = index return example def tokenize_prompt(tokenizer, prompt, max_sequence_length, add_special_tokens=False): text_inputs = tokenizer( prompt, padding="max_length", max_length=max_sequence_length, truncation=True, return_length=False, return_overflowing_tokens=False, add_special_tokens=add_special_tokens, return_tensors="pt", ) text_input_ids = text_inputs.input_ids return text_input_ids def _get_t5_prompt_embeds( text_encoder, tokenizer, max_sequence_length=512, prompt=None, num_images_per_prompt=1, device=None, text_input_ids=None, ): prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) if tokenizer is not None: text_inputs = tokenizer( prompt, padding="max_length", max_length=max_sequence_length, truncation=True, return_length=False, return_overflowing_tokens=False, return_tensors="pt", ) text_input_ids = text_inputs.input_ids else: if text_input_ids is None: raise ValueError("text_input_ids must be provided when the tokenizer is not specified") prompt_embeds = text_encoder(text_input_ids.to(device))[0] dtype = text_encoder.dtype prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) _, seq_len, _ = prompt_embeds.shape # duplicate text embeddings and attention mask for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) return prompt_embeds def _get_clip_prompt_embeds( text_encoder, tokenizer, prompt: str, device=None, text_input_ids=None, num_images_per_prompt: int = 1, ): prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) if tokenizer is not None: text_inputs = tokenizer( prompt, padding="max_length", max_length=77, truncation=True, return_overflowing_tokens=False, return_length=False, return_tensors="pt", ) text_input_ids = 
text_inputs.input_ids else: if text_input_ids is None: raise ValueError("text_input_ids must be provided when the tokenizer is not specified") prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=False) # Use pooled output of CLIPTextModel prompt_embeds = prompt_embeds.pooler_output prompt_embeds = prompt_embeds.to(dtype=text_encoder.dtype, device=device) # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(batch_size * num_images_per_prompt, -1) return prompt_embeds def encode_prompt( text_encoders, tokenizers, prompt: str, max_sequence_length, device=None, num_images_per_prompt: int = 1, text_input_ids_list=None, ): prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) dtype = text_encoders[0].dtype pooled_prompt_embeds = _get_clip_prompt_embeds( text_encoder=text_encoders[0], tokenizer=tokenizers[0], prompt=prompt, device=device if device is not None else text_encoders[0].device, num_images_per_prompt=num_images_per_prompt, text_input_ids=text_input_ids_list[0] if text_input_ids_list is not None else None, ) prompt_embeds = _get_t5_prompt_embeds( text_encoder=text_encoders[1], tokenizer=tokenizers[1], max_sequence_length=max_sequence_length, prompt=prompt, num_images_per_prompt=num_images_per_prompt, device=device if device is not None else text_encoders[1].device, text_input_ids=text_input_ids_list[1] if text_input_ids_list is not None else None, ) text_ids = torch.zeros(batch_size, prompt_embeds.shape[1], 3).to(device=device, dtype=dtype) text_ids = text_ids.repeat(num_images_per_prompt, 1, 1) return prompt_embeds, pooled_prompt_embeds, text_ids # CustomFlowMatchEulerDiscreteScheduler was taken from ostris ai-toolkit trainer: # https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95 class CustomFlowMatchEulerDiscreteScheduler(FlowMatchEulerDiscreteScheduler): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) with torch.no_grad(): # create weights for timesteps num_timesteps = 1000 # generate the multiplier based on cosmap loss weighing # this is only used on linear timesteps for now # cosine map weighing is higher in the middle and lower at the ends # bot = 1 - 2 * self.sigmas + 2 * self.sigmas ** 2 # cosmap_weighing = 2 / (math.pi * bot) # sigma sqrt weighing is significantly higher at the end and lower at the beginning sigma_sqrt_weighing = (self.sigmas**-2.0).float() # clip at 1e4 (1e6 is too high) sigma_sqrt_weighing = torch.clamp(sigma_sqrt_weighing, max=1e4) # bring to a mean of 1 sigma_sqrt_weighing = sigma_sqrt_weighing / sigma_sqrt_weighing.mean() # Create linear timesteps from 1000 to 0 timesteps = torch.linspace(1000, 0, num_timesteps, device="cpu") self.linear_timesteps = timesteps # self.linear_timesteps_weights = cosmap_weighing self.linear_timesteps_weights = sigma_sqrt_weighing # self.sigmas = self.get_sigmas(timesteps, n_dim=1, dtype=torch.float32, device='cpu') pass def get_weights_for_timesteps(self, timesteps: torch.Tensor) -> torch.Tensor: # Get the indices of the timesteps step_indices = [(self.timesteps == t).nonzero().item() for t in timesteps] # Get the weights for the timesteps weights = self.linear_timesteps_weights[step_indices].flatten() return weights def get_sigmas(self, timesteps: torch.Tensor, n_dim, dtype, device) -> torch.Tensor: sigmas = self.sigmas.to(device=device, 
dtype=dtype) schedule_timesteps = self.timesteps.to(device) timesteps = timesteps.to(device) step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] sigma = sigmas[step_indices].flatten() while len(sigma.shape) < n_dim: sigma = sigma.unsqueeze(-1) return sigma def add_noise( self, original_samples: torch.Tensor, noise: torch.Tensor, timesteps: torch.Tensor, ) -> torch.Tensor: ## ref https://github.com/huggingface/diffusers/blob/fbe29c62984c33c6cf9cf7ad120a992fe6d20854/examples/dreambooth/train_dreambooth_sd3.py#L1578 ## Add noise according to flow matching. ## zt = (1 - texp) * x + texp * z1 # sigmas = get_sigmas(timesteps, n_dim=model_input.ndim, dtype=model_input.dtype) # noisy_model_input = (1.0 - sigmas) * model_input + sigmas * noise # timestep needs to be in [0, 1], we store them in [0, 1000] # noisy_sample = (1 - timestep) * latent + timestep * noise t_01 = (timesteps / 1000).to(original_samples.device) noisy_model_input = (1 - t_01) * original_samples + t_01 * noise # n_dim = original_samples.ndim # sigmas = self.get_sigmas(timesteps, n_dim, original_samples.dtype, original_samples.device) # noisy_model_input = (1.0 - sigmas) * original_samples + sigmas * noise return noisy_model_input def scale_model_input(self, sample: torch.Tensor, timestep: Union[float, torch.Tensor]) -> torch.Tensor: return sample def set_train_timesteps(self, num_timesteps, device, linear=False): if linear: timesteps = torch.linspace(1000, 0, num_timesteps, device=device) self.timesteps = timesteps return timesteps else: # distribute them closer to center. Inference distributes them as a bias toward first # Generate values from 0 to 1 t = torch.sigmoid(torch.randn((num_timesteps,), device=device)) # Scale and reverse the values to go from 1000 to 0 timesteps = (1 - t) * 1000 # Sort the timesteps in descending order timesteps, _ = torch.sort(timesteps, descending=True) self.timesteps = timesteps.to(device=device) return timesteps def main(args): if args.report_to == "wandb" and args.hub_token is not None: raise ValueError( "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." " Please use `huggingface-cli login` to authenticate with the Hub." ) if torch.backends.mps.is_available() and args.mixed_precision == "bf16": # due to pytorch#99272, MPS does not yet support bfloat16. raise ValueError( "Mixed precision training with bfloat16 is not supported on MPS. Please use fp16 (recommended) or fp32 instead." ) logging_dir = Path(args.output_dir, args.logging_dir) accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) kwargs = DistributedDataParallelKwargs(find_unused_parameters=True) accelerator = Accelerator( gradient_accumulation_steps=args.gradient_accumulation_steps, mixed_precision=args.mixed_precision, log_with=args.report_to, project_config=accelerator_project_config, kwargs_handlers=[kwargs], ) # Disable AMP for MPS. if torch.backends.mps.is_available(): accelerator.native_amp = False if args.report_to == "wandb": if not is_wandb_available(): raise ImportError("Make sure to install wandb if you want to use it for logging during training.") # Make one log on every process with the configuration for debugging. 
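    # Note (added comment): the basicConfig call below applies to every process; the
    # transformers/diffusers logger verbosity is then lowered to errors on all but the
    # local main process so that only one worker prints full training logs.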
logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger.info(accelerator.state, main_process_only=False) if accelerator.is_local_main_process: transformers.utils.logging.set_verbosity_warning() diffusers.utils.logging.set_verbosity_info() else: transformers.utils.logging.set_verbosity_error() diffusers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. if args.seed is not None: set_seed(args.seed) # Generate class images if prior preservation is enabled. if args.with_prior_preservation: class_images_dir = Path(args.class_data_dir) if not class_images_dir.exists(): class_images_dir.mkdir(parents=True) cur_class_images = len(list(class_images_dir.iterdir())) if cur_class_images < args.num_class_images: has_supported_fp16_accelerator = torch.cuda.is_available() or torch.backends.mps.is_available() torch_dtype = torch.float16 if has_supported_fp16_accelerator else torch.float32 if args.prior_generation_precision == "fp32": torch_dtype = torch.float32 elif args.prior_generation_precision == "fp16": torch_dtype = torch.float16 elif args.prior_generation_precision == "bf16": torch_dtype = torch.bfloat16 pipeline = FluxPipeline.from_pretrained( args.pretrained_model_name_or_path, torch_dtype=torch_dtype, revision=args.revision, variant=args.variant, ) pipeline.set_progress_bar_config(disable=True) num_new_images = args.num_class_images - cur_class_images logger.info(f"Number of class images to sample: {num_new_images}.") sample_dataset = PromptDataset(args.class_prompt, num_new_images) sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) sample_dataloader = accelerator.prepare(sample_dataloader) pipeline.to(accelerator.device) for example in tqdm( sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process ): images = pipeline(example["prompt"]).images for i, image in enumerate(images): hash_image = insecure_hashlib.sha1(image.tobytes()).hexdigest() image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" image.save(image_filename) del pipeline free_memory() # Handle the repository creation if accelerator.is_main_process: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) model_id = args.hub_model_id or Path(args.output_dir).name repo_id = None if args.push_to_hub: repo_id = create_repo( repo_id=model_id, exist_ok=True, ).repo_id # Load the tokenizers tokenizer_one = CLIPTokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, ) tokenizer_two = T5TokenizerFast.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, ) # import correct text encoder classes text_encoder_cls_one = import_model_class_from_model_name_or_path( args.pretrained_model_name_or_path, args.revision ) text_encoder_cls_two = import_model_class_from_model_name_or_path( args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" ) # Load scheduler and models noise_scheduler = CustomFlowMatchEulerDiscreteScheduler.from_pretrained( args.pretrained_model_name_or_path, subfolder="scheduler" ) noise_scheduler_copy = copy.deepcopy(noise_scheduler) text_encoder_one, text_encoder_two = load_text_encoders(text_encoder_cls_one, text_encoder_cls_two) vae = AutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, 
variant=args.variant, ) transformer = FluxTransformer2DModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="transformer", revision=args.revision, variant=args.variant ) if args.train_text_encoder_ti: # we parse the provided token identifier (or identifiers) into a list. s.t. - "TOK" -> ["TOK"], "TOK, # TOK2" -> ["TOK", "TOK2"] etc. token_abstraction_list = [place_holder.strip() for place_holder in re.split(r",\s*", args.token_abstraction)] logger.info(f"list of token identifiers: {token_abstraction_list}") if args.initializer_concept is None: num_new_tokens_per_abstraction = ( 2 if args.num_new_tokens_per_abstraction is None else args.num_new_tokens_per_abstraction ) # if args.initializer_concept is provided, we ignore args.num_new_tokens_per_abstraction else: token_ids = tokenizer_one.encode(args.initializer_concept, add_special_tokens=False) num_new_tokens_per_abstraction = len(token_ids) if args.enable_t5_ti: token_ids_t5 = tokenizer_two.encode(args.initializer_concept, add_special_tokens=False) num_new_tokens_per_abstraction = max(len(token_ids), len(token_ids_t5)) logger.info( f"initializer_concept: {args.initializer_concept}, num_new_tokens_per_abstraction: {num_new_tokens_per_abstraction}" ) token_abstraction_dict = {} token_idx = 0 for i, token in enumerate(token_abstraction_list): token_abstraction_dict[token] = [f"<s{token_idx + i + j}>" for j in range(num_new_tokens_per_abstraction)] token_idx += num_new_tokens_per_abstraction - 1 # replace instances of --token_abstraction in --instance_prompt with the new tokens: "<si><si+1>" etc. for token_abs, token_replacement in token_abstraction_dict.items(): new_instance_prompt = args.instance_prompt.replace(token_abs, "".join(token_replacement)) if args.instance_prompt == new_instance_prompt: logger.warning( "Note! the instance prompt provided in --instance_prompt does not include the token abstraction specified " "--token_abstraction. This may lead to incorrect optimization of text embeddings during pivotal tuning" ) args.instance_prompt = new_instance_prompt if args.with_prior_preservation: args.class_prompt = args.class_prompt.replace(token_abs, "".join(token_replacement)) if args.validation_prompt: args.validation_prompt = args.validation_prompt.replace(token_abs, "".join(token_replacement)) # initialize the new tokens for textual inversion text_encoders = [text_encoder_one, text_encoder_two] if args.enable_t5_ti else [text_encoder_one] tokenizers = [tokenizer_one, tokenizer_two] if args.enable_t5_ti else [tokenizer_one] embedding_handler = TokenEmbeddingsHandler(text_encoders, tokenizers) inserting_toks = [] for new_tok in token_abstraction_dict.values(): inserting_toks.extend(new_tok) embedding_handler.initialize_new_tokens(inserting_toks=inserting_toks) # We only train the additional adapter LoRA layers transformer.requires_grad_(False) vae.requires_grad_(False) text_encoder_one.requires_grad_(False) text_encoder_two.requires_grad_(False) # For mixed precision training we cast all non-trainable weights (vae, text_encoder and transformer) to half-precision # as these weights are only used for inference, keeping weights in full precision is not required. weight_dtype = torch.float32 if accelerator.mixed_precision == "fp16": weight_dtype = torch.float16 elif accelerator.mixed_precision == "bf16": weight_dtype = torch.bfloat16 if torch.backends.mps.is_available() and weight_dtype == torch.bfloat16: # due to pytorch#99272, MPS does not yet support bfloat16. 
raise ValueError( "Mixed precision training with bfloat16 is not supported on MPS. Please use fp16 (recommended) or fp32 instead." ) vae.to(accelerator.device, dtype=weight_dtype) transformer.to(accelerator.device, dtype=weight_dtype) text_encoder_one.to(accelerator.device, dtype=weight_dtype) text_encoder_two.to(accelerator.device, dtype=weight_dtype) if args.gradient_checkpointing: transformer.enable_gradient_checkpointing() if args.train_text_encoder: text_encoder_one.gradient_checkpointing_enable() if args.lora_layers is not None: target_modules = [layer.strip() for layer in args.lora_layers.split(",")] else: target_modules = [ "attn.to_k", "attn.to_q", "attn.to_v", "attn.to_out.0", "attn.add_k_proj", "attn.add_q_proj", "attn.add_v_proj", "attn.to_add_out", "ff.net.0.proj", "ff.net.2", "ff_context.net.0.proj", "ff_context.net.2", ] # now we will add new LoRA weights to the attention layers transformer_lora_config = LoraConfig( r=args.rank, lora_alpha=args.rank, init_lora_weights="gaussian", target_modules=target_modules, ) transformer.add_adapter(transformer_lora_config) if args.train_text_encoder: text_lora_config = LoraConfig( r=args.rank, lora_alpha=args.rank, init_lora_weights="gaussian", target_modules=["q_proj", "k_proj", "v_proj", "out_proj"], ) text_encoder_one.add_adapter(text_lora_config) def unwrap_model(model): model = accelerator.unwrap_model(model) model = model._orig_mod if is_compiled_module(model) else model return model # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format def save_model_hook(models, weights, output_dir): if accelerator.is_main_process: transformer_lora_layers_to_save = None text_encoder_one_lora_layers_to_save = None for model in models: if isinstance(model, type(unwrap_model(transformer))): transformer_lora_layers_to_save = get_peft_model_state_dict(model) elif isinstance(model, type(unwrap_model(text_encoder_one))): if args.train_text_encoder: # when --train_text_encoder_ti we don't save the layers text_encoder_one_lora_layers_to_save = get_peft_model_state_dict(model) elif isinstance(model, type(unwrap_model(text_encoder_two))): pass # when --train_text_encoder_ti and --enable_t5_ti we don't save the layers else: raise ValueError(f"unexpected save model: {model.__class__}") # make sure to pop weight so that corresponding model is not saved again weights.pop() FluxPipeline.save_lora_weights( output_dir, transformer_lora_layers=transformer_lora_layers_to_save, text_encoder_lora_layers=text_encoder_one_lora_layers_to_save, ) if args.train_text_encoder_ti: embedding_handler.save_embeddings(f"{args.output_dir}/{Path(args.output_dir).name}_emb.safetensors") def load_model_hook(models, input_dir): transformer_ = None text_encoder_one_ = None while len(models) > 0: model = models.pop() if isinstance(model, type(unwrap_model(transformer))): transformer_ = model elif isinstance(model, type(unwrap_model(text_encoder_one))): text_encoder_one_ = model else: raise ValueError(f"unexpected save model: {model.__class__}") lora_state_dict = FluxPipeline.lora_state_dict(input_dir) transformer_state_dict = { f'{k.replace("transformer.", "")}': v for k, v in lora_state_dict.items() if k.startswith("transformer.") } transformer_state_dict = convert_unet_state_dict_to_peft(transformer_state_dict) incompatible_keys = set_peft_model_state_dict(transformer_, transformer_state_dict, adapter_name="default") if incompatible_keys is not None: # check only for unexpected keys unexpected_keys = getattr(incompatible_keys, 
"unexpected_keys", None) if unexpected_keys: logger.warning( f"Loading adapter weights from state_dict led to unexpected keys not found in the model: " f" {unexpected_keys}. " ) if args.train_text_encoder: # Do we need to call `scale_lora_layers()` here? _set_state_dict_into_text_encoder(lora_state_dict, prefix="text_encoder.", text_encoder=text_encoder_one_) # Make sure the trainable params are in float32. This is again needed since the base models # are in `weight_dtype`. More details: # https://github.com/huggingface/diffusers/pull/6514#discussion_r1449796804 if args.mixed_precision == "fp16": models = [transformer_] if args.train_text_encoder: models.extend([text_encoder_one_]) # only upcast trainable parameters (LoRA) into fp32 cast_training_params(models) accelerator.register_save_state_pre_hook(save_model_hook) accelerator.register_load_state_pre_hook(load_model_hook) # Enable TF32 for faster training on Ampere GPUs, # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices if args.allow_tf32 and torch.cuda.is_available(): torch.backends.cuda.matmul.allow_tf32 = True if args.scale_lr: args.learning_rate = ( args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) # Make sure the trainable params are in float32. if args.mixed_precision == "fp16": models = [transformer] if args.train_text_encoder: models.extend([text_encoder_one]) # only upcast trainable parameters (LoRA) into fp32 cast_training_params(models, dtype=torch.float32) transformer_lora_parameters = list(filter(lambda p: p.requires_grad, transformer.parameters())) if args.train_text_encoder: text_lora_parameters_one = list(filter(lambda p: p.requires_grad, text_encoder_one.parameters())) # if we use textual inversion, we freeze all parameters except for the token embeddings # in text encoder elif args.train_text_encoder_ti: text_lora_parameters_one = [] # CLIP for name, param in text_encoder_one.named_parameters(): if "token_embedding" in name: # ensure that dtype is float32, even if rest of the model that isn't trained is loaded in fp16 param.data = param.to(dtype=torch.float32) param.requires_grad = True text_lora_parameters_one.append(param) else: param.requires_grad = False if args.enable_t5_ti: # whether to do pivotal tuning/textual inversion for T5 as well text_lora_parameters_two = [] for name, param in text_encoder_two.named_parameters(): if "token_embedding" in name: # ensure that dtype is float32, even if rest of the model that isn't trained is loaded in fp16 param.data = param.to(dtype=torch.float32) param.requires_grad = True text_lora_parameters_two.append(param) else: param.requires_grad = False # If neither --train_text_encoder nor --train_text_encoder_ti, text_encoders remain frozen during training freeze_text_encoder = not (args.train_text_encoder or args.train_text_encoder_ti) # if --train_text_encoder_ti and train_transformer_frac == 0 where essentially performing textual inversion # and not training transformer LoRA layers pure_textual_inversion = args.train_text_encoder_ti and args.train_transformer_frac == 0 # Optimization parameters transformer_parameters_with_lr = {"params": transformer_lora_parameters, "lr": args.learning_rate} if not freeze_text_encoder: # different learning rate for text encoder and transformer text_parameters_one_with_lr = { "params": text_lora_parameters_one, "weight_decay": args.adam_weight_decay_text_encoder if args.adam_weight_decay_text_encoder else args.adam_weight_decay, "lr": 
args.text_encoder_lr, } if not args.enable_t5_ti: # pure textual inversion - only clip if pure_textual_inversion: params_to_optimize = [text_parameters_one_with_lr] te_idx = 0 else: # regular te training or regular pivotal for clip params_to_optimize = [transformer_parameters_with_lr, text_parameters_one_with_lr] te_idx = 1 elif args.enable_t5_ti: # pivotal tuning of clip & t5 text_parameters_two_with_lr = { "params": text_lora_parameters_two, "weight_decay": args.adam_weight_decay_text_encoder if args.adam_weight_decay_text_encoder else args.adam_weight_decay, "lr": args.text_encoder_lr, } # pure textual inversion - only clip & t5 if pure_textual_inversion: params_to_optimize = [text_parameters_one_with_lr, text_parameters_two_with_lr] te_idx = 0 else: # regular pivotal tuning of clip & t5 params_to_optimize = [ transformer_parameters_with_lr, text_parameters_one_with_lr, text_parameters_two_with_lr, ] te_idx = 1 else: params_to_optimize = [transformer_parameters_with_lr] # Optimizer creation if not (args.optimizer.lower() == "prodigy" or args.optimizer.lower() == "adamw"): logger.warning( f"Unsupported choice of optimizer: {args.optimizer}.Supported optimizers include [adamW, prodigy]." "Defaulting to adamW" ) args.optimizer = "adamw" if args.use_8bit_adam and not args.optimizer.lower() == "adamw": logger.warning( f"use_8bit_adam is ignored when optimizer is not set to 'AdamW'. Optimizer was " f"set to {args.optimizer.lower()}" ) if args.optimizer.lower() == "adamw": if args.use_8bit_adam: try: import bitsandbytes as bnb except ImportError: raise ImportError( "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." ) optimizer_class = bnb.optim.AdamW8bit else: optimizer_class = torch.optim.AdamW optimizer = optimizer_class( params_to_optimize, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) if args.optimizer.lower() == "prodigy": try: import prodigyopt except ImportError: raise ImportError("To use Prodigy, please install the prodigyopt library: `pip install prodigyopt`") optimizer_class = prodigyopt.Prodigy if args.learning_rate <= 0.1: logger.warning( "Learning rate is too low. When using prodigy, it's generally better to set learning rate around 1.0" ) if not freeze_text_encoder and args.text_encoder_lr: logger.warning( f"Learning rates were provided both for the transformer and the text encoder- e.g. text_encoder_lr:" f" {args.text_encoder_lr} and learning_rate: {args.learning_rate}. " f"When using prodigy only learning_rate is used as the initial learning rate." 
) # changes the learning rate of text_encoder_parameters to be # --learning_rate params_to_optimize[te_idx]["lr"] = args.learning_rate params_to_optimize[-1]["lr"] = args.learning_rate optimizer = optimizer_class( params_to_optimize, betas=(args.adam_beta1, args.adam_beta2), beta3=args.prodigy_beta3, weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, decouple=args.prodigy_decouple, use_bias_correction=args.prodigy_use_bias_correction, safeguard_warmup=args.prodigy_safeguard_warmup, ) # Dataset and DataLoaders creation: train_dataset = DreamBoothDataset( instance_data_root=args.instance_data_dir, instance_prompt=args.instance_prompt, train_text_encoder_ti=args.train_text_encoder_ti, token_abstraction_dict=token_abstraction_dict if args.train_text_encoder_ti else None, class_prompt=args.class_prompt, class_data_root=args.class_data_dir if args.with_prior_preservation else None, class_num=args.num_class_images, size=args.resolution, repeats=args.repeats, center_crop=args.center_crop, ) train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), num_workers=args.dataloader_num_workers, ) if freeze_text_encoder: tokenizers = [tokenizer_one, tokenizer_two] text_encoders = [text_encoder_one, text_encoder_two] def compute_text_embeddings(prompt, text_encoders, tokenizers): with torch.no_grad(): prompt_embeds, pooled_prompt_embeds, text_ids = encode_prompt( text_encoders, tokenizers, prompt, args.max_sequence_length ) prompt_embeds = prompt_embeds.to(accelerator.device) pooled_prompt_embeds = pooled_prompt_embeds.to(accelerator.device) text_ids = text_ids.to(accelerator.device) return prompt_embeds, pooled_prompt_embeds, text_ids # If no type of tuning is done on the text_encoder and custom instance prompts are NOT # provided (i.e. the --instance_prompt is used for all images), we encode the instance prompt once to avoid # the redundant encoding. if freeze_text_encoder and not train_dataset.custom_instance_prompts: instance_prompt_hidden_states, instance_pooled_prompt_embeds, instance_text_ids = compute_text_embeddings( args.instance_prompt, text_encoders, tokenizers ) # Handle class prompt for prior-preservation. if args.with_prior_preservation: if freeze_text_encoder: class_prompt_hidden_states, class_pooled_prompt_embeds, class_text_ids = compute_text_embeddings( args.class_prompt, text_encoders, tokenizers ) # Clear the memory here if freeze_text_encoder and not train_dataset.custom_instance_prompts: del tokenizers, text_encoders, text_encoder_one, text_encoder_two free_memory() # if --train_text_encoder_ti we need add_special_tokens to be True for textual inversion add_special_tokens_clip = True if args.train_text_encoder_ti else False add_special_tokens_t5 = True if (args.train_text_encoder_ti and args.enable_t5_ti) else False # If custom instance prompts are NOT provided (i.e. the instance prompt is used for all images), # pack the statically computed variables appropriately here. This is so that we don't # have to pass them to the dataloader. 
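    # Added summary of the two branches below: when the text encoders stay frozen and a single
    # instance prompt is shared by all images, the embeddings computed above are reused for the
    # whole run (concatenated with the class-prompt embeddings when prior preservation is on);
    # otherwise only the token ids are pre-computed here and the prompts are re-encoded on every
    # training step so that gradients can reach the trainable token embeddings.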
if not train_dataset.custom_instance_prompts: if freeze_text_encoder: prompt_embeds = instance_prompt_hidden_states pooled_prompt_embeds = instance_pooled_prompt_embeds text_ids = instance_text_ids if args.with_prior_preservation: prompt_embeds = torch.cat([prompt_embeds, class_prompt_hidden_states], dim=0) pooled_prompt_embeds = torch.cat([pooled_prompt_embeds, class_pooled_prompt_embeds], dim=0) text_ids = torch.cat([text_ids, class_text_ids], dim=0) # if we're optimizing the text encoder (both if instance prompt is used for all images or custom prompts) # we need to tokenize and encode the batch prompts on all training steps else: tokens_one = tokenize_prompt( tokenizer_one, args.instance_prompt, max_sequence_length=77, add_special_tokens=add_special_tokens_clip ) tokens_two = tokenize_prompt( tokenizer_two, args.instance_prompt, max_sequence_length=args.max_sequence_length, add_special_tokens=add_special_tokens_t5, ) if args.with_prior_preservation: class_tokens_one = tokenize_prompt( tokenizer_one, args.class_prompt, max_sequence_length=77, add_special_tokens=add_special_tokens_clip, ) class_tokens_two = tokenize_prompt( tokenizer_two, args.class_prompt, max_sequence_length=args.max_sequence_length, add_special_tokens=add_special_tokens_t5, ) tokens_one = torch.cat([tokens_one, class_tokens_one], dim=0) tokens_two = torch.cat([tokens_two, class_tokens_two], dim=0) vae_config_shift_factor = vae.config.shift_factor vae_config_scaling_factor = vae.config.scaling_factor vae_config_block_out_channels = vae.config.block_out_channels if args.cache_latents: latents_cache = [] for batch in tqdm(train_dataloader, desc="Caching latents"): with torch.no_grad(): batch["pixel_values"] = batch["pixel_values"].to( accelerator.device, non_blocking=True, dtype=weight_dtype ) latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist) if args.validation_prompt is None: del vae free_memory() # Scheduler and math around the number of training steps. overrode_max_train_steps = False num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch overrode_max_train_steps = True lr_scheduler = get_scheduler( args.lr_scheduler, optimizer=optimizer, num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, num_training_steps=args.max_train_steps * accelerator.num_processes, num_cycles=args.lr_num_cycles, power=args.lr_power, ) # Prepare everything with our `accelerator`. if not freeze_text_encoder: if args.enable_t5_ti: ( transformer, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler, ) = accelerator.prepare( transformer, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler, ) else: transformer, text_encoder_one, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( transformer, text_encoder_one, optimizer, train_dataloader, lr_scheduler ) else: transformer, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( transformer, optimizer, train_dataloader, lr_scheduler ) # We need to recalculate our total training steps as the size of the training dataloader may have changed. 
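    # Illustrative numbers (not from the script): with 400 batches per epoch and
    # gradient_accumulation_steps=4, math.ceil(400 / 4) = 100 optimizer updates per epoch,
    # and num_train_epochs is derived as ceil(max_train_steps / 100) when only
    # --max_train_steps is provided.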
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if overrode_max_train_steps: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: tracker_name = "dreambooth-flux-dev-lora-advanced" accelerator.init_trackers(tracker_name, config=vars(args)) # Train! total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num batches each epoch = {len(train_dataloader)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") global_step = 0 first_epoch = 0 # Potentially load in the weights and states from a previous save if args.resume_from_checkpoint: if args.resume_from_checkpoint != "latest": path = os.path.basename(args.resume_from_checkpoint) else: # Get the mos recent checkpoint dirs = os.listdir(args.output_dir) dirs = [d for d in dirs if d.startswith("checkpoint")] dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) path = dirs[-1] if len(dirs) > 0 else None if path is None: logger.info(f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run.") args.resume_from_checkpoint = None initial_global_step = 0 else: logger.info(f"Resuming from checkpoint {path}") accelerator.load_state(os.path.join(args.output_dir, path)) global_step = int(path.split("-")[1]) initial_global_step = global_step first_epoch = global_step // num_update_steps_per_epoch else: initial_global_step = 0 progress_bar = tqdm( range(0, args.max_train_steps), initial=initial_global_step, desc="Steps", # Only show the progress bar once on each machine. 
disable=not accelerator.is_local_main_process, ) def get_sigmas(timesteps, n_dim=4, dtype=torch.float32): sigmas = noise_scheduler_copy.sigmas.to(device=accelerator.device, dtype=dtype) schedule_timesteps = noise_scheduler_copy.timesteps.to(accelerator.device) timesteps = timesteps.to(accelerator.device) step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] sigma = sigmas[step_indices].flatten() while len(sigma.shape) < n_dim: sigma = sigma.unsqueeze(-1) return sigma if args.train_text_encoder: num_train_epochs_text_encoder = int(args.train_text_encoder_frac * args.num_train_epochs) num_train_epochs_transformer = int(args.train_transformer_frac * args.num_train_epochs) elif args.train_text_encoder_ti: # args.train_text_encoder_ti num_train_epochs_text_encoder = int(args.train_text_encoder_ti_frac * args.num_train_epochs) num_train_epochs_transformer = int(args.train_transformer_frac * args.num_train_epochs) # flag used for textual inversion pivoted_te = False pivoted_tr = False for epoch in range(first_epoch, args.num_train_epochs): transformer.train() # if performing any kind of optimization of text_encoder params if args.train_text_encoder or args.train_text_encoder_ti: if epoch == num_train_epochs_text_encoder: # flag to stop text encoder optimization logger.info(f"PIVOT TE {epoch}") pivoted_te = True else: # still optimizing the text encoder if args.train_text_encoder: text_encoder_one.train() # set top parameter requires_grad = True for gradient checkpointing works accelerator.unwrap_model(text_encoder_one).text_model.embeddings.requires_grad_(True) elif args.train_text_encoder_ti: # textual inversion / pivotal tuning text_encoder_one.train() if args.enable_t5_ti: text_encoder_two.train() if epoch == num_train_epochs_transformer: # flag to stop transformer optimization logger.info(f"PIVOT TRANSFORMER {epoch}") pivoted_tr = True for step, batch in enumerate(train_dataloader): if pivoted_te: # stopping optimization of text_encoder params optimizer.param_groups[te_idx]["lr"] = 0.0 optimizer.param_groups[-1]["lr"] = 0.0 elif pivoted_tr and not pure_textual_inversion: logger.info(f"PIVOT TRANSFORMER {epoch}") optimizer.param_groups[0]["lr"] = 0.0 with accelerator.accumulate(transformer): prompts = batch["prompts"] # encode batch prompts when custom prompts are provided for each image - if train_dataset.custom_instance_prompts: elems_to_repeat = 1 if freeze_text_encoder: prompt_embeds, pooled_prompt_embeds, text_ids = compute_text_embeddings( prompts, text_encoders, tokenizers ) else: tokens_one = tokenize_prompt( tokenizer_one, prompts, max_sequence_length=77, add_special_tokens=add_special_tokens_clip ) tokens_two = tokenize_prompt( tokenizer_two, prompts, max_sequence_length=args.max_sequence_length, add_special_tokens=add_special_tokens_t5, ) else: elems_to_repeat = len(prompts) if not freeze_text_encoder: prompt_embeds, pooled_prompt_embeds, text_ids = encode_prompt( text_encoders=[text_encoder_one, text_encoder_two], tokenizers=[None, None], text_input_ids_list=[ tokens_one.repeat(elems_to_repeat, 1), tokens_two.repeat(elems_to_repeat, 1), ], max_sequence_length=args.max_sequence_length, device=accelerator.device, prompt=prompts, ) # Convert images to latent space if args.cache_latents: model_input = latents_cache[step].sample() else: pixel_values = batch["pixel_values"].to(dtype=vae.dtype) model_input = vae.encode(pixel_values).latent_dist.sample() model_input = (model_input - vae_config_shift_factor) * vae_config_scaling_factor model_input = 
model_input.to(dtype=weight_dtype) vae_scale_factor = 2 ** (len(vae_config_block_out_channels)) latent_image_ids = FluxPipeline._prepare_latent_image_ids( model_input.shape[0], model_input.shape[2] // 2, model_input.shape[3] // 2, accelerator.device, weight_dtype, ) # Sample noise that we'll add to the latents noise = torch.randn_like(model_input) bsz = model_input.shape[0] # Sample a random timestep for each image # for weighting schemes where we sample timesteps non-uniformly u = compute_density_for_timestep_sampling( weighting_scheme=args.weighting_scheme, batch_size=bsz, logit_mean=args.logit_mean, logit_std=args.logit_std, mode_scale=args.mode_scale, ) indices = (u * noise_scheduler_copy.config.num_train_timesteps).long() timesteps = noise_scheduler_copy.timesteps[indices].to(device=model_input.device) # Add noise according to flow matching. # zt = (1 - texp) * x + texp * z1 sigmas = get_sigmas(timesteps, n_dim=model_input.ndim, dtype=model_input.dtype) noisy_model_input = (1.0 - sigmas) * model_input + sigmas * noise packed_noisy_model_input = FluxPipeline._pack_latents( noisy_model_input, batch_size=model_input.shape[0], num_channels_latents=model_input.shape[1], height=model_input.shape[2], width=model_input.shape[3], ) # handle guidance if transformer.config.guidance_embeds: guidance = torch.tensor([args.guidance_scale], device=accelerator.device) guidance = guidance.expand(model_input.shape[0]) else: guidance = None # Predict the noise residual model_pred = transformer( hidden_states=packed_noisy_model_input, # YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transforme rmodel (we should not keep it but I want to keep the inputs same for the model for testing) timestep=timesteps / 1000, guidance=guidance, pooled_projections=pooled_prompt_embeds, encoder_hidden_states=prompt_embeds, txt_ids=text_ids, img_ids=latent_image_ids, return_dict=False, )[0] model_pred = FluxPipeline._unpack_latents( model_pred, height=model_input.shape[2] * vae_scale_factor, width=model_input.shape[3] * vae_scale_factor, vae_scale_factor=vae_scale_factor, ) # these weighting schemes use a uniform timestep sampling # and instead post-weight the loss weighting = compute_loss_weighting_for_sd3(weighting_scheme=args.weighting_scheme, sigmas=sigmas) # flow matching loss target = noise - model_input if args.with_prior_preservation: # Chunk the noise and model_pred into two parts and compute the loss on each part separately. model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) target, target_prior = torch.chunk(target, 2, dim=0) # Compute prior loss prior_loss = torch.mean( (weighting.float() * (model_pred_prior.float() - target_prior.float()) ** 2).reshape( target_prior.shape[0], -1 ), 1, ) prior_loss = prior_loss.mean() # Compute regular loss. loss = torch.mean( (weighting.float() * (model_pred.float() - target.float()) ** 2).reshape(target.shape[0], -1), 1, ) loss = loss.mean() if args.with_prior_preservation: # Add the prior loss to the instance loss. 
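                    # With prior preservation the batch holds instance and class images concatenated
                    # along the batch dimension, so the final objective below is
                    # instance_loss + prior_loss_weight * prior_loss, both terms already scaled by
                    # the flow-matching loss weighting computed above.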
loss = loss + args.prior_loss_weight * prior_loss accelerator.backward(loss) if accelerator.sync_gradients: if not freeze_text_encoder: if args.train_text_encoder: params_to_clip = itertools.chain(transformer.parameters(), text_encoder_one.parameters()) elif pure_textual_inversion: params_to_clip = itertools.chain( text_encoder_one.parameters(), text_encoder_two.parameters() ) else: params_to_clip = itertools.chain( transformer.parameters(), text_encoder_one.parameters(), text_encoder_two.parameters() ) else: params_to_clip = itertools.chain(transformer.parameters()) accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) optimizer.step() lr_scheduler.step() optimizer.zero_grad() # every step, we reset the embeddings to the original embeddings. if args.train_text_encoder_ti: embedding_handler.retract_embeddings() # Checks if the accelerator has performed an optimization step behind the scenes if accelerator.sync_gradients: progress_bar.update(1) global_step += 1 if accelerator.is_main_process: if global_step % args.checkpointing_steps == 0: # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` if args.checkpoints_total_limit is not None: checkpoints = os.listdir(args.output_dir) checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints if len(checkpoints) >= args.checkpoints_total_limit: num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 removing_checkpoints = checkpoints[0:num_to_remove] logger.info( f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" ) logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") for removing_checkpoint in removing_checkpoints: removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) shutil.rmtree(removing_checkpoint) save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") accelerator.save_state(save_path) logger.info(f"Saved state to {save_path}") logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} progress_bar.set_postfix(**logs) accelerator.log(logs, step=global_step) if global_step >= args.max_train_steps: break if accelerator.is_main_process: if args.validation_prompt is not None and epoch % args.validation_epochs == 0: # create pipeline if freeze_text_encoder: text_encoder_one, text_encoder_two = load_text_encoders(text_encoder_cls_one, text_encoder_cls_two) pipeline = FluxPipeline.from_pretrained( args.pretrained_model_name_or_path, vae=vae, text_encoder=accelerator.unwrap_model(text_encoder_one), text_encoder_2=accelerator.unwrap_model(text_encoder_two), transformer=accelerator.unwrap_model(transformer), revision=args.revision, variant=args.variant, torch_dtype=weight_dtype, ) pipeline_args = {"prompt": args.validation_prompt} images = log_validation( pipeline=pipeline, args=args, accelerator=accelerator, pipeline_args=pipeline_args, epoch=epoch, torch_dtype=weight_dtype, ) images = None del pipeline if freeze_text_encoder: del text_encoder_one, text_encoder_two free_memory() elif args.train_text_encoder: del text_encoder_two free_memory() # Save the lora layers accelerator.wait_for_everyone() if accelerator.is_main_process: transformer = unwrap_model(transformer) transformer = transformer.to(weight_dtype) transformer_lora_layers = get_peft_model_state_dict(transformer) if 
args.train_text_encoder: text_encoder_one = unwrap_model(text_encoder_one) text_encoder_lora_layers = get_peft_model_state_dict(text_encoder_one.to(torch.float32)) else: text_encoder_lora_layers = None if not pure_textual_inversion: FluxPipeline.save_lora_weights( save_directory=args.output_dir, transformer_lora_layers=transformer_lora_layers, text_encoder_lora_layers=text_encoder_lora_layers, ) if args.train_text_encoder_ti: embeddings_path = f"{args.output_dir}/{os.path.basename(args.output_dir)}_emb.safetensors" embedding_handler.save_embeddings(embeddings_path) # Final inference # Load previous pipeline pipeline = FluxPipeline.from_pretrained( args.pretrained_model_name_or_path, revision=args.revision, variant=args.variant, torch_dtype=weight_dtype, ) if not pure_textual_inversion: # load attention processors pipeline.load_lora_weights(args.output_dir) # run inference images = [] if args.validation_prompt and args.num_validation_images > 0: pipeline_args = {"prompt": args.validation_prompt} images = log_validation( pipeline=pipeline, args=args, accelerator=accelerator, pipeline_args=pipeline_args, epoch=epoch, torch_dtype=weight_dtype, is_final_validation=True, ) save_model_card( model_id if not args.push_to_hub else repo_id, images=images, base_model=args.pretrained_model_name_or_path, train_text_encoder=args.train_text_encoder, train_text_encoder_ti=args.train_text_encoder_ti, enable_t5_ti=args.enable_t5_ti, pure_textual_inversion=pure_textual_inversion, token_abstraction_dict=train_dataset.token_abstraction_dict, instance_prompt=args.instance_prompt, validation_prompt=args.validation_prompt, repo_folder=args.output_dir, ) if args.push_to_hub: upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", ignore_patterns=["step_*", "epoch_*"], ) images = None del pipeline accelerator.end_training() if __name__ == "__main__": args = parse_args() main(args)
diffusers/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py/0
{ "file_path": "diffusers/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py", "repo_id": "diffusers", "token_count": 47389 }
import inspect from typing import Any, Callable, Dict, List, Optional, Union import numpy as np import torch from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer from diffusers.image_processor import VaeImageProcessor from diffusers.loaders import FromSingleFileMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin from diffusers.models import AutoencoderKL, UNet2DConditionModel from diffusers.models.lora import adjust_lora_scale_text_encoder from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker from diffusers.schedulers import LCMScheduler from diffusers.utils import ( USE_PEFT_BACKEND, deprecate, logging, replace_example_docstring, scale_lora_layers, unscale_lora_layers, ) from diffusers.utils.torch_utils import randn_tensor logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> import numpy as np >>> from diffusers import DiffusionPipeline >>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_interpolate") >>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. >>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) >>> prompts = ["A cat", "A dog", "A horse"] >>> num_inference_steps = 4 >>> num_interpolation_steps = 24 >>> seed = 1337 >>> torch.manual_seed(seed) >>> np.random.seed(seed) >>> images = pipe( prompt=prompts, height=512, width=512, num_inference_steps=num_inference_steps, num_interpolation_steps=num_interpolation_steps, guidance_scale=8.0, embedding_interpolation_type="lerp", latent_interpolation_type="slerp", process_batch_size=4, # Make it higher or lower based on your GPU memory generator=torch.Generator(seed), ) >>> # Save the images as a video >>> import imageio >>> from PIL import Image >>> def pil_to_video(images: List[Image.Image], filename: str, fps: int = 60) -> None: frames = [np.array(image) for image in images] with imageio.get_writer(filename, fps=fps) as video_writer: for frame in frames: video_writer.append_data(frame) >>> pil_to_video(images, "lcm_interpolate.mp4", fps=24) ``` """ def lerp( v0: Union[torch.Tensor, np.ndarray], v1: Union[torch.Tensor, np.ndarray], t: Union[float, torch.Tensor, np.ndarray], ) -> Union[torch.Tensor, np.ndarray]: """ Linearly interpolate between two vectors/tensors. Args: v0 (`torch.Tensor` or `np.ndarray`): First vector/tensor. v1 (`torch.Tensor` or `np.ndarray`): Second vector/tensor. t: (`float`, `torch.Tensor`, or `np.ndarray`): Interpolation factor. If float, must be between 0 and 1. If np.ndarray or torch.Tensor, must be one dimensional with values between 0 and 1. Returns: Union[torch.Tensor, np.ndarray] Interpolated vector/tensor between v0 and v1. """ inputs_are_torch = False t_is_float = False if isinstance(v0, torch.Tensor): inputs_are_torch = True input_device = v0.device v0 = v0.cpu().numpy() v1 = v1.cpu().numpy() if isinstance(t, torch.Tensor): inputs_are_torch = True input_device = t.device t = t.cpu().numpy() elif isinstance(t, float): t_is_float = True t = np.array([t]) t = t[..., None] v0 = v0[None, ...] v1 = v1[None, ...] 
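    # Plain linear interpolation, broadcast over the leading interpolation axis added above:
    # v2 = (1 - t) * v0 + t * v1.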
v2 = (1 - t) * v0 + t * v1 if t_is_float and v0.ndim > 1: assert v2.shape[0] == 1 v2 = np.squeeze(v2, axis=0) if inputs_are_torch: v2 = torch.from_numpy(v2).to(input_device) return v2 def slerp( v0: Union[torch.Tensor, np.ndarray], v1: Union[torch.Tensor, np.ndarray], t: Union[float, torch.Tensor, np.ndarray], DOT_THRESHOLD=0.9995, ) -> Union[torch.Tensor, np.ndarray]: """ Spherical linear interpolation between two vectors/tensors. Args: v0 (`torch.Tensor` or `np.ndarray`): First vector/tensor. v1 (`torch.Tensor` or `np.ndarray`): Second vector/tensor. t: (`float`, `torch.Tensor`, or `np.ndarray`): Interpolation factor. If float, must be between 0 and 1. If np.ndarray or torch.Tensor, must be one dimensional with values between 0 and 1. DOT_THRESHOLD (`float`, *optional*, default=0.9995): Threshold for when to use linear interpolation instead of spherical interpolation. Returns: `torch.Tensor` or `np.ndarray`: Interpolated vector/tensor between v0 and v1. """ inputs_are_torch = False t_is_float = False if isinstance(v0, torch.Tensor): inputs_are_torch = True input_device = v0.device v0 = v0.cpu().numpy() v1 = v1.cpu().numpy() if isinstance(t, torch.Tensor): inputs_are_torch = True input_device = t.device t = t.cpu().numpy() elif isinstance(t, float): t_is_float = True t = np.array([t], dtype=v0.dtype) dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1))) if np.abs(dot) > DOT_THRESHOLD: # v1 and v2 are close to parallel # Use linear interpolation instead v2 = lerp(v0, v1, t) else: theta_0 = np.arccos(dot) sin_theta_0 = np.sin(theta_0) theta_t = theta_0 * t sin_theta_t = np.sin(theta_t) s0 = np.sin(theta_0 - theta_t) / sin_theta_0 s1 = sin_theta_t / sin_theta_0 s0 = s0[..., None] s1 = s1[..., None] v0 = v0[None, ...] v1 = v1[None, ...] v2 = s0 * v0 + s1 * v1 if t_is_float and v0.ndim > 1: assert v2.shape[0] == 1 v2 = np.squeeze(v2, axis=0) if inputs_are_torch: v2 = torch.from_numpy(v2).to(input_device) return v2 class LatentConsistencyModelWalkPipeline( DiffusionPipeline, StableDiffusionMixin, TextualInversionLoaderMixin, StableDiffusionLoraLoaderMixin, FromSingleFileMixin, ): r""" Pipeline for text-to-image generation using a latent consistency model. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Currently only supports [`LCMScheduler`]. 
safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`~transformers.CLIPImageProcessor`]): A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. requires_safety_checker (`bool`, *optional*, defaults to `True`): Whether the pipeline requires a safety checker component. """ model_cpu_offload_seq = "text_encoder->unet->vae" _optional_components = ["safety_checker", "feature_extractor"] _exclude_from_cpu_offload = ["safety_checker"] _callback_tensor_inputs = ["latents", "denoised", "prompt_embeds", "w_embedding"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: LCMScheduler, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = True, ): super().__init__() if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." ) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.register_to_config(requires_safety_checker=requires_safety_checker) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_prompt def encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). 
prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. """ # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, StableDiffusionLoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None if clip_skip is None: prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask) prompt_embeds = prompt_embeds[0] else: prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True ) # Access the `hidden_states` first, that contains a tuple of # all the hidden states from the encoder layers. Then index into # the tuple to access the hidden states from the desired layer. prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)] # We also need to apply the final LayerNorm here to not mess with the # representations. The `last_hidden_states` that we typically use for # obtaining the final prompt representations passes through the LayerNorm # layer. 
prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds) if self.text_encoder is not None: prompt_embeds_dtype = self.text_encoder.dtype elif self.unet is not None: prompt_embeds_dtype = self.unet.dtype else: prompt_embeds_dtype = prompt_embeds.dtype prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if isinstance(self, StableDiffusionLoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) return prompt_embeds, negative_prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker def run_safety_checker(self, image, device, dtype): if self.safety_checker is None: has_nsfw_concept = None else: if torch.is_tensor(image): feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") else: feature_extractor_input = self.image_processor.numpy_to_pil(image) safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) return image, has_nsfw_concept # Copied from 
diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents def get_guidance_scale_embedding(self, w, embedding_dim=512, dtype=torch.float32): """ See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 Args: timesteps (`torch.Tensor`): generate embedding vectors at these timesteps embedding_dim (`int`, *optional*, defaults to 512): dimension of the embeddings to generate dtype: data type of the generated embeddings Returns: `torch.Tensor`: Embedding vectors with shape `(len(timesteps), embedding_dim)` """ assert len(w.shape) == 1 w = w * 1000.0 half_dim = embedding_dim // 2 emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1) emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb) emb = w.to(dtype)[:, None] * emb[None, :] emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) if embedding_dim % 2 == 1: # zero pad emb = torch.nn.functional.pad(emb, (0, 1)) assert emb.shape == (w.shape[0], embedding_dim) return emb # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs # Currently StableDiffusionPipeline.check_inputs with negative prompt stuff removed def check_inputs( self, prompt: Union[str, List[str]], height: int, width: int, callback_steps: int, prompt_embeds: Optional[torch.Tensor] = None, callback_on_step_end_tensor_inputs=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." 
) if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") @torch.no_grad() def interpolate_embedding( self, start_embedding: torch.Tensor, end_embedding: torch.Tensor, num_interpolation_steps: Union[int, List[int]], interpolation_type: str, ) -> torch.Tensor: if interpolation_type == "lerp": interpolation_fn = lerp elif interpolation_type == "slerp": interpolation_fn = slerp else: raise ValueError( f"embedding_interpolation_type must be one of ['lerp', 'slerp'], got {interpolation_type}." ) embedding = torch.cat([start_embedding, end_embedding]) steps = torch.linspace(0, 1, num_interpolation_steps, dtype=embedding.dtype).cpu().numpy() steps = np.expand_dims(steps, axis=tuple(range(1, embedding.ndim))) interpolations = [] # Interpolate between text embeddings # TODO(aryan): Think of a better way of doing this # See if it can be done parallelly instead for i in range(embedding.shape[0] - 1): interpolations.append(interpolation_fn(embedding[i], embedding[i + 1], steps).squeeze(dim=1)) interpolations = torch.cat(interpolations) return interpolations @torch.no_grad() def interpolate_latent( self, start_latent: torch.Tensor, end_latent: torch.Tensor, num_interpolation_steps: Union[int, List[int]], interpolation_type: str, ) -> torch.Tensor: if interpolation_type == "lerp": interpolation_fn = lerp elif interpolation_type == "slerp": interpolation_fn = slerp latent = torch.cat([start_latent, end_latent]) steps = torch.linspace(0, 1, num_interpolation_steps, dtype=latent.dtype).cpu().numpy() steps = np.expand_dims(steps, axis=tuple(range(1, latent.ndim))) interpolations = [] # Interpolate between latents # TODO: Think of a better way of doing this # See if it can be done parallelly instead for i in range(latent.shape[0] - 1): interpolations.append(interpolation_fn(latent[i], latent[i + 1], steps).squeeze(dim=1)) return torch.cat(interpolations) @property def guidance_scale(self): return self._guidance_scale @property def cross_attention_kwargs(self): return self._cross_attention_kwargs @property def clip_skip(self): return self._clip_skip @property def num_timesteps(self): return self._num_timesteps @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 4, num_interpolation_steps: int = 8, original_inference_steps: int = None, guidance_scale: float = 8.5, num_images_per_prompt: Optional[int] = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, 
cross_attention_kwargs: Optional[Dict[str, Any]] = None, clip_skip: Optional[int] = None, callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], embedding_interpolation_type: str = "lerp", latent_interpolation_type: str = "slerp", process_batch_size: int = 4, **kwargs, ): r""" The call function to the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated image. width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. original_inference_steps (`int`, *optional*): The original number of inference steps use to generate a linearly-spaced timestep schedule, from which we will draw `num_inference_steps` evenly spaced timesteps from as our final timestep schedule, following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the scheduler's `original_inference_steps` attribute. guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. Note that the original latent consistency models paper uses a different CFG formulation where the guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when `guidance_scale > 0`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. embedding_interpolation_type (`str`, *optional*, defaults to `"lerp"`): The type of interpolation to use for interpolating between text embeddings. Choose between `"lerp"` and `"slerp"`. latent_interpolation_type (`str`, *optional*, defaults to `"slerp"`): The type of interpolation to use for interpolating between latents. Choose between `"lerp"` and `"slerp"`. process_batch_size (`int`, *optional*, defaults to 4): The batch size to use for processing the images. This is useful when generating a large number of images and you want to avoid running out of memory. Examples: Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images and the second element is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content. """ callback = kwargs.pop("callback", None) callback_steps = kwargs.pop("callback_steps", None) if callback is not None: deprecate( "callback", "1.0.0", "Passing `callback` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`", ) if callback_steps is not None: deprecate( "callback_steps", "1.0.0", "Passing `callback_steps` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`", ) # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # 1. Check inputs. Raise error if not correct self.check_inputs(prompt, height, width, callback_steps, prompt_embeds, callback_on_step_end_tensor_inputs) self._guidance_scale = guidance_scale self._clip_skip = clip_skip self._cross_attention_kwargs = cross_attention_kwargs # 2. 
Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if batch_size < 2: raise ValueError(f"`prompt` must have length of at least 2 but found {batch_size}") if num_images_per_prompt != 1: raise ValueError("`num_images_per_prompt` must be `1` as no other value is supported yet") if prompt_embeds is not None: raise ValueError("`prompt_embeds` must be None since it is not supported yet") if latents is not None: raise ValueError("`latents` must be None since it is not supported yet") device = self._execution_device # do_classifier_free_guidance = guidance_scale > 1.0 lora_scale = ( self.cross_attention_kwargs.get("scale", None) if self.cross_attention_kwargs is not None else None ) self.scheduler.set_timesteps(num_inference_steps, device, original_inference_steps=original_inference_steps) timesteps = self.scheduler.timesteps num_channels_latents = self.unet.config.in_channels # bs = batch_size * num_images_per_prompt # 3. Encode initial input prompt prompt_embeds_1, _ = self.encode_prompt( prompt[:1], device, num_images_per_prompt=num_images_per_prompt, do_classifier_free_guidance=False, negative_prompt=None, prompt_embeds=prompt_embeds, negative_prompt_embeds=None, lora_scale=lora_scale, clip_skip=self.clip_skip, ) # 4. Prepare initial latent variables latents_1 = self.prepare_latents( 1, num_channels_latents, height, width, prompt_embeds_1.dtype, device, generator, latents, ) extra_step_kwargs = self.prepare_extra_step_kwargs(generator, None) num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order self._num_timesteps = len(timesteps) images = [] # 5. Iterate over prompts and perform latent walk. Note that we do this two prompts at a time # otherwise the memory usage ends up being too high. with self.progress_bar(total=batch_size - 1) as prompt_progress_bar: for i in range(1, batch_size): # 6. Encode current prompt prompt_embeds_2, _ = self.encode_prompt( prompt[i : i + 1], device, num_images_per_prompt=num_images_per_prompt, do_classifier_free_guidance=False, negative_prompt=None, prompt_embeds=prompt_embeds, negative_prompt_embeds=None, lora_scale=lora_scale, clip_skip=self.clip_skip, ) # 7. Prepare current latent variables latents_2 = self.prepare_latents( 1, num_channels_latents, height, width, prompt_embeds_2.dtype, device, generator, latents, ) # 8. Interpolate between previous and current prompt embeddings and latents inference_embeddings = self.interpolate_embedding( start_embedding=prompt_embeds_1, end_embedding=prompt_embeds_2, num_interpolation_steps=num_interpolation_steps, interpolation_type=embedding_interpolation_type, ) inference_latents = self.interpolate_latent( start_latent=latents_1, end_latent=latents_2, num_interpolation_steps=num_interpolation_steps, interpolation_type=latent_interpolation_type, ) next_prompt_embeds = inference_embeddings[-1:].detach().clone() next_latents = inference_latents[-1:].detach().clone() bs = num_interpolation_steps # 9. Perform inference in batches. Note the use of `process_batch_size` to control the batch size # of the inference. This is useful for reducing memory usage and can be configured based on the # available GPU memory. 
with self.progress_bar( total=(bs + process_batch_size - 1) // process_batch_size ) as batch_progress_bar: for batch_index in range(0, bs, process_batch_size): batch_inference_latents = inference_latents[batch_index : batch_index + process_batch_size] batch_inference_embeddings = inference_embeddings[ batch_index : batch_index + process_batch_size ] self.scheduler.set_timesteps( num_inference_steps, device, original_inference_steps=original_inference_steps ) timesteps = self.scheduler.timesteps current_bs = batch_inference_embeddings.shape[0] w = torch.tensor(self.guidance_scale - 1).repeat(current_bs) w_embedding = self.get_guidance_scale_embedding( w, embedding_dim=self.unet.config.time_cond_proj_dim ).to(device=device, dtype=latents_1.dtype) # 10. Perform inference for current batch with self.progress_bar(total=num_inference_steps) as progress_bar: for index, t in enumerate(timesteps): batch_inference_latents = batch_inference_latents.to(batch_inference_embeddings.dtype) # model prediction (v-prediction, eps, x) model_pred = self.unet( batch_inference_latents, t, timestep_cond=w_embedding, encoder_hidden_states=batch_inference_embeddings, cross_attention_kwargs=self.cross_attention_kwargs, return_dict=False, )[0] # compute the previous noisy sample x_t -> x_t-1 batch_inference_latents, denoised = self.scheduler.step( model_pred, t, batch_inference_latents, **extra_step_kwargs, return_dict=False ) if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, index, t, callback_kwargs) batch_inference_latents = callback_outputs.pop("latents", batch_inference_latents) batch_inference_embeddings = callback_outputs.pop( "prompt_embeds", batch_inference_embeddings ) w_embedding = callback_outputs.pop("w_embedding", w_embedding) denoised = callback_outputs.pop("denoised", denoised) # call the callback, if provided if index == len(timesteps) - 1 or ( (index + 1) > num_warmup_steps and (index + 1) % self.scheduler.order == 0 ): progress_bar.update() if callback is not None and index % callback_steps == 0: step_idx = index // getattr(self.scheduler, "order", 1) callback(step_idx, t, batch_inference_latents) denoised = denoised.to(batch_inference_embeddings.dtype) # Note: This is not supported because you would get black images in your latent walk if # NSFW concept is detected # if not output_type == "latent": # image = self.vae.decode(denoised / self.vae.config.scaling_factor, return_dict=False)[0] # image, has_nsfw_concept = self.run_safety_checker(image, device, inference_embeddings.dtype) # else: # image = denoised # has_nsfw_concept = None # if has_nsfw_concept is None: # do_denormalize = [True] * image.shape[0] # else: # do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] image = self.vae.decode(denoised / self.vae.config.scaling_factor, return_dict=False)[0] do_denormalize = [True] * image.shape[0] has_nsfw_concept = None image = self.image_processor.postprocess( image, output_type=output_type, do_denormalize=do_denormalize ) images.append(image) batch_progress_bar.update() prompt_embeds_1 = next_prompt_embeds latents_1 = next_latents prompt_progress_bar.update() # 11. 
        # Determine what should be returned
        if output_type == "pil":
            images = [image for image_list in images for image in image_list]
        elif output_type == "np":
            images = np.concatenate(images)
        elif output_type == "pt":
            images = torch.cat(images)
        else:
            raise ValueError("`output_type` must be one of 'pil', 'np' or 'pt'.")

        # Offload all models
        self.maybe_free_model_hooks()

        if not return_dict:
            return (images, has_nsfw_concept)

        return StableDiffusionPipelineOutput(images=images, nsfw_content_detected=has_nsfw_concept)
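
# ---------------------------------------------------------------------------
# Usage sketch (not part of the pipeline implementation above).
# A minimal, hedged example of how the interpolation pipeline defined in this
# file could be driven once it is loaded as a community pipeline via the
# `custom_pipeline` argument. The checkpoint name "SimianLuo/LCM_Dreamshaper_v7",
# the prompts and the output filenames are illustrative assumptions only;
# substitute any latent consistency model checkpoint you actually have.
if __name__ == "__main__":
    import torch

    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "SimianLuo/LCM_Dreamshaper_v7",  # assumed LCM checkpoint
        custom_pipeline="latent_consistency_interpolate",  # assumes this file's community name
        torch_dtype=torch.float16,
    )
    pipe.to("cuda")

    # At least two prompts are required; the pipeline walks the text-embedding
    # and latent space between each consecutive pair of prompts.
    prompts = [
        "A photo of a cat sitting on a windowsill",
        "A photo of a dog sitting on a windowsill",
        "A watercolor painting of a fox in a forest",
    ]

    output = pipe(
        prompt=prompts,
        num_inference_steps=4,      # LCMs need very few denoising steps
        num_interpolation_steps=8,  # frames generated between each prompt pair
        process_batch_size=4,       # lower this if you run out of GPU memory
        generator=torch.Generator("cuda").manual_seed(0),
    )

    # With the default output_type="pil", `output.images` is a flat list of PIL images.
    for i, image in enumerate(output.images):
        image.save(f"interpolation_{i:03d}.png")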
diffusers/examples/community/latent_consistency_interpolate.py/0
{ "file_path": "diffusers/examples/community/latent_consistency_interpolate.py", "repo_id": "diffusers", "token_count": 22091 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Note: # This pipeline relies on a "hack" discovered by the community that allows # the generation of videos given an input image with AnimateDiff. It works # by creating a copy of the image `num_frames` times and progressively adding # more noise to the image based on the strength and latent interpolation method. import inspect from types import FunctionType from typing import Any, Callable, Dict, List, Optional, Union import numpy as np import torch from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection from diffusers.image_processor import PipelineImageInput, VaeImageProcessor from diffusers.loaders import IPAdapterMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin from diffusers.models import AutoencoderKL, ImageProjection, UNet2DConditionModel, UNetMotionModel from diffusers.models.lora import adjust_lora_scale_text_encoder from diffusers.models.unet_motion_model import MotionAdapter from diffusers.pipelines.animatediff.pipeline_output import AnimateDiffPipelineOutput from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin from diffusers.schedulers import ( DDIMScheduler, DPMSolverMultistepScheduler, EulerAncestralDiscreteScheduler, EulerDiscreteScheduler, LMSDiscreteScheduler, PNDMScheduler, ) from diffusers.utils import USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers from diffusers.utils.torch_utils import randn_tensor logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import MotionAdapter, DiffusionPipeline, DDIMScheduler >>> from diffusers.utils import export_to_gif, load_image >>> model_id = "SG161222/Realistic_Vision_V5.1_noVAE" >>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") >>> pipe = DiffusionPipeline.from_pretrained("SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, custom_pipeline="pipeline_animatediff_img2video").to("cuda") >>> pipe.scheduler = pipe.scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", beta_schedule="linear", steps_offset=1) >>> image = load_image("snail.png") >>> output = pipe(image=image, prompt="A snail moving on the ground", strength=0.8, latent_interpolation_method="slerp") >>> frames = output.frames[0] >>> export_to_gif(frames, "animation.gif") ``` """ def lerp( v0: torch.Tensor, v1: torch.Tensor, t: Union[float, torch.Tensor], ) -> torch.Tensor: r""" Linear Interpolation between two tensors. Args: v0 (`torch.Tensor`): First tensor. v1 (`torch.Tensor`): Second tensor. t: (`float` or `torch.Tensor`): Interpolation factor. 
""" t_is_float = False input_device = v0.device v0 = v0.cpu().numpy() v1 = v1.cpu().numpy() if isinstance(t, torch.Tensor): t = t.cpu().numpy() else: t_is_float = True t = np.array([t], dtype=v0.dtype) t = t[..., None] v0 = v0[None, ...] v1 = v1[None, ...] v2 = (1 - t) * v0 + t * v1 if t_is_float and v0.ndim > 1: assert v2.shape[0] == 1 v2 = np.squeeze(v2, axis=0) v2 = torch.from_numpy(v2).to(input_device) return v2 def slerp( v0: torch.Tensor, v1: torch.Tensor, t: Union[float, torch.Tensor], DOT_THRESHOLD: float = 0.9995, ) -> torch.Tensor: r""" Spherical Linear Interpolation between two tensors. Args: v0 (`torch.Tensor`): First tensor. v1 (`torch.Tensor`): Second tensor. t: (`float` or `torch.Tensor`): Interpolation factor. DOT_THRESHOLD (`float`): Dot product threshold exceeding which linear interpolation will be used because input tensors are close to parallel. """ t_is_float = False input_device = v0.device v0 = v0.cpu().numpy() v1 = v1.cpu().numpy() if isinstance(t, torch.Tensor): t = t.cpu().numpy() else: t_is_float = True t = np.array([t], dtype=v0.dtype) dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1))) if np.abs(dot) > DOT_THRESHOLD: # v0 and v1 are close to parallel, so use linear interpolation instead v2 = lerp(v0, v1, t) else: theta_0 = np.arccos(dot) sin_theta_0 = np.sin(theta_0) theta_t = theta_0 * t sin_theta_t = np.sin(theta_t) s0 = np.sin(theta_0 - theta_t) / sin_theta_0 s1 = sin_theta_t / sin_theta_0 s0 = s0[..., None] s1 = s1[..., None] v0 = v0[None, ...] v1 = v1[None, ...] v2 = s0 * v0 + s1 * v1 if t_is_float and v0.ndim > 1: assert v2.shape[0] == 1 v2 = np.squeeze(v2, axis=0) v2 = torch.from_numpy(v2).to(input_device) return v2 # Copied from diffusers.pipelines.animatediff.pipeline_animatediff.tensor2vid def tensor2vid(video: torch.Tensor, processor, output_type="np"): batch_size, channels, num_frames, height, width = video.shape outputs = [] for batch_idx in range(batch_size): batch_vid = video[batch_idx].permute(1, 0, 2, 3) batch_output = processor.postprocess(batch_vid, output_type) outputs.append(batch_output) if output_type == "np": outputs = np.stack(outputs) elif output_type == "pt": outputs = torch.stack(outputs) elif not output_type == "pil": raise ValueError(f"{output_type} does not exist. Please choose one of ['np', 'pt', 'pil']") return outputs # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.retrieve_latents def retrieve_latents( encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample" ): if hasattr(encoder_output, "latent_dist") and sample_mode == "sample": return encoder_output.latent_dist.sample(generator) elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax": return encoder_output.latent_dist.mode() elif hasattr(encoder_output, "latents"): return encoder_output.latents else: raise AttributeError("Could not access latents of provided encoder_output") # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps def retrieve_timesteps( scheduler, num_inference_steps: Optional[int] = None, device: Optional[Union[str, torch.device]] = None, timesteps: Optional[List[int]] = None, **kwargs, ): """ Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`. Args: scheduler (`SchedulerMixin`): The scheduler to get timesteps from. 
num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps` must be `None`. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. timesteps (`List[int]`, *optional*): Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default timestep spacing strategy of the scheduler is used. If `timesteps` is passed, `num_inference_steps` must be `None`. Returns: `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the second element is the number of inference steps. """ if timesteps is not None: accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accepts_timesteps: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" timestep schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) else: scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) timesteps = scheduler.timesteps return timesteps, num_inference_steps class AnimateDiffImgToVideoPipeline( DiffusionPipeline, StableDiffusionMixin, TextualInversionLoaderMixin, IPAdapterMixin, StableDiffusionLoraLoaderMixin, ): r""" Pipeline for image-to-video generation. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer (`CLIPTokenizer`): A [`~transformers.CLIPTokenizer`] to tokenize text. unet ([`UNet2DConditionModel`]): A [`UNet2DConditionModel`] used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter ([`MotionAdapter`]): A [`MotionAdapter`] to be used in combination with `unet` to denoise the encoded video latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
""" model_cpu_offload_seq = "text_encoder->image_encoder->unet->vae" _optional_components = ["feature_extractor", "image_encoder"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, motion_adapter: MotionAdapter, scheduler: Union[ DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, ], feature_extractor: CLIPImageProcessor = None, image_encoder: CLIPVisionModelWithProjection = None, ): super().__init__() unet = UNetMotionModel.from_unet2d(unet, motion_adapter) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, motion_adapter=motion_adapter, scheduler=scheduler, feature_extractor=feature_extractor, image_encoder=image_encoder, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_prompt with num_images_per_prompt -> num_videos_per_prompt def encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. 
""" # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, StableDiffusionLoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: procecss multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None if clip_skip is None: prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask) prompt_embeds = prompt_embeds[0] else: prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True ) # Access the `hidden_states` first, that contains a tuple of # all the hidden states from the encoder layers. Then index into # the tuple to access the hidden states from the desired layer. prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)] # We also need to apply the final LayerNorm here to not mess with the # representations. The `last_hidden_states` that we typically use for # obtaining the final prompt representations passes through the LayerNorm # layer. prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds) if self.text_encoder is not None: prompt_embeds_dtype = self.text_encoder.dtype elif self.unet is not None: prompt_embeds_dtype = self.unet.dtype else: prompt_embeds_dtype = prompt_embeds.dtype prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: procecss multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if isinstance(self, StableDiffusionLoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) return prompt_embeds, negative_prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_image def encode_image(self, image, device, num_images_per_prompt, output_hidden_states=None): dtype = next(self.image_encoder.parameters()).dtype if not isinstance(image, torch.Tensor): image = self.feature_extractor(image, return_tensors="pt").pixel_values image = image.to(device=device, dtype=dtype) if output_hidden_states: image_enc_hidden_states = self.image_encoder(image, output_hidden_states=True).hidden_states[-2] image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_enc_hidden_states = self.image_encoder( torch.zeros_like(image), output_hidden_states=True ).hidden_states[-2] uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave( num_images_per_prompt, dim=0 ) return image_enc_hidden_states, uncond_image_enc_hidden_states else: image_embeds = self.image_encoder(image).image_embeds image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_embeds = torch.zeros_like(image_embeds) return image_embeds, uncond_image_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_ip_adapter_image_embeds def prepare_ip_adapter_image_embeds( self, ip_adapter_image, ip_adapter_image_embeds, device, num_images_per_prompt ): if ip_adapter_image_embeds is None: if not isinstance(ip_adapter_image, list): ip_adapter_image = [ip_adapter_image] if len(ip_adapter_image) != len(self.unet.encoder_hid_proj.image_projection_layers): raise ValueError( f"`ip_adapter_image` must have same length as the number of IP Adapters. 
Got {len(ip_adapter_image)} images and {len(self.unet.encoder_hid_proj.image_projection_layers)} IP Adapters." ) image_embeds = [] for single_ip_adapter_image, image_proj_layer in zip( ip_adapter_image, self.unet.encoder_hid_proj.image_projection_layers ): output_hidden_state = not isinstance(image_proj_layer, ImageProjection) single_image_embeds, single_negative_image_embeds = self.encode_image( single_ip_adapter_image, device, 1, output_hidden_state ) single_image_embeds = torch.stack([single_image_embeds] * num_images_per_prompt, dim=0) single_negative_image_embeds = torch.stack( [single_negative_image_embeds] * num_images_per_prompt, dim=0 ) if self.do_classifier_free_guidance: single_image_embeds = torch.cat([single_negative_image_embeds, single_image_embeds]) single_image_embeds = single_image_embeds.to(device) image_embeds.append(single_image_embeds) else: image_embeds = ip_adapter_image_embeds return image_embeds # Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents def decode_latents(self, latents): latents = 1 / self.vae.config.scaling_factor * latents batch_size, channels, num_frames, height, width = latents.shape latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width) image = self.vae.decode(latents).sample video = ( image[None, :] .reshape( ( batch_size, num_frames, -1, ) + image.shape[2:] ) .permute(0, 2, 1, 3, 4) ) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 video = video.float() return video # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, prompt, height, width, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None, callback_on_step_end_tensor_inputs=None, latent_interpolation_method=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. 
Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) if latent_interpolation_method is not None: if latent_interpolation_method not in ["lerp", "slerp"] and not isinstance( latent_interpolation_method, FunctionType ): raise ValueError( "`latent_interpolation_method` must be one of `lerp`, `slerp` or a Callable[[torch.Tensor, torch.Tensor, int], torch.Tensor]" ) def prepare_latents( self, image, strength, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None, latent_interpolation_method="slerp", ): shape = ( batch_size, num_channels_latents, num_frames, height // self.vae_scale_factor, width // self.vae_scale_factor, ) if latents is None: image = image.to(device=device, dtype=dtype) if image.shape[1] == 4: latents = image else: # make sure the VAE is in float32 mode, as it overflows in float16 if self.vae.config.force_upcast: image = image.float() self.vae.to(dtype=torch.float32) if isinstance(generator, list): if len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) init_latents = [ retrieve_latents(self.vae.encode(image[i : i + 1]), generator=generator[i]) for i in range(batch_size) ] init_latents = torch.cat(init_latents, dim=0) else: init_latents = retrieve_latents(self.vae.encode(image), generator=generator) if self.vae.config.force_upcast: self.vae.to(dtype) init_latents = init_latents.to(dtype) init_latents = self.vae.config.scaling_factor * init_latents latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) latents = latents * self.scheduler.init_noise_sigma if latent_interpolation_method == "lerp": def latent_cls(v0, v1, index): return lerp(v0, v1, index / num_frames * (1 - strength)) elif latent_interpolation_method == "slerp": def latent_cls(v0, v1, index): return slerp(v0, v1, index / num_frames * (1 - strength)) else: latent_cls = latent_interpolation_method for i in range(num_frames): latents[:, :, i, :, :] = latent_cls(latents[:, :, i, :, :], init_latents, i) else: if shape != latents.shape: # [B, C, F, H, W] raise ValueError(f"`latents` expected to have {shape=}, but found {latents.shape=}") latents = latents.to(device, dtype=dtype) return latents @torch.no_grad() def __call__( self, image: PipelineImageInput, prompt: Optional[Union[str, List[str]]] = None, height: Optional[int] = None, width: Optional[int] = None, num_frames: int = 16, num_inference_steps: int = 50, timesteps: Optional[List[int]] = None, guidance_scale: float = 7.5, strength: float = 0.8, negative_prompt: Optional[Union[str, List[str]]] = None, num_videos_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, ip_adapter_image: Optional[PipelineImageInput] = None, ip_adapter_image_embeds: Optional[PipelineImageInput] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: Optional[int] = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, clip_skip: Optional[int] = None, latent_interpolation_method: Union[str, Callable[[torch.Tensor, torch.Tensor, int], torch.Tensor]] = "slerp", ): r""" The call function to the pipeline for generation. Args: image (`PipelineImageInput`): The input image to condition the generation on. prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated video. width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated video. num_frames (`int`, *optional*, defaults to 16): The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds amounts to 2 seconds of video. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality videos at the expense of slower inference. strength (`float`, *optional*, defaults to 0.8): Higher strength leads to more differences between original image and generated video. guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. 
Guidance scale is enabled when `guidance_scale > 1`. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. Latents should be of shape `(batch_size, num_channel, num_frames, height, width)`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. ip_adapter_image: (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters. ip_adapter_image_embeds (`List[torch.Tensor]`, *optional*): Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`AnimateDiffImgToVideoPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that calls every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. latent_interpolation_method (`str` or `Callable[[torch.Tensor, torch.Tensor, int], torch.Tensor]]`, *optional*): Must be one of "lerp", "slerp" or a callable that takes in a random noisy latent, image latent and a frame index as input and returns an initial latent for sampling. 
Examples: Returns: [`~pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated frames. """ # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor num_videos_per_prompt = 1 # 1. Check inputs. Raise error if not correct self.check_inputs( prompt=prompt, height=height, width=width, callback_steps=callback_steps, negative_prompt=negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, latent_interpolation_method=latent_interpolation_method, ) # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Encode input prompt text_encoder_lora_scale = ( cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None ) prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt, device, num_videos_per_prompt, do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=text_encoder_lora_scale, clip_skip=clip_skip, ) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes if do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) if ip_adapter_image is not None: image_embeds = self.prepare_ip_adapter_image_embeds( ip_adapter_image, ip_adapter_image_embeds, device, batch_size * num_videos_per_prompt ) # 4. Preprocess image image = self.image_processor.preprocess(image, height=height, width=width) # 5. Prepare timesteps timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, num_inference_steps, device, timesteps) # 6. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( image=image, strength=strength, batch_size=batch_size * num_videos_per_prompt, num_channels_latents=num_channels_latents, num_frames=num_frames, height=height, width=width, dtype=prompt_embeds.dtype, device=device, generator=generator, latents=latents, latent_interpolation_method=latent_interpolation_method, ) # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 8. Add image embeds for IP-Adapter added_cond_kwargs = ( {"image_embeds": image_embeds} if ip_adapter_image is not None or ip_adapter_image_embeds is not None else None ) # 9. 
Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs=cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, ).sample # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: callback(i, t, latents) if output_type == "latent": return AnimateDiffPipelineOutput(frames=latents) # 10. Post-processing if output_type == "latent": video = latents else: video_tensor = self.decode_latents(latents) video = tensor2vid(video_tensor, self.image_processor, output_type=output_type) # 11. Offload all models self.maybe_free_model_hooks() if not return_dict: return (video,) return AnimateDiffPipelineOutput(frames=video)
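
# ---------------------------------------------------------------------------
# Usage sketch (not part of the pipeline implementation above).
# A hedged illustration of two things defined in this file: the `lerp`/`slerp`
# helpers, and the fact that `latent_interpolation_method` also accepts a
# custom callable `(random_latent, image_latent, frame_index) -> latent`.
# The model ids mirror the module docstring example; "snail.png", the callable
# name and the blending factor are made up for illustration only.
if __name__ == "__main__":
    import torch

    # 1) The interpolation helpers operate on plain tensors.
    v0 = torch.randn(1, 4, 8, 8)
    v1 = torch.randn(1, 4, 8, 8)
    halfway_linear = lerp(v0, v1, 0.5)
    halfway_spherical = slerp(v0, v1, 0.5)
    print(halfway_linear.shape, halfway_spherical.shape)

    # 2) A custom per-frame latent initialisation: blend a fixed amount of the
    #    encoded image latent into the random latent for every frame, instead of
    #    ramping the mix with the frame index (a sketch, not a recommended setting).
    def constant_mix(random_latent: torch.Tensor, image_latent: torch.Tensor, frame_index: int) -> torch.Tensor:
        return slerp(random_latent, image_latent, 0.2)

    from diffusers import DDIMScheduler, DiffusionPipeline, MotionAdapter
    from diffusers.utils import export_to_gif, load_image

    adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
    pipe = DiffusionPipeline.from_pretrained(
        "SG161222/Realistic_Vision_V5.1_noVAE",
        motion_adapter=adapter,
        custom_pipeline="pipeline_animatediff_img2video",
    ).to("cuda")
    pipe.scheduler = DDIMScheduler.from_pretrained(
        "SG161222/Realistic_Vision_V5.1_noVAE",
        subfolder="scheduler",
        clip_sample=False,
        timestep_spacing="linspace",
        beta_schedule="linear",
        steps_offset=1,
    )

    image = load_image("snail.png")  # placeholder input image
    output = pipe(
        image=image,
        prompt="A snail moving on the ground",
        strength=0.8,
        latent_interpolation_method=constant_mix,  # instead of the string "slerp"/"lerp"
    )
    frames = output.frames[0]
    export_to_gif(frames, "animation.gif")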
diffusers/examples/community/pipeline_animatediff_img2video.py/0
{ "file_path": "diffusers/examples/community/pipeline_animatediff_img2video.py", "repo_id": "diffusers", "token_count": 20616 }
import types from typing import List, Optional, Tuple, Union import torch from transformers import CLIPTextModelWithProjection, CLIPTokenizer from transformers.models.clip.modeling_clip import CLIPTextModelOutput from diffusers.models import PriorTransformer from diffusers.pipelines import DiffusionPipeline, StableDiffusionImageVariationPipeline from diffusers.schedulers import UnCLIPScheduler from diffusers.utils import logging from diffusers.utils.torch_utils import randn_tensor logger = logging.get_logger(__name__) # pylint: disable=invalid-name def _encode_image(self, image, device, num_images_per_prompt, do_classifier_free_guidance): image = image.to(device=device) image_embeddings = image # take image as image_embeddings image_embeddings = image_embeddings.unsqueeze(1) # duplicate image embeddings for each generation per prompt, using mps friendly method bs_embed, seq_len, _ = image_embeddings.shape image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1) image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) if do_classifier_free_guidance: uncond_embeddings = torch.zeros_like(image_embeddings) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes image_embeddings = torch.cat([uncond_embeddings, image_embeddings]) return image_embeddings class StableUnCLIPPipeline(DiffusionPipeline): def __init__( self, prior: PriorTransformer, tokenizer: CLIPTokenizer, text_encoder: CLIPTextModelWithProjection, prior_scheduler: UnCLIPScheduler, decoder_pipe_kwargs: Optional[dict] = None, ): super().__init__() decoder_pipe_kwargs = {"image_encoder": None} if decoder_pipe_kwargs is None else decoder_pipe_kwargs decoder_pipe_kwargs["torch_dtype"] = decoder_pipe_kwargs.get("torch_dtype", None) or prior.dtype self.decoder_pipe = StableDiffusionImageVariationPipeline.from_pretrained( "lambdalabs/sd-image-variations-diffusers", **decoder_pipe_kwargs ) # replace `_encode_image` method self.decoder_pipe._encode_image = types.MethodType(_encode_image, self.decoder_pipe) self.register_modules( prior=prior, tokenizer=tokenizer, text_encoder=text_encoder, prior_scheduler=prior_scheduler, ) def _encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None, text_attention_mask: Optional[torch.Tensor] = None, ): if text_model_output is None: batch_size = len(prompt) if isinstance(prompt, list) else 1 # get prompt text embeddings text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, return_tensors="pt", ) text_input_ids = text_inputs.input_ids text_mask = text_inputs.attention_mask.bool().to(device) if text_input_ids.shape[-1] > self.tokenizer.model_max_length: removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] text_encoder_output = self.text_encoder(text_input_ids.to(device)) text_embeddings = text_encoder_output.text_embeds text_encoder_hidden_states = text_encoder_output.last_hidden_state else: batch_size = text_model_output[0].shape[0] text_embeddings, text_encoder_hidden_states = text_model_output[0], 
text_model_output[1] text_mask = text_attention_mask text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0) text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0) if do_classifier_free_guidance: uncond_tokens = [""] * batch_size uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) uncond_text_mask = uncond_input.attention_mask.bool().to(device) uncond_embeddings_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device)) uncond_embeddings = uncond_embeddings_text_encoder_output.text_embeds uncond_text_encoder_hidden_states = uncond_embeddings_text_encoder_output.last_hidden_state # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = uncond_embeddings.shape[1] uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt) uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len) seq_len = uncond_text_encoder_hidden_states.shape[1] uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1) uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view( batch_size * num_images_per_prompt, seq_len, -1 ) uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0) # done duplicates # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states]) text_mask = torch.cat([uncond_text_mask, text_mask]) return text_embeddings, text_encoder_hidden_states, text_mask @property def _execution_device(self): r""" Returns the device on which the pipeline's models will be executed. After calling `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module hooks. 
""" if self.device != torch.device("meta") or not hasattr(self.prior, "_hf_hook"): return self.device for module in self.prior.modules(): if ( hasattr(module, "_hf_hook") and hasattr(module._hf_hook, "execution_device") and module._hf_hook.execution_device is not None ): return torch.device(module._hf_hook.execution_device) return self.device def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: if latents.shape != shape: raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") latents = latents.to(device) latents = latents * scheduler.init_noise_sigma return latents def to(self, torch_device: Optional[Union[str, torch.device]] = None): self.decoder_pipe.to(torch_device) super().to(torch_device) @torch.no_grad() def __call__( self, prompt: Optional[Union[str, List[str]]] = None, height: Optional[int] = None, width: Optional[int] = None, num_images_per_prompt: int = 1, prior_num_inference_steps: int = 25, generator: Optional[torch.Generator] = None, prior_latents: Optional[torch.Tensor] = None, text_model_output: Optional[Union[CLIPTextModelOutput, Tuple]] = None, text_attention_mask: Optional[torch.Tensor] = None, prior_guidance_scale: float = 4.0, decoder_guidance_scale: float = 8.0, decoder_num_inference_steps: int = 50, decoder_num_images_per_prompt: Optional[int] = 1, decoder_eta: float = 0.0, output_type: Optional[str] = "pil", return_dict: bool = True, ): if prompt is not None: if isinstance(prompt, str): batch_size = 1 elif isinstance(prompt, list): batch_size = len(prompt) else: raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") else: batch_size = text_model_output[0].shape[0] device = self._execution_device batch_size = batch_size * num_images_per_prompt do_classifier_free_guidance = prior_guidance_scale > 1.0 or decoder_guidance_scale > 1.0 text_embeddings, text_encoder_hidden_states, text_mask = self._encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, text_model_output, text_attention_mask ) # prior self.prior_scheduler.set_timesteps(prior_num_inference_steps, device=device) prior_timesteps_tensor = self.prior_scheduler.timesteps embedding_dim = self.prior.config.embedding_dim prior_latents = self.prepare_latents( (batch_size, embedding_dim), text_embeddings.dtype, device, generator, prior_latents, self.prior_scheduler, ) for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([prior_latents] * 2) if do_classifier_free_guidance else prior_latents predicted_image_embedding = self.prior( latent_model_input, timestep=t, proj_embedding=text_embeddings, encoder_hidden_states=text_encoder_hidden_states, attention_mask=text_mask, ).predicted_image_embedding if do_classifier_free_guidance: predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2) predicted_image_embedding = predicted_image_embedding_uncond + prior_guidance_scale * ( predicted_image_embedding_text - predicted_image_embedding_uncond ) if i + 1 == prior_timesteps_tensor.shape[0]: prev_timestep = None else: prev_timestep = prior_timesteps_tensor[i + 1] prior_latents = self.prior_scheduler.step( predicted_image_embedding, timestep=t, sample=prior_latents, generator=generator, prev_timestep=prev_timestep, ).prev_sample prior_latents = 
self.prior.post_process_latents(prior_latents) image_embeddings = prior_latents output = self.decoder_pipe( image=image_embeddings, height=height, width=width, num_inference_steps=decoder_num_inference_steps, guidance_scale=decoder_guidance_scale, generator=generator, output_type=output_type, return_dict=return_dict, num_images_per_prompt=decoder_num_images_per_prompt, eta=decoder_eta, ) return output
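# Illustrative usage sketch (added for documentation, not part of the original file). It is kept as a comment
# because the exact prior checkpoint you pair with this pipeline may vary; "kakaobrain/karlo-v1-alpha" is an
# assumption for a repository that provides the prior, text encoder, tokenizer and prior scheduler.
#
#   import torch
#   from diffusers import DiffusionPipeline
#
#   pipe = DiffusionPipeline.from_pretrained(
#       "kakaobrain/karlo-v1-alpha",
#       custom_pipeline="stable_unclip",
#       torch_dtype=torch.float16,
#       decoder_pipe_kwargs={"image_encoder": None},
#   ).to("cuda")
#   image = pipe("a shiba inu wearing a beret and black turtleneck", prior_num_inference_steps=25).images[0]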
diffusers/examples/community/stable_unclip.py/0
{ "file_path": "diffusers/examples/community/stable_unclip.py", "repo_id": "diffusers", "token_count": 5488 }
# ControlNet training example [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala. This example is based on the [training example in the original ControlNet repository](https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md). It trains a ControlNet to fill circles using a [small synthetic dataset](https://huggingface.co/datasets/fusing/fill50k). ## Installing the dependencies Before running the scripts, make sure to install the library's training dependencies: **Important** To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install -e . ``` Then cd in the example folder and run ```bash pip install -r requirements.txt ``` And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: ```bash accelerate config ``` Or for a default accelerate configuration without answering questions about your environment ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell e.g. a notebook ```python from accelerate.utils import write_basic_config write_basic_config() ``` ## Circle filling dataset The original dataset is hosted in the [ControlNet repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip). We re-uploaded it to be compatible with `datasets` [here](https://huggingface.co/datasets/fusing/fill50k). Note that `datasets` handles dataloading within the training script. Our training examples use [Stable Diffusion 1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) as the original set of ControlNet models were trained from it. However, ControlNet can be trained to augment any Stable Diffusion compatible model (such as [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)) or [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1). ## Training Our training examples use two test conditioning images. They can be downloaded by running ```sh wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png ``` ```bash export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5" export OUTPUT_DIR="path to save model" accelerate launch train_controlnet.py \ --pretrained_model_name_or_path=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --dataset_name=fusing/fill50k \ --resolution=512 \ --learning_rate=1e-5 \ --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ --train_batch_size=4 ``` This default configuration requires ~38GB VRAM. By default, the training script logs outputs to tensorboard. Pass `--report_to wandb` to use weights and biases. Gradient accumulation with a smaller batch size can be used to reduce training requirements to ~20 GB VRAM. 
```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path to save model"

accelerate launch train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --learning_rate=1e-5 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --train_batch_size=1 \
 --gradient_accumulation_steps=4
```

## Training with multiple GPUs

`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch) for running distributed training with `accelerate`. Here is an example command:

```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path to save model"

accelerate launch --mixed_precision="fp16" --multi_gpu train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --learning_rate=1e-5 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --train_batch_size=4 \
 --mixed_precision="fp16" \
 --tracker_project_name="controlnet-demo" \
 --report_to=wandb
```

## Example results

#### After 300 steps with batch size 8

| Conditioning image | Generated image (prompt: "red circle with blue background") |
|:-------------------:|:-------------------------:|
| ![conditioning image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png) | ![red circle with blue background](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/red_circle_with_blue_background_300_steps.png) |

| Conditioning image | Generated image (prompt: "cyan circle with brown floral background") |
|:-------------------:|:-------------------------:|
| ![conditioning image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png) | ![cyan circle with brown floral background](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/cyan_circle_with_brown_floral_background_300_steps.png) |

#### After 6000 steps with batch size 8

| Conditioning image | Generated image (prompt: "red circle with blue background") |
|:-------------------:|:-------------------------:|
| ![conditioning image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png) | ![red circle with blue background](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/red_circle_with_blue_background_6000_steps.png) |

| Conditioning image | Generated image (prompt: "cyan circle with brown floral background") |
|:-------------------:|:-------------------------:|
| ![conditioning image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png) | ![cyan circle with brown floral background](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/cyan_circle_with_brown_floral_background_6000_steps.png) |

## Training on a 16 GB GPU

Optimizations:
- Gradient checkpointing
- bitsandbytes' 8-bit optimizer

[bitsandbytes install instructions](https://github.com/TimDettmers/bitsandbytes#requirements--installation).
```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path to save model"

accelerate launch train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --learning_rate=1e-5 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --train_batch_size=1 \
 --gradient_accumulation_steps=4 \
 --gradient_checkpointing \
 --use_8bit_adam
```

## Training on a 12 GB GPU

Optimizations:
- Gradient checkpointing
- bitsandbytes' 8-bit optimizer
- xformers
- set grads to none

```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path to save model"

accelerate launch train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --learning_rate=1e-5 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --train_batch_size=1 \
 --gradient_accumulation_steps=4 \
 --gradient_checkpointing \
 --use_8bit_adam \
 --enable_xformers_memory_efficient_attention \
 --set_grads_to_none
```

When using `enable_xformers_memory_efficient_attention`, please make sure to install `xformers` with `pip install xformers`.

## Training on an 8 GB GPU

We have not exhaustively tested DeepSpeed support for ControlNet. While the configuration does save memory, we have not confirmed that it trains successfully. You will very likely have to make changes to the config to have a successful training run.

Optimizations:
- Gradient checkpointing
- xformers
- set grads to none
- DeepSpeed stage 2 with parameter and optimizer offloading
- fp16 mixed precision

[DeepSpeed](https://www.deepspeed.ai/) can offload tensors from VRAM to either CPU or NVME. This requires significantly more RAM (about 25 GB).

Use `accelerate config` to enable DeepSpeed stage 2.

The relevant parts of the resulting accelerate config file are

```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 4
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
```

See [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more DeepSpeed configuration options.

Changing the default Adam optimizer to DeepSpeed's Adam `deepspeed.ops.adam.DeepSpeedCPUAdam` gives a substantial speedup, but it requires a CUDA toolchain with the same version as PyTorch. The 8-bit optimizer does not seem to be compatible with DeepSpeed at the moment.
```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path to save model"

accelerate launch train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --train_batch_size=1 \
 --gradient_accumulation_steps=4 \
 --gradient_checkpointing \
 --enable_xformers_memory_efficient_attention \
 --set_grads_to_none \
 --mixed_precision fp16
```

## Performing inference with the trained ControlNet

The trained ControlNet can be used for inference with the standard ControlNet pipeline by loading the newly trained ControlNet weights. Set `base_model_path` and `controlnet_path` to the values that `--pretrained_model_name_or_path` and `--output_dir` were set to in the training script, respectively.

```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import torch

base_model_path = "path to model"
controlnet_path = "path to controlnet"

controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)

# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# remove the following line if xformers is not installed or when using Torch 2.0.
pipe.enable_xformers_memory_efficient_attention()
# memory optimization.
pipe.enable_model_cpu_offload()

control_image = load_image("./conditioning_image_1.png")
prompt = "pale golden rod circle with old lace background"

# generate image
generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, generator=generator, image=control_image
).images[0]
image.save("./output.png")
```

## Training with Flax/JAX

For faster training on TPUs and GPUs, you can leverage the Flax training example. Follow the instructions above to get the model and dataset before running the script.

### Running on Google Cloud TPU

See below for commands to set up a TPU VM (`--accelerator-type v4-8`). For more details about how to set up and use TPUs, refer to [Cloud docs for single VM setup](https://cloud.google.com/tpu/docs/run-calculation-jax).

First create a single TPUv4-8 VM and connect to it:

```
ZONE=us-central2-b
TPU_TYPE=v4-8
VM_NAME=hg_flax

gcloud alpha compute tpus tpu-vm create $VM_NAME \
 --zone $ZONE \
 --accelerator-type $TPU_TYPE \
 --version tpu-vm-v4-base

gcloud alpha compute tpus tpu-vm ssh $VM_NAME --zone $ZONE -- \
```

Once connected, install JAX `0.4.5`:

```sh
pip install "jax[tpu]==0.4.5" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```

To verify that JAX was correctly installed, you can run the following command:

```py
import jax
jax.device_count()
```

This should display the number of TPU cores, which should be 4 on a TPUv4-8 VM.

Then install Diffusers and the library's training dependencies:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Then cd in the example folder and run

```bash
pip install -U -r requirements_flax.txt
```

If you want to use Weights and Biases logging, you should also install `wandb` now

```bash
pip install wandb
```

Now let's download the two conditioning images that we will use to run validation during training in order to track our progress

```sh
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png
```

We encourage you to store or share your model with the community. To use the Hugging Face Hub, please log in to your Hugging Face account, or [create one](https://huggingface.co/docs/diffusers/main/en/training/hf.co/join) if you don't have one already:

```sh
huggingface-cli login
```

Make sure you have the `MODEL_DIR`, `OUTPUT_DIR` and `HUB_MODEL_ID` environment variables set. The `OUTPUT_DIR` and `HUB_MODEL_ID` variables specify where to save the model to on the Hub:

```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="runs/fill-circle-{timestamp}"
export HUB_MODEL_ID="controlnet-fill-circle"
```

And finally start the training

```bash
python3 train_controlnet_flax.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --learning_rate=1e-5 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --validation_steps=1000 \
 --train_batch_size=2 \
 --revision="non-ema" \
 --from_pt \
 --report_to="wandb" \
 --tracker_project_name=$HUB_MODEL_ID \
 --num_train_epochs=11 \
 --push_to_hub \
 --hub_model_id=$HUB_MODEL_ID
```

Since we passed the `--push_to_hub` flag, it will automatically create a model repo under your Hugging Face account based on `$HUB_MODEL_ID`. By the end of training, the final checkpoint will be automatically stored on the Hub. You can find an example model repo [here](https://huggingface.co/YiYiXu/fill-circle-controlnet).

Our training script also provides limited support for streaming large datasets from the Hugging Face Hub. In order to enable streaming, one must also set `--max_train_samples`. Here is an example command (from [this blog article](https://huggingface.co/blog/train-your-controlnet)):

```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="runs/uncanny-faces-{timestamp}"
export HUB_MODEL_ID="controlnet-uncanny-faces"

python3 train_controlnet_flax.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=multimodalart/facesyntheticsspigacaptioned \
 --streaming \
 --conditioning_image_column=spiga_seg \
 --image_column=image \
 --caption_column=image_caption \
 --resolution=512 \
 --max_train_samples 100000 \
 --learning_rate=1e-5 \
 --train_batch_size=1 \
 --revision="flax" \
 --report_to="wandb" \
 --tracker_project_name=$HUB_MODEL_ID
```

Note, however, that the performance of the TPUs might get bottlenecked as streaming with `datasets` is not optimized for images.
For ensuring maximum throughput, we encourage you to explore the following options:

* [Webdataset](https://webdataset.github.io/webdataset/)
* [TorchData](https://github.com/pytorch/data)
* [TensorFlow Datasets](https://www.tensorflow.org/datasets/tfless_tfds)

When working with a larger dataset, you may need to run the training process for a long time, so it's useful to save regular checkpoints during the process. You can use the following argument to enable intermediate checkpointing:

```bash
 --checkpointing_steps=500
```

This will save the trained model in subfolders of your output_dir. Subfolder names are the number of steps performed so far; for example, a checkpoint saved after 500 training steps would be saved in a subfolder named `500`.

You can then start your training from this saved checkpoint with

```bash
 --controlnet_model_name_or_path="./control_out/500"
```

We support training with the Min-SNR weighting strategy proposed in [Efficient Diffusion Training via Min-SNR Weighting Strategy](https://arxiv.org/abs/2303.09556) which helps to achieve faster convergence by rebalancing the loss. To use it, one needs to set the `--snr_gamma` argument. The recommended value when using it is `5.0` (a minimal sketch of the weighting computation is included at the end of this README).

We also support gradient accumulation, a technique that lets you use a bigger batch size than your machine would normally be able to fit into memory. You can use the `gradient_accumulation_steps` argument to set the number of gradient accumulation steps. The ControlNet author recommends using gradient accumulation to achieve better convergence. Read more [here](https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md#more-consideration-sudden-converge-phenomenon-and-gradient-accumulation).

You can **profile your code** with:

```bash
 --profile_steps=5
```

Refer to the [JAX documentation on profiling](https://jax.readthedocs.io/en/latest/profiling.html). To inspect the profile trace, you'll have to install and start TensorBoard with the profile plugin:

```bash
pip install tensorflow tensorboard-plugin-profile
tensorboard --logdir runs/fill-circle-100steps-20230411_165612/
```

The profile can then be inspected at http://localhost:6006/#profile

Sometimes you'll get version conflicts (error messages like `Duplicate plugins for name projector`), which means that you have to uninstall and reinstall all versions of TensorFlow/TensorBoard (e.g. with `pip uninstall tensorflow tf-nightly tensorboard tb-nightly tensorboard-plugin-profile && pip install tf-nightly tbp-nightly tensorboard-plugin-profile`).

Note that the debugging functionality of the TensorBoard `profile` plugin is still under active development. Not all views are fully functional, and for example the `trace_viewer` cuts off events after 1M (which can result in all your device traces getting lost if you for example profile the compilation step by accident).

## Support for Stable Diffusion XL

We provide a training script for training a ControlNet with [Stable Diffusion XL](https://huggingface.co/papers/2307.01952). Please refer to [README_sdxl.md](./README_sdxl.md) for more details.
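As referenced in the Min-SNR paragraph above, here is a minimal sketch of how the Min-SNR loss weighting can be computed for an epsilon-prediction objective. The training scripts implement this via `diffusers.training_utils.compute_snr`; the helper name and shapes below are illustrative, not the exact training-script code.

```py
# Minimal sketch of Min-SNR loss weighting (assumes an epsilon-prediction objective).
import torch


def min_snr_loss_weights(alphas_cumprod: torch.Tensor, timesteps: torch.Tensor, snr_gamma: float = 5.0) -> torch.Tensor:
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)  # signal-to-noise ratio for each sampled timestep
    return torch.minimum(snr, torch.full_like(snr, snr_gamma)) / snr


# usage sketch: `per_example_mse` has shape (batch_size,) after reducing over all non-batch dimensions
# loss = (min_snr_loss_weights(noise_scheduler.alphas_cumprod, timesteps) * per_example_mse).mean()
```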
diffusers/examples/controlnet/README.md/0
{ "file_path": "diffusers/examples/controlnet/README.md", "repo_id": "diffusers", "token_count": 6086 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import os import shutil import sys import tempfile from diffusers import DiffusionPipeline, UNet2DConditionModel sys.path.append("..") from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402 logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) class DreamBooth(ExamplesTestsAccelerate): def test_dreambooth(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/dreambooth/train_dreambooth.py --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe --instance_data_dir docs/source/en/imgs --instance_prompt photo --resolution 64 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 2 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} """.split() run_command(self._launch_args + test_args) # save_pretrained smoke test self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.safetensors"))) self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json"))) def test_dreambooth_if(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/dreambooth/train_dreambooth.py --pretrained_model_name_or_path hf-internal-testing/tiny-if-pipe --instance_data_dir docs/source/en/imgs --instance_prompt photo --resolution 64 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 2 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} --pre_compute_text_embeddings --tokenizer_max_length=77 --text_encoder_use_attention_mask """.split() run_command(self._launch_args + test_args) # save_pretrained smoke test self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.safetensors"))) self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json"))) def test_dreambooth_checkpointing(self): instance_prompt = "photo" pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe" with tempfile.TemporaryDirectory() as tmpdir: # Run training script with checkpointing # max_train_steps == 4, checkpointing_steps == 2 # Should create checkpoints at steps 2, 4 initial_run_args = f""" examples/dreambooth/train_dreambooth.py --pretrained_model_name_or_path {pretrained_model_name_or_path} --instance_data_dir docs/source/en/imgs --instance_prompt {instance_prompt} --resolution 64 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 4 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} --checkpointing_steps=2 --seed=0 """.split() run_command(self._launch_args + initial_run_args) # check can run the original fully trained output pipeline pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None) pipe(instance_prompt, 
num_inference_steps=1) # check checkpoint directories exist self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-2"))) self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-4"))) # check can run an intermediate checkpoint unet = UNet2DConditionModel.from_pretrained(tmpdir, subfolder="checkpoint-2/unet") pipe = DiffusionPipeline.from_pretrained(pretrained_model_name_or_path, unet=unet, safety_checker=None) pipe(instance_prompt, num_inference_steps=1) # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming shutil.rmtree(os.path.join(tmpdir, "checkpoint-2")) # Run training script for 7 total steps resuming from checkpoint 4 resume_run_args = f""" examples/dreambooth/train_dreambooth.py --pretrained_model_name_or_path {pretrained_model_name_or_path} --instance_data_dir docs/source/en/imgs --instance_prompt {instance_prompt} --resolution 64 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 6 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} --checkpointing_steps=2 --resume_from_checkpoint=checkpoint-4 --seed=0 """.split() run_command(self._launch_args + resume_run_args) # check can run new fully trained pipeline pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None) pipe(instance_prompt, num_inference_steps=1) # check old checkpoints do not exist self.assertFalse(os.path.isdir(os.path.join(tmpdir, "checkpoint-2"))) # check new checkpoints exist self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-4"))) self.assertTrue(os.path.isdir(os.path.join(tmpdir, "checkpoint-6"))) def test_dreambooth_checkpointing_checkpoints_total_limit(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/dreambooth/train_dreambooth.py --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe --instance_data_dir=docs/source/en/imgs --output_dir={tmpdir} --instance_prompt=prompt --resolution=64 --train_batch_size=1 --gradient_accumulation_steps=1 --max_train_steps=6 --checkpoints_total_limit=2 --checkpointing_steps=2 """.split() run_command(self._launch_args + test_args) self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"}, ) def test_dreambooth_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/dreambooth/train_dreambooth.py --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe --instance_data_dir=docs/source/en/imgs --output_dir={tmpdir} --instance_prompt=prompt --resolution=64 --train_batch_size=1 --gradient_accumulation_steps=1 --max_train_steps=4 --checkpointing_steps=2 """.split() run_command(self._launch_args + test_args) self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4"}, ) resume_run_args = f""" examples/dreambooth/train_dreambooth.py --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe --instance_data_dir=docs/source/en/imgs --output_dir={tmpdir} --instance_prompt=prompt --resolution=64 --train_batch_size=1 --gradient_accumulation_steps=1 --max_train_steps=8 --checkpointing_steps=2 --resume_from_checkpoint=checkpoint-4 --checkpoints_total_limit=2 """.split() run_command(self._launch_args + resume_run_args) self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-6", "checkpoint-8"})
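# Note added for documentation: these example tests are typically invoked with pytest from the repository
# root, e.g. `python -m pytest examples/dreambooth/test_dreambooth.py -k "checkpointing"`; the exact command
# may differ in your setup. `ExamplesTestsAccelerate` (imported above from test_examples_utils) provides the
# shared `accelerate launch` arguments consumed by `run_command`.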
diffusers/examples/dreambooth/test_dreambooth.py/0
{ "file_path": "diffusers/examples/dreambooth/test_dreambooth.py", "repo_id": "diffusers", "token_count": 4466 }
# InstructPix2Pix text-to-edit-image fine-tuning

This extended LoRA training script was authored by [Aiden-Frost](https://github.com/Aiden-Frost).
This is an experimental LoRA extension of [this example](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py). This script adds support for training LoRA layers for the UNet model.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then cd in the example folder and run

```bash
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.

## Training script example

```bash
export MODEL_ID="timbrooks/instruct-pix2pix"
export DATASET_ID="instruction-tuning-sd/cartoonization"
export OUTPUT_DIR="instructPix2Pix-cartoonization"

accelerate launch train_instruct_pix2pix_lora.py \
  --pretrained_model_name_or_path=$MODEL_ID \
  --dataset_name=$DATASET_ID \
  --enable_xformers_memory_efficient_attention \
  --resolution=256 --random_flip \
  --train_batch_size=2 --gradient_accumulation_steps=4 --gradient_checkpointing \
  --max_train_steps=15000 \
  --checkpointing_steps=5000 --checkpoints_total_limit=1 \
  --learning_rate=5e-05 --lr_warmup_steps=0 \
  --val_image_url="https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" \
  --validation_prompt="Generate a cartoonized version of the natural image" \
  --seed=42 \
  --rank=4 \
  --output_dir=$OUTPUT_DIR \
  --report_to=wandb \
  --push_to_hub \
  --original_image_column="original_image" \
  --edited_image_column="cartoonized_image" \
  --edit_prompt_column="edit_prompt"
```

## Inference

After training, the LoRA weights of the model are stored in `$OUTPUT_DIR`. They can then be loaded on top of the base InstructPix2Pix pipeline for inference:

```py
from PIL import Image

from diffusers import StableDiffusionInstructPix2PixPipeline

# load the base model pipeline
pipe_lora = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix")

# load LoRA weights from the provided path
output_dir = "path/to/lora_weight_directory"
pipe_lora.unet.load_attn_procs(output_dir)

input_image_path = "/path/to/input_image"
input_image = Image.open(input_image_path)

# use the same kind of edit prompt the model was trained with (the `edit_prompt` column of the dataset)
edit_prompt = "Generate a cartoonized version of the natural image"
edited_images = pipe_lora(num_images_per_prompt=1, prompt=edit_prompt, image=input_image, num_inference_steps=1000).images
edited_images[0].show()
```

## Results

Here is an example of using the script to train an InstructPix2Pix model.
The model was trained on a Google Colab T4 GPU with the following settings:

```bash
MODEL_ID="timbrooks/instruct-pix2pix"
DATASET_ID="instruction-tuning-sd/cartoonization"
TRAIN_EPOCHS=100
```

Below are a few examples showing the input image, the edit prompt, and the edited image (the model's output):

<p align="center">
 <img src="https://github.com/Aiden-Frost/Efficiently-teaching-counting-and-cartoonization-to-InstructPix2Pix.-/blob/main/diffusers_result_assets/edited_image_results.png?raw=true" alt="instructpix2pix-inputs" width=600/>
</p>

Here are some rough statistics from training the model with this script:

<p align="center">
 <img src="https://github.com/Aiden-Frost/Efficiently-teaching-counting-and-cartoonization-to-InstructPix2Pix.-/blob/main/diffusers_result_assets/results.png?raw=true" alt="instructpix2pix-inputs" width=600/>
</p>

## References

* InstructPix2Pix - https://github.com/timothybrooks/instruct-pix2pix
* Dataset and example training script - https://huggingface.co/blog/instruction-tuning-sd
* For more information about the project - https://github.com/Aiden-Frost/Efficiently-teaching-counting-and-cartoonization-to-InstructPix2Pix.-
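As an additional note on loading: the inference snippet above uses `pipe_lora.unet.load_attn_procs(...)`, which matches how this script saves the LoRA attention weights. Depending on your `diffusers` version and on how the weights were exported, the PEFT-backed pipeline-level loader may also work; treat the sketch below as an optional alternative rather than the canonical path, and note that the local path in it is hypothetical.

```py
import torch

from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
)
# hypothetical path to the directory produced by --output_dir
pipe.load_lora_weights("path/to/lora_weight_directory")
```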
diffusers/examples/research_projects/instructpix2pix_lora/README.md/0
{ "file_path": "diffusers/examples/research_projects/instructpix2pix_lora/README.md", "repo_id": "diffusers", "token_count": 1406 }
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Fine-tuning script for Stable Diffusion for text2image with HuggingFace diffusers.""" import argparse import gc import logging import math import os import random import shutil from pathlib import Path import accelerate import datasets import numpy as np import torch import torch.nn.functional as F import torch.utils.checkpoint import transformers from accelerate import Accelerator from accelerate.logging import get_logger from accelerate.utils import ProjectConfiguration, set_seed from datasets import load_dataset from huggingface_hub import create_repo, upload_folder from packaging import version from PIL import Image from pipeline_pixart_alpha_controlnet import PixArtAlphaControlnetPipeline from torchvision import transforms from tqdm.auto import tqdm from transformers import T5EncoderModel, T5Tokenizer import diffusers from diffusers import AutoencoderKL, DDPMScheduler from diffusers.models import PixArtTransformer2DModel from diffusers.optimization import get_scheduler from diffusers.training_utils import compute_snr from diffusers.utils import check_min_version, is_wandb_available, make_image_grid from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card from diffusers.utils.import_utils import is_xformers_available from diffusers.utils.torch_utils import is_compiled_module from examples.research_projects.pixart.controlnet_pixart_alpha import ( PixArtControlNetAdapterModel, PixArtControlNetTransformerModel, ) if is_wandb_available(): import wandb # Will error if the minimal version of diffusers is not installed. Remove at your own risks. check_min_version("0.29.2") logger = get_logger(__name__, log_level="INFO") def log_validation( vae, transformer, controlnet, tokenizer, scheduler, text_encoder, args, accelerator, weight_dtype, step, is_final_validation=False, ): if weight_dtype == torch.float16 or weight_dtype == torch.bfloat16: raise ValueError( "Validation is not supported with mixed precision training, disable validation and use the validation script, that will generate images from the saved checkpoints." ) if not is_final_validation: logger.info(f"Running validation step {step} ... ") controlnet = accelerator.unwrap_model(controlnet) pipeline = PixArtAlphaControlnetPipeline.from_pretrained( args.pretrained_model_name_or_path, vae=vae, transformer=transformer, scheduler=scheduler, text_encoder=text_encoder, tokenizer=tokenizer, controlnet=controlnet, revision=args.revision, variant=args.variant, torch_dtype=weight_dtype, ) else: logger.info("Running validation - final ... 
") controlnet = PixArtControlNetAdapterModel.from_pretrained(args.output_dir, torch_dtype=weight_dtype) pipeline = PixArtAlphaControlnetPipeline.from_pretrained( args.pretrained_model_name_or_path, controlnet=controlnet, revision=args.revision, variant=args.variant, torch_dtype=weight_dtype, ) pipeline = pipeline.to(accelerator.device) pipeline.set_progress_bar_config(disable=True) if args.enable_xformers_memory_efficient_attention: pipeline.enable_xformers_memory_efficient_attention() if args.seed is None: generator = None else: generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if len(args.validation_image) == len(args.validation_prompt): validation_images = args.validation_image validation_prompts = args.validation_prompt elif len(args.validation_image) == 1: validation_images = args.validation_image * len(args.validation_prompt) validation_prompts = args.validation_prompt elif len(args.validation_prompt) == 1: validation_images = args.validation_image validation_prompts = args.validation_prompt * len(args.validation_image) else: raise ValueError( "number of `args.validation_image` and `args.validation_prompt` should be checked in `parse_args`" ) image_logs = [] for validation_prompt, validation_image in zip(validation_prompts, validation_images): validation_image = Image.open(validation_image).convert("RGB") validation_image = validation_image.resize((args.resolution, args.resolution)) images = [] for _ in range(args.num_validation_images): image = pipeline( prompt=validation_prompt, image=validation_image, num_inference_steps=20, generator=generator ).images[0] images.append(image) image_logs.append( {"validation_image": validation_image, "images": images, "validation_prompt": validation_prompt} ) tracker_key = "test" if is_final_validation else "validation" for tracker in accelerator.trackers: if tracker.name == "tensorboard": for log in image_logs: images = log["images"] validation_prompt = log["validation_prompt"] validation_image = log["validation_image"] formatted_images = [] formatted_images.append(np.asarray(validation_image)) for image in images: formatted_images.append(np.asarray(image)) formatted_images = np.stack(formatted_images) tracker.writer.add_images(validation_prompt, formatted_images, step, dataformats="NHWC") elif tracker.name == "wandb": formatted_images = [] for log in image_logs: images = log["images"] validation_prompt = log["validation_prompt"] validation_image = log["validation_image"] formatted_images.append(wandb.Image(validation_image, caption="Controlnet conditioning")) for image in images: image = wandb.Image(image, caption=validation_prompt) formatted_images.append(image) tracker.log({tracker_key: formatted_images}) else: logger.warning(f"image logging not implemented for {tracker.name}") del pipeline gc.collect() torch.cuda.empty_cache() logger.info("Validation done!!") return image_logs def save_model_card(repo_id: str, image_logs=None, base_model=str, dataset_name=str, repo_folder=None): img_str = "" if image_logs is not None: img_str = "You can find some example images below.\n\n" for i, log in enumerate(image_logs): images = log["images"] validation_prompt = log["validation_prompt"] validation_image = log["validation_image"] validation_image.save(os.path.join(repo_folder, "image_control.png")) img_str += f"prompt: {validation_prompt}\n" images = [validation_image] + images make_image_grid(images, 1, len(images)).save(os.path.join(repo_folder, f"images_{i}.png")) img_str += f"![images_{i})](./images_{i}.png)\n" 
model_description = f""" # controlnet-{repo_id} These are controlnet weights trained on {base_model} with new type of conditioning. {img_str} """ model_card = load_or_create_model_card( repo_id_or_path=repo_id, from_training=True, license="openrail++", base_model=base_model, model_description=model_description, inference=True, ) tags = [ "pixart-alpha", "pixart-alpha-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training", ] model_card = populate_model_card(model_card, tags=tags) model_card.save(os.path.join(repo_folder, "README.md")) def parse_args(): parser = argparse.ArgumentParser(description="Simple example of a training script.") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--variant", type=str, default=None, help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16", ) parser.add_argument( "--controlnet_model_name_or_path", type=str, default=None, help="Path to pretrained controlnet model or model identifier from huggingface.co/models." " If not specified controlnet weights are initialized from the transformer.", ) parser.add_argument( "--dataset_name", type=str, default=None, help=( "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," " or to a folder containing files that 🤗 Datasets can understand." ), ) parser.add_argument( "--dataset_config_name", type=str, default=None, help="The config of the Dataset, leave as None if there's only one config.", ) parser.add_argument( "--train_data_dir", type=str, default=None, help=( "A folder containing the training data. Folder contents must follow the structure described in" " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." ), ) parser.add_argument( "--image_column", type=str, default="image", help="The column of the dataset containing an image." ) parser.add_argument( "--conditioning_image_column", type=str, default="conditioning_image", help="The column of the dataset containing the controlnet conditioning image.", ) parser.add_argument( "--caption_column", type=str, default="text", help="The column of the dataset containing a caption or a list of captions.", ) parser.add_argument( "--validation_prompt", type=str, nargs="+", default=None, help="One or more prompts to be evaluated every `--validation_steps`." " Provide either a matching number of `--validation_image`s, a single `--validation_image`" " to be used with all prompts, or a single prompt that will be used with all `--validation_image`s.", ) parser.add_argument( "--validation_image", type=str, default=None, nargs="+", help=( "A set of paths to the controlnet conditioning image be evaluated every `--validation_steps`" " and logged to `--report_to`. Provide either a matching number of `--validation_prompt`s, a" " a single `--validation_prompt` to be used with all `--validation_image`s, or a single" " `--validation_image` that will be used with all `--validation_prompt`s." 
), ) parser.add_argument( "--num_validation_images", type=int, default=4, help="Number of images that should be generated during validation with `validation_prompt`.", ) parser.add_argument( "--validation_steps", type=int, default=100, help=( "Run fine-tuning validation every X epochs. The validation process consists of running the prompt" " `args.validation_prompt` multiple times: `args.num_validation_images`." ), ) parser.add_argument( "--max_train_samples", type=int, default=None, help=( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." ), ) parser.add_argument( "--output_dir", type=str, default="pixart-controlnet", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument( "--cache_dir", type=str, default=None, help="The directory where the downloaded models and datasets will be stored.", ) parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=512, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." ) parser.add_argument("--num_train_epochs", type=int, default=1) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--learning_rate", type=float, default=1e-6, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--snr_gamma", type=float, default=None, help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. " "More details here: https://arxiv.org/abs/2303.09556.", ) parser.add_argument( "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." ) parser.add_argument( "--allow_tf32", action="store_true", help=( "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" ), ) parser.add_argument( "--dataloader_num_workers", type=int, default=0, help=( "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." 
), ) parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") # ----Diffusion Training Arguments---- parser.add_argument( "--proportion_empty_prompts", type=float, default=0, help="Proportion of image prompts to be replaced with empty strings. Defaults to 0 (no prompt replacement).", ) parser.add_argument( "--prediction_type", type=str, default=None, help="The prediction_type that shall be used for training. Choose between 'epsilon' or 'v_prediction' or leave `None`. If left to `None` the default prediction type of the scheduler: `noise_scheduler.config.prediciton_type` is chosen.", ) parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." ), ) parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." ), ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' ), ) parser.add_argument( "--checkpointing_steps", type=int, default=500, help=( "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" " training using `--resume_from_checkpoint`." ), ) parser.add_argument( "--checkpoints_total_limit", type=int, default=None, help=("Max number of checkpoints to store."), ) parser.add_argument( "--resume_from_checkpoint", type=str, default=None, help=( "Whether training should be resumed from a previous checkpoint. Use a path saved by" ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' ), ) parser.add_argument( "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." 
) parser.add_argument("--noise_offset", type=float, default=0, help="The scale of noise offset.") parser.add_argument( "--tracker_project_name", type=str, default="pixart_controlnet", help=( "The `project_name` argument passed to Accelerator.init_trackers for" " more information see https://huggingface.co/docs/accelerate/v0.17.0/en/package_reference/accelerator#accelerate.Accelerator" ), ) args = parser.parse_args() # Sanity checks if args.dataset_name is None and args.train_data_dir is None: raise ValueError("Need either a dataset name or a training folder.") if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1: raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].") return args def main(): args = parse_args() if args.report_to == "wandb" and args.hub_token is not None: raise ValueError( "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." " Please use `huggingface-cli login` to authenticate with the Hub." ) logging_dir = Path(args.output_dir, args.logging_dir) accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) accelerator = Accelerator( gradient_accumulation_steps=args.gradient_accumulation_steps, mixed_precision=args.mixed_precision, log_with=args.report_to, project_config=accelerator_project_config, ) if args.report_to == "wandb": if not is_wandb_available(): raise ImportError("Make sure to install wandb if you want to use it for logging during training.") # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger.info(accelerator.state, main_process_only=False) if accelerator.is_local_main_process: datasets.utils.logging.set_verbosity_warning() transformers.utils.logging.set_verbosity_warning() diffusers.utils.logging.set_verbosity_info() else: datasets.utils.logging.set_verbosity_error() transformers.utils.logging.set_verbosity_error() diffusers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. if args.seed is not None: set_seed(args.seed) # Handle the repository creation if accelerator.is_main_process: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) if args.push_to_hub: repo_id = create_repo( repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token ).repo_id # See Section 3.1. of the paper. max_length = 120 # For mixed precision training we cast all non-trainable weigths (vae, text_encoder) to half-precision # as these weights are only used for inference, keeping weights in full precision is not required. weight_dtype = torch.float32 if accelerator.mixed_precision == "fp16": weight_dtype = torch.float16 elif accelerator.mixed_precision == "bf16": weight_dtype = torch.bfloat16 # Load scheduler, tokenizer and models. 
noise_scheduler = DDPMScheduler.from_pretrained( args.pretrained_model_name_or_path, subfolder="scheduler", torch_dtype=weight_dtype ) tokenizer = T5Tokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, torch_dtype=weight_dtype ) text_encoder = T5EncoderModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, torch_dtype=weight_dtype ) text_encoder.requires_grad_(False) text_encoder.to(accelerator.device) vae = AutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, variant=args.variant, torch_dtype=weight_dtype, ) vae.requires_grad_(False) vae.to(accelerator.device) transformer = PixArtTransformer2DModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="transformer") transformer.to(accelerator.device) transformer.requires_grad_(False) if args.controlnet_model_name_or_path: logger.info("Loading existing controlnet weights") controlnet = PixArtControlNetAdapterModel.from_pretrained(args.controlnet_model_name_or_path) else: logger.info("Initializing controlnet weights from transformer.") controlnet = PixArtControlNetAdapterModel.from_transformer(transformer) transformer.to(dtype=weight_dtype) controlnet.to(accelerator.device) controlnet.train() def unwrap_model(model, keep_fp32_wrapper=True): model = accelerator.unwrap_model(model, keep_fp32_wrapper=keep_fp32_wrapper) model = model._orig_mod if is_compiled_module(model) else model return model # 10. Handle saving and loading of checkpoints # `accelerate` 0.16.0 will have better support for customized saving if version.parse(accelerate.__version__) >= version.parse("0.16.0"): # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format def save_model_hook(models, weights, output_dir): if accelerator.is_main_process: for _, model in enumerate(models): if isinstance(model, PixArtControlNetTransformerModel): print(f"Saving model {model.__class__.__name__} to {output_dir}") model.controlnet.save_pretrained(os.path.join(output_dir, "controlnet")) # make sure to pop weight so that corresponding model is not saved again weights.pop() def load_model_hook(models, input_dir): # rc todo: test and load the controlenet adapter and transformer raise ValueError("load model hook not tested") for i in range(len(models)): # pop models so that they are not loaded again model = models.pop() if isinstance(model, PixArtControlNetTransformerModel): load_model = PixArtControlNetAdapterModel.from_pretrained(input_dir, subfolder="controlnet") model.register_to_config(**load_model.config) model.load_state_dict(load_model.state_dict()) del load_model accelerator.register_save_state_pre_hook(save_model_hook) accelerator.register_load_state_pre_hook(load_model_hook) if args.enable_xformers_memory_efficient_attention: if is_xformers_available(): import xformers xformers_version = version.parse(xformers.__version__) if xformers_version == version.parse("0.0.16"): logger.warn( "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." ) transformer.enable_xformers_memory_efficient_attention() controlnet.enable_xformers_memory_efficient_attention() else: raise ValueError("xformers is not available. 
Make sure it is installed correctly") if unwrap_model(controlnet).dtype != torch.float32: raise ValueError( f"Transformer loaded as datatype {unwrap_model(controlnet).dtype}. The trainable parameters should be in torch.float32." ) # Enable TF32 for faster training on Ampere GPUs, # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices if args.allow_tf32: torch.backends.cuda.matmul.allow_tf32 = True if args.gradient_checkpointing: transformer.enable_gradient_checkpointing() controlnet.enable_gradient_checkpointing() if args.scale_lr: args.learning_rate = ( args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) # Initialize the optimizer if args.use_8bit_adam: try: import bitsandbytes as bnb except ImportError: raise ImportError( "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" ) optimizer_cls = bnb.optim.AdamW8bit else: optimizer_cls = torch.optim.AdamW params_to_optimize = controlnet.parameters() optimizer = optimizer_cls( params_to_optimize, lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) # Get the datasets: you can either provide your own training and evaluation files (see below) # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). # In distributed training, the load_dataset function guarantees that only one local process can concurrently # download the dataset. if args.dataset_name is not None: # Downloading and loading a dataset from the hub. dataset = load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, data_dir=args.train_data_dir, ) else: data_files = {} if args.train_data_dir is not None: data_files["train"] = os.path.join(args.train_data_dir, "**") dataset = load_dataset( "imagefolder", data_files=data_files, cache_dir=args.cache_dir, ) # See more about loading custom images at # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder # Preprocessing the datasets. # We need to tokenize inputs and targets. column_names = dataset["train"].column_names # 6. Get the column names for input/target. if args.image_column is None: image_column = column_names[0] else: image_column = args.image_column if image_column not in column_names: raise ValueError( f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" ) if args.caption_column is None: caption_column = column_names[1] else: caption_column = args.caption_column if caption_column not in column_names: raise ValueError( f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" ) if args.conditioning_image_column is None: conditioning_image_column = column_names[2] logger.info(f"conditioning image column defaulting to {conditioning_image_column}") else: conditioning_image_column = args.conditioning_image_column if conditioning_image_column not in column_names: raise ValueError( f"`--conditioning_image_column` value '{args.conditioning_image_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" ) # Preprocessing the datasets. # We need to tokenize input captions and transform the images. 
def tokenize_captions(examples, is_train=True, proportion_empty_prompts=0.0, max_length=120): captions = [] for caption in examples[caption_column]: if random.random() < proportion_empty_prompts: captions.append("") elif isinstance(caption, str): captions.append(caption) elif isinstance(caption, (list, np.ndarray)): # take a random caption if there are multiple captions.append(random.choice(caption) if is_train else caption[0]) else: raise ValueError( f"Caption column `{caption_column}` should contain either strings or lists of strings." ) inputs = tokenizer(captions, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt") return inputs.input_ids, inputs.attention_mask # Preprocessing the datasets. train_transforms = transforms.Compose( [ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), transforms.CenterCrop(args.resolution), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] ) conditioning_image_transforms = transforms.Compose( [ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), transforms.CenterCrop(args.resolution), transforms.ToTensor(), ] ) def preprocess_train(examples): images = [image.convert("RGB") for image in examples[image_column]] examples["pixel_values"] = [train_transforms(image) for image in images] conditioning_images = [image.convert("RGB") for image in examples[args.conditioning_image_column]] examples["conditioning_pixel_values"] = [conditioning_image_transforms(image) for image in conditioning_images] examples["input_ids"], examples["prompt_attention_mask"] = tokenize_captions( examples, proportion_empty_prompts=args.proportion_empty_prompts, max_length=max_length ) return examples with accelerator.main_process_first(): if args.max_train_samples is not None: dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) # Set the training transforms train_dataset = dataset["train"].with_transform(preprocess_train) def collate_fn(examples): pixel_values = torch.stack([example["pixel_values"] for example in examples]) pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() conditioning_pixel_values = torch.stack([example["conditioning_pixel_values"] for example in examples]) conditioning_pixel_values = conditioning_pixel_values.to(memory_format=torch.contiguous_format).float() input_ids = torch.stack([example["input_ids"] for example in examples]) prompt_attention_mask = torch.stack([example["prompt_attention_mask"] for example in examples]) return { "pixel_values": pixel_values, "conditioning_pixel_values": conditioning_pixel_values, "input_ids": input_ids, "prompt_attention_mask": prompt_attention_mask, } # DataLoaders creation: train_dataloader = torch.utils.data.DataLoader( train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=args.train_batch_size, num_workers=args.dataloader_num_workers, ) # Scheduler and math around the number of training steps. overrode_max_train_steps = False num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch overrode_max_train_steps = True lr_scheduler = get_scheduler( args.lr_scheduler, optimizer=optimizer, num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, num_training_steps=args.max_train_steps * accelerator.num_processes, ) # Prepare everything with our `accelerator`. 
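# The frozen base transformer and the trainable ControlNet adapter are wrapped into a single module so that accelerate, the checkpoint hooks, and the training forward pass all see one model; only the adapter's parameters were handed to the optimizer above, so the base transformer stays frozen.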
controlnet_transformer = PixArtControlNetTransformerModel(transformer, controlnet, training=True) controlnet_transformer, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( controlnet_transformer, optimizer, train_dataloader, lr_scheduler ) # We need to recalculate our total training steps as the size of the training dataloader may have changed. num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if overrode_max_train_steps: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: accelerator.init_trackers(args.tracker_project_name, config=vars(args)) # Train! total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") global_step = 0 first_epoch = 0 # Potentially load in the weights and states from a previous save if args.resume_from_checkpoint: if args.resume_from_checkpoint != "latest": path = os.path.basename(args.resume_from_checkpoint) else: # Get the most recent checkpoint dirs = os.listdir(args.output_dir) dirs = [d for d in dirs if d.startswith("checkpoint")] dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) path = dirs[-1] if len(dirs) > 0 else None if path is None: accelerator.print( f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." ) args.resume_from_checkpoint = None initial_global_step = 0 else: accelerator.print(f"Resuming from checkpoint {path}") accelerator.load_state(os.path.join(args.output_dir, path)) global_step = int(path.split("-")[1]) initial_global_step = global_step first_epoch = global_step // num_update_steps_per_epoch else: initial_global_step = 0 progress_bar = tqdm( range(0, args.max_train_steps), initial=initial_global_step, desc="Steps", # Only show the progress bar once on each machine. 
disable=not accelerator.is_local_main_process, ) latent_channels = transformer.config.in_channels for epoch in range(first_epoch, args.num_train_epochs): controlnet_transformer.controlnet.train() train_loss = 0.0 for step, batch in enumerate(train_dataloader): with accelerator.accumulate(controlnet): # Convert images to latent space latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() latents = latents * vae.config.scaling_factor # Convert control images to latent space controlnet_image_latents = vae.encode( batch["conditioning_pixel_values"].to(dtype=weight_dtype) ).latent_dist.sample() controlnet_image_latents = controlnet_image_latents * vae.config.scaling_factor # Sample noise that we'll add to the latents noise = torch.randn_like(latents) if args.noise_offset: # https://www.crosslabs.org//blog/diffusion-with-offset-noise noise += args.noise_offset * torch.randn( (latents.shape[0], latents.shape[1], 1, 1), device=latents.device ) bsz = latents.shape[0] # Sample a random timestep for each image timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) timesteps = timesteps.long() # Add noise to the latents according to the noise magnitude at each timestep # (this is the forward diffusion process) noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) # Get the text embedding for conditioning prompt_embeds = text_encoder(batch["input_ids"], attention_mask=batch["prompt_attention_mask"])[0] prompt_attention_mask = batch["prompt_attention_mask"] # Get the target for loss depending on the prediction type if args.prediction_type is not None: # set prediction_type of scheduler if defined noise_scheduler.register_to_config(prediction_type=args.prediction_type) if noise_scheduler.config.prediction_type == "epsilon": target = noise elif noise_scheduler.config.prediction_type == "v_prediction": target = noise_scheduler.get_velocity(latents, noise, timesteps) else: raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") # Prepare micro-conditions. added_cond_kwargs = {"resolution": None, "aspect_ratio": None} if getattr(transformer, "module", transformer).config.sample_size == 128: resolution = torch.tensor([args.resolution, args.resolution]).repeat(bsz, 1) aspect_ratio = torch.tensor([float(args.resolution / args.resolution)]).repeat(bsz, 1) resolution = resolution.to(dtype=weight_dtype, device=latents.device) aspect_ratio = aspect_ratio.to(dtype=weight_dtype, device=latents.device) added_cond_kwargs = {"resolution": resolution, "aspect_ratio": aspect_ratio} # Predict the noise residual and compute loss model_pred = controlnet_transformer( noisy_latents, encoder_hidden_states=prompt_embeds, encoder_attention_mask=prompt_attention_mask, timestep=timesteps, controlnet_cond=controlnet_image_latents, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] if transformer.config.out_channels // 2 == latent_channels: model_pred = model_pred.chunk(2, dim=1)[0] else: model_pred = model_pred if args.snr_gamma is None: loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") else: # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556. # Since we predict the noise instead of x_0, the original formulation is slightly changed. # This is discussed in Section 4.2 of the same paper. 
snr = compute_snr(noise_scheduler, timesteps) if noise_scheduler.config.prediction_type == "v_prediction": # Velocity objective requires that we add one to SNR values before we divide by them. snr = snr + 1 mse_loss_weights = ( torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(dim=1)[0] / snr ) loss = F.mse_loss(model_pred.float(), target.float(), reduction="none") loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights loss = loss.mean() # Gather the losses across all processes for logging (if we use distributed training). avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() train_loss += avg_loss.item() / args.gradient_accumulation_steps # Backpropagate accelerator.backward(loss) if accelerator.sync_gradients: params_to_clip = controlnet_transformer.controlnet.parameters() accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) optimizer.step() lr_scheduler.step() optimizer.zero_grad() # Checks if the accelerator has performed an optimization step behind the scenes if accelerator.sync_gradients: progress_bar.update(1) global_step += 1 accelerator.log({"train_loss": train_loss}, step=global_step) train_loss = 0.0 if accelerator.is_main_process: if global_step % args.checkpointing_steps == 0: # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` if args.checkpoints_total_limit is not None: checkpoints = os.listdir(args.output_dir) checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints if len(checkpoints) >= args.checkpoints_total_limit: num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 removing_checkpoints = checkpoints[0:num_to_remove] logger.info( f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" ) logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") for removing_checkpoint in removing_checkpoints: removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) shutil.rmtree(removing_checkpoint) save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") accelerator.save_state(save_path) logger.info(f"Saved state to {save_path}") if args.validation_prompt is not None and global_step % args.validation_steps == 0: log_validation( vae, transformer, controlnet_transformer.controlnet, tokenizer, noise_scheduler, text_encoder, args, accelerator, weight_dtype, global_step, is_final_validation=False, ) logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} progress_bar.set_postfix(**logs) if global_step >= args.max_train_steps: break # Save the lora layers accelerator.wait_for_everyone() if accelerator.is_main_process: controlnet = unwrap_model(controlnet_transformer.controlnet, keep_fp32_wrapper=False) controlnet.save_pretrained(os.path.join(args.output_dir, "controlnet")) image_logs = None if args.validation_prompt is not None: image_logs = log_validation( vae, transformer, controlnet, tokenizer, noise_scheduler, text_encoder, args, accelerator, weight_dtype, global_step, is_final_validation=True, ) if args.push_to_hub: save_model_card( repo_id, image_logs=image_logs, base_model=args.pretrained_model_name_or_path, dataset_name=args.dataset_name, repo_folder=args.output_dir, ) upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", 
ignore_patterns=["step_*", "epoch_*"], ) accelerator.end_training() if __name__ == "__main__": main()
diffusers/examples/research_projects/pixart/train_pixart_controlnet_hf.py/0
{ "file_path": "diffusers/examples/research_projects/pixart/train_pixart_controlnet_hf.py", "repo_id": "diffusers", "token_count": 21169 }
import argparse import copy import itertools import logging import math import os import random import shutil from pathlib import Path import numpy as np import torch import torch.nn.functional as F import torch.utils.checkpoint import torchvision.transforms.v2 as transforms_v2 import transformers from accelerate import Accelerator from accelerate.logging import get_logger from accelerate.utils import set_seed from huggingface_hub import create_repo, upload_folder from packaging import version from peft import LoraConfig, PeftModel, get_peft_model from PIL import Image from PIL.ImageOps import exif_transpose from torch.utils.data import Dataset from tqdm.auto import tqdm from transformers import AutoTokenizer, CLIPTextModel import diffusers from diffusers import ( AutoencoderKL, DDPMScheduler, DPMSolverMultistepScheduler, StableDiffusionInpaintPipeline, UNet2DConditionModel, ) from diffusers.optimization import get_scheduler from diffusers.utils import check_min_version, is_wandb_available from diffusers.utils.import_utils import is_xformers_available if is_wandb_available(): import wandb # Will error if the minimal version of diffusers is not installed. Remove at your own risks. check_min_version("0.20.1") logger = get_logger(__name__) def make_mask(images, resolution, times=30): mask, times = torch.ones_like(images[0:1, :, :]), np.random.randint(1, times) min_size, max_size, margin = np.array([0.03, 0.25, 0.01]) * resolution max_size = min(max_size, resolution - margin * 2) for _ in range(times): width = np.random.randint(int(min_size), int(max_size)) height = np.random.randint(int(min_size), int(max_size)) x_start = np.random.randint(int(margin), resolution - int(margin) - width + 1) y_start = np.random.randint(int(margin), resolution - int(margin) - height + 1) mask[:, y_start : y_start + height, x_start : x_start + width] = 0 mask = 1 - mask if random.random() < 0.5 else mask return mask def save_model_card( repo_id: str, images=None, base_model=str, repo_folder=None, ): img_str = "" for i, image in enumerate(images): image.save(os.path.join(repo_folder, f"image_{i}.png")) img_str += f"![img_{i}](./image_{i}.png)\n" yaml = f""" --- license: creativeml-openrail-m base_model: {base_model} prompt: "a photo of sks" tags: - stable-diffusion-inpainting - stable-diffusion-inpainting-diffusers - text-to-image - diffusers - realfill - diffusers-training inference: true --- """ model_card = f""" # RealFill - {repo_id} This is a realfill model derived from {base_model}. The weights were trained using [RealFill](https://realfill.github.io/). You can find some example images in the following. \n {img_str} """ with open(os.path.join(repo_folder, "README.md"), "w") as f: f.write(yaml + model_card) def log_validation( text_encoder, tokenizer, unet, args, accelerator, weight_dtype, epoch, ): logger.info(f"Running validation... 
\nGenerating {args.num_validation_images} images") # create pipeline (note: unet and vae are loaded again in float32) pipeline = StableDiffusionInpaintPipeline.from_pretrained( args.pretrained_model_name_or_path, tokenizer=tokenizer, revision=args.revision, torch_dtype=weight_dtype, ) # set `keep_fp32_wrapper` to True because we do not want to remove # mixed precision hooks while we are still training pipeline.unet = accelerator.unwrap_model(unet, keep_fp32_wrapper=True) pipeline.text_encoder = accelerator.unwrap_model(text_encoder, keep_fp32_wrapper=True) pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) pipeline = pipeline.to(accelerator.device) pipeline.set_progress_bar_config(disable=True) # run inference generator = None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) target_dir = Path(args.train_data_dir) / "target" target_image, target_mask = target_dir / "target.png", target_dir / "mask.png" image, mask_image = Image.open(target_image), Image.open(target_mask) if image.mode != "RGB": image = image.convert("RGB") images = [] # use a separate output variable so that every validation sample is generated from the original target image for _ in range(args.num_validation_images): result = pipeline( prompt="a photo of sks", image=image, mask_image=mask_image, num_inference_steps=25, guidance_scale=5, generator=generator, ).images[0] images.append(result) for tracker in accelerator.trackers: if tracker.name == "tensorboard": np_images = np.stack([np.asarray(img) for img in images]) tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") if tracker.name == "wandb": tracker.log({"validation": [wandb.Image(image, caption=str(i)) for i, image in enumerate(images)]}) del pipeline torch.cuda.empty_cache() return images def parse_args(input_args=None): parser = argparse.ArgumentParser(description="Simple example of a training script.") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--tokenizer_name", type=str, default=None, help="Pretrained tokenizer name or path if not the same as model_name", ) parser.add_argument( "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data of images.", ) parser.add_argument( "--num_validation_images", type=int, default=4, help="Number of images that should be generated during validation with `validation_conditioning`.", ) parser.add_argument( "--validation_steps", type=int, default=100, help=( "Run realfill validation every X steps. RealFill validation consists of running the conditioning" " `args.validation_conditioning` multiple times: `args.num_validation_images`." ), ) parser.add_argument( "--output_dir", type=str, default="realfill-model", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=512, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
) parser.add_argument("--num_train_epochs", type=int, default=1) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--checkpointing_steps", type=int, default=500, help=( "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" " training using `--resume_from_checkpoint`." ), ) parser.add_argument( "--checkpoints_total_limit", type=int, default=None, help=("Max number of checkpoints to store."), ) parser.add_argument( "--resume_from_checkpoint", type=str, default=None, help=( "Whether training should be resumed from a previous checkpoint. Use a path saved by" ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' ), ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--unet_learning_rate", type=float, default=2e-4, help="Learning rate to use for unet.", ) parser.add_argument( "--text_encoder_learning_rate", type=float, default=4e-5, help="Learning rate to use for text encoder.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--lr_num_cycles", type=int, default=1, help="Number of hard resets of the lr in cosine_with_restarts scheduler.", ) parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") parser.add_argument( "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." ) parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." 
), ) parser.add_argument( "--allow_tf32", action="store_true", help=( "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" ), ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' ), ) parser.add_argument( "--wandb_key", type=str, default=None, help=("If report to option is set to wandb, api-key for wandb used for login to wandb "), ) parser.add_argument( "--wandb_project_name", type=str, default=None, help=("If report to option is set to wandb, project name in wandb for log tracking "), ) parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." ), ) parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") parser.add_argument( "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." ) parser.add_argument( "--set_grads_to_none", action="store_true", help=( "Save more memory by using setting grads to None instead of zero. Be aware, that this changes certain" " behaviors, so disable this argument if it causes any problems. More info:" " https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html" ), ) parser.add_argument( "--lora_rank", type=int, default=16, help=("The dimension of the LoRA update matrices."), ) parser.add_argument( "--lora_alpha", type=int, default=27, help=("The alpha constant of the LoRA update matrices."), ) parser.add_argument( "--lora_dropout", type=float, default=0.0, help="The dropout rate of the LoRA update matrices.", ) parser.add_argument( "--lora_bias", type=str, default="none", help="The bias type of the Lora update matrices. Must be 'none', 'all' or 'lora_only'.", ) if input_args is not None: args = parser.parse_args(input_args) else: args = parser.parse_args() env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) if env_local_rank != -1 and env_local_rank != args.local_rank: args.local_rank = env_local_rank return args class RealFillDataset(Dataset): """ A dataset to prepare the training and conditioning images and the masks with the dummy prompt for fine-tuning the model. It pre-processes the images, masks and tokenizes the prompts. 
""" def __init__( self, train_data_root, tokenizer, size=512, ): self.size = size self.tokenizer = tokenizer self.ref_data_root = Path(train_data_root) / "ref" self.target_image = Path(train_data_root) / "target" / "target.png" self.target_mask = Path(train_data_root) / "target" / "mask.png" if not (self.ref_data_root.exists() and self.target_image.exists() and self.target_mask.exists()): raise ValueError("Train images root doesn't exists.") self.train_images_path = list(self.ref_data_root.iterdir()) + [self.target_image] self.num_train_images = len(self.train_images_path) self.train_prompt = "a photo of sks" self.transform = transforms_v2.Compose( [ transforms_v2.ToImage(), transforms_v2.RandomResize(size, int(1.125 * size)), transforms_v2.RandomCrop(size), transforms_v2.ToDtype(torch.float32, scale=True), transforms_v2.Normalize([0.5], [0.5]), ] ) def __len__(self): return self.num_train_images def __getitem__(self, index): example = {} image = Image.open(self.train_images_path[index]) image = exif_transpose(image) if not image.mode == "RGB": image = image.convert("RGB") if index < len(self) - 1: weighting = Image.new("L", image.size) else: weighting = Image.open(self.target_mask) weighting = exif_transpose(weighting) image, weighting = self.transform(image, weighting) example["images"], example["weightings"] = image, weighting < 0 if random.random() < 0.1: example["masks"] = torch.ones_like(example["images"][0:1, :, :]) else: example["masks"] = make_mask(example["images"], self.size) example["conditioning_images"] = example["images"] * (example["masks"] < 0.5) train_prompt = "" if random.random() < 0.1 else self.train_prompt example["prompt_ids"] = self.tokenizer( train_prompt, truncation=True, padding="max_length", max_length=self.tokenizer.model_max_length, return_tensors="pt", ).input_ids return example def collate_fn(examples): input_ids = [example["prompt_ids"] for example in examples] images = [example["images"] for example in examples] masks = [example["masks"] for example in examples] weightings = [example["weightings"] for example in examples] conditioning_images = [example["conditioning_images"] for example in examples] images = torch.stack(images) images = images.to(memory_format=torch.contiguous_format).float() masks = torch.stack(masks) masks = masks.to(memory_format=torch.contiguous_format).float() weightings = torch.stack(weightings) weightings = weightings.to(memory_format=torch.contiguous_format).float() conditioning_images = torch.stack(conditioning_images) conditioning_images = conditioning_images.to(memory_format=torch.contiguous_format).float() input_ids = torch.cat(input_ids, dim=0) batch = { "input_ids": input_ids, "images": images, "masks": masks, "weightings": weightings, "conditioning_images": conditioning_images, } return batch def main(args): if args.report_to == "wandb" and args.hub_token is not None: raise ValueError( "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." " Please use `huggingface-cli login` to authenticate with the Hub." 
) logging_dir = Path(args.output_dir, args.logging_dir) accelerator = Accelerator( gradient_accumulation_steps=args.gradient_accumulation_steps, mixed_precision=args.mixed_precision, log_with=args.report_to, project_dir=logging_dir, ) if args.report_to == "wandb": if not is_wandb_available(): raise ImportError("Make sure to install wandb if you want to use it for logging during training.") wandb.login(key=args.wandb_key) wandb.init(project=args.wandb_project_name) # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger.info(accelerator.state, main_process_only=False) if accelerator.is_local_main_process: transformers.utils.logging.set_verbosity_warning() diffusers.utils.logging.set_verbosity_info() else: transformers.utils.logging.set_verbosity_error() diffusers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. if args.seed is not None: set_seed(args.seed) # Handle the repository creation if accelerator.is_main_process: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) if args.push_to_hub: repo_id = create_repo( repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token ).repo_id # Load the tokenizer if args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) elif args.pretrained_model_name_or_path: tokenizer = AutoTokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False, ) # Load scheduler and models noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") text_encoder = CLIPTextModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision ) vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) unet = UNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision ) config = LoraConfig( r=args.lora_rank, lora_alpha=args.lora_alpha, target_modules=["to_k", "to_q", "to_v", "key", "query", "value"], lora_dropout=args.lora_dropout, bias=args.lora_bias, ) unet = get_peft_model(unet, config) config = LoraConfig( r=args.lora_rank, lora_alpha=args.lora_alpha, target_modules=["k_proj", "q_proj", "v_proj"], lora_dropout=args.lora_dropout, bias=args.lora_bias, ) text_encoder = get_peft_model(text_encoder, config) vae.requires_grad_(False) if args.enable_xformers_memory_efficient_attention: if is_xformers_available(): import xformers xformers_version = version.parse(xformers.__version__) if xformers_version == version.parse("0.0.16"): logger.warning( "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." ) unet.enable_xformers_memory_efficient_attention() else: raise ValueError("xformers is not available. 
Make sure it is installed correctly") if args.gradient_checkpointing: unet.enable_gradient_checkpointing() text_encoder.gradient_checkpointing_enable() # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format def save_model_hook(models, weights, output_dir): if accelerator.is_main_process: for model in models: sub_dir = ( "unet" if isinstance(model.base_model.model, type(accelerator.unwrap_model(unet).base_model.model)) else "text_encoder" ) model.save_pretrained(os.path.join(output_dir, sub_dir)) # make sure to pop weight so that corresponding model is not saved again weights.pop() def load_model_hook(models, input_dir): while len(models) > 0: # pop models so that they are not loaded again model = models.pop() sub_dir = ( "unet" if isinstance(model.base_model.model, type(accelerator.unwrap_model(unet).base_model.model)) else "text_encoder" ) model_cls = ( UNet2DConditionModel if isinstance(model.base_model.model, type(accelerator.unwrap_model(unet).base_model.model)) else CLIPTextModel ) load_model = model_cls.from_pretrained(args.pretrained_model_name_or_path, subfolder=sub_dir) load_model = PeftModel.from_pretrained(load_model, input_dir, subfolder=sub_dir) model.load_state_dict(load_model.state_dict()) del load_model accelerator.register_save_state_pre_hook(save_model_hook) accelerator.register_load_state_pre_hook(load_model_hook) # Enable TF32 for faster training on Ampere GPUs, # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices if args.allow_tf32: torch.backends.cuda.matmul.allow_tf32 = True if args.scale_lr: args.unet_learning_rate = ( args.unet_learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) args.text_encoder_learning_rate = ( args.text_encoder_learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs if args.use_8bit_adam: try: import bitsandbytes as bnb except ImportError: raise ImportError( "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." ) optimizer_class = bnb.optim.AdamW8bit else: optimizer_class = torch.optim.AdamW # Optimizer creation optimizer = optimizer_class( [ {"params": unet.parameters(), "lr": args.unet_learning_rate}, {"params": text_encoder.parameters(), "lr": args.text_encoder_learning_rate}, ], betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) # Dataset and DataLoaders creation: train_dataset = RealFillDataset( train_data_root=args.train_data_dir, tokenizer=tokenizer, size=args.resolution, ) train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn, num_workers=1, ) # Scheduler and math around the number of training steps. overrode_max_train_steps = False num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch overrode_max_train_steps = True lr_scheduler = get_scheduler( args.lr_scheduler, optimizer=optimizer, num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, num_cycles=args.lr_num_cycles, power=args.lr_power, ) # Prepare everything with our `accelerator`. 
unet, text_encoder, optimizer, train_dataloader = accelerator.prepare( unet, text_encoder, optimizer, train_dataloader ) # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision # as these weights are only used for inference, keeping weights in full precision is not required. weight_dtype = torch.float32 if accelerator.mixed_precision == "fp16": weight_dtype = torch.float16 elif accelerator.mixed_precision == "bf16": weight_dtype = torch.bfloat16 # Move vae to device and cast to weight_dtype vae.to(accelerator.device, dtype=weight_dtype) # We need to recalculate our total training steps as the size of the training dataloader may have changed. num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if overrode_max_train_steps: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: tracker_config = vars(copy.deepcopy(args)) accelerator.init_trackers("realfill", config=tracker_config) # Train! total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num batches each epoch = {len(train_dataloader)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") global_step = 0 first_epoch = 0 # Potentially load in the weights and states from a previous save if args.resume_from_checkpoint: if args.resume_from_checkpoint != "latest": path = os.path.basename(args.resume_from_checkpoint) else: # Get the mos recent checkpoint dirs = os.listdir(args.output_dir) dirs = [d for d in dirs if d.startswith("checkpoint")] dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) path = dirs[-1] if len(dirs) > 0 else None if path is None: accelerator.print( f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." ) args.resume_from_checkpoint = None initial_global_step = 0 else: accelerator.print(f"Resuming from checkpoint {path}") accelerator.load_state(os.path.join(args.output_dir, path)) global_step = int(path.split("-")[1]) initial_global_step = global_step first_epoch = global_step // num_update_steps_per_epoch else: initial_global_step = 0 progress_bar = tqdm( range(0, args.max_train_steps), initial=initial_global_step, desc="Steps", # Only show the progress bar once on each machine. 
disable=not accelerator.is_local_main_process, ) for epoch in range(first_epoch, args.num_train_epochs): unet.train() text_encoder.train() for step, batch in enumerate(train_dataloader): with accelerator.accumulate(unet, text_encoder): # Convert images to latent space latents = vae.encode(batch["images"].to(dtype=weight_dtype)).latent_dist.sample() latents = latents * 0.18215 # Convert masked images to latent space conditionings = vae.encode(batch["conditioning_images"].to(dtype=weight_dtype)).latent_dist.sample() conditionings = conditionings * 0.18215 # Downsample mask and weighting so that they match with the latents masks, size = batch["masks"].to(dtype=weight_dtype), latents.shape[2:] masks = F.interpolate(masks, size=size) weightings = batch["weightings"].to(dtype=weight_dtype) weightings = F.interpolate(weightings, size=size) # Sample noise that we'll add to the latents noise = torch.randn_like(latents) bsz = latents.shape[0] # Sample a random timestep for each image timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) timesteps = timesteps.long() # Add noise to the latents according to the noise magnitude at each timestep # (this is the forward diffusion process) noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) # Concatenate noisy latents, masks and conditionings to get inputs to unet inputs = torch.cat([noisy_latents, masks, conditionings], dim=1) # Get the text embedding for conditioning encoder_hidden_states = text_encoder(batch["input_ids"])[0] # Predict the noise residual model_pred = unet(inputs, timesteps, encoder_hidden_states).sample # Compute the diffusion loss assert noise_scheduler.config.prediction_type == "epsilon" loss = (weightings * F.mse_loss(model_pred.float(), noise.float(), reduction="none")).mean() # Backpropagate accelerator.backward(loss) if accelerator.sync_gradients: params_to_clip = itertools.chain(unet.parameters(), text_encoder.parameters()) accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) optimizer.step() lr_scheduler.step() optimizer.zero_grad(set_to_none=args.set_grads_to_none) # Checks if the accelerator has performed an optimization step behind the scenes if accelerator.sync_gradients: progress_bar.update(1) if args.report_to == "wandb": accelerator.print(progress_bar) global_step += 1 if accelerator.is_main_process: if global_step % args.checkpointing_steps == 0: # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` if args.checkpoints_total_limit is not None: checkpoints = os.listdir(args.output_dir) checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints if len(checkpoints) >= args.checkpoints_total_limit: num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 removing_checkpoints = checkpoints[0:num_to_remove] logger.info( f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" ) logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") for removing_checkpoint in removing_checkpoints: removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) shutil.rmtree(removing_checkpoint) save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") accelerator.save_state(save_path) logger.info(f"Saved state to {save_path}") if global_step % 
args.validation_steps == 0: log_validation( text_encoder, tokenizer, unet, args, accelerator, weight_dtype, global_step, ) logs = {"loss": loss.detach().item()} progress_bar.set_postfix(**logs) accelerator.log(logs, step=global_step) if global_step >= args.max_train_steps: break # Save the lora layers accelerator.wait_for_everyone() if accelerator.is_main_process: pipeline = StableDiffusionInpaintPipeline.from_pretrained( args.pretrained_model_name_or_path, unet=accelerator.unwrap_model(unet, keep_fp32_wrapper=True).merge_and_unload(), text_encoder=accelerator.unwrap_model(text_encoder, keep_fp32_wrapper=True).merge_and_unload(), revision=args.revision, ) pipeline.save_pretrained(args.output_dir) # Final inference images = log_validation( text_encoder, tokenizer, unet, args, accelerator, weight_dtype, global_step, ) if args.push_to_hub: save_model_card( repo_id, images=images, base_model=args.pretrained_model_name_or_path, repo_folder=args.output_dir, ) upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", ignore_patterns=["step_*", "epoch_*"], ) accelerator.end_training() if __name__ == "__main__": args = parse_args() main(args)
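# Example launch (paths and the base inpainting model are illustrative; the training folder is expected to contain ref/ plus target/target.png and target/mask.png as required by RealFillDataset): accelerate launch train_realfill.py --pretrained_model_name_or_path=stabilityai/stable-diffusion-2-inpainting --train_data_dir=path/to/data --output_dir=realfill-model --resolution=512 --train_batch_size=16 --unet_learning_rate=2e-4 --text_encoder_learning_rate=4e-5 --max_train_steps=2000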
diffusers/examples/research_projects/realfill/train_realfill.py/0
{ "file_path": "diffusers/examples/research_projects/realfill/train_realfill.py", "repo_id": "diffusers", "token_count": 16396 }
# VAE `vae_roundtrip.py` Demonstrates the use of a VAE by roundtripping an image through the encoder and decoder. Original and reconstructed images are displayed side by side. ``` cd examples/research_projects/vae python vae_roundtrip.py \ --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \ --subfolder="vae" \ --input_image="/path/to/your/input.png" ```
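For orientation, the roundtrip boils down to a single encode/decode pass through the VAE. The snippet below is a minimal sketch of that idea rather than the script itself; it assumes the Stable Diffusion v1-5 VAE, uses `VaeImageProcessor` for pre/post-processing, and the side-by-side display at the end is an illustrative choice:

```python
import torch
from PIL import Image

from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="vae"
).to(device)

processor = VaeImageProcessor(vae_scale_factor=8)
original = Image.open("/path/to/your/input.png").convert("RGB")
pixels = processor.preprocess(original).to(device)  # scaled to [-1, 1], shape (1, 3, H, W)

with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()  # compress to the latent space
    decoded = vae.decode(latents).sample               # reconstruct pixels from latents

reconstruction = processor.postprocess(decoded, output_type="pil")[0]

# Show the original and the reconstruction side by side.
canvas = Image.new(
    "RGB", (original.width + reconstruction.width, max(original.height, reconstruction.height))
)
canvas.paste(original, (0, 0))
canvas.paste(reconstruction, (original.width, 0))
canvas.show()
```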
diffusers/examples/research_projects/vae/README.md/0
{ "file_path": "diffusers/examples/research_projects/vae/README.md", "repo_id": "diffusers", "token_count": 140 }
import argparse from typing import Any, Dict import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file from diffusers import AutoencoderDC def remap_qkv_(key: str, state_dict: Dict[str, Any]): qkv = state_dict.pop(key) q, k, v = torch.chunk(qkv, 3, dim=0) parent_module, _, _ = key.rpartition(".qkv.conv.weight") state_dict[f"{parent_module}.to_q.weight"] = q.squeeze() state_dict[f"{parent_module}.to_k.weight"] = k.squeeze() state_dict[f"{parent_module}.to_v.weight"] = v.squeeze() def remap_proj_conv_(key: str, state_dict: Dict[str, Any]): parent_module, _, _ = key.rpartition(".proj.conv.weight") state_dict[f"{parent_module}.to_out.weight"] = state_dict.pop(key).squeeze() AE_KEYS_RENAME_DICT = { # common "main.": "", "op_list.": "", "context_module": "attn", "local_module": "conv_out", # NOTE: The below two lines work because scales in the available configs only have a tuple length of 1 # If there were more scales, there would be more layers, so a loop would be better to handle this "aggreg.0.0": "to_qkv_multiscale.0.proj_in", "aggreg.0.1": "to_qkv_multiscale.0.proj_out", "depth_conv.conv": "conv_depth", "inverted_conv.conv": "conv_inverted", "point_conv.conv": "conv_point", "point_conv.norm": "norm", "conv.conv.": "conv.", "conv1.conv": "conv1", "conv2.conv": "conv2", "conv2.norm": "norm", "proj.norm": "norm_out", # encoder "encoder.project_in.conv": "encoder.conv_in", "encoder.project_out.0.conv": "encoder.conv_out", "encoder.stages": "encoder.down_blocks", # decoder "decoder.project_in.conv": "decoder.conv_in", "decoder.project_out.0": "decoder.norm_out", "decoder.project_out.2.conv": "decoder.conv_out", "decoder.stages": "decoder.up_blocks", } AE_F32C32_KEYS = { # encoder "encoder.project_in.conv": "encoder.conv_in.conv", # decoder "decoder.project_out.2.conv": "decoder.conv_out.conv", } AE_F64C128_KEYS = { # encoder "encoder.project_in.conv": "encoder.conv_in.conv", # decoder "decoder.project_out.2.conv": "decoder.conv_out.conv", } AE_F128C512_KEYS = { # encoder "encoder.project_in.conv": "encoder.conv_in.conv", # decoder "decoder.project_out.2.conv": "decoder.conv_out.conv", } AE_SPECIAL_KEYS_REMAP = { "qkv.conv.weight": remap_qkv_, "proj.conv.weight": remap_proj_conv_, } def get_state_dict(saved_dict: Dict[str, Any]) -> Dict[str, Any]: state_dict = saved_dict if "model" in saved_dict.keys(): state_dict = state_dict["model"] if "module" in saved_dict.keys(): state_dict = state_dict["module"] if "state_dict" in saved_dict.keys(): state_dict = state_dict["state_dict"] return state_dict def update_state_dict_(state_dict: Dict[str, Any], old_key: str, new_key: str) -> Dict[str, Any]: state_dict[new_key] = state_dict.pop(old_key) def convert_ae(config_name: str, dtype: torch.dtype): config = get_ae_config(config_name) hub_id = f"mit-han-lab/{config_name}" ckpt_path = hf_hub_download(hub_id, "model.safetensors") original_state_dict = get_state_dict(load_file(ckpt_path)) ae = AutoencoderDC(**config).to(dtype=dtype) for key in list(original_state_dict.keys()): new_key = key[:] for replace_key, rename_key in AE_KEYS_RENAME_DICT.items(): new_key = new_key.replace(replace_key, rename_key) update_state_dict_(original_state_dict, key, new_key) for key in list(original_state_dict.keys()): for special_key, handler_fn_inplace in AE_SPECIAL_KEYS_REMAP.items(): if special_key not in key: continue handler_fn_inplace(key, original_state_dict) ae.load_state_dict(original_state_dict, strict=True) return ae def get_ae_config(name: str): if name in 
["dc-ae-f32c32-sana-1.0"]: config = { "latent_channels": 32, "encoder_block_types": ( "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ), "decoder_block_types": ( "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ), "encoder_block_out_channels": (128, 256, 512, 512, 1024, 1024), "decoder_block_out_channels": (128, 256, 512, 512, 1024, 1024), "encoder_qkv_multiscales": ((), (), (), (5,), (5,), (5,)), "decoder_qkv_multiscales": ((), (), (), (5,), (5,), (5,)), "encoder_layers_per_block": (2, 2, 2, 3, 3, 3), "decoder_layers_per_block": [3, 3, 3, 3, 3, 3], "downsample_block_type": "conv", "upsample_block_type": "interpolate", "decoder_norm_types": "rms_norm", "decoder_act_fns": "silu", "scaling_factor": 0.41407, } elif name in ["dc-ae-f32c32-in-1.0", "dc-ae-f32c32-mix-1.0"]: AE_KEYS_RENAME_DICT.update(AE_F32C32_KEYS) config = { "latent_channels": 32, "encoder_block_types": [ "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ], "decoder_block_types": [ "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ], "encoder_block_out_channels": [128, 256, 512, 512, 1024, 1024], "decoder_block_out_channels": [128, 256, 512, 512, 1024, 1024], "encoder_layers_per_block": [0, 4, 8, 2, 2, 2], "decoder_layers_per_block": [0, 5, 10, 2, 2, 2], "encoder_qkv_multiscales": ((), (), (), (), (), ()), "decoder_qkv_multiscales": ((), (), (), (), (), ()), "decoder_norm_types": ["batch_norm", "batch_norm", "batch_norm", "rms_norm", "rms_norm", "rms_norm"], "decoder_act_fns": ["relu", "relu", "relu", "silu", "silu", "silu"], } if name == "dc-ae-f32c32-in-1.0": config["scaling_factor"] = 0.3189 elif name == "dc-ae-f32c32-mix-1.0": config["scaling_factor"] = 0.4552 elif name in ["dc-ae-f64c128-in-1.0", "dc-ae-f64c128-mix-1.0"]: AE_KEYS_RENAME_DICT.update(AE_F64C128_KEYS) config = { "latent_channels": 128, "encoder_block_types": [ "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ], "decoder_block_types": [ "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ], "encoder_block_out_channels": [128, 256, 512, 512, 1024, 1024, 2048], "decoder_block_out_channels": [128, 256, 512, 512, 1024, 1024, 2048], "encoder_layers_per_block": [0, 4, 8, 2, 2, 2, 2], "decoder_layers_per_block": [0, 5, 10, 2, 2, 2, 2], "encoder_qkv_multiscales": ((), (), (), (), (), (), ()), "decoder_qkv_multiscales": ((), (), (), (), (), (), ()), "decoder_norm_types": [ "batch_norm", "batch_norm", "batch_norm", "rms_norm", "rms_norm", "rms_norm", "rms_norm", ], "decoder_act_fns": ["relu", "relu", "relu", "silu", "silu", "silu", "silu"], } if name == "dc-ae-f64c128-in-1.0": config["scaling_factor"] = 0.2889 elif name == "dc-ae-f64c128-mix-1.0": config["scaling_factor"] = 0.4538 elif name in ["dc-ae-f128c512-in-1.0", "dc-ae-f128c512-mix-1.0"]: AE_KEYS_RENAME_DICT.update(AE_F128C512_KEYS) config = { "latent_channels": 512, "encoder_block_types": [ "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ], "decoder_block_types": [ "ResBlock", "ResBlock", "ResBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", "EfficientViTBlock", ], "encoder_block_out_channels": [128, 256, 512, 512, 1024, 1024, 2048, 2048], 
"decoder_block_out_channels": [128, 256, 512, 512, 1024, 1024, 2048, 2048], "encoder_layers_per_block": [0, 4, 8, 2, 2, 2, 2, 2], "decoder_layers_per_block": [0, 5, 10, 2, 2, 2, 2, 2], "encoder_qkv_multiscales": ((), (), (), (), (), (), (), ()), "decoder_qkv_multiscales": ((), (), (), (), (), (), (), ()), "decoder_norm_types": [ "batch_norm", "batch_norm", "batch_norm", "rms_norm", "rms_norm", "rms_norm", "rms_norm", "rms_norm", ], "decoder_act_fns": ["relu", "relu", "relu", "silu", "silu", "silu", "silu", "silu"], } if name == "dc-ae-f128c512-in-1.0": config["scaling_factor"] = 0.4883 elif name == "dc-ae-f128c512-mix-1.0": config["scaling_factor"] = 0.3620 else: raise ValueError("Invalid config name provided.") return config def get_args(): parser = argparse.ArgumentParser() parser.add_argument( "--config_name", type=str, default="dc-ae-f32c32-sana-1.0", choices=[ "dc-ae-f32c32-sana-1.0", "dc-ae-f32c32-in-1.0", "dc-ae-f32c32-mix-1.0", "dc-ae-f64c128-in-1.0", "dc-ae-f64c128-mix-1.0", "dc-ae-f128c512-in-1.0", "dc-ae-f128c512-mix-1.0", ], help="The DCAE checkpoint to convert", ) parser.add_argument("--output_path", type=str, required=True, help="Path where converted model should be saved") parser.add_argument("--dtype", default="fp32", help="Torch dtype to save the model in.") return parser.parse_args() DTYPE_MAPPING = { "fp32": torch.float32, "fp16": torch.float16, "bf16": torch.bfloat16, } VARIANT_MAPPING = { "fp32": None, "fp16": "fp16", "bf16": "bf16", } if __name__ == "__main__": args = get_args() dtype = DTYPE_MAPPING[args.dtype] variant = VARIANT_MAPPING[args.dtype] ae = convert_ae(args.config_name, dtype) ae.save_pretrained(args.output_path, safe_serialization=True, max_shard_size="5GB", variant=variant)
diffusers/scripts/convert_dcae_to_diffusers.py/0
{ "file_path": "diffusers/scripts/convert_dcae_to_diffusers.py", "repo_id": "diffusers", "token_count": 6118 }
# coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Conversion script for the T2I-Adapter checkpoints. """ import argparse import torch from diffusers import T2IAdapter def convert_adapter(src_state, in_channels): original_body_length = max([int(x.split(".")[1]) for x in src_state.keys() if "body." in x]) + 1 assert original_body_length == 8 # (0, 1) -> channels 1 assert src_state["body.0.block1.weight"].shape == (320, 320, 3, 3) # (2, 3) -> channels 2 assert src_state["body.2.in_conv.weight"].shape == (640, 320, 1, 1) # (4, 5) -> channels 3 assert src_state["body.4.in_conv.weight"].shape == (1280, 640, 1, 1) # (6, 7) -> channels 4 assert src_state["body.6.block1.weight"].shape == (1280, 1280, 3, 3) res_state = { "adapter.conv_in.weight": src_state.pop("conv_in.weight"), "adapter.conv_in.bias": src_state.pop("conv_in.bias"), # 0.resnets.0 "adapter.body.0.resnets.0.block1.weight": src_state.pop("body.0.block1.weight"), "adapter.body.0.resnets.0.block1.bias": src_state.pop("body.0.block1.bias"), "adapter.body.0.resnets.0.block2.weight": src_state.pop("body.0.block2.weight"), "adapter.body.0.resnets.0.block2.bias": src_state.pop("body.0.block2.bias"), # 0.resnets.1 "adapter.body.0.resnets.1.block1.weight": src_state.pop("body.1.block1.weight"), "adapter.body.0.resnets.1.block1.bias": src_state.pop("body.1.block1.bias"), "adapter.body.0.resnets.1.block2.weight": src_state.pop("body.1.block2.weight"), "adapter.body.0.resnets.1.block2.bias": src_state.pop("body.1.block2.bias"), # 1 "adapter.body.1.in_conv.weight": src_state.pop("body.2.in_conv.weight"), "adapter.body.1.in_conv.bias": src_state.pop("body.2.in_conv.bias"), # 1.resnets.0 "adapter.body.1.resnets.0.block1.weight": src_state.pop("body.2.block1.weight"), "adapter.body.1.resnets.0.block1.bias": src_state.pop("body.2.block1.bias"), "adapter.body.1.resnets.0.block2.weight": src_state.pop("body.2.block2.weight"), "adapter.body.1.resnets.0.block2.bias": src_state.pop("body.2.block2.bias"), # 1.resnets.1 "adapter.body.1.resnets.1.block1.weight": src_state.pop("body.3.block1.weight"), "adapter.body.1.resnets.1.block1.bias": src_state.pop("body.3.block1.bias"), "adapter.body.1.resnets.1.block2.weight": src_state.pop("body.3.block2.weight"), "adapter.body.1.resnets.1.block2.bias": src_state.pop("body.3.block2.bias"), # 2 "adapter.body.2.in_conv.weight": src_state.pop("body.4.in_conv.weight"), "adapter.body.2.in_conv.bias": src_state.pop("body.4.in_conv.bias"), # 2.resnets.0 "adapter.body.2.resnets.0.block1.weight": src_state.pop("body.4.block1.weight"), "adapter.body.2.resnets.0.block1.bias": src_state.pop("body.4.block1.bias"), "adapter.body.2.resnets.0.block2.weight": src_state.pop("body.4.block2.weight"), "adapter.body.2.resnets.0.block2.bias": src_state.pop("body.4.block2.bias"), # 2.resnets.1 "adapter.body.2.resnets.1.block1.weight": src_state.pop("body.5.block1.weight"), "adapter.body.2.resnets.1.block1.bias": src_state.pop("body.5.block1.bias"), "adapter.body.2.resnets.1.block2.weight": 
src_state.pop("body.5.block2.weight"), "adapter.body.2.resnets.1.block2.bias": src_state.pop("body.5.block2.bias"), # 3.resnets.0 "adapter.body.3.resnets.0.block1.weight": src_state.pop("body.6.block1.weight"), "adapter.body.3.resnets.0.block1.bias": src_state.pop("body.6.block1.bias"), "adapter.body.3.resnets.0.block2.weight": src_state.pop("body.6.block2.weight"), "adapter.body.3.resnets.0.block2.bias": src_state.pop("body.6.block2.bias"), # 3.resnets.1 "adapter.body.3.resnets.1.block1.weight": src_state.pop("body.7.block1.weight"), "adapter.body.3.resnets.1.block1.bias": src_state.pop("body.7.block1.bias"), "adapter.body.3.resnets.1.block2.weight": src_state.pop("body.7.block2.weight"), "adapter.body.3.resnets.1.block2.bias": src_state.pop("body.7.block2.bias"), } assert len(src_state) == 0 adapter = T2IAdapter(in_channels=in_channels, adapter_type="full_adapter") adapter.load_state_dict(res_state) return adapter def convert_light_adapter(src_state): original_body_length = max([int(x.split(".")[1]) for x in src_state.keys() if "body." in x]) + 1 assert original_body_length == 4 res_state = { # body.0.in_conv "adapter.body.0.in_conv.weight": src_state.pop("body.0.in_conv.weight"), "adapter.body.0.in_conv.bias": src_state.pop("body.0.in_conv.bias"), # body.0.resnets.0 "adapter.body.0.resnets.0.block1.weight": src_state.pop("body.0.body.0.block1.weight"), "adapter.body.0.resnets.0.block1.bias": src_state.pop("body.0.body.0.block1.bias"), "adapter.body.0.resnets.0.block2.weight": src_state.pop("body.0.body.0.block2.weight"), "adapter.body.0.resnets.0.block2.bias": src_state.pop("body.0.body.0.block2.bias"), # body.0.resnets.1 "adapter.body.0.resnets.1.block1.weight": src_state.pop("body.0.body.1.block1.weight"), "adapter.body.0.resnets.1.block1.bias": src_state.pop("body.0.body.1.block1.bias"), "adapter.body.0.resnets.1.block2.weight": src_state.pop("body.0.body.1.block2.weight"), "adapter.body.0.resnets.1.block2.bias": src_state.pop("body.0.body.1.block2.bias"), # body.0.resnets.2 "adapter.body.0.resnets.2.block1.weight": src_state.pop("body.0.body.2.block1.weight"), "adapter.body.0.resnets.2.block1.bias": src_state.pop("body.0.body.2.block1.bias"), "adapter.body.0.resnets.2.block2.weight": src_state.pop("body.0.body.2.block2.weight"), "adapter.body.0.resnets.2.block2.bias": src_state.pop("body.0.body.2.block2.bias"), # body.0.resnets.3 "adapter.body.0.resnets.3.block1.weight": src_state.pop("body.0.body.3.block1.weight"), "adapter.body.0.resnets.3.block1.bias": src_state.pop("body.0.body.3.block1.bias"), "adapter.body.0.resnets.3.block2.weight": src_state.pop("body.0.body.3.block2.weight"), "adapter.body.0.resnets.3.block2.bias": src_state.pop("body.0.body.3.block2.bias"), # body.0.out_conv "adapter.body.0.out_conv.weight": src_state.pop("body.0.out_conv.weight"), "adapter.body.0.out_conv.bias": src_state.pop("body.0.out_conv.bias"), # body.1.in_conv "adapter.body.1.in_conv.weight": src_state.pop("body.1.in_conv.weight"), "adapter.body.1.in_conv.bias": src_state.pop("body.1.in_conv.bias"), # body.1.resnets.0 "adapter.body.1.resnets.0.block1.weight": src_state.pop("body.1.body.0.block1.weight"), "adapter.body.1.resnets.0.block1.bias": src_state.pop("body.1.body.0.block1.bias"), "adapter.body.1.resnets.0.block2.weight": src_state.pop("body.1.body.0.block2.weight"), "adapter.body.1.resnets.0.block2.bias": src_state.pop("body.1.body.0.block2.bias"), # body.1.resnets.1 "adapter.body.1.resnets.1.block1.weight": src_state.pop("body.1.body.1.block1.weight"), 
"adapter.body.1.resnets.1.block1.bias": src_state.pop("body.1.body.1.block1.bias"), "adapter.body.1.resnets.1.block2.weight": src_state.pop("body.1.body.1.block2.weight"), "adapter.body.1.resnets.1.block2.bias": src_state.pop("body.1.body.1.block2.bias"), # body.1.body.2 "adapter.body.1.resnets.2.block1.weight": src_state.pop("body.1.body.2.block1.weight"), "adapter.body.1.resnets.2.block1.bias": src_state.pop("body.1.body.2.block1.bias"), "adapter.body.1.resnets.2.block2.weight": src_state.pop("body.1.body.2.block2.weight"), "adapter.body.1.resnets.2.block2.bias": src_state.pop("body.1.body.2.block2.bias"), # body.1.body.3 "adapter.body.1.resnets.3.block1.weight": src_state.pop("body.1.body.3.block1.weight"), "adapter.body.1.resnets.3.block1.bias": src_state.pop("body.1.body.3.block1.bias"), "adapter.body.1.resnets.3.block2.weight": src_state.pop("body.1.body.3.block2.weight"), "adapter.body.1.resnets.3.block2.bias": src_state.pop("body.1.body.3.block2.bias"), # body.1.out_conv "adapter.body.1.out_conv.weight": src_state.pop("body.1.out_conv.weight"), "adapter.body.1.out_conv.bias": src_state.pop("body.1.out_conv.bias"), # body.2.in_conv "adapter.body.2.in_conv.weight": src_state.pop("body.2.in_conv.weight"), "adapter.body.2.in_conv.bias": src_state.pop("body.2.in_conv.bias"), # body.2.body.0 "adapter.body.2.resnets.0.block1.weight": src_state.pop("body.2.body.0.block1.weight"), "adapter.body.2.resnets.0.block1.bias": src_state.pop("body.2.body.0.block1.bias"), "adapter.body.2.resnets.0.block2.weight": src_state.pop("body.2.body.0.block2.weight"), "adapter.body.2.resnets.0.block2.bias": src_state.pop("body.2.body.0.block2.bias"), # body.2.body.1 "adapter.body.2.resnets.1.block1.weight": src_state.pop("body.2.body.1.block1.weight"), "adapter.body.2.resnets.1.block1.bias": src_state.pop("body.2.body.1.block1.bias"), "adapter.body.2.resnets.1.block2.weight": src_state.pop("body.2.body.1.block2.weight"), "adapter.body.2.resnets.1.block2.bias": src_state.pop("body.2.body.1.block2.bias"), # body.2.body.2 "adapter.body.2.resnets.2.block1.weight": src_state.pop("body.2.body.2.block1.weight"), "adapter.body.2.resnets.2.block1.bias": src_state.pop("body.2.body.2.block1.bias"), "adapter.body.2.resnets.2.block2.weight": src_state.pop("body.2.body.2.block2.weight"), "adapter.body.2.resnets.2.block2.bias": src_state.pop("body.2.body.2.block2.bias"), # body.2.body.3 "adapter.body.2.resnets.3.block1.weight": src_state.pop("body.2.body.3.block1.weight"), "adapter.body.2.resnets.3.block1.bias": src_state.pop("body.2.body.3.block1.bias"), "adapter.body.2.resnets.3.block2.weight": src_state.pop("body.2.body.3.block2.weight"), "adapter.body.2.resnets.3.block2.bias": src_state.pop("body.2.body.3.block2.bias"), # body.2.out_conv "adapter.body.2.out_conv.weight": src_state.pop("body.2.out_conv.weight"), "adapter.body.2.out_conv.bias": src_state.pop("body.2.out_conv.bias"), # body.3.in_conv "adapter.body.3.in_conv.weight": src_state.pop("body.3.in_conv.weight"), "adapter.body.3.in_conv.bias": src_state.pop("body.3.in_conv.bias"), # body.3.body.0 "adapter.body.3.resnets.0.block1.weight": src_state.pop("body.3.body.0.block1.weight"), "adapter.body.3.resnets.0.block1.bias": src_state.pop("body.3.body.0.block1.bias"), "adapter.body.3.resnets.0.block2.weight": src_state.pop("body.3.body.0.block2.weight"), "adapter.body.3.resnets.0.block2.bias": src_state.pop("body.3.body.0.block2.bias"), # body.3.body.1 "adapter.body.3.resnets.1.block1.weight": src_state.pop("body.3.body.1.block1.weight"), 
"adapter.body.3.resnets.1.block1.bias": src_state.pop("body.3.body.1.block1.bias"), "adapter.body.3.resnets.1.block2.weight": src_state.pop("body.3.body.1.block2.weight"), "adapter.body.3.resnets.1.block2.bias": src_state.pop("body.3.body.1.block2.bias"), # body.3.body.2 "adapter.body.3.resnets.2.block1.weight": src_state.pop("body.3.body.2.block1.weight"), "adapter.body.3.resnets.2.block1.bias": src_state.pop("body.3.body.2.block1.bias"), "adapter.body.3.resnets.2.block2.weight": src_state.pop("body.3.body.2.block2.weight"), "adapter.body.3.resnets.2.block2.bias": src_state.pop("body.3.body.2.block2.bias"), # body.3.body.3 "adapter.body.3.resnets.3.block1.weight": src_state.pop("body.3.body.3.block1.weight"), "adapter.body.3.resnets.3.block1.bias": src_state.pop("body.3.body.3.block1.bias"), "adapter.body.3.resnets.3.block2.weight": src_state.pop("body.3.body.3.block2.weight"), "adapter.body.3.resnets.3.block2.bias": src_state.pop("body.3.body.3.block2.bias"), # body.3.out_conv "adapter.body.3.out_conv.weight": src_state.pop("body.3.out_conv.weight"), "adapter.body.3.out_conv.bias": src_state.pop("body.3.out_conv.bias"), } assert len(src_state) == 0 adapter = T2IAdapter(in_channels=3, channels=[320, 640, 1280], num_res_blocks=4, adapter_type="light_adapter") adapter.load_state_dict(res_state) return adapter if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert." ) parser.add_argument( "--output_path", default=None, type=str, required=True, help="Path to the store the result checkpoint." ) parser.add_argument( "--is_adapter_light", action="store_true", help="Is checkpoint come from Adapter-Light architecture. ex: color-adapter", ) parser.add_argument("--in_channels", required=False, type=int, help="Input channels for non-light adapter") args = parser.parse_args() src_state = torch.load(args.checkpoint_path) if args.is_adapter_light: adapter = convert_light_adapter(src_state) else: if args.in_channels is None: raise ValueError("set `--in_channels=<n>`") adapter = convert_adapter(src_state, args.in_channels) adapter.save_pretrained(args.output_path)
diffusers/scripts/convert_original_t2i_adapter.py/0
{ "file_path": "diffusers/scripts/convert_original_t2i_adapter.py", "repo_id": "diffusers", "token_count": 6734 }
# Convert the original UniDiffuser checkpoints into diffusers equivalents. import argparse from argparse import Namespace import torch from transformers import ( CLIPImageProcessor, CLIPTextConfig, CLIPTextModel, CLIPTokenizer, CLIPVisionConfig, CLIPVisionModelWithProjection, GPT2Tokenizer, ) from diffusers import ( AutoencoderKL, DPMSolverMultistepScheduler, UniDiffuserModel, UniDiffuserPipeline, UniDiffuserTextDecoder, ) SCHEDULER_CONFIG = Namespace( **{ "beta_start": 0.00085, "beta_end": 0.012, "beta_schedule": "scaled_linear", "solver_order": 3, } ) # Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.shave_segments def shave_segments(path, n_shave_prefix_segments=1): """ Removes segments. Positive values shave the first segments, negative shave the last segments. """ if n_shave_prefix_segments >= 0: return ".".join(path.split(".")[n_shave_prefix_segments:]) else: return ".".join(path.split(".")[:n_shave_prefix_segments]) # Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_resnet_paths def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0): """ Updates paths inside resnets to the new naming scheme (local renaming) """ mapping = [] for old_item in old_list: new_item = old_item new_item = new_item.replace("nin_shortcut", "conv_shortcut") new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) mapping.append({"old": old_item, "new": new_item}) return mapping # Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_attention_paths def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0): """ Updates paths inside attentions to the new naming scheme (local renaming) """ mapping = [] for old_item in old_list: new_item = old_item new_item = new_item.replace("norm.weight", "group_norm.weight") new_item = new_item.replace("norm.bias", "group_norm.bias") new_item = new_item.replace("q.weight", "to_q.weight") new_item = new_item.replace("q.bias", "to_q.bias") new_item = new_item.replace("k.weight", "to_k.weight") new_item = new_item.replace("k.bias", "to_k.bias") new_item = new_item.replace("v.weight", "to_v.weight") new_item = new_item.replace("v.bias", "to_v.bias") new_item = new_item.replace("proj_out.weight", "to_out.0.weight") new_item = new_item.replace("proj_out.bias", "to_out.0.bias") new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) mapping.append({"old": old_item, "new": new_item}) return mapping # Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.conv_attn_to_linear def conv_attn_to_linear(checkpoint): keys = list(checkpoint.keys()) attn_keys = ["query.weight", "key.weight", "value.weight"] for key in keys: if ".".join(key.split(".")[-2:]) in attn_keys: if checkpoint[key].ndim > 2: checkpoint[key] = checkpoint[key][:, :, 0, 0] elif "proj_attn.weight" in key: if checkpoint[key].ndim > 2: checkpoint[key] = checkpoint[key][:, :, 0] # Modified from diffusers.pipelines.stable_diffusion.convert_from_ckpt.assign_to_checkpoint # config.num_head_channels => num_head_channels def assign_to_checkpoint( paths, checkpoint, old_checkpoint, attention_paths_to_split=None, additional_replacements=None, num_head_channels=1, ): """ This does the final conversion step: take locally converted weights and apply a global renaming to them. It splits attention layers, and takes into account additional replacements that may arise. Assigns the weights to the new checkpoint. 
""" assert isinstance(paths, list), "Paths should be a list of dicts containing 'old' and 'new' keys." # Splits the attention layers into three variables. if attention_paths_to_split is not None: for path, path_map in attention_paths_to_split.items(): old_tensor = old_checkpoint[path] channels = old_tensor.shape[0] // 3 target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1) num_heads = old_tensor.shape[0] // num_head_channels // 3 old_tensor = old_tensor.reshape((num_heads, 3 * channels // num_heads) + old_tensor.shape[1:]) query, key, value = old_tensor.split(channels // num_heads, dim=1) checkpoint[path_map["query"]] = query.reshape(target_shape) checkpoint[path_map["key"]] = key.reshape(target_shape) checkpoint[path_map["value"]] = value.reshape(target_shape) for path in paths: new_path = path["new"] # These have already been assigned if attention_paths_to_split is not None and new_path in attention_paths_to_split: continue # Global renaming happens here new_path = new_path.replace("middle_block.0", "mid_block.resnets.0") new_path = new_path.replace("middle_block.1", "mid_block.attentions.0") new_path = new_path.replace("middle_block.2", "mid_block.resnets.1") if additional_replacements is not None: for replacement in additional_replacements: new_path = new_path.replace(replacement["old"], replacement["new"]) # proj_attn.weight has to be converted from conv 1D to linear is_attn_weight = "proj_attn.weight" in new_path or ("attentions" in new_path and "to_" in new_path) shape = old_checkpoint[path["old"]].shape if is_attn_weight and len(shape) == 3: checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0] elif is_attn_weight and len(shape) == 4: checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0, 0] else: checkpoint[new_path] = old_checkpoint[path["old"]] def create_vae_diffusers_config(config_type): # Hardcoded for now if args.config_type == "test": vae_config = create_vae_diffusers_config_test() elif args.config_type == "big": vae_config = create_vae_diffusers_config_big() else: raise NotImplementedError( f"Config type {config_type} is not implemented, currently only config types" " 'test' and 'big' are available." ) return vae_config def create_unidiffuser_unet_config(config_type, version): # Hardcoded for now if args.config_type == "test": unet_config = create_unidiffuser_unet_config_test() elif args.config_type == "big": unet_config = create_unidiffuser_unet_config_big() else: raise NotImplementedError( f"Config type {config_type} is not implemented, currently only config types" " 'test' and 'big' are available." ) # Unidiffuser-v1 uses data type embeddings if version == 1: unet_config["use_data_type_embedding"] = True return unet_config def create_text_decoder_config(config_type): # Hardcoded for now if args.config_type == "test": text_decoder_config = create_text_decoder_config_test() elif args.config_type == "big": text_decoder_config = create_text_decoder_config_big() else: raise NotImplementedError( f"Config type {config_type} is not implemented, currently only config types" " 'test' and 'big' are available." ) return text_decoder_config # Hardcoded configs for test versions of the UniDiffuser models, corresponding to those in the fast default tests. 
def create_vae_diffusers_config_test(): vae_config = { "sample_size": 32, "in_channels": 3, "out_channels": 3, "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"], "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"], "block_out_channels": [32, 64], "latent_channels": 4, "layers_per_block": 1, } return vae_config def create_unidiffuser_unet_config_test(): unet_config = { "text_dim": 32, "clip_img_dim": 32, "num_text_tokens": 77, "num_attention_heads": 2, "attention_head_dim": 8, "in_channels": 4, "out_channels": 4, "num_layers": 2, "dropout": 0.0, "norm_num_groups": 32, "attention_bias": False, "sample_size": 16, "patch_size": 2, "activation_fn": "gelu", "num_embeds_ada_norm": 1000, "norm_type": "layer_norm", "block_type": "unidiffuser", "pre_layer_norm": False, "use_timestep_embedding": False, "norm_elementwise_affine": True, "use_patch_pos_embed": False, "ff_final_dropout": True, "use_data_type_embedding": False, } return unet_config def create_text_decoder_config_test(): text_decoder_config = { "prefix_length": 77, "prefix_inner_dim": 32, "prefix_hidden_dim": 32, "vocab_size": 1025, # 1024 + 1 for new EOS token "n_positions": 1024, "n_embd": 32, "n_layer": 5, "n_head": 4, "n_inner": 37, "activation_function": "gelu", "resid_pdrop": 0.1, "embd_pdrop": 0.1, "attn_pdrop": 0.1, "layer_norm_epsilon": 1e-5, "initializer_range": 0.02, } return text_decoder_config # Hardcoded configs for the UniDiffuser V1 model at https://huggingface.co/thu-ml/unidiffuser-v1 # See also https://github.com/thu-ml/unidiffuser/blob/main/configs/sample_unidiffuser_v1.py def create_vae_diffusers_config_big(): vae_config = { "sample_size": 256, "in_channels": 3, "out_channels": 3, "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D"], "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"], "block_out_channels": [128, 256, 512, 512], "latent_channels": 4, "layers_per_block": 2, } return vae_config def create_unidiffuser_unet_config_big(): unet_config = { "text_dim": 64, "clip_img_dim": 512, "num_text_tokens": 77, "num_attention_heads": 24, "attention_head_dim": 64, "in_channels": 4, "out_channels": 4, "num_layers": 30, "dropout": 0.0, "norm_num_groups": 32, "attention_bias": False, "sample_size": 64, "patch_size": 2, "activation_fn": "gelu", "num_embeds_ada_norm": 1000, "norm_type": "layer_norm", "block_type": "unidiffuser", "pre_layer_norm": False, "use_timestep_embedding": False, "norm_elementwise_affine": True, "use_patch_pos_embed": False, "ff_final_dropout": True, "use_data_type_embedding": False, } return unet_config # From https://huggingface.co/gpt2/blob/main/config.json, the GPT2 checkpoint used by UniDiffuser def create_text_decoder_config_big(): text_decoder_config = { "prefix_length": 77, "prefix_inner_dim": 768, "prefix_hidden_dim": 64, "vocab_size": 50258, # 50257 + 1 for new EOS token "n_positions": 1024, "n_embd": 768, "n_layer": 12, "n_head": 12, "n_inner": 3072, "activation_function": "gelu", "resid_pdrop": 0.1, "embd_pdrop": 0.1, "attn_pdrop": 0.1, "layer_norm_epsilon": 1e-5, "initializer_range": 0.02, } return text_decoder_config # Based on diffusers.pipelines.stable_diffusion.convert_from_ckpt.convert_ldm_vae_checkpoint def convert_vae_to_diffusers(ckpt, diffusers_model, num_head_channels=1): """ Converts a UniDiffuser autoencoder_kl.pth checkpoint to a diffusers AutoencoderKL. 
""" # autoencoder_kl.pth ckpt is a torch state dict vae_state_dict = torch.load(ckpt, map_location="cpu") new_checkpoint = {} new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"] new_checkpoint["encoder.conv_in.bias"] = vae_state_dict["encoder.conv_in.bias"] new_checkpoint["encoder.conv_out.weight"] = vae_state_dict["encoder.conv_out.weight"] new_checkpoint["encoder.conv_out.bias"] = vae_state_dict["encoder.conv_out.bias"] new_checkpoint["encoder.conv_norm_out.weight"] = vae_state_dict["encoder.norm_out.weight"] new_checkpoint["encoder.conv_norm_out.bias"] = vae_state_dict["encoder.norm_out.bias"] new_checkpoint["decoder.conv_in.weight"] = vae_state_dict["decoder.conv_in.weight"] new_checkpoint["decoder.conv_in.bias"] = vae_state_dict["decoder.conv_in.bias"] new_checkpoint["decoder.conv_out.weight"] = vae_state_dict["decoder.conv_out.weight"] new_checkpoint["decoder.conv_out.bias"] = vae_state_dict["decoder.conv_out.bias"] new_checkpoint["decoder.conv_norm_out.weight"] = vae_state_dict["decoder.norm_out.weight"] new_checkpoint["decoder.conv_norm_out.bias"] = vae_state_dict["decoder.norm_out.bias"] new_checkpoint["quant_conv.weight"] = vae_state_dict["quant_conv.weight"] new_checkpoint["quant_conv.bias"] = vae_state_dict["quant_conv.bias"] new_checkpoint["post_quant_conv.weight"] = vae_state_dict["post_quant_conv.weight"] new_checkpoint["post_quant_conv.bias"] = vae_state_dict["post_quant_conv.bias"] # Retrieves the keys for the encoder down blocks only num_down_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "encoder.down" in layer}) down_blocks = { layer_id: [key for key in vae_state_dict if f"down.{layer_id}" in key] for layer_id in range(num_down_blocks) } # Retrieves the keys for the decoder up blocks only num_up_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "decoder.up" in layer}) up_blocks = { layer_id: [key for key in vae_state_dict if f"up.{layer_id}" in key] for layer_id in range(num_up_blocks) } for i in range(num_down_blocks): resnets = [key for key in down_blocks[i] if f"down.{i}" in key and f"down.{i}.downsample" not in key] if f"encoder.down.{i}.downsample.conv.weight" in vae_state_dict: new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.weight"] = vae_state_dict.pop( f"encoder.down.{i}.downsample.conv.weight" ) new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.bias"] = vae_state_dict.pop( f"encoder.down.{i}.downsample.conv.bias" ) paths = renew_vae_resnet_paths(resnets) meta_path = {"old": f"down.{i}.block", "new": f"down_blocks.{i}.resnets"} assign_to_checkpoint( paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], num_head_channels=num_head_channels, # not used in vae ) mid_resnets = [key for key in vae_state_dict if "encoder.mid.block" in key] num_mid_res_blocks = 2 for i in range(1, num_mid_res_blocks + 1): resnets = [key for key in mid_resnets if f"encoder.mid.block_{i}" in key] paths = renew_vae_resnet_paths(resnets) meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} assign_to_checkpoint( paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], num_head_channels=num_head_channels, # not used in vae ) mid_attentions = [key for key in vae_state_dict if "encoder.mid.attn" in key] paths = renew_vae_attention_paths(mid_attentions) meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} assign_to_checkpoint( paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], 
num_head_channels=num_head_channels, # not used in vae ) conv_attn_to_linear(new_checkpoint) for i in range(num_up_blocks): block_id = num_up_blocks - 1 - i resnets = [ key for key in up_blocks[block_id] if f"up.{block_id}" in key and f"up.{block_id}.upsample" not in key ] if f"decoder.up.{block_id}.upsample.conv.weight" in vae_state_dict: new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.weight"] = vae_state_dict[ f"decoder.up.{block_id}.upsample.conv.weight" ] new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.bias"] = vae_state_dict[ f"decoder.up.{block_id}.upsample.conv.bias" ] paths = renew_vae_resnet_paths(resnets) meta_path = {"old": f"up.{block_id}.block", "new": f"up_blocks.{i}.resnets"} assign_to_checkpoint( paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], num_head_channels=num_head_channels, # not used in vae ) mid_resnets = [key for key in vae_state_dict if "decoder.mid.block" in key] num_mid_res_blocks = 2 for i in range(1, num_mid_res_blocks + 1): resnets = [key for key in mid_resnets if f"decoder.mid.block_{i}" in key] paths = renew_vae_resnet_paths(resnets) meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} assign_to_checkpoint( paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], num_head_channels=num_head_channels, # not used in vae ) mid_attentions = [key for key in vae_state_dict if "decoder.mid.attn" in key] paths = renew_vae_attention_paths(mid_attentions) meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} assign_to_checkpoint( paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], num_head_channels=num_head_channels, # not used in vae ) conv_attn_to_linear(new_checkpoint) missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_checkpoint) for missing_key in missing_keys: print(f"Missing key: {missing_key}") for unexpected_key in unexpected_keys: print(f"Unexpected key: {unexpected_key}") return diffusers_model def convert_uvit_block_to_diffusers_block( uvit_state_dict, new_state_dict, block_prefix, new_prefix="transformer.transformer_", skip_connection=False, ): """ Maps the keys in a UniDiffuser transformer block (`Block`) to the keys in a diffusers transformer block (`UTransformerBlock`/`UniDiffuserBlock`). """ prefix = new_prefix + block_prefix if skip_connection: new_state_dict[prefix + ".skip.skip_linear.weight"] = uvit_state_dict[block_prefix + ".skip_linear.weight"] new_state_dict[prefix + ".skip.skip_linear.bias"] = uvit_state_dict[block_prefix + ".skip_linear.bias"] new_state_dict[prefix + ".skip.norm.weight"] = uvit_state_dict[block_prefix + ".norm1.weight"] new_state_dict[prefix + ".skip.norm.bias"] = uvit_state_dict[block_prefix + ".norm1.bias"] # Create the prefix string for out_blocks. 
prefix += ".block" # Split up attention qkv.weight into to_q.weight, to_k.weight, to_v.weight qkv = uvit_state_dict[block_prefix + ".attn.qkv.weight"] new_attn_keys = [".attn1.to_q.weight", ".attn1.to_k.weight", ".attn1.to_v.weight"] new_attn_keys = [prefix + key for key in new_attn_keys] shape = qkv.shape[0] // len(new_attn_keys) for i, attn_key in enumerate(new_attn_keys): new_state_dict[attn_key] = qkv[i * shape : (i + 1) * shape] new_state_dict[prefix + ".attn1.to_out.0.weight"] = uvit_state_dict[block_prefix + ".attn.proj.weight"] new_state_dict[prefix + ".attn1.to_out.0.bias"] = uvit_state_dict[block_prefix + ".attn.proj.bias"] new_state_dict[prefix + ".norm1.weight"] = uvit_state_dict[block_prefix + ".norm2.weight"] new_state_dict[prefix + ".norm1.bias"] = uvit_state_dict[block_prefix + ".norm2.bias"] new_state_dict[prefix + ".ff.net.0.proj.weight"] = uvit_state_dict[block_prefix + ".mlp.fc1.weight"] new_state_dict[prefix + ".ff.net.0.proj.bias"] = uvit_state_dict[block_prefix + ".mlp.fc1.bias"] new_state_dict[prefix + ".ff.net.2.weight"] = uvit_state_dict[block_prefix + ".mlp.fc2.weight"] new_state_dict[prefix + ".ff.net.2.bias"] = uvit_state_dict[block_prefix + ".mlp.fc2.bias"] new_state_dict[prefix + ".norm3.weight"] = uvit_state_dict[block_prefix + ".norm3.weight"] new_state_dict[prefix + ".norm3.bias"] = uvit_state_dict[block_prefix + ".norm3.bias"] return uvit_state_dict, new_state_dict def convert_uvit_to_diffusers(ckpt, diffusers_model): """ Converts a UniDiffuser uvit_v*.pth checkpoint to a diffusers UniDiffusersModel. """ # uvit_v*.pth ckpt is a torch state dict uvit_state_dict = torch.load(ckpt, map_location="cpu") new_state_dict = {} # Input layers new_state_dict["vae_img_in.proj.weight"] = uvit_state_dict["patch_embed.proj.weight"] new_state_dict["vae_img_in.proj.bias"] = uvit_state_dict["patch_embed.proj.bias"] new_state_dict["clip_img_in.weight"] = uvit_state_dict["clip_img_embed.weight"] new_state_dict["clip_img_in.bias"] = uvit_state_dict["clip_img_embed.bias"] new_state_dict["text_in.weight"] = uvit_state_dict["text_embed.weight"] new_state_dict["text_in.bias"] = uvit_state_dict["text_embed.bias"] new_state_dict["pos_embed"] = uvit_state_dict["pos_embed"] # Handle data type token embeddings for UniDiffuser-v1 if "token_embedding.weight" in uvit_state_dict and diffusers_model.use_data_type_embedding: new_state_dict["data_type_pos_embed_token"] = uvit_state_dict["pos_embed_token"] new_state_dict["data_type_token_embedding.weight"] = uvit_state_dict["token_embedding.weight"] # Also initialize the PatchEmbedding in UTransformer2DModel with the PatchEmbedding from the checkpoint. # This isn't used in the current implementation, so might want to remove. 
new_state_dict["transformer.pos_embed.proj.weight"] = uvit_state_dict["patch_embed.proj.weight"] new_state_dict["transformer.pos_embed.proj.bias"] = uvit_state_dict["patch_embed.proj.bias"] # Output layers new_state_dict["transformer.norm_out.weight"] = uvit_state_dict["norm.weight"] new_state_dict["transformer.norm_out.bias"] = uvit_state_dict["norm.bias"] new_state_dict["vae_img_out.weight"] = uvit_state_dict["decoder_pred.weight"] new_state_dict["vae_img_out.bias"] = uvit_state_dict["decoder_pred.bias"] new_state_dict["clip_img_out.weight"] = uvit_state_dict["clip_img_out.weight"] new_state_dict["clip_img_out.bias"] = uvit_state_dict["clip_img_out.bias"] new_state_dict["text_out.weight"] = uvit_state_dict["text_out.weight"] new_state_dict["text_out.bias"] = uvit_state_dict["text_out.bias"] # in_blocks in_blocks_prefixes = {".".join(layer.split(".")[:2]) for layer in uvit_state_dict if "in_blocks" in layer} for in_block_prefix in list(in_blocks_prefixes): convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, in_block_prefix) # mid_block # Assume there's only one mid block convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, "mid_block") # out_blocks out_blocks_prefixes = {".".join(layer.split(".")[:2]) for layer in uvit_state_dict if "out_blocks" in layer} for out_block_prefix in list(out_blocks_prefixes): convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, out_block_prefix, skip_connection=True) missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_state_dict) for missing_key in missing_keys: print(f"Missing key: {missing_key}") for unexpected_key in unexpected_keys: print(f"Unexpected key: {unexpected_key}") return diffusers_model def convert_caption_decoder_to_diffusers(ckpt, diffusers_model): """ Converts a UniDiffuser caption_decoder.pth checkpoint to a diffusers UniDiffuserTextDecoder. """ # caption_decoder.pth ckpt is a torch state dict checkpoint_state_dict = torch.load(ckpt, map_location="cpu") decoder_state_dict = {} # Remove the "module." prefix, if necessary caption_decoder_key = "module." for key in checkpoint_state_dict: if key.startswith(caption_decoder_key): decoder_state_dict[key.replace(caption_decoder_key, "")] = checkpoint_state_dict.get(key) else: decoder_state_dict[key] = checkpoint_state_dict.get(key) new_state_dict = {} # Encoder and Decoder new_state_dict["encode_prefix.weight"] = decoder_state_dict["encode_prefix.weight"] new_state_dict["encode_prefix.bias"] = decoder_state_dict["encode_prefix.bias"] new_state_dict["decode_prefix.weight"] = decoder_state_dict["decode_prefix.weight"] new_state_dict["decode_prefix.bias"] = decoder_state_dict["decode_prefix.bias"] # Internal GPT2LMHeadModel transformer model for key, val in decoder_state_dict.items(): if key.startswith("gpt"): suffix = key[len("gpt") :] new_state_dict["transformer" + suffix] = val missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_state_dict) for missing_key in missing_keys: print(f"Missing key: {missing_key}") for unexpected_key in unexpected_keys: print(f"Unexpected key: {unexpected_key}") return diffusers_model if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--caption_decoder_checkpoint_path", default=None, type=str, required=False, help="Path to caption decoder checkpoint to convert.", ) parser.add_argument( "--uvit_checkpoint_path", default=None, type=str, required=False, help="Path to U-ViT checkpoint to convert." 
    )
    parser.add_argument(
        "--vae_checkpoint_path",
        default=None,
        type=str,
        required=False,
        help="Path to VAE checkpoint to convert.",
    )
    parser.add_argument(
        "--pipeline_output_path",
        default=None,
        type=str,
        required=True,
        help="Path to save the output pipeline to.",
    )
    parser.add_argument(
        "--config_type",
        default="test",
        type=str,
        help=(
            "Config type to use. Should be 'test' to create small models for testing or 'big' to convert a full"
            " checkpoint."
        ),
    )
    parser.add_argument(
        "--version",
        default=0,
        type=int,
        help="The UniDiffuser model type to convert to. Should be 0 for UniDiffuser-v0 and 1 for UniDiffuser-v1.",
    )
    parser.add_argument(
        "--safe_serialization",
        action="store_true",
        help="Whether to use safetensors/safe serialization when saving the pipeline.",
    )
    args = parser.parse_args()

    # Convert the VAE model.
    if args.vae_checkpoint_path is not None:
        vae_config = create_vae_diffusers_config(args.config_type)
        vae = AutoencoderKL(**vae_config)
        vae = convert_vae_to_diffusers(args.vae_checkpoint_path, vae)

    # Convert the U-ViT ("unet") model.
    if args.uvit_checkpoint_path is not None:
        unet_config = create_unidiffuser_unet_config(args.config_type, args.version)
        unet = UniDiffuserModel(**unet_config)
        unet = convert_uvit_to_diffusers(args.uvit_checkpoint_path, unet)

    # Convert the caption decoder ("text_decoder") model.
    if args.caption_decoder_checkpoint_path is not None:
        text_decoder_config = create_text_decoder_config(args.config_type)
        text_decoder = UniDiffuserTextDecoder(**text_decoder_config)
        text_decoder = convert_caption_decoder_to_diffusers(args.caption_decoder_checkpoint_path, text_decoder)

    # Scheduler is the same for both the test and big models.
    scheduler_config = SCHEDULER_CONFIG
    scheduler = DPMSolverMultistepScheduler(
        beta_start=scheduler_config.beta_start,
        beta_end=scheduler_config.beta_end,
        beta_schedule=scheduler_config.beta_schedule,
        solver_order=scheduler_config.solver_order,
    )

    if args.config_type == "test":
        # Make a small random CLIPTextModel
        torch.manual_seed(0)
        clip_text_encoder_config = CLIPTextConfig(
            bos_token_id=0,
            eos_token_id=2,
            hidden_size=32,
            intermediate_size=37,
            layer_norm_eps=1e-05,
            num_attention_heads=4,
            num_hidden_layers=5,
            pad_token_id=1,
            vocab_size=1000,
        )
        text_encoder = CLIPTextModel(clip_text_encoder_config)
        clip_tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")

        # Make a small random CLIPVisionModel and accompanying CLIPImageProcessor
        torch.manual_seed(0)
        clip_image_encoder_config = CLIPVisionConfig(
            image_size=32,
            patch_size=2,
            num_channels=3,
            hidden_size=32,
            projection_dim=32,
            num_hidden_layers=5,
            num_attention_heads=4,
            intermediate_size=37,
            dropout=0.1,
            attention_dropout=0.1,
            initializer_range=0.02,
        )
        image_encoder = CLIPVisionModelWithProjection(clip_image_encoder_config)
        image_processor = CLIPImageProcessor(crop_size=32, size=32)

        # Note that the text_decoder should already have its token embeddings resized.
text_tokenizer = GPT2Tokenizer.from_pretrained("hf-internal-testing/tiny-random-GPT2Model") eos = "<|EOS|>" special_tokens_dict = {"eos_token": eos} text_tokenizer.add_special_tokens(special_tokens_dict) elif args.config_type == "big": text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14") clip_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") image_encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32") image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32") # Note that the text_decoder should already have its token embeddings resized. text_tokenizer = GPT2Tokenizer.from_pretrained("gpt2") eos = "<|EOS|>" special_tokens_dict = {"eos_token": eos} text_tokenizer.add_special_tokens(special_tokens_dict) else: raise NotImplementedError( f"Config type {args.config_type} is not implemented, currently only config types" " 'test' and 'big' are available." ) pipeline = UniDiffuserPipeline( vae=vae, text_encoder=text_encoder, image_encoder=image_encoder, clip_image_processor=image_processor, clip_tokenizer=clip_tokenizer, text_decoder=text_decoder, text_tokenizer=text_tokenizer, unet=unet, scheduler=scheduler, ) pipeline.save_pretrained(args.pipeline_output_path, safe_serialization=args.safe_serialization)
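
# A usage sketch (editorial note): converting the full UniDiffuser-v1 release. The local filenames follow
# the naming used in the docstrings above (autoencoder_kl.pth, uvit_v1.pth, caption_decoder.pth) and are
# assumed to have been downloaded beforehand; the output directory is hypothetical.
#
#   python convert_unidiffuser_to_diffusers.py \
#       --config_type big \
#       --version 1 \
#       --vae_checkpoint_path autoencoder_kl.pth \
#       --uvit_checkpoint_path uvit_v1.pth \
#       --caption_decoder_checkpoint_path caption_decoder.pth \
#       --pipeline_output_path ./unidiffuser-v1-diffusers \
#       --safe_serialization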
diffusers/scripts/convert_unidiffuser_to_diffusers.py/0
{ "file_path": "diffusers/scripts/convert_unidiffuser_to_diffusers.py", "repo_id": "diffusers", "token_count": 13871 }
# coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ConfigMixin base class and utilities.""" import dataclasses import functools import importlib import inspect import json import os import re from collections import OrderedDict from pathlib import Path from typing import Any, Dict, Optional, Tuple, Union import numpy as np from huggingface_hub import DDUFEntry, create_repo, hf_hub_download from huggingface_hub.utils import ( EntryNotFoundError, RepositoryNotFoundError, RevisionNotFoundError, validate_hf_hub_args, ) from requests import HTTPError from . import __version__ from .utils import ( HUGGINGFACE_CO_RESOLVE_ENDPOINT, DummyObject, deprecate, extract_commit_hash, http_user_agent, logging, ) logger = logging.get_logger(__name__) _re_configuration_file = re.compile(r"config\.(.*)\.json") class FrozenDict(OrderedDict): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) for key, value in self.items(): setattr(self, key, value) self.__frozen = True def __delitem__(self, *args, **kwargs): raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.") def setdefault(self, *args, **kwargs): raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.") def pop(self, *args, **kwargs): raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.") def update(self, *args, **kwargs): raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.") def __setattr__(self, name, value): if hasattr(self, "__frozen") and self.__frozen: raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.") super().__setattr__(name, value) def __setitem__(self, name, value): if hasattr(self, "__frozen") and self.__frozen: raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.") super().__setitem__(name, value) class ConfigMixin: r""" Base class for all configuration classes. All configuration parameters are stored under `self.config`. Also provides the [`~ConfigMixin.from_config`] and [`~ConfigMixin.save_config`] methods for loading, downloading, and saving classes that inherit from [`ConfigMixin`]. Class attributes: - **config_name** (`str`) -- A filename under which the config should stored when calling [`~ConfigMixin.save_config`] (should be overridden by parent class). - **ignore_for_config** (`List[str]`) -- A list of attributes that should not be saved in the config (should be overridden by subclass). - **has_compatibles** (`bool`) -- Whether the class has compatible classes (should be overridden by subclass). - **_deprecated_kwargs** (`List[str]`) -- Keyword arguments that are deprecated. Note that the `init` function should only have a `kwargs` argument if at least one argument is deprecated (should be overridden by subclass). 
""" config_name = None ignore_for_config = [] has_compatibles = False _deprecated_kwargs = [] def register_to_config(self, **kwargs): if self.config_name is None: raise NotImplementedError(f"Make sure that {self.__class__} has defined a class name `config_name`") # Special case for `kwargs` used in deprecation warning added to schedulers # TODO: remove this when we remove the deprecation warning, and the `kwargs` argument, # or solve in a more general way. kwargs.pop("kwargs", None) if not hasattr(self, "_internal_dict"): internal_dict = kwargs else: previous_dict = dict(self._internal_dict) internal_dict = {**self._internal_dict, **kwargs} logger.debug(f"Updating config from {previous_dict} to {internal_dict}") self._internal_dict = FrozenDict(internal_dict) def __getattr__(self, name: str) -> Any: """The only reason we overwrite `getattr` here is to gracefully deprecate accessing config attributes directly. See https://github.com/huggingface/diffusers/pull/3129 This function is mostly copied from PyTorch's __getattr__ overwrite: https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module """ is_in_config = "_internal_dict" in self.__dict__ and hasattr(self.__dict__["_internal_dict"], name) is_attribute = name in self.__dict__ if is_in_config and not is_attribute: deprecation_message = f"Accessing config attribute `{name}` directly via '{type(self).__name__}' object attribute is deprecated. Please access '{name}' over '{type(self).__name__}'s config object instead, e.g. 'scheduler.config.{name}'." deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False) return self._internal_dict[name] raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") def save_config(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs): """ Save a configuration object to the directory specified in `save_directory` so that it can be reloaded using the [`~ConfigMixin.from_config`] class method. Args: save_directory (`str` or `os.PathLike`): Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (`bool`, *optional*, defaults to `False`): Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace). kwargs (`Dict[str, Any]`, *optional*): Additional keyword arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method. 
""" if os.path.isfile(save_directory): raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file") os.makedirs(save_directory, exist_ok=True) # If we save using the predefined names, we can load using `from_config` output_config_file = os.path.join(save_directory, self.config_name) self.to_json_file(output_config_file) logger.info(f"Configuration saved in {output_config_file}") if push_to_hub: commit_message = kwargs.pop("commit_message", None) private = kwargs.pop("private", None) create_pr = kwargs.pop("create_pr", False) token = kwargs.pop("token", None) repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1]) repo_id = create_repo(repo_id, exist_ok=True, private=private, token=token).repo_id self._upload_folder( save_directory, repo_id, token=token, commit_message=commit_message, create_pr=create_pr, ) @classmethod def from_config(cls, config: Union[FrozenDict, Dict[str, Any]] = None, return_unused_kwargs=False, **kwargs): r""" Instantiate a Python class from a config dictionary. Parameters: config (`Dict[str, Any]`): A config dictionary from which the Python class is instantiated. Make sure to only load configuration files of compatible classes. return_unused_kwargs (`bool`, *optional*, defaults to `False`): Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to update the configuration object (after it is loaded) and initiate the Python class. `**kwargs` are passed directly to the underlying scheduler/model's `__init__` method and eventually overwrite the same named arguments in `config`. Returns: [`ModelMixin`] or [`SchedulerMixin`]: A model or scheduler object instantiated from a config dictionary. Examples: ```python >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler >>> # Download scheduler from huggingface.co and cache. >>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") >>> # Instantiate DDIM scheduler class with same config as DDPM >>> scheduler = DDIMScheduler.from_config(scheduler.config) >>> # Instantiate PNDM scheduler class with same config as DDPM >>> scheduler = PNDMScheduler.from_config(scheduler.config) ``` """ # <===== TO BE REMOVED WITH DEPRECATION # TODO(Patrick) - make sure to remove the following lines when config=="model_path" is deprecated if "pretrained_model_name_or_path" in kwargs: config = kwargs.pop("pretrained_model_name_or_path") if config is None: raise ValueError("Please make sure to provide a config as the first positional argument.") # ======> if not isinstance(config, dict): deprecation_message = "It is deprecated to pass a pretrained model name or path to `from_config`." if "Scheduler" in cls.__name__: deprecation_message += ( f"If you were trying to load a scheduler, please use {cls}.from_pretrained(...) instead." " Otherwise, please make sure to pass a configuration dictionary instead. This functionality will" " be removed in v1.0.0." ) elif "Model" in cls.__name__: deprecation_message += ( f"If you were trying to load a model, please use {cls}.load_config(...) followed by" f" {cls}.from_config(...) instead. Otherwise, please make sure to pass a configuration dictionary" " instead. This functionality will be removed in v1.0.0." 
) deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False) config, kwargs = cls.load_config(pretrained_model_name_or_path=config, return_unused_kwargs=True, **kwargs) init_dict, unused_kwargs, hidden_dict = cls.extract_init_dict(config, **kwargs) # Allow dtype to be specified on initialization if "dtype" in unused_kwargs: init_dict["dtype"] = unused_kwargs.pop("dtype") # add possible deprecated kwargs for deprecated_kwarg in cls._deprecated_kwargs: if deprecated_kwarg in unused_kwargs: init_dict[deprecated_kwarg] = unused_kwargs.pop(deprecated_kwarg) # Return model and optionally state and/or unused_kwargs model = cls(**init_dict) # make sure to also save config parameters that might be used for compatible classes # update _class_name if "_class_name" in hidden_dict: hidden_dict["_class_name"] = cls.__name__ model.register_to_config(**hidden_dict) # add hidden kwargs of compatible classes to unused_kwargs unused_kwargs = {**unused_kwargs, **hidden_dict} if return_unused_kwargs: return (model, unused_kwargs) else: return model @classmethod def get_config_dict(cls, *args, **kwargs): deprecation_message = ( f" The function get_config_dict is deprecated. Please use {cls}.load_config instead. This function will be" " removed in version v1.0.0" ) deprecate("get_config_dict", "1.0.0", deprecation_message, standard_warn=False) return cls.load_config(*args, **kwargs) @classmethod @validate_hf_hub_args def load_config( cls, pretrained_model_name_or_path: Union[str, os.PathLike], return_unused_kwargs=False, return_commit_hash=False, **kwargs, ) -> Tuple[Dict[str, Any], Dict[str, Any]]: r""" Load a model or scheduler configuration. Parameters: pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing model weights saved with [`~ConfigMixin.save_config`]. cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. output_loading_info(`bool`, *optional*, defaults to `False`): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. 
return_unused_kwargs (`bool`, *optional*, defaults to `False): Whether unused keyword arguments of the config are returned. return_commit_hash (`bool`, *optional*, defaults to `False): Whether the `commit_hash` of the loaded configuration are returned. Returns: `dict`: A dictionary of all the parameters stored in a JSON configuration file. """ cache_dir = kwargs.pop("cache_dir", None) local_dir = kwargs.pop("local_dir", None) local_dir_use_symlinks = kwargs.pop("local_dir_use_symlinks", "auto") force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) token = kwargs.pop("token", None) local_files_only = kwargs.pop("local_files_only", False) revision = kwargs.pop("revision", None) _ = kwargs.pop("mirror", None) subfolder = kwargs.pop("subfolder", None) user_agent = kwargs.pop("user_agent", {}) dduf_entries: Optional[Dict[str, DDUFEntry]] = kwargs.pop("dduf_entries", None) user_agent = {**user_agent, "file_type": "config"} user_agent = http_user_agent(user_agent) pretrained_model_name_or_path = str(pretrained_model_name_or_path) if cls.config_name is None: raise ValueError( "`self.config_name` is not defined. Note that one should not load a config from " "`ConfigMixin`. Please make sure to define `config_name` in a class inheriting from `ConfigMixin`" ) # Custom path for now if dduf_entries: if subfolder is not None: raise ValueError( "DDUF file only allow for 1 level of directory (e.g transformer/model1/model.safetentors is not allowed). " "Please check the DDUF structure" ) config_file = cls._get_config_file_from_dduf(pretrained_model_name_or_path, dduf_entries) elif os.path.isfile(pretrained_model_name_or_path): config_file = pretrained_model_name_or_path elif os.path.isdir(pretrained_model_name_or_path): if subfolder is not None and os.path.isfile( os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name) ): config_file = os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name) elif os.path.isfile(os.path.join(pretrained_model_name_or_path, cls.config_name)): # Load from a PyTorch checkpoint config_file = os.path.join(pretrained_model_name_or_path, cls.config_name) else: raise EnvironmentError( f"Error no file named {cls.config_name} found in directory {pretrained_model_name_or_path}." ) else: try: # Load from URL or cache if already cached config_file = hf_hub_download( pretrained_model_name_or_path, filename=cls.config_name, cache_dir=cache_dir, force_download=force_download, proxies=proxies, local_files_only=local_files_only, token=token, user_agent=user_agent, subfolder=subfolder, revision=revision, local_dir=local_dir, local_dir_use_symlinks=local_dir_use_symlinks, ) except RepositoryNotFoundError: raise EnvironmentError( f"{pretrained_model_name_or_path} is not a local folder and is not a valid model identifier" " listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a" " token having permission to this repo with `token` or log in with `huggingface-cli login`." ) except RevisionNotFoundError: raise EnvironmentError( f"{revision} is not a valid git identifier (branch name, tag name or commit id) that exists for" " this model name. Check the model page at" f" 'https://huggingface.co/{pretrained_model_name_or_path}' for available revisions." ) except EntryNotFoundError: raise EnvironmentError( f"{pretrained_model_name_or_path} does not appear to have a file named {cls.config_name}." 
) except HTTPError as err: raise EnvironmentError( "There was a specific connection error when trying to load" f" {pretrained_model_name_or_path}:\n{err}" ) except ValueError: raise EnvironmentError( f"We couldn't connect to '{HUGGINGFACE_CO_RESOLVE_ENDPOINT}' to load this model, couldn't find it" f" in the cached files and it looks like {pretrained_model_name_or_path} is not the path to a" f" directory containing a {cls.config_name} file.\nCheckout your internet connection or see how to" " run the library in offline mode at" " 'https://huggingface.co/docs/diffusers/installation#offline-mode'." ) except EnvironmentError: raise EnvironmentError( f"Can't load config for '{pretrained_model_name_or_path}'. If you were trying to load it from " "'https://huggingface.co/models', make sure you don't have a local directory with the same name. " f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory " f"containing a {cls.config_name} file" ) try: config_dict = cls._dict_from_json_file(config_file, dduf_entries=dduf_entries) commit_hash = extract_commit_hash(config_file) except (json.JSONDecodeError, UnicodeDecodeError): raise EnvironmentError(f"It looks like the config file at '{config_file}' is not a valid JSON file.") if not (return_unused_kwargs or return_commit_hash): return config_dict outputs = (config_dict,) if return_unused_kwargs: outputs += (kwargs,) if return_commit_hash: outputs += (commit_hash,) return outputs @staticmethod def _get_init_keys(input_class): return set(dict(inspect.signature(input_class.__init__).parameters).keys()) @classmethod def extract_init_dict(cls, config_dict, **kwargs): # Skip keys that were not present in the original config, so default __init__ values were used used_defaults = config_dict.get("_use_default_values", []) config_dict = {k: v for k, v in config_dict.items() if k not in used_defaults and k != "_use_default_values"} # 0. Copy origin config dict original_dict = dict(config_dict.items()) # 1. Retrieve expected config attributes from __init__ signature expected_keys = cls._get_init_keys(cls) expected_keys.remove("self") # remove general kwargs if present in dict if "kwargs" in expected_keys: expected_keys.remove("kwargs") # remove flax internal keys if hasattr(cls, "_flax_internal_args"): for arg in cls._flax_internal_args: expected_keys.remove(arg) # 2. 
Remove attributes that cannot be expected from expected config attributes # remove keys to be ignored if len(cls.ignore_for_config) > 0: expected_keys = expected_keys - set(cls.ignore_for_config) # load diffusers library to import compatible and original scheduler diffusers_library = importlib.import_module(__name__.split(".")[0]) if cls.has_compatibles: compatible_classes = [c for c in cls._get_compatibles() if not isinstance(c, DummyObject)] else: compatible_classes = [] expected_keys_comp_cls = set() for c in compatible_classes: expected_keys_c = cls._get_init_keys(c) expected_keys_comp_cls = expected_keys_comp_cls.union(expected_keys_c) expected_keys_comp_cls = expected_keys_comp_cls - cls._get_init_keys(cls) config_dict = {k: v for k, v in config_dict.items() if k not in expected_keys_comp_cls} # remove attributes from orig class that cannot be expected orig_cls_name = config_dict.pop("_class_name", cls.__name__) if ( isinstance(orig_cls_name, str) and orig_cls_name != cls.__name__ and hasattr(diffusers_library, orig_cls_name) ): orig_cls = getattr(diffusers_library, orig_cls_name) unexpected_keys_from_orig = cls._get_init_keys(orig_cls) - expected_keys config_dict = {k: v for k, v in config_dict.items() if k not in unexpected_keys_from_orig} elif not isinstance(orig_cls_name, str) and not isinstance(orig_cls_name, (list, tuple)): raise ValueError( "Make sure that the `_class_name` is of type string or list of string (for custom pipelines)." ) # remove private attributes config_dict = {k: v for k, v in config_dict.items() if not k.startswith("_")} # remove quantization_config config_dict = {k: v for k, v in config_dict.items() if k != "quantization_config"} # 3. Create keyword arguments that will be passed to __init__ from expected keyword arguments init_dict = {} for key in expected_keys: # if config param is passed to kwarg and is present in config dict # it should overwrite existing config dict key if key in kwargs and key in config_dict: config_dict[key] = kwargs.pop(key) if key in kwargs: # overwrite key init_dict[key] = kwargs.pop(key) elif key in config_dict: # use value from config dict init_dict[key] = config_dict.pop(key) # 4. Give nice warning if unexpected values have been passed if len(config_dict) > 0: logger.warning( f"The config attributes {config_dict} were passed to {cls.__name__}, " "but are not expected and will be ignored. Please verify your " f"{cls.config_name} configuration file." ) # 5. Give nice info if config attributes are initialized to default because they have not been passed passed_keys = set(init_dict.keys()) if len(expected_keys - passed_keys) > 0: logger.info( f"{expected_keys - passed_keys} was not found in config. Values will be initialized to default values." ) # 6. Define unused keyword arguments unused_kwargs = {**config_dict, **kwargs} # 7. 
Define "hidden" config parameters that were saved for compatible classes hidden_config_dict = {k: v for k, v in original_dict.items() if k not in init_dict} return init_dict, unused_kwargs, hidden_config_dict @classmethod def _dict_from_json_file( cls, json_file: Union[str, os.PathLike], dduf_entries: Optional[Dict[str, DDUFEntry]] = None ): if dduf_entries: text = dduf_entries[json_file].read_text() else: with open(json_file, "r", encoding="utf-8") as reader: text = reader.read() return json.loads(text) def __repr__(self): return f"{self.__class__.__name__} {self.to_json_string()}" @property def config(self) -> Dict[str, Any]: """ Returns the config of the class as a frozen dictionary Returns: `Dict[str, Any]`: Config of the class. """ return self._internal_dict def to_json_string(self) -> str: """ Serializes the configuration instance to a JSON string. Returns: `str`: String containing all the attributes that make up the configuration instance in JSON format. """ config_dict = self._internal_dict if hasattr(self, "_internal_dict") else {} config_dict["_class_name"] = self.__class__.__name__ config_dict["_diffusers_version"] = __version__ def to_json_saveable(value): if isinstance(value, np.ndarray): value = value.tolist() elif isinstance(value, Path): value = value.as_posix() return value if "quantization_config" in config_dict: config_dict["quantization_config"] = ( config_dict.quantization_config.to_dict() if not isinstance(config_dict.quantization_config, dict) else config_dict.quantization_config ) config_dict = {k: to_json_saveable(v) for k, v in config_dict.items()} # Don't save "_ignore_files" or "_use_default_values" config_dict.pop("_ignore_files", None) config_dict.pop("_use_default_values", None) # pop the `_pre_quantization_dtype` as torch.dtypes are not serializable. _ = config_dict.pop("_pre_quantization_dtype", None) return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" def to_json_file(self, json_file_path: Union[str, os.PathLike]): """ Save the configuration instance's parameters to a JSON file. Args: json_file_path (`str` or `os.PathLike`): Path to the JSON file to save a configuration instance's parameters. """ with open(json_file_path, "w", encoding="utf-8") as writer: writer.write(self.to_json_string()) @classmethod def _get_config_file_from_dduf(cls, pretrained_model_name_or_path: str, dduf_entries: Dict[str, DDUFEntry]): # paths inside a DDUF file must always be "/" config_file = ( cls.config_name if pretrained_model_name_or_path == "" else "/".join([pretrained_model_name_or_path, cls.config_name]) ) if config_file not in dduf_entries: raise ValueError( f"We did not manage to find the file {config_file} in the dduf file. We only have the following files {dduf_entries.keys()}" ) return config_file def register_to_config(init): r""" Decorator to apply on the init of classes inheriting from [`ConfigMixin`] so that all the arguments are automatically sent to `self.register_for_config`. To ignore a specific argument accepted by the init but that shouldn't be registered in the config, use the `ignore_for_config` class variable Warning: Once decorated, all private arguments (beginning with an underscore) are trashed and not sent to the init! """ @functools.wraps(init) def inner_init(self, *args, **kwargs): # Ignore private kwargs in the init. 
init_kwargs = {k: v for k, v in kwargs.items() if not k.startswith("_")} config_init_kwargs = {k: v for k, v in kwargs.items() if k.startswith("_")} if not isinstance(self, ConfigMixin): raise RuntimeError( f"`@register_for_config` was applied to {self.__class__.__name__} init method, but this class does " "not inherit from `ConfigMixin`." ) ignore = getattr(self, "ignore_for_config", []) # Get positional arguments aligned with kwargs new_kwargs = {} signature = inspect.signature(init) parameters = { name: p.default for i, (name, p) in enumerate(signature.parameters.items()) if i > 0 and name not in ignore } for arg, name in zip(args, parameters.keys()): new_kwargs[name] = arg # Then add all kwargs new_kwargs.update( { k: init_kwargs.get(k, default) for k, default in parameters.items() if k not in ignore and k not in new_kwargs } ) # Take note of the parameters that were not present in the loaded config if len(set(new_kwargs.keys()) - set(init_kwargs)) > 0: new_kwargs["_use_default_values"] = list(set(new_kwargs.keys()) - set(init_kwargs)) new_kwargs = {**config_init_kwargs, **new_kwargs} getattr(self, "register_to_config")(**new_kwargs) init(self, *args, **init_kwargs) return inner_init def flax_register_to_config(cls): original_init = cls.__init__ @functools.wraps(original_init) def init(self, *args, **kwargs): if not isinstance(self, ConfigMixin): raise RuntimeError( f"`@register_for_config` was applied to {self.__class__.__name__} init method, but this class does " "not inherit from `ConfigMixin`." ) # Ignore private kwargs in the init. Retrieve all passed attributes init_kwargs = dict(kwargs.items()) # Retrieve default values fields = dataclasses.fields(self) default_kwargs = {} for field in fields: # ignore flax specific attributes if field.name in self._flax_internal_args: continue if type(field.default) == dataclasses._MISSING_TYPE: default_kwargs[field.name] = None else: default_kwargs[field.name] = getattr(self, field.name) # Make sure init_kwargs override default kwargs new_kwargs = {**default_kwargs, **init_kwargs} # dtype should be part of `init_kwargs`, but not `new_kwargs` if "dtype" in new_kwargs: new_kwargs.pop("dtype") # Get positional arguments aligned with kwargs for i, arg in enumerate(args): name = fields[i].name new_kwargs[name] = arg # Take note of the parameters that were not present in the loaded config if len(set(new_kwargs.keys()) - set(init_kwargs)) > 0: new_kwargs["_use_default_values"] = list(set(new_kwargs.keys()) - set(init_kwargs)) getattr(self, "register_to_config")(**new_kwargs) original_init(self, *args, **kwargs) cls.__init__ = init return cls class LegacyConfigMixin(ConfigMixin): r""" A subclass of `ConfigMixin` to resolve class mapping from legacy classes (like `Transformer2DModel`) to more pipeline-specific classes (like `DiTTransformer2DModel`). """ @classmethod def from_config(cls, config: Union[FrozenDict, Dict[str, Any]] = None, return_unused_kwargs=False, **kwargs): # To prevent dependency import problem. from .models.model_loading_utils import _fetch_remapped_cls_from_config # resolve remapping remapped_class = _fetch_remapped_cls_from_config(config, cls) return remapped_class.from_config(config, return_unused_kwargs, **kwargs)
diffusers/src/diffusers/configuration_utils.py/0
{ "file_path": "diffusers/src/diffusers/configuration_utils.py", "repo_id": "diffusers", "token_count": 14673 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os from typing import Callable, Dict, List, Optional, Union import torch from huggingface_hub.utils import validate_hf_hub_args from ..utils import ( USE_PEFT_BACKEND, deprecate, get_submodule_by_name, is_peft_available, is_peft_version, is_torch_version, is_transformers_available, is_transformers_version, logging, ) from .lora_base import ( # noqa LORA_WEIGHT_NAME, LORA_WEIGHT_NAME_SAFE, LoraBaseMixin, _fetch_state_dict, _load_lora_into_text_encoder, ) from .lora_conversion_utils import ( _convert_bfl_flux_control_lora_to_diffusers, _convert_hunyuan_video_lora_to_diffusers, _convert_kohya_flux_lora_to_diffusers, _convert_non_diffusers_lora_to_diffusers, _convert_xlabs_flux_lora_to_diffusers, _maybe_map_sgm_blocks_to_diffusers, ) _LOW_CPU_MEM_USAGE_DEFAULT_LORA = False if is_torch_version(">=", "1.9.0"): if ( is_peft_available() and is_peft_version(">=", "0.13.1") and is_transformers_available() and is_transformers_version(">", "4.45.2") ): _LOW_CPU_MEM_USAGE_DEFAULT_LORA = True logger = logging.get_logger(__name__) TEXT_ENCODER_NAME = "text_encoder" UNET_NAME = "unet" TRANSFORMER_NAME = "transformer" _MODULE_NAME_TO_ATTRIBUTE_MAP_FLUX = {"x_embedder": "in_channels"} class StableDiffusionLoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into Stable Diffusion [`UNet2DConditionModel`] and [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel). """ _lora_loadable_modules = ["unet", "text_encoder"] unet_name = UNET_NAME text_encoder_name = TEXT_ENCODER_NAME def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet`] for more details on how the state dict is loaded into `self.unet`. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder`] for more details on how the state dict is loaded into `self.text_encoder`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. 
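        Example (an illustrative sketch; the base checkpoint and LoRA repository mirror the placeholders used in the
        `fuse_lora` example below and should be replaced with the ones you actually want to load):

        ```py
        from diffusers import DiffusionPipeline
        import torch

        pipeline = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")
        pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
        ```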
""" if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. state_dict, network_alphas = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") self.load_lora_into_unet( state_dict, network_alphas=network_alphas, unet=getattr(self, self.unet_name) if not hasattr(self, "unet") else self.unet, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) self.load_lora_into_text_encoder( state_dict, network_alphas=network_alphas, text_encoder=getattr(self, self.text_encoder_name) if not hasattr(self, "text_encoder") else self.text_encoder, lora_scale=self.lora_scale, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod @validate_hf_hub_args def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. 
weight_name (`str`, *optional*, defaults to None): Name of the serialized state dict file. """ # Load the main state dict first which has the LoRA layers for either of # UNet and text encoder or both. cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) unet_config = kwargs.pop("unet_config", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} network_alphas = None # TODO: replace it with a method from `state_dict_utils` if all( ( k.startswith("lora_te_") or k.startswith("lora_unet_") or k.startswith("lora_te1_") or k.startswith("lora_te2_") ) for k in state_dict.keys() ): # Map SDXL blocks correctly. if unet_config is not None: # use unet config to remap block numbers state_dict = _maybe_map_sgm_blocks_to_diffusers(state_dict, unet_config) state_dict, network_alphas = _convert_non_diffusers_lora_to_diffusers(state_dict) return state_dict, network_alphas @classmethod def load_lora_into_unet( cls, state_dict, network_alphas, unet, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `unet`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). unet (`UNet2DConditionModel`): The UNet model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading only loading the pretrained LoRA weights and not initializing the random weights. 
""" if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # If the serialization format is new (introduced in https://github.com/huggingface/diffusers/pull/2918), # then the `state_dict` keys should have `cls.unet_name` and/or `cls.text_encoder_name` as # their prefixes. keys = list(state_dict.keys()) only_text_encoder = all(key.startswith(cls.text_encoder_name) for key in keys) if not only_text_encoder: # Load the layers corresponding to UNet. logger.info(f"Loading {cls.unet_name}.") unet.load_lora_adapter( state_dict, prefix=cls.unet_name, network_alphas=network_alphas, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def load_lora_into_text_encoder( cls, state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False, ): """ This will load the LoRA layers specified in `state_dict` into `text_encoder` Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The key should be prefixed with an additional `text_encoder` to distinguish between unet lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). text_encoder (`CLIPTextModel`): The text encoder model to load the LoRA layers into. prefix (`str`): Expected prefix of the `text_encoder` in the `state_dict`. lora_scale (`float`): How much to scale the output of the lora linear layer before it is added with the output of the regular lora layer. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ _load_lora_into_text_encoder( state_dict=state_dict, network_alphas=network_alphas, lora_scale=lora_scale, text_encoder=text_encoder, prefix=prefix, text_encoder_name=cls.text_encoder_name, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def save_lora_weights( cls, save_directory: Union[str, os.PathLike], unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_lora_layers: Dict[str, torch.nn.Module] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. unet_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `unet`. text_encoder_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers. 
is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. """ state_dict = {} if not (unet_lora_layers or text_encoder_lora_layers): raise ValueError("You must pass at least one of `unet_lora_layers` and `text_encoder_lora_layers`.") if unet_lora_layers: state_dict.update(cls.pack_weights(unet_lora_layers, cls.unet_name)) if text_encoder_lora_layers: state_dict.update(cls.pack_weights(text_encoder_lora_layers, cls.text_encoder_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["unet", "text_encoder"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["unet", "text_encoder"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. </Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. unfuse_unet (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (`bool`, defaults to `True`): Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the LoRA parameters then it won't have any effect. 
""" super().unfuse_lora(components=components) class StableDiffusionXLLoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into Stable Diffusion XL [`UNet2DConditionModel`], [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and [`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection). """ _lora_loadable_modules = ["unet", "text_encoder", "text_encoder_2"] unet_name = UNET_NAME text_encoder_name = TEXT_ENCODER_NAME def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name: Optional[str] = None, **kwargs, ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet`] for more details on how the state dict is loaded into `self.unet`. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder`] for more details on how the state dict is loaded into `self.text_encoder`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # We could have accessed the unet config from `lora_state_dict()` too. We pass # it here explicitly to be able to tell that it's coming from an SDXL # pipeline. # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. state_dict, network_alphas = self.lora_state_dict( pretrained_model_name_or_path_or_dict, unet_config=self.unet.config, **kwargs, ) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") self.load_lora_into_unet( state_dict, network_alphas=network_alphas, unet=self.unet, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." 
in k} if len(text_encoder_state_dict) > 0: self.load_lora_into_text_encoder( text_encoder_state_dict, network_alphas=network_alphas, text_encoder=self.text_encoder, prefix="text_encoder", lora_scale=self.lora_scale, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k} if len(text_encoder_2_state_dict) > 0: self.load_lora_into_text_encoder( text_encoder_2_state_dict, network_alphas=network_alphas, text_encoder=self.text_encoder_2, prefix="text_encoder_2", lora_scale=self.lora_scale, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod @validate_hf_hub_args # Copied from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.lora_state_dict def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. weight_name (`str`, *optional*, defaults to None): Name of the serialized state dict file. """ # Load the main state dict first which has the LoRA layers for either of # UNet and text encoder or both. 
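        # Pop the Hub download/loading options from `kwargs` so they can be forwarded to `_fetch_state_dict` below.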
cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) unet_config = kwargs.pop("unet_config", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} network_alphas = None # TODO: replace it with a method from `state_dict_utils` if all( ( k.startswith("lora_te_") or k.startswith("lora_unet_") or k.startswith("lora_te1_") or k.startswith("lora_te2_") ) for k in state_dict.keys() ): # Map SDXL blocks correctly. if unet_config is not None: # use unet config to remap block numbers state_dict = _maybe_map_sgm_blocks_to_diffusers(state_dict, unet_config) state_dict, network_alphas = _convert_non_diffusers_lora_to_diffusers(state_dict) return state_dict, network_alphas @classmethod # Copied from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.load_lora_into_unet def load_lora_into_unet( cls, state_dict, network_alphas, unet, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `unet`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). unet (`UNet2DConditionModel`): The UNet model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading only loading the pretrained LoRA weights and not initializing the random weights. 
""" if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # If the serialization format is new (introduced in https://github.com/huggingface/diffusers/pull/2918), # then the `state_dict` keys should have `cls.unet_name` and/or `cls.text_encoder_name` as # their prefixes. keys = list(state_dict.keys()) only_text_encoder = all(key.startswith(cls.text_encoder_name) for key in keys) if not only_text_encoder: # Load the layers corresponding to UNet. logger.info(f"Loading {cls.unet_name}.") unet.load_lora_adapter( state_dict, prefix=cls.unet_name, network_alphas=network_alphas, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder def load_lora_into_text_encoder( cls, state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False, ): """ This will load the LoRA layers specified in `state_dict` into `text_encoder` Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The key should be prefixed with an additional `text_encoder` to distinguish between unet lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). text_encoder (`CLIPTextModel`): The text encoder model to load the LoRA layers into. prefix (`str`): Expected prefix of the `text_encoder` in the `state_dict`. lora_scale (`float`): How much to scale the output of the lora linear layer before it is added with the output of the regular lora layer. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ _load_lora_into_text_encoder( state_dict=state_dict, network_alphas=network_alphas, lora_scale=lora_scale, text_encoder=text_encoder, prefix=prefix, text_encoder_name=cls.text_encoder_name, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def save_lora_weights( cls, save_directory: Union[str, os.PathLike], unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. unet_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `unet`. 
text_encoder_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers. text_encoder_2_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `text_encoder_2`. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. """ state_dict = {} if not (unet_lora_layers or text_encoder_lora_layers or text_encoder_2_lora_layers): raise ValueError( "You must pass at least one of `unet_lora_layers`, `text_encoder_lora_layers` or `text_encoder_2_lora_layers`." ) if unet_lora_layers: state_dict.update(cls.pack_weights(unet_lora_layers, "unet")) if text_encoder_lora_layers: state_dict.update(cls.pack_weights(text_encoder_lora_layers, "text_encoder")) if text_encoder_2_lora_layers: state_dict.update(cls.pack_weights(text_encoder_2_lora_layers, "text_encoder_2")) cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["unet", "text_encoder", "text_encoder_2"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["unet", "text_encoder", "text_encoder_2"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. 
</Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. unfuse_unet (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (`bool`, defaults to `True`): Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the LoRA parameters then it won't have any effect. """ super().unfuse_lora(components=components) class SD3LoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into [`SD3Transformer2DModel`], [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and [`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection). Specific to [`StableDiffusion3Pipeline`]. """ _lora_loadable_modules = ["transformer", "text_encoder", "text_encoder_2"] transformer_name = TRANSFORMER_NAME text_encoder_name = TEXT_ENCODER_NAME @classmethod @validate_hf_hub_args def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. """ # Load the main state dict first which has the LoRA layers for either of # transformer and text encoder or both. 
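        # Pop the Hub download/loading options from `kwargs` so they can be forwarded to `_fetch_state_dict` below.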
cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} return state_dict def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_transformer`] for more details on how the state dict is loaded into `self.transformer`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. 
state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") transformer_state_dict = {k: v for k, v in state_dict.items() if "transformer." in k} if len(transformer_state_dict) > 0: self.load_lora_into_transformer( state_dict, transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k} if len(text_encoder_state_dict) > 0: self.load_lora_into_text_encoder( text_encoder_state_dict, network_alphas=None, text_encoder=self.text_encoder, prefix="text_encoder", lora_scale=self.lora_scale, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k} if len(text_encoder_2_state_dict) > 0: self.load_lora_into_text_encoder( text_encoder_2_state_dict, network_alphas=None, text_encoder=self.text_encoder_2, prefix="text_encoder_2", lora_scale=self.lora_scale, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def load_lora_into_transformer( cls, state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. transformer (`SD3Transformer2DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=None, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder def load_lora_into_text_encoder( cls, state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False, ): """ This will load the LoRA layers specified in `state_dict` into `text_encoder` Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The key should be prefixed with an additional `text_encoder` to distinguish between unet lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. 
Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). text_encoder (`CLIPTextModel`): The text encoder model to load the LoRA layers into. prefix (`str`): Expected prefix of the `text_encoder` in the `state_dict`. lora_scale (`float`): How much to scale the output of the lora linear layer before it is added with the output of the regular lora layer. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ _load_lora_into_text_encoder( state_dict=state_dict, network_alphas=network_alphas, lora_scale=lora_scale, text_encoder=text_encoder, prefix=prefix, text_encoder_name=cls.text_encoder_name, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def save_lora_weights( cls, save_directory: Union[str, os.PathLike], transformer_lora_layers: Dict[str, torch.nn.Module] = None, text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. transformer_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `transformer`. text_encoder_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers. text_encoder_2_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `text_encoder_2`. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. """ state_dict = {} if not (transformer_lora_layers or text_encoder_lora_layers or text_encoder_2_lora_layers): raise ValueError( "You must pass at least one of `transformer_lora_layers`, `text_encoder_lora_layers`, `text_encoder_2_lora_layers`." 
            )

        if transformer_lora_layers:
            state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name))

        if text_encoder_lora_layers:
            state_dict.update(cls.pack_weights(text_encoder_lora_layers, "text_encoder"))

        if text_encoder_2_lora_layers:
            state_dict.update(cls.pack_weights(text_encoder_2_lora_layers, "text_encoder_2"))

        # Save the model
        cls.write_lora_layers(
            state_dict=state_dict,
            save_directory=save_directory,
            is_main_process=is_main_process,
            weight_name=weight_name,
            save_function=save_function,
            safe_serialization=safe_serialization,
        )

    def fuse_lora(
        self,
        components: List[str] = ["transformer", "text_encoder", "text_encoder_2"],
        lora_scale: float = 1.0,
        safe_fusing: bool = False,
        adapter_names: Optional[List[str]] = None,
        **kwargs,
    ):
        r"""
        Fuses the LoRA parameters into the original parameters of the corresponding blocks.

        <Tip warning={true}>

        This is an experimental API.

        </Tip>

        Args:
            components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into.
            lora_scale (`float`, defaults to 1.0):
                Controls how much to influence the outputs with the LoRA parameters.
            safe_fusing (`bool`, defaults to `False`):
                Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them.
            adapter_names (`List[str]`, *optional*):
                Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.

        Example:

        ```py
        from diffusers import DiffusionPipeline
        import torch

        pipeline = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")
        pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
        pipeline.fuse_lora(lora_scale=0.7)
        ```
        """
        super().fuse_lora(
            components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names
        )

    def unfuse_lora(self, components: List[str] = ["transformer", "text_encoder", "text_encoder_2"], **kwargs):
        r"""
        Reverses the effect of
        [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora).

        <Tip warning={true}>

        This is an experimental API.

        </Tip>

        Args:
            components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from.
            unfuse_unet (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters.
            unfuse_text_encoder (`bool`, defaults to `True`):
                Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with
                the LoRA parameters then it won't have any effect.
        """
        super().unfuse_lora(components=components)


class FluxLoraLoaderMixin(LoraBaseMixin):
    r"""
    Load LoRA layers into [`FluxTransformer2DModel`],
    [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel).

    Specific to [`FluxPipeline`].
    """

    _lora_loadable_modules = ["transformer", "text_encoder"]
    transformer_name = TRANSFORMER_NAME
    text_encoder_name = TEXT_ENCODER_NAME
    _control_lora_supported_norm_keys = ["norm_q", "norm_k", "norm_added_q", "norm_added_k"]

    @classmethod
    @validate_hf_hub_args
    def lora_state_dict(
        cls,
        pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]],
        return_alphas: bool = False,
        **kwargs,
    ):
        r"""
        Return state dict for lora weights and the network alphas.

        <Tip warning={true}>

        We support loading A1111 formatted LoRA checkpoints in a limited capacity.

        This function is experimental and might change in the future.
</Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. """ # Load the main state dict first which has the LoRA layers for either of # transformer and text encoder or both. cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} # TODO (sayakpaul): to a follow-up to clean and try to unify the conditions. 
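        # Note (descriptive, added for clarity): the checks below dispatch on key patterns to
        # detect the originating trainer format before converting the checkpoint to the diffusers
        # layout:
        #   - kohya-style checkpoints contain keys ending in ".lora_down.weight"
        #   - xlabs checkpoints contain keys with "processor" in them
        #   - BFL control checkpoints contain keys with "query_norm.scale"
        # Checkpoints already in the diffusers layout fall through to the generic `alpha`
        # extraction further below.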
is_kohya = any(".lora_down.weight" in k for k in state_dict) if is_kohya: state_dict = _convert_kohya_flux_lora_to_diffusers(state_dict) # Kohya already takes care of scaling the LoRA parameters with alpha. return (state_dict, None) if return_alphas else state_dict is_xlabs = any("processor" in k for k in state_dict) if is_xlabs: state_dict = _convert_xlabs_flux_lora_to_diffusers(state_dict) # xlabs doesn't use `alpha`. return (state_dict, None) if return_alphas else state_dict is_bfl_control = any("query_norm.scale" in k for k in state_dict) if is_bfl_control: state_dict = _convert_bfl_flux_control_lora_to_diffusers(state_dict) return (state_dict, None) if return_alphas else state_dict # For state dicts like # https://huggingface.co/TheLastBen/Jon_Snow_Flux_LoRA keys = list(state_dict.keys()) network_alphas = {} for k in keys: if "alpha" in k: alpha_value = state_dict.get(k) if (torch.is_tensor(alpha_value) and torch.is_floating_point(alpha_value)) or isinstance( alpha_value, float ): network_alphas[k] = state_dict.pop(k) else: raise ValueError( f"The alpha key ({k}) seems to be incorrect. If you think this error is unexpected, please open as issue." ) if return_alphas: return state_dict, network_alphas else: return state_dict def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_transformer`] for more details on how the state dict is loaded into `self.transformer`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): `Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. 
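        # Illustrative sketch, kept as a comment so it has no effect on this module: how a Flux
        # pipeline typically reaches this code path. The LoRA repo id below is a placeholder.
        #
        #   import torch
        #   from diffusers import FluxPipeline
        #
        #   pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
        #   pipe.load_lora_weights("some-user/flux-lora", adapter_name="my_style")
        #   image = pipe("a cat wearing sunglasses", num_inference_steps=28).images[0]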
state_dict, network_alphas = self.lora_state_dict( pretrained_model_name_or_path_or_dict, return_alphas=True, **kwargs ) has_lora_keys = any("lora" in key for key in state_dict.keys()) # Flux Control LoRAs also have norm keys has_norm_keys = any( norm_key in key for key in state_dict.keys() for norm_key in self._control_lora_supported_norm_keys ) if not (has_lora_keys or has_norm_keys): raise ValueError("Invalid LoRA checkpoint.") transformer_lora_state_dict = { k: state_dict.pop(k) for k in list(state_dict.keys()) if "transformer." in k and "lora" in k } transformer_norm_state_dict = { k: state_dict.pop(k) for k in list(state_dict.keys()) if "transformer." in k and any(norm_key in k for norm_key in self._control_lora_supported_norm_keys) } transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer has_param_with_expanded_shape = self._maybe_expand_transformer_param_shape_or_error_( transformer, transformer_lora_state_dict, transformer_norm_state_dict ) if has_param_with_expanded_shape: logger.info( "The LoRA weights contain parameters that have different shapes that expected by the transformer. " "As a result, the state_dict of the transformer has been expanded to match the LoRA parameter shapes. " "To get a comprehensive list of parameter names that were modified, enable debug logging." ) transformer_lora_state_dict = self._maybe_expand_lora_state_dict( transformer=transformer, lora_state_dict=transformer_lora_state_dict ) if len(transformer_lora_state_dict) > 0: self.load_lora_into_transformer( transformer_lora_state_dict, network_alphas=network_alphas, transformer=transformer, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) if len(transformer_norm_state_dict) > 0: transformer._transformer_norm_layers = self._load_norm_into_transformer( transformer_norm_state_dict, transformer=transformer, discard_original_layers=False, ) text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k} if len(text_encoder_state_dict) > 0: self.load_lora_into_text_encoder( text_encoder_state_dict, network_alphas=network_alphas, text_encoder=self.text_encoder, prefix="text_encoder", lora_scale=self.lora_scale, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def load_lora_into_transformer( cls, state_dict, network_alphas, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). transformer (`FluxTransformer2DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. 
low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. keys = list(state_dict.keys()) transformer_present = any(key.startswith(cls.transformer_name) for key in keys) if transformer_present: logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=network_alphas, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def _load_norm_into_transformer( cls, state_dict, transformer, prefix=None, discard_original_layers=False, ) -> Dict[str, torch.Tensor]: # Remove prefix if present prefix = prefix or cls.transformer_name for key in list(state_dict.keys()): if key.split(".")[0] == prefix: state_dict[key[len(f"{prefix}.") :]] = state_dict.pop(key) # Find invalid keys transformer_state_dict = transformer.state_dict() transformer_keys = set(transformer_state_dict.keys()) state_dict_keys = set(state_dict.keys()) extra_keys = list(state_dict_keys - transformer_keys) if extra_keys: logger.warning( f"Unsupported keys found in state dict when trying to load normalization layers into the transformer. The following keys will be ignored:\n{extra_keys}." ) for key in extra_keys: state_dict.pop(key) # Save the layers that are going to be overwritten so that unload_lora_weights can work as expected overwritten_layers_state_dict = {} if not discard_original_layers: for key in state_dict.keys(): overwritten_layers_state_dict[key] = transformer_state_dict[key].clone() logger.info( "The provided state dict contains normalization layers in addition to LoRA layers. The normalization layers will directly update the state_dict of the transformer " 'as opposed to the LoRA layers that will co-exist separately until the "fuse_lora()" method is called. That is to say, the normalization layers will always be directly ' "fused into the transformer and can only be unfused if `discard_original_layers=True` is passed. This might also have implications when dealing with multiple LoRAs. " "If you notice something unexpected, please open an issue: https://github.com/huggingface/diffusers/issues." ) # We can't load with strict=True because the current state_dict does not contain all the transformer keys incompatible_keys = transformer.load_state_dict(state_dict, strict=False) unexpected_keys = getattr(incompatible_keys, "unexpected_keys", None) # We shouldn't expect to see the supported norm keys here being present in the unexpected keys. if unexpected_keys: if any(norm_key in k for k in unexpected_keys for norm_key in cls._control_lora_supported_norm_keys): raise ValueError( f"Found {unexpected_keys} as unexpected keys while trying to load norm layers into the transformer." ) return overwritten_layers_state_dict @classmethod # Copied from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder def load_lora_into_text_encoder( cls, state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False, ): """ This will load the LoRA layers specified in `state_dict` into `text_encoder` Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. 
The key should be prefixed with an additional `text_encoder` to distinguish between unet lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). text_encoder (`CLIPTextModel`): The text encoder model to load the LoRA layers into. prefix (`str`): Expected prefix of the `text_encoder` in the `state_dict`. lora_scale (`float`): How much to scale the output of the lora linear layer before it is added with the output of the regular lora layer. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ _load_lora_into_text_encoder( state_dict=state_dict, network_alphas=network_alphas, lora_scale=lora_scale, text_encoder=text_encoder, prefix=prefix, text_encoder_name=cls.text_encoder_name, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.save_lora_weights with unet->transformer def save_lora_weights( cls, save_directory: Union[str, os.PathLike], transformer_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_lora_layers: Dict[str, torch.nn.Module] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. transformer_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `transformer`. text_encoder_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. 
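
        Example (illustrative sketch): assumes `transformer` is a `FluxTransformer2DModel` that has been
        fine-tuned with PEFT LoRA adapters and that `peft` is installed; the save directory is a placeholder.

        ```py
        from peft.utils import get_peft_model_state_dict

        from diffusers import FluxPipeline

        transformer_lora_layers = get_peft_model_state_dict(transformer)
        FluxPipeline.save_lora_weights(
            save_directory="flux-lora-output",
            transformer_lora_layers=transformer_lora_layers,
        )
        ```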
""" state_dict = {} if not (transformer_lora_layers or text_encoder_lora_layers): raise ValueError("You must pass at least one of `transformer_lora_layers` and `text_encoder_lora_layers`.") if transformer_lora_layers: state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name)) if text_encoder_lora_layers: state_dict.update(cls.pack_weights(text_encoder_lora_layers, cls.text_encoder_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["transformer"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer if ( hasattr(transformer, "_transformer_norm_layers") and isinstance(transformer._transformer_norm_layers, dict) and len(transformer._transformer_norm_layers.keys()) > 0 ): logger.info( "The provided state dict contains normalization layers in addition to LoRA layers. The normalization layers will be directly updated the state_dict of the transformer " "as opposed to the LoRA layers that will co-exist separately until the 'fuse_lora()' method is called. That is to say, the normalization layers will always be directly " "fused into the transformer and can only be unfused if `discard_original_layers=True` is passed." ) super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["transformer", "text_encoder"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. </Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. """ transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer if hasattr(transformer, "_transformer_norm_layers") and transformer._transformer_norm_layers: transformer.load_state_dict(transformer._transformer_norm_layers, strict=False) super().unfuse_lora(components=components) # We override this here account for `_transformer_norm_layers` and `_overwritten_params`. def unload_lora_weights(self, reset_to_overwritten_params=False): """ Unloads the LoRA parameters. 
Args: reset_to_overwritten_params (`bool`, defaults to `False`): Whether to reset the LoRA-loaded modules to their original params. Refer to the [Flux documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) to learn more. Examples: ```python >>> # Assuming `pipeline` is already loaded with the LoRA parameters. >>> pipeline.unload_lora_weights() >>> ... ``` """ super().unload_lora_weights() transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer if hasattr(transformer, "_transformer_norm_layers") and transformer._transformer_norm_layers: transformer.load_state_dict(transformer._transformer_norm_layers, strict=False) transformer._transformer_norm_layers = None if reset_to_overwritten_params and getattr(transformer, "_overwritten_params", None) is not None: overwritten_params = transformer._overwritten_params module_names = set() for param_name in overwritten_params: if param_name.endswith(".weight"): module_names.add(param_name.replace(".weight", "")) for name, module in transformer.named_modules(): if isinstance(module, torch.nn.Linear) and name in module_names: module_weight = module.weight.data module_bias = module.bias.data if module.bias is not None else None bias = module_bias is not None parent_module_name, _, current_module_name = name.rpartition(".") parent_module = transformer.get_submodule(parent_module_name) current_param_weight = overwritten_params[f"{name}.weight"] in_features, out_features = current_param_weight.shape[1], current_param_weight.shape[0] with torch.device("meta"): original_module = torch.nn.Linear( in_features, out_features, bias=bias, dtype=module_weight.dtype, ) tmp_state_dict = {"weight": current_param_weight} if module_bias is not None: tmp_state_dict.update({"bias": overwritten_params[f"{name}.bias"]}) original_module.load_state_dict(tmp_state_dict, assign=True, strict=True) setattr(parent_module, current_module_name, original_module) del tmp_state_dict if current_module_name in _MODULE_NAME_TO_ATTRIBUTE_MAP_FLUX: attribute_name = _MODULE_NAME_TO_ATTRIBUTE_MAP_FLUX[current_module_name] new_value = int(current_param_weight.shape[1]) old_value = getattr(transformer.config, attribute_name) setattr(transformer.config, attribute_name, new_value) logger.info( f"Set the {attribute_name} attribute of the model to {new_value} from {old_value}." ) @classmethod def _maybe_expand_transformer_param_shape_or_error_( cls, transformer: torch.nn.Module, lora_state_dict=None, norm_state_dict=None, prefix=None, ) -> bool: """ Control LoRA expands the shape of the input layer from (3072, 64) to (3072, 128). This method handles that and generalizes things a bit so that any parameter that needs expansion receives appropriate treatement. 
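        As a concrete illustration of the numbers above: if the LoRA expects 128 input features while the base
        module was built with 64, the existing (3072, 64) weight is copied into the first 64 input columns of a
        zero-initialized (3072, 128) weight, and the remaining columns stay zero.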
""" state_dict = {} if lora_state_dict is not None: state_dict.update(lora_state_dict) if norm_state_dict is not None: state_dict.update(norm_state_dict) # Remove prefix if present prefix = prefix or cls.transformer_name for key in list(state_dict.keys()): if key.split(".")[0] == prefix: state_dict[key[len(f"{prefix}.") :]] = state_dict.pop(key) # Expand transformer parameter shapes if they don't match lora has_param_with_shape_update = False overwritten_params = {} is_peft_loaded = getattr(transformer, "peft_config", None) is not None for name, module in transformer.named_modules(): if isinstance(module, torch.nn.Linear): module_weight = module.weight.data module_bias = module.bias.data if module.bias is not None else None bias = module_bias is not None lora_base_name = name.replace(".base_layer", "") if is_peft_loaded else name lora_A_weight_name = f"{lora_base_name}.lora_A.weight" lora_B_weight_name = f"{lora_base_name}.lora_B.weight" if lora_A_weight_name not in state_dict: continue in_features = state_dict[lora_A_weight_name].shape[1] out_features = state_dict[lora_B_weight_name].shape[0] # Model maybe loaded with different quantization schemes which may flatten the params. # `bitsandbytes`, for example, flatten the weights when using 4bit. 8bit bnb models # preserve weight shape. module_weight_shape = cls._calculate_module_shape(model=transformer, base_module=module) # This means there's no need for an expansion in the params, so we simply skip. if tuple(module_weight_shape) == (out_features, in_features): continue # TODO (sayakpaul): We still need to consider if the module we're expanding is # quantized and handle it accordingly if that is the case. module_out_features, module_in_features = module_weight.shape debug_message = "" if in_features > module_in_features: debug_message += ( f'Expanding the nn.Linear input/output features for module="{name}" because the provided LoRA ' f"checkpoint contains higher number of features than expected. The number of input_features will be " f"expanded from {module_in_features} to {in_features}" ) if out_features > module_out_features: debug_message += ( ", and the number of output features will be " f"expanded from {module_out_features} to {out_features}." ) else: debug_message += "." if debug_message: logger.debug(debug_message) if out_features > module_out_features or in_features > module_in_features: has_param_with_shape_update = True parent_module_name, _, current_module_name = name.rpartition(".") parent_module = transformer.get_submodule(parent_module_name) with torch.device("meta"): expanded_module = torch.nn.Linear( in_features, out_features, bias=bias, dtype=module_weight.dtype ) # Only weights are expanded and biases are not. This is because only the input dimensions # are changed while the output dimensions remain the same. The shape of the weight tensor # is (out_features, in_features), while the shape of bias tensor is (out_features,), which # explains the reason why only weights are expanded. 
new_weight = torch.zeros_like( expanded_module.weight.data, device=module_weight.device, dtype=module_weight.dtype ) slices = tuple(slice(0, dim) for dim in module_weight.shape) new_weight[slices] = module_weight tmp_state_dict = {"weight": new_weight} if module_bias is not None: tmp_state_dict["bias"] = module_bias expanded_module.load_state_dict(tmp_state_dict, strict=True, assign=True) setattr(parent_module, current_module_name, expanded_module) del tmp_state_dict if current_module_name in _MODULE_NAME_TO_ATTRIBUTE_MAP_FLUX: attribute_name = _MODULE_NAME_TO_ATTRIBUTE_MAP_FLUX[current_module_name] new_value = int(expanded_module.weight.data.shape[1]) old_value = getattr(transformer.config, attribute_name) setattr(transformer.config, attribute_name, new_value) logger.info( f"Set the {attribute_name} attribute of the model to {new_value} from {old_value}." ) # For `unload_lora_weights()`. # TODO: this could lead to more memory overhead if the number of overwritten params # are large. Should be revisited later and tackled through a `discard_original_layers` arg. overwritten_params[f"{current_module_name}.weight"] = module_weight if module_bias is not None: overwritten_params[f"{current_module_name}.bias"] = module_bias if len(overwritten_params) > 0: transformer._overwritten_params = overwritten_params return has_param_with_shape_update @classmethod def _maybe_expand_lora_state_dict(cls, transformer, lora_state_dict): expanded_module_names = set() transformer_state_dict = transformer.state_dict() prefix = f"{cls.transformer_name}." lora_module_names = [ key[: -len(".lora_A.weight")] for key in lora_state_dict if key.endswith(".lora_A.weight") ] lora_module_names = [name[len(prefix) :] for name in lora_module_names if name.startswith(prefix)] lora_module_names = sorted(set(lora_module_names)) transformer_module_names = sorted({name for name, _ in transformer.named_modules()}) unexpected_modules = set(lora_module_names) - set(transformer_module_names) if unexpected_modules: logger.debug(f"Found unexpected modules: {unexpected_modules}. These will be ignored.") is_peft_loaded = getattr(transformer, "peft_config", None) is not None for k in lora_module_names: if k in unexpected_modules: continue base_param_name = ( f"{k.replace(prefix, '')}.base_layer.weight" if is_peft_loaded and f"{k.replace(prefix, '')}.base_layer.weight" in transformer_state_dict else f"{k.replace(prefix, '')}.weight" ) base_weight_param = transformer_state_dict[base_param_name] lora_A_param = lora_state_dict[f"{prefix}{k}.lora_A.weight"] # TODO (sayakpaul): Handle the cases when we actually need to expand when using quantization. base_module_shape = cls._calculate_module_shape(model=transformer, base_weight_param_name=base_param_name) if base_module_shape[1] > lora_A_param.shape[1]: shape = (lora_A_param.shape[0], base_weight_param.shape[1]) expanded_state_dict_weight = torch.zeros(shape, device=base_weight_param.device) expanded_state_dict_weight[:, : lora_A_param.shape[1]].copy_(lora_A_param) lora_state_dict[f"{prefix}{k}.lora_A.weight"] = expanded_state_dict_weight expanded_module_names.add(k) elif base_module_shape[1] < lora_A_param.shape[1]: raise NotImplementedError( f"This LoRA param ({k}.lora_A.weight) has an incompatible shape {lora_A_param.shape}. Please open an issue to file for a feature request - https://github.com/huggingface/diffusers/issues/new." ) if expanded_module_names: logger.info( f"The following LoRA modules were zero padded to match the state dict of {cls.transformer_name}: {expanded_module_names}. 
Please open an issue if you think this was unexpected - https://github.com/huggingface/diffusers/issues/new." ) return lora_state_dict @staticmethod def _calculate_module_shape( model: "torch.nn.Module", base_module: "torch.nn.Linear" = None, base_weight_param_name: str = None, ) -> "torch.Size": def _get_weight_shape(weight: torch.Tensor): return weight.quant_state.shape if weight.__class__.__name__ == "Params4bit" else weight.shape if base_module is not None: return _get_weight_shape(base_module.weight) elif base_weight_param_name is not None: if not base_weight_param_name.endswith(".weight"): raise ValueError( f"Invalid `base_weight_param_name` passed as it does not end with '.weight' {base_weight_param_name=}." ) module_path = base_weight_param_name.rsplit(".weight", 1)[0] submodule = get_submodule_by_name(model, module_path) return _get_weight_shape(submodule.weight) raise ValueError("Either `base_module` or `base_weight_param_name` must be provided.") # The reason why we subclass from `StableDiffusionLoraLoaderMixin` here is because Amused initially # relied on `StableDiffusionLoraLoaderMixin` for its LoRA support. class AmusedLoraLoaderMixin(StableDiffusionLoraLoaderMixin): _lora_loadable_modules = ["transformer", "text_encoder"] transformer_name = TRANSFORMER_NAME text_encoder_name = TEXT_ENCODER_NAME @classmethod # Copied from diffusers.loaders.lora_pipeline.FluxLoraLoaderMixin.load_lora_into_transformer with FluxTransformer2DModel->UVit2DModel def load_lora_into_transformer( cls, state_dict, network_alphas, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). transformer (`UVit2DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and not is_peft_version(">=", "0.13.1"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. 
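        # Note (descriptive, added for clarity): keys in `state_dict` are expected to carry the
        # `transformer.` prefix, e.g. a hypothetical "transformer.<module path>.lora_A.weight";
        # the prefix check below only loads the adapter when such keys are present.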
keys = list(state_dict.keys()) transformer_present = any(key.startswith(cls.transformer_name) for key in keys) if transformer_present: logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=network_alphas, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder def load_lora_into_text_encoder( cls, state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False, ): """ This will load the LoRA layers specified in `state_dict` into `text_encoder` Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The key should be prefixed with an additional `text_encoder` to distinguish between unet lora layers. network_alphas (`Dict[str, float]`): The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). text_encoder (`CLIPTextModel`): The text encoder model to load the LoRA layers into. prefix (`str`): Expected prefix of the `text_encoder` in the `state_dict`. lora_scale (`float`): How much to scale the output of the lora linear layer before it is added with the output of the regular lora layer. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ _load_lora_into_text_encoder( state_dict=state_dict, network_alphas=network_alphas, lora_scale=lora_scale, text_encoder=text_encoder, prefix=prefix, text_encoder_name=cls.text_encoder_name, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod def save_lora_weights( cls, save_directory: Union[str, os.PathLike], text_encoder_lora_layers: Dict[str, torch.nn.Module] = None, transformer_lora_layers: Dict[str, torch.nn.Module] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. unet_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `unet`. text_encoder_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. 
Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. """ state_dict = {} if not (transformer_lora_layers or text_encoder_lora_layers): raise ValueError("You must pass at least one of `transformer_lora_layers` or `text_encoder_lora_layers`.") if transformer_lora_layers: state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name)) if text_encoder_lora_layers: state_dict.update(cls.pack_weights(text_encoder_lora_layers, cls.text_encoder_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) class CogVideoXLoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into [`CogVideoXTransformer3DModel`]. Specific to [`CogVideoXPipeline`]. """ _lora_loadable_modules = ["transformer"] transformer_name = TRANSFORMER_NAME @classmethod @validate_hf_hub_args # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.lora_state_dict def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. """ # Load the main state dict first which has the LoRA layers for either of # transformer and text encoder or both. 
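        # Illustrative sketch, kept as a comment so it has no effect on this module: this classmethod
        # can also be called directly to inspect a checkpoint before loading it. The repo id below is
        # a placeholder.
        #
        #   from diffusers import CogVideoXPipeline
        #
        #   state_dict = CogVideoXPipeline.lora_state_dict("some-user/cogvideox-lora")
        #   print(sorted(state_dict)[:5])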
cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} return state_dict def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_transformer`] for more details on how the state dict is loaded into `self.transformer`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. 
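        # Illustrative sketch, kept as a comment so it has no effect on this module: a typical
        # user-facing call for a CogVideoX LoRA. Repo ids are placeholders.
        #
        #   import torch
        #   from diffusers import CogVideoXPipeline
        #
        #   pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
        #   pipe.load_lora_weights("some-user/cogvideox-lora", adapter_name="my_motion")
        #   video = pipe("a panda playing guitar", num_inference_steps=50).frames[0]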
state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") self.load_lora_into_transformer( state_dict, transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.load_lora_into_transformer with SD3Transformer2DModel->CogVideoXTransformer3DModel def load_lora_into_transformer( cls, state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. transformer (`CogVideoXTransformer3DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=None, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Adapted from diffusers.loaders.lora_pipeline.StableDiffusionLoraLoaderMixin.save_lora_weights without support for text encoder def save_lora_weights( cls, save_directory: Union[str, os.PathLike], transformer_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. transformer_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `transformer`. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. 
""" state_dict = {} if not transformer_lora_layers: raise ValueError("You must pass `transformer_lora_layers`.") if transformer_lora_layers: state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["transformer"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["transformer"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. </Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. unfuse_transformer (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters. """ super().unfuse_lora(components=components) class Mochi1LoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into [`MochiTransformer3DModel`]. Specific to [`MochiPipeline`]. """ _lora_loadable_modules = ["transformer"] transformer_name = TRANSFORMER_NAME @classmethod @validate_hf_hub_args # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.lora_state_dict def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. 
force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. """ # Load the main state dict first which has the LoRA layers for either of # transformer and text encoder or both. cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} return state_dict # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.load_lora_weights def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_transformer`] for more details on how the state dict is loaded into `self.transformer`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. 
adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") self.load_lora_into_transformer( state_dict, transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.load_lora_into_transformer with SD3Transformer2DModel->MochiTransformer3DModel def load_lora_into_transformer( cls, state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. transformer (`MochiTransformer3DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=None, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.save_lora_weights def save_lora_weights( cls, save_directory: Union[str, os.PathLike], transformer_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. 
Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. transformer_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `transformer`. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. """ state_dict = {} if not transformer_lora_layers: raise ValueError("You must pass `transformer_lora_layers`.") if transformer_lora_layers: state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["transformer"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["transformer"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. </Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. unfuse_transformer (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters. """ super().unfuse_lora(components=components) class LTXVideoLoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into [`LTXVideoTransformer3DModel`]. Specific to [`LTXPipeline`]. 
""" _lora_loadable_modules = ["transformer"] transformer_name = TRANSFORMER_NAME @classmethod @validate_hf_hub_args # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.lora_state_dict def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. """ # Load the main state dict first which has the LoRA layers for either of # transformer and text encoder or both. cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. 
So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} return state_dict # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.load_lora_weights def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_transformer`] for more details on how the state dict is loaded into `self.transformer`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") self.load_lora_into_transformer( state_dict, transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.load_lora_into_transformer with SD3Transformer2DModel->LTXVideoTransformer3DModel def load_lora_into_transformer( cls, state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. transformer (`LTXVideoTransformer3DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. 
If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=None, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.save_lora_weights def save_lora_weights( cls, save_directory: Union[str, os.PathLike], transformer_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. transformer_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `transformer`. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. """ state_dict = {} if not transformer_lora_layers: raise ValueError("You must pass `transformer_lora_layers`.") if transformer_lora_layers: state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["transformer"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. 
Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["transformer"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. </Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. unfuse_transformer (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters. """ super().unfuse_lora(components=components) class SanaLoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into [`SanaTransformer2DModel`]. Specific to [`SanaPipeline`]. """ _lora_loadable_modules = ["transformer"] transformer_name = TRANSFORMER_NAME @classmethod @validate_hf_hub_args # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.lora_state_dict def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. """ # Load the main state dict first which has the LoRA layers for either of # transformer and text encoder or both. 
cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} return state_dict # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.load_lora_weights def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_transformer`] for more details on how the state dict is loaded into `self.transformer`. Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. 
state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") self.load_lora_into_transformer( state_dict, transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.load_lora_into_transformer with SD3Transformer2DModel->SanaTransformer2DModel def load_lora_into_transformer( cls, state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. transformer (`SanaTransformer2DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=None, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.save_lora_weights def save_lora_weights( cls, save_directory: Union[str, os.PathLike], transformer_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. transformer_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `transformer`. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. 
""" state_dict = {} if not transformer_lora_layers: raise ValueError("You must pass `transformer_lora_layers`.") if transformer_lora_layers: state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["transformer"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["transformer"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. </Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. unfuse_transformer (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters. """ super().unfuse_lora(components=components) class HunyuanVideoLoraLoaderMixin(LoraBaseMixin): r""" Load LoRA layers into [`HunyuanVideoTransformer3DModel`]. Specific to [`HunyuanVideoPipeline`]. """ _lora_loadable_modules = ["transformer"] transformer_name = TRANSFORMER_NAME @classmethod @validate_hf_hub_args def lora_state_dict( cls, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs, ): r""" Return state dict for lora weights and the network alphas. <Tip warning={true}> We support loading original format HunyuanVideo LoRA checkpoints. This function is experimental and might change in the future. </Tip> Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`ModelMixin.save_pretrained`]. - A [torch state dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. 
force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. """ # Load the main state dict first which has the LoRA layers for either of # transformer and text encoder or both. cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) weight_name = kwargs.pop("weight_name", None) use_safetensors = kwargs.pop("use_safetensors", None) allow_pickle = False if use_safetensors is None: use_safetensors = True allow_pickle = True user_agent = { "file_type": "attn_procs_weights", "framework": "pytorch", } state_dict = _fetch_state_dict( pretrained_model_name_or_path_or_dict=pretrained_model_name_or_path_or_dict, weight_name=weight_name, use_safetensors=use_safetensors, local_files_only=local_files_only, cache_dir=cache_dir, force_download=force_download, proxies=proxies, token=token, revision=revision, subfolder=subfolder, user_agent=user_agent, allow_pickle=allow_pickle, ) is_dora_scale_present = any("dora_scale" in k for k in state_dict) if is_dora_scale_present: warn_msg = "It seems like you are using a DoRA checkpoint that is not compatible in Diffusers at the moment. So, we are going to filter out the keys associated to 'dora_scale` from the state dict. If you think this is a mistake please open an issue https://github.com/huggingface/diffusers/issues/new." logger.warning(warn_msg) state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k} is_original_hunyuan_video = any("img_attn_qkv" in k for k in state_dict) if is_original_hunyuan_video: state_dict = _convert_hunyuan_video_lora_to_diffusers(state_dict) return state_dict # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.load_lora_weights def load_lora_weights( self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], adapter_name=None, **kwargs ): """ Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and `self.text_encoder`. All kwargs are forwarded to `self.lora_state_dict`. See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`] for more details on how the state dict is loaded. See [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_into_transformer`] for more details on how the state dict is loaded into `self.transformer`. 
Parameters: pretrained_model_name_or_path_or_dict (`str` or `os.PathLike` or `dict`): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. kwargs (`dict`, *optional*): See [`~loaders.StableDiffusionLoraLoaderMixin.lora_state_dict`]. """ if not USE_PEFT_BACKEND: raise ValueError("PEFT backend is required for this method.") low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT_LORA) if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # if a dict is passed, copy it instead of modifying it inplace if isinstance(pretrained_model_name_or_path_or_dict, dict): pretrained_model_name_or_path_or_dict = pretrained_model_name_or_path_or_dict.copy() # First, ensure that the checkpoint is a compatible one and can be successfully loaded. state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs) is_correct_format = all("lora" in key for key in state_dict.keys()) if not is_correct_format: raise ValueError("Invalid LoRA checkpoint.") self.load_lora_into_transformer( state_dict, transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer, adapter_name=adapter_name, _pipeline=self, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.load_lora_into_transformer with SD3Transformer2DModel->HunyuanVideoTransformer3DModel def load_lora_into_transformer( cls, state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False ): """ This will load the LoRA layers specified in `state_dict` into `transformer`. Parameters: state_dict (`dict`): A standard state dict containing the lora layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional `unet` which can be used to distinguish between text encoder lora layers. transformer (`HunyuanVideoTransformer3DModel`): The Transformer model to load the LoRA layers into. adapter_name (`str`, *optional*): Adapter name to be used for referencing the loaded adapter model. If not specified, it will use `default_{i}` where i is the total number of adapters being loaded. low_cpu_mem_usage (`bool`, *optional*): Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights. """ if low_cpu_mem_usage and is_peft_version("<", "0.13.0"): raise ValueError( "`low_cpu_mem_usage=True` is not compatible with this `peft` version. Please update it with `pip install -U peft`." ) # Load the layers corresponding to transformer. 
logger.info(f"Loading {cls.transformer_name}.") transformer.load_lora_adapter( state_dict, network_alphas=None, adapter_name=adapter_name, _pipeline=_pipeline, low_cpu_mem_usage=low_cpu_mem_usage, ) @classmethod # Copied from diffusers.loaders.lora_pipeline.CogVideoXLoraLoaderMixin.save_lora_weights def save_lora_weights( cls, save_directory: Union[str, os.PathLike], transformer_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): r""" Save the LoRA parameters corresponding to the UNet and text encoder. Arguments: save_directory (`str` or `os.PathLike`): Directory to save LoRA parameters to. Will be created if it doesn't exist. transformer_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`): State dict of the LoRA layers corresponding to the `transformer`. is_main_process (`bool`, *optional*, defaults to `True`): Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. save_function (`Callable`): The function to use to save the state dictionary. Useful during distributed training when you need to replace `torch.save` with another method. Can be configured with the environment variable `DIFFUSERS_SAVE_MODE`. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. """ state_dict = {} if not transformer_lora_layers: raise ValueError("You must pass `transformer_lora_layers`.") if transformer_lora_layers: state_dict.update(cls.pack_weights(transformer_lora_layers, cls.transformer_name)) # Save the model cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def fuse_lora( self, components: List[str] = ["transformer"], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: Optional[List[str]] = None, **kwargs, ): r""" Fuses the LoRA parameters into the original parameters of the corresponding blocks. <Tip warning={true}> This is an experimental API. </Tip> Args: components: (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into. lora_scale (`float`, defaults to 1.0): Controls how much to influence the outputs with the LoRA parameters. safe_fusing (`bool`, defaults to `False`): Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (`List[str]`, *optional*): Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. 
Example: ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ).to("cuda") pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipeline.fuse_lora(lora_scale=0.7) ``` """ super().fuse_lora( components=components, lora_scale=lora_scale, safe_fusing=safe_fusing, adapter_names=adapter_names ) def unfuse_lora(self, components: List[str] = ["transformer"], **kwargs): r""" Reverses the effect of [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). <Tip warning={true}> This is an experimental API. </Tip> Args: components (`List[str]`): List of LoRA-injectable components to unfuse LoRA from. unfuse_transformer (`bool`, defaults to `True`): Whether to unfuse the UNet LoRA parameters. """ super().unfuse_lora(components=components) class LoraLoaderMixin(StableDiffusionLoraLoaderMixin): def __init__(self, *args, **kwargs): deprecation_message = "LoraLoaderMixin is deprecated and this will be removed in a future version. Please use `StableDiffusionLoraLoaderMixin`, instead." deprecate("LoraLoaderMixin", "1.0.0", deprecation_message) super().__init__(*args, **kwargs)
diffusers/src/diffusers/loaders/lora_pipeline.py/0
{ "file_path": "diffusers/src/diffusers/loaders/lora_pipeline.py", "repo_id": "diffusers", "token_count": 79975 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import functools import math import flax.linen as nn import jax import jax.numpy as jnp def _query_chunk_attention(query, key, value, precision, key_chunk_size: int = 4096): """Multi-head dot product attention with a limited number of queries.""" num_kv, num_heads, k_features = key.shape[-3:] v_features = value.shape[-1] key_chunk_size = min(key_chunk_size, num_kv) query = query / jnp.sqrt(k_features) @functools.partial(jax.checkpoint, prevent_cse=False) def summarize_chunk(query, key, value): attn_weights = jnp.einsum("...qhd,...khd->...qhk", query, key, precision=precision) max_score = jnp.max(attn_weights, axis=-1, keepdims=True) max_score = jax.lax.stop_gradient(max_score) exp_weights = jnp.exp(attn_weights - max_score) exp_values = jnp.einsum("...vhf,...qhv->...qhf", value, exp_weights, precision=precision) max_score = jnp.einsum("...qhk->...qh", max_score) return (exp_values, exp_weights.sum(axis=-1), max_score) def chunk_scanner(chunk_idx): # julienne key array key_chunk = jax.lax.dynamic_slice( operand=key, start_indices=[0] * (key.ndim - 3) + [chunk_idx, 0, 0], # [...,k,h,d] slice_sizes=list(key.shape[:-3]) + [key_chunk_size, num_heads, k_features], # [...,k,h,d] ) # julienne value array value_chunk = jax.lax.dynamic_slice( operand=value, start_indices=[0] * (value.ndim - 3) + [chunk_idx, 0, 0], # [...,v,h,d] slice_sizes=list(value.shape[:-3]) + [key_chunk_size, num_heads, v_features], # [...,v,h,d] ) return summarize_chunk(query, key_chunk, value_chunk) chunk_values, chunk_weights, chunk_max = jax.lax.map(f=chunk_scanner, xs=jnp.arange(0, num_kv, key_chunk_size)) global_max = jnp.max(chunk_max, axis=0, keepdims=True) max_diffs = jnp.exp(chunk_max - global_max) chunk_values *= jnp.expand_dims(max_diffs, axis=-1) chunk_weights *= max_diffs all_values = chunk_values.sum(axis=0) all_weights = jnp.expand_dims(chunk_weights, -1).sum(axis=0) return all_values / all_weights def jax_memory_efficient_attention( query, key, value, precision=jax.lax.Precision.HIGHEST, query_chunk_size: int = 1024, key_chunk_size: int = 4096 ): r""" Flax Memory-efficient multi-head dot product attention. 
https://arxiv.org/abs/2112.05682v2 https://github.com/AminRezaei0x443/memory-efficient-attention Args: query (`jnp.ndarray`): (batch..., query_length, head, query_key_depth_per_head) key (`jnp.ndarray`): (batch..., key_value_length, head, query_key_depth_per_head) value (`jnp.ndarray`): (batch..., key_value_length, head, value_depth_per_head) precision (`jax.lax.Precision`, *optional*, defaults to `jax.lax.Precision.HIGHEST`): numerical precision for computation query_chunk_size (`int`, *optional*, defaults to 1024): chunk size to divide query array value must divide query_length equally without remainder key_chunk_size (`int`, *optional*, defaults to 4096): chunk size to divide key and value array value must divide key_value_length equally without remainder Returns: (`jnp.ndarray`) with shape of (batch..., query_length, head, value_depth_per_head) """ num_q, num_heads, q_features = query.shape[-3:] def chunk_scanner(chunk_idx, _): # julienne query array query_chunk = jax.lax.dynamic_slice( operand=query, start_indices=([0] * (query.ndim - 3)) + [chunk_idx, 0, 0], # [...,q,h,d] slice_sizes=list(query.shape[:-3]) + [min(query_chunk_size, num_q), num_heads, q_features], # [...,q,h,d] ) return ( chunk_idx + query_chunk_size, # unused ignore it _query_chunk_attention( query=query_chunk, key=key, value=value, precision=precision, key_chunk_size=key_chunk_size ), ) _, res = jax.lax.scan( f=chunk_scanner, init=0, xs=None, length=math.ceil(num_q / query_chunk_size), # start counter # stop counter ) return jnp.concatenate(res, axis=-3) # fuse the chunked result back class FlaxAttention(nn.Module): r""" A Flax multi-head attention module as described in: https://arxiv.org/abs/1706.03762 Parameters: query_dim (:obj:`int`): Input hidden states dimension heads (:obj:`int`, *optional*, defaults to 8): Number of heads dim_head (:obj:`int`, *optional*, defaults to 64): Hidden states dimension inside each head dropout (:obj:`float`, *optional*, defaults to 0.0): Dropout rate use_memory_efficient_attention (`bool`, *optional*, defaults to `False`): enable memory efficient attention https://arxiv.org/abs/2112.05682 split_head_dim (`bool`, *optional*, defaults to `False`): Whether to split the head dimension into a new axis for the self-attention computation. In most cases, enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. 
dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): Parameters `dtype` """ query_dim: int heads: int = 8 dim_head: int = 64 dropout: float = 0.0 use_memory_efficient_attention: bool = False split_head_dim: bool = False dtype: jnp.dtype = jnp.float32 def setup(self): inner_dim = self.dim_head * self.heads self.scale = self.dim_head**-0.5 # Weights were exported with old names {to_q, to_k, to_v, to_out} self.query = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_q") self.key = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_k") self.value = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, name="to_v") self.proj_attn = nn.Dense(self.query_dim, dtype=self.dtype, name="to_out_0") self.dropout_layer = nn.Dropout(rate=self.dropout) def reshape_heads_to_batch_dim(self, tensor): batch_size, seq_len, dim = tensor.shape head_size = self.heads tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size) tensor = jnp.transpose(tensor, (0, 2, 1, 3)) tensor = tensor.reshape(batch_size * head_size, seq_len, dim // head_size) return tensor def reshape_batch_dim_to_heads(self, tensor): batch_size, seq_len, dim = tensor.shape head_size = self.heads tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim) tensor = jnp.transpose(tensor, (0, 2, 1, 3)) tensor = tensor.reshape(batch_size // head_size, seq_len, dim * head_size) return tensor def __call__(self, hidden_states, context=None, deterministic=True): context = hidden_states if context is None else context query_proj = self.query(hidden_states) key_proj = self.key(context) value_proj = self.value(context) if self.split_head_dim: b = hidden_states.shape[0] query_states = jnp.reshape(query_proj, (b, -1, self.heads, self.dim_head)) key_states = jnp.reshape(key_proj, (b, -1, self.heads, self.dim_head)) value_states = jnp.reshape(value_proj, (b, -1, self.heads, self.dim_head)) else: query_states = self.reshape_heads_to_batch_dim(query_proj) key_states = self.reshape_heads_to_batch_dim(key_proj) value_states = self.reshape_heads_to_batch_dim(value_proj) if self.use_memory_efficient_attention: query_states = query_states.transpose(1, 0, 2) key_states = key_states.transpose(1, 0, 2) value_states = value_states.transpose(1, 0, 2) # this if statement create a chunk size for each layer of the unet # the chunk size is equal to the query_length dimension of the deepest layer of the unet flatten_latent_dim = query_states.shape[-3] if flatten_latent_dim % 64 == 0: query_chunk_size = int(flatten_latent_dim / 64) elif flatten_latent_dim % 16 == 0: query_chunk_size = int(flatten_latent_dim / 16) elif flatten_latent_dim % 4 == 0: query_chunk_size = int(flatten_latent_dim / 4) else: query_chunk_size = int(flatten_latent_dim) hidden_states = jax_memory_efficient_attention( query_states, key_states, value_states, query_chunk_size=query_chunk_size, key_chunk_size=4096 * 4 ) hidden_states = hidden_states.transpose(1, 0, 2) hidden_states = self.reshape_batch_dim_to_heads(hidden_states) else: # compute attentions if self.split_head_dim: attention_scores = jnp.einsum("b t n h, b f n h -> b n f t", key_states, query_states) else: attention_scores = jnp.einsum("b i d, b j d->b i j", query_states, key_states) attention_scores = attention_scores * self.scale attention_probs = nn.softmax(attention_scores, axis=-1 if self.split_head_dim else 2) # attend to values if self.split_head_dim: hidden_states = jnp.einsum("b n f t, b t n h -> b f n h", attention_probs, value_states) b = hidden_states.shape[0] 
hidden_states = jnp.reshape(hidden_states, (b, -1, self.heads * self.dim_head)) else: hidden_states = jnp.einsum("b i j, b j d -> b i d", attention_probs, value_states) hidden_states = self.reshape_batch_dim_to_heads(hidden_states) hidden_states = self.proj_attn(hidden_states) return self.dropout_layer(hidden_states, deterministic=deterministic) class FlaxBasicTransformerBlock(nn.Module): r""" A Flax transformer block layer with `GLU` (Gated Linear Unit) activation function as described in: https://arxiv.org/abs/1706.03762 Parameters: dim (:obj:`int`): Inner hidden states dimension n_heads (:obj:`int`): Number of heads d_head (:obj:`int`): Hidden states dimension inside each head dropout (:obj:`float`, *optional*, defaults to 0.0): Dropout rate only_cross_attention (`bool`, defaults to `False`): Whether to only apply cross attention. dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): Parameters `dtype` use_memory_efficient_attention (`bool`, *optional*, defaults to `False`): enable memory efficient attention https://arxiv.org/abs/2112.05682 split_head_dim (`bool`, *optional*, defaults to `False`): Whether to split the head dimension into a new axis for the self-attention computation. In most cases, enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. """ dim: int n_heads: int d_head: int dropout: float = 0.0 only_cross_attention: bool = False dtype: jnp.dtype = jnp.float32 use_memory_efficient_attention: bool = False split_head_dim: bool = False def setup(self): # self attention (or cross_attention if only_cross_attention is True) self.attn1 = FlaxAttention( self.dim, self.n_heads, self.d_head, self.dropout, self.use_memory_efficient_attention, self.split_head_dim, dtype=self.dtype, ) # cross attention self.attn2 = FlaxAttention( self.dim, self.n_heads, self.d_head, self.dropout, self.use_memory_efficient_attention, self.split_head_dim, dtype=self.dtype, ) self.ff = FlaxFeedForward(dim=self.dim, dropout=self.dropout, dtype=self.dtype) self.norm1 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype) self.norm2 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype) self.norm3 = nn.LayerNorm(epsilon=1e-5, dtype=self.dtype) self.dropout_layer = nn.Dropout(rate=self.dropout) def __call__(self, hidden_states, context, deterministic=True): # self attention residual = hidden_states if self.only_cross_attention: hidden_states = self.attn1(self.norm1(hidden_states), context, deterministic=deterministic) else: hidden_states = self.attn1(self.norm1(hidden_states), deterministic=deterministic) hidden_states = hidden_states + residual # cross attention residual = hidden_states hidden_states = self.attn2(self.norm2(hidden_states), context, deterministic=deterministic) hidden_states = hidden_states + residual # feed forward residual = hidden_states hidden_states = self.ff(self.norm3(hidden_states), deterministic=deterministic) hidden_states = hidden_states + residual return self.dropout_layer(hidden_states, deterministic=deterministic) class FlaxTransformer2DModel(nn.Module): r""" A Spatial Transformer layer with Gated Linear Unit (GLU) activation function as described in: https://arxiv.org/pdf/1506.02025.pdf Parameters: in_channels (:obj:`int`): Input number of channels n_heads (:obj:`int`): Number of heads d_head (:obj:`int`): Hidden states dimension inside each head depth (:obj:`int`, *optional*, defaults to 1): Number of transformers block dropout (:obj:`float`, *optional*, defaults to 0.0): Dropout rate use_linear_projection (`bool`, defaults to `False`): 
tbd only_cross_attention (`bool`, defaults to `False`): tbd dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): Parameters `dtype` use_memory_efficient_attention (`bool`, *optional*, defaults to `False`): enable memory efficient attention https://arxiv.org/abs/2112.05682 split_head_dim (`bool`, *optional*, defaults to `False`): Whether to split the head dimension into a new axis for the self-attention computation. In most cases, enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. """ in_channels: int n_heads: int d_head: int depth: int = 1 dropout: float = 0.0 use_linear_projection: bool = False only_cross_attention: bool = False dtype: jnp.dtype = jnp.float32 use_memory_efficient_attention: bool = False split_head_dim: bool = False def setup(self): self.norm = nn.GroupNorm(num_groups=32, epsilon=1e-5) inner_dim = self.n_heads * self.d_head if self.use_linear_projection: self.proj_in = nn.Dense(inner_dim, dtype=self.dtype) else: self.proj_in = nn.Conv( inner_dim, kernel_size=(1, 1), strides=(1, 1), padding="VALID", dtype=self.dtype, ) self.transformer_blocks = [ FlaxBasicTransformerBlock( inner_dim, self.n_heads, self.d_head, dropout=self.dropout, only_cross_attention=self.only_cross_attention, dtype=self.dtype, use_memory_efficient_attention=self.use_memory_efficient_attention, split_head_dim=self.split_head_dim, ) for _ in range(self.depth) ] if self.use_linear_projection: self.proj_out = nn.Dense(inner_dim, dtype=self.dtype) else: self.proj_out = nn.Conv( inner_dim, kernel_size=(1, 1), strides=(1, 1), padding="VALID", dtype=self.dtype, ) self.dropout_layer = nn.Dropout(rate=self.dropout) def __call__(self, hidden_states, context, deterministic=True): batch, height, width, channels = hidden_states.shape residual = hidden_states hidden_states = self.norm(hidden_states) if self.use_linear_projection: hidden_states = hidden_states.reshape(batch, height * width, channels) hidden_states = self.proj_in(hidden_states) else: hidden_states = self.proj_in(hidden_states) hidden_states = hidden_states.reshape(batch, height * width, channels) for transformer_block in self.transformer_blocks: hidden_states = transformer_block(hidden_states, context, deterministic=deterministic) if self.use_linear_projection: hidden_states = self.proj_out(hidden_states) hidden_states = hidden_states.reshape(batch, height, width, channels) else: hidden_states = hidden_states.reshape(batch, height, width, channels) hidden_states = self.proj_out(hidden_states) hidden_states = hidden_states + residual return self.dropout_layer(hidden_states, deterministic=deterministic) class FlaxFeedForward(nn.Module): r""" Flax module that encapsulates two Linear layers separated by a non-linearity. It is the counterpart of PyTorch's [`FeedForward`] class, with the following simplifications: - The activation function is currently hardcoded to a gated linear unit from: https://arxiv.org/abs/2002.05202 - `dim_out` is equal to `dim`. - The number of hidden dimensions is hardcoded to `dim * 4` in [`FlaxGELU`]. 
Parameters: dim (:obj:`int`): Inner hidden states dimension dropout (:obj:`float`, *optional*, defaults to 0.0): Dropout rate dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): Parameters `dtype` """ dim: int dropout: float = 0.0 dtype: jnp.dtype = jnp.float32 def setup(self): # The second linear layer needs to be called # net_2 for now to match the index of the Sequential layer self.net_0 = FlaxGEGLU(self.dim, self.dropout, self.dtype) self.net_2 = nn.Dense(self.dim, dtype=self.dtype) def __call__(self, hidden_states, deterministic=True): hidden_states = self.net_0(hidden_states, deterministic=deterministic) hidden_states = self.net_2(hidden_states) return hidden_states class FlaxGEGLU(nn.Module): r""" Flax implementation of a Linear layer followed by the variant of the gated linear unit activation function from https://arxiv.org/abs/2002.05202. Parameters: dim (:obj:`int`): Input hidden states dimension dropout (:obj:`float`, *optional*, defaults to 0.0): Dropout rate dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32): Parameters `dtype` """ dim: int dropout: float = 0.0 dtype: jnp.dtype = jnp.float32 def setup(self): inner_dim = self.dim * 4 self.proj = nn.Dense(inner_dim * 2, dtype=self.dtype) self.dropout_layer = nn.Dropout(rate=self.dropout) def __call__(self, hidden_states, deterministic=True): hidden_states = self.proj(hidden_states) hidden_linear, hidden_gelu = jnp.split(hidden_states, 2, axis=2) return self.dropout_layer(hidden_linear * nn.gelu(hidden_gelu), deterministic=deterministic)
diffusers/src/diffusers/models/attention_flax.py/0
{ "file_path": "diffusers/src/diffusers/models/attention_flax.py", "repo_id": "diffusers", "token_count": 9064 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Optional, Tuple, Union import torch import torch.nn as nn from ...configuration_utils import ConfigMixin, register_to_config from ...utils import BaseOutput from ...utils.accelerate_utils import apply_forward_hook from ..autoencoders.vae import Decoder, DecoderOutput, Encoder, VectorQuantizer from ..modeling_utils import ModelMixin @dataclass class VQEncoderOutput(BaseOutput): """ Output of VQModel encoding method. Args: latents (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`): The encoded output sample from the last layer of the model. """ latents: torch.Tensor class VQModel(ModelMixin, ConfigMixin): r""" A VQ-VAE model for decoding latent representations. This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented for all models (such as downloading or saving). Parameters: in_channels (int, *optional*, defaults to 3): Number of channels in the input image. out_channels (int, *optional*, defaults to 3): Number of channels in the output. down_block_types (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`): Tuple of downsample block types. up_block_types (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`): Tuple of upsample block types. block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`): Tuple of block output channels. layers_per_block (`int`, *optional*, defaults to `1`): Number of layers per block. act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. latent_channels (`int`, *optional*, defaults to `3`): Number of channels in the latent space. sample_size (`int`, *optional*, defaults to `32`): Sample input size. num_vq_embeddings (`int`, *optional*, defaults to `256`): Number of codebook vectors in the VQ-VAE. norm_num_groups (`int`, *optional*, defaults to `32`): Number of groups for normalization layers. vq_embed_dim (`int`, *optional*): Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (`float`, *optional*, defaults to `0.18215`): The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper. norm_type (`str`, *optional*, defaults to `"group"`): Type of normalization layer to use. Can be one of `"group"` or `"spatial"`. 
""" _skip_layerwise_casting_patterns = ["quantize"] @register_to_config def __init__( self, in_channels: int = 3, out_channels: int = 3, down_block_types: Tuple[str, ...] = ("DownEncoderBlock2D",), up_block_types: Tuple[str, ...] = ("UpDecoderBlock2D",), block_out_channels: Tuple[int, ...] = (64,), layers_per_block: int = 1, act_fn: str = "silu", latent_channels: int = 3, sample_size: int = 32, num_vq_embeddings: int = 256, norm_num_groups: int = 32, vq_embed_dim: Optional[int] = None, scaling_factor: float = 0.18215, norm_type: str = "group", # group, spatial mid_block_add_attention=True, lookup_from_codebook=False, force_upcast=False, ): super().__init__() # pass init params to Encoder self.encoder = Encoder( in_channels=in_channels, out_channels=latent_channels, down_block_types=down_block_types, block_out_channels=block_out_channels, layers_per_block=layers_per_block, act_fn=act_fn, norm_num_groups=norm_num_groups, double_z=False, mid_block_add_attention=mid_block_add_attention, ) vq_embed_dim = vq_embed_dim if vq_embed_dim is not None else latent_channels self.quant_conv = nn.Conv2d(latent_channels, vq_embed_dim, 1) self.quantize = VectorQuantizer(num_vq_embeddings, vq_embed_dim, beta=0.25, remap=None, sane_index_shape=False) self.post_quant_conv = nn.Conv2d(vq_embed_dim, latent_channels, 1) # pass init params to Decoder self.decoder = Decoder( in_channels=latent_channels, out_channels=out_channels, up_block_types=up_block_types, block_out_channels=block_out_channels, layers_per_block=layers_per_block, act_fn=act_fn, norm_num_groups=norm_num_groups, norm_type=norm_type, mid_block_add_attention=mid_block_add_attention, ) @apply_forward_hook def encode(self, x: torch.Tensor, return_dict: bool = True) -> VQEncoderOutput: h = self.encoder(x) h = self.quant_conv(h) if not return_dict: return (h,) return VQEncoderOutput(latents=h) @apply_forward_hook def decode( self, h: torch.Tensor, force_not_quantize: bool = False, return_dict: bool = True, shape=None ) -> Union[DecoderOutput, torch.Tensor]: # also go through quantization layer if not force_not_quantize: quant, commit_loss, _ = self.quantize(h) elif self.config.lookup_from_codebook: quant = self.quantize.get_codebook_entry(h, shape) commit_loss = torch.zeros((h.shape[0])).to(h.device, dtype=h.dtype) else: quant = h commit_loss = torch.zeros((h.shape[0])).to(h.device, dtype=h.dtype) quant2 = self.post_quant_conv(quant) dec = self.decoder(quant2, quant if self.config.norm_type == "spatial" else None) if not return_dict: return dec, commit_loss return DecoderOutput(sample=dec, commit_loss=commit_loss) def forward( self, sample: torch.Tensor, return_dict: bool = True ) -> Union[DecoderOutput, Tuple[torch.Tensor, ...]]: r""" The [`VQModel`] forward method. Args: sample (`torch.Tensor`): Input sample. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`models.autoencoders.vq_model.VQEncoderOutput`] instead of a plain tuple. Returns: [`~models.autoencoders.vq_model.VQEncoderOutput`] or `tuple`: If return_dict is True, a [`~models.autoencoders.vq_model.VQEncoderOutput`] is returned, otherwise a plain `tuple` is returned. """ h = self.encode(sample).latents dec = self.decode(h) if not return_dict: return dec.sample, dec.commit_loss return dec
diffusers/src/diffusers/models/autoencoders/vq_model.py/0
{ "file_path": "diffusers/src/diffusers/models/autoencoders/vq_model.py", "repo_id": "diffusers", "token_count": 3260 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Optional, Tuple import torch import torch.nn as nn import torch.nn.functional as F from ..utils import deprecate from .normalization import RMSNorm from .upsampling import upfirdn2d_native class Downsample1D(nn.Module): """A 1D downsampling layer with an optional convolution. Parameters: channels (`int`): number of channels in the inputs and outputs. use_conv (`bool`, default `False`): option to use a convolution. out_channels (`int`, optional): number of output channels. Defaults to `channels`. padding (`int`, default `1`): padding for the convolution. name (`str`, default `conv`): name of the downsampling 1D layer. """ def __init__( self, channels: int, use_conv: bool = False, out_channels: Optional[int] = None, padding: int = 1, name: str = "conv", ): super().__init__() self.channels = channels self.out_channels = out_channels or channels self.use_conv = use_conv self.padding = padding stride = 2 self.name = name if use_conv: self.conv = nn.Conv1d(self.channels, self.out_channels, 3, stride=stride, padding=padding) else: assert self.channels == self.out_channels self.conv = nn.AvgPool1d(kernel_size=stride, stride=stride) def forward(self, inputs: torch.Tensor) -> torch.Tensor: assert inputs.shape[1] == self.channels return self.conv(inputs) class Downsample2D(nn.Module): """A 2D downsampling layer with an optional convolution. Parameters: channels (`int`): number of channels in the inputs and outputs. use_conv (`bool`, default `False`): option to use a convolution. out_channels (`int`, optional): number of output channels. Defaults to `channels`. padding (`int`, default `1`): padding for the convolution. name (`str`, default `conv`): name of the downsampling 2D layer. 
""" def __init__( self, channels: int, use_conv: bool = False, out_channels: Optional[int] = None, padding: int = 1, name: str = "conv", kernel_size=3, norm_type=None, eps=None, elementwise_affine=None, bias=True, ): super().__init__() self.channels = channels self.out_channels = out_channels or channels self.use_conv = use_conv self.padding = padding stride = 2 self.name = name if norm_type == "ln_norm": self.norm = nn.LayerNorm(channels, eps, elementwise_affine) elif norm_type == "rms_norm": self.norm = RMSNorm(channels, eps, elementwise_affine) elif norm_type is None: self.norm = None else: raise ValueError(f"unknown norm_type: {norm_type}") if use_conv: conv = nn.Conv2d( self.channels, self.out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias ) else: assert self.channels == self.out_channels conv = nn.AvgPool2d(kernel_size=stride, stride=stride) # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed if name == "conv": self.Conv2d_0 = conv self.conv = conv elif name == "Conv2d_0": self.conv = conv else: self.conv = conv def forward(self, hidden_states: torch.Tensor, *args, **kwargs) -> torch.Tensor: if len(args) > 0 or kwargs.get("scale", None) is not None: deprecation_message = "The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. `scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`." deprecate("scale", "1.0.0", deprecation_message) assert hidden_states.shape[1] == self.channels if self.norm is not None: hidden_states = self.norm(hidden_states.permute(0, 2, 3, 1)).permute(0, 3, 1, 2) if self.use_conv and self.padding == 0: pad = (0, 1, 0, 1) hidden_states = F.pad(hidden_states, pad, mode="constant", value=0) assert hidden_states.shape[1] == self.channels hidden_states = self.conv(hidden_states) return hidden_states class FirDownsample2D(nn.Module): """A 2D FIR downsampling layer with an optional convolution. Parameters: channels (`int`): number of channels in the inputs and outputs. use_conv (`bool`, default `False`): option to use a convolution. out_channels (`int`, optional): number of output channels. Defaults to `channels`. fir_kernel (`tuple`, default `(1, 3, 3, 1)`): kernel for the FIR filter. """ def __init__( self, channels: Optional[int] = None, out_channels: Optional[int] = None, use_conv: bool = False, fir_kernel: Tuple[int, int, int, int] = (1, 3, 3, 1), ): super().__init__() out_channels = out_channels if out_channels else channels if use_conv: self.Conv2d_0 = nn.Conv2d(channels, out_channels, kernel_size=3, stride=1, padding=1) self.fir_kernel = fir_kernel self.use_conv = use_conv self.out_channels = out_channels def _downsample_2d( self, hidden_states: torch.Tensor, weight: Optional[torch.Tensor] = None, kernel: Optional[torch.Tensor] = None, factor: int = 2, gain: float = 1, ) -> torch.Tensor: """Fused `Conv2d()` followed by `downsample_2d()`. Padding is performed only once at the beginning, not between the operations. The fused op is considerably more efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of arbitrary order. Args: hidden_states (`torch.Tensor`): Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`. weight (`torch.Tensor`, *optional*): Weight tensor of the shape `[filterH, filterW, inChannels, outChannels]`. Grouped convolution can be performed by `inChannels = x.shape[0] // numGroups`. 
kernel (`torch.Tensor`, *optional*): FIR filter of the shape `[firH, firW]` or `[firN]` (separable). The default is `[1] * factor`, which corresponds to average pooling. factor (`int`, *optional*, default to `2`): Integer downsampling factor. gain (`float`, *optional*, default to `1.0`): Scaling factor for signal magnitude. Returns: output (`torch.Tensor`): Tensor of the shape `[N, C, H // factor, W // factor]` or `[N, H // factor, W // factor, C]`, and same datatype as `x`. """ assert isinstance(factor, int) and factor >= 1 if kernel is None: kernel = [1] * factor # setup kernel kernel = torch.tensor(kernel, dtype=torch.float32) if kernel.ndim == 1: kernel = torch.outer(kernel, kernel) kernel /= torch.sum(kernel) kernel = kernel * gain if self.use_conv: _, _, convH, convW = weight.shape pad_value = (kernel.shape[0] - factor) + (convW - 1) stride_value = [factor, factor] upfirdn_input = upfirdn2d_native( hidden_states, torch.tensor(kernel, device=hidden_states.device), pad=((pad_value + 1) // 2, pad_value // 2), ) output = F.conv2d(upfirdn_input, weight, stride=stride_value, padding=0) else: pad_value = kernel.shape[0] - factor output = upfirdn2d_native( hidden_states, torch.tensor(kernel, device=hidden_states.device), down=factor, pad=((pad_value + 1) // 2, pad_value // 2), ) return output def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: if self.use_conv: downsample_input = self._downsample_2d(hidden_states, weight=self.Conv2d_0.weight, kernel=self.fir_kernel) hidden_states = downsample_input + self.Conv2d_0.bias.reshape(1, -1, 1, 1) else: hidden_states = self._downsample_2d(hidden_states, kernel=self.fir_kernel, factor=2) return hidden_states # downsample/upsample layer used in k-upscaler, might be able to use FirDownsample2D/DirUpsample2D instead class KDownsample2D(nn.Module): r"""A 2D K-downsampling layer. Parameters: pad_mode (`str`, *optional*, default to `"reflect"`): the padding mode to use. """ def __init__(self, pad_mode: str = "reflect"): super().__init__() self.pad_mode = pad_mode kernel_1d = torch.tensor([[1 / 8, 3 / 8, 3 / 8, 1 / 8]]) self.pad = kernel_1d.shape[1] // 2 - 1 self.register_buffer("kernel", kernel_1d.T @ kernel_1d, persistent=False) def forward(self, inputs: torch.Tensor) -> torch.Tensor: inputs = F.pad(inputs, (self.pad,) * 4, self.pad_mode) weight = inputs.new_zeros( [ inputs.shape[1], inputs.shape[1], self.kernel.shape[0], self.kernel.shape[1], ] ) indices = torch.arange(inputs.shape[1], device=inputs.device) kernel = self.kernel.to(weight)[None, :].expand(inputs.shape[1], -1, -1) weight[indices, indices] = kernel return F.conv2d(inputs, weight, stride=2) class CogVideoXDownsample3D(nn.Module): # Todo: Wait for paper relase. r""" A 3D Downsampling layer using in [CogVideoX]() by Tsinghua University & ZhipuAI Args: in_channels (`int`): Number of channels in the input image. out_channels (`int`): Number of channels produced by the convolution. kernel_size (`int`, defaults to `3`): Size of the convolving kernel. stride (`int`, defaults to `2`): Stride of the convolution. padding (`int`, defaults to `0`): Padding added to all four sides of the input. compress_time (`bool`, defaults to `False`): Whether or not to compress the time dimension. 
""" def __init__( self, in_channels: int, out_channels: int, kernel_size: int = 3, stride: int = 2, padding: int = 0, compress_time: bool = False, ): super().__init__() self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding) self.compress_time = compress_time def forward(self, x: torch.Tensor) -> torch.Tensor: if self.compress_time: batch_size, channels, frames, height, width = x.shape # (batch_size, channels, frames, height, width) -> (batch_size, height, width, channels, frames) -> (batch_size * height * width, channels, frames) x = x.permute(0, 3, 4, 1, 2).reshape(batch_size * height * width, channels, frames) if x.shape[-1] % 2 == 1: x_first, x_rest = x[..., 0], x[..., 1:] if x_rest.shape[-1] > 0: # (batch_size * height * width, channels, frames - 1) -> (batch_size * height * width, channels, (frames - 1) // 2) x_rest = F.avg_pool1d(x_rest, kernel_size=2, stride=2) x = torch.cat([x_first[..., None], x_rest], dim=-1) # (batch_size * height * width, channels, (frames // 2) + 1) -> (batch_size, height, width, channels, (frames // 2) + 1) -> (batch_size, channels, (frames // 2) + 1, height, width) x = x.reshape(batch_size, height, width, channels, x.shape[-1]).permute(0, 3, 4, 1, 2) else: # (batch_size * height * width, channels, frames) -> (batch_size * height * width, channels, frames // 2) x = F.avg_pool1d(x, kernel_size=2, stride=2) # (batch_size * height * width, channels, frames // 2) -> (batch_size, height, width, channels, frames // 2) -> (batch_size, channels, frames // 2, height, width) x = x.reshape(batch_size, height, width, channels, x.shape[-1]).permute(0, 3, 4, 1, 2) # Pad the tensor pad = (0, 1, 0, 1) x = F.pad(x, pad, mode="constant", value=0) batch_size, channels, frames, height, width = x.shape # (batch_size, channels, frames, height, width) -> (batch_size, frames, channels, height, width) -> (batch_size * frames, channels, height, width) x = x.permute(0, 2, 1, 3, 4).reshape(batch_size * frames, channels, height, width) x = self.conv(x) # (batch_size * frames, channels, height, width) -> (batch_size, frames, channels, height, width) -> (batch_size, channels, frames, height, width) x = x.reshape(batch_size, frames, x.shape[1], x.shape[2], x.shape[3]).permute(0, 2, 1, 3, 4) return x def downsample_2d( hidden_states: torch.Tensor, kernel: Optional[torch.Tensor] = None, factor: int = 2, gain: float = 1, ) -> torch.Tensor: r"""Downsample2D a batch of 2D images with the given filter. Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]` and downsamples each image with the given filter. The filter is normalized so that if the input pixels are constant, they will be scaled by the specified `gain`. Pixels outside the image are assumed to be zero, and the filter is padded with zeros so that its shape is a multiple of the downsampling factor. Args: hidden_states (`torch.Tensor`) Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`. kernel (`torch.Tensor`, *optional*): FIR filter of the shape `[firH, firW]` or `[firN]` (separable). The default is `[1] * factor`, which corresponds to average pooling. factor (`int`, *optional*, default to `2`): Integer downsampling factor. gain (`float`, *optional*, default to `1.0`): Scaling factor for signal magnitude. 
Returns: output (`torch.Tensor`): Tensor of the shape `[N, C, H // factor, W // factor]` """ assert isinstance(factor, int) and factor >= 1 if kernel is None: kernel = [1] * factor kernel = torch.tensor(kernel, dtype=torch.float32) if kernel.ndim == 1: kernel = torch.outer(kernel, kernel) kernel /= torch.sum(kernel) kernel = kernel * gain pad_value = kernel.shape[0] - factor output = upfirdn2d_native( hidden_states, kernel.to(device=hidden_states.device), down=factor, pad=((pad_value + 1) // 2, pad_value // 2), ) return output
diffusers/src/diffusers/models/downsampling.py/0
{ "file_path": "diffusers/src/diffusers/models/downsampling.py", "repo_id": "diffusers", "token_count": 7040 }
# Copyright 2024 ConsisID Authors and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from typing import Any, Dict, List, Optional, Tuple, Union import torch from torch import nn from ...configuration_utils import ConfigMixin, register_to_config from ...loaders import PeftAdapterMixin from ...utils import USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers from ...utils.torch_utils import maybe_allow_in_graph from ..attention import Attention, FeedForward from ..attention_processor import AttentionProcessor, CogVideoXAttnProcessor2_0 from ..embeddings import CogVideoXPatchEmbed, TimestepEmbedding, Timesteps from ..modeling_outputs import Transformer2DModelOutput from ..modeling_utils import ModelMixin from ..normalization import AdaLayerNorm, CogVideoXLayerNormZero logger = logging.get_logger(__name__) # pylint: disable=invalid-name class PerceiverAttention(nn.Module): def __init__(self, dim: int, dim_head: int = 64, heads: int = 8, kv_dim: Optional[int] = None): super().__init__() self.scale = dim_head**-0.5 self.dim_head = dim_head self.heads = heads inner_dim = dim_head * heads self.norm1 = nn.LayerNorm(dim if kv_dim is None else kv_dim) self.norm2 = nn.LayerNorm(dim) self.to_q = nn.Linear(dim, inner_dim, bias=False) self.to_kv = nn.Linear(dim if kv_dim is None else kv_dim, inner_dim * 2, bias=False) self.to_out = nn.Linear(inner_dim, dim, bias=False) def forward(self, image_embeds: torch.Tensor, latents: torch.Tensor) -> torch.Tensor: # Apply normalization image_embeds = self.norm1(image_embeds) latents = self.norm2(latents) batch_size, seq_len, _ = latents.shape # Get batch size and sequence length # Compute query, key, and value matrices query = self.to_q(latents) kv_input = torch.cat((image_embeds, latents), dim=-2) key, value = self.to_kv(kv_input).chunk(2, dim=-1) # Reshape the tensors for multi-head attention query = query.reshape(query.size(0), -1, self.heads, self.dim_head).transpose(1, 2) key = key.reshape(key.size(0), -1, self.heads, self.dim_head).transpose(1, 2) value = value.reshape(value.size(0), -1, self.heads, self.dim_head).transpose(1, 2) # attention scale = 1 / math.sqrt(math.sqrt(self.dim_head)) weight = (query * scale) @ (key * scale).transpose(-2, -1) # More stable with f16 than dividing afterwards weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype) output = weight @ value # Reshape and return the final output output = output.permute(0, 2, 1, 3).reshape(batch_size, seq_len, -1) return self.to_out(output) class LocalFacialExtractor(nn.Module): def __init__( self, id_dim: int = 1280, vit_dim: int = 1024, depth: int = 10, dim_head: int = 64, heads: int = 16, num_id_token: int = 5, num_queries: int = 32, output_dim: int = 2048, ff_mult: int = 4, num_scale: int = 5, ): super().__init__() # Storing identity token and query information self.num_id_token = num_id_token self.vit_dim = vit_dim self.num_queries = num_queries assert depth % num_scale == 0 self.depth = depth // num_scale self.num_scale = 
num_scale scale = vit_dim**-0.5 # Learnable latent query embeddings self.latents = nn.Parameter(torch.randn(1, num_queries, vit_dim) * scale) # Projection layer to map the latent output to the desired dimension self.proj_out = nn.Parameter(scale * torch.randn(vit_dim, output_dim)) # Attention and ConsisIDFeedForward layer stack self.layers = nn.ModuleList([]) for _ in range(depth): self.layers.append( nn.ModuleList( [ PerceiverAttention(dim=vit_dim, dim_head=dim_head, heads=heads), # Perceiver Attention layer nn.Sequential( nn.LayerNorm(vit_dim), nn.Linear(vit_dim, vit_dim * ff_mult, bias=False), nn.GELU(), nn.Linear(vit_dim * ff_mult, vit_dim, bias=False), ), # ConsisIDFeedForward layer ] ) ) # Mappings for each of the 5 different ViT features for i in range(num_scale): setattr( self, f"mapping_{i}", nn.Sequential( nn.Linear(vit_dim, vit_dim), nn.LayerNorm(vit_dim), nn.LeakyReLU(), nn.Linear(vit_dim, vit_dim), nn.LayerNorm(vit_dim), nn.LeakyReLU(), nn.Linear(vit_dim, vit_dim), ), ) # Mapping for identity embedding vectors self.id_embedding_mapping = nn.Sequential( nn.Linear(id_dim, vit_dim), nn.LayerNorm(vit_dim), nn.LeakyReLU(), nn.Linear(vit_dim, vit_dim), nn.LayerNorm(vit_dim), nn.LeakyReLU(), nn.Linear(vit_dim, vit_dim * num_id_token), ) def forward(self, id_embeds: torch.Tensor, vit_hidden_states: List[torch.Tensor]) -> torch.Tensor: # Repeat latent queries for the batch size latents = self.latents.repeat(id_embeds.size(0), 1, 1) # Map the identity embedding to tokens id_embeds = self.id_embedding_mapping(id_embeds) id_embeds = id_embeds.reshape(-1, self.num_id_token, self.vit_dim) # Concatenate identity tokens with the latent queries latents = torch.cat((latents, id_embeds), dim=1) # Process each of the num_scale visual feature inputs for i in range(self.num_scale): vit_feature = getattr(self, f"mapping_{i}")(vit_hidden_states[i]) ctx_feature = torch.cat((id_embeds, vit_feature), dim=1) # Pass through the PerceiverAttention and ConsisIDFeedForward layers for attn, ff in self.layers[i * self.depth : (i + 1) * self.depth]: latents = attn(ctx_feature, latents) + latents latents = ff(latents) + latents # Retain only the query latents latents = latents[:, : self.num_queries] # Project the latents to the output dimension latents = latents @ self.proj_out return latents class PerceiverCrossAttention(nn.Module): def __init__(self, dim: int = 3072, dim_head: int = 128, heads: int = 16, kv_dim: int = 2048): super().__init__() self.scale = dim_head**-0.5 self.dim_head = dim_head self.heads = heads inner_dim = dim_head * heads # Layer normalization to stabilize training self.norm1 = nn.LayerNorm(dim if kv_dim is None else kv_dim) self.norm2 = nn.LayerNorm(dim) # Linear transformations to produce queries, keys, and values self.to_q = nn.Linear(dim, inner_dim, bias=False) self.to_kv = nn.Linear(dim if kv_dim is None else kv_dim, inner_dim * 2, bias=False) self.to_out = nn.Linear(inner_dim, dim, bias=False) def forward(self, image_embeds: torch.Tensor, hidden_states: torch.Tensor) -> torch.Tensor: # Apply layer normalization to the input image and latent features image_embeds = self.norm1(image_embeds) hidden_states = self.norm2(hidden_states) batch_size, seq_len, _ = hidden_states.shape # Compute queries, keys, and values query = self.to_q(hidden_states) key, value = self.to_kv(image_embeds).chunk(2, dim=-1) # Reshape tensors to split into attention heads query = query.reshape(query.size(0), -1, self.heads, self.dim_head).transpose(1, 2) key = key.reshape(key.size(0), -1, self.heads, 
self.dim_head).transpose(1, 2) value = value.reshape(value.size(0), -1, self.heads, self.dim_head).transpose(1, 2) # Compute attention weights scale = 1 / math.sqrt(math.sqrt(self.dim_head)) weight = (query * scale) @ (key * scale).transpose(-2, -1) # More stable scaling than post-division weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype) # Compute the output via weighted combination of values out = weight @ value # Reshape and permute to prepare for final linear transformation out = out.permute(0, 2, 1, 3).reshape(batch_size, seq_len, -1) return self.to_out(out) @maybe_allow_in_graph class ConsisIDBlock(nn.Module): r""" Transformer block used in [ConsisID](https://github.com/PKU-YuanGroup/ConsisID) model. Parameters: dim (`int`): The number of channels in the input and output. num_attention_heads (`int`): The number of heads to use for multi-head attention. attention_head_dim (`int`): The number of channels in each head. time_embed_dim (`int`): The number of channels in timestep embedding. dropout (`float`, defaults to `0.0`): The dropout probability to use. activation_fn (`str`, defaults to `"gelu-approximate"`): Activation function to be used in feed-forward. attention_bias (`bool`, defaults to `False`): Whether or not to use bias in attention projection layers. qk_norm (`bool`, defaults to `True`): Whether or not to use normalization after query and key projections in Attention. norm_elementwise_affine (`bool`, defaults to `True`): Whether to use learnable elementwise affine parameters for normalization. norm_eps (`float`, defaults to `1e-5`): Epsilon value for normalization layers. final_dropout (`bool` defaults to `False`): Whether to apply a final dropout after the last feed-forward layer. ff_inner_dim (`int`, *optional*, defaults to `None`): Custom hidden dimension of Feed-forward layer. If not provided, `4 * dim` is used. ff_bias (`bool`, defaults to `True`): Whether or not to use bias in Feed-forward layer. attention_out_bias (`bool`, defaults to `True`): Whether or not to use bias in Attention output projection layer. """ def __init__( self, dim: int, num_attention_heads: int, attention_head_dim: int, time_embed_dim: int, dropout: float = 0.0, activation_fn: str = "gelu-approximate", attention_bias: bool = False, qk_norm: bool = True, norm_elementwise_affine: bool = True, norm_eps: float = 1e-5, final_dropout: bool = True, ff_inner_dim: Optional[int] = None, ff_bias: bool = True, attention_out_bias: bool = True, ): super().__init__() # 1. Self Attention self.norm1 = CogVideoXLayerNormZero(time_embed_dim, dim, norm_elementwise_affine, norm_eps, bias=True) self.attn1 = Attention( query_dim=dim, dim_head=attention_head_dim, heads=num_attention_heads, qk_norm="layer_norm" if qk_norm else None, eps=1e-6, bias=attention_bias, out_bias=attention_out_bias, processor=CogVideoXAttnProcessor2_0(), ) # 2. 
Feed Forward self.norm2 = CogVideoXLayerNormZero(time_embed_dim, dim, norm_elementwise_affine, norm_eps, bias=True) self.ff = FeedForward( dim, dropout=dropout, activation_fn=activation_fn, final_dropout=final_dropout, inner_dim=ff_inner_dim, bias=ff_bias, ) def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, temb: torch.Tensor, image_rotary_emb: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, ) -> torch.Tensor: text_seq_length = encoder_hidden_states.size(1) # norm & modulate norm_hidden_states, norm_encoder_hidden_states, gate_msa, enc_gate_msa = self.norm1( hidden_states, encoder_hidden_states, temb ) # attention attn_hidden_states, attn_encoder_hidden_states = self.attn1( hidden_states=norm_hidden_states, encoder_hidden_states=norm_encoder_hidden_states, image_rotary_emb=image_rotary_emb, ) hidden_states = hidden_states + gate_msa * attn_hidden_states encoder_hidden_states = encoder_hidden_states + enc_gate_msa * attn_encoder_hidden_states # norm & modulate norm_hidden_states, norm_encoder_hidden_states, gate_ff, enc_gate_ff = self.norm2( hidden_states, encoder_hidden_states, temb ) # feed-forward norm_hidden_states = torch.cat([norm_encoder_hidden_states, norm_hidden_states], dim=1) ff_output = self.ff(norm_hidden_states) hidden_states = hidden_states + gate_ff * ff_output[:, text_seq_length:] encoder_hidden_states = encoder_hidden_states + enc_gate_ff * ff_output[:, :text_seq_length] return hidden_states, encoder_hidden_states class ConsisIDTransformer3DModel(ModelMixin, ConfigMixin, PeftAdapterMixin): """ A Transformer model for video-like data in [ConsisID](https://github.com/PKU-YuanGroup/ConsisID). Parameters: num_attention_heads (`int`, defaults to `30`): The number of heads to use for multi-head attention. attention_head_dim (`int`, defaults to `64`): The number of channels in each head. in_channels (`int`, defaults to `16`): The number of channels in the input. out_channels (`int`, *optional*, defaults to `16`): The number of channels in the output. flip_sin_to_cos (`bool`, defaults to `True`): Whether to flip the sin to cos in the time embedding. time_embed_dim (`int`, defaults to `512`): Output dimension of timestep embeddings. text_embed_dim (`int`, defaults to `4096`): Input dimension of text embeddings from the text encoder. num_layers (`int`, defaults to `30`): The number of layers of Transformer blocks to use. dropout (`float`, defaults to `0.0`): The dropout probability to use. attention_bias (`bool`, defaults to `True`): Whether to use bias in the attention projection layers. sample_width (`int`, defaults to `90`): The width of the input latents. sample_height (`int`, defaults to `60`): The height of the input latents. sample_frames (`int`, defaults to `49`): The number of frames in the input latents. Note that this parameter was incorrectly initialized to 49 instead of 13 because ConsisID processed 13 latent frames at once in its default and recommended settings, but cannot be changed to the correct value to ensure backwards compatibility. To create a transformer with K latent frames, the correct value to pass here would be: ((K - 1) * temporal_compression_ratio + 1). patch_size (`int`, defaults to `2`): The size of the patches to use in the patch embedding layer. temporal_compression_ratio (`int`, defaults to `4`): The compression ratio across the temporal dimension. See documentation for `sample_frames`. max_text_seq_length (`int`, defaults to `226`): The maximum sequence length of the input text embeddings. 
activation_fn (`str`, defaults to `"gelu-approximate"`): Activation function to use in feed-forward. timestep_activation_fn (`str`, defaults to `"silu"`): Activation function to use when generating the timestep embeddings. norm_elementwise_affine (`bool`, defaults to `True`): Whether to use elementwise affine in normalization layers. norm_eps (`float`, defaults to `1e-5`): The epsilon value to use in normalization layers. spatial_interpolation_scale (`float`, defaults to `1.875`): Scaling factor to apply in 3D positional embeddings across spatial dimensions. temporal_interpolation_scale (`float`, defaults to `1.0`): Scaling factor to apply in 3D positional embeddings across temporal dimensions. is_train_face (`bool`, defaults to `False`): Whether to use enable the identity-preserving module during the training process. When set to `True`, the model will focus on identity-preserving tasks. is_kps (`bool`, defaults to `False`): Whether to enable keypoint for global facial extractor. If `True`, keypoints will be in the model. cross_attn_interval (`int`, defaults to `2`): The interval between cross-attention layers in the Transformer architecture. A larger value may reduce the frequency of cross-attention computations, which can help reduce computational overhead. cross_attn_dim_head (`int`, optional, defaults to `128`): The dimensionality of each attention head in the cross-attention layers of the Transformer architecture. A larger value increases the capacity to attend to more complex patterns, but also increases memory and computation costs. cross_attn_num_heads (`int`, optional, defaults to `16`): The number of attention heads in the cross-attention layers. More heads allow for more parallel attention mechanisms, capturing diverse relationships between different components of the input, but can also increase computational requirements. LFE_id_dim (`int`, optional, defaults to `1280`): The dimensionality of the identity vector used in the Local Facial Extractor (LFE). This vector represents the identity features of a face, which are important for tasks like face recognition and identity preservation across different frames. LFE_vit_dim (`int`, optional, defaults to `1024`): The dimension of the vision transformer (ViT) output used in the Local Facial Extractor (LFE). This value dictates the size of the transformer-generated feature vectors that will be processed for facial feature extraction. LFE_depth (`int`, optional, defaults to `10`): The number of layers in the Local Facial Extractor (LFE). Increasing the depth allows the model to capture more complex representations of facial features, but also increases the computational load. LFE_dim_head (`int`, optional, defaults to `64`): The dimensionality of each attention head in the Local Facial Extractor (LFE). This parameter affects how finely the model can process and focus on different parts of the facial features during the extraction process. LFE_num_heads (`int`, optional, defaults to `16`): The number of attention heads in the Local Facial Extractor (LFE). More heads can improve the model's ability to capture diverse facial features, but at the cost of increased computational complexity. LFE_num_id_token (`int`, optional, defaults to `5`): The number of identity tokens used in the Local Facial Extractor (LFE). This defines how many identity-related tokens the model will process to ensure face identity preservation during feature extraction. 
LFE_num_querie (`int`, optional, defaults to `32`): The number of query tokens used in the Local Facial Extractor (LFE). These tokens are used to capture high-frequency face-related information that aids in accurate facial feature extraction. LFE_output_dim (`int`, optional, defaults to `2048`): The output dimension of the Local Facial Extractor (LFE). This dimension determines the size of the feature vectors produced by the LFE module, which will be used for subsequent tasks such as face recognition or tracking. LFE_ff_mult (`int`, optional, defaults to `4`): The multiplication factor applied to the feed-forward network's hidden layer size in the Local Facial Extractor (LFE). A higher value increases the model's capacity to learn more complex facial feature transformations, but also increases the computation and memory requirements. LFE_num_scale (`int`, optional, defaults to `5`): The number of different scales visual feature. A higher value increases the model's capacity to learn more complex facial feature transformations, but also increases the computation and memory requirements. local_face_scale (`float`, defaults to `1.0`): A scaling factor used to adjust the importance of local facial features in the model. This can influence how strongly the model focuses on high frequency face-related content. """ _supports_gradient_checkpointing = True @register_to_config def __init__( self, num_attention_heads: int = 30, attention_head_dim: int = 64, in_channels: int = 16, out_channels: Optional[int] = 16, flip_sin_to_cos: bool = True, freq_shift: int = 0, time_embed_dim: int = 512, text_embed_dim: int = 4096, num_layers: int = 30, dropout: float = 0.0, attention_bias: bool = True, sample_width: int = 90, sample_height: int = 60, sample_frames: int = 49, patch_size: int = 2, temporal_compression_ratio: int = 4, max_text_seq_length: int = 226, activation_fn: str = "gelu-approximate", timestep_activation_fn: str = "silu", norm_elementwise_affine: bool = True, norm_eps: float = 1e-5, spatial_interpolation_scale: float = 1.875, temporal_interpolation_scale: float = 1.0, use_rotary_positional_embeddings: bool = False, use_learned_positional_embeddings: bool = False, is_train_face: bool = False, is_kps: bool = False, cross_attn_interval: int = 2, cross_attn_dim_head: int = 128, cross_attn_num_heads: int = 16, LFE_id_dim: int = 1280, LFE_vit_dim: int = 1024, LFE_depth: int = 10, LFE_dim_head: int = 64, LFE_num_heads: int = 16, LFE_num_id_token: int = 5, LFE_num_querie: int = 32, LFE_output_dim: int = 2048, LFE_ff_mult: int = 4, LFE_num_scale: int = 5, local_face_scale: float = 1.0, ): super().__init__() inner_dim = num_attention_heads * attention_head_dim if not use_rotary_positional_embeddings and use_learned_positional_embeddings: raise ValueError( "There are no ConsisID checkpoints available with disable rotary embeddings and learned positional " "embeddings. If you're using a custom model and/or believe this should be supported, please open an " "issue at https://github.com/huggingface/diffusers/issues." ) # 1. 
Patch embedding self.patch_embed = CogVideoXPatchEmbed( patch_size=patch_size, in_channels=in_channels, embed_dim=inner_dim, text_embed_dim=text_embed_dim, bias=True, sample_width=sample_width, sample_height=sample_height, sample_frames=sample_frames, temporal_compression_ratio=temporal_compression_ratio, max_text_seq_length=max_text_seq_length, spatial_interpolation_scale=spatial_interpolation_scale, temporal_interpolation_scale=temporal_interpolation_scale, use_positional_embeddings=not use_rotary_positional_embeddings, use_learned_positional_embeddings=use_learned_positional_embeddings, ) self.embedding_dropout = nn.Dropout(dropout) # 2. Time embeddings self.time_proj = Timesteps(inner_dim, flip_sin_to_cos, freq_shift) self.time_embedding = TimestepEmbedding(inner_dim, time_embed_dim, timestep_activation_fn) # 3. Define spatio-temporal transformers blocks self.transformer_blocks = nn.ModuleList( [ ConsisIDBlock( dim=inner_dim, num_attention_heads=num_attention_heads, attention_head_dim=attention_head_dim, time_embed_dim=time_embed_dim, dropout=dropout, activation_fn=activation_fn, attention_bias=attention_bias, norm_elementwise_affine=norm_elementwise_affine, norm_eps=norm_eps, ) for _ in range(num_layers) ] ) self.norm_final = nn.LayerNorm(inner_dim, norm_eps, norm_elementwise_affine) # 4. Output blocks self.norm_out = AdaLayerNorm( embedding_dim=time_embed_dim, output_dim=2 * inner_dim, norm_elementwise_affine=norm_elementwise_affine, norm_eps=norm_eps, chunk_dim=1, ) self.proj_out = nn.Linear(inner_dim, patch_size * patch_size * out_channels) self.is_train_face = is_train_face self.is_kps = is_kps # 5. Define identity-preserving config if is_train_face: # LFE configs self.LFE_id_dim = LFE_id_dim self.LFE_vit_dim = LFE_vit_dim self.LFE_depth = LFE_depth self.LFE_dim_head = LFE_dim_head self.LFE_num_heads = LFE_num_heads self.LFE_num_id_token = LFE_num_id_token self.LFE_num_querie = LFE_num_querie self.LFE_output_dim = LFE_output_dim self.LFE_ff_mult = LFE_ff_mult self.LFE_num_scale = LFE_num_scale # cross configs self.inner_dim = inner_dim self.cross_attn_interval = cross_attn_interval self.num_cross_attn = num_layers // cross_attn_interval self.cross_attn_dim_head = cross_attn_dim_head self.cross_attn_num_heads = cross_attn_num_heads self.cross_attn_kv_dim = int(self.inner_dim / 3 * 2) self.local_face_scale = local_face_scale # face modules self._init_face_inputs() self.gradient_checkpointing = False def _init_face_inputs(self): self.local_facial_extractor = LocalFacialExtractor( id_dim=self.LFE_id_dim, vit_dim=self.LFE_vit_dim, depth=self.LFE_depth, dim_head=self.LFE_dim_head, heads=self.LFE_num_heads, num_id_token=self.LFE_num_id_token, num_queries=self.LFE_num_querie, output_dim=self.LFE_output_dim, ff_mult=self.LFE_ff_mult, num_scale=self.LFE_num_scale, ) self.perceiver_cross_attention = nn.ModuleList( [ PerceiverCrossAttention( dim=self.inner_dim, dim_head=self.cross_attn_dim_head, heads=self.cross_attn_num_heads, kv_dim=self.cross_attn_kv_dim, ) for _ in range(self.num_cross_attn) ] ) @property # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.attn_processors def attn_processors(self) -> Dict[str, AttentionProcessor]: r""" Returns: `dict` of attention processors: A dictionary containing all attention processors used in the model with indexed by its weight name. 
""" # set recursively processors = {} def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): if hasattr(module, "get_processor"): processors[f"{name}.processor"] = module.get_processor() for sub_name, child in module.named_children(): fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) return processors for name, module in self.named_children(): fn_recursive_add_processors(name, module, processors) return processors # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_attn_processor def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): r""" Sets the attention processor to use to compute attention. Parameters: processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): The instantiated processor class or a dictionary of processor classes that will be set as the processor for **all** `Attention` layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. """ count = len(self.attn_processors.keys()) if isinstance(processor, dict) and len(processor) != count: raise ValueError( f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" f" number of attention layers: {count}. Please make sure to pass {count} processor classes." ) def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): if hasattr(module, "set_processor"): if not isinstance(processor, dict): module.set_processor(processor) else: module.set_processor(processor.pop(f"{name}.processor")) for sub_name, child in module.named_children(): fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) for name, module in self.named_children(): fn_recursive_attn_processor(name, module, processor) def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, timestep: Union[int, float, torch.LongTensor], timestep_cond: Optional[torch.Tensor] = None, image_rotary_emb: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, attention_kwargs: Optional[Dict[str, Any]] = None, id_cond: Optional[torch.Tensor] = None, id_vit_hidden: Optional[torch.Tensor] = None, return_dict: bool = True, ): if attention_kwargs is not None: attention_kwargs = attention_kwargs.copy() lora_scale = attention_kwargs.pop("scale", 1.0) else: lora_scale = 1.0 if USE_PEFT_BACKEND: # weight the lora layers by setting `lora_scale` for each PEFT layer scale_lora_layers(self, lora_scale) else: if attention_kwargs is not None and attention_kwargs.get("scale", None) is not None: logger.warning( "Passing `scale` via `attention_kwargs` when not using the PEFT backend is ineffective." ) # fuse clip and insightface valid_face_emb = None if self.is_train_face: id_cond = id_cond.to(device=hidden_states.device, dtype=hidden_states.dtype) id_vit_hidden = [ tensor.to(device=hidden_states.device, dtype=hidden_states.dtype) for tensor in id_vit_hidden ] valid_face_emb = self.local_facial_extractor( id_cond, id_vit_hidden ) # torch.Size([1, 1280]), list[5](torch.Size([1, 577, 1024])) -> torch.Size([1, 32, 2048]) batch_size, num_frames, channels, height, width = hidden_states.shape # 1. Time embedding timesteps = timestep t_emb = self.time_proj(timesteps) # timesteps does not contain any weights and will always return f32 tensors # but time_embedding might actually be running in fp16. so we need to cast here. 
# there might be better ways to encapsulate this. t_emb = t_emb.to(dtype=hidden_states.dtype) emb = self.time_embedding(t_emb, timestep_cond) # 2. Patch embedding # torch.Size([1, 226, 4096]) torch.Size([1, 13, 32, 60, 90]) hidden_states = self.patch_embed(encoder_hidden_states, hidden_states) # torch.Size([1, 17776, 3072]) hidden_states = self.embedding_dropout(hidden_states) # torch.Size([1, 17776, 3072]) text_seq_length = encoder_hidden_states.shape[1] encoder_hidden_states = hidden_states[:, :text_seq_length] # torch.Size([1, 226, 3072]) hidden_states = hidden_states[:, text_seq_length:] # torch.Size([1, 17550, 3072]) # 3. Transformer blocks ca_idx = 0 for i, block in enumerate(self.transformer_blocks): if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states, encoder_hidden_states = self._gradient_checkpointing_func( block, hidden_states, encoder_hidden_states, emb, image_rotary_emb, ) else: hidden_states, encoder_hidden_states = block( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, temb=emb, image_rotary_emb=image_rotary_emb, ) if self.is_train_face: if i % self.cross_attn_interval == 0 and valid_face_emb is not None: hidden_states = hidden_states + self.local_face_scale * self.perceiver_cross_attention[ca_idx]( valid_face_emb, hidden_states ) # torch.Size([2, 32, 2048]) torch.Size([2, 17550, 3072]) ca_idx += 1 hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1) hidden_states = self.norm_final(hidden_states) hidden_states = hidden_states[:, text_seq_length:] # 4. Final block hidden_states = self.norm_out(hidden_states, temb=emb) hidden_states = self.proj_out(hidden_states) # 5. Unpatchify # Note: we use `-1` instead of `channels`: # - It is okay to `channels` use for ConsisID (number of input channels is equal to output channels) p = self.config.patch_size output = hidden_states.reshape(batch_size, num_frames, height // p, width // p, -1, p, p) output = output.permute(0, 1, 4, 2, 5, 3, 6).flatten(5, 6).flatten(3, 4) if USE_PEFT_BACKEND: # remove `lora_scale` from each PEFT layer unscale_lora_layers(self, lora_scale) if not return_dict: return (output,) return Transformer2DModelOutput(sample=output)
diffusers/src/diffusers/models/transformers/consisid_transformer_3d.py/0
{ "file_path": "diffusers/src/diffusers/models/transformers/consisid_transformer_3d.py", "repo_id": "diffusers", "token_count": 15854 }
# Copyright 2024 The Genmo team and The HuggingFace Team. # All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from typing import Any, Dict, Optional, Tuple import torch import torch.nn as nn import torch.nn.functional as F from ...configuration_utils import ConfigMixin, register_to_config from ...loaders import FromOriginalModelMixin, PeftAdapterMixin from ...utils import USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers from ...utils.torch_utils import maybe_allow_in_graph from ..attention import FeedForward from ..attention_processor import Attention from ..embeddings import PixArtAlphaTextProjection from ..modeling_outputs import Transformer2DModelOutput from ..modeling_utils import ModelMixin from ..normalization import AdaLayerNormSingle, RMSNorm logger = logging.get_logger(__name__) # pylint: disable=invalid-name class LTXVideoAttentionProcessor2_0: r""" Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is used in the LTX model. It applies a normalization layer and rotary embedding on the query and key vector. """ def __init__(self): if not hasattr(F, "scaled_dot_product_attention"): raise ImportError( "LTXVideoAttentionProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0." 
) def __call__( self, attn: Attention, hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, image_rotary_emb: Optional[torch.Tensor] = None, ) -> torch.Tensor: batch_size, sequence_length, _ = ( hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape ) if attention_mask is not None: attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1]) if encoder_hidden_states is None: encoder_hidden_states = hidden_states query = attn.to_q(hidden_states) key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) query = attn.norm_q(query) key = attn.norm_k(key) if image_rotary_emb is not None: query = apply_rotary_emb(query, image_rotary_emb) key = apply_rotary_emb(key, image_rotary_emb) query = query.unflatten(2, (attn.heads, -1)).transpose(1, 2) key = key.unflatten(2, (attn.heads, -1)).transpose(1, 2) value = value.unflatten(2, (attn.heads, -1)).transpose(1, 2) hidden_states = F.scaled_dot_product_attention( query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False ) hidden_states = hidden_states.transpose(1, 2).flatten(2, 3) hidden_states = hidden_states.to(query.dtype) hidden_states = attn.to_out[0](hidden_states) hidden_states = attn.to_out[1](hidden_states) return hidden_states class LTXVideoRotaryPosEmbed(nn.Module): def __init__( self, dim: int, base_num_frames: int = 20, base_height: int = 2048, base_width: int = 2048, patch_size: int = 1, patch_size_t: int = 1, theta: float = 10000.0, ) -> None: super().__init__() self.dim = dim self.base_num_frames = base_num_frames self.base_height = base_height self.base_width = base_width self.patch_size = patch_size self.patch_size_t = patch_size_t self.theta = theta def forward( self, hidden_states: torch.Tensor, num_frames: int, height: int, width: int, rope_interpolation_scale: Optional[Tuple[torch.Tensor, float, float]] = None, ) -> Tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.size(0) # Always compute rope in fp32 grid_h = torch.arange(height, dtype=torch.float32, device=hidden_states.device) grid_w = torch.arange(width, dtype=torch.float32, device=hidden_states.device) grid_f = torch.arange(num_frames, dtype=torch.float32, device=hidden_states.device) grid = torch.meshgrid(grid_f, grid_h, grid_w, indexing="ij") grid = torch.stack(grid, dim=0) grid = grid.unsqueeze(0).repeat(batch_size, 1, 1, 1, 1) if rope_interpolation_scale is not None: grid[:, 0:1] = grid[:, 0:1] * rope_interpolation_scale[0] * self.patch_size_t / self.base_num_frames grid[:, 1:2] = grid[:, 1:2] * rope_interpolation_scale[1] * self.patch_size / self.base_height grid[:, 2:3] = grid[:, 2:3] * rope_interpolation_scale[2] * self.patch_size / self.base_width grid = grid.flatten(2, 4).transpose(1, 2) start = 1.0 end = self.theta freqs = self.theta ** torch.linspace( math.log(start, self.theta), math.log(end, self.theta), self.dim // 6, device=hidden_states.device, dtype=torch.float32, ) freqs = freqs * math.pi / 2.0 freqs = freqs * (grid.unsqueeze(-1) * 2 - 1) freqs = freqs.transpose(-1, -2).flatten(2) cos_freqs = freqs.cos().repeat_interleave(2, dim=-1) sin_freqs = freqs.sin().repeat_interleave(2, dim=-1) if self.dim % 6 != 0: cos_padding = torch.ones_like(cos_freqs[:, :, : self.dim % 6]) sin_padding = torch.zeros_like(cos_freqs[:, :, : self.dim % 6]) cos_freqs = torch.cat([cos_padding, cos_freqs], 
dim=-1) sin_freqs = torch.cat([sin_padding, sin_freqs], dim=-1) return cos_freqs, sin_freqs @maybe_allow_in_graph class LTXVideoTransformerBlock(nn.Module): r""" Transformer block used in [LTX](https://huggingface.co/Lightricks/LTX-Video). Args: dim (`int`): The number of channels in the input and output. num_attention_heads (`int`): The number of heads to use for multi-head attention. attention_head_dim (`int`): The number of channels in each head. qk_norm (`str`, defaults to `"rms_norm"`): The normalization layer to use. activation_fn (`str`, defaults to `"gelu-approximate"`): Activation function to use in feed-forward. eps (`float`, defaults to `1e-6`): Epsilon value for normalization layers. """ def __init__( self, dim: int, num_attention_heads: int, attention_head_dim: int, cross_attention_dim: int, qk_norm: str = "rms_norm_across_heads", activation_fn: str = "gelu-approximate", attention_bias: bool = True, attention_out_bias: bool = True, eps: float = 1e-6, elementwise_affine: bool = False, ): super().__init__() self.norm1 = RMSNorm(dim, eps=eps, elementwise_affine=elementwise_affine) self.attn1 = Attention( query_dim=dim, heads=num_attention_heads, kv_heads=num_attention_heads, dim_head=attention_head_dim, bias=attention_bias, cross_attention_dim=None, out_bias=attention_out_bias, qk_norm=qk_norm, processor=LTXVideoAttentionProcessor2_0(), ) self.norm2 = RMSNorm(dim, eps=eps, elementwise_affine=elementwise_affine) self.attn2 = Attention( query_dim=dim, cross_attention_dim=cross_attention_dim, heads=num_attention_heads, kv_heads=num_attention_heads, dim_head=attention_head_dim, bias=attention_bias, out_bias=attention_out_bias, qk_norm=qk_norm, processor=LTXVideoAttentionProcessor2_0(), ) self.ff = FeedForward(dim, activation_fn=activation_fn) self.scale_shift_table = nn.Parameter(torch.randn(6, dim) / dim**0.5) def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, temb: torch.Tensor, image_rotary_emb: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, encoder_attention_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: batch_size = hidden_states.size(0) norm_hidden_states = self.norm1(hidden_states) num_ada_params = self.scale_shift_table.shape[0] ada_values = self.scale_shift_table[None, None] + temb.reshape(batch_size, temb.size(1), num_ada_params, -1) shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = ada_values.unbind(dim=2) norm_hidden_states = norm_hidden_states * (1 + scale_msa) + shift_msa attn_hidden_states = self.attn1( hidden_states=norm_hidden_states, encoder_hidden_states=None, image_rotary_emb=image_rotary_emb, ) hidden_states = hidden_states + attn_hidden_states * gate_msa attn_hidden_states = self.attn2( hidden_states, encoder_hidden_states=encoder_hidden_states, image_rotary_emb=None, attention_mask=encoder_attention_mask, ) hidden_states = hidden_states + attn_hidden_states norm_hidden_states = self.norm2(hidden_states) * (1 + scale_mlp) + shift_mlp ff_output = self.ff(norm_hidden_states) hidden_states = hidden_states + ff_output * gate_mlp return hidden_states @maybe_allow_in_graph class LTXVideoTransformer3DModel(ModelMixin, ConfigMixin, FromOriginalModelMixin, PeftAdapterMixin): r""" A Transformer model for video-like data used in [LTX](https://huggingface.co/Lightricks/LTX-Video). Args: in_channels (`int`, defaults to `128`): The number of channels in the input. out_channels (`int`, defaults to `128`): The number of channels in the output. 
patch_size (`int`, defaults to `1`): The size of the spatial patches to use in the patch embedding layer. patch_size_t (`int`, defaults to `1`): The size of the tmeporal patches to use in the patch embedding layer. num_attention_heads (`int`, defaults to `32`): The number of heads to use for multi-head attention. attention_head_dim (`int`, defaults to `64`): The number of channels in each head. cross_attention_dim (`int`, defaults to `2048 `): The number of channels for cross attention heads. num_layers (`int`, defaults to `28`): The number of layers of Transformer blocks to use. activation_fn (`str`, defaults to `"gelu-approximate"`): Activation function to use in feed-forward. qk_norm (`str`, defaults to `"rms_norm_across_heads"`): The normalization layer to use. """ _supports_gradient_checkpointing = True _skip_layerwise_casting_patterns = ["norm"] @register_to_config def __init__( self, in_channels: int = 128, out_channels: int = 128, patch_size: int = 1, patch_size_t: int = 1, num_attention_heads: int = 32, attention_head_dim: int = 64, cross_attention_dim: int = 2048, num_layers: int = 28, activation_fn: str = "gelu-approximate", qk_norm: str = "rms_norm_across_heads", norm_elementwise_affine: bool = False, norm_eps: float = 1e-6, caption_channels: int = 4096, attention_bias: bool = True, attention_out_bias: bool = True, ) -> None: super().__init__() out_channels = out_channels or in_channels inner_dim = num_attention_heads * attention_head_dim self.proj_in = nn.Linear(in_channels, inner_dim) self.scale_shift_table = nn.Parameter(torch.randn(2, inner_dim) / inner_dim**0.5) self.time_embed = AdaLayerNormSingle(inner_dim, use_additional_conditions=False) self.caption_projection = PixArtAlphaTextProjection(in_features=caption_channels, hidden_size=inner_dim) self.rope = LTXVideoRotaryPosEmbed( dim=inner_dim, base_num_frames=20, base_height=2048, base_width=2048, patch_size=patch_size, patch_size_t=patch_size_t, theta=10000.0, ) self.transformer_blocks = nn.ModuleList( [ LTXVideoTransformerBlock( dim=inner_dim, num_attention_heads=num_attention_heads, attention_head_dim=attention_head_dim, cross_attention_dim=cross_attention_dim, qk_norm=qk_norm, activation_fn=activation_fn, attention_bias=attention_bias, attention_out_bias=attention_out_bias, eps=norm_eps, elementwise_affine=norm_elementwise_affine, ) for _ in range(num_layers) ] ) self.norm_out = nn.LayerNorm(inner_dim, eps=1e-6, elementwise_affine=False) self.proj_out = nn.Linear(inner_dim, out_channels) self.gradient_checkpointing = False def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, timestep: torch.LongTensor, encoder_attention_mask: torch.Tensor, num_frames: int, height: int, width: int, rope_interpolation_scale: Optional[Tuple[float, float, float]] = None, attention_kwargs: Optional[Dict[str, Any]] = None, return_dict: bool = True, ) -> torch.Tensor: if attention_kwargs is not None: attention_kwargs = attention_kwargs.copy() lora_scale = attention_kwargs.pop("scale", 1.0) else: lora_scale = 1.0 if USE_PEFT_BACKEND: # weight the lora layers by setting `lora_scale` for each PEFT layer scale_lora_layers(self, lora_scale) else: if attention_kwargs is not None and attention_kwargs.get("scale", None) is not None: logger.warning( "Passing `scale` via `attention_kwargs` when not using the PEFT backend is ineffective." 
) image_rotary_emb = self.rope(hidden_states, num_frames, height, width, rope_interpolation_scale) # convert encoder_attention_mask to a bias the same way we do for attention_mask if encoder_attention_mask is not None and encoder_attention_mask.ndim == 2: encoder_attention_mask = (1 - encoder_attention_mask.to(hidden_states.dtype)) * -10000.0 encoder_attention_mask = encoder_attention_mask.unsqueeze(1) batch_size = hidden_states.size(0) hidden_states = self.proj_in(hidden_states) temb, embedded_timestep = self.time_embed( timestep.flatten(), batch_size=batch_size, hidden_dtype=hidden_states.dtype, ) temb = temb.view(batch_size, -1, temb.size(-1)) embedded_timestep = embedded_timestep.view(batch_size, -1, embedded_timestep.size(-1)) encoder_hidden_states = self.caption_projection(encoder_hidden_states) encoder_hidden_states = encoder_hidden_states.view(batch_size, -1, hidden_states.size(-1)) for block in self.transformer_blocks: if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states = self._gradient_checkpointing_func( block, hidden_states, encoder_hidden_states, temb, image_rotary_emb, encoder_attention_mask, ) else: hidden_states = block( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, temb=temb, image_rotary_emb=image_rotary_emb, encoder_attention_mask=encoder_attention_mask, ) scale_shift_values = self.scale_shift_table[None, None] + embedded_timestep[:, :, None] shift, scale = scale_shift_values[:, :, 0], scale_shift_values[:, :, 1] hidden_states = self.norm_out(hidden_states) hidden_states = hidden_states * (1 + scale) + shift output = self.proj_out(hidden_states) if USE_PEFT_BACKEND: # remove `lora_scale` from each PEFT layer unscale_lora_layers(self, lora_scale) if not return_dict: return (output,) return Transformer2DModelOutput(sample=output) def apply_rotary_emb(x, freqs): cos, sin = freqs x_real, x_imag = x.unflatten(2, (-1, 2)).unbind(-1) # [B, S, H, D // 2] x_rotated = torch.stack([-x_imag, x_real], dim=-1).flatten(2) out = (x.float() * cos + x_rotated.float() * sin).to(x.dtype) return out
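# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the upstream file): a minimal, hypothetical
# demo of how `LTXVideoRotaryPosEmbed` and `apply_rotary_emb` fit together.
# The batch/frame/height/width values and the `tokens` tensor below are made
# up for illustration only; the block reuses the imports and classes defined
# earlier in this module. In the real model the tokens come from patchified
# video latents and the rotation is applied to the attention queries and keys.
if __name__ == "__main__":
    batch, frames, height, width = 1, 2, 4, 4
    dim = 32 * 64  # num_attention_heads * attention_head_dim (model defaults)
    seq_len = frames * height * width  # one token per (frame, row, column) position

    # Packed video tokens with the same (batch, sequence, channels) layout the
    # transformer blocks see.
    tokens = torch.randn(batch, seq_len, dim)

    rope = LTXVideoRotaryPosEmbed(dim=dim, base_num_frames=20, base_height=2048, base_width=2048)
    cos_freqs, sin_freqs = rope(tokens, num_frames=frames, height=height, width=width)

    # Each consecutive pair of channels is rotated by a position-dependent
    # angle, so the 3D (frame, row, column) location is encoded without
    # changing the token norms.
    rotated = apply_rotary_emb(tokens, (cos_freqs, sin_freqs))
    print(rotated.shape)  # torch.Size([1, 32, 2048])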
diffusers/src/diffusers/models/transformers/transformer_ltx.py/0
{ "file_path": "diffusers/src/diffusers/models/transformers/transformer_ltx.py", "repo_id": "diffusers", "token_count": 8198 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Any, Dict, Optional, Tuple, Union import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint from ...configuration_utils import ConfigMixin, FrozenDict, register_to_config from ...loaders import FromOriginalModelMixin, PeftAdapterMixin, UNet2DConditionLoadersMixin from ...utils import BaseOutput, deprecate, logging from ...utils.torch_utils import apply_freeu from ..attention import BasicTransformerBlock from ..attention_processor import ( ADDED_KV_ATTENTION_PROCESSORS, CROSS_ATTENTION_PROCESSORS, Attention, AttentionProcessor, AttnAddedKVProcessor, AttnProcessor, AttnProcessor2_0, FusedAttnProcessor2_0, IPAdapterAttnProcessor, IPAdapterAttnProcessor2_0, ) from ..embeddings import TimestepEmbedding, Timesteps from ..modeling_utils import ModelMixin from ..resnet import Downsample2D, ResnetBlock2D, Upsample2D from ..transformers.dual_transformer_2d import DualTransformer2DModel from ..transformers.transformer_2d import Transformer2DModel from .unet_2d_blocks import UNetMidBlock2DCrossAttn from .unet_2d_condition import UNet2DConditionModel logger = logging.get_logger(__name__) # pylint: disable=invalid-name @dataclass class UNetMotionOutput(BaseOutput): """ The output of [`UNetMotionOutput`]. Args: sample (`torch.Tensor` of shape `(batch_size, num_channels, num_frames, height, width)`): The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model. """ sample: torch.Tensor class AnimateDiffTransformer3D(nn.Module): """ A Transformer model for video-like data. Parameters: num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. in_channels (`int`, *optional*): The number of channels in the input and output (specify if the input is **continuous**). num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. cross_attention_dim (`int`, *optional*): The number of `encoder_hidden_states` dimensions to use. attention_bias (`bool`, *optional*): Configure if the `TransformerBlock` attention should contain a bias parameter. sample_size (`int`, *optional*): The width of the latent images (specify if the input is **discrete**). This is fixed during training since it is used to learn a number of position embeddings. activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to use in feed-forward. See `diffusers.models.activations.get_activation` for supported activation functions. norm_elementwise_affine (`bool`, *optional*): Configure if the `TransformerBlock` should use learnable elementwise affine parameters for normalization. 
double_self_attention (`bool`, *optional*): Configure if each `TransformerBlock` should contain two self-attention layers. positional_embeddings: (`str`, *optional*): The type of positional embeddings to apply to the sequence input before passing use. num_positional_embeddings: (`int`, *optional*): The maximum length of the sequence over which to apply positional embeddings. """ def __init__( self, num_attention_heads: int = 16, attention_head_dim: int = 88, in_channels: Optional[int] = None, out_channels: Optional[int] = None, num_layers: int = 1, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: Optional[int] = None, attention_bias: bool = False, sample_size: Optional[int] = None, activation_fn: str = "geglu", norm_elementwise_affine: bool = True, double_self_attention: bool = True, positional_embeddings: Optional[str] = None, num_positional_embeddings: Optional[int] = None, ): super().__init__() self.num_attention_heads = num_attention_heads self.attention_head_dim = attention_head_dim inner_dim = num_attention_heads * attention_head_dim self.in_channels = in_channels self.norm = nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True) self.proj_in = nn.Linear(in_channels, inner_dim) # 3. Define transformers blocks self.transformer_blocks = nn.ModuleList( [ BasicTransformerBlock( inner_dim, num_attention_heads, attention_head_dim, dropout=dropout, cross_attention_dim=cross_attention_dim, activation_fn=activation_fn, attention_bias=attention_bias, double_self_attention=double_self_attention, norm_elementwise_affine=norm_elementwise_affine, positional_embeddings=positional_embeddings, num_positional_embeddings=num_positional_embeddings, ) for _ in range(num_layers) ] ) self.proj_out = nn.Linear(inner_dim, in_channels) def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.LongTensor] = None, timestep: Optional[torch.LongTensor] = None, class_labels: Optional[torch.LongTensor] = None, num_frames: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, ) -> torch.Tensor: """ The [`AnimateDiffTransformer3D`] forward method. Args: hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.Tensor` of shape `(batch size, channel, height, width)` if continuous): Input hidden_states. encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*): Conditional embeddings for cross attention layer. If not given, cross-attention defaults to self-attention. timestep ( `torch.LongTensor`, *optional*): Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`. class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*): Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in `AdaLayerZeroNorm`. num_frames (`int`, *optional*, defaults to 1): The number of frames to be processed per batch. This is used to reshape the hidden states. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). Returns: torch.Tensor: The output tensor. """ # 1. 
Input batch_frames, channel, height, width = hidden_states.shape batch_size = batch_frames // num_frames residual = hidden_states hidden_states = hidden_states[None, :].reshape(batch_size, num_frames, channel, height, width) hidden_states = hidden_states.permute(0, 2, 1, 3, 4) hidden_states = self.norm(hidden_states) hidden_states = hidden_states.permute(0, 3, 4, 2, 1).reshape(batch_size * height * width, num_frames, channel) hidden_states = self.proj_in(input=hidden_states) # 2. Blocks for block in self.transformer_blocks: hidden_states = block( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, timestep=timestep, cross_attention_kwargs=cross_attention_kwargs, class_labels=class_labels, ) # 3. Output hidden_states = self.proj_out(input=hidden_states) hidden_states = ( hidden_states[None, None, :] .reshape(batch_size, height, width, num_frames, channel) .permute(0, 3, 4, 1, 2) .contiguous() ) hidden_states = hidden_states.reshape(batch_frames, channel, height, width) output = hidden_states + residual return output class DownBlockMotion(nn.Module): def __init__( self, in_channels: int, out_channels: int, temb_channels: int, dropout: float = 0.0, num_layers: int = 1, resnet_eps: float = 1e-6, resnet_time_scale_shift: str = "default", resnet_act_fn: str = "swish", resnet_groups: int = 32, resnet_pre_norm: bool = True, output_scale_factor: float = 1.0, add_downsample: bool = True, downsample_padding: int = 1, temporal_num_attention_heads: Union[int, Tuple[int]] = 1, temporal_cross_attention_dim: Optional[int] = None, temporal_max_seq_length: int = 32, temporal_transformer_layers_per_block: Union[int, Tuple[int]] = 1, temporal_double_self_attention: bool = True, ): super().__init__() resnets = [] motion_modules = [] # support for variable transformer layers per temporal block if isinstance(temporal_transformer_layers_per_block, int): temporal_transformer_layers_per_block = (temporal_transformer_layers_per_block,) * num_layers elif len(temporal_transformer_layers_per_block) != num_layers: raise ValueError( f"`temporal_transformer_layers_per_block` must be an integer or a tuple of integers of length {num_layers}" ) # support for variable number of attention head per temporal layers if isinstance(temporal_num_attention_heads, int): temporal_num_attention_heads = (temporal_num_attention_heads,) * num_layers elif len(temporal_num_attention_heads) != num_layers: raise ValueError( f"`temporal_num_attention_heads` must be an integer or a tuple of integers of length {num_layers}" ) for i in range(num_layers): in_channels = in_channels if i == 0 else out_channels resnets.append( ResnetBlock2D( in_channels=in_channels, out_channels=out_channels, temb_channels=temb_channels, eps=resnet_eps, groups=resnet_groups, dropout=dropout, time_embedding_norm=resnet_time_scale_shift, non_linearity=resnet_act_fn, output_scale_factor=output_scale_factor, pre_norm=resnet_pre_norm, ) ) motion_modules.append( AnimateDiffTransformer3D( num_attention_heads=temporal_num_attention_heads[i], in_channels=out_channels, num_layers=temporal_transformer_layers_per_block[i], norm_num_groups=resnet_groups, cross_attention_dim=temporal_cross_attention_dim, attention_bias=False, activation_fn="geglu", positional_embeddings="sinusoidal", num_positional_embeddings=temporal_max_seq_length, attention_head_dim=out_channels // temporal_num_attention_heads[i], double_self_attention=temporal_double_self_attention, ) ) self.resnets = nn.ModuleList(resnets) self.motion_modules = nn.ModuleList(motion_modules) if 
add_downsample: self.downsamplers = nn.ModuleList( [ Downsample2D( out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op", ) ] ) else: self.downsamplers = None self.gradient_checkpointing = False def forward( self, hidden_states: torch.Tensor, temb: Optional[torch.Tensor] = None, num_frames: int = 1, *args, **kwargs, ) -> Union[torch.Tensor, Tuple[torch.Tensor, ...]]: if len(args) > 0 or kwargs.get("scale", None) is not None: deprecation_message = "The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. `scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`." deprecate("scale", "1.0.0", deprecation_message) output_states = () blocks = zip(self.resnets, self.motion_modules) for resnet, motion_module in blocks: if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states = self._gradient_checkpointing_func(resnet, hidden_states, temb) else: hidden_states = resnet(input_tensor=hidden_states, temb=temb) hidden_states = motion_module(hidden_states, num_frames=num_frames) output_states = output_states + (hidden_states,) if self.downsamplers is not None: for downsampler in self.downsamplers: hidden_states = downsampler(hidden_states=hidden_states) output_states = output_states + (hidden_states,) return hidden_states, output_states class CrossAttnDownBlockMotion(nn.Module): def __init__( self, in_channels: int, out_channels: int, temb_channels: int, dropout: float = 0.0, num_layers: int = 1, transformer_layers_per_block: Union[int, Tuple[int]] = 1, resnet_eps: float = 1e-6, resnet_time_scale_shift: str = "default", resnet_act_fn: str = "swish", resnet_groups: int = 32, resnet_pre_norm: bool = True, num_attention_heads: int = 1, cross_attention_dim: int = 1280, output_scale_factor: float = 1.0, downsample_padding: int = 1, add_downsample: bool = True, dual_cross_attention: bool = False, use_linear_projection: bool = False, only_cross_attention: bool = False, upcast_attention: bool = False, attention_type: str = "default", temporal_cross_attention_dim: Optional[int] = None, temporal_num_attention_heads: int = 8, temporal_max_seq_length: int = 32, temporal_transformer_layers_per_block: Union[int, Tuple[int]] = 1, temporal_double_self_attention: bool = True, ): super().__init__() resnets = [] attentions = [] motion_modules = [] self.has_cross_attention = True self.num_attention_heads = num_attention_heads # support for variable transformer layers per block if isinstance(transformer_layers_per_block, int): transformer_layers_per_block = (transformer_layers_per_block,) * num_layers elif len(transformer_layers_per_block) != num_layers: raise ValueError( f"transformer_layers_per_block must be an integer or a list of integers of length {num_layers}" ) # support for variable transformer layers per temporal block if isinstance(temporal_transformer_layers_per_block, int): temporal_transformer_layers_per_block = (temporal_transformer_layers_per_block,) * num_layers elif len(temporal_transformer_layers_per_block) != num_layers: raise ValueError( f"temporal_transformer_layers_per_block must be an integer or a list of integers of length {num_layers}" ) for i in range(num_layers): in_channels = in_channels if i == 0 else out_channels resnets.append( ResnetBlock2D( in_channels=in_channels, out_channels=out_channels, temb_channels=temb_channels, eps=resnet_eps, groups=resnet_groups, dropout=dropout, 
time_embedding_norm=resnet_time_scale_shift, non_linearity=resnet_act_fn, output_scale_factor=output_scale_factor, pre_norm=resnet_pre_norm, ) ) if not dual_cross_attention: attentions.append( Transformer2DModel( num_attention_heads, out_channels // num_attention_heads, in_channels=out_channels, num_layers=transformer_layers_per_block[i], cross_attention_dim=cross_attention_dim, norm_num_groups=resnet_groups, use_linear_projection=use_linear_projection, only_cross_attention=only_cross_attention, upcast_attention=upcast_attention, attention_type=attention_type, ) ) else: attentions.append( DualTransformer2DModel( num_attention_heads, out_channels // num_attention_heads, in_channels=out_channels, num_layers=1, cross_attention_dim=cross_attention_dim, norm_num_groups=resnet_groups, ) ) motion_modules.append( AnimateDiffTransformer3D( num_attention_heads=temporal_num_attention_heads, in_channels=out_channels, num_layers=temporal_transformer_layers_per_block[i], norm_num_groups=resnet_groups, cross_attention_dim=temporal_cross_attention_dim, attention_bias=False, activation_fn="geglu", positional_embeddings="sinusoidal", num_positional_embeddings=temporal_max_seq_length, attention_head_dim=out_channels // temporal_num_attention_heads, double_self_attention=temporal_double_self_attention, ) ) self.attentions = nn.ModuleList(attentions) self.resnets = nn.ModuleList(resnets) self.motion_modules = nn.ModuleList(motion_modules) if add_downsample: self.downsamplers = nn.ModuleList( [ Downsample2D( out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op", ) ] ) else: self.downsamplers = None self.gradient_checkpointing = False def forward( self, hidden_states: torch.Tensor, temb: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, num_frames: int = 1, encoder_attention_mask: Optional[torch.Tensor] = None, cross_attention_kwargs: Optional[Dict[str, Any]] = None, additional_residuals: Optional[torch.Tensor] = None, ): if cross_attention_kwargs is not None: if cross_attention_kwargs.get("scale", None) is not None: logger.warning("Passing `scale` to `cross_attention_kwargs` is deprecated. 
`scale` will be ignored.") output_states = () blocks = list(zip(self.resnets, self.attentions, self.motion_modules)) for i, (resnet, attn, motion_module) in enumerate(blocks): if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states = self._gradient_checkpointing_func(resnet, hidden_states, temb) else: hidden_states = resnet(input_tensor=hidden_states, temb=temb) hidden_states = attn( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, cross_attention_kwargs=cross_attention_kwargs, attention_mask=attention_mask, encoder_attention_mask=encoder_attention_mask, return_dict=False, )[0] hidden_states = motion_module(hidden_states, num_frames=num_frames) # apply additional residuals to the output of the last pair of resnet and attention blocks if i == len(blocks) - 1 and additional_residuals is not None: hidden_states = hidden_states + additional_residuals output_states = output_states + (hidden_states,) if self.downsamplers is not None: for downsampler in self.downsamplers: hidden_states = downsampler(hidden_states=hidden_states) output_states = output_states + (hidden_states,) return hidden_states, output_states class CrossAttnUpBlockMotion(nn.Module): def __init__( self, in_channels: int, out_channels: int, prev_output_channel: int, temb_channels: int, resolution_idx: Optional[int] = None, dropout: float = 0.0, num_layers: int = 1, transformer_layers_per_block: Union[int, Tuple[int]] = 1, resnet_eps: float = 1e-6, resnet_time_scale_shift: str = "default", resnet_act_fn: str = "swish", resnet_groups: int = 32, resnet_pre_norm: bool = True, num_attention_heads: int = 1, cross_attention_dim: int = 1280, output_scale_factor: float = 1.0, add_upsample: bool = True, dual_cross_attention: bool = False, use_linear_projection: bool = False, only_cross_attention: bool = False, upcast_attention: bool = False, attention_type: str = "default", temporal_cross_attention_dim: Optional[int] = None, temporal_num_attention_heads: int = 8, temporal_max_seq_length: int = 32, temporal_transformer_layers_per_block: Union[int, Tuple[int]] = 1, ): super().__init__() resnets = [] attentions = [] motion_modules = [] self.has_cross_attention = True self.num_attention_heads = num_attention_heads # support for variable transformer layers per block if isinstance(transformer_layers_per_block, int): transformer_layers_per_block = (transformer_layers_per_block,) * num_layers elif len(transformer_layers_per_block) != num_layers: raise ValueError( f"transformer_layers_per_block must be an integer or a list of integers of length {num_layers}, got {len(transformer_layers_per_block)}" ) # support for variable transformer layers per temporal block if isinstance(temporal_transformer_layers_per_block, int): temporal_transformer_layers_per_block = (temporal_transformer_layers_per_block,) * num_layers elif len(temporal_transformer_layers_per_block) != num_layers: raise ValueError( f"temporal_transformer_layers_per_block must be an integer or a list of integers of length {num_layers}, got {len(temporal_transformer_layers_per_block)}" ) for i in range(num_layers): res_skip_channels = in_channels if (i == num_layers - 1) else out_channels resnet_in_channels = prev_output_channel if i == 0 else out_channels resnets.append( ResnetBlock2D( in_channels=resnet_in_channels + res_skip_channels, out_channels=out_channels, temb_channels=temb_channels, eps=resnet_eps, groups=resnet_groups, dropout=dropout, time_embedding_norm=resnet_time_scale_shift, non_linearity=resnet_act_fn, 
output_scale_factor=output_scale_factor, pre_norm=resnet_pre_norm, ) ) if not dual_cross_attention: attentions.append( Transformer2DModel( num_attention_heads, out_channels // num_attention_heads, in_channels=out_channels, num_layers=transformer_layers_per_block[i], cross_attention_dim=cross_attention_dim, norm_num_groups=resnet_groups, use_linear_projection=use_linear_projection, only_cross_attention=only_cross_attention, upcast_attention=upcast_attention, attention_type=attention_type, ) ) else: attentions.append( DualTransformer2DModel( num_attention_heads, out_channels // num_attention_heads, in_channels=out_channels, num_layers=1, cross_attention_dim=cross_attention_dim, norm_num_groups=resnet_groups, ) ) motion_modules.append( AnimateDiffTransformer3D( num_attention_heads=temporal_num_attention_heads, in_channels=out_channels, num_layers=temporal_transformer_layers_per_block[i], norm_num_groups=resnet_groups, cross_attention_dim=temporal_cross_attention_dim, attention_bias=False, activation_fn="geglu", positional_embeddings="sinusoidal", num_positional_embeddings=temporal_max_seq_length, attention_head_dim=out_channels // temporal_num_attention_heads, ) ) self.attentions = nn.ModuleList(attentions) self.resnets = nn.ModuleList(resnets) self.motion_modules = nn.ModuleList(motion_modules) if add_upsample: self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) else: self.upsamplers = None self.gradient_checkpointing = False self.resolution_idx = resolution_idx def forward( self, hidden_states: torch.Tensor, res_hidden_states_tuple: Tuple[torch.Tensor, ...], temb: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, cross_attention_kwargs: Optional[Dict[str, Any]] = None, upsample_size: Optional[int] = None, attention_mask: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, num_frames: int = 1, ) -> torch.Tensor: if cross_attention_kwargs is not None: if cross_attention_kwargs.get("scale", None) is not None: logger.warning("Passing `scale` to `cross_attention_kwargs` is deprecated. 
`scale` will be ignored.") is_freeu_enabled = ( getattr(self, "s1", None) and getattr(self, "s2", None) and getattr(self, "b1", None) and getattr(self, "b2", None) ) blocks = zip(self.resnets, self.attentions, self.motion_modules) for resnet, attn, motion_module in blocks: # pop res hidden states res_hidden_states = res_hidden_states_tuple[-1] res_hidden_states_tuple = res_hidden_states_tuple[:-1] # FreeU: Only operate on the first two stages if is_freeu_enabled: hidden_states, res_hidden_states = apply_freeu( self.resolution_idx, hidden_states, res_hidden_states, s1=self.s1, s2=self.s2, b1=self.b1, b2=self.b2, ) hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states = self._gradient_checkpointing_func(resnet, hidden_states, temb) else: hidden_states = resnet(input_tensor=hidden_states, temb=temb) hidden_states = attn( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, cross_attention_kwargs=cross_attention_kwargs, attention_mask=attention_mask, encoder_attention_mask=encoder_attention_mask, return_dict=False, )[0] hidden_states = motion_module(hidden_states, num_frames=num_frames) if self.upsamplers is not None: for upsampler in self.upsamplers: hidden_states = upsampler(hidden_states=hidden_states, output_size=upsample_size) return hidden_states class UpBlockMotion(nn.Module): def __init__( self, in_channels: int, prev_output_channel: int, out_channels: int, temb_channels: int, resolution_idx: Optional[int] = None, dropout: float = 0.0, num_layers: int = 1, resnet_eps: float = 1e-6, resnet_time_scale_shift: str = "default", resnet_act_fn: str = "swish", resnet_groups: int = 32, resnet_pre_norm: bool = True, output_scale_factor: float = 1.0, add_upsample: bool = True, temporal_cross_attention_dim: Optional[int] = None, temporal_num_attention_heads: int = 8, temporal_max_seq_length: int = 32, temporal_transformer_layers_per_block: Union[int, Tuple[int]] = 1, ): super().__init__() resnets = [] motion_modules = [] # support for variable transformer layers per temporal block if isinstance(temporal_transformer_layers_per_block, int): temporal_transformer_layers_per_block = (temporal_transformer_layers_per_block,) * num_layers elif len(temporal_transformer_layers_per_block) != num_layers: raise ValueError( f"temporal_transformer_layers_per_block must be an integer or a list of integers of length {num_layers}" ) for i in range(num_layers): res_skip_channels = in_channels if (i == num_layers - 1) else out_channels resnet_in_channels = prev_output_channel if i == 0 else out_channels resnets.append( ResnetBlock2D( in_channels=resnet_in_channels + res_skip_channels, out_channels=out_channels, temb_channels=temb_channels, eps=resnet_eps, groups=resnet_groups, dropout=dropout, time_embedding_norm=resnet_time_scale_shift, non_linearity=resnet_act_fn, output_scale_factor=output_scale_factor, pre_norm=resnet_pre_norm, ) ) motion_modules.append( AnimateDiffTransformer3D( num_attention_heads=temporal_num_attention_heads, in_channels=out_channels, num_layers=temporal_transformer_layers_per_block[i], norm_num_groups=resnet_groups, cross_attention_dim=temporal_cross_attention_dim, attention_bias=False, activation_fn="geglu", positional_embeddings="sinusoidal", num_positional_embeddings=temporal_max_seq_length, attention_head_dim=out_channels // temporal_num_attention_heads, ) ) self.resnets = nn.ModuleList(resnets) self.motion_modules = nn.ModuleList(motion_modules) if add_upsample: self.upsamplers = 
nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) else: self.upsamplers = None self.gradient_checkpointing = False self.resolution_idx = resolution_idx def forward( self, hidden_states: torch.Tensor, res_hidden_states_tuple: Tuple[torch.Tensor, ...], temb: Optional[torch.Tensor] = None, upsample_size=None, num_frames: int = 1, *args, **kwargs, ) -> torch.Tensor: if len(args) > 0 or kwargs.get("scale", None) is not None: deprecation_message = "The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. `scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`." deprecate("scale", "1.0.0", deprecation_message) is_freeu_enabled = ( getattr(self, "s1", None) and getattr(self, "s2", None) and getattr(self, "b1", None) and getattr(self, "b2", None) ) blocks = zip(self.resnets, self.motion_modules) for resnet, motion_module in blocks: # pop res hidden states res_hidden_states = res_hidden_states_tuple[-1] res_hidden_states_tuple = res_hidden_states_tuple[:-1] # FreeU: Only operate on the first two stages if is_freeu_enabled: hidden_states, res_hidden_states = apply_freeu( self.resolution_idx, hidden_states, res_hidden_states, s1=self.s1, s2=self.s2, b1=self.b1, b2=self.b2, ) hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states = self._gradient_checkpointing_func(resnet, hidden_states, temb) else: hidden_states = resnet(input_tensor=hidden_states, temb=temb) hidden_states = motion_module(hidden_states, num_frames=num_frames) if self.upsamplers is not None: for upsampler in self.upsamplers: hidden_states = upsampler(hidden_states=hidden_states, output_size=upsample_size) return hidden_states class UNetMidBlockCrossAttnMotion(nn.Module): def __init__( self, in_channels: int, temb_channels: int, dropout: float = 0.0, num_layers: int = 1, transformer_layers_per_block: Union[int, Tuple[int]] = 1, resnet_eps: float = 1e-6, resnet_time_scale_shift: str = "default", resnet_act_fn: str = "swish", resnet_groups: int = 32, resnet_pre_norm: bool = True, num_attention_heads: int = 1, output_scale_factor: float = 1.0, cross_attention_dim: int = 1280, dual_cross_attention: bool = False, use_linear_projection: bool = False, upcast_attention: bool = False, attention_type: str = "default", temporal_num_attention_heads: int = 1, temporal_cross_attention_dim: Optional[int] = None, temporal_max_seq_length: int = 32, temporal_transformer_layers_per_block: Union[int, Tuple[int]] = 1, ): super().__init__() self.has_cross_attention = True self.num_attention_heads = num_attention_heads resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) # support for variable transformer layers per block if isinstance(transformer_layers_per_block, int): transformer_layers_per_block = (transformer_layers_per_block,) * num_layers elif len(transformer_layers_per_block) != num_layers: raise ValueError( f"`transformer_layers_per_block` should be an integer or a list of integers of length {num_layers}." 
) # support for variable transformer layers per temporal block if isinstance(temporal_transformer_layers_per_block, int): temporal_transformer_layers_per_block = (temporal_transformer_layers_per_block,) * num_layers elif len(temporal_transformer_layers_per_block) != num_layers: raise ValueError( f"`temporal_transformer_layers_per_block` should be an integer or a list of integers of length {num_layers}." ) # there is always at least one resnet resnets = [ ResnetBlock2D( in_channels=in_channels, out_channels=in_channels, temb_channels=temb_channels, eps=resnet_eps, groups=resnet_groups, dropout=dropout, time_embedding_norm=resnet_time_scale_shift, non_linearity=resnet_act_fn, output_scale_factor=output_scale_factor, pre_norm=resnet_pre_norm, ) ] attentions = [] motion_modules = [] for i in range(num_layers): if not dual_cross_attention: attentions.append( Transformer2DModel( num_attention_heads, in_channels // num_attention_heads, in_channels=in_channels, num_layers=transformer_layers_per_block[i], cross_attention_dim=cross_attention_dim, norm_num_groups=resnet_groups, use_linear_projection=use_linear_projection, upcast_attention=upcast_attention, attention_type=attention_type, ) ) else: attentions.append( DualTransformer2DModel( num_attention_heads, in_channels // num_attention_heads, in_channels=in_channels, num_layers=1, cross_attention_dim=cross_attention_dim, norm_num_groups=resnet_groups, ) ) resnets.append( ResnetBlock2D( in_channels=in_channels, out_channels=in_channels, temb_channels=temb_channels, eps=resnet_eps, groups=resnet_groups, dropout=dropout, time_embedding_norm=resnet_time_scale_shift, non_linearity=resnet_act_fn, output_scale_factor=output_scale_factor, pre_norm=resnet_pre_norm, ) ) motion_modules.append( AnimateDiffTransformer3D( num_attention_heads=temporal_num_attention_heads, attention_head_dim=in_channels // temporal_num_attention_heads, in_channels=in_channels, num_layers=temporal_transformer_layers_per_block[i], norm_num_groups=resnet_groups, cross_attention_dim=temporal_cross_attention_dim, attention_bias=False, positional_embeddings="sinusoidal", num_positional_embeddings=temporal_max_seq_length, activation_fn="geglu", ) ) self.attentions = nn.ModuleList(attentions) self.resnets = nn.ModuleList(resnets) self.motion_modules = nn.ModuleList(motion_modules) self.gradient_checkpointing = False def forward( self, hidden_states: torch.Tensor, temb: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, cross_attention_kwargs: Optional[Dict[str, Any]] = None, encoder_attention_mask: Optional[torch.Tensor] = None, num_frames: int = 1, ) -> torch.Tensor: if cross_attention_kwargs is not None: if cross_attention_kwargs.get("scale", None) is not None: logger.warning("Passing `scale` to `cross_attention_kwargs` is deprecated. 
`scale` will be ignored.") hidden_states = self.resnets[0](input_tensor=hidden_states, temb=temb) blocks = zip(self.attentions, self.resnets[1:], self.motion_modules) for attn, resnet, motion_module in blocks: hidden_states = attn( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, cross_attention_kwargs=cross_attention_kwargs, attention_mask=attention_mask, encoder_attention_mask=encoder_attention_mask, return_dict=False, )[0] if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states = self._gradient_checkpointing_func( motion_module, hidden_states, None, None, None, num_frames, None ) hidden_states = self._gradient_checkpointing_func(resnet, hidden_states, temb) else: hidden_states = motion_module(hidden_states, None, None, None, num_frames, None) hidden_states = resnet(input_tensor=hidden_states, temb=temb) return hidden_states class MotionModules(nn.Module): def __init__( self, in_channels: int, layers_per_block: int = 2, transformer_layers_per_block: Union[int, Tuple[int]] = 8, num_attention_heads: Union[int, Tuple[int]] = 8, attention_bias: bool = False, cross_attention_dim: Optional[int] = None, activation_fn: str = "geglu", norm_num_groups: int = 32, max_seq_length: int = 32, ): super().__init__() self.motion_modules = nn.ModuleList([]) if isinstance(transformer_layers_per_block, int): transformer_layers_per_block = (transformer_layers_per_block,) * layers_per_block elif len(transformer_layers_per_block) != layers_per_block: raise ValueError( f"The number of transformer layers per block must match the number of layers per block, " f"got {layers_per_block} and {len(transformer_layers_per_block)}" ) for i in range(layers_per_block): self.motion_modules.append( AnimateDiffTransformer3D( in_channels=in_channels, num_layers=transformer_layers_per_block[i], norm_num_groups=norm_num_groups, cross_attention_dim=cross_attention_dim, activation_fn=activation_fn, attention_bias=attention_bias, num_attention_heads=num_attention_heads, attention_head_dim=in_channels // num_attention_heads, positional_embeddings="sinusoidal", num_positional_embeddings=max_seq_length, ) ) class MotionAdapter(ModelMixin, ConfigMixin, FromOriginalModelMixin): @register_to_config def __init__( self, block_out_channels: Tuple[int, ...] = (320, 640, 1280, 1280), motion_layers_per_block: Union[int, Tuple[int]] = 2, motion_transformer_layers_per_block: Union[int, Tuple[int], Tuple[Tuple[int]]] = 1, motion_mid_block_layers_per_block: int = 1, motion_transformer_layers_per_mid_block: Union[int, Tuple[int]] = 1, motion_num_attention_heads: Union[int, Tuple[int]] = 8, motion_norm_num_groups: int = 32, motion_max_seq_length: int = 32, use_motion_mid_block: bool = True, conv_in_channels: Optional[int] = None, ): """Container to store AnimateDiff Motion Modules Args: block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): The tuple of output channels for each UNet block. motion_layers_per_block (`int` or `Tuple[int]`, *optional*, defaults to 2): The number of motion layers per UNet block. motion_transformer_layers_per_block (`int`, `Tuple[int]`, or `Tuple[Tuple[int]]`, *optional*, defaults to 1): The number of transformer layers to use in each motion layer in each block. motion_mid_block_layers_per_block (`int`, *optional*, defaults to 1): The number of motion layers in the middle UNet block. 
motion_transformer_layers_per_mid_block (`int` or `Tuple[int]`, *optional*, defaults to 1): The number of transformer layers to use in each motion layer in the middle block. motion_num_attention_heads (`int` or `Tuple[int]`, *optional*, defaults to 8): The number of heads to use in each attention layer of the motion module. motion_norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use in each group normalization layer of the motion module. motion_max_seq_length (`int`, *optional*, defaults to 32): The maximum sequence length to use in the motion module. use_motion_mid_block (`bool`, *optional*, defaults to True): Whether to use a motion module in the middle of the UNet. """ super().__init__() down_blocks = [] up_blocks = [] if isinstance(motion_layers_per_block, int): motion_layers_per_block = (motion_layers_per_block,) * len(block_out_channels) elif len(motion_layers_per_block) != len(block_out_channels): raise ValueError( f"The number of motion layers per block must match the number of blocks, " f"got {len(block_out_channels)} and {len(motion_layers_per_block)}" ) if isinstance(motion_transformer_layers_per_block, int): motion_transformer_layers_per_block = (motion_transformer_layers_per_block,) * len(block_out_channels) if isinstance(motion_transformer_layers_per_mid_block, int): motion_transformer_layers_per_mid_block = ( motion_transformer_layers_per_mid_block, ) * motion_mid_block_layers_per_block elif len(motion_transformer_layers_per_mid_block) != motion_mid_block_layers_per_block: raise ValueError( f"The number of layers per mid block ({motion_mid_block_layers_per_block}) " f"must match the length of motion_transformer_layers_per_mid_block ({len(motion_transformer_layers_per_mid_block)})" ) if isinstance(motion_num_attention_heads, int): motion_num_attention_heads = (motion_num_attention_heads,) * len(block_out_channels) elif len(motion_num_attention_heads) != len(block_out_channels): raise ValueError( f"The length of the attention head number tuple in the motion module must match the " f"number of block, got {len(motion_num_attention_heads)} and {len(block_out_channels)}" ) if conv_in_channels: # input self.conv_in = nn.Conv2d(conv_in_channels, block_out_channels[0], kernel_size=3, padding=1) else: self.conv_in = None for i, channel in enumerate(block_out_channels): output_channel = block_out_channels[i] down_blocks.append( MotionModules( in_channels=output_channel, norm_num_groups=motion_norm_num_groups, cross_attention_dim=None, activation_fn="geglu", attention_bias=False, num_attention_heads=motion_num_attention_heads[i], max_seq_length=motion_max_seq_length, layers_per_block=motion_layers_per_block[i], transformer_layers_per_block=motion_transformer_layers_per_block[i], ) ) if use_motion_mid_block: self.mid_block = MotionModules( in_channels=block_out_channels[-1], norm_num_groups=motion_norm_num_groups, cross_attention_dim=None, activation_fn="geglu", attention_bias=False, num_attention_heads=motion_num_attention_heads[-1], max_seq_length=motion_max_seq_length, layers_per_block=motion_mid_block_layers_per_block, transformer_layers_per_block=motion_transformer_layers_per_mid_block, ) else: self.mid_block = None reversed_block_out_channels = list(reversed(block_out_channels)) output_channel = reversed_block_out_channels[0] reversed_motion_layers_per_block = list(reversed(motion_layers_per_block)) reversed_motion_transformer_layers_per_block = list(reversed(motion_transformer_layers_per_block)) reversed_motion_num_attention_heads = 
list(reversed(motion_num_attention_heads)) for i, channel in enumerate(reversed_block_out_channels): output_channel = reversed_block_out_channels[i] up_blocks.append( MotionModules( in_channels=output_channel, norm_num_groups=motion_norm_num_groups, cross_attention_dim=None, activation_fn="geglu", attention_bias=False, num_attention_heads=reversed_motion_num_attention_heads[i], max_seq_length=motion_max_seq_length, layers_per_block=reversed_motion_layers_per_block[i] + 1, transformer_layers_per_block=reversed_motion_transformer_layers_per_block[i], ) ) self.down_blocks = nn.ModuleList(down_blocks) self.up_blocks = nn.ModuleList(up_blocks) def forward(self, sample): pass class UNetMotionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin): r""" A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample shaped output. This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented for all models (such as downloading or saving). """ _supports_gradient_checkpointing = True _skip_layerwise_casting_patterns = ["norm"] @register_to_config def __init__( self, sample_size: Optional[int] = None, in_channels: int = 4, out_channels: int = 4, down_block_types: Tuple[str, ...] = ( "CrossAttnDownBlockMotion", "CrossAttnDownBlockMotion", "CrossAttnDownBlockMotion", "DownBlockMotion", ), up_block_types: Tuple[str, ...] = ( "UpBlockMotion", "CrossAttnUpBlockMotion", "CrossAttnUpBlockMotion", "CrossAttnUpBlockMotion", ), block_out_channels: Tuple[int, ...] = (320, 640, 1280, 1280), layers_per_block: Union[int, Tuple[int]] = 2, downsample_padding: int = 1, mid_block_scale_factor: float = 1, act_fn: str = "silu", norm_num_groups: int = 32, norm_eps: float = 1e-5, cross_attention_dim: int = 1280, transformer_layers_per_block: Union[int, Tuple[int], Tuple[Tuple]] = 1, reverse_transformer_layers_per_block: Optional[Union[int, Tuple[int], Tuple[Tuple]]] = None, temporal_transformer_layers_per_block: Union[int, Tuple[int], Tuple[Tuple]] = 1, reverse_temporal_transformer_layers_per_block: Optional[Union[int, Tuple[int], Tuple[Tuple]]] = None, transformer_layers_per_mid_block: Optional[Union[int, Tuple[int]]] = None, temporal_transformer_layers_per_mid_block: Optional[Union[int, Tuple[int]]] = 1, use_linear_projection: bool = False, num_attention_heads: Union[int, Tuple[int, ...]] = 8, motion_max_seq_length: int = 32, motion_num_attention_heads: Union[int, Tuple[int, ...]] = 8, reverse_motion_num_attention_heads: Optional[Union[int, Tuple[int, ...], Tuple[Tuple[int, ...], ...]]] = None, use_motion_mid_block: bool = True, mid_block_layers: int = 1, encoder_hid_dim: Optional[int] = None, encoder_hid_dim_type: Optional[str] = None, addition_embed_type: Optional[str] = None, addition_time_embed_dim: Optional[int] = None, projection_class_embeddings_input_dim: Optional[int] = None, time_cond_proj_dim: Optional[int] = None, ): super().__init__() self.sample_size = sample_size # Check inputs if len(down_block_types) != len(up_block_types): raise ValueError( f"Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`: {down_block_types}. `up_block_types`: {up_block_types}." ) if len(block_out_channels) != len(down_block_types): raise ValueError( f"Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`: {block_out_channels}. `down_block_types`: {down_block_types}." 
) if not isinstance(num_attention_heads, int) and len(num_attention_heads) != len(down_block_types): raise ValueError( f"Must provide the same number of `num_attention_heads` as `down_block_types`. `num_attention_heads`: {num_attention_heads}. `down_block_types`: {down_block_types}." ) if isinstance(cross_attention_dim, list) and len(cross_attention_dim) != len(down_block_types): raise ValueError( f"Must provide the same number of `cross_attention_dim` as `down_block_types`. `cross_attention_dim`: {cross_attention_dim}. `down_block_types`: {down_block_types}." ) if not isinstance(layers_per_block, int) and len(layers_per_block) != len(down_block_types): raise ValueError( f"Must provide the same number of `layers_per_block` as `down_block_types`. `layers_per_block`: {layers_per_block}. `down_block_types`: {down_block_types}." ) if isinstance(transformer_layers_per_block, list) and reverse_transformer_layers_per_block is None: for layer_number_per_block in transformer_layers_per_block: if isinstance(layer_number_per_block, list): raise ValueError("Must provide 'reverse_transformer_layers_per_block` if using asymmetrical UNet.") if ( isinstance(temporal_transformer_layers_per_block, list) and reverse_temporal_transformer_layers_per_block is None ): for layer_number_per_block in temporal_transformer_layers_per_block: if isinstance(layer_number_per_block, list): raise ValueError( "Must provide 'reverse_temporal_transformer_layers_per_block` if using asymmetrical motion module in UNet." ) # input conv_in_kernel = 3 conv_out_kernel = 3 conv_in_padding = (conv_in_kernel - 1) // 2 self.conv_in = nn.Conv2d( in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding ) # time time_embed_dim = block_out_channels[0] * 4 self.time_proj = Timesteps(block_out_channels[0], True, 0) timestep_input_dim = block_out_channels[0] self.time_embedding = TimestepEmbedding( timestep_input_dim, time_embed_dim, act_fn=act_fn, cond_proj_dim=time_cond_proj_dim ) if encoder_hid_dim_type is None: self.encoder_hid_proj = None if addition_embed_type == "text_time": self.add_time_proj = Timesteps(addition_time_embed_dim, True, 0) self.add_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim) # class embedding self.down_blocks = nn.ModuleList([]) self.up_blocks = nn.ModuleList([]) if isinstance(num_attention_heads, int): num_attention_heads = (num_attention_heads,) * len(down_block_types) if isinstance(cross_attention_dim, int): cross_attention_dim = (cross_attention_dim,) * len(down_block_types) if isinstance(layers_per_block, int): layers_per_block = [layers_per_block] * len(down_block_types) if isinstance(transformer_layers_per_block, int): transformer_layers_per_block = [transformer_layers_per_block] * len(down_block_types) if isinstance(reverse_transformer_layers_per_block, int): reverse_transformer_layers_per_block = [reverse_transformer_layers_per_block] * len(down_block_types) if isinstance(temporal_transformer_layers_per_block, int): temporal_transformer_layers_per_block = [temporal_transformer_layers_per_block] * len(down_block_types) if isinstance(reverse_temporal_transformer_layers_per_block, int): reverse_temporal_transformer_layers_per_block = [reverse_temporal_transformer_layers_per_block] * len( down_block_types ) if isinstance(motion_num_attention_heads, int): motion_num_attention_heads = (motion_num_attention_heads,) * len(down_block_types) # down output_channel = block_out_channels[0] for i, down_block_type in enumerate(down_block_types): 
input_channel = output_channel output_channel = block_out_channels[i] is_final_block = i == len(block_out_channels) - 1 if down_block_type == "CrossAttnDownBlockMotion": down_block = CrossAttnDownBlockMotion( in_channels=input_channel, out_channels=output_channel, temb_channels=time_embed_dim, num_layers=layers_per_block[i], transformer_layers_per_block=transformer_layers_per_block[i], resnet_eps=norm_eps, resnet_act_fn=act_fn, resnet_groups=norm_num_groups, num_attention_heads=num_attention_heads[i], cross_attention_dim=cross_attention_dim[i], downsample_padding=downsample_padding, add_downsample=not is_final_block, use_linear_projection=use_linear_projection, temporal_num_attention_heads=motion_num_attention_heads[i], temporal_max_seq_length=motion_max_seq_length, temporal_transformer_layers_per_block=temporal_transformer_layers_per_block[i], ) elif down_block_type == "DownBlockMotion": down_block = DownBlockMotion( in_channels=input_channel, out_channels=output_channel, temb_channels=time_embed_dim, num_layers=layers_per_block[i], resnet_eps=norm_eps, resnet_act_fn=act_fn, resnet_groups=norm_num_groups, add_downsample=not is_final_block, downsample_padding=downsample_padding, temporal_num_attention_heads=motion_num_attention_heads[i], temporal_max_seq_length=motion_max_seq_length, temporal_transformer_layers_per_block=temporal_transformer_layers_per_block[i], ) else: raise ValueError( "Invalid `down_block_type` encountered. Must be one of `CrossAttnDownBlockMotion` or `DownBlockMotion`" ) self.down_blocks.append(down_block) # mid if transformer_layers_per_mid_block is None: transformer_layers_per_mid_block = ( transformer_layers_per_block[-1] if isinstance(transformer_layers_per_block[-1], int) else 1 ) if use_motion_mid_block: self.mid_block = UNetMidBlockCrossAttnMotion( in_channels=block_out_channels[-1], temb_channels=time_embed_dim, resnet_eps=norm_eps, resnet_act_fn=act_fn, output_scale_factor=mid_block_scale_factor, cross_attention_dim=cross_attention_dim[-1], num_attention_heads=num_attention_heads[-1], resnet_groups=norm_num_groups, dual_cross_attention=False, use_linear_projection=use_linear_projection, num_layers=mid_block_layers, temporal_num_attention_heads=motion_num_attention_heads[-1], temporal_max_seq_length=motion_max_seq_length, transformer_layers_per_block=transformer_layers_per_mid_block, temporal_transformer_layers_per_block=temporal_transformer_layers_per_mid_block, ) else: self.mid_block = UNetMidBlock2DCrossAttn( in_channels=block_out_channels[-1], temb_channels=time_embed_dim, resnet_eps=norm_eps, resnet_act_fn=act_fn, output_scale_factor=mid_block_scale_factor, cross_attention_dim=cross_attention_dim[-1], num_attention_heads=num_attention_heads[-1], resnet_groups=norm_num_groups, dual_cross_attention=False, use_linear_projection=use_linear_projection, num_layers=mid_block_layers, transformer_layers_per_block=transformer_layers_per_mid_block, ) # count how many layers upsample the images self.num_upsamplers = 0 # up reversed_block_out_channels = list(reversed(block_out_channels)) reversed_num_attention_heads = list(reversed(num_attention_heads)) reversed_layers_per_block = list(reversed(layers_per_block)) reversed_cross_attention_dim = list(reversed(cross_attention_dim)) reversed_motion_num_attention_heads = list(reversed(motion_num_attention_heads)) if reverse_transformer_layers_per_block is None: reverse_transformer_layers_per_block = list(reversed(transformer_layers_per_block)) if reverse_temporal_transformer_layers_per_block is None: 
reverse_temporal_transformer_layers_per_block = list(reversed(temporal_transformer_layers_per_block)) output_channel = reversed_block_out_channels[0] for i, up_block_type in enumerate(up_block_types): is_final_block = i == len(block_out_channels) - 1 prev_output_channel = output_channel output_channel = reversed_block_out_channels[i] input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] # add upsample block for all BUT final layer if not is_final_block: add_upsample = True self.num_upsamplers += 1 else: add_upsample = False if up_block_type == "CrossAttnUpBlockMotion": up_block = CrossAttnUpBlockMotion( in_channels=input_channel, out_channels=output_channel, prev_output_channel=prev_output_channel, temb_channels=time_embed_dim, resolution_idx=i, num_layers=reversed_layers_per_block[i] + 1, transformer_layers_per_block=reverse_transformer_layers_per_block[i], resnet_eps=norm_eps, resnet_act_fn=act_fn, resnet_groups=norm_num_groups, num_attention_heads=reversed_num_attention_heads[i], cross_attention_dim=reversed_cross_attention_dim[i], add_upsample=add_upsample, use_linear_projection=use_linear_projection, temporal_num_attention_heads=reversed_motion_num_attention_heads[i], temporal_max_seq_length=motion_max_seq_length, temporal_transformer_layers_per_block=reverse_temporal_transformer_layers_per_block[i], ) elif up_block_type == "UpBlockMotion": up_block = UpBlockMotion( in_channels=input_channel, prev_output_channel=prev_output_channel, out_channels=output_channel, temb_channels=time_embed_dim, resolution_idx=i, num_layers=reversed_layers_per_block[i] + 1, resnet_eps=norm_eps, resnet_act_fn=act_fn, resnet_groups=norm_num_groups, add_upsample=add_upsample, temporal_num_attention_heads=reversed_motion_num_attention_heads[i], temporal_max_seq_length=motion_max_seq_length, temporal_transformer_layers_per_block=reverse_temporal_transformer_layers_per_block[i], ) else: raise ValueError( "Invalid `up_block_type` encountered. 
Must be one of `CrossAttnUpBlockMotion` or `UpBlockMotion`" ) self.up_blocks.append(up_block) prev_output_channel = output_channel # out if norm_num_groups is not None: self.conv_norm_out = nn.GroupNorm( num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps ) self.conv_act = nn.SiLU() else: self.conv_norm_out = None self.conv_act = None conv_out_padding = (conv_out_kernel - 1) // 2 self.conv_out = nn.Conv2d( block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding ) @classmethod def from_unet2d( cls, unet: UNet2DConditionModel, motion_adapter: Optional[MotionAdapter] = None, load_weights: bool = True, ): has_motion_adapter = motion_adapter is not None if has_motion_adapter: motion_adapter.to(device=unet.device) # check compatibility of number of blocks if len(unet.config["down_block_types"]) != len(motion_adapter.config["block_out_channels"]): raise ValueError("Incompatible Motion Adapter, got different number of blocks") # check layers compatibility for each block if isinstance(unet.config["layers_per_block"], int): expanded_layers_per_block = [unet.config["layers_per_block"]] * len(unet.config["down_block_types"]) else: expanded_layers_per_block = list(unet.config["layers_per_block"]) if isinstance(motion_adapter.config["motion_layers_per_block"], int): expanded_adapter_layers_per_block = [motion_adapter.config["motion_layers_per_block"]] * len( motion_adapter.config["block_out_channels"] ) else: expanded_adapter_layers_per_block = list(motion_adapter.config["motion_layers_per_block"]) if expanded_layers_per_block != expanded_adapter_layers_per_block: raise ValueError("Incompatible Motion Adapter, got different number of layers per block") # based on https://github.com/guoyww/AnimateDiff/blob/895f3220c06318ea0760131ec70408b466c49333/animatediff/models/unet.py#L459 config = dict(unet.config) config["_class_name"] = cls.__name__ down_blocks = [] for down_blocks_type in config["down_block_types"]: if "CrossAttn" in down_blocks_type: down_blocks.append("CrossAttnDownBlockMotion") else: down_blocks.append("DownBlockMotion") config["down_block_types"] = down_blocks up_blocks = [] for down_blocks_type in config["up_block_types"]: if "CrossAttn" in down_blocks_type: up_blocks.append("CrossAttnUpBlockMotion") else: up_blocks.append("UpBlockMotion") config["up_block_types"] = up_blocks if has_motion_adapter: config["motion_num_attention_heads"] = motion_adapter.config["motion_num_attention_heads"] config["motion_max_seq_length"] = motion_adapter.config["motion_max_seq_length"] config["use_motion_mid_block"] = motion_adapter.config["use_motion_mid_block"] config["layers_per_block"] = motion_adapter.config["motion_layers_per_block"] config["temporal_transformer_layers_per_mid_block"] = motion_adapter.config[ "motion_transformer_layers_per_mid_block" ] config["temporal_transformer_layers_per_block"] = motion_adapter.config[ "motion_transformer_layers_per_block" ] config["motion_num_attention_heads"] = motion_adapter.config["motion_num_attention_heads"] # For PIA UNets we need to set the number input channels to 9 if motion_adapter.config["conv_in_channels"]: config["in_channels"] = motion_adapter.config["conv_in_channels"] # Need this for backwards compatibility with UNet2DConditionModel checkpoints if not config.get("num_attention_heads"): config["num_attention_heads"] = config["attention_head_dim"] expected_kwargs, optional_kwargs = cls._get_signature_keys(cls) config = FrozenDict({k: config.get(k) for k in config if k in expected_kwargs or k 
in optional_kwargs}) config["_class_name"] = cls.__name__ model = cls.from_config(config) if not load_weights: return model # Logic for loading PIA UNets which allow the first 4 channels to be any UNet2DConditionModel conv_in weight # while the last 5 channels must be PIA conv_in weights. if has_motion_adapter and motion_adapter.config["conv_in_channels"]: model.conv_in = motion_adapter.conv_in updated_conv_in_weight = torch.cat( [unet.conv_in.weight, motion_adapter.conv_in.weight[:, 4:, :, :]], dim=1 ) model.conv_in.load_state_dict({"weight": updated_conv_in_weight, "bias": unet.conv_in.bias}) else: model.conv_in.load_state_dict(unet.conv_in.state_dict()) model.time_proj.load_state_dict(unet.time_proj.state_dict()) model.time_embedding.load_state_dict(unet.time_embedding.state_dict()) if any( isinstance(proc, (IPAdapterAttnProcessor, IPAdapterAttnProcessor2_0)) for proc in unet.attn_processors.values() ): attn_procs = {} for name, processor in unet.attn_processors.items(): if name.endswith("attn1.processor"): attn_processor_class = ( AttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else AttnProcessor ) attn_procs[name] = attn_processor_class() else: attn_processor_class = ( IPAdapterAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else IPAdapterAttnProcessor ) attn_procs[name] = attn_processor_class( hidden_size=processor.hidden_size, cross_attention_dim=processor.cross_attention_dim, scale=processor.scale, num_tokens=processor.num_tokens, ) for name, processor in model.attn_processors.items(): if name not in attn_procs: attn_procs[name] = processor.__class__() model.set_attn_processor(attn_procs) model.config.encoder_hid_dim_type = "ip_image_proj" model.encoder_hid_proj = unet.encoder_hid_proj for i, down_block in enumerate(unet.down_blocks): model.down_blocks[i].resnets.load_state_dict(down_block.resnets.state_dict()) if hasattr(model.down_blocks[i], "attentions"): model.down_blocks[i].attentions.load_state_dict(down_block.attentions.state_dict()) if model.down_blocks[i].downsamplers: model.down_blocks[i].downsamplers.load_state_dict(down_block.downsamplers.state_dict()) for i, up_block in enumerate(unet.up_blocks): model.up_blocks[i].resnets.load_state_dict(up_block.resnets.state_dict()) if hasattr(model.up_blocks[i], "attentions"): model.up_blocks[i].attentions.load_state_dict(up_block.attentions.state_dict()) if model.up_blocks[i].upsamplers: model.up_blocks[i].upsamplers.load_state_dict(up_block.upsamplers.state_dict()) model.mid_block.resnets.load_state_dict(unet.mid_block.resnets.state_dict()) model.mid_block.attentions.load_state_dict(unet.mid_block.attentions.state_dict()) if unet.conv_norm_out is not None: model.conv_norm_out.load_state_dict(unet.conv_norm_out.state_dict()) if unet.conv_act is not None: model.conv_act.load_state_dict(unet.conv_act.state_dict()) model.conv_out.load_state_dict(unet.conv_out.state_dict()) if has_motion_adapter: model.load_motion_modules(motion_adapter) # ensure that the Motion UNet is the same dtype as the UNet2DConditionModel model.to(unet.dtype) return model def freeze_unet2d_params(self) -> None: """Freeze the weights of just the UNet2DConditionModel, and leave the motion modules unfrozen for fine tuning. 
""" # Freeze everything for param in self.parameters(): param.requires_grad = False # Unfreeze Motion Modules for down_block in self.down_blocks: motion_modules = down_block.motion_modules for param in motion_modules.parameters(): param.requires_grad = True for up_block in self.up_blocks: motion_modules = up_block.motion_modules for param in motion_modules.parameters(): param.requires_grad = True if hasattr(self.mid_block, "motion_modules"): motion_modules = self.mid_block.motion_modules for param in motion_modules.parameters(): param.requires_grad = True def load_motion_modules(self, motion_adapter: Optional[MotionAdapter]) -> None: for i, down_block in enumerate(motion_adapter.down_blocks): self.down_blocks[i].motion_modules.load_state_dict(down_block.motion_modules.state_dict()) for i, up_block in enumerate(motion_adapter.up_blocks): self.up_blocks[i].motion_modules.load_state_dict(up_block.motion_modules.state_dict()) # to support older motion modules that don't have a mid_block if hasattr(self.mid_block, "motion_modules"): self.mid_block.motion_modules.load_state_dict(motion_adapter.mid_block.motion_modules.state_dict()) def save_motion_modules( self, save_directory: str, is_main_process: bool = True, safe_serialization: bool = True, variant: Optional[str] = None, push_to_hub: bool = False, **kwargs, ) -> None: state_dict = self.state_dict() # Extract all motion modules motion_state_dict = {} for k, v in state_dict.items(): if "motion_modules" in k: motion_state_dict[k] = v adapter = MotionAdapter( block_out_channels=self.config["block_out_channels"], motion_layers_per_block=self.config["layers_per_block"], motion_norm_num_groups=self.config["norm_num_groups"], motion_num_attention_heads=self.config["motion_num_attention_heads"], motion_max_seq_length=self.config["motion_max_seq_length"], use_motion_mid_block=self.config["use_motion_mid_block"], ) adapter.load_state_dict(motion_state_dict) adapter.save_pretrained( save_directory=save_directory, is_main_process=is_main_process, safe_serialization=safe_serialization, variant=variant, push_to_hub=push_to_hub, **kwargs, ) @property # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.attn_processors def attn_processors(self) -> Dict[str, AttentionProcessor]: r""" Returns: `dict` of attention processors: A dictionary containing all attention processors used in the model with indexed by its weight name. """ # set recursively processors = {} def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): if hasattr(module, "get_processor"): processors[f"{name}.processor"] = module.get_processor() for sub_name, child in module.named_children(): fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) return processors for name, module in self.named_children(): fn_recursive_add_processors(name, module, processors) return processors # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_attn_processor def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): r""" Sets the attention processor to use to compute attention. Parameters: processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): The instantiated processor class or a dictionary of processor classes that will be set as the processor for **all** `Attention` layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. 
This is strongly recommended when setting trainable attention processors. """ count = len(self.attn_processors.keys()) if isinstance(processor, dict) and len(processor) != count: raise ValueError( f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" f" number of attention layers: {count}. Please make sure to pass {count} processor classes." ) def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): if hasattr(module, "set_processor"): if not isinstance(processor, dict): module.set_processor(processor) else: module.set_processor(processor.pop(f"{name}.processor")) for sub_name, child in module.named_children(): fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) for name, module in self.named_children(): fn_recursive_attn_processor(name, module, processor) def enable_forward_chunking(self, chunk_size: Optional[int] = None, dim: int = 0) -> None: """ Sets the attention processor to use [feed forward chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers). Parameters: chunk_size (`int`, *optional*): The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually over each tensor of dim=`dim`. dim (`int`, *optional*, defaults to `0`): The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length). """ if dim not in [0, 1]: raise ValueError(f"Make sure to set `dim` to either 0 or 1, not {dim}") # By default chunk size is 1 chunk_size = chunk_size or 1 def fn_recursive_feed_forward(module: torch.nn.Module, chunk_size: int, dim: int): if hasattr(module, "set_chunk_feed_forward"): module.set_chunk_feed_forward(chunk_size=chunk_size, dim=dim) for child in module.children(): fn_recursive_feed_forward(child, chunk_size, dim) for module in self.children(): fn_recursive_feed_forward(module, chunk_size, dim) def disable_forward_chunking(self) -> None: def fn_recursive_feed_forward(module: torch.nn.Module, chunk_size: int, dim: int): if hasattr(module, "set_chunk_feed_forward"): module.set_chunk_feed_forward(chunk_size=chunk_size, dim=dim) for child in module.children(): fn_recursive_feed_forward(child, chunk_size, dim) for module in self.children(): fn_recursive_feed_forward(module, None, 0) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_default_attn_processor def set_default_attn_processor(self) -> None: """ Disables custom attention processors and sets the default attention implementation. """ if all(proc.__class__ in ADDED_KV_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): processor = AttnAddedKVProcessor() elif all(proc.__class__ in CROSS_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): processor = AttnProcessor() else: raise ValueError( f"Cannot call `set_default_attn_processor` when attention processors are of type {next(iter(self.attn_processors.values()))}" ) self.set_attn_processor(processor) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.enable_freeu def enable_freeu(self, s1: float, s2: float, b1: float, b2: float) -> None: r"""Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. 
Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. Args: s1 (`float`): Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process. s2 (`float`): Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process. b1 (`float`): Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (`float`): Scaling factor for stage 2 to amplify the contributions of backbone features. """ for i, upsample_block in enumerate(self.up_blocks): setattr(upsample_block, "s1", s1) setattr(upsample_block, "s2", s2) setattr(upsample_block, "b1", b1) setattr(upsample_block, "b2", b2) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.disable_freeu def disable_freeu(self) -> None: """Disables the FreeU mechanism.""" freeu_keys = {"s1", "s2", "b1", "b2"} for i, upsample_block in enumerate(self.up_blocks): for k in freeu_keys: if hasattr(upsample_block, k) or getattr(upsample_block, k, None) is not None: setattr(upsample_block, k, None) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.fuse_qkv_projections def fuse_qkv_projections(self): """ Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused. <Tip warning={true}> This API is 🧪 experimental. </Tip> """ self.original_attn_processors = None for _, attn_processor in self.attn_processors.items(): if "Added" in str(attn_processor.__class__.__name__): raise ValueError("`fuse_qkv_projections()` is not supported for models having added KV projections.") self.original_attn_processors = self.attn_processors for module in self.modules(): if isinstance(module, Attention): module.fuse_projections(fuse=True) self.set_attn_processor(FusedAttnProcessor2_0()) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.unfuse_qkv_projections def unfuse_qkv_projections(self): """Disables the fused QKV projection if enabled. <Tip warning={true}> This API is 🧪 experimental. </Tip> """ if self.original_attn_processors is not None: self.set_attn_processor(self.original_attn_processors) def forward( self, sample: torch.Tensor, timestep: Union[torch.Tensor, float, int], encoder_hidden_states: torch.Tensor, timestep_cond: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, cross_attention_kwargs: Optional[Dict[str, Any]] = None, added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None, down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None, mid_block_additional_residual: Optional[torch.Tensor] = None, return_dict: bool = True, ) -> Union[UNetMotionOutput, Tuple[torch.Tensor]]: r""" The [`UNetMotionModel`] forward method. Args: sample (`torch.Tensor`): The noisy input tensor with the following shape `(batch, num_frames, channel, height, width`. timestep (`torch.Tensor` or `float` or `int`): The number of timesteps to denoise an input. encoder_hidden_states (`torch.Tensor`): The encoder hidden states with shape `(batch, sequence_length, feature_dim)`. timestep_cond: (`torch.Tensor`, *optional*, defaults to `None`): Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed through the `self.time_embedding` layer to obtain the timestep embeddings. attention_mask (`torch.Tensor`, *optional*, defaults to `None`): An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). down_block_additional_residuals: (`tuple` of `torch.Tensor`, *optional*): A tuple of tensors that if specified are added to the residuals of down unet blocks. mid_block_additional_residual: (`torch.Tensor`, *optional*): A tensor that if specified is added to the residual of the middle unet block. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~models.unets.unet_motion_model.UNetMotionOutput`] instead of a plain tuple. Returns: [`~models.unets.unet_motion_model.UNetMotionOutput`] or `tuple`: If `return_dict` is True, an [`~models.unets.unet_motion_model.UNetMotionOutput`] is returned, otherwise a `tuple` is returned where the first element is the sample tensor. """ # By default samples have to be AT least a multiple of the overall upsampling factor. # The overall upsampling factor is equal to 2 ** (# num of upsampling layears). # However, the upsampling interpolation output size can be forced to fit any upsampling size # on the fly if necessary. default_overall_up_factor = 2**self.num_upsamplers # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` forward_upsample_size = False upsample_size = None if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): logger.info("Forward upsample size to force interpolation output size.") forward_upsample_size = True # prepare attention_mask if attention_mask is not None: attention_mask = (1 - attention_mask.to(sample.dtype)) * -10000.0 attention_mask = attention_mask.unsqueeze(1) # 1. time timesteps = timestep if not torch.is_tensor(timesteps): # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can # This would be a good case for the `match` statement (Python 3.10+) is_mps = sample.device.type == "mps" is_npu = sample.device.type == "npu" if isinstance(timestep, float): dtype = torch.float32 if (is_mps or is_npu) else torch.float64 else: dtype = torch.int32 if (is_mps or is_npu) else torch.int64 timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device) elif len(timesteps.shape) == 0: timesteps = timesteps[None].to(sample.device) # broadcast to batch dimension in a way that's compatible with ONNX/Core ML num_frames = sample.shape[2] timesteps = timesteps.expand(sample.shape[0]) t_emb = self.time_proj(timesteps) # timesteps does not contain any weights and will always return f32 tensors # but time_embedding might actually be running in fp16. so we need to cast here. # there might be better ways to encapsulate this. 
t_emb = t_emb.to(dtype=self.dtype) emb = self.time_embedding(t_emb, timestep_cond) aug_emb = None if self.config.addition_embed_type == "text_time": if "text_embeds" not in added_cond_kwargs: raise ValueError( f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `text_embeds` to be passed in `added_cond_kwargs`" ) text_embeds = added_cond_kwargs.get("text_embeds") if "time_ids" not in added_cond_kwargs: raise ValueError( f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `time_ids` to be passed in `added_cond_kwargs`" ) time_ids = added_cond_kwargs.get("time_ids") time_embeds = self.add_time_proj(time_ids.flatten()) time_embeds = time_embeds.reshape((text_embeds.shape[0], -1)) add_embeds = torch.concat([text_embeds, time_embeds], dim=-1) add_embeds = add_embeds.to(emb.dtype) aug_emb = self.add_embedding(add_embeds) emb = emb if aug_emb is None else emb + aug_emb emb = emb.repeat_interleave(repeats=num_frames, dim=0) if self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "ip_image_proj": if "image_embeds" not in added_cond_kwargs: raise ValueError( f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'ip_image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`" ) image_embeds = added_cond_kwargs.get("image_embeds") image_embeds = self.encoder_hid_proj(image_embeds) image_embeds = [image_embed.repeat_interleave(repeats=num_frames, dim=0) for image_embed in image_embeds] encoder_hidden_states = (encoder_hidden_states, image_embeds) # 2. pre-process sample = sample.permute(0, 2, 1, 3, 4).reshape((sample.shape[0] * num_frames, -1) + sample.shape[3:]) sample = self.conv_in(sample) # 3. down down_block_res_samples = (sample,) for downsample_block in self.down_blocks: if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention: sample, res_samples = downsample_block( hidden_states=sample, temb=emb, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, ) else: sample, res_samples = downsample_block(hidden_states=sample, temb=emb, num_frames=num_frames) down_block_res_samples += res_samples if down_block_additional_residuals is not None: new_down_block_res_samples = () for down_block_res_sample, down_block_additional_residual in zip( down_block_res_samples, down_block_additional_residuals ): down_block_res_sample = down_block_res_sample + down_block_additional_residual new_down_block_res_samples += (down_block_res_sample,) down_block_res_samples = new_down_block_res_samples # 4. mid if self.mid_block is not None: # To support older versions of motion modules that don't have a mid_block if hasattr(self.mid_block, "motion_modules"): sample = self.mid_block( sample, emb, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, ) else: sample = self.mid_block( sample, emb, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, cross_attention_kwargs=cross_attention_kwargs, ) if mid_block_additional_residual is not None: sample = sample + mid_block_additional_residual # 5. 
up for i, upsample_block in enumerate(self.up_blocks): is_final_block = i == len(self.up_blocks) - 1 res_samples = down_block_res_samples[-len(upsample_block.resnets) :] down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] # if we have not reached the final block and need to forward the # upsample size, we do it here if not is_final_block and forward_upsample_size: upsample_size = down_block_res_samples[-1].shape[2:] if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention: sample = upsample_block( hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, encoder_hidden_states=encoder_hidden_states, upsample_size=upsample_size, attention_mask=attention_mask, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, ) else: sample = upsample_block( hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size, num_frames=num_frames, ) # 6. post-process if self.conv_norm_out: sample = self.conv_norm_out(sample) sample = self.conv_act(sample) sample = self.conv_out(sample) # reshape to (batch, channel, framerate, width, height) sample = sample[None, :].reshape((-1, num_frames) + sample.shape[1:]).permute(0, 2, 1, 3, 4) if not return_dict: return (sample,) return UNetMotionOutput(sample=sample)
diffusers/src/diffusers/models/unets/unet_motion_model.py/0
{ "file_path": "diffusers/src/diffusers/models/unets/unet_motion_model.py", "repo_id": "diffusers", "token_count": 47486 }
# coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from collections import OrderedDict from huggingface_hub.utils import validate_hf_hub_args from ..configuration_utils import ConfigMixin from ..models.controlnets import ControlNetUnionModel from ..utils import is_sentencepiece_available from .aura_flow import AuraFlowPipeline from .cogview3 import CogView3PlusPipeline from .controlnet import ( StableDiffusionControlNetImg2ImgPipeline, StableDiffusionControlNetInpaintPipeline, StableDiffusionControlNetPipeline, StableDiffusionXLControlNetImg2ImgPipeline, StableDiffusionXLControlNetInpaintPipeline, StableDiffusionXLControlNetPipeline, StableDiffusionXLControlNetUnionImg2ImgPipeline, StableDiffusionXLControlNetUnionInpaintPipeline, StableDiffusionXLControlNetUnionPipeline, ) from .deepfloyd_if import IFImg2ImgPipeline, IFInpaintingPipeline, IFPipeline from .flux import ( FluxControlImg2ImgPipeline, FluxControlInpaintPipeline, FluxControlNetImg2ImgPipeline, FluxControlNetInpaintPipeline, FluxControlNetPipeline, FluxControlPipeline, FluxImg2ImgPipeline, FluxInpaintPipeline, FluxPipeline, ) from .hunyuandit import HunyuanDiTPipeline from .kandinsky import ( KandinskyCombinedPipeline, KandinskyImg2ImgCombinedPipeline, KandinskyImg2ImgPipeline, KandinskyInpaintCombinedPipeline, KandinskyInpaintPipeline, KandinskyPipeline, ) from .kandinsky2_2 import ( KandinskyV22CombinedPipeline, KandinskyV22Img2ImgCombinedPipeline, KandinskyV22Img2ImgPipeline, KandinskyV22InpaintCombinedPipeline, KandinskyV22InpaintPipeline, KandinskyV22Pipeline, ) from .kandinsky3 import Kandinsky3Img2ImgPipeline, Kandinsky3Pipeline from .latent_consistency_models import LatentConsistencyModelImg2ImgPipeline, LatentConsistencyModelPipeline from .lumina import LuminaText2ImgPipeline from .pag import ( HunyuanDiTPAGPipeline, PixArtSigmaPAGPipeline, SanaPAGPipeline, StableDiffusion3PAGImg2ImgPipeline, StableDiffusion3PAGPipeline, StableDiffusionControlNetPAGInpaintPipeline, StableDiffusionControlNetPAGPipeline, StableDiffusionPAGImg2ImgPipeline, StableDiffusionPAGInpaintPipeline, StableDiffusionPAGPipeline, StableDiffusionXLControlNetPAGImg2ImgPipeline, StableDiffusionXLControlNetPAGPipeline, StableDiffusionXLPAGImg2ImgPipeline, StableDiffusionXLPAGInpaintPipeline, StableDiffusionXLPAGPipeline, ) from .pixart_alpha import PixArtAlphaPipeline, PixArtSigmaPipeline from .sana import SanaPipeline from .stable_cascade import StableCascadeCombinedPipeline, StableCascadeDecoderPipeline from .stable_diffusion import ( StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline, StableDiffusionPipeline, ) from .stable_diffusion_3 import ( StableDiffusion3Img2ImgPipeline, StableDiffusion3InpaintPipeline, StableDiffusion3Pipeline, ) from .stable_diffusion_xl import ( StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, StableDiffusionXLPipeline, ) from .wuerstchen import WuerstchenCombinedPipeline, WuerstchenDecoderPipeline AUTO_TEXT2IMAGE_PIPELINES_MAPPING = OrderedDict( [ 
("stable-diffusion", StableDiffusionPipeline), ("stable-diffusion-xl", StableDiffusionXLPipeline), ("stable-diffusion-3", StableDiffusion3Pipeline), ("stable-diffusion-3-pag", StableDiffusion3PAGPipeline), ("if", IFPipeline), ("hunyuan", HunyuanDiTPipeline), ("hunyuan-pag", HunyuanDiTPAGPipeline), ("kandinsky", KandinskyCombinedPipeline), ("kandinsky22", KandinskyV22CombinedPipeline), ("kandinsky3", Kandinsky3Pipeline), ("stable-diffusion-controlnet", StableDiffusionControlNetPipeline), ("stable-diffusion-xl-controlnet", StableDiffusionXLControlNetPipeline), ("stable-diffusion-xl-controlnet-union", StableDiffusionXLControlNetUnionPipeline), ("wuerstchen", WuerstchenCombinedPipeline), ("cascade", StableCascadeCombinedPipeline), ("lcm", LatentConsistencyModelPipeline), ("pixart-alpha", PixArtAlphaPipeline), ("pixart-sigma", PixArtSigmaPipeline), ("sana", SanaPipeline), ("sana-pag", SanaPAGPipeline), ("stable-diffusion-pag", StableDiffusionPAGPipeline), ("stable-diffusion-controlnet-pag", StableDiffusionControlNetPAGPipeline), ("stable-diffusion-xl-pag", StableDiffusionXLPAGPipeline), ("stable-diffusion-xl-controlnet-pag", StableDiffusionXLControlNetPAGPipeline), ("pixart-sigma-pag", PixArtSigmaPAGPipeline), ("auraflow", AuraFlowPipeline), ("flux", FluxPipeline), ("flux-control", FluxControlPipeline), ("flux-controlnet", FluxControlNetPipeline), ("lumina", LuminaText2ImgPipeline), ("cogview3", CogView3PlusPipeline), ] ) AUTO_IMAGE2IMAGE_PIPELINES_MAPPING = OrderedDict( [ ("stable-diffusion", StableDiffusionImg2ImgPipeline), ("stable-diffusion-xl", StableDiffusionXLImg2ImgPipeline), ("stable-diffusion-3", StableDiffusion3Img2ImgPipeline), ("stable-diffusion-3-pag", StableDiffusion3PAGImg2ImgPipeline), ("if", IFImg2ImgPipeline), ("kandinsky", KandinskyImg2ImgCombinedPipeline), ("kandinsky22", KandinskyV22Img2ImgCombinedPipeline), ("kandinsky3", Kandinsky3Img2ImgPipeline), ("stable-diffusion-controlnet", StableDiffusionControlNetImg2ImgPipeline), ("stable-diffusion-pag", StableDiffusionPAGImg2ImgPipeline), ("stable-diffusion-xl-controlnet", StableDiffusionXLControlNetImg2ImgPipeline), ("stable-diffusion-xl-controlnet-union", StableDiffusionXLControlNetUnionImg2ImgPipeline), ("stable-diffusion-xl-pag", StableDiffusionXLPAGImg2ImgPipeline), ("stable-diffusion-xl-controlnet-pag", StableDiffusionXLControlNetPAGImg2ImgPipeline), ("lcm", LatentConsistencyModelImg2ImgPipeline), ("flux", FluxImg2ImgPipeline), ("flux-controlnet", FluxControlNetImg2ImgPipeline), ("flux-control", FluxControlImg2ImgPipeline), ] ) AUTO_INPAINT_PIPELINES_MAPPING = OrderedDict( [ ("stable-diffusion", StableDiffusionInpaintPipeline), ("stable-diffusion-xl", StableDiffusionXLInpaintPipeline), ("stable-diffusion-3", StableDiffusion3InpaintPipeline), ("if", IFInpaintingPipeline), ("kandinsky", KandinskyInpaintCombinedPipeline), ("kandinsky22", KandinskyV22InpaintCombinedPipeline), ("stable-diffusion-controlnet", StableDiffusionControlNetInpaintPipeline), ("stable-diffusion-controlnet-pag", StableDiffusionControlNetPAGInpaintPipeline), ("stable-diffusion-xl-controlnet", StableDiffusionXLControlNetInpaintPipeline), ("stable-diffusion-xl-controlnet-union", StableDiffusionXLControlNetUnionInpaintPipeline), ("stable-diffusion-xl-pag", StableDiffusionXLPAGInpaintPipeline), ("flux", FluxInpaintPipeline), ("flux-controlnet", FluxControlNetInpaintPipeline), ("flux-control", FluxControlInpaintPipeline), ("stable-diffusion-pag", StableDiffusionPAGInpaintPipeline), ] ) _AUTO_TEXT2IMAGE_DECODER_PIPELINES_MAPPING = OrderedDict( [ ("kandinsky", 
KandinskyPipeline), ("kandinsky22", KandinskyV22Pipeline), ("wuerstchen", WuerstchenDecoderPipeline), ("cascade", StableCascadeDecoderPipeline), ] ) _AUTO_IMAGE2IMAGE_DECODER_PIPELINES_MAPPING = OrderedDict( [ ("kandinsky", KandinskyImg2ImgPipeline), ("kandinsky22", KandinskyV22Img2ImgPipeline), ] ) _AUTO_INPAINT_DECODER_PIPELINES_MAPPING = OrderedDict( [ ("kandinsky", KandinskyInpaintPipeline), ("kandinsky22", KandinskyV22InpaintPipeline), ] ) if is_sentencepiece_available(): from .kolors import KolorsImg2ImgPipeline, KolorsPipeline from .pag import KolorsPAGPipeline AUTO_TEXT2IMAGE_PIPELINES_MAPPING["kolors"] = KolorsPipeline AUTO_TEXT2IMAGE_PIPELINES_MAPPING["kolors-pag"] = KolorsPAGPipeline AUTO_IMAGE2IMAGE_PIPELINES_MAPPING["kolors"] = KolorsImg2ImgPipeline SUPPORTED_TASKS_MAPPINGS = [ AUTO_TEXT2IMAGE_PIPELINES_MAPPING, AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, AUTO_INPAINT_PIPELINES_MAPPING, _AUTO_TEXT2IMAGE_DECODER_PIPELINES_MAPPING, _AUTO_IMAGE2IMAGE_DECODER_PIPELINES_MAPPING, _AUTO_INPAINT_DECODER_PIPELINES_MAPPING, ] def _get_connected_pipeline(pipeline_cls): # for now connected pipelines can only be loaded from decoder pipelines, such as kandinsky-community/kandinsky-2-2-decoder if pipeline_cls in _AUTO_TEXT2IMAGE_DECODER_PIPELINES_MAPPING.values(): return _get_task_class( AUTO_TEXT2IMAGE_PIPELINES_MAPPING, pipeline_cls.__name__, throw_error_if_not_exist=False ) if pipeline_cls in _AUTO_IMAGE2IMAGE_DECODER_PIPELINES_MAPPING.values(): return _get_task_class( AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, pipeline_cls.__name__, throw_error_if_not_exist=False ) if pipeline_cls in _AUTO_INPAINT_DECODER_PIPELINES_MAPPING.values(): return _get_task_class(AUTO_INPAINT_PIPELINES_MAPPING, pipeline_cls.__name__, throw_error_if_not_exist=False) def _get_task_class(mapping, pipeline_class_name, throw_error_if_not_exist: bool = True): def get_model(pipeline_class_name): for task_mapping in SUPPORTED_TASKS_MAPPINGS: for model_name, pipeline in task_mapping.items(): if pipeline.__name__ == pipeline_class_name: return model_name model_name = get_model(pipeline_class_name) if model_name is not None: task_class = mapping.get(model_name, None) if task_class is not None: return task_class if throw_error_if_not_exist: raise ValueError(f"AutoPipeline can't find a pipeline linked to {pipeline_class_name} for {model_name}") class AutoPipelineForText2Image(ConfigMixin): r""" [`AutoPipelineForText2Image`] is a generic pipeline class that instantiates a text-to-image pipeline class. The specific underlying pipeline class is automatically selected from either the [`~AutoPipelineForText2Image.from_pretrained`] or [`~AutoPipelineForText2Image.from_pipe`] methods. This class cannot be instantiated using `__init__()` (throws an error). Class attributes: - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the diffusion pipeline's components. """ config_name = "model_index.json" def __init__(self, *args, **kwargs): raise EnvironmentError( f"{self.__class__.__name__} is designed to be instantiated " f"using the `{self.__class__.__name__}.from_pretrained(pretrained_model_name_or_path)` or " f"`{self.__class__.__name__}.from_pipe(pipeline)` methods." ) @classmethod @validate_hf_hub_args def from_pretrained(cls, pretrained_model_or_path, **kwargs): r""" Instantiates a text-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: 1. 
Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its config object 2. Find the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. If a `controlnet` argument is passed, it will instantiate a [`StableDiffusionControlNetPipeline`] object. The pipeline is set in evaluation mode (`model.eval()`) by default. If you get the error message below, you need to finetune the weights for your downstream task: ``` Some weights of UNet2DConditionModel were not initialized from the model checkpoint at stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` Parameters: pretrained_model_or_path (`str` or `os.PathLike`, *optional*): Can be either: - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline hosted on the Hub. - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights saved using [`~DiffusionPipeline.save_pretrained`]. torch_dtype (`str` or `torch.dtype`, *optional*): Override the default `torch.dtype` and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model's weights. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. output_loading_info(`bool`, *optional*, defaults to `False`): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. custom_revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, or a commit id similar to `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub. mirror (`str`, *optional*): Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information. device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. 
It doesn’t need to be defined for each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). max_memory (`Dict`, *optional*): A dictionary device identifier for the maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. offload_folder (`str` or `os.PathLike`, *optional*): The path to offload weights if device_map contains the value `"disk"`. offload_state_dict (`bool`, *optional*): If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True` when there is some disk offload. low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): Speed up model loading only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to `True` will raise an error. use_safetensors (`bool`, *optional*, defaults to `None`): If set to `None`, the safetensors weights are downloaded if they're available **and** if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline class). The overwritten components are passed directly to the pipelines `__init__` method. See example below for more information. variant (`str`, *optional*): Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when loading `from_flax`. <Tip> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`. 
</Tip> Examples: ```py >>> from diffusers import AutoPipelineForText2Image >>> pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") >>> image = pipeline(prompt).images[0] ``` """ cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) token = kwargs.pop("token", None) local_files_only = kwargs.pop("local_files_only", False) revision = kwargs.pop("revision", None) load_config_kwargs = { "cache_dir": cache_dir, "force_download": force_download, "proxies": proxies, "token": token, "local_files_only": local_files_only, "revision": revision, } config = cls.load_config(pretrained_model_or_path, **load_config_kwargs) orig_class_name = config["_class_name"] if "ControlPipeline" in orig_class_name: to_replace = "ControlPipeline" else: to_replace = "Pipeline" if "controlnet" in kwargs: if isinstance(kwargs["controlnet"], ControlNetUnionModel): orig_class_name = config["_class_name"].replace(to_replace, "ControlNetUnionPipeline") else: orig_class_name = config["_class_name"].replace(to_replace, "ControlNetPipeline") if "enable_pag" in kwargs: enable_pag = kwargs.pop("enable_pag") if enable_pag: orig_class_name = orig_class_name.replace(to_replace, "PAGPipeline") text_2_image_cls = _get_task_class(AUTO_TEXT2IMAGE_PIPELINES_MAPPING, orig_class_name) kwargs = {**load_config_kwargs, **kwargs} return text_2_image_cls.from_pretrained(pretrained_model_or_path, **kwargs) @classmethod def from_pipe(cls, pipeline, **kwargs): r""" Instantiates a text-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating additional memory. The pipeline is set in evaluation mode (`model.eval()`) by default. Parameters: pipeline (`DiffusionPipeline`): an instantiated `DiffusionPipeline` object ```py >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image >>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( ... "stable-diffusion-v1-5/stable-diffusion-v1-5", requires_safety_checker=False ... 
) >>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) >>> image = pipe_t2i(prompt).images[0] ``` """ original_config = dict(pipeline.config) original_cls_name = pipeline.__class__.__name__ # derive the pipeline class to instantiate text_2_image_cls = _get_task_class(AUTO_TEXT2IMAGE_PIPELINES_MAPPING, original_cls_name) if "controlnet" in kwargs: if kwargs["controlnet"] is not None: to_replace = "PAGPipeline" if "PAG" in text_2_image_cls.__name__ else "Pipeline" text_2_image_cls = _get_task_class( AUTO_TEXT2IMAGE_PIPELINES_MAPPING, text_2_image_cls.__name__.replace("ControlNet", "").replace(to_replace, "ControlNet" + to_replace), ) else: text_2_image_cls = _get_task_class( AUTO_TEXT2IMAGE_PIPELINES_MAPPING, text_2_image_cls.__name__.replace("ControlNet", ""), ) if "enable_pag" in kwargs: enable_pag = kwargs.pop("enable_pag") if enable_pag: text_2_image_cls = _get_task_class( AUTO_TEXT2IMAGE_PIPELINES_MAPPING, text_2_image_cls.__name__.replace("PAG", "").replace("Pipeline", "PAGPipeline"), ) else: text_2_image_cls = _get_task_class( AUTO_TEXT2IMAGE_PIPELINES_MAPPING, text_2_image_cls.__name__.replace("PAG", ""), ) # define expected module and optional kwargs given the pipeline signature expected_modules, optional_kwargs = text_2_image_cls._get_signature_keys(text_2_image_cls) pretrained_model_name_or_path = original_config.pop("_name_or_path", None) # allow users pass modules in `kwargs` to override the original pipeline's components passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} original_class_obj = { k: pipeline.components[k] for k, v in pipeline.components.items() if k in expected_modules and k not in passed_class_obj } # allow users pass optional kwargs to override the original pipelines config attribute passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs} original_pipe_kwargs = { k: original_config[k] for k, v in original_config.items() if k in optional_kwargs and k not in passed_pipe_kwargs } # config that were not expected by original pipeline is stored as private attribute # we will pass them as optional arguments if they can be accepted by the pipeline additional_pipe_kwargs = [ k[1:] for k in original_config.keys() if k.startswith("_") and k[1:] in optional_kwargs and k[1:] not in passed_pipe_kwargs ] for k in additional_pipe_kwargs: original_pipe_kwargs[k] = original_config.pop(f"_{k}") text_2_image_kwargs = {**passed_class_obj, **original_class_obj, **passed_pipe_kwargs, **original_pipe_kwargs} # store unused config as private attribute unused_original_config = { f"{'' if k.startswith('_') else '_'}{k}": original_config[k] for k, v in original_config.items() if k not in text_2_image_kwargs } missing_modules = ( set(expected_modules) - set(text_2_image_cls._optional_components) - set(text_2_image_kwargs.keys()) ) if len(missing_modules) > 0: raise ValueError( f"Pipeline {text_2_image_cls} expected {expected_modules}, but only {set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))} were passed" ) model = text_2_image_cls(**text_2_image_kwargs) model.register_to_config(_name_or_path=pretrained_model_name_or_path) model.register_to_config(**unused_original_config) return model class AutoPipelineForImage2Image(ConfigMixin): r""" [`AutoPipelineForImage2Image`] is a generic pipeline class that instantiates an image-to-image pipeline class. 
The specific underlying pipeline class is automatically selected from either the [`~AutoPipelineForImage2Image.from_pretrained`] or [`~AutoPipelineForImage2Image.from_pipe`] methods. This class cannot be instantiated using `__init__()` (throws an error). Class attributes: - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the diffusion pipeline's components. """ config_name = "model_index.json" def __init__(self, *args, **kwargs): raise EnvironmentError( f"{self.__class__.__name__} is designed to be instantiated " f"using the `{self.__class__.__name__}.from_pretrained(pretrained_model_name_or_path)` or " f"`{self.__class__.__name__}.from_pipe(pipeline)` methods." ) @classmethod @validate_hf_hub_args def from_pretrained(cls, pretrained_model_or_path, **kwargs): r""" Instantiates a image-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: 1. Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its config object 2. Find the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. If a `controlnet` argument is passed, it will instantiate a [`StableDiffusionControlNetImg2ImgPipeline`] object. The pipeline is set in evaluation mode (`model.eval()`) by default. If you get the error message below, you need to finetune the weights for your downstream task: ``` Some weights of UNet2DConditionModel were not initialized from the model checkpoint at stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` Parameters: pretrained_model_or_path (`str` or `os.PathLike`, *optional*): Can be either: - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline hosted on the Hub. - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights saved using [`~DiffusionPipeline.save_pretrained`]. torch_dtype (`str` or `torch.dtype`, *optional*): Override the default `torch.dtype` and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model's weights. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. output_loading_info(`bool`, *optional*, defaults to `False`): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. 
If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. custom_revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, or a commit id similar to `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub. mirror (`str`, *optional*): Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information. device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. It doesn’t need to be defined for each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). max_memory (`Dict`, *optional*): A dictionary device identifier for the maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. offload_folder (`str` or `os.PathLike`, *optional*): The path to offload weights if device_map contains the value `"disk"`. offload_state_dict (`bool`, *optional*): If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True` when there is some disk offload. low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): Speed up model loading only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to `True` will raise an error. use_safetensors (`bool`, *optional*, defaults to `None`): If set to `None`, the safetensors weights are downloaded if they're available **and** if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline class). The overwritten components are passed directly to the pipelines `__init__` method. See example below for more information. variant (`str`, *optional*): Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when loading `from_flax`. <Tip> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`. 
</Tip> Examples: ```py >>> from diffusers import AutoPipelineForImage2Image >>> pipeline = AutoPipelineForImage2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") >>> image = pipeline(prompt, image).images[0] ``` """ cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) token = kwargs.pop("token", None) local_files_only = kwargs.pop("local_files_only", False) revision = kwargs.pop("revision", None) load_config_kwargs = { "cache_dir": cache_dir, "force_download": force_download, "proxies": proxies, "token": token, "local_files_only": local_files_only, "revision": revision, } config = cls.load_config(pretrained_model_or_path, **load_config_kwargs) orig_class_name = config["_class_name"] # the `orig_class_name` can be: # `- *Pipeline` (for regular text-to-image checkpoint) # - `*ControlPipeline` (for Flux tools specific checkpoint) # `- *Img2ImgPipeline` (for refiner checkpoint) if "Img2Img" in orig_class_name: to_replace = "Img2ImgPipeline" elif "ControlPipeline" in orig_class_name: to_replace = "ControlPipeline" else: to_replace = "Pipeline" if "controlnet" in kwargs: if isinstance(kwargs["controlnet"], ControlNetUnionModel): orig_class_name = orig_class_name.replace(to_replace, "ControlNetUnion" + to_replace) else: orig_class_name = orig_class_name.replace(to_replace, "ControlNet" + to_replace) if "enable_pag" in kwargs: enable_pag = kwargs.pop("enable_pag") if enable_pag: orig_class_name = orig_class_name.replace(to_replace, "PAG" + to_replace) if to_replace == "ControlPipeline": orig_class_name = orig_class_name.replace(to_replace, "ControlImg2ImgPipeline") image_2_image_cls = _get_task_class(AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, orig_class_name) kwargs = {**load_config_kwargs, **kwargs} return image_2_image_cls.from_pretrained(pretrained_model_or_path, **kwargs) @classmethod def from_pipe(cls, pipeline, **kwargs): r""" Instantiates a image-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating additional memory. The pipeline is set in evaluation mode (`model.eval()`) by default. Parameters: pipeline (`DiffusionPipeline`): an instantiated `DiffusionPipeline` object Examples: ```py >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image >>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( ... "stable-diffusion-v1-5/stable-diffusion-v1-5", requires_safety_checker=False ... 
) >>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) >>> image = pipe_i2i(prompt, image).images[0] ``` """ original_config = dict(pipeline.config) original_cls_name = pipeline.__class__.__name__ # derive the pipeline class to instantiate image_2_image_cls = _get_task_class(AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, original_cls_name) if "controlnet" in kwargs: if kwargs["controlnet"] is not None: to_replace = "Img2ImgPipeline" if "PAG" in image_2_image_cls.__name__: to_replace = "PAG" + to_replace image_2_image_cls = _get_task_class( AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, image_2_image_cls.__name__.replace("ControlNet", "").replace( to_replace, "ControlNet" + to_replace ), ) else: image_2_image_cls = _get_task_class( AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, image_2_image_cls.__name__.replace("ControlNet", ""), ) if "enable_pag" in kwargs: enable_pag = kwargs.pop("enable_pag") if enable_pag: image_2_image_cls = _get_task_class( AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, image_2_image_cls.__name__.replace("PAG", "").replace("Img2ImgPipeline", "PAGImg2ImgPipeline"), ) else: image_2_image_cls = _get_task_class( AUTO_IMAGE2IMAGE_PIPELINES_MAPPING, image_2_image_cls.__name__.replace("PAG", ""), ) # define expected module and optional kwargs given the pipeline signature expected_modules, optional_kwargs = image_2_image_cls._get_signature_keys(image_2_image_cls) pretrained_model_name_or_path = original_config.pop("_name_or_path", None) # allow users pass modules in `kwargs` to override the original pipeline's components passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} original_class_obj = { k: pipeline.components[k] for k, v in pipeline.components.items() if k in expected_modules and k not in passed_class_obj } # allow users pass optional kwargs to override the original pipelines config attribute passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs} original_pipe_kwargs = { k: original_config[k] for k, v in original_config.items() if k in optional_kwargs and k not in passed_pipe_kwargs } # config attribute that were not expected by original pipeline is stored as its private attribute # we will pass them as optional arguments if they can be accepted by the pipeline additional_pipe_kwargs = [ k[1:] for k in original_config.keys() if k.startswith("_") and k[1:] in optional_kwargs and k[1:] not in passed_pipe_kwargs ] for k in additional_pipe_kwargs: original_pipe_kwargs[k] = original_config.pop(f"_{k}") image_2_image_kwargs = {**passed_class_obj, **original_class_obj, **passed_pipe_kwargs, **original_pipe_kwargs} # store unused config as private attribute unused_original_config = { f"{'' if k.startswith('_') else '_'}{k}": original_config[k] for k, v in original_config.items() if k not in image_2_image_kwargs } missing_modules = ( set(expected_modules) - set(image_2_image_cls._optional_components) - set(image_2_image_kwargs.keys()) ) if len(missing_modules) > 0: raise ValueError( f"Pipeline {image_2_image_cls} expected {expected_modules}, but only {set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))} were passed" ) model = image_2_image_cls(**image_2_image_kwargs) model.register_to_config(_name_or_path=pretrained_model_name_or_path) model.register_to_config(**unused_original_config) return model class AutoPipelineForInpainting(ConfigMixin): r""" [`AutoPipelineForInpainting`] is a generic pipeline class that instantiates an inpainting pipeline class. 
    The specific underlying pipeline class is automatically selected from either the
    [`~AutoPipelineForInpainting.from_pretrained`] or [`~AutoPipelineForInpainting.from_pipe`] methods.

    This class cannot be instantiated using `__init__()` (throws an error).

    Class attributes:

        - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
          diffusion pipeline's components.

    """

    config_name = "model_index.json"

    def __init__(self, *args, **kwargs):
        raise EnvironmentError(
            f"{self.__class__.__name__} is designed to be instantiated "
            f"using the `{self.__class__.__name__}.from_pretrained(pretrained_model_name_or_path)` or "
            f"`{self.__class__.__name__}.from_pipe(pipeline)` methods."
        )

    @classmethod
    @validate_hf_hub_args
    def from_pretrained(cls, pretrained_model_or_path, **kwargs):
        r"""
        Instantiates an inpainting PyTorch diffusion pipeline from pretrained pipeline weights.

        The from_pretrained() method takes care of returning the correct pipeline class instance by:
            1. Detecting the pipeline class of the pretrained_model_or_path based on the _class_name property of its
               config object
            2. Finding the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class
               name.

        If a `controlnet` argument is passed, it will instantiate a [`StableDiffusionControlNetInpaintPipeline`]
        object.

        The pipeline is set in evaluation mode (`model.eval()`) by default.

        If you get the error message below, you need to finetune the weights for your downstream task:

        ```
        Some weights of UNet2DConditionModel were not initialized from the model checkpoint at
        stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
        - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the
        model instantiated
        You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
        ```

        Parameters:
            pretrained_model_or_path (`str` or `os.PathLike`, *optional*):
                Can be either:

                    - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
                      hosted on the Hub.
                    - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights
                      saved using [`~DiffusionPipeline.save_pretrained`].
            torch_dtype (`str` or `torch.dtype`, *optional*):
                Override the default `torch.dtype` and load the model with another dtype. If "auto" is passed, the
                dtype is automatically derived from the model's weights.
            force_download (`bool`, *optional*, defaults to `False`):
                Whether or not to force the (re-)download of the model weights and configuration files, overriding the
                cached versions if they exist.
            cache_dir (`Union[str, os.PathLike]`, *optional*):
                Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
                is not used.
            proxies (`Dict[str, str]`, *optional*):
                A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
                'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
            output_loading_info(`bool`, *optional*, defaults to `False`):
                Whether or not to also return a dictionary containing missing keys, unexpected keys and error
                messages.
            local_files_only (`bool`, *optional*, defaults to `False`):
                Whether to only load local model weights and configuration files or not. If set to `True`, the model
                won't be downloaded from the Hub.
            token (`str` or *bool*, *optional*):
                The token to use as HTTP bearer authorization for remote files.
If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. custom_revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, or a commit id similar to `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub. mirror (`str`, *optional*): Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information. device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. It doesn’t need to be defined for each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). max_memory (`Dict`, *optional*): A dictionary device identifier for the maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. offload_folder (`str` or `os.PathLike`, *optional*): The path to offload weights if device_map contains the value `"disk"`. offload_state_dict (`bool`, *optional*): If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True` when there is some disk offload. low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): Speed up model loading only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to `True` will raise an error. use_safetensors (`bool`, *optional*, defaults to `None`): If set to `None`, the safetensors weights are downloaded if they're available **and** if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline class). The overwritten components are passed directly to the pipelines `__init__` method. See example below for more information. variant (`str`, *optional*): Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when loading `from_flax`. <Tip> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`. 
</Tip> Examples: ```py >>> from diffusers import AutoPipelineForInpainting >>> pipeline = AutoPipelineForInpainting.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") >>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] ``` """ cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) token = kwargs.pop("token", None) local_files_only = kwargs.pop("local_files_only", False) revision = kwargs.pop("revision", None) load_config_kwargs = { "cache_dir": cache_dir, "force_download": force_download, "proxies": proxies, "token": token, "local_files_only": local_files_only, "revision": revision, } config = cls.load_config(pretrained_model_or_path, **load_config_kwargs) orig_class_name = config["_class_name"] # The `orig_class_name`` can be: # `- *InpaintPipeline` (for inpaint-specific checkpoint) # - `*ControlPipeline` (for Flux tools specific checkpoint) # - or *Pipeline (for regular text-to-image checkpoint) if "Inpaint" in orig_class_name: to_replace = "InpaintPipeline" elif "ControlPipeline" in orig_class_name: to_replace = "ControlPipeline" else: to_replace = "Pipeline" if "controlnet" in kwargs: if isinstance(kwargs["controlnet"], ControlNetUnionModel): orig_class_name = orig_class_name.replace(to_replace, "ControlNetUnion" + to_replace) else: orig_class_name = orig_class_name.replace(to_replace, "ControlNet" + to_replace) if "enable_pag" in kwargs: enable_pag = kwargs.pop("enable_pag") if enable_pag: orig_class_name = orig_class_name.replace(to_replace, "PAG" + to_replace) if to_replace == "ControlPipeline": orig_class_name = orig_class_name.replace(to_replace, "ControlInpaintPipeline") inpainting_cls = _get_task_class(AUTO_INPAINT_PIPELINES_MAPPING, orig_class_name) kwargs = {**load_config_kwargs, **kwargs} return inpainting_cls.from_pretrained(pretrained_model_or_path, **kwargs) @classmethod def from_pipe(cls, pipeline, **kwargs): r""" Instantiates a inpainting Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline class contain will be used to initialize the new pipeline without reallocating additional memory. The pipeline is set in evaluation mode (`model.eval()`) by default. Parameters: pipeline (`DiffusionPipeline`): an instantiated `DiffusionPipeline` object Examples: ```py >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting >>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( ... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False ... 
) >>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) >>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] ``` """ original_config = dict(pipeline.config) original_cls_name = pipeline.__class__.__name__ # derive the pipeline class to instantiate inpainting_cls = _get_task_class(AUTO_INPAINT_PIPELINES_MAPPING, original_cls_name) if "controlnet" in kwargs: if kwargs["controlnet"] is not None: inpainting_cls = _get_task_class( AUTO_INPAINT_PIPELINES_MAPPING, inpainting_cls.__name__.replace("ControlNet", "").replace( "InpaintPipeline", "ControlNetInpaintPipeline" ), ) else: inpainting_cls = _get_task_class( AUTO_INPAINT_PIPELINES_MAPPING, inpainting_cls.__name__.replace("ControlNetInpaintPipeline", "InpaintPipeline"), ) if "enable_pag" in kwargs: enable_pag = kwargs.pop("enable_pag") if enable_pag: inpainting_cls = _get_task_class( AUTO_INPAINT_PIPELINES_MAPPING, inpainting_cls.__name__.replace("PAG", "").replace("InpaintPipeline", "PAGInpaintPipeline"), ) else: inpainting_cls = _get_task_class( AUTO_INPAINT_PIPELINES_MAPPING, inpainting_cls.__name__.replace("PAGInpaintPipeline", "InpaintPipeline"), ) # define expected module and optional kwargs given the pipeline signature expected_modules, optional_kwargs = inpainting_cls._get_signature_keys(inpainting_cls) pretrained_model_name_or_path = original_config.pop("_name_or_path", None) # allow users pass modules in `kwargs` to override the original pipeline's components passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} original_class_obj = { k: pipeline.components[k] for k, v in pipeline.components.items() if k in expected_modules and k not in passed_class_obj } # allow users pass optional kwargs to override the original pipelines config attribute passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs} original_pipe_kwargs = { k: original_config[k] for k, v in original_config.items() if k in optional_kwargs and k not in passed_pipe_kwargs } # config that were not expected by original pipeline is stored as private attribute # we will pass them as optional arguments if they can be accepted by the pipeline additional_pipe_kwargs = [ k[1:] for k in original_config.keys() if k.startswith("_") and k[1:] in optional_kwargs and k[1:] not in passed_pipe_kwargs ] for k in additional_pipe_kwargs: original_pipe_kwargs[k] = original_config.pop(f"_{k}") inpainting_kwargs = {**passed_class_obj, **original_class_obj, **passed_pipe_kwargs, **original_pipe_kwargs} # store unused config as private attribute unused_original_config = { f"{'' if k.startswith('_') else '_'}{k}": original_config[k] for k, v in original_config.items() if k not in inpainting_kwargs } missing_modules = ( set(expected_modules) - set(inpainting_cls._optional_components) - set(inpainting_kwargs.keys()) ) if len(missing_modules) > 0: raise ValueError( f"Pipeline {inpainting_cls} expected {expected_modules}, but only {set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))} were passed" ) model = inpainting_cls(**inpainting_kwargs) model.register_to_config(_name_or_path=pretrained_model_name_or_path) model.register_to_config(**unused_original_config) return model
diffusers/src/diffusers/pipelines/auto_pipeline.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/auto_pipeline.py", "repo_id": "diffusers", "token_count": 24515 }
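The auto-pipeline file above resolves the task-specific class purely by pattern matching on the `_class_name` stored in `model_index.json`, and `from_pipe` reuses the modules of an already-loaded pipeline instead of allocating new ones. A minimal usage sketch of that round trip follows; the checkpoint id and prompts are illustrative placeholders rather than anything mandated by the source above.

```py
# Sketch: load one Stable Diffusion checkpoint, then switch task without extra memory.
import torch

from diffusers import AutoPipelineForImage2Image, AutoPipelineForText2Image

pipe_t2i = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe_t2i("a photo of a red fox in the snow").images[0]

# `from_pipe` maps the text-to-image class to its image-to-image counterpart and
# re-registers the same components, so nothing is re-downloaded or duplicated.
pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i)
refined = pipe_i2i("a watercolor painting of a red fox in the snow", image=image).images[0]
```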
import importlib.util import os import cv2 import numpy as np import torch from PIL import Image, ImageOps from torchvision.transforms import InterpolationMode from torchvision.transforms.functional import normalize, resize from ...utils import load_image _insightface_available = importlib.util.find_spec("insightface") is not None _consisid_eva_clip_available = importlib.util.find_spec("consisid_eva_clip") is not None _facexlib_available = importlib.util.find_spec("facexlib") is not None if _insightface_available: import insightface from insightface.app import FaceAnalysis else: raise ImportError("insightface is not available. Please install it using 'pip install insightface'.") if _consisid_eva_clip_available: from consisid_eva_clip import create_model_and_transforms from consisid_eva_clip.constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD else: raise ImportError("consisid_eva_clip is not available. Please install it using 'pip install consisid_eva_clip'.") if _facexlib_available: from facexlib.parsing import init_parsing_model from facexlib.utils.face_restoration_helper import FaceRestoreHelper else: raise ImportError("facexlib is not available. Please install it using 'pip install facexlib'.") def resize_numpy_image_long(image, resize_long_edge=768): """ Resize the input image to a specified long edge while maintaining aspect ratio. Args: image (numpy.ndarray): Input image (H x W x C or H x W). resize_long_edge (int): The target size for the long edge of the image. Default is 768. Returns: numpy.ndarray: Resized image with the long edge matching `resize_long_edge`, while maintaining the aspect ratio. """ h, w = image.shape[:2] if max(h, w) <= resize_long_edge: return image k = resize_long_edge / max(h, w) h = int(h * k) w = int(w * k) image = cv2.resize(image, (w, h), interpolation=cv2.INTER_LANCZOS4) return image def img2tensor(imgs, bgr2rgb=True, float32=True): """Numpy array to tensor. Args: imgs (list[ndarray] | ndarray): Input images. bgr2rgb (bool): Whether to change bgr to rgb. float32 (bool): Whether to change to float32. Returns: list[tensor] | tensor: Tensor images. If returned results only have one element, just return tensor. """ def _totensor(img, bgr2rgb, float32): if img.shape[2] == 3 and bgr2rgb: if img.dtype == "float64": img = img.astype("float32") img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = torch.from_numpy(img.transpose(2, 0, 1)) if float32: img = img.float() return img if isinstance(imgs, list): return [_totensor(img, bgr2rgb, float32) for img in imgs] return _totensor(imgs, bgr2rgb, float32) def to_gray(img): """ Converts an RGB image to grayscale by applying the standard luminosity formula. Args: img (torch.Tensor): The input image tensor with shape (batch_size, channels, height, width). The image is expected to be in RGB format (3 channels). Returns: torch.Tensor: The grayscale image tensor with shape (batch_size, 3, height, width). The grayscale values are replicated across all three channels. """ x = 0.299 * img[:, 0:1] + 0.587 * img[:, 1:2] + 0.114 * img[:, 2:3] x = x.repeat(1, 3, 1, 1) return x def process_face_embeddings( face_helper_1, clip_vision_model, face_helper_2, eva_transform_mean, eva_transform_std, app, device, weight_dtype, image, original_id_image=None, is_align_face=True, ): """ Process face embeddings from an image, extracting relevant features such as face embeddings, landmarks, and parsed face features using a series of face detection and alignment tools. 
Args: face_helper_1: Face helper object (first helper) for alignment and landmark detection. clip_vision_model: Pre-trained CLIP vision model used for feature extraction. face_helper_2: Face helper object (second helper) for embedding extraction. eva_transform_mean: Mean values for image normalization before passing to EVA model. eva_transform_std: Standard deviation values for image normalization before passing to EVA model. app: Application instance used for face detection. device: Device (CPU or GPU) where the computations will be performed. weight_dtype: Data type of the weights for precision (e.g., `torch.float32`). image: Input image in RGB format with pixel values in the range [0, 255]. original_id_image: (Optional) Original image for feature extraction if `is_align_face` is False. is_align_face: Boolean flag indicating whether face alignment should be performed. Returns: Tuple: - id_cond: Concatenated tensor of Ante face embedding and CLIP vision embedding - id_vit_hidden: Hidden state of the CLIP vision model, a list of tensors. - return_face_features_image_2: Processed face features image after normalization and parsing. - face_kps: Keypoints of the face detected in the image. """ face_helper_1.clean_all() image_bgr = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # get antelopev2 embedding face_info = app.get(image_bgr) if len(face_info) > 0: face_info = sorted(face_info, key=lambda x: (x["bbox"][2] - x["bbox"][0]) * (x["bbox"][3] - x["bbox"][1]))[ -1 ] # only use the maximum face id_ante_embedding = face_info["embedding"] # (512,) face_kps = face_info["kps"] else: id_ante_embedding = None face_kps = None # using facexlib to detect and align face face_helper_1.read_image(image_bgr) face_helper_1.get_face_landmarks_5(only_center_face=True) if face_kps is None: face_kps = face_helper_1.all_landmarks_5[0] face_helper_1.align_warp_face() if len(face_helper_1.cropped_faces) == 0: raise RuntimeError("facexlib align face fail") align_face = face_helper_1.cropped_faces[0] # (512, 512, 3) # RGB # incase insightface didn't detect face if id_ante_embedding is None: print("fail to detect face using insightface, extract embedding on align face") id_ante_embedding = face_helper_2.get_feat(align_face) id_ante_embedding = torch.from_numpy(id_ante_embedding).to(device, weight_dtype) # torch.Size([512]) if id_ante_embedding.ndim == 1: id_ante_embedding = id_ante_embedding.unsqueeze(0) # torch.Size([1, 512]) # parsing if is_align_face: input = img2tensor(align_face, bgr2rgb=True).unsqueeze(0) / 255.0 # torch.Size([1, 3, 512, 512]) input = input.to(device) parsing_out = face_helper_1.face_parse(normalize(input, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]))[0] parsing_out = parsing_out.argmax(dim=1, keepdim=True) # torch.Size([1, 1, 512, 512]) bg_label = [0, 16, 18, 7, 8, 9, 14, 15] bg = sum(parsing_out == i for i in bg_label).bool() white_image = torch.ones_like(input) # torch.Size([1, 3, 512, 512]) # only keep the face features return_face_features_image = torch.where(bg, white_image, to_gray(input)) # torch.Size([1, 3, 512, 512]) return_face_features_image_2 = torch.where(bg, white_image, input) # torch.Size([1, 3, 512, 512]) else: original_image_bgr = cv2.cvtColor(original_id_image, cv2.COLOR_RGB2BGR) input = img2tensor(original_image_bgr, bgr2rgb=True).unsqueeze(0) / 255.0 # torch.Size([1, 3, 512, 512]) input = input.to(device) return_face_features_image = return_face_features_image_2 = input # transform img before sending to eva-clip-vit face_features_image = resize( return_face_features_image, 
clip_vision_model.image_size, InterpolationMode.BICUBIC ) # torch.Size([1, 3, 336, 336]) face_features_image = normalize(face_features_image, eva_transform_mean, eva_transform_std) id_cond_vit, id_vit_hidden = clip_vision_model( face_features_image.to(weight_dtype), return_all_features=False, return_hidden=True, shuffle=False ) # torch.Size([1, 768]), list(torch.Size([1, 577, 1024])) id_cond_vit_norm = torch.norm(id_cond_vit, 2, 1, True) id_cond_vit = torch.div(id_cond_vit, id_cond_vit_norm) id_cond = torch.cat( [id_ante_embedding, id_cond_vit], dim=-1 ) # torch.Size([1, 512]), torch.Size([1, 768]) -> torch.Size([1, 1280]) return ( id_cond, id_vit_hidden, return_face_features_image_2, face_kps, ) # torch.Size([1, 1280]), list(torch.Size([1, 577, 1024])) def process_face_embeddings_infer( face_helper_1, clip_vision_model, face_helper_2, eva_transform_mean, eva_transform_std, app, device, weight_dtype, img_file_path, is_align_face=True, ): """ Process face embeddings from an input image for inference, including alignment, feature extraction, and embedding concatenation. Args: face_helper_1: Face helper object (first helper) for alignment and landmark detection. clip_vision_model: Pre-trained CLIP vision model used for feature extraction. face_helper_2: Face helper object (second helper) for embedding extraction. eva_transform_mean: Mean values for image normalization before passing to EVA model. eva_transform_std: Standard deviation values for image normalization before passing to EVA model. app: Application instance used for face detection. device: Device (CPU or GPU) where the computations will be performed. weight_dtype: Data type of the weights for precision (e.g., `torch.float32`). img_file_path: Path to the input image file (string) or a numpy array representing an image. is_align_face: Boolean flag indicating whether face alignment should be performed (default: True). Returns: Tuple: - id_cond: Concatenated tensor of Ante face embedding and CLIP vision embedding. - id_vit_hidden: Hidden state of the CLIP vision model, a list of tensors. - image: Processed face image after feature extraction and alignment. - face_kps: Keypoints of the face detected in the image. """ # Load and preprocess the input image if isinstance(img_file_path, str): image = np.array(load_image(image=img_file_path).convert("RGB")) else: image = np.array(ImageOps.exif_transpose(Image.fromarray(img_file_path)).convert("RGB")) # Resize image to ensure the longer side is 1024 pixels image = resize_numpy_image_long(image, 1024) original_id_image = image # Process the image to extract face embeddings and related features id_cond, id_vit_hidden, align_crop_face_image, face_kps = process_face_embeddings( face_helper_1, clip_vision_model, face_helper_2, eva_transform_mean, eva_transform_std, app, device, weight_dtype, image, original_id_image, is_align_face, ) # Convert the aligned cropped face image (torch tensor) to a numpy array tensor = align_crop_face_image.cpu().detach() tensor = tensor.squeeze() tensor = tensor.permute(1, 2, 0) tensor = tensor.numpy() * 255 tensor = tensor.astype(np.uint8) image = ImageOps.exif_transpose(Image.fromarray(tensor)) return id_cond, id_vit_hidden, image, face_kps def prepare_face_models(model_path, device, dtype): """ Prepare all face models for the facial recognition task. Parameters: - model_path: Path to the directory containing model files. - device: The device (e.g., 'cuda', 'cpu') where models will be loaded. - dtype: Data type (e.g., torch.float32) for model inference. 
Returns: - face_helper_1: First face restoration helper. - face_helper_2: Second face restoration helper. - face_clip_model: CLIP model for face extraction. - eva_transform_mean: Mean value for image normalization. - eva_transform_std: Standard deviation value for image normalization. - face_main_model: Main face analysis model. """ # get helper model face_helper_1 = FaceRestoreHelper( upscale_factor=1, face_size=512, crop_ratio=(1, 1), det_model="retinaface_resnet50", save_ext="png", device=device, model_rootpath=os.path.join(model_path, "face_encoder"), ) face_helper_1.face_parse = None face_helper_1.face_parse = init_parsing_model( model_name="bisenet", device=device, model_rootpath=os.path.join(model_path, "face_encoder") ) face_helper_2 = insightface.model_zoo.get_model( f"{model_path}/face_encoder/models/antelopev2/glintr100.onnx", providers=["CUDAExecutionProvider"] ) face_helper_2.prepare(ctx_id=0) # get local facial extractor part 1 model, _, _ = create_model_and_transforms( "EVA02-CLIP-L-14-336", os.path.join(model_path, "face_encoder", "EVA02_CLIP_L_336_psz14_s6B.pt"), force_custom_clip=True, ) face_clip_model = model.visual eva_transform_mean = getattr(face_clip_model, "image_mean", OPENAI_DATASET_MEAN) eva_transform_std = getattr(face_clip_model, "image_std", OPENAI_DATASET_STD) if not isinstance(eva_transform_mean, (list, tuple)): eva_transform_mean = (eva_transform_mean,) * 3 if not isinstance(eva_transform_std, (list, tuple)): eva_transform_std = (eva_transform_std,) * 3 eva_transform_mean = eva_transform_mean eva_transform_std = eva_transform_std # get local facial extractor part 2 face_main_model = FaceAnalysis( name="antelopev2", root=os.path.join(model_path, "face_encoder"), providers=["CUDAExecutionProvider"] ) face_main_model.prepare(ctx_id=0, det_size=(640, 640)) # move face models to device face_helper_1.face_det.eval() face_helper_1.face_parse.eval() face_clip_model.eval() face_helper_1.face_det.to(device) face_helper_1.face_parse.to(device) face_clip_model.to(device, dtype=dtype) return face_helper_1, face_helper_2, face_clip_model, face_main_model, eva_transform_mean, eva_transform_std
diffusers/src/diffusers/pipelines/consisid/consisid_utils.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/consisid/consisid_utils.py", "repo_id": "diffusers", "token_count": 5836 }
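The ConsisID utilities above are meant to be chained: `prepare_face_models` loads the facexlib, insightface and EVA-CLIP components from a local checkpoint directory, and `process_face_embeddings_infer` turns a single reference photo into the identity-conditioning tensors. A hedged sketch of that wiring follows; the checkpoint directory and image path are placeholders, and the optional dependencies named in the imports above must be installed.

```py
# Sketch of chaining the helpers defined in consisid_utils.py (paths are placeholders).
import torch

from diffusers.pipelines.consisid.consisid_utils import (
    prepare_face_models,
    process_face_embeddings_infer,
)

device, dtype = "cuda", torch.bfloat16

(
    face_helper_1,
    face_helper_2,
    face_clip_model,
    face_main_model,
    eva_transform_mean,
    eva_transform_std,
) = prepare_face_models("path/to/consisid_checkpoint", device, dtype)

# Returns the identity embedding, EVA-CLIP hidden states, the aligned face crop and keypoints.
id_cond, id_vit_hidden, face_image, face_kps = process_face_embeddings_infer(
    face_helper_1,
    face_clip_model,
    face_helper_2,
    eva_transform_mean,
    eva_transform_std,
    face_main_model,
    device,
    dtype,
    "path/to/reference_face.png",
    is_align_face=True,
)
```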
# Copyright 2022 The Music Spectrogram Diffusion Authors. # Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch import torch.nn as nn from transformers.modeling_utils import ModuleUtilsMixin from transformers.models.t5.modeling_t5 import T5Block, T5Config, T5LayerNorm from ....configuration_utils import ConfigMixin, register_to_config from ....models import ModelMixin class SpectrogramNotesEncoder(ModelMixin, ConfigMixin, ModuleUtilsMixin): @register_to_config def __init__( self, max_length: int, vocab_size: int, d_model: int, dropout_rate: float, num_layers: int, num_heads: int, d_kv: int, d_ff: int, feed_forward_proj: str, is_decoder: bool = False, ): super().__init__() self.token_embedder = nn.Embedding(vocab_size, d_model) self.position_encoding = nn.Embedding(max_length, d_model) self.position_encoding.weight.requires_grad = False self.dropout_pre = nn.Dropout(p=dropout_rate) t5config = T5Config( vocab_size=vocab_size, d_model=d_model, num_heads=num_heads, d_kv=d_kv, d_ff=d_ff, dropout_rate=dropout_rate, feed_forward_proj=feed_forward_proj, is_decoder=is_decoder, is_encoder_decoder=False, ) self.encoders = nn.ModuleList() for lyr_num in range(num_layers): lyr = T5Block(t5config) self.encoders.append(lyr) self.layer_norm = T5LayerNorm(d_model) self.dropout_post = nn.Dropout(p=dropout_rate) def forward(self, encoder_input_tokens, encoder_inputs_mask): x = self.token_embedder(encoder_input_tokens) seq_length = encoder_input_tokens.shape[1] inputs_positions = torch.arange(seq_length, device=encoder_input_tokens.device) x += self.position_encoding(inputs_positions) x = self.dropout_pre(x) # inverted the attention mask input_shape = encoder_input_tokens.size() extended_attention_mask = self.get_extended_attention_mask(encoder_inputs_mask, input_shape) for lyr in self.encoders: x = lyr(x, extended_attention_mask)[0] x = self.layer_norm(x) return self.dropout_post(x), encoder_inputs_mask
diffusers/src/diffusers/pipelines/deprecated/spectrogram_diffusion/notes_encoder.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/deprecated/spectrogram_diffusion/notes_encoder.py", "repo_id": "diffusers", "token_count": 1254 }
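`SpectrogramNotesEncoder` above is a thin T5 encoder stack with learned absolute position embeddings: it consumes note tokens plus a padding mask and returns the encoded sequence together with that mask. The smoke-test sketch below uses illustrative hyperparameters, not the values of the released spectrogram-diffusion checkpoint, and assumes the import path implied by the file path recorded above.

```py
# Sketch: instantiate the notes encoder with toy hyperparameters and run a forward pass.
import torch

from diffusers.pipelines.deprecated.spectrogram_diffusion.notes_encoder import SpectrogramNotesEncoder

encoder = SpectrogramNotesEncoder(
    max_length=2048,
    vocab_size=1536,
    d_model=768,
    dropout_rate=0.1,
    num_layers=12,
    num_heads=12,
    d_kv=64,
    d_ff=2048,
    feed_forward_proj="gated-gelu",
)
encoder.eval()

tokens = torch.randint(0, 1536, (1, 2048))    # fake note tokens
mask = torch.ones(1, 2048, dtype=torch.long)  # 1 = real token, 0 = padding
with torch.no_grad():
    hidden_states, out_mask = encoder(encoder_input_tokens=tokens, encoder_inputs_mask=mask)
print(hidden_states.shape)  # (1, 2048, 768)
```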
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect
from typing import Callable, List, Optional, Union

import torch
import torch.utils.checkpoint
from transformers import CLIPImageProcessor, CLIPTextModelWithProjection, CLIPTokenizer

from ....image_processor import VaeImageProcessor
from ....models import AutoencoderKL, Transformer2DModel, UNet2DConditionModel
from ....schedulers import KarrasDiffusionSchedulers
from ....utils import deprecate, logging
from ....utils.torch_utils import randn_tensor
from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .modeling_text_unet import UNetFlatConditionModel


logger = logging.get_logger(__name__)  # pylint: disable=invalid-name


class VersatileDiffusionTextToImagePipeline(DiffusionPipeline):
    r"""
    Pipeline for text-to-image generation using Versatile Diffusion.

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
    implemented for all pipelines (downloading, saving, running on a particular device, etc.).

    Parameters:
        tokenizer ([`~transformers.CLIPTokenizer`]):
            A `CLIPTokenizer` to tokenize text.
        text_encoder ([`~transformers.CLIPTextModelWithProjection`]):
            Frozen CLIP text-encoder used to condition the image generation.
        image_unet ([`UNet2DConditionModel`]):
            A `UNet2DConditionModel` to denoise the encoded image latents.
        text_unet ([`UNetFlatConditionModel`]):
            A "flat" text UNet whose attention blocks are swapped into `image_unet` for text-to-image generation.
        vae ([`AutoencoderKL`]):
            Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
        scheduler ([`SchedulerMixin`]):
            A scheduler to be used in combination with `image_unet` to denoise the encoded image latents. Can be one
            of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
""" model_cpu_offload_seq = "bert->unet->vqvae" tokenizer: CLIPTokenizer image_feature_extractor: CLIPImageProcessor text_encoder: CLIPTextModelWithProjection image_unet: UNet2DConditionModel text_unet: UNetFlatConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers _optional_components = ["text_unet"] def __init__( self, tokenizer: CLIPTokenizer, text_encoder: CLIPTextModelWithProjection, image_unet: UNet2DConditionModel, text_unet: UNetFlatConditionModel, vae: AutoencoderKL, scheduler: KarrasDiffusionSchedulers, ): super().__init__() self.register_modules( tokenizer=tokenizer, text_encoder=text_encoder, image_unet=image_unet, text_unet=text_unet, vae=vae, scheduler=scheduler, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) if self.text_unet is not None: self._swap_unet_attention_blocks() def _swap_unet_attention_blocks(self): """ Swap the `Transformer2DModel` blocks between the image and text UNets """ for name, module in self.image_unet.named_modules(): if isinstance(module, Transformer2DModel): parent_name, index = name.rsplit(".", 1) index = int(index) self.image_unet.get_submodule(parent_name)[index], self.text_unet.get_submodule(parent_name)[index] = ( self.text_unet.get_submodule(parent_name)[index], self.image_unet.get_submodule(parent_name)[index], ) def remove_unused_weights(self): self.register_modules(text_unet=None) def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). 
""" def normalize_embeddings(encoder_output): embeds = self.text_encoder.text_projection(encoder_output.last_hidden_state) embeds_pooled = encoder_output.text_embeds embeds = embeds / torch.norm(embeds_pooled.unsqueeze(1), dim=-1, keepdim=True) return embeds batch_size = len(prompt) if isinstance(prompt, list) else 1 text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids if not torch.equal(text_input_ids, untruncated_ids): removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, ) prompt_embeds = normalize_embeddings(prompt_embeds) # duplicate text embeddings for each generation per prompt, using mps friendly method bs_embed, seq_len, _ = prompt_embeds.shape prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt max_length = text_input_ids.shape[-1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = normalize_embeddings(negative_prompt_embeds) # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) # For classifier free guidance, we need to do two forward passes. 
# Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) return prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents def decode_latents(self, latents): deprecation_message = "The decode_latents method is deprecated and will be removed in 1.0.0. Please use VaeImageProcessor.postprocess(...) instead" deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False) latents = 1 / self.vae.config.scaling_factor * latents image = self.vae.decode(latents, return_dict=False)[0] image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() return image # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, prompt, height, width, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None, callback_on_step_end_tensor_inputs=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." 
) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents @torch.no_grad() def __call__( self, prompt: Union[str, List[str]], height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, **kwargs, ): r""" The call function to the pipeline for generation. Args: prompt (`str` or `List[str]`): The prompt or prompts to guide image generation. height (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated image. width (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. generator (`torch.Generator`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. 
latents (`torch.Tensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that calls every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. Examples: ```py >>> from diffusers import VersatileDiffusionTextToImagePipeline >>> import torch >>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( ... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 ... ) >>> pipe.remove_unused_weights() >>> pipe = pipe.to("cuda") >>> generator = torch.Generator(device="cuda").manual_seed(0) >>> image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] >>> image.save("./astronaut.png") ``` Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images. """ # 0. Default height and width to unet height = height or self.image_unet.config.sample_size * self.vae_scale_factor width = width or self.image_unet.config.sample_size * self.vae_scale_factor # 1. Check inputs. Raise error if not correct self.check_inputs(prompt, height, width, callback_steps) # 2. Define call parameters batch_size = 1 if isinstance(prompt, str) else len(prompt) device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Encode input prompt prompt_embeds = self._encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt ) # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 5. Prepare latent variables num_channels_latents = self.image_unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 7. 
Denoising loop for i, t in enumerate(self.progress_bar(timesteps)): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.image_unet(latent_model_input, t, encoder_hidden_states=prompt_embeds).sample # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if not output_type == "latent": image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] else: image = latents image = self.image_processor.postprocess(image, output_type=output_type) if not return_dict: return (image,) return ImagePipelineOutput(images=image)
diffusers/src/diffusers/pipelines/deprecated/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/deprecated/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py", "repo_id": "diffusers", "token_count": 9882 }
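The docstring inside the file above already carries a basic text-to-image example; the sketch below only adds the usual classifier-free-guidance knobs (a negative prompt and a fixed seed). The prompt, seed and output filename are illustrative.

```py
# Sketch: Versatile Diffusion text-to-image with a negative prompt and fixed seed.
import torch

from diffusers import VersatileDiffusionTextToImagePipeline

pipe = VersatileDiffusionTextToImagePipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
)
pipe.remove_unused_weights()  # the text_unet is only needed for the attention-block swap
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(
    "a painting of a squirrel eating a burger",
    negative_prompt="blurry, low quality",
    num_inference_steps=50,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("squirrel.png")
```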
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from copy import deepcopy from typing import Callable, List, Optional, Union import numpy as np import PIL.Image import torch import torch.nn.functional as F from packaging import version from PIL import Image from transformers import ( XLMRobertaTokenizer, ) from ... import __version__ from ...models import UNet2DConditionModel, VQModel from ...schedulers import DDIMScheduler from ...utils import ( is_torch_xla_available, logging, replace_example_docstring, ) from ...utils.torch_utils import randn_tensor from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput from .text_encoder import MultilingualCLIP if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline >>> from diffusers.utils import load_image >>> import torch >>> import numpy as np >>> pipe_prior = KandinskyPriorPipeline.from_pretrained( ... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 ... ) >>> pipe_prior.to("cuda") >>> prompt = "a hat" >>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) >>> pipe = KandinskyInpaintPipeline.from_pretrained( ... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 ... ) >>> pipe.to("cuda") >>> init_image = load_image( ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" ... "/kandinsky/cat.png" ... ) >>> mask = np.zeros((768, 768), dtype=np.float32) >>> mask[:250, 250:-250] = 1 >>> out = pipe( ... prompt, ... image=init_image, ... mask_image=mask, ... image_embeds=image_emb, ... negative_image_embeds=zero_image_emb, ... height=768, ... width=768, ... num_inference_steps=50, ... ) >>> image = out.images[0] >>> image.save("cat_with_hat.png") ``` """ def get_new_h_w(h, w, scale_factor=8): new_h = h // scale_factor**2 if h % scale_factor**2 != 0: new_h += 1 new_w = w // scale_factor**2 if w % scale_factor**2 != 0: new_w += 1 return new_h * scale_factor, new_w * scale_factor def prepare_mask(masks): prepared_masks = [] for mask in masks: old_mask = deepcopy(mask) for i in range(mask.shape[1]): for j in range(mask.shape[2]): if old_mask[0][i][j] == 1: continue if i != 0: mask[:, i - 1, j] = 0 if j != 0: mask[:, i, j - 1] = 0 if i != 0 and j != 0: mask[:, i - 1, j - 1] = 0 if i != mask.shape[1] - 1: mask[:, i + 1, j] = 0 if j != mask.shape[2] - 1: mask[:, i, j + 1] = 0 if i != mask.shape[1] - 1 and j != mask.shape[2] - 1: mask[:, i + 1, j + 1] = 0 prepared_masks.append(mask) return torch.stack(prepared_masks, dim=0) def prepare_mask_and_masked_image(image, mask, height, width): r""" Prepares a pair (mask, image) to be consumed by the Kandinsky inpaint pipeline. 
This means that those inputs will be converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for the ``image`` and ``1`` for the ``mask``. The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be binarized (``mask > 0.5``) and cast to ``torch.float32`` too. Args: image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint. It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width`` ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``. mask (_type_): The mask to apply to the image, i.e. regions to inpaint. It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width`` ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``. height (`int`, *optional*, defaults to 512): The height in pixels of the generated image. width (`int`, *optional*, defaults to 512): The width in pixels of the generated image. Raises: ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range. ValueError: ``torch.Tensor`` mask should be in the ``[0, 1]`` range. ValueError: ``mask`` and ``image`` should have the same spatial dimensions. TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not (ot the other way around). Returns: tuple[torch.Tensor]: The pair (mask, image) as ``torch.Tensor`` with 4 dimensions: ``batch x channels x height x width``. """ if image is None: raise ValueError("`image` input cannot be undefined.") if mask is None: raise ValueError("`mask_image` input cannot be undefined.") if isinstance(image, torch.Tensor): if not isinstance(mask, torch.Tensor): raise TypeError(f"`image` is a torch.Tensor but `mask` (type: {type(mask)} is not") # Batch single image if image.ndim == 3: assert image.shape[0] == 3, "Image outside a batch should be of shape (3, H, W)" image = image.unsqueeze(0) # Batch and add channel dim for single mask if mask.ndim == 2: mask = mask.unsqueeze(0).unsqueeze(0) # Batch single mask or add channel dim if mask.ndim == 3: # Single batched mask, no channel dim or single mask not batched but channel dim if mask.shape[0] == 1: mask = mask.unsqueeze(0) # Batched masks no channel dim else: mask = mask.unsqueeze(1) assert image.ndim == 4 and mask.ndim == 4, "Image and Mask must have 4 dimensions" assert image.shape[-2:] == mask.shape[-2:], "Image and Mask must have the same spatial dimensions" assert image.shape[0] == mask.shape[0], "Image and Mask must have the same batch size" # Check image is in [-1, 1] if image.min() < -1 or image.max() > 1: raise ValueError("Image should be in [-1, 1] range") # Check mask is in [0, 1] if mask.min() < 0 or mask.max() > 1: raise ValueError("Mask should be in [0, 1] range") # Binarize mask mask[mask < 0.5] = 0 mask[mask >= 0.5] = 1 # Image as float32 image = image.to(dtype=torch.float32) elif isinstance(mask, torch.Tensor): raise TypeError(f"`mask` is a torch.Tensor but `image` (type: {type(image)} is not") else: # preprocess image if isinstance(image, (PIL.Image.Image, np.ndarray)): image = [image] if isinstance(image, list) and isinstance(image[0], PIL.Image.Image): # resize all images w.r.t passed height an width image = [i.resize((width, height), resample=Image.BICUBIC, reducing_gap=1) for i in image] image = [np.array(i.convert("RGB"))[None, :] for i in image] image = np.concatenate(image, axis=0) elif isinstance(image, list) and isinstance(image[0], np.ndarray): image = np.concatenate([i[None, :] for i in 
image], axis=0) image = image.transpose(0, 3, 1, 2) image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 # preprocess mask if isinstance(mask, (PIL.Image.Image, np.ndarray)): mask = [mask] if isinstance(mask, list) and isinstance(mask[0], PIL.Image.Image): mask = [i.resize((width, height), resample=PIL.Image.LANCZOS) for i in mask] mask = np.concatenate([np.array(m.convert("L"))[None, None, :] for m in mask], axis=0) mask = mask.astype(np.float32) / 255.0 elif isinstance(mask, list) and isinstance(mask[0], np.ndarray): mask = np.concatenate([m[None, None, :] for m in mask], axis=0) mask[mask < 0.5] = 0 mask[mask >= 0.5] = 1 mask = torch.from_numpy(mask) mask = 1 - mask return mask, image class KandinskyInpaintPipeline(DiffusionPipeline): """ Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: text_encoder ([`MultilingualCLIP`]): Frozen text-encoder. tokenizer ([`XLMRobertaTokenizer`]): Tokenizer of class scheduler ([`DDIMScheduler`]): A scheduler to be used in combination with `unet` to generate image latents. unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the image embedding. movq ([`VQModel`]): MoVQ image encoder and decoder """ model_cpu_offload_seq = "text_encoder->unet->movq" def __init__( self, text_encoder: MultilingualCLIP, movq: VQModel, tokenizer: XLMRobertaTokenizer, unet: UNet2DConditionModel, scheduler: DDIMScheduler, ): super().__init__() self.register_modules( text_encoder=text_encoder, movq=movq, tokenizer=tokenizer, unet=unet, scheduler=scheduler, ) self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1) self._warn_has_been_called = False # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: if latents.shape != shape: raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") latents = latents.to(device) latents = latents * scheduler.init_noise_sigma return latents def _encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, ): batch_size = len(prompt) if isinstance(prompt, list) else 1 # get prompt text embeddings text_inputs = self.tokenizer( prompt, padding="max_length", max_length=77, truncation=True, return_attention_mask=True, add_special_tokens=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) text_input_ids = text_input_ids.to(device) text_mask = text_inputs.attention_mask.to(device) prompt_embeds, text_encoder_hidden_states = self.text_encoder( input_ids=text_input_ids, attention_mask=text_mask ) prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0) 
text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0) if do_classifier_free_guidance: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=77, truncation=True, return_attention_mask=True, add_special_tokens=True, return_tensors="pt", ) uncond_text_input_ids = uncond_input.input_ids.to(device) uncond_text_mask = uncond_input.attention_mask.to(device) negative_prompt_embeds, uncond_text_encoder_hidden_states = self.text_encoder( input_ids=uncond_text_input_ids, attention_mask=uncond_text_mask ) # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len) seq_len = uncond_text_encoder_hidden_states.shape[1] uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1) uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view( batch_size * num_images_per_prompt, seq_len, -1 ) uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0) # done duplicates # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states]) text_mask = torch.cat([uncond_text_mask, text_mask]) return prompt_embeds, text_encoder_hidden_states, text_mask @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]], image: Union[torch.Tensor, PIL.Image.Image], mask_image: Union[torch.Tensor, PIL.Image.Image, np.ndarray], image_embeds: torch.Tensor, negative_image_embeds: torch.Tensor, negative_prompt: Optional[Union[str, List[str]]] = None, height: int = 512, width: int = 512, num_inference_steps: int = 100, guidance_scale: float = 4.0, num_images_per_prompt: int = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, return_dict: bool = True, ): """ Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`): The prompt or prompts to guide the image generation. image (`torch.Tensor`, `PIL.Image.Image` or `np.ndarray`): `Image`, or tensor representing an image batch, that will be used as the starting point for the process. 
mask_image (`PIL.Image.Image`,`torch.Tensor` or `np.ndarray`): `Image`, or a tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the expected shape would be either `(B, 1, H, W)`, `(B, H, W)`, `(1, H, W)` or `(H, W)`. If image is a PIL image or numpy array, mask should also be either a PIL image or numpy array. If it is a PIL image, it will be converted to a single channel (luminance) before use. If it is a numpy array, the expected shape is `(H, W)`. image_embeds (`torch.Tensor` or `List[torch.Tensor]`): The CLIP image embeddings for the text prompt, which will be used to condition the image generation. negative_image_embeds (`torch.Tensor` or `List[torch.Tensor]`): The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). height (`int`, *optional*, defaults to 512): The height in pixels of the generated image. width (`int`, *optional*, defaults to 512): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 4.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). `guidance_scale` is defined as `w` of equation 2 of [Imagen Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`, usually at the expense of lower image quality. num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random `generator`. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` (`np.array`) or `"pt"` (`torch.Tensor`). callback (`Callable`, *optional*): A function that is called every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
Examples: Returns: [`~pipelines.ImagePipelineOutput`] or `tuple` """ if not self._warn_has_been_called and version.parse(version.parse(__version__).base_version) < version.parse( "0.23.0.dev0" ): logger.warning( "Please note that the expected format of `mask_image` has recently been changed. " "Before diffusers == 0.19.0, Kandinsky Inpainting pipelines repainted black pixels and preserved white pixels. " "As of diffusers==0.19.0 this behavior has been inverted. Now white pixels are repainted and black pixels are preserved. " "This way, Kandinsky's masking behavior is aligned with Stable Diffusion. " "THIS means that you HAVE to invert the input mask to have the same behavior as before as explained in https://github.com/huggingface/diffusers/pull/4207. " "This warning will be suppressed after the first inference call and will be removed in diffusers>0.23.0" ) self._warn_has_been_called = True # Define call parameters if isinstance(prompt, str): batch_size = 1 elif isinstance(prompt, list): batch_size = len(prompt) else: raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") device = self._execution_device batch_size = batch_size * num_images_per_prompt do_classifier_free_guidance = guidance_scale > 1.0 prompt_embeds, text_encoder_hidden_states, _ = self._encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt ) if isinstance(image_embeds, list): image_embeds = torch.cat(image_embeds, dim=0) if isinstance(negative_image_embeds, list): negative_image_embeds = torch.cat(negative_image_embeds, dim=0) if do_classifier_free_guidance: image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0) image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to( dtype=prompt_embeds.dtype, device=device ) # preprocess image and mask mask_image, image = prepare_mask_and_masked_image(image, mask_image, height, width) image = image.to(dtype=prompt_embeds.dtype, device=device) image = self.movq.encode(image)["latents"] mask_image = mask_image.to(dtype=prompt_embeds.dtype, device=device) image_shape = tuple(image.shape[-2:]) mask_image = F.interpolate( mask_image, image_shape, mode="nearest", ) mask_image = prepare_mask(mask_image) masked_image = image * mask_image mask_image = mask_image.repeat_interleave(num_images_per_prompt, dim=0) masked_image = masked_image.repeat_interleave(num_images_per_prompt, dim=0) if do_classifier_free_guidance: mask_image = mask_image.repeat(2, 1, 1, 1) masked_image = masked_image.repeat(2, 1, 1, 1) self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps_tensor = self.scheduler.timesteps num_channels_latents = self.movq.config.latent_channels # get h, w for latents sample_height, sample_width = get_new_h_w(height, width, self.movq_scale_factor) # create initial latent latents = self.prepare_latents( (batch_size, num_channels_latents, sample_height, sample_width), text_encoder_hidden_states.dtype, device, generator, latents, self.scheduler, ) # Check that sizes of mask, masked image and latents match with expected num_channels_mask = mask_image.shape[1] num_channels_masked_image = masked_image.shape[1] if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels: raise ValueError( f"Incorrect configuration settings!
The config of `pipeline.unet`: {self.unet.config} expects" f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +" f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}" f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of" " `pipeline.unet` or your `mask_image` or `image` input." ) for i, t in enumerate(self.progress_bar(timesteps_tensor)): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = torch.cat([latent_model_input, masked_image, mask_image], dim=1) added_cond_kwargs = {"text_embeds": prompt_embeds, "image_embeds": image_embeds} noise_pred = self.unet( sample=latent_model_input, timestep=t, encoder_hidden_states=text_encoder_hidden_states, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] if do_classifier_free_guidance: noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1) noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) _, variance_pred_text = variance_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1) if not ( hasattr(self.scheduler.config, "variance_type") and self.scheduler.config.variance_type in ["learned", "learned_range"] ): noise_pred, _ = noise_pred.split(latents.shape[1], dim=1) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step( noise_pred, t, latents, generator=generator, ).prev_sample if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if XLA_AVAILABLE: xm.mark_step() # post-processing image = self.movq.decode(latents, force_not_quantize=True)["sample"] self.maybe_free_model_hooks() if output_type not in ["pt", "np", "pil"]: raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}") if output_type in ["np", "pil"]: image = image * 0.5 + 0.5 image = image.clamp(0, 1) image = image.cpu().permute(0, 2, 3, 1).float().numpy() if output_type == "pil": image = self.numpy_to_pil(image) if not return_dict: return (image,) return ImagePipelineOutput(images=image)
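For orientation, a minimal usage sketch of this inpainting pipeline (not taken from the file above; it assumes the `kandinsky-community/kandinsky-2-1-prior` and `kandinsky-community/kandinsky-2-1-inpaint` checkpoints, a CUDA device, and an illustrative image URL). The prior pipeline supplies the `image_embeds`/`negative_image_embeds` that `__call__` expects, and the mask follows the convention documented above: white pixels are repainted, black pixels are preserved.

```py
import numpy as np
import torch

from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline
from diffusers.utils import load_image

# the prior produces the CLIP image embeddings consumed by the inpainting pipeline
pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyInpaintPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16
).to("cuda")

prompt = "a hat"
image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False)

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)

# float mask in [0, 1]; values >= 0.5 are repainted, the rest is preserved
mask = np.zeros((768, 768), dtype=np.float32)
mask[:250, 250:-250] = 1

image = pipe(
    prompt,
    image=init_image,
    mask_image=mask,
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
```

Keep the mask as `float32` in `[0, 1]`; `prepare_mask_and_masked_image` binarizes it at 0.5 and inverts it internally before it reaches the U-Net.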
diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py", "repo_id": "diffusers", "token_count": 12891 }
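The LEdits++ Stable Diffusion XL pipeline that follows builds its implicit editing masks by thresholding a per-concept guidance map at the `edit_threshold` quantile (see the `torch.quantile` calls inside its denoising loop). As a stand-alone illustration of just that step, here is a small sketch with hypothetical tensor names and random data; it mirrors the loop's logic but is not part of the library.

```py
import torch

# hypothetical guidance term for one edit concept: (batch, latent_channels, h, w)
noise_guidance_edit_tmp = torch.randn(1, 4, 64, 64)
edit_threshold = 0.9

# channel-summed magnitude, repeated so the mask matches the latent channels
magnitude = torch.abs(noise_guidance_edit_tmp).sum(dim=1, keepdim=True).repeat(1, 4, 1, 1)

# per-sample quantile over the spatial positions (torch.quantile expects float32)
tmp = torch.quantile(magnitude.flatten(start_dim=2), edit_threshold, dim=2, keepdim=False)

# binary mask keeping roughly the top (1 - edit_threshold) fraction of positions
mask = torch.where(
    magnitude >= tmp[:, :, None, None],
    torch.ones_like(noise_guidance_edit_tmp),
    torch.zeros_like(noise_guidance_edit_tmp),
)

# the guidance term is applied only inside the masked region
masked_guidance = noise_guidance_edit_tmp * mask
```

In the pipeline itself this mask can additionally be intersected with a smoothed cross-attention mask (`use_intersect_mask`), and a user-provided mask can further restrict where the guidance is applied.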
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import math from typing import Any, Callable, Dict, List, Optional, Tuple, Union import torch import torch.nn.functional as F from transformers import ( CLIPImageProcessor, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer, CLIPVisionModelWithProjection, ) from ...image_processor import PipelineImageInput, VaeImageProcessor from ...loaders import ( FromSingleFileMixin, IPAdapterMixin, StableDiffusionXLLoraLoaderMixin, TextualInversionLoaderMixin, ) from ...models import AutoencoderKL, UNet2DConditionModel from ...models.attention_processor import ( Attention, AttnProcessor, AttnProcessor2_0, XFormersAttnProcessor, ) from ...models.lora import adjust_lora_scale_text_encoder from ...schedulers import DDIMScheduler, DPMSolverMultistepScheduler from ...utils import ( USE_PEFT_BACKEND, is_invisible_watermark_available, is_torch_xla_available, logging, replace_example_docstring, scale_lora_layers, unscale_lora_layers, ) from ...utils.torch_utils import randn_tensor from ..pipeline_utils import DiffusionPipeline from .pipeline_output import LEditsPPDiffusionPipelineOutput, LEditsPPInversionPipelineOutput if is_invisible_watermark_available(): from ..stable_diffusion_xl.watermark import StableDiffusionXLWatermarker if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import LEditsPPPipelineStableDiffusionXL >>> from diffusers.utils import load_image >>> pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained( ... "stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16 ... ) >>> pipe.enable_vae_tiling() >>> pipe = pipe.to("cuda") >>> img_url = "https://www.aiml.informatik.tu-darmstadt.de/people/mbrack/tennis.jpg" >>> image = load_image(img_url).resize((1024, 1024)) >>> _ = pipe.invert(image=image, num_inversion_steps=50, skip=0.2) >>> edited_image = pipe( ... editing_prompt=["tennis ball", "tomato"], ... reverse_editing_direction=[True, False], ... edit_guidance_scale=[5.0, 10.0], ... edit_threshold=[0.9, 0.85], ... 
).images[0] ``` """ # Copied from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.LeditsAttentionStore class LeditsAttentionStore: @staticmethod def get_empty_store(): return {"down_cross": [], "mid_cross": [], "up_cross": [], "down_self": [], "mid_self": [], "up_self": []} def __call__(self, attn, is_cross: bool, place_in_unet: str, editing_prompts, PnP=False): # attn.shape = batch_size * head_size, seq_len query, seq_len_key if attn.shape[1] <= self.max_size: bs = 1 + int(PnP) + editing_prompts skip = 2 if PnP else 1 # skip PnP & unconditional attn = torch.stack(attn.split(self.batch_size)).permute(1, 0, 2, 3) source_batch_size = int(attn.shape[1] // bs) self.forward(attn[:, skip * source_batch_size :], is_cross, place_in_unet) def forward(self, attn, is_cross: bool, place_in_unet: str): key = f"{place_in_unet}_{'cross' if is_cross else 'self'}" self.step_store[key].append(attn) def between_steps(self, store_step=True): if store_step: if self.average: if len(self.attention_store) == 0: self.attention_store = self.step_store else: for key in self.attention_store: for i in range(len(self.attention_store[key])): self.attention_store[key][i] += self.step_store[key][i] else: if len(self.attention_store) == 0: self.attention_store = [self.step_store] else: self.attention_store.append(self.step_store) self.cur_step += 1 self.step_store = self.get_empty_store() def get_attention(self, step: int): if self.average: attention = { key: [item / self.cur_step for item in self.attention_store[key]] for key in self.attention_store } else: assert step is not None attention = self.attention_store[step] return attention def aggregate_attention( self, attention_maps, prompts, res: Union[int, Tuple[int]], from_where: List[str], is_cross: bool, select: int ): out = [[] for x in range(self.batch_size)] if isinstance(res, int): num_pixels = res**2 resolution = (res, res) else: num_pixels = res[0] * res[1] resolution = res[:2] for location in from_where: for bs_item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]: for batch, item in enumerate(bs_item): if item.shape[1] == num_pixels: cross_maps = item.reshape(len(prompts), -1, *resolution, item.shape[-1])[select] out[batch].append(cross_maps) out = torch.stack([torch.cat(x, dim=0) for x in out]) # average over heads out = out.sum(1) / out.shape[1] return out def __init__(self, average: bool, batch_size=1, max_resolution=16, max_size: int = None): self.step_store = self.get_empty_store() self.attention_store = [] self.cur_step = 0 self.average = average self.batch_size = batch_size if max_size is None: self.max_size = max_resolution**2 elif max_size is not None and max_resolution is None: self.max_size = max_size else: raise ValueError("Only allowed to set one of max_resolution or max_size") # Copied from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.LeditsGaussianSmoothing class LeditsGaussianSmoothing: def __init__(self, device): kernel_size = [3, 3] sigma = [0.5, 0.5] # The gaussian kernel is the product of the gaussian function of each dimension. kernel = 1 meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size], indexing="ij") for size, std, mgrid in zip(kernel_size, sigma, meshgrids): mean = (size - 1) / 2 kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2)) # Make sure sum of values in gaussian kernel equals 1. 
kernel = kernel / torch.sum(kernel) # Reshape to depthwise convolutional weight kernel = kernel.view(1, 1, *kernel.size()) kernel = kernel.repeat(1, *[1] * (kernel.dim() - 1)) self.weight = kernel.to(device) def __call__(self, input): """ Arguments: Apply gaussian filter to input. input (torch.Tensor): Input to apply gaussian filter on. Returns: filtered (torch.Tensor): Filtered output. """ return F.conv2d(input, weight=self.weight.to(input.dtype)) # Copied from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.LEDITSCrossAttnProcessor class LEDITSCrossAttnProcessor: def __init__(self, attention_store, place_in_unet, pnp, editing_prompts): self.attnstore = attention_store self.place_in_unet = place_in_unet self.editing_prompts = editing_prompts self.pnp = pnp def __call__( self, attn: Attention, hidden_states, encoder_hidden_states, attention_mask=None, temb=None, ): batch_size, sequence_length, _ = ( hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape ) attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) query = attn.to_q(hidden_states) if encoder_hidden_states is None: encoder_hidden_states = hidden_states elif attn.norm_cross: encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) query = attn.head_to_batch_dim(query) key = attn.head_to_batch_dim(key) value = attn.head_to_batch_dim(value) attention_probs = attn.get_attention_scores(query, key, attention_mask) self.attnstore( attention_probs, is_cross=True, place_in_unet=self.place_in_unet, editing_prompts=self.editing_prompts, PnP=self.pnp, ) hidden_states = torch.bmm(attention_probs, value) hidden_states = attn.batch_to_head_dim(hidden_states) # linear proj hidden_states = attn.to_out[0](hidden_states) # dropout hidden_states = attn.to_out[1](hidden_states) hidden_states = hidden_states / attn.rescale_output_factor return hidden_states class LEditsPPPipelineStableDiffusionXL( DiffusionPipeline, FromSingleFileMixin, StableDiffusionXLLoraLoaderMixin, TextualInversionLoaderMixin, IPAdapterMixin, ): """ Pipeline for textual image editing using LEDits++ with Stable Diffusion XL. This model inherits from [`DiffusionPipeline`] and builds on the [`StableDiffusionXLPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). In addition the pipeline inherits the following loading methods: - *LoRA*: [`LEditsPPPipelineStableDiffusionXL.load_lora_weights`] - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`] as well as the following saving methods: - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`] Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder. Stable Diffusion XL uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. text_encoder_2 ([`~transformers.CLIPTextModelWithProjection`]): Second frozen text-encoder. 
Stable Diffusion XL uses the text and pool portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant. tokenizer ([`~transformers.CLIPTokenizer`]): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_2 ([`~transformers.CLIPTokenizer`]): Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`DPMSolverMultistepScheduler`] or [`DDIMScheduler`]): A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of [`DPMSolverMultistepScheduler`] or [`DDIMScheduler`]. If any other scheduler is passed it will automatically be set to [`DPMSolverMultistepScheduler`]. force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `"True"`): Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of `stabilityai/stable-diffusion-xl-base-1-0`. add_watermarker (`bool`, *optional*): Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to watermark output images. If not defined, it will default to True if the package is installed, otherwise no watermarker will be used. """ model_cpu_offload_seq = "text_encoder->text_encoder_2->unet->vae" _optional_components = [ "tokenizer", "tokenizer_2", "text_encoder", "text_encoder_2", "image_encoder", "feature_extractor", ] _callback_tensor_inputs = [ "latents", "prompt_embeds", "negative_prompt_embeds", "add_text_embeds", "add_time_ids", "negative_pooled_prompt_embeds", "negative_add_time_ids", ] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, text_encoder_2: CLIPTextModelWithProjection, tokenizer: CLIPTokenizer, tokenizer_2: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: Union[DPMSolverMultistepScheduler, DDIMScheduler], image_encoder: CLIPVisionModelWithProjection = None, feature_extractor: CLIPImageProcessor = None, force_zeros_for_empty_prompt: bool = True, add_watermarker: Optional[bool] = None, ): super().__init__() self.register_modules( vae=vae, text_encoder=text_encoder, text_encoder_2=text_encoder_2, tokenizer=tokenizer, tokenizer_2=tokenizer_2, unet=unet, scheduler=scheduler, image_encoder=image_encoder, feature_extractor=feature_extractor, ) self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) if not isinstance(scheduler, DDIMScheduler) and not isinstance(scheduler, DPMSolverMultistepScheduler): self.scheduler = DPMSolverMultistepScheduler.from_config( scheduler.config, algorithm_type="sde-dpmsolver++", solver_order=2 ) logger.warning( "This pipeline only supports DDIMScheduler and DPMSolverMultistepScheduler. " "The scheduler has been changed to DPMSolverMultistepScheduler." 
) self.default_sample_size = ( self.unet.config.sample_size if hasattr(self, "unet") and self.unet is not None and hasattr(self.unet.config, "sample_size") else 128 ) add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available() if add_watermarker: self.watermark = StableDiffusionXLWatermarker() else: self.watermark = None self.inversion_steps = None def encode_prompt( self, device: Optional[torch.device] = None, num_images_per_prompt: int = 1, negative_prompt: Optional[str] = None, negative_prompt_2: Optional[str] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, negative_pooled_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, enable_edit_guidance: bool = True, editing_prompt: Optional[str] = None, editing_prompt_embeds: Optional[torch.Tensor] = None, editing_pooled_prompt_embeds: Optional[torch.Tensor] = None, ) -> object: r""" Encodes the prompt into text encoder hidden states. Args: device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. negative_prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. negative_pooled_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. enable_edit_guidance (`bool`): Whether to guide towards an editing prompt or not. editing_prompt (`str` or `List[str]`, *optional*): Editing prompt(s) to be encoded. If not defined and 'enable_edit_guidance' is True, one has to pass `editing_prompt_embeds` instead. editing_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated edit text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided and 'enable_edit_guidance' is True, editing_prompt_embeds will be generated from `editing_prompt` input argument. editing_pooled_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated edit pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled editing_pooled_prompt_embeds will be generated from `editing_prompt` input argument. 
""" device = device or self._execution_device # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, StableDiffusionXLLoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if self.text_encoder is not None: if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if self.text_encoder_2 is not None: if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale) else: scale_lora_layers(self.text_encoder_2, lora_scale) batch_size = self.batch_size # Define tokenizers and text encoders tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2] text_encoders = ( [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2] ) num_edit_tokens = 0 # get unconditional embeddings for classifier free guidance zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt if negative_prompt_embeds is None: negative_prompt = negative_prompt or "" negative_prompt_2 = negative_prompt_2 or negative_prompt # normalize str to list negative_prompt = batch_size * [negative_prompt] if isinstance(negative_prompt, str) else negative_prompt negative_prompt_2 = ( batch_size * [negative_prompt_2] if isinstance(negative_prompt_2, str) else negative_prompt_2 ) uncond_tokens: List[str] if batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but image inversion " f" has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of the input images." 
) else: uncond_tokens = [negative_prompt, negative_prompt_2] negative_prompt_embeds_list = [] for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders): if isinstance(self, TextualInversionLoaderMixin): negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer) uncond_input = tokenizer( negative_prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt", ) negative_prompt_embeds = text_encoder( uncond_input.input_ids.to(device), output_hidden_states=True, ) # We are only ALWAYS interested in the pooled output of the final text encoder negative_pooled_prompt_embeds = negative_prompt_embeds[0] negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2] negative_prompt_embeds_list.append(negative_prompt_embeds) negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1) if zero_out_negative_prompt: negative_prompt_embeds = torch.zeros_like(negative_prompt_embeds) negative_pooled_prompt_embeds = torch.zeros_like(negative_pooled_prompt_embeds) if enable_edit_guidance and editing_prompt_embeds is None: editing_prompt_2 = editing_prompt editing_prompts = [editing_prompt, editing_prompt_2] edit_prompt_embeds_list = [] for editing_prompt, tokenizer, text_encoder in zip(editing_prompts, tokenizers, text_encoders): if isinstance(self, TextualInversionLoaderMixin): editing_prompt = self.maybe_convert_prompt(editing_prompt, tokenizer) max_length = negative_prompt_embeds.shape[1] edit_concepts_input = tokenizer( # [x for item in editing_prompt for x in repeat(item, batch_size)], editing_prompt, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", return_length=True, ) num_edit_tokens = edit_concepts_input.length - 2 edit_concepts_embeds = text_encoder( edit_concepts_input.input_ids.to(device), output_hidden_states=True, ) # We are only ALWAYS interested in the pooled output of the final text encoder editing_pooled_prompt_embeds = edit_concepts_embeds[0] if clip_skip is None: edit_concepts_embeds = edit_concepts_embeds.hidden_states[-2] else: # "2" because SDXL always indexes from the penultimate layer. 
edit_concepts_embeds = edit_concepts_embeds.hidden_states[-(clip_skip + 2)] edit_prompt_embeds_list.append(edit_concepts_embeds) edit_concepts_embeds = torch.concat(edit_prompt_embeds_list, dim=-1) elif not enable_edit_guidance: edit_concepts_embeds = None editing_pooled_prompt_embeds = None negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) bs_embed, seq_len, _ = negative_prompt_embeds.shape # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if enable_edit_guidance: bs_embed_edit, seq_len, _ = edit_concepts_embeds.shape edit_concepts_embeds = edit_concepts_embeds.to(dtype=self.text_encoder_2.dtype, device=device) edit_concepts_embeds = edit_concepts_embeds.repeat(1, num_images_per_prompt, 1) edit_concepts_embeds = edit_concepts_embeds.view(bs_embed_edit * num_images_per_prompt, seq_len, -1) negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( bs_embed * num_images_per_prompt, -1 ) if enable_edit_guidance: editing_pooled_prompt_embeds = editing_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( bs_embed_edit * num_images_per_prompt, -1 ) if self.text_encoder is not None: if isinstance(self, StableDiffusionXLLoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) if self.text_encoder_2 is not None: if isinstance(self, StableDiffusionXLLoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder_2, lora_scale) return ( negative_prompt_embeds, edit_concepts_embeds, negative_pooled_prompt_embeds, editing_pooled_prompt_embeds, num_edit_tokens, ) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, eta, generator=None): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, negative_prompt=None, negative_prompt_2=None, negative_prompt_embeds=None, negative_pooled_prompt_embeds=None, ): if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." 
) elif negative_prompt_2 is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None: raise ValueError( "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`." ) # Modified from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, device, latents): latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents def _get_add_time_ids( self, original_size, crops_coords_top_left, target_size, dtype, text_encoder_projection_dim=None ): add_time_ids = list(original_size + crops_coords_top_left + target_size) passed_add_embed_dim = ( self.unet.config.addition_time_embed_dim * len(add_time_ids) + text_encoder_projection_dim ) expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features if expected_add_embed_dim != passed_add_embed_dim: raise ValueError( f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`." ) add_time_ids = torch.tensor([add_time_ids], dtype=dtype) return add_time_ids # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae def upcast_vae(self): dtype = self.vae.dtype self.vae.to(dtype=torch.float32) use_torch_2_0_or_xformers = isinstance( self.vae.decoder.mid_block.attentions[0].processor, ( AttnProcessor2_0, XFormersAttnProcessor, ), ) # if xformers or torch_2_0 is used attention block does not need # to be in float32 which can save lots of memory if use_torch_2_0_or_xformers: self.vae.post_quant_conv.to(dtype) self.vae.decoder.conv_in.to(dtype) self.vae.decoder.mid_block.to(dtype) # Copied from diffusers.pipelines.latent_consistency_models.pipeline_latent_consistency_text2img.LatentConsistencyModelPipeline.get_guidance_scale_embedding def get_guidance_scale_embedding( self, w: torch.Tensor, embedding_dim: int = 512, dtype: torch.dtype = torch.float32 ) -> torch.Tensor: """ See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 Args: w (`torch.Tensor`): Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings. embedding_dim (`int`, *optional*, defaults to 512): Dimension of the embeddings to generate. dtype (`torch.dtype`, *optional*, defaults to `torch.float32`): Data type of the generated embeddings. Returns: `torch.Tensor`: Embedding vectors with shape `(len(w), embedding_dim)`. 
""" assert len(w.shape) == 1 w = w * 1000.0 half_dim = embedding_dim // 2 emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1) emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb) emb = w.to(dtype)[:, None] * emb[None, :] emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) if embedding_dim % 2 == 1: # zero pad emb = torch.nn.functional.pad(emb, (0, 1)) assert emb.shape == (w.shape[0], embedding_dim) return emb @property def guidance_scale(self): return self._guidance_scale @property def guidance_rescale(self): return self._guidance_rescale @property def clip_skip(self): return self._clip_skip # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. @property def do_classifier_free_guidance(self): return self._guidance_scale > 1 and self.unet.config.time_cond_proj_dim is None @property def cross_attention_kwargs(self): return self._cross_attention_kwargs @property def denoising_end(self): return self._denoising_end @property def num_timesteps(self): return self._num_timesteps def enable_vae_slicing(self): r""" Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. """ self.vae.enable_slicing() def disable_vae_slicing(self): r""" Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to computing decoding in one step. """ self.vae.disable_slicing() def enable_vae_tiling(self): r""" Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images. """ self.vae.enable_tiling() def disable_vae_tiling(self): r""" Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to computing decoding in one step. 
""" self.vae.disable_tiling() # Copied from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.LEditsPPPipelineStableDiffusion.prepare_unet def prepare_unet(self, attention_store, PnP: bool = False): attn_procs = {} for name in self.unet.attn_processors.keys(): if name.startswith("mid_block"): place_in_unet = "mid" elif name.startswith("up_blocks"): place_in_unet = "up" elif name.startswith("down_blocks"): place_in_unet = "down" else: continue if "attn2" in name and place_in_unet != "mid": attn_procs[name] = LEDITSCrossAttnProcessor( attention_store=attention_store, place_in_unet=place_in_unet, pnp=PnP, editing_prompts=self.enabled_editing_prompts, ) else: attn_procs[name] = AttnProcessor() self.unet.set_attn_processor(attn_procs) @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, denoising_end: Optional[float] = None, negative_prompt: Optional[Union[str, List[str]]] = None, negative_prompt_2: Optional[Union[str, List[str]]] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, negative_pooled_prompt_embeds: Optional[torch.Tensor] = None, ip_adapter_image: Optional[PipelineImageInput] = None, output_type: Optional[str] = "pil", return_dict: bool = True, cross_attention_kwargs: Optional[Dict[str, Any]] = None, guidance_rescale: float = 0.0, crops_coords_top_left: Tuple[int, int] = (0, 0), target_size: Optional[Tuple[int, int]] = None, editing_prompt: Optional[Union[str, List[str]]] = None, editing_prompt_embeddings: Optional[torch.Tensor] = None, editing_pooled_prompt_embeds: Optional[torch.Tensor] = None, reverse_editing_direction: Optional[Union[bool, List[bool]]] = False, edit_guidance_scale: Optional[Union[float, List[float]]] = 5, edit_warmup_steps: Optional[Union[int, List[int]]] = 0, edit_cooldown_steps: Optional[Union[int, List[int]]] = None, edit_threshold: Optional[Union[float, List[float]]] = 0.9, sem_guidance: Optional[List[torch.Tensor]] = None, use_cross_attn_mask: bool = False, use_intersect_mask: bool = False, user_mask: Optional[torch.Tensor] = None, attn_store_steps: Optional[List[int]] = [], store_averaged_over_steps: bool = True, clip_skip: Optional[int] = None, callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], **kwargs, ): r""" The call function to the pipeline for editing. The [`~pipelines.ledits_pp.LEditsPPPipelineStableDiffusionXL.invert`] method has to be called beforehand. Edits will always be performed for the last inverted image(s). Args: denoising_end (`float`, *optional*): When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be completed before it is intentionally prematurely terminated. As a result, the returned sample will still retain a substantial amount of noise as determined by the discrete timesteps selected by the scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). negative_prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. 
If not defined, `negative_prompt` is used in both text-encoders negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. negative_pooled_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` input argument. ip_adapter_image: (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). guidance_rescale (`float`, *optional*, defaults to 0.7): Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf) `guidance_scale` is defined as `φ` in equation 16. of [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). Guidance rescale factor should fix overexposure when using zero terminal SNR. crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)): `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)): For most cases, `target_size` should be set to the desired height and width of the generated image. If not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). editing_prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. The image is reconstructed by setting `editing_prompt = None`. Guidance direction of prompt should be specified via `reverse_editing_direction`. editing_prompt_embeddings (`torch.Tensor`, *optional*): Pre-generated edit text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. 
If not provided, editing_prompt_embeddings will be generated from `editing_prompt` input argument. editing_pooled_prompt_embeddings (`torch.Tensor`, *optional*): Pre-generated pooled edit text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, editing_prompt_embeddings will be generated from `editing_prompt` input argument. reverse_editing_direction (`bool` or `List[bool]`, *optional*, defaults to `False`): Whether the corresponding prompt in `editing_prompt` should be increased or decreased. edit_guidance_scale (`float` or `List[float]`, *optional*, defaults to 5): Guidance scale for guiding the image generation. If provided as list values should correspond to `editing_prompt`. `edit_guidance_scale` is defined as `s_e` of equation 12 of [LEDITS++ Paper](https://arxiv.org/abs/2301.12247). edit_warmup_steps (`float` or `List[float]`, *optional*, defaults to 10): Number of diffusion steps (for each prompt) for which guidance is not applied. edit_cooldown_steps (`float` or `List[float]`, *optional*, defaults to `None`): Number of diffusion steps (for each prompt) after which guidance is no longer applied. edit_threshold (`float` or `List[float]`, *optional*, defaults to 0.9): Masking threshold of guidance. Threshold should be proportional to the image region that is modified. 'edit_threshold' is defined as 'λ' of equation 12 of [LEDITS++ Paper](https://arxiv.org/abs/2301.12247). sem_guidance (`List[torch.Tensor]`, *optional*): List of pre-generated guidance vectors to be applied at generation. Length of the list has to correspond to `num_inference_steps`. use_cross_attn_mask: Whether cross-attention masks are used. Cross-attention masks are always used when use_intersect_mask is set to true. Cross-attention masks are defined as 'M^1' of equation 12 of [LEDITS++ paper](https://arxiv.org/pdf/2311.16711.pdf). use_intersect_mask: Whether the masking term is calculated as intersection of cross-attention masks and masks derived from the noise estimate. Cross-attention mask are defined as 'M^1' and masks derived from the noise estimate are defined as 'M^2' of equation 12 of [LEDITS++ paper](https://arxiv.org/pdf/2311.16711.pdf). user_mask: User-provided mask for even better control over the editing process. This is helpful when LEDITS++'s implicit masks do not meet user preferences. attn_store_steps: Steps for which the attention maps are stored in the AttentionStore. Just for visualization purposes. store_averaged_over_steps: Whether the attention maps for the 'attn_store_steps' are stored averaged over the diffusion steps. If False, attention maps for each step are stores separately. Just for visualization purposes. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. 
You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. Examples: Returns: [`~pipelines.ledits_pp.LEditsPPDiffusionPipelineOutput`] or `tuple`: [`~pipelines.ledits_pp.LEditsPPDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. """ if self.inversion_steps is None: raise ValueError( "You need to invert an input image first before calling the pipeline. The `invert` method has to be called beforehand. Edits will always be performed for the last inverted image(s)." ) eta = self.eta num_images_per_prompt = 1 latents = self.init_latents zs = self.zs self.scheduler.set_timesteps(len(self.scheduler.timesteps)) if use_intersect_mask: use_cross_attn_mask = True if use_cross_attn_mask: self.smoothing = LeditsGaussianSmoothing(self.device) if user_mask is not None: user_mask = user_mask.to(self.device) # TODO: Check inputs # 1. Check inputs. Raise error if not correct # self.check_inputs( # callback_steps, # negative_prompt, # negative_prompt_2, # prompt_embeds, # negative_prompt_embeds, # pooled_prompt_embeds, # negative_pooled_prompt_embeds, # ) self._guidance_rescale = guidance_rescale self._clip_skip = clip_skip self._cross_attention_kwargs = cross_attention_kwargs self._denoising_end = denoising_end # 2. Define call parameters batch_size = self.batch_size device = self._execution_device if editing_prompt: enable_edit_guidance = True if isinstance(editing_prompt, str): editing_prompt = [editing_prompt] self.enabled_editing_prompts = len(editing_prompt) elif editing_prompt_embeddings is not None: enable_edit_guidance = True self.enabled_editing_prompts = editing_prompt_embeddings.shape[0] else: self.enabled_editing_prompts = 0 enable_edit_guidance = False # 3. Encode input prompt text_encoder_lora_scale = ( cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None ) ( prompt_embeds, edit_prompt_embeds, negative_pooled_prompt_embeds, pooled_edit_embeds, num_edit_tokens, ) = self.encode_prompt( device=device, num_images_per_prompt=num_images_per_prompt, negative_prompt=negative_prompt, negative_prompt_2=negative_prompt_2, negative_prompt_embeds=negative_prompt_embeds, negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, lora_scale=text_encoder_lora_scale, clip_skip=self.clip_skip, enable_edit_guidance=enable_edit_guidance, editing_prompt=editing_prompt, editing_prompt_embeds=editing_prompt_embeddings, editing_pooled_prompt_embeds=editing_pooled_prompt_embeds, ) # 4. Prepare timesteps # self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.inversion_steps t_to_idx = {int(v): k for k, v in enumerate(timesteps)} if use_cross_attn_mask: self.attention_store = LeditsAttentionStore( average=store_averaged_over_steps, batch_size=batch_size, max_size=(latents.shape[-2] / 4.0) * (latents.shape[-1] / 4.0), max_resolution=None, ) self.prepare_unet(self.attention_store) resolution = latents.shape[-2:] att_res = (int(resolution[0] / 4), int(resolution[1] / 4)) # 5. Prepare latent variables latents = self.prepare_latents(device=device, latents=latents) # 6. Prepare extra step kwargs. extra_step_kwargs = self.prepare_extra_step_kwargs(eta) if self.text_encoder_2 is None: text_encoder_projection_dim = int(negative_pooled_prompt_embeds.shape[-1]) else: text_encoder_projection_dim = self.text_encoder_2.config.projection_dim # 7. 
Prepare added time ids & embeddings add_text_embeds = negative_pooled_prompt_embeds add_time_ids = self._get_add_time_ids( self.size, crops_coords_top_left, self.size, dtype=negative_pooled_prompt_embeds.dtype, text_encoder_projection_dim=text_encoder_projection_dim, ) if enable_edit_guidance: prompt_embeds = torch.cat([prompt_embeds, edit_prompt_embeds], dim=0) add_text_embeds = torch.cat([add_text_embeds, pooled_edit_embeds], dim=0) edit_concepts_time_ids = add_time_ids.repeat(edit_prompt_embeds.shape[0], 1) add_time_ids = torch.cat([add_time_ids, edit_concepts_time_ids], dim=0) self.text_cross_attention_maps = [editing_prompt] if isinstance(editing_prompt, str) else editing_prompt prompt_embeds = prompt_embeds.to(device) add_text_embeds = add_text_embeds.to(device) add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1) if ip_adapter_image is not None: # TODO: fix image encoding image_embeds, negative_image_embeds = self.encode_image(ip_adapter_image, device, num_images_per_prompt) if self.do_classifier_free_guidance: image_embeds = torch.cat([negative_image_embeds, image_embeds]) image_embeds = image_embeds.to(device) # 8. Denoising loop self.sem_guidance = None self.activation_mask = None if ( self.denoising_end is not None and isinstance(self.denoising_end, float) and self.denoising_end > 0 and self.denoising_end < 1 ): discrete_timestep_cutoff = int( round( self.scheduler.config.num_train_timesteps - (self.denoising_end * self.scheduler.config.num_train_timesteps) ) ) num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps))) timesteps = timesteps[:num_inference_steps] # 9. Optionally get Guidance Scale Embedding timestep_cond = None if self.unet.config.time_cond_proj_dim is not None: guidance_scale_tensor = torch.tensor(self.guidance_scale - 1).repeat(batch_size * num_images_per_prompt) timestep_cond = self.get_guidance_scale_embedding( guidance_scale_tensor, embedding_dim=self.unet.config.time_cond_proj_dim ).to(device=device, dtype=latents.dtype) self._num_timesteps = len(timesteps) with self.progress_bar(total=self._num_timesteps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * (1 + self.enabled_editing_prompts)) latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids} if ip_adapter_image is not None: added_cond_kwargs["image_embeds"] = image_embeds noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs=cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] noise_pred_out = noise_pred.chunk(1 + self.enabled_editing_prompts) # [b,4, 64, 64] noise_pred_uncond = noise_pred_out[0] noise_pred_edit_concepts = noise_pred_out[1:] noise_guidance_edit = torch.zeros( noise_pred_uncond.shape, device=self.device, dtype=noise_pred_uncond.dtype, ) if sem_guidance is not None and len(sem_guidance) > i: noise_guidance_edit += sem_guidance[i].to(self.device) elif enable_edit_guidance: if self.activation_mask is None: self.activation_mask = torch.zeros( (len(timesteps), self.enabled_editing_prompts, *noise_pred_edit_concepts[0].shape) ) if self.sem_guidance is None: self.sem_guidance = torch.zeros((len(timesteps), *noise_pred_uncond.shape)) # noise_guidance_edit = torch.zeros_like(noise_guidance) for c, 
noise_pred_edit_concept in enumerate(noise_pred_edit_concepts): if isinstance(edit_warmup_steps, list): edit_warmup_steps_c = edit_warmup_steps[c] else: edit_warmup_steps_c = edit_warmup_steps if i < edit_warmup_steps_c: continue if isinstance(edit_guidance_scale, list): edit_guidance_scale_c = edit_guidance_scale[c] else: edit_guidance_scale_c = edit_guidance_scale if isinstance(edit_threshold, list): edit_threshold_c = edit_threshold[c] else: edit_threshold_c = edit_threshold if isinstance(reverse_editing_direction, list): reverse_editing_direction_c = reverse_editing_direction[c] else: reverse_editing_direction_c = reverse_editing_direction if isinstance(edit_cooldown_steps, list): edit_cooldown_steps_c = edit_cooldown_steps[c] elif edit_cooldown_steps is None: edit_cooldown_steps_c = i + 1 else: edit_cooldown_steps_c = edit_cooldown_steps if i >= edit_cooldown_steps_c: continue noise_guidance_edit_tmp = noise_pred_edit_concept - noise_pred_uncond if reverse_editing_direction_c: noise_guidance_edit_tmp = noise_guidance_edit_tmp * -1 noise_guidance_edit_tmp = noise_guidance_edit_tmp * edit_guidance_scale_c if user_mask is not None: noise_guidance_edit_tmp = noise_guidance_edit_tmp * user_mask if use_cross_attn_mask: out = self.attention_store.aggregate_attention( attention_maps=self.attention_store.step_store, prompts=self.text_cross_attention_maps, res=att_res, from_where=["up", "down"], is_cross=True, select=self.text_cross_attention_maps.index(editing_prompt[c]), ) attn_map = out[:, :, :, 1 : 1 + num_edit_tokens[c]] # 0 -> startoftext # average over all tokens if attn_map.shape[3] != num_edit_tokens[c]: raise ValueError( f"Incorrect shape of attention_map. Expected size {num_edit_tokens[c]}, but found {attn_map.shape[3]}!" ) attn_map = torch.sum(attn_map, dim=3) # gaussian_smoothing attn_map = F.pad(attn_map.unsqueeze(1), (1, 1, 1, 1), mode="reflect") attn_map = self.smoothing(attn_map).squeeze(1) # torch.quantile function expects float32 if attn_map.dtype == torch.float32: tmp = torch.quantile(attn_map.flatten(start_dim=1), edit_threshold_c, dim=1) else: tmp = torch.quantile( attn_map.flatten(start_dim=1).to(torch.float32), edit_threshold_c, dim=1 ).to(attn_map.dtype) attn_mask = torch.where( attn_map >= tmp.unsqueeze(1).unsqueeze(1).repeat(1, *att_res), 1.0, 0.0 ) # resolution must match latent space dimension attn_mask = F.interpolate( attn_mask.unsqueeze(1), noise_guidance_edit_tmp.shape[-2:], # 64,64 ).repeat(1, 4, 1, 1) self.activation_mask[i, c] = attn_mask.detach().cpu() if not use_intersect_mask: noise_guidance_edit_tmp = noise_guidance_edit_tmp * attn_mask if use_intersect_mask: noise_guidance_edit_tmp_quantile = torch.abs(noise_guidance_edit_tmp) noise_guidance_edit_tmp_quantile = torch.sum( noise_guidance_edit_tmp_quantile, dim=1, keepdim=True ) noise_guidance_edit_tmp_quantile = noise_guidance_edit_tmp_quantile.repeat( 1, self.unet.config.in_channels, 1, 1 ) # torch.quantile function expects float32 if noise_guidance_edit_tmp_quantile.dtype == torch.float32: tmp = torch.quantile( noise_guidance_edit_tmp_quantile.flatten(start_dim=2), edit_threshold_c, dim=2, keepdim=False, ) else: tmp = torch.quantile( noise_guidance_edit_tmp_quantile.flatten(start_dim=2).to(torch.float32), edit_threshold_c, dim=2, keepdim=False, ).to(noise_guidance_edit_tmp_quantile.dtype) intersect_mask = ( torch.where( noise_guidance_edit_tmp_quantile >= tmp[:, :, None, None], torch.ones_like(noise_guidance_edit_tmp), torch.zeros_like(noise_guidance_edit_tmp), ) * attn_mask ) self.activation_mask[i, c] 
= intersect_mask.detach().cpu() noise_guidance_edit_tmp = noise_guidance_edit_tmp * intersect_mask elif not use_cross_attn_mask: # calculate quantile noise_guidance_edit_tmp_quantile = torch.abs(noise_guidance_edit_tmp) noise_guidance_edit_tmp_quantile = torch.sum( noise_guidance_edit_tmp_quantile, dim=1, keepdim=True ) noise_guidance_edit_tmp_quantile = noise_guidance_edit_tmp_quantile.repeat(1, 4, 1, 1) # torch.quantile function expects float32 if noise_guidance_edit_tmp_quantile.dtype == torch.float32: tmp = torch.quantile( noise_guidance_edit_tmp_quantile.flatten(start_dim=2), edit_threshold_c, dim=2, keepdim=False, ) else: tmp = torch.quantile( noise_guidance_edit_tmp_quantile.flatten(start_dim=2).to(torch.float32), edit_threshold_c, dim=2, keepdim=False, ).to(noise_guidance_edit_tmp_quantile.dtype) self.activation_mask[i, c] = ( torch.where( noise_guidance_edit_tmp_quantile >= tmp[:, :, None, None], torch.ones_like(noise_guidance_edit_tmp), torch.zeros_like(noise_guidance_edit_tmp), ) .detach() .cpu() ) noise_guidance_edit_tmp = torch.where( noise_guidance_edit_tmp_quantile >= tmp[:, :, None, None], noise_guidance_edit_tmp, torch.zeros_like(noise_guidance_edit_tmp), ) noise_guidance_edit += noise_guidance_edit_tmp self.sem_guidance[i] = noise_guidance_edit.detach().cpu() noise_pred = noise_pred_uncond + noise_guidance_edit # compute the previous noisy sample x_t -> x_t-1 if enable_edit_guidance and self.guidance_rescale > 0.0: # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf noise_pred = rescale_noise_cfg( noise_pred, noise_pred_edit_concepts.mean(dim=0, keepdim=False), guidance_rescale=self.guidance_rescale, ) idx = t_to_idx[int(t)] latents = self.scheduler.step( noise_pred, t, latents, variance_noise=zs[idx], **extra_step_kwargs, return_dict=False )[0] # step callback if use_cross_attn_mask: store_step = i in attn_store_steps self.attention_store.between_steps(store_step) if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds) add_text_embeds = callback_outputs.pop("add_text_embeds", add_text_embeds) negative_pooled_prompt_embeds = callback_outputs.pop( "negative_pooled_prompt_embeds", negative_pooled_prompt_embeds ) add_time_ids = callback_outputs.pop("add_time_ids", add_time_ids) # negative_add_time_ids = callback_outputs.pop("negative_add_time_ids", negative_add_time_ids) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > 0 and (i + 1) % self.scheduler.order == 0): progress_bar.update() if XLA_AVAILABLE: xm.mark_step() if not output_type == "latent": # make sure the VAE is in float32 mode, as it overflows in float16 needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast if needs_upcasting: self.upcast_vae() latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] # cast back to fp16 if needed if needs_upcasting: self.vae.to(dtype=torch.float16) else: image = latents if not output_type == "latent": # apply watermark if available if self.watermark is not None: image = self.watermark.apply_watermark(image) image = self.image_processor.postprocess(image, 
output_type=output_type) # Offload all models self.maybe_free_model_hooks() if not return_dict: return (image,) return LEditsPPDiffusionPipelineOutput(images=image, nsfw_content_detected=None) @torch.no_grad() # Modified from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.LEditsPPPipelineStableDiffusion.encode_image def encode_image(self, image, dtype=None, height=None, width=None, resize_mode="default", crops_coords=None): image = self.image_processor.preprocess( image=image, height=height, width=width, resize_mode=resize_mode, crops_coords=crops_coords ) height, width = image.shape[-2:] if height % 32 != 0 or width % 32 != 0: raise ValueError( "Image height and width must be a factor of 32. " "Consider down-sampling the input using the `height` and `width` parameters" ) resized = self.image_processor.postprocess(image=image, output_type="pil") if max(image.shape[-2:]) > self.vae.config["sample_size"] * 1.5: logger.warning( "Your input images far exceed the default resolution of the underlying diffusion model. " "The output images may contain severe artifacts! " "Consider down-sampling the input using the `height` and `width` parameters" ) image = image.to(self.device, dtype=dtype) needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast if needs_upcasting: image = image.float() self.upcast_vae() x0 = self.vae.encode(image).latent_dist.mode() x0 = x0.to(dtype) # cast back to fp16 if needed if needs_upcasting: self.vae.to(dtype=torch.float16) x0 = self.vae.config.scaling_factor * x0 return x0, resized @torch.no_grad() def invert( self, image: PipelineImageInput, source_prompt: str = "", source_guidance_scale=3.5, negative_prompt: str = None, negative_prompt_2: str = None, num_inversion_steps: int = 50, skip: float = 0.15, generator: Optional[torch.Generator] = None, crops_coords_top_left: Tuple[int, int] = (0, 0), num_zero_noise_steps: int = 3, cross_attention_kwargs: Optional[Dict[str, Any]] = None, height: Optional[int] = None, width: Optional[int] = None, resize_mode: Optional[str] = "default", crops_coords: Optional[Tuple[int, int, int, int]] = None, ): r""" The function to the pipeline for image inversion as described by the [LEDITS++ Paper](https://arxiv.org/abs/2301.12247). If the scheduler is set to [`~schedulers.DDIMScheduler`] the inversion proposed by [edit-friendly DPDM](https://arxiv.org/abs/2304.06140) will be performed instead. Args: image (`PipelineImageInput`): Input for the image(s) that are to be edited. Multiple input images have to default to the same aspect ratio. source_prompt (`str`, defaults to `""`): Prompt describing the input image that will be used for guidance during inversion. Guidance is disabled if the `source_prompt` is `""`. source_guidance_scale (`float`, defaults to `3.5`): Strength of guidance during inversion. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). negative_prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders num_inversion_steps (`int`, defaults to `50`): Number of total performed inversion steps after discarding the initial `skip` steps. 
skip (`float`, defaults to `0.15`): Portion of initial steps that will be ignored for inversion and subsequent generation. Lower values will lead to stronger changes to the input image. `skip` has to be between `0` and `1`. generator (`torch.Generator`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make inversion deterministic. crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)): `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). num_zero_noise_steps (`int`, defaults to `3`): Number of final diffusion steps that will not renoise the current image. If no steps are set to zero SD-XL in combination with [`DPMSolverMultistepScheduler`] will produce noise artifacts. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). Returns: [`~pipelines.ledits_pp.LEditsPPInversionPipelineOutput`]: Output will contain the resized input image(s) and respective VAE reconstruction(s). """ if height is not None and height % 32 != 0 or width is not None and width % 32 != 0: raise ValueError("height and width must be a factor of 32.") # Reset attn processor, we do not want to store attn maps during inversion self.unet.set_attn_processor(AttnProcessor()) self.eta = 1.0 self.scheduler.config.timestep_spacing = "leading" self.scheduler.set_timesteps(int(num_inversion_steps * (1 + skip))) self.inversion_steps = self.scheduler.timesteps[-num_inversion_steps:] timesteps = self.inversion_steps num_images_per_prompt = 1 device = self._execution_device # 0. Ensure that only uncond embedding is used if prompt = "" if source_prompt == "": # noise pred should only be noise_pred_uncond source_guidance_scale = 0.0 do_classifier_free_guidance = False else: do_classifier_free_guidance = source_guidance_scale > 1.0 # 1. prepare image x0, resized = self.encode_image( image, dtype=self.text_encoder_2.dtype, height=height, width=width, resize_mode=resize_mode, crops_coords=crops_coords, ) width = x0.shape[2] * self.vae_scale_factor height = x0.shape[3] * self.vae_scale_factor self.size = (height, width) self.batch_size = x0.shape[0] # 2. get embeddings text_encoder_lora_scale = ( cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None ) if isinstance(source_prompt, str): source_prompt = [source_prompt] * self.batch_size ( negative_prompt_embeds, prompt_embeds, negative_pooled_prompt_embeds, edit_pooled_prompt_embeds, _, ) = self.encode_prompt( device=device, num_images_per_prompt=num_images_per_prompt, negative_prompt=negative_prompt, negative_prompt_2=negative_prompt_2, editing_prompt=source_prompt, lora_scale=text_encoder_lora_scale, enable_edit_guidance=do_classifier_free_guidance, ) if self.text_encoder_2 is None: text_encoder_projection_dim = int(negative_pooled_prompt_embeds.shape[-1]) else: text_encoder_projection_dim = self.text_encoder_2.config.projection_dim # 3. 
Prepare added time ids & embeddings add_text_embeds = negative_pooled_prompt_embeds add_time_ids = self._get_add_time_ids( self.size, crops_coords_top_left, self.size, dtype=negative_prompt_embeds.dtype, text_encoder_projection_dim=text_encoder_projection_dim, ) if do_classifier_free_guidance: negative_prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0) add_text_embeds = torch.cat([add_text_embeds, edit_pooled_prompt_embeds], dim=0) add_time_ids = torch.cat([add_time_ids, add_time_ids], dim=0) negative_prompt_embeds = negative_prompt_embeds.to(device) add_text_embeds = add_text_embeds.to(device) add_time_ids = add_time_ids.to(device).repeat(self.batch_size * num_images_per_prompt, 1) # autoencoder reconstruction if self.vae.dtype == torch.float16 and self.vae.config.force_upcast: self.upcast_vae() x0_tmp = x0.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) image_rec = self.vae.decode( x0_tmp / self.vae.config.scaling_factor, return_dict=False, generator=generator )[0] elif self.vae.config.force_upcast: x0_tmp = x0.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) image_rec = self.vae.decode( x0_tmp / self.vae.config.scaling_factor, return_dict=False, generator=generator )[0] else: image_rec = self.vae.decode(x0 / self.vae.config.scaling_factor, return_dict=False, generator=generator)[0] image_rec = self.image_processor.postprocess(image_rec, output_type="pil") # 5. find zs and xts variance_noise_shape = (num_inversion_steps, *x0.shape) # intermediate latents t_to_idx = {int(v): k for k, v in enumerate(timesteps)} xts = torch.zeros(size=variance_noise_shape, device=self.device, dtype=negative_prompt_embeds.dtype) for t in reversed(timesteps): idx = num_inversion_steps - t_to_idx[int(t)] - 1 noise = randn_tensor(shape=x0.shape, generator=generator, device=self.device, dtype=x0.dtype) xts[idx] = self.scheduler.add_noise(x0, noise, t.unsqueeze(0)) xts = torch.cat([x0.unsqueeze(0), xts], dim=0) # noise maps zs = torch.zeros(size=variance_noise_shape, device=self.device, dtype=negative_prompt_embeds.dtype) self.scheduler.set_timesteps(len(self.scheduler.timesteps)) for t in self.progress_bar(timesteps): idx = num_inversion_steps - t_to_idx[int(t)] - 1 # 1. predict noise residual xt = xts[idx + 1] latent_model_input = torch.cat([xt] * 2) if do_classifier_free_guidance else xt latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids} noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=negative_prompt_embeds, cross_attention_kwargs=cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] # 2. 
perform guidance if do_classifier_free_guidance: noise_pred_out = noise_pred.chunk(2) noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1] noise_pred = noise_pred_uncond + source_guidance_scale * (noise_pred_text - noise_pred_uncond) xtm1 = xts[idx] z, xtm1_corrected = compute_noise(self.scheduler, xtm1, xt, t, noise_pred, self.eta) zs[idx] = z # correction to avoid error accumulation xts[idx] = xtm1_corrected self.init_latents = xts[-1] zs = zs.flip(0) if num_zero_noise_steps > 0: zs[-num_zero_noise_steps:] = torch.zeros_like(zs[-num_zero_noise_steps:]) self.zs = zs return LEditsPPInversionPipelineOutput(images=resized, vae_reconstruction_images=image_rec) # Copied from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl.rescale_noise_cfg def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0): r""" Rescales `noise_cfg` tensor based on `guidance_rescale` to improve image quality and fix overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). Args: noise_cfg (`torch.Tensor`): The predicted noise tensor for the guided diffusion process. noise_pred_text (`torch.Tensor`): The predicted noise tensor for the text-guided diffusion process. guidance_rescale (`float`, *optional*, defaults to 0.0): A rescale factor applied to the noise predictions. Returns: noise_cfg (`torch.Tensor`): The rescaled noise prediction tensor. """ std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True) std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True) # rescale the results from guidance (fixes overexposure) noise_pred_rescaled = noise_cfg * (std_text / std_cfg) # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg return noise_cfg # Copied from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.compute_noise_ddim def compute_noise_ddim(scheduler, prev_latents, latents, timestep, noise_pred, eta): # 1. get previous step value (=t-1) prev_timestep = timestep - scheduler.config.num_train_timesteps // scheduler.num_inference_steps # 2. compute alphas, betas alpha_prod_t = scheduler.alphas_cumprod[timestep] alpha_prod_t_prev = ( scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else scheduler.final_alpha_cumprod ) beta_prod_t = 1 - alpha_prod_t # 3. compute predicted original sample from predicted noise also called # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5) # 4. Clip "predicted x_0" if scheduler.config.clip_sample: pred_original_sample = torch.clamp(pred_original_sample, -1, 1) # 5. compute variance: "sigma_t(η)" -> see formula (16) # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1) variance = scheduler._get_variance(timestep, prev_timestep) std_dev_t = eta * variance ** (0.5) # 6. 
compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * noise_pred # modifed so that updated xtm1 is returned as well (to avoid error accumulation) mu_xt = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction if variance > 0.0: noise = (prev_latents - mu_xt) / (variance ** (0.5) * eta) else: noise = torch.tensor([0.0]).to(latents.device) return noise, mu_xt + (eta * variance**0.5) * noise # Copied from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.compute_noise_sde_dpm_pp_2nd def compute_noise_sde_dpm_pp_2nd(scheduler, prev_latents, latents, timestep, noise_pred, eta): def first_order_update(model_output, sample): # timestep, prev_timestep, sample): sigma_t, sigma_s = scheduler.sigmas[scheduler.step_index + 1], scheduler.sigmas[scheduler.step_index] alpha_t, sigma_t = scheduler._sigma_to_alpha_sigma_t(sigma_t) alpha_s, sigma_s = scheduler._sigma_to_alpha_sigma_t(sigma_s) lambda_t = torch.log(alpha_t) - torch.log(sigma_t) lambda_s = torch.log(alpha_s) - torch.log(sigma_s) h = lambda_t - lambda_s mu_xt = (sigma_t / sigma_s * torch.exp(-h)) * sample + (alpha_t * (1 - torch.exp(-2.0 * h))) * model_output mu_xt = scheduler.dpm_solver_first_order_update( model_output=model_output, sample=sample, noise=torch.zeros_like(sample) ) sigma = sigma_t * torch.sqrt(1.0 - torch.exp(-2 * h)) if sigma > 0.0: noise = (prev_latents - mu_xt) / sigma else: noise = torch.tensor([0.0]).to(sample.device) prev_sample = mu_xt + sigma * noise return noise, prev_sample def second_order_update(model_output_list, sample): # timestep_list, prev_timestep, sample): sigma_t, sigma_s0, sigma_s1 = ( scheduler.sigmas[scheduler.step_index + 1], scheduler.sigmas[scheduler.step_index], scheduler.sigmas[scheduler.step_index - 1], ) alpha_t, sigma_t = scheduler._sigma_to_alpha_sigma_t(sigma_t) alpha_s0, sigma_s0 = scheduler._sigma_to_alpha_sigma_t(sigma_s0) alpha_s1, sigma_s1 = scheduler._sigma_to_alpha_sigma_t(sigma_s1) lambda_t = torch.log(alpha_t) - torch.log(sigma_t) lambda_s0 = torch.log(alpha_s0) - torch.log(sigma_s0) lambda_s1 = torch.log(alpha_s1) - torch.log(sigma_s1) m0, m1 = model_output_list[-1], model_output_list[-2] h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1 r0 = h_0 / h D0, D1 = m0, (1.0 / r0) * (m0 - m1) mu_xt = ( (sigma_t / sigma_s0 * torch.exp(-h)) * sample + (alpha_t * (1 - torch.exp(-2.0 * h))) * D0 + 0.5 * (alpha_t * (1 - torch.exp(-2.0 * h))) * D1 ) sigma = sigma_t * torch.sqrt(1.0 - torch.exp(-2 * h)) if sigma > 0.0: noise = (prev_latents - mu_xt) / sigma else: noise = torch.tensor([0.0]).to(sample.device) prev_sample = mu_xt + sigma * noise return noise, prev_sample if scheduler.step_index is None: scheduler._init_step_index(timestep) model_output = scheduler.convert_model_output(model_output=noise_pred, sample=latents) for i in range(scheduler.config.solver_order - 1): scheduler.model_outputs[i] = scheduler.model_outputs[i + 1] scheduler.model_outputs[-1] = model_output if scheduler.lower_order_nums < 1: noise, prev_sample = first_order_update(model_output, latents) else: noise, prev_sample = second_order_update(scheduler.model_outputs, latents) if scheduler.lower_order_nums < scheduler.config.solver_order: scheduler.lower_order_nums += 1 # upon completion increase step index by one scheduler._step_index += 1 return noise, prev_sample # Copied from diffusers.pipelines.ledits_pp.pipeline_leditspp_stable_diffusion.compute_noise def 
compute_noise(scheduler, *args):
    if isinstance(scheduler, DDIMScheduler):
        return compute_noise_ddim(scheduler, *args)
    elif (
        isinstance(scheduler, DPMSolverMultistepScheduler)
        and scheduler.config.algorithm_type == "sde-dpmsolver++"
        and scheduler.config.solver_order == 2
    ):
        return compute_noise_sde_dpm_pp_2nd(scheduler, *args)
    else:
        raise NotImplementedError
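The `compute_noise_*` helpers above solve the scheduler update for its stochastic term, so that re-running the sampler with the recovered noise reproduces the stored inversion latents exactly. As a reading aid only (not part of the module), the DDIM case computed by `compute_noise_ddim` can be written as:

```latex
z_t = \frac{x_{t-1} - \mu_t(x_t)}{\eta\,\sigma_t},
\qquad
\mu_t(x_t) = \sqrt{\bar\alpha_{t-1}}\,\hat{x}_0(x_t)
           + \sqrt{1 - \bar\alpha_{t-1} - \eta^2\sigma_t^2}\;\epsilon_\theta(x_t, t),
\qquad
\hat{x}_0(x_t) = \frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}}
```

The corrected latent `x_{t-1} <- mu_t(x_t) + eta * sigma_t * z_t` is then written back into `xts`, matching the `# correction to avoid error accumulation` branch of `invert`, so reconstruction errors do not compound across inversion steps.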
diffusers/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py", "repo_id": "diffusers", "token_count": 42483 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect from typing import Callable, List, Optional, Union import PIL.Image import torch from packaging import version from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection from ...configuration_utils import FrozenDict from ...image_processor import VaeImageProcessor from ...models import AutoencoderKL, UNet2DConditionModel from ...schedulers import KarrasDiffusionSchedulers from ...utils import deprecate, is_torch_xla_available, logging from ...utils.torch_utils import randn_tensor from ..pipeline_utils import DiffusionPipeline, StableDiffusionMixin from . import StableDiffusionPipelineOutput from .safety_checker import StableDiffusionSafetyChecker if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name class StableDiffusionImageVariationPipeline(DiffusionPipeline, StableDiffusionMixin): r""" Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder ([`~transformers.CLIPVisionModelWithProjection`]): Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`~transformers.CLIPImageProcessor`]): A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. """ # TODO: feature_extractor is required to encode images (if they are in PIL format), # we should give a descriptive message if the pipeline doesn't have one. 
_optional_components = ["safety_checker"] model_cpu_offload_seq = "image_encoder->unet->vae" _exclude_from_cpu_offload = ["safety_checker"] def __init__( self, vae: AutoencoderKL, image_encoder: CLIPVisionModelWithProjection, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = True, ): super().__init__() if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." ) is_unet_version_less_0_9_0 = ( unet is not None and hasattr(unet.config, "_diffusers_version") and version.parse(version.parse(unet.config._diffusers_version).base_version) < version.parse("0.9.0.dev0") ) is_unet_sample_size_less_64 = ( unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 ) if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: deprecation_message = ( "The configuration file of the unet has set the default `sample_size` to smaller than" " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- stable-diffusion-v1-5/stable-diffusion-v1-5" " \n- stable-diffusion-v1-5/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" " in the config might lead to incorrect results in future versions. 
If you have downloaded this" " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" " the `unet/config.json` file" ) deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(unet.config) new_config["sample_size"] = 64 unet._internal_dict = FrozenDict(new_config) self.register_modules( vae=vae, image_encoder=image_encoder, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.register_to_config(requires_safety_checker=requires_safety_checker) def _encode_image(self, image, device, num_images_per_prompt, do_classifier_free_guidance): dtype = next(self.image_encoder.parameters()).dtype if not isinstance(image, torch.Tensor): image = self.feature_extractor(images=image, return_tensors="pt").pixel_values image = image.to(device=device, dtype=dtype) image_embeddings = self.image_encoder(image).image_embeds image_embeddings = image_embeddings.unsqueeze(1) # duplicate image embeddings for each generation per prompt, using mps friendly method bs_embed, seq_len, _ = image_embeddings.shape image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1) image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) if do_classifier_free_guidance: negative_prompt_embeds = torch.zeros_like(image_embeddings) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes image_embeddings = torch.cat([negative_prompt_embeds, image_embeddings]) return image_embeddings # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker def run_safety_checker(self, image, device, dtype): if self.safety_checker is None: has_nsfw_concept = None else: if torch.is_tensor(image): feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") else: feature_extractor_input = self.image_processor.numpy_to_pil(image) safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) return image, has_nsfw_concept # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents def decode_latents(self, latents): deprecation_message = "The decode_latents method is deprecated and will be removed in 1.0.0. Please use VaeImageProcessor.postprocess(...) 
instead" deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False) latents = 1 / self.vae.config.scaling_factor * latents image = self.vae.decode(latents, return_dict=False)[0] image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() return image # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs(self, image, height, width, callback_steps): if ( not isinstance(image, torch.Tensor) and not isinstance(image, PIL.Image.Image) and not isinstance(image, list) ): raise ValueError( "`image` has to be of type `torch.Tensor` or `PIL.Image.Image` or `List[PIL.Image.Image]` but is" f" {type(image)}" ) if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if (callback_steps is None) or ( callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) ): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents @torch.no_grad() def __call__( self, image: Union[PIL.Image.Image, List[PIL.Image.Image], torch.Tensor], height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, ): r""" The call function to the pipeline for generation. 
Args: image (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.Tensor`): Image or images to guide image generation. If you provide a tensor, it needs to be compatible with [`CLIPImageProcessor`](https://huggingface.co/lambdalabs/sd-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json). height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated image. width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by `strength`. guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that calls every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images and the second element is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content. 
Examples: ```py from diffusers import StableDiffusionImageVariationPipeline from PIL import Image from io import BytesIO import requests pipe = StableDiffusionImageVariationPipeline.from_pretrained( "lambdalabs/sd-image-variations-diffusers", revision="v2.0" ) pipe = pipe.to("cuda") url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" response = requests.get(url) image = Image.open(BytesIO(response.content)).convert("RGB") out = pipe(image, num_images_per_prompt=3, guidance_scale=15) out["images"][0].save("result.jpg") ``` """ # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # 1. Check inputs. Raise error if not correct self.check_inputs(image, height, width, callback_steps) # 2. Define call parameters if isinstance(image, PIL.Image.Image): batch_size = 1 elif isinstance(image, list): batch_size = len(image) else: batch_size = image.shape[0] device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Encode input image image_embeddings = self._encode_image(image, device, num_images_per_prompt, do_classifier_free_guidance) # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 5. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, image_embeddings.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 7. 
Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=image_embeddings).sample # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if XLA_AVAILABLE: xm.mark_step() self.maybe_free_model_hooks() if not output_type == "latent": image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] image, has_nsfw_concept = self.run_safety_checker(image, device, image_embeddings.dtype) else: image = latents has_nsfw_concept = None if has_nsfw_concept is None: do_denormalize = [True] * image.shape[0] else: do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) self.maybe_free_model_hooks() if not return_dict: return (image, has_nsfw_concept) return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py", "repo_id": "diffusers", "token_count": 9438 }
import inspect import warnings from typing import Callable, List, Optional, Union import numpy as np import torch from packaging import version from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection from ...configuration_utils import FrozenDict from ...image_processor import PipelineImageInput from ...loaders import IPAdapterMixin from ...models import AutoencoderKL, ImageProjection, UNet2DConditionModel from ...schedulers import KarrasDiffusionSchedulers from ...utils import deprecate, is_torch_xla_available, logging from ...utils.torch_utils import randn_tensor from ..pipeline_utils import DiffusionPipeline, StableDiffusionMixin from . import StableDiffusionSafePipelineOutput from .safety_checker import SafeStableDiffusionSafetyChecker if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name class StableDiffusionPipelineSafe(DiffusionPipeline, StableDiffusionMixin, IPAdapterMixin): r""" Pipeline based on the [`StableDiffusionPipeline`] for text-to-image generation using Safe Latent Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`~transformers.CLIPImageProcessor`]): A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. 
""" model_cpu_offload_seq = "text_encoder->unet->vae" _optional_components = ["safety_checker", "feature_extractor", "image_encoder"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: SafeStableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, image_encoder: Optional[CLIPVisionModelWithProjection] = None, requires_safety_checker: bool = True, ): super().__init__() safety_concept: Optional[str] = ( "an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity," " bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child" " abuse, brutality, cruelty" ) if scheduler is not None and getattr(scheduler.config, "steps_offset", 1) != 1: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " "to update the config accordingly as leaving `steps_offset` might led to incorrect results" " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" " file" ) deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["steps_offset"] = 1 scheduler._internal_dict = FrozenDict(new_config) if scheduler is not None and getattr(scheduler.config, "clip_sample", False) is True: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." " `clip_sample` should be set to False in the configuration file. Please make sure to update the" " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" ) deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["clip_sample"] = False scheduler._internal_dict = FrozenDict(new_config) if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
) is_unet_version_less_0_9_0 = ( unet is not None and hasattr(unet.config, "_diffusers_version") and version.parse(version.parse(unet.config._diffusers_version).base_version) < version.parse("0.9.0.dev0") ) is_unet_sample_size_less_64 = ( unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 ) if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: deprecation_message = ( "The configuration file of the unet has set the default `sample_size` to smaller than" " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- stable-diffusion-v1-5/stable-diffusion-v1-5" " \n- stable-diffusion-v1-5/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" " in the config might lead to incorrect results in future versions. If you have downloaded this" " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" " the `unet/config.json` file" ) deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(unet.config) new_config["sample_size"] = 64 unet._internal_dict = FrozenDict(new_config) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, image_encoder=image_encoder, ) self._safety_text_concept = safety_concept self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.register_to_config(requires_safety_checker=requires_safety_checker) @property def safety_concept(self): r""" Getter method for the safety concept used with SLD Returns: `str`: The text describing the safety concept """ return self._safety_text_concept @safety_concept.setter def safety_concept(self, concept): r""" Setter method for the safety concept used with SLD Args: concept (`str`): The text of the new safety concept """ self._safety_text_concept = concept def _encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, enable_safety_guidance, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). 
""" batch_size = len(prompt) if isinstance(prompt, list) else 1 text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids if not torch.equal(text_input_ids, untruncated_ids): removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, ) prompt_embeds = prompt_embeds[0] # duplicate text embeddings for each generation per prompt, using mps friendly method bs_embed, seq_len, _ = prompt_embeds.shape prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt max_length = text_input_ids.shape[-1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) # Encode the safety concept text if enable_safety_guidance: safety_concept_input = self.tokenizer( [self._safety_text_concept], padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) safety_embeddings = self.text_encoder(safety_concept_input.input_ids.to(self.device))[0] # duplicate safety embeddings for each generation per prompt, using mps friendly method seq_len = safety_embeddings.shape[1] safety_embeddings = safety_embeddings.repeat(batch_size, num_images_per_prompt, 1) safety_embeddings = safety_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) # For classifier free guidance + sld, we need to do three forward passes. 
# Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing three forward passes prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds, safety_embeddings]) else: # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) return prompt_embeds def run_safety_checker(self, image, device, dtype, enable_safety_guidance): if self.safety_checker is not None: images = image.copy() safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) flagged_images = np.zeros((2, *image.shape[1:])) if any(has_nsfw_concept): logger.warning( "Potential NSFW content was detected in one or more images. A black image will be returned" " instead." f"{'You may look at this images in the `unsafe_images` variable of the output at your own discretion.' if enable_safety_guidance else 'Try again with a different prompt and/or seed.'}" ) for idx, has_nsfw_concept in enumerate(has_nsfw_concept): if has_nsfw_concept: flagged_images[idx] = images[idx] image[idx] = np.zeros(image[idx].shape) # black image else: has_nsfw_concept = None flagged_images = None return image, has_nsfw_concept, flagged_images # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents def decode_latents(self, latents): deprecation_message = "The decode_latents method is deprecated and will be removed in 1.0.0. Please use VaeImageProcessor.postprocess(...) instead" deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False) latents = 1 / self.vae.config.scaling_factor * latents image = self.vae.decode(latents, return_dict=False)[0] image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() return image # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
# eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs # Copied from diffusers.pipelines.stable_diffusion_k_diffusion.pipeline_stable_diffusion_k_diffusion.StableDiffusionKDiffusionPipeline.check_inputs def check_inputs( self, prompt, height, width, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None, callback_on_step_end_tensor_inputs=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents def perform_safety_guidance( self, enable_safety_guidance, safety_momentum, noise_guidance, noise_pred_out, i, sld_guidance_scale, sld_warmup_steps, sld_threshold, sld_momentum_scale, sld_mom_beta, ): # Perform SLD guidance if enable_safety_guidance: if safety_momentum is None: safety_momentum = torch.zeros_like(noise_guidance) noise_pred_text, noise_pred_uncond = noise_pred_out[0], noise_pred_out[1] noise_pred_safety_concept = noise_pred_out[2] # Equation 6 scale = torch.clamp(torch.abs((noise_pred_text - noise_pred_safety_concept)) * sld_guidance_scale, max=1.0) # Equation 6 safety_concept_scale = torch.where( (noise_pred_text - noise_pred_safety_concept) >= sld_threshold, torch.zeros_like(scale), scale ) # Equation 4 noise_guidance_safety = torch.mul((noise_pred_safety_concept - noise_pred_uncond), safety_concept_scale) # Equation 7 noise_guidance_safety = noise_guidance_safety + sld_momentum_scale * safety_momentum # Equation 8 safety_momentum = sld_mom_beta * safety_momentum + (1 - sld_mom_beta) * noise_guidance_safety if i >= sld_warmup_steps: # Warmup # Equation 3 noise_guidance = noise_guidance - noise_guidance_safety return noise_guidance, safety_momentum # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.encode_image def encode_image(self, image, device, num_images_per_prompt, output_hidden_states=None): dtype = next(self.image_encoder.parameters()).dtype if not isinstance(image, torch.Tensor): image = self.feature_extractor(image, return_tensors="pt").pixel_values image = image.to(device=device, dtype=dtype) if output_hidden_states: image_enc_hidden_states = self.image_encoder(image, output_hidden_states=True).hidden_states[-2] image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_enc_hidden_states = self.image_encoder( torch.zeros_like(image), output_hidden_states=True ).hidden_states[-2] uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave( num_images_per_prompt, dim=0 ) return image_enc_hidden_states, uncond_image_enc_hidden_states else: image_embeds = self.image_encoder(image).image_embeds image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_embeds = torch.zeros_like(image_embeds) return image_embeds, uncond_image_embeds @torch.no_grad() def __call__( self, prompt: Union[str, List[str]], height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, ip_adapter_image: Optional[PipelineImageInput] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, sld_guidance_scale: Optional[float] = 1000, sld_warmup_steps: Optional[int] = 10, sld_threshold: Optional[float] = 0.01, sld_momentum_scale: Optional[float] = 0.3, sld_mom_beta: Optional[float] = 0.4, ): r""" The call function to the pipeline for generation. 
Args: prompt (`str` or `List[str]`): The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated image. width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. ip_adapter_image (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that is called every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. sld_guidance_scale (`float`, *optional*, defaults to 1000): If `sld_guidance_scale < 1`, safety guidance is disabled. sld_warmup_steps (`int`, *optional*, defaults to 10): Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than `sld_warmup_steps`. sld_threshold (`float`, *optional*, defaults to 0.01): Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (`float`, *optional*, defaults to 0.3): Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than `sld_warmup_steps`. sld_mom_beta (`float`, *optional*, defaults to 0.4): Defines how safety guidance momentum builds up.
`sld_mom_beta` indicates how much of the previous momentum is kept. Momentum is built up during warmup for diffusion steps smaller than `sld_warmup_steps`. Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images and the second element is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content. Examples: ```py import torch from diffusers import StableDiffusionPipelineSafe from diffusers.pipelines.stable_diffusion_safe import SafetyConfig pipeline = StableDiffusionPipelineSafe.from_pretrained( "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 ).to("cuda") prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] ``` """ # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # 1. Check inputs. Raise error if not correct self.check_inputs(prompt, height, width, callback_steps) # 2. Define call parameters batch_size = 1 if isinstance(prompt, str) else len(prompt) device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 enable_safety_guidance = sld_guidance_scale > 1.0 and do_classifier_free_guidance if not enable_safety_guidance: warnings.warn("Safety checker disabled!") if ip_adapter_image is not None: output_hidden_state = False if isinstance(self.unet.encoder_hid_proj, ImageProjection) else True image_embeds, negative_image_embeds = self.encode_image( ip_adapter_image, device, num_images_per_prompt, output_hidden_state ) if do_classifier_free_guidance: if enable_safety_guidance: image_embeds = torch.cat([negative_image_embeds, image_embeds, image_embeds]) else: image_embeds = torch.cat([negative_image_embeds, image_embeds]) # 3. Encode input prompt prompt_embeds = self._encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, enable_safety_guidance ) # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 5. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. 
extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 6.1 Add image embeds for IP-Adapter added_cond_kwargs = {"image_embeds": image_embeds} if ip_adapter_image is not None else None safety_momentum = None num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = ( torch.cat([latents] * (3 if enable_safety_guidance else 2)) if do_classifier_free_guidance else latents ) latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, added_cond_kwargs=added_cond_kwargs ).sample # perform guidance if do_classifier_free_guidance: noise_pred_out = noise_pred.chunk((3 if enable_safety_guidance else 2)) noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1] # default classifier free guidance noise_guidance = noise_pred_text - noise_pred_uncond # Perform SLD guidance if enable_safety_guidance: if safety_momentum is None: safety_momentum = torch.zeros_like(noise_guidance) noise_pred_safety_concept = noise_pred_out[2] # Equation 6 scale = torch.clamp( torch.abs((noise_pred_text - noise_pred_safety_concept)) * sld_guidance_scale, max=1.0 ) # Equation 6 safety_concept_scale = torch.where( (noise_pred_text - noise_pred_safety_concept) >= sld_threshold, torch.zeros_like(scale), scale, ) # Equation 4 noise_guidance_safety = torch.mul( (noise_pred_safety_concept - noise_pred_uncond), safety_concept_scale ) # Equation 7 noise_guidance_safety = noise_guidance_safety + sld_momentum_scale * safety_momentum # Equation 8 safety_momentum = sld_mom_beta * safety_momentum + (1 - sld_mom_beta) * noise_guidance_safety if i >= sld_warmup_steps: # Warmup # Equation 3 noise_guidance = noise_guidance - noise_guidance_safety noise_pred = noise_pred_uncond + guidance_scale * noise_guidance # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if XLA_AVAILABLE: xm.mark_step() # 8. Post-processing image = self.decode_latents(latents) # 9. Run safety checker image, has_nsfw_concept, flagged_images = self.run_safety_checker( image, device, prompt_embeds.dtype, enable_safety_guidance ) # 10. Convert to PIL if output_type == "pil": image = self.numpy_to_pil(image) if flagged_images is not None: flagged_images = self.numpy_to_pil(flagged_images) if not return_dict: return ( image, has_nsfw_concept, self._safety_text_concept if enable_safety_guidance else None, flagged_images, ) return StableDiffusionSafePipelineOutput( images=image, nsfw_content_detected=has_nsfw_concept, applied_safety_concept=self._safety_text_concept if enable_safety_guidance else None, unsafe_images=flagged_images, )
diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py", "repo_id": "diffusers", "token_count": 17645 }
from typing import TYPE_CHECKING from ...utils import ( DIFFUSERS_SLOW_IMPORT, OptionalDependencyNotAvailable, _LazyModule, get_objects_from_module, is_torch_available, is_transformers_available, ) _dummy_objects = {} _import_structure = {} try: if not (is_transformers_available() and is_torch_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from ...utils import dummy_torch_and_transformers_objects _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects)) else: _import_structure["modeling_paella_vq_model"] = ["PaellaVQModel"] _import_structure["modeling_wuerstchen_diffnext"] = ["WuerstchenDiffNeXt"] _import_structure["modeling_wuerstchen_prior"] = ["WuerstchenPrior"] _import_structure["pipeline_wuerstchen"] = ["WuerstchenDecoderPipeline"] _import_structure["pipeline_wuerstchen_combined"] = ["WuerstchenCombinedPipeline"] _import_structure["pipeline_wuerstchen_prior"] = ["DEFAULT_STAGE_C_TIMESTEPS", "WuerstchenPriorPipeline"] if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT: try: if not (is_transformers_available() and is_torch_available()): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from ...utils.dummy_torch_and_transformers_objects import * # noqa F403 else: from .modeling_paella_vq_model import PaellaVQModel from .modeling_wuerstchen_diffnext import WuerstchenDiffNeXt from .modeling_wuerstchen_prior import WuerstchenPrior from .pipeline_wuerstchen import WuerstchenDecoderPipeline from .pipeline_wuerstchen_combined import WuerstchenCombinedPipeline from .pipeline_wuerstchen_prior import DEFAULT_STAGE_C_TIMESTEPS, WuerstchenPriorPipeline else: import sys sys.modules[__name__] = _LazyModule( __name__, globals()["__file__"], _import_structure, module_spec=__spec__, ) for name, value in _dummy_objects.items(): setattr(sys.modules[__name__], name, value)
diffusers/src/diffusers/pipelines/wuerstchen/__init__.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/wuerstchen/__init__.py", "repo_id": "diffusers", "token_count": 849 }
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union from ..base import DiffusersQuantizer if TYPE_CHECKING: from ...models.modeling_utils import ModelMixin from ...utils import ( get_module_from_name, is_accelerate_available, is_accelerate_version, is_gguf_available, is_gguf_version, is_torch_available, logging, ) if is_torch_available() and is_gguf_available(): import torch from .utils import ( GGML_QUANT_SIZES, GGUFParameter, _dequantize_gguf_and_restore_linear, _quant_shape_from_byte_shape, _replace_with_gguf_linear, ) logger = logging.get_logger(__name__) class GGUFQuantizer(DiffusersQuantizer): use_keep_in_fp32_modules = True def __init__(self, quantization_config, **kwargs): super().__init__(quantization_config, **kwargs) self.compute_dtype = quantization_config.compute_dtype self.pre_quantized = quantization_config.pre_quantized self.modules_to_not_convert = quantization_config.modules_to_not_convert if not isinstance(self.modules_to_not_convert, list): self.modules_to_not_convert = [self.modules_to_not_convert] def validate_environment(self, *args, **kwargs): if not is_accelerate_available() or is_accelerate_version("<", "0.26.0"): raise ImportError( "Loading GGUF Parameters requires `accelerate` installed in your environment: `pip install 'accelerate>=0.26.0'`" ) if not is_gguf_available() or is_gguf_version("<", "0.10.0"): raise ImportError( "To load GGUF format files you must have `gguf` installed in your environment: `pip install gguf>=0.10.0`" ) # Copied from diffusers.quantizers.bitsandbytes.bnb_quantizer.BnB4BitDiffusersQuantizer.adjust_max_memory def adjust_max_memory(self, max_memory: Dict[str, Union[int, str]]) -> Dict[str, Union[int, str]]: # need more space for buffers that are created during quantization max_memory = {key: val * 0.90 for key, val in max_memory.items()} return max_memory def adjust_target_dtype(self, target_dtype: "torch.dtype") -> "torch.dtype": if target_dtype != torch.uint8: logger.info(f"target_dtype {target_dtype} is replaced by `torch.uint8` for GGUF quantization") return torch.uint8 def update_torch_dtype(self, torch_dtype: "torch.dtype") -> "torch.dtype": if torch_dtype is None: torch_dtype = self.compute_dtype return torch_dtype def check_quantized_param_shape(self, param_name, current_param, loaded_param): loaded_param_shape = loaded_param.shape current_param_shape = current_param.shape quant_type = loaded_param.quant_type block_size, type_size = GGML_QUANT_SIZES[quant_type] inferred_shape = _quant_shape_from_byte_shape(loaded_param_shape, type_size, block_size) if inferred_shape != current_param_shape: raise ValueError( f"{param_name} has an expected quantized shape of: {inferred_shape}, but received shape: {loaded_param_shape}" ) return True def check_if_quantized_param( self, model: "ModelMixin", param_value: Union["GGUFParameter", "torch.Tensor"], param_name: str, state_dict: Dict[str, Any], **kwargs, ) -> bool: if isinstance(param_value, GGUFParameter): return True return False def create_quantized_param( self, model: "ModelMixin", param_value: Union["GGUFParameter", "torch.Tensor"], param_name: str, target_device: "torch.device", state_dict: Optional[Dict[str, Any]] = None, unexpected_keys: Optional[List[str]] = None, ): module, tensor_name = get_module_from_name(model, param_name) if tensor_name not in module._parameters and tensor_name not in module._buffers: raise ValueError(f"{module} does not have a parameter or a buffer named {tensor_name}.") if tensor_name in module._parameters: module._parameters[tensor_name] =
param_value.to(target_device) if tensor_name in module._buffers: module._buffers[tensor_name] = param_value.to(target_device) def _process_model_before_weight_loading( self, model: "ModelMixin", device_map, keep_in_fp32_modules: List[str] = [], **kwargs, ): state_dict = kwargs.get("state_dict", None) self.modules_to_not_convert.extend(keep_in_fp32_modules) self.modules_to_not_convert = [module for module in self.modules_to_not_convert if module is not None] _replace_with_gguf_linear( model, self.compute_dtype, state_dict, modules_to_not_convert=self.modules_to_not_convert ) def _process_model_after_weight_loading(self, model: "ModelMixin", **kwargs): return model @property def is_serializable(self): return False @property def is_trainable(self) -> bool: return False def _dequantize(self, model): is_model_on_cpu = model.device.type == "cpu" if is_model_on_cpu: logger.info( "Model was found to be on CPU (could happen as a result of `enable_model_cpu_offload()`). So, moving it to GPU. After dequantization, will move the model back to CPU again to preserve the previous device." ) model.to(torch.cuda.current_device()) model = _dequantize_gguf_and_restore_linear(model, self.modules_to_not_convert) if is_model_on_cpu: model.to("cpu") return model
diffusers/src/diffusers/quantizers/gguf/gguf_quantizer.py/0
{ "file_path": "diffusers/src/diffusers/quantizers/gguf/gguf_quantizer.py", "repo_id": "diffusers", "token_count": 2485 }
# Copyright 2024 Stanford University Team and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion # and https://github.com/hojonathanho/diffusion from dataclasses import dataclass from typing import Optional, Tuple, Union import flax import jax.numpy as jnp from ..configuration_utils import ConfigMixin, register_to_config from .scheduling_utils_flax import ( CommonSchedulerState, FlaxKarrasDiffusionSchedulers, FlaxSchedulerMixin, FlaxSchedulerOutput, add_noise_common, get_velocity_common, ) @flax.struct.dataclass class DDIMSchedulerState: common: CommonSchedulerState final_alpha_cumprod: jnp.ndarray # setable values init_noise_sigma: jnp.ndarray timesteps: jnp.ndarray num_inference_steps: Optional[int] = None @classmethod def create( cls, common: CommonSchedulerState, final_alpha_cumprod: jnp.ndarray, init_noise_sigma: jnp.ndarray, timesteps: jnp.ndarray, ): return cls( common=common, final_alpha_cumprod=final_alpha_cumprod, init_noise_sigma=init_noise_sigma, timesteps=timesteps, ) @dataclass class FlaxDDIMSchedulerOutput(FlaxSchedulerOutput): state: DDIMSchedulerState class FlaxDDIMScheduler(FlaxSchedulerMixin, ConfigMixin): """ Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with non-Markovian guidance. [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and [`~SchedulerMixin.from_pretrained`] functions. For more details, see the original paper: https://arxiv.org/abs/2010.02502 Args: num_train_timesteps (`int`): number of diffusion steps used to train the model. beta_start (`float`): the starting `beta` value of inference. beta_end (`float`): the final `beta` value. beta_schedule (`str`): the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from `linear`, `scaled_linear`, or `squaredcos_cap_v2`. trained_betas (`jnp.ndarray`, optional): option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. clip_sample (`bool`, default `True`): option to clip predicted sample between for numerical stability. The clip range is determined by `clip_sample_range`. clip_sample_range (`float`, default `1.0`): the maximum magnitude for sample clipping. Valid only when `clip_sample=True`. set_alpha_to_one (`bool`, default `True`): each diffusion step uses the value of alphas product at that step and at the previous one. For the final step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, otherwise it uses the value of alpha at step 0. 
steps_offset (`int`, default `0`): An offset added to the inference steps, as required by some model families. prediction_type (`str`, default `epsilon`): indicates whether the model predicts the noise (epsilon), or the samples. One of `epsilon`, `sample`. `v-prediction` is not supported for this scheduler. dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`): the `dtype` used for params and computation. """ _compatibles = [e.name for e in FlaxKarrasDiffusionSchedulers] dtype: jnp.dtype @property def has_state(self): return True @register_to_config def __init__( self, num_train_timesteps: int = 1000, beta_start: float = 0.0001, beta_end: float = 0.02, beta_schedule: str = "linear", trained_betas: Optional[jnp.ndarray] = None, clip_sample: bool = True, clip_sample_range: float = 1.0, set_alpha_to_one: bool = True, steps_offset: int = 0, prediction_type: str = "epsilon", dtype: jnp.dtype = jnp.float32, ): self.dtype = dtype def create_state(self, common: Optional[CommonSchedulerState] = None) -> DDIMSchedulerState: if common is None: common = CommonSchedulerState.create(self) # At every step in ddim, we are looking into the previous alphas_cumprod # For the final step, there is no previous alphas_cumprod because we are already at 0 # `set_alpha_to_one` decides whether we set this parameter simply to one or # whether we use the final alpha of the "non-previous" one. final_alpha_cumprod = ( jnp.array(1.0, dtype=self.dtype) if self.config.set_alpha_to_one else common.alphas_cumprod[0] ) # standard deviation of the initial noise distribution init_noise_sigma = jnp.array(1.0, dtype=self.dtype) timesteps = jnp.arange(0, self.config.num_train_timesteps).round()[::-1] return DDIMSchedulerState.create( common=common, final_alpha_cumprod=final_alpha_cumprod, init_noise_sigma=init_noise_sigma, timesteps=timesteps, ) def scale_model_input( self, state: DDIMSchedulerState, sample: jnp.ndarray, timestep: Optional[int] = None ) -> jnp.ndarray: """ Args: state (`PNDMSchedulerState`): the `FlaxPNDMScheduler` state data class instance. sample (`jnp.ndarray`): input sample timestep (`int`, optional): current timestep Returns: `jnp.ndarray`: scaled input sample """ return sample def set_timesteps( self, state: DDIMSchedulerState, num_inference_steps: int, shape: Tuple = () ) -> DDIMSchedulerState: """ Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. Args: state (`DDIMSchedulerState`): the `FlaxDDIMScheduler` state data class instance. num_inference_steps (`int`): the number of diffusion steps used when generating samples with a pre-trained model. 
""" step_ratio = self.config.num_train_timesteps // num_inference_steps # creates integer timesteps by multiplying by ratio # rounding to avoid issues when num_inference_step is power of 3 timesteps = (jnp.arange(0, num_inference_steps) * step_ratio).round()[::-1] + self.config.steps_offset return state.replace( num_inference_steps=num_inference_steps, timesteps=timesteps, ) def _get_variance(self, state: DDIMSchedulerState, timestep, prev_timestep): alpha_prod_t = state.common.alphas_cumprod[timestep] alpha_prod_t_prev = jnp.where( prev_timestep >= 0, state.common.alphas_cumprod[prev_timestep], state.final_alpha_cumprod ) beta_prod_t = 1 - alpha_prod_t beta_prod_t_prev = 1 - alpha_prod_t_prev variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev) return variance def step( self, state: DDIMSchedulerState, model_output: jnp.ndarray, timestep: int, sample: jnp.ndarray, eta: float = 0.0, return_dict: bool = True, ) -> Union[FlaxDDIMSchedulerOutput, Tuple]: """ Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise). Args: state (`DDIMSchedulerState`): the `FlaxDDIMScheduler` state data class instance. model_output (`jnp.ndarray`): direct output from learned diffusion model. timestep (`int`): current discrete timestep in the diffusion chain. sample (`jnp.ndarray`): current instance of sample being created by diffusion process. return_dict (`bool`): option for returning tuple rather than FlaxDDIMSchedulerOutput class Returns: [`FlaxDDIMSchedulerOutput`] or `tuple`: [`FlaxDDIMSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. """ if state.num_inference_steps is None: raise ValueError( "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" ) # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf # Ideally, read DDIM paper in-detail understanding # Notation (<variable name> -> <name in paper> # - pred_noise_t -> e_theta(x_t, t) # - pred_original_sample -> f_theta(x_t, t) or x_0 # - std_dev_t -> sigma_t # - eta -> η # - pred_sample_direction -> "direction pointing to x_t" # - pred_prev_sample -> "x_t-1" # 1. get previous step value (=t-1) prev_timestep = timestep - self.config.num_train_timesteps // state.num_inference_steps alphas_cumprod = state.common.alphas_cumprod final_alpha_cumprod = state.final_alpha_cumprod # 2. compute alphas, betas alpha_prod_t = alphas_cumprod[timestep] alpha_prod_t_prev = jnp.where(prev_timestep >= 0, alphas_cumprod[prev_timestep], final_alpha_cumprod) beta_prod_t = 1 - alpha_prod_t # 3. 
compute predicted original sample from predicted noise also called # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf if self.config.prediction_type == "epsilon": pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5) pred_epsilon = model_output elif self.config.prediction_type == "sample": pred_original_sample = model_output pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5) elif self.config.prediction_type == "v_prediction": pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output pred_epsilon = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample else: raise ValueError( f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" " `v_prediction`" ) # 4. Clip or threshold "predicted x_0" if self.config.clip_sample: pred_original_sample = pred_original_sample.clip( -self.config.clip_sample_range, self.config.clip_sample_range ) # 4. compute variance: "sigma_t(η)" -> see formula (16) # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1) variance = self._get_variance(state, timestep, prev_timestep) std_dev_t = eta * variance ** (0.5) # 5. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * pred_epsilon # 6. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction if not return_dict: return (prev_sample, state) return FlaxDDIMSchedulerOutput(prev_sample=prev_sample, state=state) def add_noise( self, state: DDIMSchedulerState, original_samples: jnp.ndarray, noise: jnp.ndarray, timesteps: jnp.ndarray, ) -> jnp.ndarray: return add_noise_common(state.common, original_samples, noise, timesteps) def get_velocity( self, state: DDIMSchedulerState, sample: jnp.ndarray, noise: jnp.ndarray, timesteps: jnp.ndarray, ) -> jnp.ndarray: return get_velocity_common(state.common, sample, noise, timesteps) def __len__(self): return self.config.num_train_timesteps
diffusers/src/diffusers/schedulers/scheduling_ddim_flax.py/0
{ "file_path": "diffusers/src/diffusers/schedulers/scheduling_ddim_flax.py", "repo_id": "diffusers", "token_count": 5537 }
# Copyright 2024 Shuchen Xue, etc. in University of Chinese Academy of Sciences Team and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # DISCLAIMER: check https://arxiv.org/abs/2309.05019 # The codebase is modified based on https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py import math from typing import Callable, List, Optional, Tuple, Union import numpy as np import torch from ..configuration_utils import ConfigMixin, register_to_config from ..utils import deprecate, is_scipy_available from ..utils.torch_utils import randn_tensor from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput if is_scipy_available(): import scipy.stats # Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar def betas_for_alpha_bar( num_diffusion_timesteps, max_beta=0.999, alpha_transform_type="cosine", ): """ Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of (1-beta) over time from t = [0,1]. Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up to that part of the diffusion process. Args: num_diffusion_timesteps (`int`): the number of betas to produce. max_beta (`float`): the maximum beta to use; use values lower than 1 to prevent singularities. alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. Choose from `cosine` or `exp` Returns: betas (`np.ndarray`): the betas used by the scheduler to step the model outputs """ if alpha_transform_type == "cosine": def alpha_bar_fn(t): return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 elif alpha_transform_type == "exp": def alpha_bar_fn(t): return math.exp(t * -12.0) else: raise ValueError(f"Unsupported alpha_transform_type: {alpha_transform_type}") betas = [] for i in range(num_diffusion_timesteps): t1 = i / num_diffusion_timesteps t2 = (i + 1) / num_diffusion_timesteps betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) return torch.tensor(betas, dtype=torch.float32) class SASolverScheduler(SchedulerMixin, ConfigMixin): """ `SASolverScheduler` is a fast dedicated high-order solver for diffusion SDEs. This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. Args: num_train_timesteps (`int`, defaults to 1000): The number of diffusion steps to train the model. beta_start (`float`, defaults to 0.0001): The starting `beta` value of inference. beta_end (`float`, defaults to 0.02): The final `beta` value. beta_schedule (`str`, defaults to `"linear"`): The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from `linear`, `scaled_linear`, or `squaredcos_cap_v2`. 
trained_betas (`np.ndarray`, *optional*): Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`. predictor_order (`int`, defaults to 2): The predictor order which can be `1` or `2` or `3` or '4'. It is recommended to use `predictor_order=2` for guided sampling, and `predictor_order=3` for unconditional sampling. corrector_order (`int`, defaults to 2): The corrector order which can be `1` or `2` or `3` or '4'. It is recommended to use `corrector_order=2` for guided sampling, and `corrector_order=3` for unconditional sampling. prediction_type (`str`, defaults to `epsilon`, *optional*): Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process), `sample` (directly predicts the noisy sample`) or `v_prediction` (see section 2.4 of [Imagen Video](https://imagen.research.google/video/paper.pdf) paper). tau_func (`Callable`, *optional*): Stochasticity during the sampling. Default in init is `lambda t: 1 if t >= 200 and t <= 800 else 0`. SA-Solver will sample from vanilla diffusion ODE if tau_func is set to `lambda t: 0`. SA-Solver will sample from vanilla diffusion SDE if tau_func is set to `lambda t: 1`. For more details, please check https://arxiv.org/abs/2309.05019 thresholding (`bool`, defaults to `False`): Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such as Stable Diffusion. dynamic_thresholding_ratio (`float`, defaults to 0.995): The ratio for the dynamic thresholding method. Valid only when `thresholding=True`. sample_max_value (`float`, defaults to 1.0): The threshold value for dynamic thresholding. Valid only when `thresholding=True` and `algorithm_type="dpmsolver++"`. algorithm_type (`str`, defaults to `data_prediction`): Algorithm type for the solver; can be `data_prediction` or `noise_prediction`. It is recommended to use `data_prediction` with `solver_order=2` for guided sampling like in Stable Diffusion. lower_order_final (`bool`, defaults to `True`): Whether to use lower-order solvers in the final steps. Default = True. use_karras_sigmas (`bool`, *optional*, defaults to `False`): Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`, the sigmas are determined according to a sequence of noise levels {σi}. use_exponential_sigmas (`bool`, *optional*, defaults to `False`): Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process. use_beta_sigmas (`bool`, *optional*, defaults to `False`): Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information. lambda_min_clipped (`float`, defaults to `-inf`): Clipping threshold for the minimum value of `lambda(t)` for numerical stability. This is critical for the cosine (`squaredcos_cap_v2`) noise schedule. variance_type (`str`, *optional*): Set to "learned" or "learned_range" for diffusion models that predict variance. If set, the model's output contains the predicted Gaussian variance. timestep_spacing (`str`, defaults to `"linspace"`): The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information. steps_offset (`int`, defaults to 0): An offset added to the inference steps, as required by some model families. 
""" _compatibles = [e.name for e in KarrasDiffusionSchedulers] order = 1 @register_to_config def __init__( self, num_train_timesteps: int = 1000, beta_start: float = 0.0001, beta_end: float = 0.02, beta_schedule: str = "linear", trained_betas: Optional[Union[np.ndarray, List[float]]] = None, predictor_order: int = 2, corrector_order: int = 2, prediction_type: str = "epsilon", tau_func: Optional[Callable] = None, thresholding: bool = False, dynamic_thresholding_ratio: float = 0.995, sample_max_value: float = 1.0, algorithm_type: str = "data_prediction", lower_order_final: bool = True, use_karras_sigmas: Optional[bool] = False, use_exponential_sigmas: Optional[bool] = False, use_beta_sigmas: Optional[bool] = False, use_flow_sigmas: Optional[bool] = False, flow_shift: Optional[float] = 1.0, lambda_min_clipped: float = -float("inf"), variance_type: Optional[str] = None, timestep_spacing: str = "linspace", steps_offset: int = 0, ): if self.config.use_beta_sigmas and not is_scipy_available(): raise ImportError("Make sure to install scipy if you want to use beta sigmas.") if sum([self.config.use_beta_sigmas, self.config.use_exponential_sigmas, self.config.use_karras_sigmas]) > 1: raise ValueError( "Only one of `config.use_beta_sigmas`, `config.use_exponential_sigmas`, `config.use_karras_sigmas` can be used." ) if trained_betas is not None: self.betas = torch.tensor(trained_betas, dtype=torch.float32) elif beta_schedule == "linear": self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) elif beta_schedule == "scaled_linear": # this schedule is very specific to the latent diffusion model. self.betas = ( torch.linspace( beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32, ) ** 2 ) elif beta_schedule == "squaredcos_cap_v2": # Glide cosine schedule self.betas = betas_for_alpha_bar(num_train_timesteps) else: raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}") self.alphas = 1.0 - self.betas self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) # Currently we only support VP-type noise schedule self.alpha_t = torch.sqrt(self.alphas_cumprod) self.sigma_t = torch.sqrt(1 - self.alphas_cumprod) self.lambda_t = torch.log(self.alpha_t) - torch.log(self.sigma_t) self.sigmas = ((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5 # standard deviation of the initial noise distribution self.init_noise_sigma = 1.0 if algorithm_type not in ["data_prediction", "noise_prediction"]: raise NotImplementedError(f"{algorithm_type} is not implemented for {self.__class__}") # setable values self.num_inference_steps = None timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=np.float32)[::-1].copy() self.timesteps = torch.from_numpy(timesteps) self.timestep_list = [None] * max(predictor_order, corrector_order - 1) self.model_outputs = [None] * max(predictor_order, corrector_order - 1) if tau_func is None: self.tau_func = lambda t: 1 if t >= 200 and t <= 800 else 0 else: self.tau_func = tau_func self.predict_x0 = algorithm_type == "data_prediction" self.lower_order_nums = 0 self.last_sample = None self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication @property def step_index(self): """ The index counter for current timestep. It will increase 1 after each scheduler step. """ return self._step_index @property def begin_index(self): """ The index for the first timestep. It should be set from pipeline with `set_begin_index` method. 
""" return self._begin_index # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.set_begin_index def set_begin_index(self, begin_index: int = 0): """ Sets the begin index for the scheduler. This function should be run from pipeline before the inference. Args: begin_index (`int`): The begin index for the scheduler. """ self._begin_index = begin_index def set_timesteps(self, num_inference_steps: int = None, device: Union[str, torch.device] = None): """ Sets the discrete timesteps used for the diffusion chain (to be run before inference). Args: num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. """ # Clipping the minimum of all lambda(t) for numerical stability. # This is critical for cosine (squaredcos_cap_v2) noise schedule. clipped_idx = torch.searchsorted(torch.flip(self.lambda_t, [0]), self.config.lambda_min_clipped) last_timestep = ((self.config.num_train_timesteps - clipped_idx).numpy()).item() # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891 if self.config.timestep_spacing == "linspace": timesteps = ( np.linspace(0, last_timestep - 1, num_inference_steps + 1).round()[::-1][:-1].copy().astype(np.int64) ) elif self.config.timestep_spacing == "leading": step_ratio = last_timestep // (num_inference_steps + 1) # creates integer timesteps by multiplying by ratio # casting to int to avoid issues when num_inference_step is power of 3 timesteps = (np.arange(0, num_inference_steps + 1) * step_ratio).round()[::-1][:-1].copy().astype(np.int64) timesteps += self.config.steps_offset elif self.config.timestep_spacing == "trailing": step_ratio = self.config.num_train_timesteps / num_inference_steps # creates integer timesteps by multiplying by ratio # casting to int to avoid issues when num_inference_step is power of 3 timesteps = np.arange(last_timestep, 0, -step_ratio).round().copy().astype(np.int64) timesteps -= 1 else: raise ValueError( f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', 'leading' or 'trailing'." 
) sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) log_sigmas = np.log(sigmas) if self.config.use_karras_sigmas: sigmas = np.flip(sigmas).copy() sigmas = self._convert_to_karras(in_sigmas=sigmas, num_inference_steps=num_inference_steps) timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas]).round() sigmas = np.concatenate([sigmas, sigmas[-1:]]).astype(np.float32) elif self.config.use_exponential_sigmas: sigmas = np.flip(sigmas).copy() sigmas = self._convert_to_exponential(in_sigmas=sigmas, num_inference_steps=num_inference_steps) timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas]) sigmas = np.concatenate([sigmas, sigmas[-1:]]).astype(np.float32) elif self.config.use_beta_sigmas: sigmas = np.flip(sigmas).copy() sigmas = self._convert_to_beta(in_sigmas=sigmas, num_inference_steps=num_inference_steps) timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas]) sigmas = np.concatenate([sigmas, sigmas[-1:]]).astype(np.float32) elif self.config.use_flow_sigmas: alphas = np.linspace(1, 1 / self.config.num_train_timesteps, num_inference_steps + 1) sigmas = 1.0 - alphas sigmas = np.flip(self.config.flow_shift * sigmas / (1 + (self.config.flow_shift - 1) * sigmas))[:-1].copy() timesteps = (sigmas * self.config.num_train_timesteps).copy() sigmas = np.concatenate([sigmas, sigmas[-1:]]).astype(np.float32) else: sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) sigma_last = ((1 - self.alphas_cumprod[0]) / self.alphas_cumprod[0]) ** 0.5 sigmas = np.concatenate([sigmas, [sigma_last]]).astype(np.float32) self.sigmas = torch.from_numpy(sigmas) self.timesteps = torch.from_numpy(timesteps).to(device=device, dtype=torch.int64) self.num_inference_steps = len(timesteps) self.model_outputs = [ None, ] * max(self.config.predictor_order, self.config.corrector_order - 1) self.lower_order_nums = 0 self.last_sample = None # add an index counter for schedulers that allow duplicated timesteps self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample def _threshold_sample(self, sample: torch.Tensor) -> torch.Tensor: """ "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing pixels from saturation at each step. We find that dynamic thresholding results in significantly better photorealism as well as better image-text alignment, especially when using very large guidance weights." 
https://arxiv.org/abs/2205.11487 """ dtype = sample.dtype batch_size, channels, *remaining_dims = sample.shape if dtype not in (torch.float32, torch.float64): sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half # Flatten sample for doing quantile calculation along each image sample = sample.reshape(batch_size, channels * np.prod(remaining_dims)) abs_sample = sample.abs() # "a certain percentile absolute pixel value" s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1) s = torch.clamp( s, min=1, max=self.config.sample_max_value ) # When clamped to min=1, equivalent to standard clipping to [-1, 1] s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0 sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s" sample = sample.reshape(batch_size, channels, *remaining_dims) sample = sample.to(dtype) return sample # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._sigma_to_t def _sigma_to_t(self, sigma, log_sigmas): # get log sigma log_sigma = np.log(np.maximum(sigma, 1e-10)) # get distribution dists = log_sigma - log_sigmas[:, np.newaxis] # get sigmas range low_idx = np.cumsum((dists >= 0), axis=0).argmax(axis=0).clip(max=log_sigmas.shape[0] - 2) high_idx = low_idx + 1 low = log_sigmas[low_idx] high = log_sigmas[high_idx] # interpolate sigmas w = (low - log_sigma) / (low - high) w = np.clip(w, 0, 1) # transform interpolation to time range t = (1 - w) * low_idx + w * high_idx t = t.reshape(sigma.shape) return t # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler._sigma_to_alpha_sigma_t def _sigma_to_alpha_sigma_t(self, sigma): if self.config.use_flow_sigmas: alpha_t = 1 - sigma sigma_t = sigma else: alpha_t = 1 / ((sigma**2 + 1) ** 0.5) sigma_t = sigma * alpha_t return alpha_t, sigma_t # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._convert_to_karras def _convert_to_karras(self, in_sigmas: torch.Tensor, num_inference_steps) -> torch.Tensor: """Constructs the noise schedule of Karras et al. 
(2022).""" # Hack to make sure that other schedulers which copy this function don't break # TODO: Add this logic to the other schedulers if hasattr(self.config, "sigma_min"): sigma_min = self.config.sigma_min else: sigma_min = None if hasattr(self.config, "sigma_max"): sigma_max = self.config.sigma_max else: sigma_max = None sigma_min = sigma_min if sigma_min is not None else in_sigmas[-1].item() sigma_max = sigma_max if sigma_max is not None else in_sigmas[0].item() rho = 7.0 # 7.0 is the value used in the paper ramp = np.linspace(0, 1, num_inference_steps) min_inv_rho = sigma_min ** (1 / rho) max_inv_rho = sigma_max ** (1 / rho) sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho return sigmas # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._convert_to_exponential def _convert_to_exponential(self, in_sigmas: torch.Tensor, num_inference_steps: int) -> torch.Tensor: """Constructs an exponential noise schedule.""" # Hack to make sure that other schedulers which copy this function don't break # TODO: Add this logic to the other schedulers if hasattr(self.config, "sigma_min"): sigma_min = self.config.sigma_min else: sigma_min = None if hasattr(self.config, "sigma_max"): sigma_max = self.config.sigma_max else: sigma_max = None sigma_min = sigma_min if sigma_min is not None else in_sigmas[-1].item() sigma_max = sigma_max if sigma_max is not None else in_sigmas[0].item() sigmas = np.exp(np.linspace(math.log(sigma_max), math.log(sigma_min), num_inference_steps)) return sigmas # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._convert_to_beta def _convert_to_beta( self, in_sigmas: torch.Tensor, num_inference_steps: int, alpha: float = 0.6, beta: float = 0.6 ) -> torch.Tensor: """From "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et. al, 2024)""" # Hack to make sure that other schedulers which copy this function don't break # TODO: Add this logic to the other schedulers if hasattr(self.config, "sigma_min"): sigma_min = self.config.sigma_min else: sigma_min = None if hasattr(self.config, "sigma_max"): sigma_max = self.config.sigma_max else: sigma_max = None sigma_min = sigma_min if sigma_min is not None else in_sigmas[-1].item() sigma_max = sigma_max if sigma_max is not None else in_sigmas[0].item() sigmas = np.array( [ sigma_min + (ppf * (sigma_max - sigma_min)) for ppf in [ scipy.stats.beta.ppf(timestep, alpha, beta) for timestep in 1 - np.linspace(0, 1, num_inference_steps) ] ] ) return sigmas def convert_model_output( self, model_output: torch.Tensor, *args, sample: torch.Tensor = None, **kwargs, ) -> torch.Tensor: """ Convert the model output to the corresponding type the data_prediction/noise_prediction algorithm needs. Noise_prediction is designed to discretize an integral of the noise prediction model, and data_prediction is designed to discretize an integral of the data prediction model. <Tip> The algorithm and model type are decoupled. You can use either data_prediction or noise_prediction for both noise prediction and data prediction models. </Tip> Args: model_output (`torch.Tensor`): The direct output from the learned diffusion model. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. Returns: `torch.Tensor`: The converted model output. 
""" timestep = args[0] if len(args) > 0 else kwargs.pop("timestep", None) if sample is None: if len(args) > 1: sample = args[1] else: raise ValueError("missing `sample` as a required keyward argument") if timestep is not None: deprecate( "timesteps", "1.0.0", "Passing `timesteps` is deprecated and has no effect as model output conversion is now handled via an internal counter `self.step_index`", ) sigma = self.sigmas[self.step_index] alpha_t, sigma_t = self._sigma_to_alpha_sigma_t(sigma) # SA-Solver_data_prediction needs to solve an integral of the data prediction model. if self.config.algorithm_type in ["data_prediction"]: if self.config.prediction_type == "epsilon": # SA-Solver only needs the "mean" output. if self.config.variance_type in ["learned", "learned_range"]: model_output = model_output[:, :3] x0_pred = (sample - sigma_t * model_output) / alpha_t elif self.config.prediction_type == "sample": x0_pred = model_output elif self.config.prediction_type == "v_prediction": x0_pred = alpha_t * sample - sigma_t * model_output elif self.config.prediction_type == "flow_prediction": sigma_t = self.sigmas[self.step_index] x0_pred = sample - sigma_t * model_output else: raise ValueError( f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, " "`v_prediction`, or `flow_prediction` for the SASolverScheduler." ) if self.config.thresholding: x0_pred = self._threshold_sample(x0_pred) return x0_pred # SA-Solver_noise_prediction needs to solve an integral of the noise prediction model. elif self.config.algorithm_type in ["noise_prediction"]: if self.config.prediction_type == "epsilon": # SA-Solver only needs the "mean" output. if self.config.variance_type in ["learned", "learned_range"]: epsilon = model_output[:, :3] else: epsilon = model_output elif self.config.prediction_type == "sample": epsilon = (sample - alpha_t * model_output) / sigma_t elif self.config.prediction_type == "v_prediction": epsilon = alpha_t * model_output + sigma_t * sample else: raise ValueError( f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" " `v_prediction` for the SASolverScheduler." 
) if self.config.thresholding: alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] x0_pred = (sample - sigma_t * epsilon) / alpha_t x0_pred = self._threshold_sample(x0_pred) epsilon = (sample - alpha_t * x0_pred) / sigma_t return epsilon def get_coefficients_exponential_negative(self, order, interval_start, interval_end): """ Calculate the integral of exp(-x) * x^order dx from interval_start to interval_end """ assert order in [0, 1, 2, 3], "order is only supported for 0, 1, 2 and 3" if order == 0: return torch.exp(-interval_end) * (torch.exp(interval_end - interval_start) - 1) elif order == 1: return torch.exp(-interval_end) * ( (interval_start + 1) * torch.exp(interval_end - interval_start) - (interval_end + 1) ) elif order == 2: return torch.exp(-interval_end) * ( (interval_start**2 + 2 * interval_start + 2) * torch.exp(interval_end - interval_start) - (interval_end**2 + 2 * interval_end + 2) ) elif order == 3: return torch.exp(-interval_end) * ( (interval_start**3 + 3 * interval_start**2 + 6 * interval_start + 6) * torch.exp(interval_end - interval_start) - (interval_end**3 + 3 * interval_end**2 + 6 * interval_end + 6) ) def get_coefficients_exponential_positive(self, order, interval_start, interval_end, tau): """ Calculate the integral of exp(x(1+tau^2)) * x^order dx from interval_start to interval_end """ assert order in [0, 1, 2, 3], "order is only supported for 0, 1, 2 and 3" # after change of variable(cov) interval_end_cov = (1 + tau**2) * interval_end interval_start_cov = (1 + tau**2) * interval_start if order == 0: return ( torch.exp(interval_end_cov) * (1 - torch.exp(-(interval_end_cov - interval_start_cov))) / (1 + tau**2) ) elif order == 1: return ( torch.exp(interval_end_cov) * ( (interval_end_cov - 1) - (interval_start_cov - 1) * torch.exp(-(interval_end_cov - interval_start_cov)) ) / ((1 + tau**2) ** 2) ) elif order == 2: return ( torch.exp(interval_end_cov) * ( (interval_end_cov**2 - 2 * interval_end_cov + 2) - (interval_start_cov**2 - 2 * interval_start_cov + 2) * torch.exp(-(interval_end_cov - interval_start_cov)) ) / ((1 + tau**2) ** 3) ) elif order == 3: return ( torch.exp(interval_end_cov) * ( (interval_end_cov**3 - 3 * interval_end_cov**2 + 6 * interval_end_cov - 6) - (interval_start_cov**3 - 3 * interval_start_cov**2 + 6 * interval_start_cov - 6) * torch.exp(-(interval_end_cov - interval_start_cov)) ) / ((1 + tau**2) ** 4) ) def lagrange_polynomial_coefficient(self, order, lambda_list): """ Calculate the coefficient of lagrange polynomial """ assert order in [0, 1, 2, 3] assert order == len(lambda_list) - 1 if order == 0: return [[1]] elif order == 1: return [ [ 1 / (lambda_list[0] - lambda_list[1]), -lambda_list[1] / (lambda_list[0] - lambda_list[1]), ], [ 1 / (lambda_list[1] - lambda_list[0]), -lambda_list[0] / (lambda_list[1] - lambda_list[0]), ], ] elif order == 2: denominator1 = (lambda_list[0] - lambda_list[1]) * (lambda_list[0] - lambda_list[2]) denominator2 = (lambda_list[1] - lambda_list[0]) * (lambda_list[1] - lambda_list[2]) denominator3 = (lambda_list[2] - lambda_list[0]) * (lambda_list[2] - lambda_list[1]) return [ [ 1 / denominator1, (-lambda_list[1] - lambda_list[2]) / denominator1, lambda_list[1] * lambda_list[2] / denominator1, ], [ 1 / denominator2, (-lambda_list[0] - lambda_list[2]) / denominator2, lambda_list[0] * lambda_list[2] / denominator2, ], [ 1 / denominator3, (-lambda_list[0] - lambda_list[1]) / denominator3, lambda_list[0] * lambda_list[1] / denominator3, ], ] elif order == 3: denominator1 = ( (lambda_list[0] - 
lambda_list[1]) * (lambda_list[0] - lambda_list[2]) * (lambda_list[0] - lambda_list[3]) ) denominator2 = ( (lambda_list[1] - lambda_list[0]) * (lambda_list[1] - lambda_list[2]) * (lambda_list[1] - lambda_list[3]) ) denominator3 = ( (lambda_list[2] - lambda_list[0]) * (lambda_list[2] - lambda_list[1]) * (lambda_list[2] - lambda_list[3]) ) denominator4 = ( (lambda_list[3] - lambda_list[0]) * (lambda_list[3] - lambda_list[1]) * (lambda_list[3] - lambda_list[2]) ) return [ [ 1 / denominator1, (-lambda_list[1] - lambda_list[2] - lambda_list[3]) / denominator1, ( lambda_list[1] * lambda_list[2] + lambda_list[1] * lambda_list[3] + lambda_list[2] * lambda_list[3] ) / denominator1, (-lambda_list[1] * lambda_list[2] * lambda_list[3]) / denominator1, ], [ 1 / denominator2, (-lambda_list[0] - lambda_list[2] - lambda_list[3]) / denominator2, ( lambda_list[0] * lambda_list[2] + lambda_list[0] * lambda_list[3] + lambda_list[2] * lambda_list[3] ) / denominator2, (-lambda_list[0] * lambda_list[2] * lambda_list[3]) / denominator2, ], [ 1 / denominator3, (-lambda_list[0] - lambda_list[1] - lambda_list[3]) / denominator3, ( lambda_list[0] * lambda_list[1] + lambda_list[0] * lambda_list[3] + lambda_list[1] * lambda_list[3] ) / denominator3, (-lambda_list[0] * lambda_list[1] * lambda_list[3]) / denominator3, ], [ 1 / denominator4, (-lambda_list[0] - lambda_list[1] - lambda_list[2]) / denominator4, ( lambda_list[0] * lambda_list[1] + lambda_list[0] * lambda_list[2] + lambda_list[1] * lambda_list[2] ) / denominator4, (-lambda_list[0] * lambda_list[1] * lambda_list[2]) / denominator4, ], ] def get_coefficients_fn(self, order, interval_start, interval_end, lambda_list, tau): assert order in [1, 2, 3, 4] assert order == len(lambda_list), "the length of lambda list must be equal to the order" coefficients = [] lagrange_coefficient = self.lagrange_polynomial_coefficient(order - 1, lambda_list) for i in range(order): coefficient = 0 for j in range(order): if self.predict_x0: coefficient += lagrange_coefficient[i][j] * self.get_coefficients_exponential_positive( order - 1 - j, interval_start, interval_end, tau ) else: coefficient += lagrange_coefficient[i][j] * self.get_coefficients_exponential_negative( order - 1 - j, interval_start, interval_end ) coefficients.append(coefficient) assert len(coefficients) == order, "the length of coefficients does not match the order" return coefficients def stochastic_adams_bashforth_update( self, model_output: torch.Tensor, *args, sample: torch.Tensor, noise: torch.Tensor, order: int, tau: torch.Tensor, **kwargs, ) -> torch.Tensor: """ One step for the SA-Predictor. Args: model_output (`torch.Tensor`): The direct output from the learned diffusion model at the current timestep. prev_timestep (`int`): The previous discrete timestep in the diffusion chain. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. order (`int`): The order of SA-Predictor at this timestep. Returns: `torch.Tensor`: The sample tensor at the previous timestep. 
""" prev_timestep = args[0] if len(args) > 0 else kwargs.pop("prev_timestep", None) if sample is None: if len(args) > 1: sample = args[1] else: raise ValueError(" missing `sample` as a required keyward argument") if noise is None: if len(args) > 2: noise = args[2] else: raise ValueError(" missing `noise` as a required keyward argument") if order is None: if len(args) > 3: order = args[3] else: raise ValueError(" missing `order` as a required keyward argument") if tau is None: if len(args) > 4: tau = args[4] else: raise ValueError(" missing `tau` as a required keyward argument") if prev_timestep is not None: deprecate( "prev_timestep", "1.0.0", "Passing `prev_timestep` is deprecated and has no effect as model output conversion is now handled via an internal counter `self.step_index`", ) model_output_list = self.model_outputs sigma_t, sigma_s0 = ( self.sigmas[self.step_index + 1], self.sigmas[self.step_index], ) alpha_t, sigma_t = self._sigma_to_alpha_sigma_t(sigma_t) alpha_s0, sigma_s0 = self._sigma_to_alpha_sigma_t(sigma_s0) lambda_t = torch.log(alpha_t) - torch.log(sigma_t) lambda_s0 = torch.log(alpha_s0) - torch.log(sigma_s0) gradient_part = torch.zeros_like(sample) h = lambda_t - lambda_s0 lambda_list = [] for i in range(order): si = self.step_index - i alpha_si, sigma_si = self._sigma_to_alpha_sigma_t(self.sigmas[si]) lambda_si = torch.log(alpha_si) - torch.log(sigma_si) lambda_list.append(lambda_si) gradient_coefficients = self.get_coefficients_fn(order, lambda_s0, lambda_t, lambda_list, tau) x = sample if self.predict_x0: if ( order == 2 ): ## if order = 2 we do a modification that does not influence the convergence order similar to unipc. Note: This is used only for few steps sampling. # The added term is O(h^3). Empirically we find it will slightly improve the image quality. 
# ODE case # gradient_coefficients[0] += 1.0 * torch.exp(lambda_t) * (h ** 2 / 2 - (h - 1 + torch.exp(-h))) / (ns.marginal_lambda(t_prev_list[-1]) - ns.marginal_lambda(t_prev_list[-2])) # gradient_coefficients[1] -= 1.0 * torch.exp(lambda_t) * (h ** 2 / 2 - (h - 1 + torch.exp(-h))) / (ns.marginal_lambda(t_prev_list[-1]) - ns.marginal_lambda(t_prev_list[-2])) temp_sigma = self.sigmas[self.step_index - 1] temp_alpha_s, temp_sigma_s = self._sigma_to_alpha_sigma_t(temp_sigma) temp_lambda_s = torch.log(temp_alpha_s) - torch.log(temp_sigma_s) gradient_coefficients[0] += ( 1.0 * torch.exp((1 + tau**2) * lambda_t) * (h**2 / 2 - (h * (1 + tau**2) - 1 + torch.exp((1 + tau**2) * (-h))) / ((1 + tau**2) ** 2)) / (lambda_s0 - temp_lambda_s) ) gradient_coefficients[1] -= ( 1.0 * torch.exp((1 + tau**2) * lambda_t) * (h**2 / 2 - (h * (1 + tau**2) - 1 + torch.exp((1 + tau**2) * (-h))) / ((1 + tau**2) ** 2)) / (lambda_s0 - temp_lambda_s) ) for i in range(order): if self.predict_x0: gradient_part += ( (1 + tau**2) * sigma_t * torch.exp(-(tau**2) * lambda_t) * gradient_coefficients[i] * model_output_list[-(i + 1)] ) else: gradient_part += -(1 + tau**2) * alpha_t * gradient_coefficients[i] * model_output_list[-(i + 1)] if self.predict_x0: noise_part = sigma_t * torch.sqrt(1 - torch.exp(-2 * tau**2 * h)) * noise else: noise_part = tau * sigma_t * torch.sqrt(torch.exp(2 * h) - 1) * noise if self.predict_x0: x_t = torch.exp(-(tau**2) * h) * (sigma_t / sigma_s0) * x + gradient_part + noise_part else: x_t = (alpha_t / alpha_s0) * x + gradient_part + noise_part x_t = x_t.to(x.dtype) return x_t def stochastic_adams_moulton_update( self, this_model_output: torch.Tensor, *args, last_sample: torch.Tensor, last_noise: torch.Tensor, this_sample: torch.Tensor, order: int, tau: torch.Tensor, **kwargs, ) -> torch.Tensor: """ One step for the SA-Corrector. Args: this_model_output (`torch.Tensor`): The model outputs at `x_t`. this_timestep (`int`): The current timestep `t`. last_sample (`torch.Tensor`): The generated sample before the last predictor `x_{t-1}`. this_sample (`torch.Tensor`): The generated sample after the last predictor `x_{t}`. order (`int`): The order of SA-Corrector at this step. Returns: `torch.Tensor`: The corrected sample tensor at the current timestep. 
""" this_timestep = args[0] if len(args) > 0 else kwargs.pop("this_timestep", None) if last_sample is None: if len(args) > 1: last_sample = args[1] else: raise ValueError(" missing`last_sample` as a required keyward argument") if last_noise is None: if len(args) > 2: last_noise = args[2] else: raise ValueError(" missing`last_noise` as a required keyward argument") if this_sample is None: if len(args) > 3: this_sample = args[3] else: raise ValueError(" missing`this_sample` as a required keyward argument") if order is None: if len(args) > 4: order = args[4] else: raise ValueError(" missing`order` as a required keyward argument") if tau is None: if len(args) > 5: tau = args[5] else: raise ValueError(" missing`tau` as a required keyward argument") if this_timestep is not None: deprecate( "this_timestep", "1.0.0", "Passing `this_timestep` is deprecated and has no effect as model output conversion is now handled via an internal counter `self.step_index`", ) model_output_list = self.model_outputs sigma_t, sigma_s0 = ( self.sigmas[self.step_index], self.sigmas[self.step_index - 1], ) alpha_t, sigma_t = self._sigma_to_alpha_sigma_t(sigma_t) alpha_s0, sigma_s0 = self._sigma_to_alpha_sigma_t(sigma_s0) lambda_t = torch.log(alpha_t) - torch.log(sigma_t) lambda_s0 = torch.log(alpha_s0) - torch.log(sigma_s0) gradient_part = torch.zeros_like(this_sample) h = lambda_t - lambda_s0 lambda_list = [] for i in range(order): si = self.step_index - i alpha_si, sigma_si = self._sigma_to_alpha_sigma_t(self.sigmas[si]) lambda_si = torch.log(alpha_si) - torch.log(sigma_si) lambda_list.append(lambda_si) model_prev_list = model_output_list + [this_model_output] gradient_coefficients = self.get_coefficients_fn(order, lambda_s0, lambda_t, lambda_list, tau) x = last_sample if self.predict_x0: if ( order == 2 ): ## if order = 2 we do a modification that does not influence the convergence order similar to UniPC. Note: This is used only for few steps sampling. # The added term is O(h^3). Empirically we find it will slightly improve the image quality. 
# ODE case # gradient_coefficients[0] += 1.0 * torch.exp(lambda_t) * (h / 2 - (h - 1 + torch.exp(-h)) / h) # gradient_coefficients[1] -= 1.0 * torch.exp(lambda_t) * (h / 2 - (h - 1 + torch.exp(-h)) / h) gradient_coefficients[0] += ( 1.0 * torch.exp((1 + tau**2) * lambda_t) * (h / 2 - (h * (1 + tau**2) - 1 + torch.exp((1 + tau**2) * (-h))) / ((1 + tau**2) ** 2 * h)) ) gradient_coefficients[1] -= ( 1.0 * torch.exp((1 + tau**2) * lambda_t) * (h / 2 - (h * (1 + tau**2) - 1 + torch.exp((1 + tau**2) * (-h))) / ((1 + tau**2) ** 2 * h)) ) for i in range(order): if self.predict_x0: gradient_part += ( (1 + tau**2) * sigma_t * torch.exp(-(tau**2) * lambda_t) * gradient_coefficients[i] * model_prev_list[-(i + 1)] ) else: gradient_part += -(1 + tau**2) * alpha_t * gradient_coefficients[i] * model_prev_list[-(i + 1)] if self.predict_x0: noise_part = sigma_t * torch.sqrt(1 - torch.exp(-2 * tau**2 * h)) * last_noise else: noise_part = tau * sigma_t * torch.sqrt(torch.exp(2 * h) - 1) * last_noise if self.predict_x0: x_t = torch.exp(-(tau**2) * h) * (sigma_t / sigma_s0) * x + gradient_part + noise_part else: x_t = (alpha_t / alpha_s0) * x + gradient_part + noise_part x_t = x_t.to(x.dtype) return x_t # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.index_for_timestep def index_for_timestep(self, timestep, schedule_timesteps=None): if schedule_timesteps is None: schedule_timesteps = self.timesteps index_candidates = (schedule_timesteps == timestep).nonzero() if len(index_candidates) == 0: step_index = len(self.timesteps) - 1 # The sigma index that is taken for the **very** first `step` # is always the second index (or the last index if there is only 1) # This way we can ensure we don't accidentally skip a sigma in # case we start in the middle of the denoising schedule (e.g. for image-to-image) elif len(index_candidates) > 1: step_index = index_candidates[1].item() else: step_index = index_candidates[0].item() return step_index # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler._init_step_index def _init_step_index(self, timestep): """ Initialize the step_index counter for the scheduler. """ if self.begin_index is None: if isinstance(timestep, torch.Tensor): timestep = timestep.to(self.timesteps.device) self._step_index = self.index_for_timestep(timestep) else: self._step_index = self._begin_index def step( self, model_output: torch.Tensor, timestep: int, sample: torch.Tensor, generator=None, return_dict: bool = True, ) -> Union[SchedulerOutput, Tuple]: """ Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with the SA-Solver. Args: model_output (`torch.Tensor`): The direct output from learned diffusion model. timestep (`int`): The current discrete timestep in the diffusion chain. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. generator (`torch.Generator`, *optional*): A random number generator. return_dict (`bool`): Whether or not to return a [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`. Returns: [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`: If return_dict is `True`, [`~schedulers.scheduling_utils.SchedulerOutput`] is returned, otherwise a tuple is returned where the first element is the sample tensor. 
""" if self.num_inference_steps is None: raise ValueError( "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" ) if self.step_index is None: self._init_step_index(timestep) use_corrector = self.step_index > 0 and self.last_sample is not None model_output_convert = self.convert_model_output(model_output, sample=sample) if use_corrector: current_tau = self.tau_func(self.timestep_list[-1]) sample = self.stochastic_adams_moulton_update( this_model_output=model_output_convert, last_sample=self.last_sample, last_noise=self.last_noise, this_sample=sample, order=self.this_corrector_order, tau=current_tau, ) for i in range(max(self.config.predictor_order, self.config.corrector_order - 1) - 1): self.model_outputs[i] = self.model_outputs[i + 1] self.timestep_list[i] = self.timestep_list[i + 1] self.model_outputs[-1] = model_output_convert self.timestep_list[-1] = timestep noise = randn_tensor( model_output.shape, generator=generator, device=model_output.device, dtype=model_output.dtype, ) if self.config.lower_order_final: this_predictor_order = min(self.config.predictor_order, len(self.timesteps) - self.step_index) this_corrector_order = min(self.config.corrector_order, len(self.timesteps) - self.step_index + 1) else: this_predictor_order = self.config.predictor_order this_corrector_order = self.config.corrector_order self.this_predictor_order = min(this_predictor_order, self.lower_order_nums + 1) # warmup for multistep self.this_corrector_order = min(this_corrector_order, self.lower_order_nums + 2) # warmup for multistep assert self.this_predictor_order > 0 assert self.this_corrector_order > 0 self.last_sample = sample self.last_noise = noise current_tau = self.tau_func(self.timestep_list[-1]) prev_sample = self.stochastic_adams_bashforth_update( model_output=model_output_convert, sample=sample, noise=noise, order=self.this_predictor_order, tau=current_tau, ) if self.lower_order_nums < max(self.config.predictor_order, self.config.corrector_order - 1): self.lower_order_nums += 1 # upon completion increase step index by one self._step_index += 1 if not return_dict: return (prev_sample,) return SchedulerOutput(prev_sample=prev_sample) def scale_model_input(self, sample: torch.Tensor, *args, **kwargs) -> torch.Tensor: """ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. Args: sample (`torch.Tensor`): The input sample. Returns: `torch.Tensor`: A scaled input sample. 
""" return sample # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise def add_noise( self, original_samples: torch.Tensor, noise: torch.Tensor, timesteps: torch.IntTensor, ) -> torch.Tensor: # Make sure alphas_cumprod and timestep have same device and dtype as original_samples # Move the self.alphas_cumprod to device to avoid redundant CPU to GPU data movement # for the subsequent add_noise calls self.alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device) alphas_cumprod = self.alphas_cumprod.to(dtype=original_samples.dtype) timesteps = timesteps.to(original_samples.device) sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 sqrt_alpha_prod = sqrt_alpha_prod.flatten() while len(sqrt_alpha_prod.shape) < len(original_samples.shape): sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise return noisy_samples def __len__(self): return self.config.num_train_timesteps
diffusers/src/diffusers/schedulers/scheduling_sasolver.py/0
{ "file_path": "diffusers/src/diffusers/schedulers/scheduling_sasolver.py", "repo_id": "diffusers", "token_count": 26440 }
# This file is autogenerated by the command `make fix-copies`, do not edit. from ..utils import DummyObject, requires_backends class FlaxControlNetModel(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxModelMixin(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxUNet2DConditionModel(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxAutoencoderKL(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxDiffusionPipeline(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxDDIMScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxDDPMScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxDPMSolverMultistepScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxEulerDiscreteScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxKarrasVeScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxLMSDiscreteScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def 
from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxPNDMScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxSchedulerMixin(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"]) class FlaxScoreSdeVeScheduler(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) @classmethod def from_config(cls, *args, **kwargs): requires_backends(cls, ["flax"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["flax"])
diffusers/src/diffusers/utils/dummy_flax_objects.py/0
{ "file_path": "diffusers/src/diffusers/utils/dummy_flax_objects.py", "repo_id": "diffusers", "token_count": 2343 }
import os import tempfile from typing import Any, Callable, List, Optional, Tuple, Union from urllib.parse import unquote, urlparse import PIL.Image import PIL.ImageOps import requests from .import_utils import BACKENDS_MAPPING, is_imageio_available def load_image( image: Union[str, PIL.Image.Image], convert_method: Optional[Callable[[PIL.Image.Image], PIL.Image.Image]] = None ) -> PIL.Image.Image: """ Loads `image` to a PIL Image. Args: image (`str` or `PIL.Image.Image`): The image to convert to the PIL Image format. convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], *optional*): A conversion method to apply to the image after loading it. When set to `None` the image will be converted "RGB". Returns: `PIL.Image.Image`: A PIL Image. """ if isinstance(image, str): if image.startswith("http://") or image.startswith("https://"): image = PIL.Image.open(requests.get(image, stream=True).raw) elif os.path.isfile(image): image = PIL.Image.open(image) else: raise ValueError( f"Incorrect path or URL. URLs must start with `http://` or `https://`, and {image} is not a valid path." ) elif isinstance(image, PIL.Image.Image): image = image else: raise ValueError( "Incorrect format used for the image. Should be a URL linking to an image, a local path, or a PIL image." ) image = PIL.ImageOps.exif_transpose(image) if convert_method is not None: image = convert_method(image) else: image = image.convert("RGB") return image def load_video( video: str, convert_method: Optional[Callable[[List[PIL.Image.Image]], List[PIL.Image.Image]]] = None, ) -> List[PIL.Image.Image]: """ Loads `video` to a list of PIL Image. Args: video (`str`): A URL or Path to a video to convert to a list of PIL Image format. convert_method (Callable[[List[PIL.Image.Image]], List[PIL.Image.Image]], *optional*): A conversion method to apply to the video after loading it. When set to `None` the images will be converted to "RGB". Returns: `List[PIL.Image.Image]`: The video as a list of PIL images. """ is_url = video.startswith("http://") or video.startswith("https://") is_file = os.path.isfile(video) was_tempfile_created = False if not (is_url or is_file): raise ValueError( f"Incorrect path or URL. URLs must start with `http://` or `https://`, and {video} is not a valid path." ) if is_url: response = requests.get(video, stream=True) if response.status_code != 200: raise ValueError(f"Failed to download video. Status code: {response.status_code}") parsed_url = urlparse(video) file_name = os.path.basename(unquote(parsed_url.path)) suffix = os.path.splitext(file_name)[1] or ".mp4" video_path = tempfile.NamedTemporaryFile(suffix=suffix, delete=False).name was_tempfile_created = True video_data = response.iter_content(chunk_size=8192) with open(video_path, "wb") as f: for chunk in video_data: f.write(chunk) video = video_path pil_images = [] if video.endswith(".gif"): gif = PIL.Image.open(video) try: while True: pil_images.append(gif.copy()) gif.seek(gif.tell() + 1) except EOFError: pass else: if is_imageio_available(): import imageio else: raise ImportError(BACKENDS_MAPPING["imageio"][1].format("load_video")) try: imageio.plugins.ffmpeg.get_exe() except AttributeError: raise AttributeError( "`Unable to find an ffmpeg installation on your machine. 
Please install via `pip install imageio-ffmpeg" ) with imageio.get_reader(video) as reader: # Read all frames for frame in reader: pil_images.append(PIL.Image.fromarray(frame)) if was_tempfile_created: os.remove(video_path) if convert_method is not None: pil_images = convert_method(pil_images) return pil_images # Taken from `transformers`. def get_module_from_name(module, tensor_name: str) -> Tuple[Any, str]: if "." in tensor_name: splits = tensor_name.split(".") for split in splits[:-1]: new_module = getattr(module, split) if new_module is None: raise ValueError(f"{module} has no attribute {split}.") module = new_module tensor_name = splits[-1] return module, tensor_name def get_submodule_by_name(root_module, module_path: str): current = root_module parts = module_path.split(".") for part in parts: if part.isdigit(): idx = int(part) current = current[idx] # e.g., for nn.ModuleList or nn.Sequential else: current = getattr(current, part) return current
diffusers/src/diffusers/utils/loading_utils.py/0
{ "file_path": "diffusers/src/diffusers/utils/loading_utils.py", "repo_id": "diffusers", "token_count": 2302 }
# Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import unittest import torch from diffusers.hooks import HookRegistry, ModelHook from diffusers.training_utils import free_memory from diffusers.utils.logging import get_logger from diffusers.utils.testing_utils import CaptureLogger, torch_device logger = get_logger(__name__) # pylint: disable=invalid-name class DummyBlock(torch.nn.Module): def __init__(self, in_features: int, hidden_features: int, out_features: int) -> None: super().__init__() self.proj_in = torch.nn.Linear(in_features, hidden_features) self.activation = torch.nn.ReLU() self.proj_out = torch.nn.Linear(hidden_features, out_features) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.proj_in(x) x = self.activation(x) x = self.proj_out(x) return x class DummyModel(torch.nn.Module): def __init__(self, in_features: int, hidden_features: int, out_features: int, num_layers: int) -> None: super().__init__() self.linear_1 = torch.nn.Linear(in_features, hidden_features) self.activation = torch.nn.ReLU() self.blocks = torch.nn.ModuleList( [DummyBlock(hidden_features, hidden_features, hidden_features) for _ in range(num_layers)] ) self.linear_2 = torch.nn.Linear(hidden_features, out_features) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.linear_1(x) x = self.activation(x) for block in self.blocks: x = block(x) x = self.linear_2(x) return x class AddHook(ModelHook): def __init__(self, value: int): super().__init__() self.value = value def pre_forward(self, module: torch.nn.Module, *args, **kwargs): logger.debug("AddHook pre_forward") args = ((x + self.value) if torch.is_tensor(x) else x for x in args) return args, kwargs def post_forward(self, module, output): logger.debug("AddHook post_forward") return output class MultiplyHook(ModelHook): def __init__(self, value: int): super().__init__() self.value = value def pre_forward(self, module, *args, **kwargs): logger.debug("MultiplyHook pre_forward") args = ((x * self.value) if torch.is_tensor(x) else x for x in args) return args, kwargs def post_forward(self, module, output): logger.debug("MultiplyHook post_forward") return output def __repr__(self): return f"MultiplyHook(value={self.value})" class StatefulAddHook(ModelHook): _is_stateful = True def __init__(self, value: int): super().__init__() self.value = value self.increment = 0 def pre_forward(self, module, *args, **kwargs): logger.debug("StatefulAddHook pre_forward") add_value = self.value + self.increment self.increment += 1 args = ((x + add_value) if torch.is_tensor(x) else x for x in args) return args, kwargs def reset_state(self, module): self.increment = 0 class SkipLayerHook(ModelHook): def __init__(self, skip_layer: bool): super().__init__() self.skip_layer = skip_layer def pre_forward(self, module, *args, **kwargs): logger.debug("SkipLayerHook pre_forward") return args, kwargs def new_forward(self, module, *args, **kwargs): logger.debug("SkipLayerHook new_forward") if self.skip_layer: return args[0] return 
self.fn_ref.original_forward(*args, **kwargs) def post_forward(self, module, output): logger.debug("SkipLayerHook post_forward") return output class HookTests(unittest.TestCase): in_features = 4 hidden_features = 8 out_features = 4 num_layers = 2 def setUp(self): params = self.get_module_parameters() self.model = DummyModel(**params) self.model.to(torch_device) def tearDown(self): super().tearDown() del self.model gc.collect() free_memory() def get_module_parameters(self): return { "in_features": self.in_features, "hidden_features": self.hidden_features, "out_features": self.out_features, "num_layers": self.num_layers, } def get_generator(self): return torch.manual_seed(0) def test_hook_registry(self): registry = HookRegistry.check_if_exists_or_initialize(self.model) registry.register_hook(AddHook(1), "add_hook") registry.register_hook(MultiplyHook(2), "multiply_hook") registry_repr = repr(registry) expected_repr = ( "HookRegistry(\n" " (0) add_hook - AddHook\n" " (1) multiply_hook - MultiplyHook(value=2)\n" ")" ) self.assertEqual(len(registry.hooks), 2) self.assertEqual(registry._hook_order, ["add_hook", "multiply_hook"]) self.assertEqual(registry_repr, expected_repr) registry.remove_hook("add_hook") self.assertEqual(len(registry.hooks), 1) self.assertEqual(registry._hook_order, ["multiply_hook"]) def test_stateful_hook(self): registry = HookRegistry.check_if_exists_or_initialize(self.model) registry.register_hook(StatefulAddHook(1), "stateful_add_hook") self.assertEqual(registry.hooks["stateful_add_hook"].increment, 0) input = torch.randn(1, 4, device=torch_device, generator=self.get_generator()) num_repeats = 3 for i in range(num_repeats): result = self.model(input) if i == 0: output1 = result self.assertEqual(registry.get_hook("stateful_add_hook").increment, num_repeats) registry.reset_stateful_hooks() output2 = self.model(input) self.assertEqual(registry.get_hook("stateful_add_hook").increment, 1) self.assertTrue(torch.allclose(output1, output2)) def test_inference(self): registry = HookRegistry.check_if_exists_or_initialize(self.model) registry.register_hook(AddHook(1), "add_hook") registry.register_hook(MultiplyHook(2), "multiply_hook") input = torch.randn(1, 4, device=torch_device, generator=self.get_generator()) output1 = self.model(input).mean().detach().cpu().item() registry.remove_hook("multiply_hook") new_input = input * 2 output2 = self.model(new_input).mean().detach().cpu().item() registry.remove_hook("add_hook") new_input = input * 2 + 1 output3 = self.model(new_input).mean().detach().cpu().item() self.assertAlmostEqual(output1, output2, places=5) self.assertAlmostEqual(output1, output3, places=5) def test_skip_layer_hook(self): registry = HookRegistry.check_if_exists_or_initialize(self.model) registry.register_hook(SkipLayerHook(skip_layer=True), "skip_layer_hook") input = torch.zeros(1, 4, device=torch_device) output = self.model(input).mean().detach().cpu().item() self.assertEqual(output, 0.0) registry.remove_hook("skip_layer_hook") registry.register_hook(SkipLayerHook(skip_layer=False), "skip_layer_hook") output = self.model(input).mean().detach().cpu().item() self.assertNotEqual(output, 0.0) def test_skip_layer_internal_block(self): registry = HookRegistry.check_if_exists_or_initialize(self.model.linear_1) input = torch.zeros(1, 4, device=torch_device) registry.register_hook(SkipLayerHook(skip_layer=True), "skip_layer_hook") with self.assertRaises(RuntimeError) as cm: self.model(input).mean().detach().cpu().item() self.assertIn("mat1 and mat2 shapes cannot be 
multiplied", str(cm.exception)) registry.remove_hook("skip_layer_hook") output = self.model(input).mean().detach().cpu().item() self.assertNotEqual(output, 0.0) registry = HookRegistry.check_if_exists_or_initialize(self.model.blocks[1]) registry.register_hook(SkipLayerHook(skip_layer=True), "skip_layer_hook") output = self.model(input).mean().detach().cpu().item() self.assertNotEqual(output, 0.0) def test_invocation_order_stateful_first(self): registry = HookRegistry.check_if_exists_or_initialize(self.model) registry.register_hook(StatefulAddHook(1), "add_hook") registry.register_hook(AddHook(2), "add_hook_2") registry.register_hook(MultiplyHook(3), "multiply_hook") input = torch.randn(1, 4, device=torch_device, generator=self.get_generator()) logger = get_logger(__name__) logger.setLevel("DEBUG") with CaptureLogger(logger) as cap_logger: self.model(input) output = cap_logger.out.replace(" ", "").replace("\n", "") expected_invocation_order_log = ( ( "MultiplyHook pre_forward\n" "AddHook pre_forward\n" "StatefulAddHook pre_forward\n" "AddHook post_forward\n" "MultiplyHook post_forward\n" ) .replace(" ", "") .replace("\n", "") ) self.assertEqual(output, expected_invocation_order_log) registry.remove_hook("add_hook") with CaptureLogger(logger) as cap_logger: self.model(input) output = cap_logger.out.replace(" ", "").replace("\n", "") expected_invocation_order_log = ( ( "MultiplyHook pre_forward\n" "AddHook pre_forward\n" "AddHook post_forward\n" "MultiplyHook post_forward\n" ) .replace(" ", "") .replace("\n", "") ) self.assertEqual(output, expected_invocation_order_log) def test_invocation_order_stateful_middle(self): registry = HookRegistry.check_if_exists_or_initialize(self.model) registry.register_hook(AddHook(2), "add_hook") registry.register_hook(StatefulAddHook(1), "add_hook_2") registry.register_hook(MultiplyHook(3), "multiply_hook") input = torch.randn(1, 4, device=torch_device, generator=self.get_generator()) logger = get_logger(__name__) logger.setLevel("DEBUG") with CaptureLogger(logger) as cap_logger: self.model(input) output = cap_logger.out.replace(" ", "").replace("\n", "") expected_invocation_order_log = ( ( "MultiplyHook pre_forward\n" "StatefulAddHook pre_forward\n" "AddHook pre_forward\n" "AddHook post_forward\n" "MultiplyHook post_forward\n" ) .replace(" ", "") .replace("\n", "") ) self.assertEqual(output, expected_invocation_order_log) registry.remove_hook("add_hook") with CaptureLogger(logger) as cap_logger: self.model(input) output = cap_logger.out.replace(" ", "").replace("\n", "") expected_invocation_order_log = ( ("MultiplyHook pre_forward\nStatefulAddHook pre_forward\nMultiplyHook post_forward\n") .replace(" ", "") .replace("\n", "") ) self.assertEqual(output, expected_invocation_order_log) registry.remove_hook("add_hook_2") with CaptureLogger(logger) as cap_logger: self.model(input) output = cap_logger.out.replace(" ", "").replace("\n", "") expected_invocation_order_log = ( ("MultiplyHook pre_forward\nMultiplyHook post_forward\n").replace(" ", "").replace("\n", "") ) self.assertEqual(output, expected_invocation_order_log) def test_invocation_order_stateful_last(self): registry = HookRegistry.check_if_exists_or_initialize(self.model) registry.register_hook(AddHook(1), "add_hook") registry.register_hook(MultiplyHook(2), "multiply_hook") registry.register_hook(StatefulAddHook(3), "add_hook_2") input = torch.randn(1, 4, device=torch_device, generator=self.get_generator()) logger = get_logger(__name__) logger.setLevel("DEBUG") with CaptureLogger(logger) as cap_logger: 
self.model(input) output = cap_logger.out.replace(" ", "").replace("\n", "") expected_invocation_order_log = ( ( "StatefulAddHook pre_forward\n" "MultiplyHook pre_forward\n" "AddHook pre_forward\n" "AddHook post_forward\n" "MultiplyHook post_forward\n" ) .replace(" ", "") .replace("\n", "") ) self.assertEqual(output, expected_invocation_order_log) registry.remove_hook("add_hook") with CaptureLogger(logger) as cap_logger: self.model(input) output = cap_logger.out.replace(" ", "").replace("\n", "") expected_invocation_order_log = ( ("StatefulAddHook pre_forward\nMultiplyHook pre_forward\nMultiplyHook post_forward\n") .replace(" ", "") .replace("\n", "") ) self.assertEqual(output, expected_invocation_order_log)
diffusers/tests/hooks/test_hooks.py/0
{ "file_path": "diffusers/tests/hooks/test_hooks.py", "repo_id": "diffusers", "token_count": 6203 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import torch from diffusers import AutoencoderKLHunyuanVideo from diffusers.utils.testing_utils import ( enable_full_determinism, floats_tensor, torch_device, ) from ..test_modeling_common import ModelTesterMixin, UNetTesterMixin enable_full_determinism() class AutoencoderKLHunyuanVideoTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase): model_class = AutoencoderKLHunyuanVideo main_input_name = "sample" base_precision = 1e-2 def get_autoencoder_kl_hunyuan_video_config(self): return { "in_channels": 3, "out_channels": 3, "latent_channels": 4, "down_block_types": ( "HunyuanVideoDownBlock3D", "HunyuanVideoDownBlock3D", "HunyuanVideoDownBlock3D", "HunyuanVideoDownBlock3D", ), "up_block_types": ( "HunyuanVideoUpBlock3D", "HunyuanVideoUpBlock3D", "HunyuanVideoUpBlock3D", "HunyuanVideoUpBlock3D", ), "block_out_channels": (8, 8, 8, 8), "layers_per_block": 1, "act_fn": "silu", "norm_num_groups": 4, "scaling_factor": 0.476986, "spatial_compression_ratio": 8, "temporal_compression_ratio": 4, "mid_block_add_attention": True, } @property def dummy_input(self): batch_size = 2 num_frames = 9 num_channels = 3 sizes = (16, 16) image = floats_tensor((batch_size, num_channels, num_frames) + sizes).to(torch_device) return {"sample": image} @property def input_shape(self): return (3, 9, 16, 16) @property def output_shape(self): return (3, 9, 16, 16) def prepare_init_args_and_inputs_for_common(self): init_dict = self.get_autoencoder_kl_hunyuan_video_config() inputs_dict = self.dummy_input return init_dict, inputs_dict def test_enable_disable_tiling(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() torch.manual_seed(0) model = self.model_class(**init_dict).to(torch_device) inputs_dict.update({"return_dict": False}) torch.manual_seed(0) output_without_tiling = model(**inputs_dict, generator=torch.manual_seed(0))[0] torch.manual_seed(0) model.enable_tiling() output_with_tiling = model(**inputs_dict, generator=torch.manual_seed(0))[0] self.assertLess( (output_without_tiling.detach().cpu().numpy() - output_with_tiling.detach().cpu().numpy()).max(), 0.5, "VAE tiling should not affect the inference results", ) torch.manual_seed(0) model.disable_tiling() output_without_tiling_2 = model(**inputs_dict, generator=torch.manual_seed(0))[0] self.assertEqual( output_without_tiling.detach().cpu().numpy().all(), output_without_tiling_2.detach().cpu().numpy().all(), "Without tiling outputs should match with the outputs when tiling is manually disabled.", ) def test_enable_disable_slicing(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() torch.manual_seed(0) model = self.model_class(**init_dict).to(torch_device) inputs_dict.update({"return_dict": False}) torch.manual_seed(0) output_without_slicing = model(**inputs_dict, generator=torch.manual_seed(0))[0] torch.manual_seed(0) model.enable_slicing() output_with_slicing = model(**inputs_dict, 
generator=torch.manual_seed(0))[0] self.assertLess( (output_without_slicing.detach().cpu().numpy() - output_with_slicing.detach().cpu().numpy()).max(), 0.5, "VAE slicing should not affect the inference results", ) torch.manual_seed(0) model.disable_slicing() output_without_slicing_2 = model(**inputs_dict, generator=torch.manual_seed(0))[0] self.assertEqual( output_without_slicing.detach().cpu().numpy().all(), output_without_slicing_2.detach().cpu().numpy().all(), "Without slicing outputs should match with the outputs when slicing is manually disabled.", ) def test_gradient_checkpointing_is_applied(self): expected_set = { "HunyuanVideoDecoder3D", "HunyuanVideoDownBlock3D", "HunyuanVideoEncoder3D", "HunyuanVideoMidBlock3D", "HunyuanVideoUpBlock3D", } super().test_gradient_checkpointing_is_applied(expected_set=expected_set) # We need to overwrite this test because the base test does not account length of down_block_types def test_forward_with_norm_groups(self): init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() init_dict["norm_num_groups"] = 16 init_dict["block_out_channels"] = (16, 16, 16, 16) model = self.model_class(**init_dict) model.to(torch_device) model.eval() with torch.no_grad(): output = model(**inputs_dict) if isinstance(output, dict): output = output.to_tuple()[0] self.assertIsNotNone(output) expected_shape = inputs_dict["sample"].shape self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") @unittest.skip("Unsupported test.") def test_outputs_equivalence(self): pass
diffusers/tests/models/autoencoders/test_models_autoencoder_hunyuan_video.py/0
{ "file_path": "diffusers/tests/models/autoencoders/test_models_autoencoder_hunyuan_video.py", "repo_id": "diffusers", "token_count": 2814 }
# Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import torch from diffusers import SanaTransformer2DModel from diffusers.utils.testing_utils import ( enable_full_determinism, torch_device, ) from ..test_modeling_common import ModelTesterMixin enable_full_determinism() class SanaTransformerTests(ModelTesterMixin, unittest.TestCase): model_class = SanaTransformer2DModel main_input_name = "hidden_states" uses_custom_attn_processor = True model_split_percents = [0.7, 0.7, 0.9] @property def dummy_input(self): batch_size = 2 num_channels = 4 height = 32 width = 32 embedding_dim = 8 sequence_length = 8 hidden_states = torch.randn((batch_size, num_channels, height, width)).to(torch_device) encoder_hidden_states = torch.randn((batch_size, sequence_length, embedding_dim)).to(torch_device) timestep = torch.randint(0, 1000, size=(batch_size,)).to(torch_device) return { "hidden_states": hidden_states, "encoder_hidden_states": encoder_hidden_states, "timestep": timestep, } @property def input_shape(self): return (4, 32, 32) @property def output_shape(self): return (4, 32, 32) def prepare_init_args_and_inputs_for_common(self): init_dict = { "patch_size": 1, "in_channels": 4, "out_channels": 4, "num_layers": 1, "attention_head_dim": 4, "num_attention_heads": 2, "num_cross_attention_heads": 2, "cross_attention_head_dim": 4, "cross_attention_dim": 8, "caption_channels": 8, "sample_size": 32, } inputs_dict = self.dummy_input return init_dict, inputs_dict def test_gradient_checkpointing_is_applied(self): expected_set = {"SanaTransformer2DModel"} super().test_gradient_checkpointing_is_applied(expected_set=expected_set)
diffusers/tests/models/transformers/test_models_transformer_sana.py/0
{ "file_path": "diffusers/tests/models/transformers/test_models_transformer_sana.py", "repo_id": "diffusers", "token_count": 1065 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import tempfile import unittest from pathlib import Path from diffusers import ( DDIMScheduler, DDPMScheduler, DPMSolverMultistepScheduler, EulerAncestralDiscreteScheduler, EulerDiscreteScheduler, PNDMScheduler, logging, ) from diffusers.configuration_utils import ConfigMixin, register_to_config from diffusers.utils.testing_utils import CaptureLogger class SampleObject(ConfigMixin): config_name = "config.json" @register_to_config def __init__( self, a=2, b=5, c=(2, 5), d="for diffusion", e=[1, 3], ): pass class SampleObject2(ConfigMixin): config_name = "config.json" @register_to_config def __init__( self, a=2, b=5, c=(2, 5), d="for diffusion", f=[1, 3], ): pass class SampleObject3(ConfigMixin): config_name = "config.json" @register_to_config def __init__( self, a=2, b=5, c=(2, 5), d="for diffusion", e=[1, 3], f=[1, 3], ): pass class SampleObject4(ConfigMixin): config_name = "config.json" @register_to_config def __init__( self, a=2, b=5, c=(2, 5), d="for diffusion", e=[1, 5], f=[5, 4], ): pass class SampleObjectPaths(ConfigMixin): config_name = "config.json" @register_to_config def __init__(self, test_file_1=Path("foo/bar"), test_file_2=Path("foo bar\\bar")): pass class ConfigTester(unittest.TestCase): def test_load_not_from_mixin(self): with self.assertRaises(ValueError): ConfigMixin.load_config("dummy_path") def test_register_to_config(self): obj = SampleObject() config = obj.config assert config["a"] == 2 assert config["b"] == 5 assert config["c"] == (2, 5) assert config["d"] == "for diffusion" assert config["e"] == [1, 3] # init ignore private arguments obj = SampleObject(_name_or_path="lalala") config = obj.config assert config["a"] == 2 assert config["b"] == 5 assert config["c"] == (2, 5) assert config["d"] == "for diffusion" assert config["e"] == [1, 3] # can override default obj = SampleObject(c=6) config = obj.config assert config["a"] == 2 assert config["b"] == 5 assert config["c"] == 6 assert config["d"] == "for diffusion" assert config["e"] == [1, 3] # can use positional arguments. 
obj = SampleObject(1, c=6) config = obj.config assert config["a"] == 1 assert config["b"] == 5 assert config["c"] == 6 assert config["d"] == "for diffusion" assert config["e"] == [1, 3] def test_save_load(self): obj = SampleObject() config = obj.config assert config["a"] == 2 assert config["b"] == 5 assert config["c"] == (2, 5) assert config["d"] == "for diffusion" assert config["e"] == [1, 3] with tempfile.TemporaryDirectory() as tmpdirname: obj.save_config(tmpdirname) new_obj = SampleObject.from_config(SampleObject.load_config(tmpdirname)) new_config = new_obj.config # unfreeze configs config = dict(config) new_config = dict(new_config) assert config.pop("c") == (2, 5) # instantiated as tuple assert new_config.pop("c") == [2, 5] # saved & loaded as list because of json config.pop("_use_default_values") assert config == new_config def test_load_ddim_from_pndm(self): logger = logging.get_logger("diffusers.configuration_utils") # 30 for warning logger.setLevel(30) with CaptureLogger(logger) as cap_logger: ddim = DDIMScheduler.from_pretrained( "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" ) assert ddim.__class__ == DDIMScheduler # no warning should be thrown assert cap_logger.out == "" def test_load_euler_from_pndm(self): logger = logging.get_logger("diffusers.configuration_utils") # 30 for warning logger.setLevel(30) with CaptureLogger(logger) as cap_logger: euler = EulerDiscreteScheduler.from_pretrained( "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" ) assert euler.__class__ == EulerDiscreteScheduler # no warning should be thrown assert cap_logger.out == "" def test_load_euler_ancestral_from_pndm(self): logger = logging.get_logger("diffusers.configuration_utils") # 30 for warning logger.setLevel(30) with CaptureLogger(logger) as cap_logger: euler = EulerAncestralDiscreteScheduler.from_pretrained( "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" ) assert euler.__class__ == EulerAncestralDiscreteScheduler # no warning should be thrown assert cap_logger.out == "" def test_load_pndm(self): logger = logging.get_logger("diffusers.configuration_utils") # 30 for warning logger.setLevel(30) with CaptureLogger(logger) as cap_logger: pndm = PNDMScheduler.from_pretrained( "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" ) assert pndm.__class__ == PNDMScheduler # no warning should be thrown assert cap_logger.out == "" def test_overwrite_config_on_load(self): logger = logging.get_logger("diffusers.configuration_utils") # 30 for warning logger.setLevel(30) with CaptureLogger(logger) as cap_logger: ddpm = DDPMScheduler.from_pretrained( "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler", prediction_type="sample", beta_end=8, ) with CaptureLogger(logger) as cap_logger_2: ddpm_2 = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256", beta_start=88) assert ddpm.__class__ == DDPMScheduler assert ddpm.config.prediction_type == "sample" assert ddpm.config.beta_end == 8 assert ddpm_2.config.beta_start == 88 # no warning should be thrown assert cap_logger.out == "" assert cap_logger_2.out == "" def test_load_dpmsolver(self): logger = logging.get_logger("diffusers.configuration_utils") # 30 for warning logger.setLevel(30) with CaptureLogger(logger) as cap_logger: dpm = DPMSolverMultistepScheduler.from_pretrained( "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="scheduler" ) assert dpm.__class__ == DPMSolverMultistepScheduler # no warning should be thrown assert cap_logger.out 
== "" def test_use_default_values(self): # let's first save a config that should be in the form # a=2, # b=5, # c=(2, 5), # d="for diffusion", # e=[1, 3], config = SampleObject() config_dict = {k: v for k, v in config.config.items() if not k.startswith("_")} # make sure that default config has all keys in `_use_default_values` assert set(config_dict.keys()) == set(config.config._use_default_values) with tempfile.TemporaryDirectory() as tmpdirname: config.save_config(tmpdirname) # now loading it with SampleObject2 should put f into `_use_default_values` config = SampleObject2.from_config(SampleObject2.load_config(tmpdirname)) assert "f" in config.config._use_default_values assert config.config.f == [1, 3] # now loading the config, should **NOT** use [1, 3] for `f`, but the default [1, 4] value # **BECAUSE** it is part of `config.config._use_default_values` new_config = SampleObject4.from_config(config.config) assert new_config.config.f == [5, 4] config.config._use_default_values.pop() new_config_2 = SampleObject4.from_config(config.config) assert new_config_2.config.f == [1, 3] # Nevertheless "e" should still be correctly loaded to [1, 3] from SampleObject2 instead of defaulting to [1, 5] assert new_config_2.config.e == [1, 3] def test_check_path_types(self): # Verify that we get a string returned from a WindowsPath or PosixPath (depending on system) config = SampleObjectPaths() json_string = config.to_json_string() result = json.loads(json_string) assert result["test_file_1"] == config.config.test_file_1.as_posix() assert result["test_file_2"] == config.config.test_file_2.as_posix()
diffusers/tests/others/test_config.py/0
{ "file_path": "diffusers/tests/others/test_config.py", "repo_id": "diffusers", "token_count": 4259 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import gc import unittest import numpy as np import torch from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer from diffusers import ( AutoencoderKL, ControlNetModel, EulerDiscreteScheduler, HeunDiscreteScheduler, LCMScheduler, StableDiffusionXLControlNetPipeline, StableDiffusionXLImg2ImgPipeline, UNet2DConditionModel, ) from diffusers.models.unets.unet_2d_blocks import UNetMidBlock2D from diffusers.pipelines.controlnet.pipeline_controlnet import MultiControlNetModel from diffusers.utils.import_utils import is_xformers_available from diffusers.utils.testing_utils import ( backend_empty_cache, enable_full_determinism, load_image, require_torch_accelerator, slow, torch_device, ) from diffusers.utils.torch_utils import randn_tensor from ..pipeline_params import ( IMAGE_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS, ) from ..test_pipelines_common import ( IPAdapterTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin, SDXLOptionalComponentsTesterMixin, ) enable_full_determinism() class StableDiffusionXLControlNetPipelineFastTests( IPAdapterTesterMixin, PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, SDXLOptionalComponentsTesterMixin, unittest.TestCase, ): pipeline_class = StableDiffusionXLControlNetPipeline params = TEXT_TO_IMAGE_PARAMS batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS test_layerwise_casting = True def get_dummy_components(self, time_cond_proj_dim=None): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, time_cond_proj_dim=time_cond_proj_dim, ) torch.manual_seed(0) controlnet = ControlNetModel( block_out_channels=(32, 64), layers_per_block=2, in_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), conditioning_embedding_out_channels=(16, 32), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) torch.manual_seed(0) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, 
out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=32, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") components = { "unet": unet, "controlnet": controlnet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "feature_extractor": None, "image_encoder": None, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) controlnet_embedder_scale_factor = 2 image = randn_tensor( (1, 3, 32 * controlnet_embedder_scale_factor, 32 * controlnet_embedder_scale_factor), generator=generator, device=torch.device(device), ) inputs = { "prompt": "A painting of a squirrel eating a burger", "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, "output_type": "np", "image": image, } return inputs def test_attention_slicing_forward_pass(self): return self._test_attention_slicing_forward_pass(expected_max_diff=2e-3) def test_ip_adapter(self, from_ssd1b=False, expected_pipe_slice=None): if not from_ssd1b: expected_pipe_slice = None if torch_device == "cpu": expected_pipe_slice = np.array( [0.7335, 0.5866, 0.5623, 0.6242, 0.5751, 0.5999, 0.4091, 0.4590, 0.5054] ) return super().test_ip_adapter(expected_pipe_slice=expected_pipe_slice) @unittest.skipIf( torch_device != "cuda" or not is_xformers_available(), reason="XFormers attention is only available with CUDA and `xformers` installed", ) def test_xformers_attention_forwardGenerator_pass(self): self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=2e-3) def test_inference_batch_single_identical(self): self._test_inference_batch_single_identical(expected_max_diff=2e-3) def test_save_load_optional_components(self): self._test_save_load_optional_components() @require_torch_accelerator def test_stable_diffusion_xl_offloads(self): pipes = [] components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components).to(torch_device) pipes.append(sd_pipe) components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe.enable_model_cpu_offload() pipes.append(sd_pipe) components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe.enable_sequential_cpu_offload() pipes.append(sd_pipe) image_slices = [] for pipe in pipes: pipe.unet.set_default_attn_processor() inputs = self.get_dummy_inputs(torch_device) image = pipe(**inputs).images image_slices.append(image[0, -3:, -3:, -1].flatten()) assert np.abs(image_slices[0] - image_slices[1]).max() < 1e-3 assert np.abs(image_slices[0] - image_slices[2]).max() < 1e-3 def test_stable_diffusion_xl_multi_prompts(self): components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components).to(torch_device) # forward with single prompt inputs = 
self.get_dummy_inputs(torch_device) output = sd_pipe(**inputs) image_slice_1 = output.images[0, -3:, -3:, -1] # forward with same prompt duplicated inputs = self.get_dummy_inputs(torch_device) inputs["prompt_2"] = inputs["prompt"] output = sd_pipe(**inputs) image_slice_2 = output.images[0, -3:, -3:, -1] # ensure the results are equal assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 # forward with different prompt inputs = self.get_dummy_inputs(torch_device) inputs["prompt_2"] = "different prompt" output = sd_pipe(**inputs) image_slice_3 = output.images[0, -3:, -3:, -1] # ensure the results are not equal assert np.abs(image_slice_1.flatten() - image_slice_3.flatten()).max() > 1e-4 # manually set a negative_prompt inputs = self.get_dummy_inputs(torch_device) inputs["negative_prompt"] = "negative prompt" output = sd_pipe(**inputs) image_slice_1 = output.images[0, -3:, -3:, -1] # forward with same negative_prompt duplicated inputs = self.get_dummy_inputs(torch_device) inputs["negative_prompt"] = "negative prompt" inputs["negative_prompt_2"] = inputs["negative_prompt"] output = sd_pipe(**inputs) image_slice_2 = output.images[0, -3:, -3:, -1] # ensure the results are equal assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 # forward with different negative_prompt inputs = self.get_dummy_inputs(torch_device) inputs["negative_prompt"] = "negative prompt" inputs["negative_prompt_2"] = "different negative prompt" output = sd_pipe(**inputs) image_slice_3 = output.images[0, -3:, -3:, -1] # ensure the results are not equal assert np.abs(image_slice_1.flatten() - image_slice_3.flatten()).max() > 1e-4 # Copied from test_stable_diffusion_xl.py def test_stable_diffusion_xl_prompt_embeds(self): components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe = sd_pipe.to(torch_device) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) # forward without prompt embeds inputs = self.get_dummy_inputs(torch_device) inputs["prompt"] = 2 * [inputs["prompt"]] inputs["num_images_per_prompt"] = 2 output = sd_pipe(**inputs) image_slice_1 = output.images[0, -3:, -3:, -1] # forward with prompt embeds inputs = self.get_dummy_inputs(torch_device) prompt = 2 * [inputs.pop("prompt")] ( prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, ) = sd_pipe.encode_prompt(prompt) output = sd_pipe( **inputs, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, pooled_prompt_embeds=pooled_prompt_embeds, negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, ) image_slice_2 = output.images[0, -3:, -3:, -1] # make sure that it's equal assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4 def test_controlnet_sdxl_guess(self): device = "cpu" components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe = sd_pipe.to(device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) inputs["guess_mode"] = True output = sd_pipe(**inputs) image_slice = output.images[0, -3:, -3:, -1] expected_slice = np.array([0.7335, 0.5866, 0.5623, 0.6242, 0.5751, 0.5999, 0.4091, 0.4590, 0.5054]) # make sure that it's equal assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-4 def test_controlnet_sdxl_lcm(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components(time_cond_proj_dim=256) sd_pipe = 
StableDiffusionXLControlNetPipeline(**components) sd_pipe.scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) output = sd_pipe(**inputs) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.7820, 0.6195, 0.6193, 0.7045, 0.6706, 0.5837, 0.4147, 0.5232, 0.4868]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 # Copied from test_stable_diffusion_xl.py:test_stable_diffusion_two_xl_mixture_of_denoiser_fast # with `StableDiffusionXLControlNetPipeline` instead of `StableDiffusionXLPipeline` def test_controlnet_sdxl_two_mixture_of_denoiser_fast(self): components = self.get_dummy_components() pipe_1 = StableDiffusionXLControlNetPipeline(**components).to(torch_device) pipe_1.unet.set_default_attn_processor() components_without_controlnet = {k: v for k, v in components.items() if k != "controlnet"} pipe_2 = StableDiffusionXLImg2ImgPipeline(**components_without_controlnet).to(torch_device) pipe_2.unet.set_default_attn_processor() def assert_run_mixture( num_steps, split, scheduler_cls_orig, expected_tss, num_train_timesteps=pipe_1.scheduler.config.num_train_timesteps, ): inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = num_steps class scheduler_cls(scheduler_cls_orig): pass pipe_1.scheduler = scheduler_cls.from_config(pipe_1.scheduler.config) pipe_2.scheduler = scheduler_cls.from_config(pipe_2.scheduler.config) # Let's retrieve the number of timesteps we want to use pipe_1.scheduler.set_timesteps(num_steps) expected_steps = pipe_1.scheduler.timesteps.tolist() if pipe_1.scheduler.order == 2: expected_steps_1 = list(filter(lambda ts: ts >= split, expected_tss)) expected_steps_2 = expected_steps_1[-1:] + list(filter(lambda ts: ts < split, expected_tss)) expected_steps = expected_steps_1 + expected_steps_2 else: expected_steps_1 = list(filter(lambda ts: ts >= split, expected_tss)) expected_steps_2 = list(filter(lambda ts: ts < split, expected_tss)) # now we monkey patch step `done_steps` # list into the step function for testing done_steps = [] old_step = copy.copy(scheduler_cls.step) def new_step(self, *args, **kwargs): done_steps.append(args[1].cpu().item()) # args[1] is always the passed `t` return old_step(self, *args, **kwargs) scheduler_cls.step = new_step inputs_1 = { **inputs, **{ "denoising_end": 1.0 - (split / num_train_timesteps), "output_type": "latent", }, } latents = pipe_1(**inputs_1).images[0] assert expected_steps_1 == done_steps, f"Failure with {scheduler_cls.__name__} and {num_steps} and {split}" inputs_2 = { **inputs, **{ "denoising_start": 1.0 - (split / num_train_timesteps), "image": latents, }, } pipe_2(**inputs_2).images[0] assert expected_steps_2 == done_steps[len(expected_steps_1) :] assert expected_steps == done_steps, f"Failure with {scheduler_cls.__name__} and {num_steps} and {split}" steps = 10 for split in [300, 700]: for scheduler_cls_timesteps in [ (EulerDiscreteScheduler, [901, 801, 701, 601, 501, 401, 301, 201, 101, 1]), ( HeunDiscreteScheduler, [ 901.0, 801.0, 801.0, 701.0, 701.0, 601.0, 601.0, 501.0, 501.0, 401.0, 401.0, 301.0, 301.0, 201.0, 201.0, 101.0, 101.0, 1.0, 1.0, ], ), ]: assert_run_mixture(steps, split, scheduler_cls_timesteps[0], scheduler_cls_timesteps[1]) class StableDiffusionXLMultiControlNetPipelineFastTests( PipelineTesterMixin, PipelineKarrasSchedulerTesterMixin, SDXLOptionalComponentsTesterMixin, 
unittest.TestCase ): pipeline_class = StableDiffusionXLControlNetPipeline params = TEXT_TO_IMAGE_PARAMS batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = frozenset([]) # TO_DO: add image_params once refactored VaeImageProcessor.preprocess supports_dduf = False def get_dummy_components(self): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) torch.manual_seed(0) def init_weights(m): if isinstance(m, torch.nn.Conv2d): torch.nn.init.normal_(m.weight) m.bias.data.fill_(1.0) controlnet1 = ControlNetModel( block_out_channels=(32, 64), layers_per_block=2, in_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), conditioning_embedding_out_channels=(16, 32), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) controlnet1.controlnet_down_blocks.apply(init_weights) torch.manual_seed(0) controlnet2 = ControlNetModel( block_out_channels=(32, 64), layers_per_block=2, in_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), conditioning_embedding_out_channels=(16, 32), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) controlnet2.controlnet_down_blocks.apply(init_weights) torch.manual_seed(0) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=32, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") controlnet = MultiControlNetModel([controlnet1, controlnet2]) components = { "unet": unet, "controlnet": controlnet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "feature_extractor": None, "image_encoder": None, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) controlnet_embedder_scale_factor = 2 images 
= [ randn_tensor( (1, 3, 32 * controlnet_embedder_scale_factor, 32 * controlnet_embedder_scale_factor), generator=generator, device=torch.device(device), ), randn_tensor( (1, 3, 32 * controlnet_embedder_scale_factor, 32 * controlnet_embedder_scale_factor), generator=generator, device=torch.device(device), ), ] inputs = { "prompt": "A painting of a squirrel eating a burger", "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, "output_type": "np", "image": images, } return inputs def test_control_guidance_switch(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) scale = 10.0 steps = 4 inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_1 = pipe(**inputs)[0] inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_2 = pipe(**inputs, control_guidance_start=0.1, control_guidance_end=0.2)[0] inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_3 = pipe(**inputs, control_guidance_start=[0.1, 0.3], control_guidance_end=[0.2, 0.7])[0] inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_4 = pipe(**inputs, control_guidance_start=0.4, control_guidance_end=[0.5, 0.8])[0] # make sure that all outputs are different assert np.sum(np.abs(output_1 - output_2)) > 1e-3 assert np.sum(np.abs(output_1 - output_3)) > 1e-3 assert np.sum(np.abs(output_1 - output_4)) > 1e-3 def test_attention_slicing_forward_pass(self): return self._test_attention_slicing_forward_pass(expected_max_diff=2e-3) @unittest.skipIf( torch_device != "cuda" or not is_xformers_available(), reason="XFormers attention is only available with CUDA and `xformers` installed", ) def test_xformers_attention_forwardGenerator_pass(self): self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=2e-3) def test_inference_batch_single_identical(self): self._test_inference_batch_single_identical(expected_max_diff=2e-3) def test_save_load_optional_components(self): return self._test_save_load_optional_components() class StableDiffusionXLMultiControlNetOneModelPipelineFastTests( PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, SDXLOptionalComponentsTesterMixin, unittest.TestCase ): pipeline_class = StableDiffusionXLControlNetPipeline params = TEXT_TO_IMAGE_PARAMS batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = frozenset([]) # TO_DO: add image_params once refactored VaeImageProcessor.preprocess supports_dduf = False def get_dummy_components(self): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) torch.manual_seed(0) def init_weights(m): if isinstance(m, torch.nn.Conv2d): torch.nn.init.normal_(m.weight) m.bias.data.fill_(1.0) controlnet = ControlNetModel( block_out_channels=(32, 64), layers_per_block=2, in_channels=4, down_block_types=("DownBlock2D", 
"CrossAttnDownBlock2D"), conditioning_embedding_out_channels=(16, 32), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) controlnet.controlnet_down_blocks.apply(init_weights) torch.manual_seed(0) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=32, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") controlnet = MultiControlNetModel([controlnet]) components = { "unet": unet, "controlnet": controlnet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "feature_extractor": None, "image_encoder": None, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) controlnet_embedder_scale_factor = 2 images = [ randn_tensor( (1, 3, 32 * controlnet_embedder_scale_factor, 32 * controlnet_embedder_scale_factor), generator=generator, device=torch.device(device), ), ] inputs = { "prompt": "A painting of a squirrel eating a burger", "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, "output_type": "np", "image": images, } return inputs def test_control_guidance_switch(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) scale = 10.0 steps = 4 inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_1 = pipe(**inputs)[0] inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_2 = pipe(**inputs, control_guidance_start=0.1, control_guidance_end=0.2)[0] inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_3 = pipe( **inputs, control_guidance_start=[0.1], control_guidance_end=[0.2], )[0] inputs = self.get_dummy_inputs(torch_device) inputs["num_inference_steps"] = steps inputs["controlnet_conditioning_scale"] = scale output_4 = pipe(**inputs, control_guidance_start=0.4, control_guidance_end=[0.5])[0] # make sure that all outputs are different assert np.sum(np.abs(output_1 - output_2)) > 1e-3 assert np.sum(np.abs(output_1 - output_3)) > 1e-3 assert np.sum(np.abs(output_1 - output_4)) > 1e-3 def test_attention_slicing_forward_pass(self): return 
self._test_attention_slicing_forward_pass(expected_max_diff=2e-3) @unittest.skipIf( torch_device != "cuda" or not is_xformers_available(), reason="XFormers attention is only available with CUDA and `xformers` installed", ) def test_xformers_attention_forwardGenerator_pass(self): self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=2e-3) def test_inference_batch_single_identical(self): self._test_inference_batch_single_identical(expected_max_diff=2e-3) def test_save_load_optional_components(self): self._test_save_load_optional_components() def test_negative_conditions(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) inputs = self.get_dummy_inputs(torch_device) image = pipe(**inputs).images image_slice_without_neg_cond = image[0, -3:, -3:, -1] image = pipe( **inputs, negative_original_size=(512, 512), negative_crops_coords_top_left=(0, 0), negative_target_size=(1024, 1024), ).images image_slice_with_neg_cond = image[0, -3:, -3:, -1] self.assertTrue(np.abs(image_slice_without_neg_cond - image_slice_with_neg_cond).max() > 1e-2) @slow @require_torch_accelerator class ControlNetSDXLPipelineSlowTests(unittest.TestCase): def setUp(self): super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): super().tearDown() gc.collect() backend_empty_cache(torch_device) def test_canny(self): controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0") pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet ) pipe.enable_sequential_cpu_offload(device=torch_device) pipe.set_progress_bar_config(disable=None) generator = torch.Generator(device="cpu").manual_seed(0) prompt = "bird" image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png" ) images = pipe(prompt, image=image, generator=generator, output_type="np", num_inference_steps=3).images assert images[0].shape == (768, 512, 3) original_image = images[0, -3:, -3:, -1].flatten() expected_image = np.array([0.4185, 0.4127, 0.4089, 0.4046, 0.4115, 0.4096, 0.4081, 0.4112, 0.3913]) assert np.allclose(original_image, expected_image, atol=1e-04) def test_depth(self): controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0") pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet ) pipe.enable_sequential_cpu_offload(device=torch_device) pipe.set_progress_bar_config(disable=None) generator = torch.Generator(device="cpu").manual_seed(0) prompt = "Stormtrooper's lecture" image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/stormtrooper_depth.png" ) images = pipe(prompt, image=image, generator=generator, output_type="np", num_inference_steps=3).images assert images[0].shape == (512, 512, 3) original_image = images[0, -3:, -3:, -1].flatten() expected_image = np.array([0.4399, 0.5112, 0.5478, 0.4314, 0.472, 0.4823, 0.4647, 0.4957, 0.4853]) assert np.allclose(original_image, expected_image, atol=1e-04) class StableDiffusionSSD1BControlNetPipelineFastTests(StableDiffusionXLControlNetPipelineFastTests): def test_controlnet_sdxl_guess(self): device = "cpu" components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe = sd_pipe.to(device) sd_pipe.set_progress_bar_config(disable=None) inputs = 
self.get_dummy_inputs(device) inputs["guess_mode"] = True output = sd_pipe(**inputs) image_slice = output.images[0, -3:, -3:, -1] expected_slice = np.array([0.7212, 0.5890, 0.5491, 0.6425, 0.5970, 0.6091, 0.4418, 0.4556, 0.5032]) # make sure that it's equal assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-4 def test_ip_adapter(self): expected_pipe_slice = None if torch_device == "cpu": expected_pipe_slice = np.array([0.7212, 0.5890, 0.5491, 0.6425, 0.5970, 0.6091, 0.4418, 0.4556, 0.5032]) return super().test_ip_adapter(from_ssd1b=True, expected_pipe_slice=expected_pipe_slice) def test_controlnet_sdxl_lcm(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components(time_cond_proj_dim=256) sd_pipe = StableDiffusionXLControlNetPipeline(**components) sd_pipe.scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) output = sd_pipe(**inputs) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.6787, 0.5117, 0.5558, 0.6963, 0.6571, 0.5928, 0.4121, 0.5468, 0.5057]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 def test_conditioning_channels(self): unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), mid_block_type="UNetMidBlock2D", # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, time_cond_proj_dim=None, ) controlnet = ControlNetModel.from_unet(unet, conditioning_channels=4) assert type(controlnet.mid_block) is UNetMidBlock2D assert controlnet.conditioning_channels == 4 def get_dummy_components(self, time_cond_proj_dim=None): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), mid_block_type="UNetMidBlock2D", # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, time_cond_proj_dim=time_cond_proj_dim, ) torch.manual_seed(0) controlnet = ControlNetModel( block_out_channels=(32, 64), layers_per_block=2, in_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), conditioning_embedding_out_channels=(16, 32), mid_block_type="UNetMidBlock2D", # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) torch.manual_seed(0) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", 
"DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=32, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") components = { "unet": unet, "controlnet": controlnet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "feature_extractor": None, "image_encoder": None, } return components
diffusers/tests/pipelines/controlnet/test_controlnet_sdxl.py/0
{ "file_path": "diffusers/tests/pipelines/controlnet/test_controlnet_sdxl.py", "repo_id": "diffusers", "token_count": 20666 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import unittest import numpy as np import torch from diffusers import DanceDiffusionPipeline, IPNDMScheduler, UNet1DModel from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, skip_mps, torch_device from ..pipeline_params import UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS, UNCONDITIONAL_AUDIO_GENERATION_PARAMS from ..test_pipelines_common import PipelineTesterMixin enable_full_determinism() class DanceDiffusionPipelineFastTests(PipelineTesterMixin, unittest.TestCase): pipeline_class = DanceDiffusionPipeline params = UNCONDITIONAL_AUDIO_GENERATION_PARAMS required_optional_params = PipelineTesterMixin.required_optional_params - { "callback", "latents", "callback_steps", "output_type", "num_images_per_prompt", } batch_params = UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS test_attention_slicing = False def get_dummy_components(self): torch.manual_seed(0) unet = UNet1DModel( block_out_channels=(32, 32, 64), extra_in_channels=16, sample_size=512, sample_rate=16_000, in_channels=2, out_channels=2, flip_sin_to_cos=True, use_timestep_embedding=False, time_embedding_type="fourier", mid_block_type="UNetMidBlock1D", down_block_types=("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D"), up_block_types=("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip"), ) scheduler = IPNDMScheduler() components = { "unet": unet, "scheduler": scheduler, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "batch_size": 1, "generator": generator, "num_inference_steps": 4, } return inputs def test_dance_diffusion(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = DanceDiffusionPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) output = pipe(**inputs) audio = output.audios audio_slice = audio[0, -3:, -3:] assert audio.shape == (1, 2, components["unet"].sample_size) expected_slice = np.array([-0.7265, 1.0000, -0.8388, 0.1175, 0.9498, -1.0000]) assert np.abs(audio_slice.flatten() - expected_slice).max() < 1e-2 @skip_mps def test_save_load_local(self): return super().test_save_load_local() @skip_mps def test_dict_tuple_outputs_equivalent(self): return super().test_dict_tuple_outputs_equivalent(expected_max_difference=3e-3) @skip_mps def test_save_load_optional_components(self): return super().test_save_load_optional_components() @skip_mps def test_attention_slicing_forward_pass(self): return super().test_attention_slicing_forward_pass() def test_inference_batch_single_identical(self): super().test_inference_batch_single_identical(expected_max_diff=3e-3) @nightly @require_torch_gpu class PipelineIntegrationTests(unittest.TestCase): def setUp(self): # clean up the VRAM before each test 
super().setUp() gc.collect() torch.cuda.empty_cache() def tearDown(self): # clean up the VRAM after each test super().tearDown() gc.collect() torch.cuda.empty_cache() def test_dance_diffusion(self): device = torch_device pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k") pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) generator = torch.manual_seed(0) output = pipe(generator=generator, num_inference_steps=100, audio_length_in_s=4.096) audio = output.audios audio_slice = audio[0, -3:, -3:] assert audio.shape == (1, 2, pipe.unet.config.sample_size) expected_slice = np.array([-0.0192, -0.0231, -0.0318, -0.0059, 0.0002, -0.0020]) assert np.abs(audio_slice.flatten() - expected_slice).max() < 1e-2 def test_dance_diffusion_fp16(self): device = torch_device pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k", torch_dtype=torch.float16) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) generator = torch.manual_seed(0) output = pipe(generator=generator, num_inference_steps=100, audio_length_in_s=4.096) audio = output.audios audio_slice = audio[0, -3:, -3:] assert audio.shape == (1, 2, pipe.unet.config.sample_size) expected_slice = np.array([-0.0367, -0.0488, -0.0771, -0.0525, -0.0444, -0.0341]) assert np.abs(audio_slice.flatten() - expected_slice).max() < 1e-2
diffusers/tests/pipelines/dance_diffusion/test_dance_diffusion.py/0
{ "file_path": "diffusers/tests/pipelines/dance_diffusion/test_dance_diffusion.py", "repo_id": "diffusers", "token_count": 2496 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import tempfile import unittest import numpy as np import torch from transformers import AutoTokenizer, BertModel, T5EncoderModel from diffusers import ( AutoencoderKL, DDPMScheduler, HunyuanDiT2DModel, HunyuanDiTPAGPipeline, HunyuanDiTPipeline, ) from diffusers.utils.testing_utils import ( enable_full_determinism, torch_device, ) from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS from ..test_pipelines_common import PipelineTesterMixin, to_np enable_full_determinism() class HunyuanDiTPAGPipelineFastTests(PipelineTesterMixin, unittest.TestCase): pipeline_class = HunyuanDiTPAGPipeline params = TEXT_TO_IMAGE_PARAMS - {"cross_attention_kwargs"} batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = TEXT_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS required_optional_params = PipelineTesterMixin.required_optional_params def get_dummy_components(self): torch.manual_seed(0) transformer = HunyuanDiT2DModel( sample_size=16, num_layers=2, patch_size=2, attention_head_dim=8, num_attention_heads=3, in_channels=4, cross_attention_dim=32, cross_attention_dim_t5=32, pooled_projection_dim=16, hidden_size=24, activation_fn="gelu-approximate", ) torch.manual_seed(0) vae = AutoencoderKL() scheduler = DDPMScheduler() text_encoder = BertModel.from_pretrained("hf-internal-testing/tiny-random-BertModel") tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-BertModel") text_encoder_2 = T5EncoderModel.from_pretrained("hf-internal-testing/tiny-random-t5") tokenizer_2 = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-t5") components = { "transformer": transformer.eval(), "vae": vae.eval(), "scheduler": scheduler, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, "safety_checker": None, "feature_extractor": None, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "A painting of a squirrel eating a burger", "generator": generator, "num_inference_steps": 2, "guidance_scale": 5.0, "output_type": "np", "use_resolution_binning": False, "pag_scale": 0.0, } return inputs def test_inference(self): device = "cpu" components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] self.assertEqual(image.shape, (1, 16, 16, 3)) expected_slice = np.array( [0.56939435, 0.34541583, 0.35915792, 0.46489206, 0.38775963, 0.45004836, 0.5957267, 0.59481275, 0.33287364] ) max_diff = np.abs(image_slice.flatten() - expected_slice).max() self.assertLessEqual(max_diff, 1e-3) def 
test_sequential_cpu_offload_forward_pass(self): # TODO(YiYi) need to fix later pass def test_sequential_offload_forward_pass_twice(self): # TODO(YiYi) need to fix later pass def test_inference_batch_single_identical(self): self._test_inference_batch_single_identical( expected_max_diff=1e-3, ) def test_save_load_optional_components(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) prompt = inputs["prompt"] generator = inputs["generator"] num_inference_steps = inputs["num_inference_steps"] output_type = inputs["output_type"] ( prompt_embeds, negative_prompt_embeds, prompt_attention_mask, negative_prompt_attention_mask, ) = pipe.encode_prompt(prompt, device=torch_device, dtype=torch.float32, text_encoder_index=0) ( prompt_embeds_2, negative_prompt_embeds_2, prompt_attention_mask_2, negative_prompt_attention_mask_2, ) = pipe.encode_prompt( prompt, device=torch_device, dtype=torch.float32, text_encoder_index=1, ) # inputs with prompt converted to embeddings inputs = { "prompt_embeds": prompt_embeds, "prompt_attention_mask": prompt_attention_mask, "negative_prompt_embeds": negative_prompt_embeds, "negative_prompt_attention_mask": negative_prompt_attention_mask, "prompt_embeds_2": prompt_embeds_2, "prompt_attention_mask_2": prompt_attention_mask_2, "negative_prompt_embeds_2": negative_prompt_embeds_2, "negative_prompt_attention_mask_2": negative_prompt_attention_mask_2, "generator": generator, "num_inference_steps": num_inference_steps, "output_type": output_type, "use_resolution_binning": False, } # set all optional components to None for optional_component in pipe._optional_components: setattr(pipe, optional_component, None) output = pipe(**inputs)[0] with tempfile.TemporaryDirectory() as tmpdir: pipe.save_pretrained(tmpdir) pipe_loaded = self.pipeline_class.from_pretrained(tmpdir) pipe_loaded.to(torch_device) pipe_loaded.set_progress_bar_config(disable=None) for optional_component in pipe._optional_components: self.assertTrue( getattr(pipe_loaded, optional_component) is None, f"`{optional_component}` did not stay set to None after loading.", ) inputs = self.get_dummy_inputs(torch_device) generator = inputs["generator"] num_inference_steps = inputs["num_inference_steps"] output_type = inputs["output_type"] # inputs with prompt converted to embeddings inputs = { "prompt_embeds": prompt_embeds, "prompt_attention_mask": prompt_attention_mask, "negative_prompt_embeds": negative_prompt_embeds, "negative_prompt_attention_mask": negative_prompt_attention_mask, "prompt_embeds_2": prompt_embeds_2, "prompt_attention_mask_2": prompt_attention_mask_2, "negative_prompt_embeds_2": negative_prompt_embeds_2, "negative_prompt_attention_mask_2": negative_prompt_attention_mask_2, "generator": generator, "num_inference_steps": num_inference_steps, "output_type": output_type, "use_resolution_binning": False, } output_loaded = pipe_loaded(**inputs)[0] max_diff = np.abs(to_np(output) - to_np(output_loaded)).max() self.assertLess(max_diff, 1e-4) def test_feed_forward_chunking(self): device = "cpu" components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = pipe(**inputs).images image_slice_no_chunking = image[0, -3:, -3:, -1] pipe.transformer.enable_forward_chunking(chunk_size=1, dim=0) inputs = self.get_dummy_inputs(device) image = 
pipe(**inputs).images image_slice_chunking = image[0, -3:, -3:, -1] max_diff = np.abs(to_np(image_slice_no_chunking) - to_np(image_slice_chunking)).max() self.assertLess(max_diff, 1e-4) def test_fused_qkv_projections(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) inputs["return_dict"] = False image = pipe(**inputs)[0] original_image_slice = image[0, -3:, -3:, -1] pipe.transformer.fuse_qkv_projections() inputs = self.get_dummy_inputs(device) inputs["return_dict"] = False image_fused = pipe(**inputs)[0] image_slice_fused = image_fused[0, -3:, -3:, -1] pipe.transformer.unfuse_qkv_projections() inputs = self.get_dummy_inputs(device) inputs["return_dict"] = False image_disabled = pipe(**inputs)[0] image_slice_disabled = image_disabled[0, -3:, -3:, -1] assert np.allclose( original_image_slice, image_slice_fused, atol=1e-2, rtol=1e-2 ), "Fusion of QKV projections shouldn't affect the outputs." assert np.allclose( image_slice_fused, image_slice_disabled, atol=1e-2, rtol=1e-2 ), "Outputs, with QKV projection fusion enabled, shouldn't change when fused QKV projections are disabled." assert np.allclose( original_image_slice, image_slice_disabled, atol=1e-2, rtol=1e-2 ), "Original outputs should match when fused QKV projections are disabled." def test_pag_disable_enable(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() # base pipeline (expect same output when pag is disabled) pipe_sd = HunyuanDiTPipeline(**components) pipe_sd = pipe_sd.to(device) pipe_sd.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) del inputs["pag_scale"] assert ( "pag_scale" not in inspect.signature(pipe_sd.__call__).parameters ), f"`pag_scale` should not be a call parameter of the base pipeline {pipe_sd.__class__.__name__}." 
out = pipe_sd(**inputs).images[0, -3:, -3:, -1] components = self.get_dummy_components() # pag disabled with pag_scale=0.0 pipe_pag = self.pipeline_class(**components) pipe_pag = pipe_pag.to(device) pipe_pag.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) inputs["pag_scale"] = 0.0 out_pag_disabled = pipe_pag(**inputs).images[0, -3:, -3:, -1] # pag enabled pipe_pag = self.pipeline_class(**components) pipe_pag = pipe_pag.to(device) pipe_pag.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) inputs["pag_scale"] = 3.0 out_pag_enabled = pipe_pag(**inputs).images[0, -3:, -3:, -1] assert np.abs(out.flatten() - out_pag_disabled.flatten()).max() < 1e-3 assert np.abs(out.flatten() - out_pag_enabled.flatten()).max() > 1e-3 def test_pag_applied_layers(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() # base pipeline pipe = self.pipeline_class(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) all_self_attn_layers = [k for k in pipe.transformer.attn_processors.keys() if "attn1" in k] original_attn_procs = pipe.transformer.attn_processors pag_layers = ["blocks.0", "blocks.1"] pipe._set_pag_attn_processor(pag_applied_layers=pag_layers, do_classifier_free_guidance=False) assert set(pipe.pag_attn_processors) == set(all_self_attn_layers) # blocks.0 block_0_self_attn = ["blocks.0.attn1.processor"] pipe.transformer.set_attn_processor(original_attn_procs.copy()) pag_layers = ["blocks.0"] pipe._set_pag_attn_processor(pag_applied_layers=pag_layers, do_classifier_free_guidance=False) assert set(pipe.pag_attn_processors) == set(block_0_self_attn) pipe.transformer.set_attn_processor(original_attn_procs.copy()) pag_layers = ["blocks.0.attn1"] pipe._set_pag_attn_processor(pag_applied_layers=pag_layers, do_classifier_free_guidance=False) assert set(pipe.pag_attn_processors) == set(block_0_self_attn) pipe.transformer.set_attn_processor(original_attn_procs.copy()) pag_layers = ["blocks.(0|1)"] pipe._set_pag_attn_processor(pag_applied_layers=pag_layers, do_classifier_free_guidance=False) assert (len(pipe.pag_attn_processors)) == 2 pipe.transformer.set_attn_processor(original_attn_procs.copy()) pag_layers = ["blocks.0", r"blocks\.1"] pipe._set_pag_attn_processor(pag_applied_layers=pag_layers, do_classifier_free_guidance=False) assert len(pipe.pag_attn_processors) == 2
diffusers/tests/pipelines/pag/test_pag_hunyuan_dit.py/0
{ "file_path": "diffusers/tests/pipelines/pag/test_pag_hunyuan_dit.py", "repo_id": "diffusers", "token_count": 6402 }
# These are canonical sets of parameters for different types of pipelines. # They are set on subclasses of `PipelineTesterMixin` as `params` and # `batch_params`. # # If a pipeline's set of arguments has minor changes from one of the common sets # of arguments, do not make modifications to the existing common sets of arguments. # I.e. a text to image pipeline with non-configurable height and width arguments # should set its attribute as `params = TEXT_TO_IMAGE_PARAMS - {'height', 'width'}`. TEXT_TO_IMAGE_PARAMS = frozenset( [ "prompt", "height", "width", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", "cross_attention_kwargs", ] ) TEXT_TO_IMAGE_BATCH_PARAMS = frozenset(["prompt", "negative_prompt"]) TEXT_TO_IMAGE_IMAGE_PARAMS = frozenset([]) IMAGE_TO_IMAGE_IMAGE_PARAMS = frozenset(["image"]) IMAGE_VARIATION_PARAMS = frozenset( [ "image", "height", "width", "guidance_scale", ] ) IMAGE_VARIATION_BATCH_PARAMS = frozenset(["image"]) TEXT_GUIDED_IMAGE_VARIATION_PARAMS = frozenset( [ "prompt", "image", "height", "width", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", ] ) TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS = frozenset(["prompt", "image", "negative_prompt"]) TEXT_GUIDED_IMAGE_INPAINTING_PARAMS = frozenset( [ # Text guided image variation with an image mask "prompt", "image", "mask_image", "height", "width", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", ] ) TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["prompt", "image", "mask_image", "negative_prompt"]) IMAGE_INPAINTING_PARAMS = frozenset( [ # image variation with an image mask "image", "mask_image", "height", "width", "guidance_scale", ] ) IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["image", "mask_image"]) IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS = frozenset( [ "example_image", "image", "mask_image", "height", "width", "guidance_scale", ] ) IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["example_image", "image", "mask_image"]) CLASS_CONDITIONED_IMAGE_GENERATION_PARAMS = frozenset(["class_labels"]) CLASS_CONDITIONED_IMAGE_GENERATION_BATCH_PARAMS = frozenset(["class_labels"]) UNCONDITIONAL_IMAGE_GENERATION_PARAMS = frozenset(["batch_size"]) UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS = frozenset([]) UNCONDITIONAL_AUDIO_GENERATION_PARAMS = frozenset(["batch_size"]) UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS = frozenset([]) TEXT_TO_AUDIO_PARAMS = frozenset( [ "prompt", "audio_length_in_s", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", "cross_attention_kwargs", ] ) TEXT_TO_AUDIO_BATCH_PARAMS = frozenset(["prompt", "negative_prompt"]) TOKENS_TO_AUDIO_GENERATION_PARAMS = frozenset(["input_tokens"]) TOKENS_TO_AUDIO_GENERATION_BATCH_PARAMS = frozenset(["input_tokens"]) TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS = frozenset(["prompt_embeds"]) VIDEO_TO_VIDEO_BATCH_PARAMS = frozenset(["prompt", "negative_prompt", "video"])
diffusers/tests/pipelines/pipeline_params.py/0
{ "file_path": "diffusers/tests/pipelines/pipeline_params.py", "repo_id": "diffusers", "token_count": 1584 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import random import tempfile import unittest import numpy as np import torch from PIL import Image from transformers import ( CLIPTextConfig, CLIPTextModel, CLIPTokenizer, DPTConfig, DPTForDepthEstimation, DPTImageProcessor, ) from diffusers import ( AutoencoderKL, PNDMScheduler, StableDiffusionDepth2ImgPipeline, UNet2DConditionModel, ) from diffusers.utils.testing_utils import ( enable_full_determinism, floats_tensor, load_image, load_numpy, nightly, require_accelerate_version_greater, require_accelerator, require_torch_gpu, skip_mps, slow, torch_device, ) from ..pipeline_params import ( IMAGE_TO_IMAGE_IMAGE_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS, TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, ) from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin enable_full_determinism() @skip_mps class StableDiffusionDepth2ImgPipelineFastTests( PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase ): pipeline_class = StableDiffusionDepth2ImgPipeline test_save_load_optional_components = False params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"height", "width"} required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"} batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS callback_cfg_params = TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS.union({"depth_mask"}) supports_dduf = False def get_dummy_components(self): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=5, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), cross_attention_dim=32, attention_head_dim=(2, 4), use_linear_projection=True, ) scheduler = PNDMScheduler(skip_prk_steps=True) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") backbone_config = { "global_padding": "same", "layer_type": "bottleneck", "depths": [3, 4, 9], "out_features": ["stage1", "stage2", "stage3"], "embedding_dynamic_padding": True, "hidden_sizes": [96, 192, 384, 768], "num_groups": 2, } depth_estimator_config = DPTConfig( image_size=32, patch_size=16, num_channels=3, hidden_size=32, num_hidden_layers=4, 
backbone_out_indices=(0, 1, 2, 3), num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, is_decoder=False, initializer_range=0.02, is_hybrid=True, backbone_config=backbone_config, backbone_featmap_shape=[1, 384, 24, 24], ) depth_estimator = DPTForDepthEstimation(depth_estimator_config).eval() feature_extractor = DPTImageProcessor.from_pretrained("hf-internal-testing/tiny-random-DPTForDepthEstimation") components = { "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "depth_estimator": depth_estimator, "feature_extractor": feature_extractor, } return components def get_dummy_inputs(self, device, seed=0): image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)) image = image.cpu().permute(0, 2, 3, 1)[0] image = Image.fromarray(np.uint8(image)).convert("RGB").resize((32, 32)) if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "A painting of a squirrel eating a burger", "image": image, "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, "output_type": "np", } return inputs def test_save_load_local(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output = pipe(**inputs)[0] with tempfile.TemporaryDirectory() as tmpdir: pipe.save_pretrained(tmpdir) pipe_loaded = self.pipeline_class.from_pretrained(tmpdir) pipe_loaded.to(torch_device) pipe_loaded.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output_loaded = pipe_loaded(**inputs)[0] max_diff = np.abs(output - output_loaded).max() self.assertLess(max_diff, 1e-4) @unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU") @require_accelerator def test_save_load_float16(self): components = self.get_dummy_components() for name, module in components.items(): if hasattr(module, "half"): components[name] = module.to(torch_device).half() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output = pipe(**inputs)[0] with tempfile.TemporaryDirectory() as tmpdir: pipe.save_pretrained(tmpdir) pipe_loaded = self.pipeline_class.from_pretrained(tmpdir, torch_dtype=torch.float16) pipe_loaded.to(torch_device) pipe_loaded.set_progress_bar_config(disable=None) for name, component in pipe_loaded.components.items(): if hasattr(component, "dtype"): self.assertTrue( component.dtype == torch.float16, f"`{name}.dtype` switched from `float16` to {component.dtype} after loading.", ) inputs = self.get_dummy_inputs(torch_device) output_loaded = pipe_loaded(**inputs)[0] max_diff = np.abs(output - output_loaded).max() self.assertLess(max_diff, 2e-2, "The output of the fp16 pipeline changed after saving and loading.") @unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU") @require_accelerator def test_float16_inference(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) for name, module in components.items(): if hasattr(module, "half"): components[name] = module.half() pipe_fp16 = self.pipeline_class(**components) pipe_fp16.to(torch_device) 
pipe_fp16.set_progress_bar_config(disable=None) output = pipe(**self.get_dummy_inputs(torch_device))[0] output_fp16 = pipe_fp16(**self.get_dummy_inputs(torch_device))[0] max_diff = np.abs(output - output_fp16).max() self.assertLess(max_diff, 1.3e-2, "The outputs of the fp16 and fp32 pipelines are too different.") @require_accelerator @require_accelerate_version_greater("0.14.0") def test_cpu_offload_forward_pass(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) output_without_offload = pipe(**inputs)[0] pipe.enable_sequential_cpu_offload(device=torch_device) inputs = self.get_dummy_inputs(torch_device) output_with_offload = pipe(**inputs)[0] max_diff = np.abs(output_with_offload - output_without_offload).max() self.assertLess(max_diff, 1e-4, "CPU offloading should not affect the inference results") def test_dict_tuple_outputs_equivalent(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) output = pipe(**self.get_dummy_inputs(torch_device))[0] output_tuple = pipe(**self.get_dummy_inputs(torch_device), return_dict=False)[0] max_diff = np.abs(output - output_tuple).max() self.assertLess(max_diff, 1e-4) def test_stable_diffusion_depth2img_default_case(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 32, 32, 3) if torch_device == "mps": expected_slice = np.array([0.6071, 0.5035, 0.4378, 0.5776, 0.5753, 0.4316, 0.4513, 0.5263, 0.4546]) else: expected_slice = np.array([0.5435, 0.4992, 0.3783, 0.4411, 0.5842, 0.4654, 0.3786, 0.5077, 0.4655]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 def test_stable_diffusion_depth2img_negative_prompt(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) negative_prompt = "french fries" output = pipe(**inputs, negative_prompt=negative_prompt) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 32, 32, 3) if torch_device == "mps": expected_slice = np.array([0.6296, 0.5125, 0.3890, 0.4456, 0.5955, 0.4621, 0.3810, 0.5310, 0.4626]) else: expected_slice = np.array([0.6012, 0.4507, 0.3769, 0.4121, 0.5566, 0.4585, 0.3803, 0.5045, 0.4631]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 def test_stable_diffusion_depth2img_multiple_init_images(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) inputs["prompt"] = [inputs["prompt"]] * 2 inputs["image"] = 2 * [inputs["image"]] image = pipe(**inputs).images image_slice = image[-1, -3:, -3:, -1] assert image.shape == (2, 32, 32, 3) if torch_device == "mps": expected_slice = np.array([0.6501, 0.5150, 0.4939, 0.6688, 0.5437, 0.5758, 
0.5115, 0.4406, 0.4551]) else: expected_slice = np.array([0.6557, 0.6214, 0.6254, 0.5775, 0.4785, 0.5949, 0.5904, 0.4785, 0.4730]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 def test_stable_diffusion_depth2img_pil(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() pipe = StableDiffusionDepth2ImgPipeline(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] if torch_device == "mps": expected_slice = np.array([0.53232, 0.47015, 0.40868, 0.45651, 0.4891, 0.4668, 0.4287, 0.48822, 0.47439]) else: expected_slice = np.array([0.5435, 0.4992, 0.3783, 0.4411, 0.5842, 0.4654, 0.3786, 0.5077, 0.4655]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 @skip_mps def test_attention_slicing_forward_pass(self): return super().test_attention_slicing_forward_pass() def test_inference_batch_single_identical(self): super().test_inference_batch_single_identical(expected_max_diff=7e-3) @slow @require_torch_gpu class StableDiffusionDepth2ImgPipelineSlowTests(unittest.TestCase): def setUp(self): super().setUp() gc.collect() torch.cuda.empty_cache() def tearDown(self): super().tearDown() gc.collect() torch.cuda.empty_cache() def get_inputs(self, device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=device).manual_seed(seed) init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/depth2img/two_cats.png" ) inputs = { "prompt": "two tigers", "image": init_image, "generator": generator, "num_inference_steps": 3, "strength": 0.75, "guidance_scale": 7.5, "output_type": "np", } return inputs def test_stable_diffusion_depth2img_pipeline_default(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-2-depth", safety_checker=None ) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing() inputs = self.get_inputs() image = pipe(**inputs).images image_slice = image[0, 253:256, 253:256, -1].flatten() assert image.shape == (1, 480, 640, 3) expected_slice = np.array([0.5435, 0.4992, 0.3783, 0.4411, 0.5842, 0.4654, 0.3786, 0.5077, 0.4655]) assert np.abs(expected_slice - image_slice).max() < 6e-1 @nightly @require_torch_gpu class StableDiffusionImg2ImgPipelineNightlyTests(unittest.TestCase): def setUp(self): super().setUp() gc.collect() torch.cuda.empty_cache() def tearDown(self): super().tearDown() gc.collect() torch.cuda.empty_cache() def get_inputs(self, device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=device).manual_seed(seed) init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/depth2img/two_cats.png" ) inputs = { "prompt": "two tigers", "image": init_image, "generator": generator, "num_inference_steps": 2, "strength": 0.75, "guidance_scale": 7.5, "output_type": "np", } return inputs def test_depth2img(self): pipe = StableDiffusionDepth2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-depth") pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_inputs() image = pipe(**inputs).images[0] expected_image = load_numpy( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" "/stable_diffusion_depth2img/stable_diffusion_2_0_pndm.npy" ) max_diff = np.abs(expected_image - 
image).max() assert max_diff < 1e-3
diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py/0
{ "file_path": "diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py", "repo_id": "diffusers", "token_count": 8066 }
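The slow and nightly tests in the file above reduce to a depth2img pipeline call of roughly the following shape. The checkpoint name, prompt, image URL, strength, and guidance scale are taken from the test inputs; the fp16/CUDA settings are illustrative assumptions and not part of the test file.

import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

# Load the depth-conditioned pipeline used by the slow tests above.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/depth2img/two_cats.png"
)

# Same prompt / strength / guidance values as the tests' get_inputs() helper.
image = pipe(
    prompt="two tigers",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]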
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import random import unittest import numpy as np import torch from parameterized import parameterized from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer import diffusers from diffusers import ( AutoencoderKL, EulerDiscreteScheduler, LCMScheduler, MultiAdapter, StableDiffusionXLAdapterPipeline, T2IAdapter, UNet2DConditionModel, ) from diffusers.utils import logging from diffusers.utils.testing_utils import ( enable_full_determinism, floats_tensor, torch_device, ) from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS from ..test_pipelines_common import ( IPAdapterTesterMixin, PipelineTesterMixin, SDXLOptionalComponentsTesterMixin, assert_mean_pixel_difference, ) enable_full_determinism() class StableDiffusionXLAdapterPipelineFastTests( IPAdapterTesterMixin, PipelineTesterMixin, SDXLOptionalComponentsTesterMixin, unittest.TestCase ): pipeline_class = StableDiffusionXLAdapterPipeline params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS def get_dummy_components(self, adapter_type="full_adapter_xl", time_cond_proj_dim=None): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), # SD2-specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, time_cond_proj_dim=time_cond_proj_dim, ) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, sample_size=128, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=32, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") if adapter_type == "full_adapter_xl": adapter = T2IAdapter( in_channels=3, channels=[32, 64], num_res_blocks=2, downscale_factor=4, adapter_type=adapter_type, ) elif adapter_type == "multi_adapter": adapter = MultiAdapter( [ T2IAdapter( 
in_channels=3, channels=[32, 64], num_res_blocks=2, downscale_factor=4, adapter_type="full_adapter_xl", ), T2IAdapter( in_channels=3, channels=[32, 64], num_res_blocks=2, downscale_factor=4, adapter_type="full_adapter_xl", ), ] ) else: raise ValueError( f"Unknown adapter type: {adapter_type}, must be one of 'full_adapter_xl', or 'multi_adapter''" ) components = { "adapter": adapter, "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, # "safety_checker": None, "feature_extractor": None, "image_encoder": None, } return components def get_dummy_components_with_full_downscaling(self, adapter_type="full_adapter_xl"): """Get dummy components with x8 VAE downscaling and 3 UNet down blocks. These dummy components are intended to fully-exercise the T2I-Adapter downscaling behavior. """ torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(32, 32, 64), layers_per_block=2, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "UpBlock2D"), # SD2-specific config below attention_head_dim=2, use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=1, projection_class_embeddings_input_dim=80, # 6 * 8 + 32 cross_attention_dim=64, ) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 32, 32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, sample_size=128, ) torch.manual_seed(0) text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=32, ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") if adapter_type == "full_adapter_xl": adapter = T2IAdapter( in_channels=3, channels=[32, 32, 64], num_res_blocks=2, downscale_factor=16, adapter_type=adapter_type, ) elif adapter_type == "multi_adapter": adapter = MultiAdapter( [ T2IAdapter( in_channels=3, channels=[32, 32, 64], num_res_blocks=2, downscale_factor=16, adapter_type="full_adapter_xl", ), T2IAdapter( in_channels=3, channels=[32, 32, 64], num_res_blocks=2, downscale_factor=16, adapter_type="full_adapter_xl", ), ] ) else: raise ValueError( f"Unknown adapter type: {adapter_type}, must be one of 'full_adapter_xl', or 'multi_adapter''" ) components = { "adapter": adapter, "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "text_encoder_2": text_encoder_2, "tokenizer_2": tokenizer_2, # "safety_checker": None, "feature_extractor": None, "image_encoder": None, } return components def get_dummy_inputs(self, device, seed=0, height=64, width=64, num_images=1): if num_images == 1: image = 
floats_tensor((1, 3, height, width), rng=random.Random(seed)).to(device) else: image = [ floats_tensor((1, 3, height, width), rng=random.Random(seed)).to(device) for _ in range(num_images) ] if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "A painting of a squirrel eating a burger", "image": image, "generator": generator, "num_inference_steps": 2, "guidance_scale": 5.0, "output_type": "np", } return inputs def test_ip_adapter(self, from_multi=False, expected_pipe_slice=None): if not from_multi: expected_pipe_slice = None if torch_device == "cpu": expected_pipe_slice = np.array( [0.5752, 0.6155, 0.4826, 0.5111, 0.5741, 0.4678, 0.5199, 0.5231, 0.4794] ) return super().test_ip_adapter(expected_pipe_slice=expected_pipe_slice) def test_stable_diffusion_adapter_default_case(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() sd_pipe = StableDiffusionXLAdapterPipeline(**components) sd_pipe = sd_pipe.to(device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = sd_pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([00.5752, 0.6155, 0.4826, 0.5111, 0.5741, 0.4678, 0.5199, 0.5231, 0.4794]) assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3 @parameterized.expand( [ # (dim=144) The internal feature map will be 9x9 after initial pixel unshuffling (downscaled x16). (((4 * 2 + 1) * 16),), # (dim=160) The internal feature map will be 5x5 after the first T2I down block (downscaled x32). (((4 * 1 + 1) * 32),), ] ) def test_multiple_image_dimensions(self, dim): """Test that the T2I-Adapter pipeline supports any input dimension that is divisible by the adapter's `downscale_factor`. This test was added in response to an issue where the T2I Adapter's downscaling padding behavior did not match the UNet's behavior. Note that we have selected `dim` values to produce odd resolutions at each downscaling level. """ components = self.get_dummy_components_with_full_downscaling() sd_pipe = StableDiffusionXLAdapterPipeline(**components) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device, height=dim, width=dim) image = sd_pipe(**inputs).images assert image.shape == (1, dim, dim, 3) @parameterized.expand(["full_adapter", "full_adapter_xl", "light_adapter"]) def test_total_downscale_factor(self, adapter_type): """Test that the T2IAdapter correctly reports its total_downscale_factor.""" batch_size = 1 in_channels = 3 out_channels = [320, 640, 1280, 1280] in_image_size = 512 adapter = T2IAdapter( in_channels=in_channels, channels=out_channels, num_res_blocks=2, downscale_factor=8, adapter_type=adapter_type, ) adapter.to(torch_device) in_image = floats_tensor((batch_size, in_channels, in_image_size, in_image_size)).to(torch_device) adapter_state = adapter(in_image) # Assume that the last element in `adapter_state` has been downsampled the most, and check # that it matches the `total_downscale_factor`. 
expected_out_image_size = in_image_size // adapter.total_downscale_factor assert adapter_state[-1].shape == ( batch_size, out_channels[-1], expected_out_image_size, expected_out_image_size, ) def test_save_load_optional_components(self): return self._test_save_load_optional_components() def test_adapter_sdxl_lcm(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components(time_cond_proj_dim=256) sd_pipe = StableDiffusionXLAdapterPipeline(**components) sd_pipe.scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) output = sd_pipe(**inputs) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.5425, 0.5385, 0.4964, 0.5045, 0.6149, 0.4974, 0.5469, 0.5332, 0.5426]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 def test_adapter_sdxl_lcm_custom_timesteps(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components(time_cond_proj_dim=256) sd_pipe = StableDiffusionXLAdapterPipeline(**components) sd_pipe.scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) del inputs["num_inference_steps"] inputs["timesteps"] = [999, 499] output = sd_pipe(**inputs) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.5425, 0.5385, 0.4964, 0.5045, 0.6149, 0.4974, 0.5469, 0.5332, 0.5426]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 class StableDiffusionXLMultiAdapterPipelineFastTests( StableDiffusionXLAdapterPipelineFastTests, PipelineTesterMixin, unittest.TestCase ): supports_dduf = False def get_dummy_components(self, time_cond_proj_dim=None): return super().get_dummy_components("multi_adapter", time_cond_proj_dim=time_cond_proj_dim) def get_dummy_components_with_full_downscaling(self): return super().get_dummy_components_with_full_downscaling("multi_adapter") def get_dummy_inputs(self, device, seed=0, height=64, width=64): inputs = super().get_dummy_inputs(device, seed, height, width, num_images=2) inputs["adapter_conditioning_scale"] = [0.5, 0.5] return inputs def test_stable_diffusion_adapter_default_case(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components() sd_pipe = StableDiffusionXLAdapterPipeline(**components) sd_pipe = sd_pipe.to(device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = sd_pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.5617, 0.6081, 0.4807, 0.5071, 0.5665, 0.4614, 0.5165, 0.5164, 0.4786]) assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3 def test_ip_adapter(self): expected_pipe_slice = None if torch_device == "cpu": expected_pipe_slice = np.array([0.5617, 0.6081, 0.4807, 0.5071, 0.5665, 0.4614, 0.5165, 0.5164, 0.4786]) return super().test_ip_adapter(from_multi=True, expected_pipe_slice=expected_pipe_slice) def test_inference_batch_consistent( self, batch_sizes=[2, 4, 13], additional_params_copy_to_batched_inputs=["num_inference_steps"] ): components = self.get_dummy_components() pipe = self.pipeline_class(**components) 
pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) logger = logging.get_logger(pipe.__module__) logger.setLevel(level=diffusers.logging.FATAL) # batchify inputs for batch_size in batch_sizes: batched_inputs = {} for name, value in inputs.items(): if name in self.batch_params: # prompt is string if name == "prompt": len_prompt = len(value) # make unequal batch sizes batched_inputs[name] = [value[: len_prompt // i] for i in range(1, batch_size + 1)] # make last batch super long batched_inputs[name][-1] = 100 * "very long" elif name == "image": batched_images = [] for image in value: batched_images.append(batch_size * [image]) batched_inputs[name] = batched_images else: batched_inputs[name] = batch_size * [value] elif name == "batch_size": batched_inputs[name] = batch_size else: batched_inputs[name] = value for arg in additional_params_copy_to_batched_inputs: batched_inputs[arg] = inputs[arg] batched_inputs["output_type"] = "np" output = pipe(**batched_inputs) assert len(output[0]) == batch_size batched_inputs["output_type"] = "np" output = pipe(**batched_inputs)[0] assert output.shape[0] == batch_size logger.setLevel(level=diffusers.logging.WARNING) def test_num_images_per_prompt(self): components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) batch_sizes = [1, 2] num_images_per_prompts = [1, 2] for batch_size in batch_sizes: for num_images_per_prompt in num_images_per_prompts: inputs = self.get_dummy_inputs(torch_device) for key in inputs.keys(): if key in self.batch_params: if key == "image": batched_images = [] for image in inputs[key]: batched_images.append(batch_size * [image]) inputs[key] = batched_images else: inputs[key] = batch_size * [inputs[key]] images = pipe(**inputs, num_images_per_prompt=num_images_per_prompt)[0] assert images.shape[0] == batch_size * num_images_per_prompt def test_inference_batch_single_identical( self, batch_size=3, test_max_difference=None, test_mean_pixel_difference=None, relax_max_difference=False, expected_max_diff=2e-3, additional_params_copy_to_batched_inputs=["num_inference_steps"], ): if test_max_difference is None: # TODO(Pedro) - not sure why, but not at all reproducible at the moment it seems # make sure that batched and non-batched is identical test_max_difference = torch_device != "mps" if test_mean_pixel_difference is None: # TODO same as above test_mean_pixel_difference = torch_device != "mps" components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(torch_device) logger = logging.get_logger(pipe.__module__) logger.setLevel(level=diffusers.logging.FATAL) # batchify inputs batched_inputs = {} for name, value in inputs.items(): if name in self.batch_params: # prompt is string if name == "prompt": len_prompt = len(value) # make unequal batch sizes batched_inputs[name] = [value[: len_prompt // i] for i in range(1, batch_size + 1)] # make last batch super long batched_inputs[name][-1] = 100 * "very long" elif name == "image": batched_images = [] for image in value: batched_images.append(batch_size * [image]) batched_inputs[name] = batched_images else: batched_inputs[name] = batch_size * [value] elif name == "batch_size": batched_inputs[name] = batch_size elif name == "generator": batched_inputs[name] = [self.get_generator(i) for i in range(batch_size)] else: batched_inputs[name] 
= value for arg in additional_params_copy_to_batched_inputs: batched_inputs[arg] = inputs[arg] output_batch = pipe(**batched_inputs) assert output_batch[0].shape[0] == batch_size inputs["generator"] = self.get_generator(0) output = pipe(**inputs) logger.setLevel(level=diffusers.logging.WARNING) if test_max_difference: if relax_max_difference: # Taking the median of the largest <n> differences # is resilient to outliers diff = np.abs(output_batch[0][0] - output[0][0]) diff = diff.flatten() diff.sort() max_diff = np.median(diff[-5:]) else: max_diff = np.abs(output_batch[0][0] - output[0][0]).max() assert max_diff < expected_max_diff if test_mean_pixel_difference: assert_mean_pixel_difference(output_batch[0][0], output[0][0]) def test_adapter_sdxl_lcm(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components(time_cond_proj_dim=256) sd_pipe = StableDiffusionXLAdapterPipeline(**components) sd_pipe.scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) output = sd_pipe(**inputs) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.5313, 0.5375, 0.4942, 0.5021, 0.6142, 0.4968, 0.5434, 0.5311, 0.5448]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 def test_adapter_sdxl_lcm_custom_timesteps(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator components = self.get_dummy_components(time_cond_proj_dim=256) sd_pipe = StableDiffusionXLAdapterPipeline(**components) sd_pipe.scheduler = LCMScheduler.from_config(sd_pipe.scheduler.config) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) del inputs["num_inference_steps"] inputs["timesteps"] = [999, 499] output = sd_pipe(**inputs) image = output.images image_slice = image[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.5313, 0.5375, 0.4942, 0.5021, 0.6142, 0.4968, 0.5434, 0.5311, 0.5448]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
diffusers/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_adapter.py/0
{ "file_path": "diffusers/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_adapter.py", "repo_id": "diffusers", "token_count": 12814 }
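As a standalone illustration of the downscaling property that test_total_downscale_factor checks in the file above, a sketch along these lines can be useful; the channel sizes, input resolution, and the shape relationship are all taken from that test, nothing here is new API.

from diffusers import T2IAdapter
from diffusers.utils.testing_utils import floats_tensor

adapter = T2IAdapter(
    in_channels=3,
    channels=[320, 640, 1280, 1280],
    num_res_blocks=2,
    downscale_factor=8,
    adapter_type="full_adapter_xl",
)

# Random conditioning image, same helper as the test uses.
conditioning = floats_tensor((1, 3, 512, 512))
adapter_state = adapter(conditioning)  # one feature map per channel stage

# The most downsampled feature map matches the reported total_downscale_factor.
assert adapter_state[-1].shape[-1] == 512 // adapter.total_downscale_factor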
from diffusers.utils.testing_utils import require_onnxruntime @require_onnxruntime class OnnxPipelineTesterMixin: """ This mixin is designed to be used with unittest.TestCase classes. It provides a set of common tests for each ONNXRuntime pipeline, e.g. saving and loading the pipeline, equivalence of dict and tuple outputs, etc. """ pass
diffusers/tests/pipelines/test_pipelines_onnx_common.py/0
{ "file_path": "diffusers/tests/pipelines/test_pipelines_onnx_common.py", "repo_id": "diffusers", "token_count": 118 }
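The mixin above currently only documents its intent ("designed to be used with unittest.TestCase classes"). A concrete ONNX pipeline test would presumably combine it with unittest.TestCase roughly as sketched here; the class name, the import path, and the pipeline_class placeholder are all hypothetical.

import unittest

from tests.pipelines.test_pipelines_onnx_common import OnnxPipelineTesterMixin  # import path assumed


class OnnxDummyPipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):
    # Would point at a concrete ONNXRuntime pipeline class once the mixin gains shared tests.
    pipeline_class = None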
import torch from diffusers import TCDScheduler from .test_schedulers import SchedulerCommonTest class TCDSchedulerTest(SchedulerCommonTest): scheduler_classes = (TCDScheduler,) forward_default_kwargs = (("num_inference_steps", 10),) def get_scheduler_config(self, **kwargs): config = { "num_train_timesteps": 1000, "beta_start": 0.00085, "beta_end": 0.0120, "beta_schedule": "scaled_linear", "prediction_type": "epsilon", } config.update(**kwargs) return config @property def default_num_inference_steps(self): return 10 @property def default_valid_timestep(self): kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", None) scheduler_config = self.get_scheduler_config() scheduler = self.scheduler_classes[0](**scheduler_config) scheduler.set_timesteps(num_inference_steps) timestep = scheduler.timesteps[-1] return timestep def test_timesteps(self): for timesteps in [100, 500, 1000]: # 0 is not guaranteed to be in the timestep schedule, but timesteps - 1 is self.check_over_configs(time_step=timesteps - 1, num_train_timesteps=timesteps) def test_betas(self): for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]): self.check_over_configs(time_step=self.default_valid_timestep, beta_start=beta_start, beta_end=beta_end) def test_schedules(self): for schedule in ["linear", "scaled_linear", "squaredcos_cap_v2"]: self.check_over_configs(time_step=self.default_valid_timestep, beta_schedule=schedule) def test_prediction_type(self): for prediction_type in ["epsilon", "v_prediction"]: self.check_over_configs(time_step=self.default_valid_timestep, prediction_type=prediction_type) def test_clip_sample(self): for clip_sample in [True, False]: self.check_over_configs(time_step=self.default_valid_timestep, clip_sample=clip_sample) def test_thresholding(self): self.check_over_configs(time_step=self.default_valid_timestep, thresholding=False) for threshold in [0.5, 1.0, 2.0]: for prediction_type in ["epsilon", "v_prediction"]: self.check_over_configs( time_step=self.default_valid_timestep, thresholding=True, prediction_type=prediction_type, sample_max_value=threshold, ) def test_time_indices(self): # Get default timestep schedule. 
kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", None) scheduler_config = self.get_scheduler_config() scheduler = self.scheduler_classes[0](**scheduler_config) scheduler.set_timesteps(num_inference_steps) timesteps = scheduler.timesteps for t in timesteps: self.check_over_forward(time_step=t) def test_inference_steps(self): # Hardcoded for now for t, num_inference_steps in zip([99, 39, 39, 19], [10, 25, 26, 50]): self.check_over_forward(time_step=t, num_inference_steps=num_inference_steps) def full_loop(self, num_inference_steps=10, seed=0, **config): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config(**config) scheduler = scheduler_class(**scheduler_config) eta = 0.0 # refer to gamma in the paper model = self.dummy_model() sample = self.dummy_sample_deter generator = torch.manual_seed(seed) scheduler.set_timesteps(num_inference_steps) for t in scheduler.timesteps: residual = model(sample, t) sample = scheduler.step(residual, t, sample, eta, generator).prev_sample return sample def test_full_loop_onestep_deter(self): sample = self.full_loop(num_inference_steps=1) result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) assert abs(result_sum.item() - 29.8715) < 1e-3 # 0.0778918 assert abs(result_mean.item() - 0.0389) < 1e-3 def test_full_loop_multistep_deter(self): sample = self.full_loop(num_inference_steps=10) result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) assert abs(result_sum.item() - 181.2040) < 1e-3 assert abs(result_mean.item() - 0.2359) < 1e-3 def test_custom_timesteps(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 1, 0] scheduler.set_timesteps(timesteps=timesteps) scheduler_timesteps = scheduler.timesteps for i, timestep in enumerate(scheduler_timesteps): if i == len(timesteps) - 1: expected_prev_t = -1 else: expected_prev_t = timesteps[i + 1] prev_t = scheduler.previous_timestep(timestep) prev_t = prev_t.item() self.assertEqual(prev_t, expected_prev_t) def test_custom_timesteps_increasing_order(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 51, 0] with self.assertRaises(ValueError, msg="`custom_timesteps` must be in descending order."): scheduler.set_timesteps(timesteps=timesteps) def test_custom_timesteps_passing_both_num_inference_steps_and_timesteps(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [100, 87, 50, 1, 0] num_inference_steps = len(timesteps) with self.assertRaises(ValueError, msg="Can only pass one of `num_inference_steps` or `custom_timesteps`."): scheduler.set_timesteps(num_inference_steps=num_inference_steps, timesteps=timesteps) def test_custom_timesteps_too_large(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) timesteps = [scheduler.config.num_train_timesteps] with self.assertRaises( ValueError, msg="`timesteps` must start before `self.config.train_timesteps`: {scheduler.config.num_train_timesteps}}", ): scheduler.set_timesteps(timesteps=timesteps)
diffusers/tests/schedulers/test_scheduler_tcd.py/0
{ "file_path": "diffusers/tests/schedulers/test_scheduler_tcd.py", "repo_id": "diffusers", "token_count": 3098 }
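The full_loop helper in the scheduler test above boils down to the standard sampling loop sketched below; the scheduler kwargs mirror get_scheduler_config(), while the random tensors stand in for a real denoising model and are not part of the test.

import torch
from diffusers import TCDScheduler

scheduler = TCDScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.0120,
    beta_schedule="scaled_linear",
    prediction_type="epsilon",
)
scheduler.set_timesteps(num_inference_steps=10)

sample = torch.randn(1, 3, 8, 8)  # stand-in for an initial noisy sample
generator = torch.manual_seed(0)
for t in scheduler.timesteps:
    residual = torch.randn_like(sample)  # stand-in for the model's noise prediction
    sample = scheduler.step(residual, t, sample, eta=0.0, generator=generator).prev_sample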
import gc import unittest import torch from diffusers import ( StableDiffusionImg2ImgPipeline, ) from diffusers.utils import load_image from diffusers.utils.testing_utils import ( backend_empty_cache, enable_full_determinism, require_torch_accelerator, slow, torch_device, ) from .single_file_testing_utils import SDSingleFileTesterMixin enable_full_determinism() @slow @require_torch_accelerator class StableDiffusionImg2ImgPipelineSingleFileSlowTests(unittest.TestCase, SDSingleFileTesterMixin): pipeline_class = StableDiffusionImg2ImgPipeline ckpt_path = ( "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors" ) original_config = ( "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml" ) repo_id = "stable-diffusion-v1-5/stable-diffusion-v1-5" def setUp(self): super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): super().tearDown() gc.collect() backend_empty_cache(torch_device) def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=generator_device).manual_seed(seed) init_image = load_image( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" "/stable_diffusion_img2img/sketch-mountains-input.png" ) inputs = { "prompt": "a fantasy landscape, concept art, high resolution", "image": init_image, "generator": generator, "num_inference_steps": 3, "strength": 0.75, "guidance_scale": 7.5, "output_type": "np", } return inputs def test_single_file_format_inference_is_same_as_pretrained(self): super().test_single_file_format_inference_is_same_as_pretrained(expected_max_diff=1e-3) @slow @require_torch_accelerator class StableDiffusion21Img2ImgPipelineSingleFileSlowTests(unittest.TestCase, SDSingleFileTesterMixin): pipeline_class = StableDiffusionImg2ImgPipeline ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.safetensors" original_config = "https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml" repo_id = "stabilityai/stable-diffusion-2-1" def setUp(self): super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): super().tearDown() gc.collect() backend_empty_cache(torch_device) def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0): generator = torch.Generator(device=generator_device).manual_seed(seed) init_image = load_image( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main" "/stable_diffusion_img2img/sketch-mountains-input.png" ) inputs = { "prompt": "a fantasy landscape, concept art, high resolution", "image": init_image, "generator": generator, "num_inference_steps": 3, "strength": 0.75, "guidance_scale": 7.5, "output_type": "np", } return inputs def test_single_file_format_inference_is_same_as_pretrained(self): super().test_single_file_format_inference_is_same_as_pretrained(expected_max_diff=1e-3)
diffusers/tests/single_file/test_stable_diffusion_img2img_single_file.py/0
{ "file_path": "diffusers/tests/single_file/test_stable_diffusion_img2img_single_file.py", "repo_id": "diffusers", "token_count": 1604 }
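For reference, the user-facing path these single-file tests cover looks roughly like this. from_single_file is the loader exercised through SDSingleFileTesterMixin, and the checkpoint URL and repo id are the ones pinned in the first test class above; treat the snippet as a sketch rather than the mixin's exact procedure.

from diffusers import StableDiffusionImg2ImgPipeline

# Load directly from a single .safetensors checkpoint ...
pipe_single_file = StableDiffusionImg2ImgPipeline.from_single_file(
    "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors"
)

# ... and compare against the equivalent multi-folder Hub repo.
pipe_pretrained = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5"
)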
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Assess the performance of video decoding in various configurations. This script will benchmark different video encoding and decoding parameters. See the provided README.md or run `python benchmark/video/run_video_benchmark.py --help` for usage info. """ import argparse import datetime as dt import random import shutil from collections import OrderedDict from concurrent.futures import ThreadPoolExecutor, as_completed from pathlib import Path import einops import numpy as np import pandas as pd import PIL import torch from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity from tqdm import tqdm from lerobot.common.datasets.lerobot_dataset import LeRobotDataset from lerobot.common.datasets.video_utils import ( decode_video_frames_torchvision, encode_video_frames, ) from lerobot.common.utils.benchmark import TimeBenchmark BASE_ENCODING = OrderedDict( [ ("vcodec", "libx264"), ("pix_fmt", "yuv444p"), ("g", 2), ("crf", None), # TODO(aliberts): Add fastdecode # ("fastdecode", 0), ] ) # TODO(rcadene, aliberts): move to `utils.py` folder when we want to refactor def parse_int_or_none(value) -> int | None: if value.lower() == "none": return None try: return int(value) except ValueError as e: raise argparse.ArgumentTypeError(f"Invalid int or None: {value}") from e def check_datasets_formats(repo_ids: list) -> None: for repo_id in repo_ids: dataset = LeRobotDataset(repo_id) if dataset.video: raise ValueError( f"Use only image dataset for running this benchmark. 
Video dataset provided: {repo_id}" ) def get_directory_size(directory: Path) -> int: total_size = 0 for item in directory.rglob("*"): if item.is_file(): total_size += item.stat().st_size return total_size def load_original_frames(imgs_dir: Path, timestamps: list[float], fps: int) -> torch.Tensor: frames = [] for ts in timestamps: idx = int(ts * fps) frame = PIL.Image.open(imgs_dir / f"frame_{idx:06d}.png") frame = torch.from_numpy(np.array(frame)) frame = frame.type(torch.float32) / 255 frame = einops.rearrange(frame, "h w c -> c h w") frames.append(frame) return torch.stack(frames) def save_decoded_frames( imgs_dir: Path, save_dir: Path, frames: torch.Tensor, timestamps: list[float], fps: int ) -> None: if save_dir.exists() and len(list(save_dir.glob("frame_*.png"))) == len(timestamps): return save_dir.mkdir(parents=True, exist_ok=True) for i, ts in enumerate(timestamps): idx = int(ts * fps) frame_hwc = (frames[i].permute((1, 2, 0)) * 255).type(torch.uint8).cpu().numpy() PIL.Image.fromarray(frame_hwc).save(save_dir / f"frame_{idx:06d}_decoded.png") shutil.copyfile(imgs_dir / f"frame_{idx:06d}.png", save_dir / f"frame_{idx:06d}_original.png") def save_first_episode(imgs_dir: Path, dataset: LeRobotDataset) -> None: ep_num_images = dataset.episode_data_index["to"][0].item() if imgs_dir.exists() and len(list(imgs_dir.glob("frame_*.png"))) == ep_num_images: return imgs_dir.mkdir(parents=True, exist_ok=True) hf_dataset = dataset.hf_dataset.with_format(None) # We only save images from the first camera img_keys = [key for key in hf_dataset.features if key.startswith("observation.image")] imgs_dataset = hf_dataset.select_columns(img_keys[0]) for i, item in enumerate( tqdm(imgs_dataset, desc=f"saving {dataset.repo_id} first episode images", leave=False) ): img = item[img_keys[0]] img.save(str(imgs_dir / f"frame_{i:06d}.png"), quality=100) if i >= ep_num_images - 1: break def sample_timestamps(timestamps_mode: str, ep_num_images: int, fps: int) -> list[float]: # Start at 5 to allow for 2_frames_4_space and 6_frames idx = random.randint(5, ep_num_images - 1) match timestamps_mode: case "1_frame": frame_indexes = [idx] case "2_frames": frame_indexes = [idx - 1, idx] case "2_frames_4_space": frame_indexes = [idx - 5, idx] case "6_frames": frame_indexes = [idx - i for i in range(6)][::-1] case _: raise ValueError(timestamps_mode) return [idx / fps for idx in frame_indexes] def decode_video_frames( video_path: str, timestamps: list[float], tolerance_s: float, backend: str, ) -> torch.Tensor: if backend in ["pyav", "video_reader"]: return decode_video_frames_torchvision(video_path, timestamps, tolerance_s, backend) else: raise NotImplementedError(backend) def benchmark_decoding( imgs_dir: Path, video_path: Path, timestamps_mode: str, backend: str, ep_num_images: int, fps: int, num_samples: int = 50, num_workers: int = 4, save_frames: bool = False, ) -> dict: def process_sample(sample: int): time_benchmark = TimeBenchmark() timestamps = sample_timestamps(timestamps_mode, ep_num_images, fps) num_frames = len(timestamps) result = { "psnr_values": [], "ssim_values": [], "mse_values": [], } with time_benchmark: frames = decode_video_frames(video_path, timestamps=timestamps, tolerance_s=5e-1, backend=backend) result["load_time_video_ms"] = time_benchmark.result_ms / num_frames with time_benchmark: original_frames = load_original_frames(imgs_dir, timestamps, fps) result["load_time_images_ms"] = time_benchmark.result_ms / num_frames frames_np, original_frames_np = frames.numpy(), original_frames.numpy() for i in 
range(num_frames): result["mse_values"].append(mean_squared_error(original_frames_np[i], frames_np[i])) result["psnr_values"].append( peak_signal_noise_ratio(original_frames_np[i], frames_np[i], data_range=1.0) ) result["ssim_values"].append( structural_similarity(original_frames_np[i], frames_np[i], data_range=1.0, channel_axis=0) ) if save_frames and sample == 0: save_dir = video_path.with_suffix("") / f"{timestamps_mode}_{backend}" save_decoded_frames(imgs_dir, save_dir, frames, timestamps, fps) return result load_times_video_ms = [] load_times_images_ms = [] mse_values = [] psnr_values = [] ssim_values = [] # A sample is a single set of decoded frames specified by timestamps_mode (e.g. a single frame, 2 frames, etc.). # For each sample, we record metrics (loading time and quality metrics) which are then averaged over all samples. # As these samples are independent, we run them in parallel threads to speed up the benchmark. with ThreadPoolExecutor(max_workers=num_workers) as executor: futures = [executor.submit(process_sample, i) for i in range(num_samples)] for future in tqdm(as_completed(futures), total=num_samples, desc="samples", leave=False): result = future.result() load_times_video_ms.append(result["load_time_video_ms"]) load_times_images_ms.append(result["load_time_images_ms"]) psnr_values.extend(result["psnr_values"]) ssim_values.extend(result["ssim_values"]) mse_values.extend(result["mse_values"]) avg_load_time_video_ms = float(np.array(load_times_video_ms).mean()) avg_load_time_images_ms = float(np.array(load_times_images_ms).mean()) video_images_load_time_ratio = avg_load_time_video_ms / avg_load_time_images_ms return { "avg_load_time_video_ms": avg_load_time_video_ms, "avg_load_time_images_ms": avg_load_time_images_ms, "video_images_load_time_ratio": video_images_load_time_ratio, "avg_mse": float(np.mean(mse_values)), "avg_psnr": float(np.mean(psnr_values)), "avg_ssim": float(np.mean(ssim_values)), } def benchmark_encoding_decoding( dataset: LeRobotDataset, video_path: Path, imgs_dir: Path, encoding_cfg: dict, decoding_cfg: dict, num_samples: int, num_workers: int, save_frames: bool, overwrite: bool = False, seed: int = 1337, ) -> list[dict]: fps = dataset.fps if overwrite or not video_path.is_file(): tqdm.write(f"encoding {video_path}") encode_video_frames( imgs_dir=imgs_dir, video_path=video_path, fps=fps, vcodec=encoding_cfg["vcodec"], pix_fmt=encoding_cfg["pix_fmt"], g=encoding_cfg.get("g"), crf=encoding_cfg.get("crf"), # fast_decode=encoding_cfg.get("fastdecode"), overwrite=True, ) ep_num_images = dataset.episode_data_index["to"][0].item() width, height = tuple(dataset[0][dataset.meta.camera_keys[0]].shape[-2:]) num_pixels = width * height video_size_bytes = video_path.stat().st_size images_size_bytes = get_directory_size(imgs_dir) video_images_size_ratio = video_size_bytes / images_size_bytes random.seed(seed) benchmark_table = [] for timestamps_mode in tqdm( decoding_cfg["timestamps_modes"], desc="decodings (timestamps_modes)", leave=False ): for backend in tqdm(decoding_cfg["backends"], desc="decodings (backends)", leave=False): benchmark_row = benchmark_decoding( imgs_dir, video_path, timestamps_mode, backend, ep_num_images, fps, num_samples, num_workers, save_frames, ) benchmark_row.update( **{ "repo_id": dataset.repo_id, "resolution": f"{width} x {height}", "num_pixels": num_pixels, "video_size_bytes": video_size_bytes, "images_size_bytes": images_size_bytes, "video_images_size_ratio": video_images_size_ratio, "timestamps_mode": timestamps_mode, "backend": 
backend, }, **encoding_cfg, ) benchmark_table.append(benchmark_row) return benchmark_table def main( output_dir: Path, repo_ids: list[str], vcodec: list[str], pix_fmt: list[str], g: list[int], crf: list[int], # fastdecode: list[int], timestamps_modes: list[str], backends: list[str], num_samples: int, num_workers: int, save_frames: bool, ): check_datasets_formats(repo_ids) encoding_benchmarks = { "g": g, "crf": crf, # "fastdecode": fastdecode, } decoding_benchmarks = { "timestamps_modes": timestamps_modes, "backends": backends, } headers = ["repo_id", "resolution", "num_pixels"] headers += list(BASE_ENCODING.keys()) headers += [ "timestamps_mode", "backend", "video_size_bytes", "images_size_bytes", "video_images_size_ratio", "avg_load_time_video_ms", "avg_load_time_images_ms", "video_images_load_time_ratio", "avg_mse", "avg_psnr", "avg_ssim", ] file_paths = [] for video_codec in tqdm(vcodec, desc="encodings (vcodec)"): for pixel_format in tqdm(pix_fmt, desc="encodings (pix_fmt)", leave=False): benchmark_table = [] for repo_id in tqdm(repo_ids, desc="encodings (datasets)", leave=False): dataset = LeRobotDataset(repo_id) imgs_dir = output_dir / "images" / dataset.repo_id.replace("/", "_") # We only use the first episode save_first_episode(imgs_dir, dataset) for key, values in tqdm(encoding_benchmarks.items(), desc="encodings (g, crf)", leave=False): for value in tqdm(values, desc=f"encodings ({key})", leave=False): encoding_cfg = BASE_ENCODING.copy() encoding_cfg["vcodec"] = video_codec encoding_cfg["pix_fmt"] = pixel_format encoding_cfg[key] = value args_path = Path("_".join(str(value) for value in encoding_cfg.values())) video_path = output_dir / "videos" / args_path / f"{repo_id.replace('/', '_')}.mp4" benchmark_table += benchmark_encoding_decoding( dataset, video_path, imgs_dir, encoding_cfg, decoding_benchmarks, num_samples, num_workers, save_frames, ) # Save intermediate results benchmark_df = pd.DataFrame(benchmark_table, columns=headers) now = dt.datetime.now() csv_path = ( output_dir / f"{now:%Y-%m-%d}_{now:%H-%M-%S}_{video_codec}_{pixel_format}_{num_samples}-samples.csv" ) benchmark_df.to_csv(csv_path, header=True, index=False) file_paths.append(csv_path) del benchmark_df # Concatenate all results df_list = [pd.read_csv(csv_path) for csv_path in file_paths] concatenated_df = pd.concat(df_list, ignore_index=True) concatenated_path = output_dir / f"{now:%Y-%m-%d}_{now:%H-%M-%S}_all_{num_samples}-samples.csv" concatenated_df.to_csv(concatenated_path, header=True, index=False) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--output-dir", type=Path, default=Path("outputs/video_benchmark"), help="Directory where the video benchmark outputs are written.", ) parser.add_argument( "--repo-ids", type=str, nargs="*", default=[ "lerobot/pusht_image", "aliberts/aloha_mobile_shrimp_image", "aliberts/paris_street", "aliberts/kitchen", ], help="Datasets repo-ids to test against. First episodes only are used. 
Must be images.", ) parser.add_argument( "--vcodec", type=str, nargs="*", default=["libx264", "libx265", "libsvtav1"], help="Video codecs to be tested", ) parser.add_argument( "--pix-fmt", type=str, nargs="*", default=["yuv444p", "yuv420p"], help="Pixel formats (chroma subsampling) to be tested", ) parser.add_argument( "--g", type=parse_int_or_none, nargs="*", default=[1, 2, 3, 4, 5, 6, 10, 15, 20, 40, 100, None], help="Group of pictures sizes to be tested.", ) parser.add_argument( "--crf", type=parse_int_or_none, nargs="*", default=[0, 5, 10, 15, 20, 25, 30, 40, 50, None], help="Constant rate factors to be tested.", ) # parser.add_argument( # "--fastdecode", # type=int, # nargs="*", # default=[0, 1], # help="Use the fastdecode tuning option. 0 disables it. " # "For libx264 and libx265, only 1 is possible. " # "For libsvtav1, 1, 2 or 3 are possible values with a higher number meaning a faster decoding optimization", # ) parser.add_argument( "--timestamps-modes", type=str, nargs="*", default=[ "1_frame", "2_frames", "2_frames_4_space", "6_frames", ], help="Timestamps scenarios to be tested.", ) parser.add_argument( "--backends", type=str, nargs="*", default=["pyav", "video_reader"], help="Torchvision decoding backend to be tested.", ) parser.add_argument( "--num-samples", type=int, default=50, help="Number of samples for each encoding x decoding config.", ) parser.add_argument( "--num-workers", type=int, default=10, help="Number of processes for parallelized sample processing.", ) parser.add_argument( "--save-frames", type=int, default=0, help="Whether to save decoded frames or not. Enter a non-zero number for true.", ) args = parser.parse_args() main(**vars(args))
lerobot/benchmarks/video/run_video_benchmark.py/0
{ "file_path": "lerobot/benchmarks/video/run_video_benchmark.py", "repo_id": "lerobot", "token_count": 8154 }
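A programmatic invocation of the benchmark script above might look like the sketch below. The keyword arguments mirror the main() signature and the argparse defaults shown in the file; the import path is an assumption about how the module is laid out on the Python path.

from pathlib import Path

from benchmarks.video.run_video_benchmark import main  # import path assumed

main(
    output_dir=Path("outputs/video_benchmark"),
    repo_ids=["lerobot/pusht_image"],
    vcodec=["libx264", "libsvtav1"],
    pix_fmt=["yuv420p"],
    g=[2],
    crf=[None],
    timestamps_modes=["1_frame", "2_frames"],
    backends=["pyav"],
    num_samples=50,
    num_workers=10,
    save_frames=0,
)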
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This file contains lists of available environments, dataset and policies to reflect the current state of LeRobot library. We do not want to import all the dependencies, but instead we keep it lightweight to ensure fast access to these variables. Example: ```python import lerobot print(lerobot.available_envs) print(lerobot.available_tasks_per_env) print(lerobot.available_datasets) print(lerobot.available_datasets_per_env) print(lerobot.available_real_world_datasets) print(lerobot.available_policies) print(lerobot.available_policies_per_env) print(lerobot.available_robots) print(lerobot.available_cameras) print(lerobot.available_motors) ``` When implementing a new dataset loadable with LeRobotDataset follow these steps: - Update `available_datasets_per_env` in `lerobot/__init__.py` When implementing a new environment (e.g. `gym_aloha`), follow these steps: - Update `available_tasks_per_env` and `available_datasets_per_env` in `lerobot/__init__.py` When implementing a new policy class (e.g. `DiffusionPolicy`) follow these steps: - Update `available_policies` and `available_policies_per_env`, in `lerobot/__init__.py` - Set the required `name` class attribute. - Update variables in `tests/test_available.py` by importing your new Policy class """ import itertools from lerobot.__version__ import __version__ # noqa: F401 # TODO(rcadene): Improve policies and envs. As of now, an item in `available_policies` # refers to a yaml file AND a modeling name. Same for `available_envs` which refers to # a yaml file AND a environment name. The difference should be more obvious. available_tasks_per_env = { "aloha": [ "AlohaInsertion-v0", "AlohaTransferCube-v0", ], "pusht": ["PushT-v0"], "xarm": ["XarmLift-v0"], } available_envs = list(available_tasks_per_env.keys()) available_datasets_per_env = { "aloha": [ "lerobot/aloha_sim_insertion_human", "lerobot/aloha_sim_insertion_scripted", "lerobot/aloha_sim_transfer_cube_human", "lerobot/aloha_sim_transfer_cube_scripted", "lerobot/aloha_sim_insertion_human_image", "lerobot/aloha_sim_insertion_scripted_image", "lerobot/aloha_sim_transfer_cube_human_image", "lerobot/aloha_sim_transfer_cube_scripted_image", ], # TODO(alexander-soare): Add "lerobot/pusht_keypoints". Right now we can't because this is too tightly # coupled with tests. 
"pusht": ["lerobot/pusht", "lerobot/pusht_image"], "xarm": [ "lerobot/xarm_lift_medium", "lerobot/xarm_lift_medium_replay", "lerobot/xarm_push_medium", "lerobot/xarm_push_medium_replay", "lerobot/xarm_lift_medium_image", "lerobot/xarm_lift_medium_replay_image", "lerobot/xarm_push_medium_image", "lerobot/xarm_push_medium_replay_image", ], } available_real_world_datasets = [ "lerobot/aloha_mobile_cabinet", "lerobot/aloha_mobile_chair", "lerobot/aloha_mobile_elevator", "lerobot/aloha_mobile_shrimp", "lerobot/aloha_mobile_wash_pan", "lerobot/aloha_mobile_wipe_wine", "lerobot/aloha_static_battery", "lerobot/aloha_static_candy", "lerobot/aloha_static_coffee", "lerobot/aloha_static_coffee_new", "lerobot/aloha_static_cups_open", "lerobot/aloha_static_fork_pick_up", "lerobot/aloha_static_pingpong_test", "lerobot/aloha_static_pro_pencil", "lerobot/aloha_static_screw_driver", "lerobot/aloha_static_tape", "lerobot/aloha_static_thread_velcro", "lerobot/aloha_static_towel", "lerobot/aloha_static_vinh_cup", "lerobot/aloha_static_vinh_cup_left", "lerobot/aloha_static_ziploc_slide", "lerobot/umi_cup_in_the_wild", "lerobot/unitreeh1_fold_clothes", "lerobot/unitreeh1_rearrange_objects", "lerobot/unitreeh1_two_robot_greeting", "lerobot/unitreeh1_warehouse", "lerobot/nyu_rot_dataset", "lerobot/utokyo_saytap", "lerobot/imperialcollege_sawyer_wrist_cam", "lerobot/utokyo_xarm_bimanual", "lerobot/tokyo_u_lsmo", "lerobot/utokyo_pr2_opening_fridge", "lerobot/cmu_franka_exploration_dataset", "lerobot/cmu_stretch", "lerobot/asu_table_top", "lerobot/utokyo_pr2_tabletop_manipulation", "lerobot/utokyo_xarm_pick_and_place", "lerobot/ucsd_kitchen_dataset", "lerobot/austin_buds_dataset", "lerobot/dlr_sara_grid_clamp", "lerobot/conq_hose_manipulation", "lerobot/columbia_cairlab_pusht_real", "lerobot/dlr_sara_pour", "lerobot/dlr_edan_shared_control", "lerobot/ucsd_pick_and_place_dataset", "lerobot/berkeley_cable_routing", "lerobot/nyu_franka_play_dataset", "lerobot/austin_sirius_dataset", "lerobot/cmu_play_fusion", "lerobot/berkeley_gnm_sac_son", "lerobot/nyu_door_opening_surprising_effectiveness", "lerobot/berkeley_fanuc_manipulation", "lerobot/jaco_play", "lerobot/viola", "lerobot/kaist_nonprehensile", "lerobot/berkeley_mvp", "lerobot/uiuc_d3field", "lerobot/berkeley_gnm_recon", "lerobot/austin_sailor_dataset", "lerobot/utaustin_mutex", "lerobot/roboturk", "lerobot/stanford_hydra_dataset", "lerobot/berkeley_autolab_ur5", "lerobot/stanford_robocook", "lerobot/toto", "lerobot/fmb", "lerobot/droid_100", "lerobot/berkeley_rpt", "lerobot/stanford_kuka_multimodal_dataset", "lerobot/iamlab_cmu_pickup_insert", "lerobot/taco_play", "lerobot/berkeley_gnm_cory_hall", "lerobot/usc_cloth_sim", ] available_datasets = sorted( set(itertools.chain(*available_datasets_per_env.values(), available_real_world_datasets)) ) # lists all available policies from `lerobot/common/policies` available_policies = [ "act", "diffusion", "tdmpc", "vqbet", ] # lists all available robots from `lerobot/common/robot_devices/robots` available_robots = [ "koch", "koch_bimanual", "aloha", "so100", "moss", ] # lists all available cameras from `lerobot/common/robot_devices/cameras` available_cameras = [ "opencv", "intelrealsense", ] # lists all available motors from `lerobot/common/robot_devices/motors` available_motors = [ "dynamixel", "feetech", ] # keys and values refer to yaml files available_policies_per_env = { "aloha": ["act"], "pusht": ["diffusion", "vqbet"], "xarm": ["tdmpc"], "koch_real": ["act_koch_real"], "aloha_real": ["act_aloha_real"], } 
env_task_pairs = [(env, task) for env, tasks in available_tasks_per_env.items() for task in tasks] env_dataset_pairs = [ (env, dataset) for env, datasets in available_datasets_per_env.items() for dataset in datasets ] env_dataset_policy_triplets = [ (env, dataset, policy) for env, datasets in available_datasets_per_env.items() for dataset in datasets for policy in available_policies_per_env[env] ]
lerobot/lerobot/__init__.py/0
{ "file_path": "lerobot/lerobot/__init__.py", "repo_id": "lerobot", "token_count": 3253 }
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import collections from dataclasses import dataclass, field from typing import Any, Callable, Sequence import torch from torchvision.transforms import v2 from torchvision.transforms.v2 import Transform from torchvision.transforms.v2 import functional as F # noqa: N812 class RandomSubsetApply(Transform): """Apply a random subset of N transformations from a list of transformations. Args: transforms: list of transformations. p: represents the multinomial probabilities (with no replacement) used for sampling the transform. If the sum of the weights is not 1, they will be normalized. If ``None`` (default), all transforms have the same probability. n_subset: number of transformations to apply. If ``None``, all transforms are applied. Must be in [1, len(transforms)]. random_order: apply transformations in a random order. """ def __init__( self, transforms: Sequence[Callable], p: list[float] | None = None, n_subset: int | None = None, random_order: bool = False, ) -> None: super().__init__() if not isinstance(transforms, Sequence): raise TypeError("Argument transforms should be a sequence of callables") if p is None: p = [1] * len(transforms) elif len(p) != len(transforms): raise ValueError( f"Length of p doesn't match the number of transforms: {len(p)} != {len(transforms)}" ) if n_subset is None: n_subset = len(transforms) elif not isinstance(n_subset, int): raise TypeError("n_subset should be an int or None") elif not (1 <= n_subset <= len(transforms)): raise ValueError(f"n_subset should be in the interval [1, {len(transforms)}]") self.transforms = transforms total = sum(p) self.p = [prob / total for prob in p] self.n_subset = n_subset self.random_order = random_order self.selected_transforms = None def forward(self, *inputs: Any) -> Any: needs_unpacking = len(inputs) > 1 selected_indices = torch.multinomial(torch.tensor(self.p), self.n_subset) if not self.random_order: selected_indices = selected_indices.sort().values self.selected_transforms = [self.transforms[i] for i in selected_indices] for transform in self.selected_transforms: outputs = transform(*inputs) inputs = outputs if needs_unpacking else (outputs,) return outputs def extra_repr(self) -> str: return ( f"transforms={self.transforms}, " f"p={self.p}, " f"n_subset={self.n_subset}, " f"random_order={self.random_order}" ) class SharpnessJitter(Transform): """Randomly change the sharpness of an image or video. Similar to a v2.RandomAdjustSharpness with p=1 and a sharpness_factor sampled randomly. While v2.RandomAdjustSharpness applies — with a given probability — a fixed sharpness_factor to an image, SharpnessJitter applies a random sharpness_factor each time. This is to have a more diverse set of augmentations as a result. A sharpness_factor of 0 gives a blurred image, 1 gives the original image while 2 increases the sharpness by a factor of 2. 
If the input is a :class:`torch.Tensor`, it is expected to have [..., 1 or 3, H, W] shape, where ... means an arbitrary number of leading dimensions. Args: sharpness: How much to jitter sharpness. sharpness_factor is chosen uniformly from [max(0, 1 - sharpness), 1 + sharpness] or the given [min, max]. Should be non negative numbers. """ def __init__(self, sharpness: float | Sequence[float]) -> None: super().__init__() self.sharpness = self._check_input(sharpness) def _check_input(self, sharpness): if isinstance(sharpness, (int, float)): if sharpness < 0: raise ValueError("If sharpness is a single number, it must be non negative.") sharpness = [1.0 - sharpness, 1.0 + sharpness] sharpness[0] = max(sharpness[0], 0.0) elif isinstance(sharpness, collections.abc.Sequence) and len(sharpness) == 2: sharpness = [float(v) for v in sharpness] else: raise TypeError(f"{sharpness=} should be a single number or a sequence with length 2.") if not 0.0 <= sharpness[0] <= sharpness[1]: raise ValueError(f"sharpnesss values should be between (0., inf), but got {sharpness}.") return float(sharpness[0]), float(sharpness[1]) def make_params(self, flat_inputs: list[Any]) -> dict[str, Any]: sharpness_factor = torch.empty(1).uniform_(self.sharpness[0], self.sharpness[1]).item() return {"sharpness_factor": sharpness_factor} def transform(self, inpt: Any, params: dict[str, Any]) -> Any: sharpness_factor = params["sharpness_factor"] return self._call_kernel(F.adjust_sharpness, inpt, sharpness_factor=sharpness_factor) @dataclass class ImageTransformConfig: """ For each transform, the following parameters are available: weight: This represents the multinomial probability (with no replacement) used for sampling the transform. If the sum of the weights is not 1, they will be normalized. type: The name of the class used. This is either a class available under torchvision.transforms.v2 or a custom transform defined here. kwargs: Lower & upper bound respectively used for sampling the transform's parameter (following uniform distribution) when it's applied. """ weight: float = 1.0 type: str = "Identity" kwargs: dict[str, Any] = field(default_factory=dict) @dataclass class ImageTransformsConfig: """ These transforms are all using standard torchvision.transforms.v2 You can find out how these transformations affect images here: https://pytorch.org/vision/0.18/auto_examples/transforms/plot_transforms_illustrations.html We use a custom RandomSubsetApply container to sample them. """ # Set this flag to `true` to enable transforms during training enable: bool = False # This is the maximum number of transforms (sampled from these below) that will be applied to each frame. # It's an integer in the interval [1, number_of_available_transforms]. max_num_transforms: int = 3 # By default, transforms are applied in Torchvision's suggested order (shown below). # Set this to True to apply them in a random order. 
random_order: bool = False tfs: dict[str, ImageTransformConfig] = field( default_factory=lambda: { "brightness": ImageTransformConfig( weight=1.0, type="ColorJitter", kwargs={"brightness": (0.8, 1.2)}, ), "contrast": ImageTransformConfig( weight=1.0, type="ColorJitter", kwargs={"contrast": (0.8, 1.2)}, ), "saturation": ImageTransformConfig( weight=1.0, type="ColorJitter", kwargs={"saturation": (0.5, 1.5)}, ), "hue": ImageTransformConfig( weight=1.0, type="ColorJitter", kwargs={"hue": (-0.05, 0.05)}, ), "sharpness": ImageTransformConfig( weight=1.0, type="SharpnessJitter", kwargs={"sharpness": (0.5, 1.5)}, ), } ) def make_transform_from_config(cfg: ImageTransformConfig): if cfg.type == "Identity": return v2.Identity(**cfg.kwargs) elif cfg.type == "ColorJitter": return v2.ColorJitter(**cfg.kwargs) elif cfg.type == "SharpnessJitter": return SharpnessJitter(**cfg.kwargs) else: raise ValueError(f"Transform '{cfg.type}' is not valid.") class ImageTransforms(Transform): """A class to compose image transforms based on configuration.""" def __init__(self, cfg: ImageTransformsConfig) -> None: super().__init__() self._cfg = cfg self.weights = [] self.transforms = {} for tf_name, tf_cfg in cfg.tfs.items(): if tf_cfg.weight <= 0.0: continue self.transforms[tf_name] = make_transform_from_config(tf_cfg) self.weights.append(tf_cfg.weight) n_subset = min(len(self.transforms), cfg.max_num_transforms) if n_subset == 0 or not cfg.enable: self.tf = v2.Identity() else: self.tf = RandomSubsetApply( transforms=list(self.transforms.values()), p=self.weights, n_subset=n_subset, random_order=cfg.random_order, ) def forward(self, *inputs: Any) -> Any: return self.tf(*inputs)
lerobot/lerobot/common/datasets/transforms.py/0
{ "file_path": "lerobot/lerobot/common/datasets/transforms.py", "repo_id": "lerobot", "token_count": 3919 }
#!/usr/bin/env python # Copyright 2024 Tony Z. Zhao and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Action Chunking Transformer Policy As per Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware (https://arxiv.org/abs/2304.13705). The majority of changes here involve removing unused code, unifying naming, and adding helpful comments. """ import math from collections import deque from itertools import chain from typing import Callable import einops import numpy as np import torch import torch.nn.functional as F # noqa: N812 import torchvision from torch import Tensor, nn from torchvision.models._utils import IntermediateLayerGetter from torchvision.ops.misc import FrozenBatchNorm2d from lerobot.common.policies.act.configuration_act import ACTConfig from lerobot.common.policies.normalize import Normalize, Unnormalize from lerobot.common.policies.pretrained import PreTrainedPolicy class ACTPolicy(PreTrainedPolicy): """ Action Chunking Transformer Policy as per Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware (paper: https://arxiv.org/abs/2304.13705, code: https://github.com/tonyzhaozh/act) """ config_class = ACTConfig name = "act" def __init__( self, config: ACTConfig, dataset_stats: dict[str, dict[str, Tensor]] | None = None, ): """ Args: config: Policy configuration class instance or None, in which case the default instantiation of the configuration class is used. dataset_stats: Dataset statistics to be used for normalization. If not passed here, it is expected that they will be passed with a call to `load_state_dict` before the policy is used. """ super().__init__(config) config.validate_features() self.config = config self.normalize_inputs = Normalize(config.input_features, config.normalization_mapping, dataset_stats) self.normalize_targets = Normalize( config.output_features, config.normalization_mapping, dataset_stats ) self.unnormalize_outputs = Unnormalize( config.output_features, config.normalization_mapping, dataset_stats ) self.model = ACT(config) if config.temporal_ensemble_coeff is not None: self.temporal_ensembler = ACTTemporalEnsembler(config.temporal_ensemble_coeff, config.chunk_size) self.reset() def get_optim_params(self) -> dict: # TODO(aliberts, rcadene): As of now, lr_backbone == lr # Should we remove this and just `return self.parameters()`? return [ { "params": [ p for n, p in self.named_parameters() if not n.startswith("model.backbone") and p.requires_grad ] }, { "params": [ p for n, p in self.named_parameters() if n.startswith("model.backbone") and p.requires_grad ], "lr": self.config.optimizer_lr_backbone, }, ] def reset(self): """This should be called whenever the environment is reset.""" if self.config.temporal_ensemble_coeff is not None: self.temporal_ensembler.reset() else: self._action_queue = deque([], maxlen=self.config.n_action_steps) @torch.no_grad def select_action(self, batch: dict[str, Tensor]) -> Tensor: """Select a single action given environment observations. 
This method wraps `select_actions` in order to return one action at a time for execution in the environment. It works by managing the actions in a queue and only calling `select_actions` when the queue is empty. """ self.eval() batch = self.normalize_inputs(batch) if self.config.image_features: batch = dict(batch) # shallow copy so that adding a key doesn't modify the original batch["observation.images"] = torch.stack( [batch[key] for key in self.config.image_features], dim=-4 ) # If we are doing temporal ensembling, do online updates where we keep track of the number of actions # we are ensembling over. if self.config.temporal_ensemble_coeff is not None: actions = self.model(batch)[0] # (batch_size, chunk_size, action_dim) actions = self.unnormalize_outputs({"action": actions})["action"] action = self.temporal_ensembler.update(actions) return action # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by # querying the policy. if len(self._action_queue) == 0: actions = self.model(batch)[0][:, : self.config.n_action_steps] # TODO(rcadene): make _forward return output dictionary? actions = self.unnormalize_outputs({"action": actions})["action"] # `self.model.forward` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue # effectively has shape (n_action_steps, batch_size, *), hence the transpose. self._action_queue.extend(actions.transpose(0, 1)) return self._action_queue.popleft() def forward(self, batch: dict[str, Tensor]) -> dict[str, Tensor]: """Run the batch through the model and compute the loss for training or validation.""" batch = self.normalize_inputs(batch) if self.config.image_features: batch = dict(batch) # shallow copy so that adding a key doesn't modify the original batch["observation.images"] = torch.stack( [batch[key] for key in self.config.image_features], dim=-4 ) batch = self.normalize_targets(batch) actions_hat, (mu_hat, log_sigma_x2_hat) = self.model(batch) l1_loss = ( F.l1_loss(batch["action"], actions_hat, reduction="none") * ~batch["action_is_pad"].unsqueeze(-1) ).mean() loss_dict = {"l1_loss": l1_loss.item()} if self.config.use_vae: # Calculate Dₖₗ(latent_pdf || standard_normal). Note: After computing the KL-divergence for # each dimension independently, we sum over the latent dimension to get the total # KL-divergence per batch element, then take the mean over the batch. # (See App. B of https://arxiv.org/abs/1312.6114 for more details). mean_kld = ( (-0.5 * (1 + log_sigma_x2_hat - mu_hat.pow(2) - (log_sigma_x2_hat).exp())).sum(-1).mean() ) loss_dict["kld_loss"] = mean_kld.item() loss_dict["loss"] = l1_loss + mean_kld * self.config.kl_weight else: loss_dict["loss"] = l1_loss return loss_dict class ACTTemporalEnsembler: def __init__(self, temporal_ensemble_coeff: float, chunk_size: int) -> None: """Temporal ensembling as described in Algorithm 2 of https://arxiv.org/abs/2304.13705. The weights are calculated as wᵢ = exp(-temporal_ensemble_coeff * i) where w₀ is the oldest action. They are then normalized to sum to 1 by dividing by Σwᵢ. Here's some intuition around how the coefficient works: - Setting it to 0 uniformly weighs all actions. - Setting it positive gives more weight to older actions. - Setting it negative gives more weight to newer actions. NOTE: The default value for `temporal_ensemble_coeff` used by the original ACT work is 0.01. 
This results in older actions being weighed more highly than newer actions (the experiments documented in https://github.com/huggingface/lerobot/pull/319 hint at why highly weighing new actions might be detrimental: doing so aggressively may diminish the benefits of action chunking). Here we use an online method for computing the average rather than caching a history of actions in order to compute the average offline. For a simple 1D sequence it looks something like: ``` import torch seq = torch.linspace(8, 8.5, 100) print(seq) m = 0.01 exp_weights = torch.exp(-m * torch.arange(len(seq))) print(exp_weights) # Calculate offline avg = (exp_weights * seq).sum() / exp_weights.sum() print("offline", avg) # Calculate online for i, item in enumerate(seq): if i == 0: avg = item continue avg *= exp_weights[:i].sum() avg += item * exp_weights[i] avg /= exp_weights[:i+1].sum() print("online", avg) ``` """ self.chunk_size = chunk_size self.ensemble_weights = torch.exp(-temporal_ensemble_coeff * torch.arange(chunk_size)) self.ensemble_weights_cumsum = torch.cumsum(self.ensemble_weights, dim=0) self.reset() def reset(self): """Resets the online computation variables.""" self.ensembled_actions = None # (chunk_size,) count of how many actions are in the ensemble for each time step in the sequence. self.ensembled_actions_count = None def update(self, actions: Tensor) -> Tensor: """ Takes a (batch, chunk_size, action_dim) sequence of actions, update the temporal ensemble for all time steps, and pop/return the next batch of actions in the sequence. """ self.ensemble_weights = self.ensemble_weights.to(device=actions.device) self.ensemble_weights_cumsum = self.ensemble_weights_cumsum.to(device=actions.device) if self.ensembled_actions is None: # Initializes `self._ensembled_action` to the sequence of actions predicted during the first # time step of the episode. self.ensembled_actions = actions.clone() # Note: The last dimension is unsqueeze to make sure we can broadcast properly for tensor # operations later. self.ensembled_actions_count = torch.ones( (self.chunk_size, 1), dtype=torch.long, device=self.ensembled_actions.device ) else: # self.ensembled_actions will have shape (batch_size, chunk_size - 1, action_dim). Compute # the online update for those entries. self.ensembled_actions *= self.ensemble_weights_cumsum[self.ensembled_actions_count - 1] self.ensembled_actions += actions[:, :-1] * self.ensemble_weights[self.ensembled_actions_count] self.ensembled_actions /= self.ensemble_weights_cumsum[self.ensembled_actions_count] self.ensembled_actions_count = torch.clamp(self.ensembled_actions_count + 1, max=self.chunk_size) # The last action, which has no prior online average, needs to get concatenated onto the end. self.ensembled_actions = torch.cat([self.ensembled_actions, actions[:, -1:]], dim=1) self.ensembled_actions_count = torch.cat( [self.ensembled_actions_count, torch.ones_like(self.ensembled_actions_count[-1:])] ) # "Consume" the first action. action, self.ensembled_actions, self.ensembled_actions_count = ( self.ensembled_actions[:, 0], self.ensembled_actions[:, 1:], self.ensembled_actions_count[1:], ) return action class ACT(nn.Module): """Action Chunking Transformer: The underlying neural network for ACTPolicy. Note: In this code we use the terms `vae_encoder`, 'encoder', `decoder`. The meanings are as follows. 
- The `vae_encoder` is, as per the literature around variational auto-encoders (VAE), the part of the model that encodes the target data (a sequence of actions), and the condition (the robot joint-space). - A transformer with an `encoder` (not the VAE encoder) and `decoder` (not the VAE decoder) with cross-attention is used as the VAE decoder. For these terms, we drop the `vae_` prefix because we have an option to train this model without the variational objective (in which case we drop the `vae_encoder` altogether, and nothing about this model has anything to do with a VAE). Transformer Used alone for inference (acts as VAE decoder during training) ┌───────────────────────┐ │ Outputs │ │ ▲ │ │ ┌─────►┌───────┐ │ ┌──────┐ │ │ │Transf.│ │ │ │ │ ├─────►│decoder│ │ ┌────┴────┐ │ │ │ │ │ │ │ │ │ │ ┌───┴───┬─►│ │ │ │ VAE │ │ │ │ │ └───────┘ │ │ encoder │ │ │ │Transf.│ │ │ │ │ │ │encoder│ │ └───▲─────┘ │ │ │ │ │ │ │ │ └▲──▲─▲─┘ │ │ │ │ │ │ │ │ inputs └─────┼──┘ │ image emb. │ │ state emb. │ └───────────────────────┘ """ def __init__(self, config: ACTConfig): # BERT style VAE encoder with input tokens [cls, robot_state, *action_sequence]. # The cls token forms parameters of the latent's distribution (like this [*means, *log_variances]). super().__init__() self.config = config if self.config.use_vae: self.vae_encoder = ACTEncoder(config, is_vae_encoder=True) self.vae_encoder_cls_embed = nn.Embedding(1, config.dim_model) # Projection layer for joint-space configuration to hidden dimension. if self.config.robot_state_feature: self.vae_encoder_robot_state_input_proj = nn.Linear( self.config.robot_state_feature.shape[0], config.dim_model ) # Projection layer for action (joint-space target) to hidden dimension. self.vae_encoder_action_input_proj = nn.Linear( self.config.action_feature.shape[0], config.dim_model, ) # Projection layer from the VAE encoder's output to the latent distribution's parameter space. self.vae_encoder_latent_output_proj = nn.Linear(config.dim_model, config.latent_dim * 2) # Fixed sinusoidal positional embedding for the input to the VAE encoder. Unsqueeze for batch # dimension. num_input_token_encoder = 1 + config.chunk_size if self.config.robot_state_feature: num_input_token_encoder += 1 self.register_buffer( "vae_encoder_pos_enc", create_sinusoidal_pos_embedding(num_input_token_encoder, config.dim_model).unsqueeze(0), ) # Backbone for image feature extraction. if self.config.image_features: backbone_model = getattr(torchvision.models, config.vision_backbone)( replace_stride_with_dilation=[False, False, config.replace_final_stride_with_dilation], weights=config.pretrained_backbone_weights, norm_layer=FrozenBatchNorm2d, ) # Note: The assumption here is that we are using a ResNet model (and hence layer4 is the final # feature map). # Note: The forward method of this returns a dict: {"feature_map": output}. self.backbone = IntermediateLayerGetter(backbone_model, return_layers={"layer4": "feature_map"}) # Transformer (acts as VAE decoder when training with the variational objective). self.encoder = ACTEncoder(config) self.decoder = ACTDecoder(config) # Transformer encoder input projections. The tokens will be structured like # [latent, (robot_state), (env_state), (image_feature_map_pixels)]. 
if self.config.robot_state_feature: self.encoder_robot_state_input_proj = nn.Linear( self.config.robot_state_feature.shape[0], config.dim_model ) if self.config.env_state_feature: self.encoder_env_state_input_proj = nn.Linear( self.config.env_state_feature.shape[0], config.dim_model ) self.encoder_latent_input_proj = nn.Linear(config.latent_dim, config.dim_model) if self.config.image_features: self.encoder_img_feat_input_proj = nn.Conv2d( backbone_model.fc.in_features, config.dim_model, kernel_size=1 ) # Transformer encoder positional embeddings. n_1d_tokens = 1 # for the latent if self.config.robot_state_feature: n_1d_tokens += 1 if self.config.env_state_feature: n_1d_tokens += 1 self.encoder_1d_feature_pos_embed = nn.Embedding(n_1d_tokens, config.dim_model) if self.config.image_features: self.encoder_cam_feat_pos_embed = ACTSinusoidalPositionEmbedding2d(config.dim_model // 2) # Transformer decoder. # Learnable positional embedding for the transformer's decoder (in the style of DETR object queries). self.decoder_pos_embed = nn.Embedding(config.chunk_size, config.dim_model) # Final action regression head on the output of the transformer's decoder. self.action_head = nn.Linear(config.dim_model, self.config.action_feature.shape[0]) self._reset_parameters() def _reset_parameters(self): """Xavier-uniform initialization of the transformer parameters as in the original code.""" for p in chain(self.encoder.parameters(), self.decoder.parameters()): if p.dim() > 1: nn.init.xavier_uniform_(p) def forward(self, batch: dict[str, Tensor]) -> tuple[Tensor, tuple[Tensor, Tensor] | tuple[None, None]]: """A forward pass through the Action Chunking Transformer (with optional VAE encoder). `batch` should have the following structure: { [robot_state_feature] (optional): (B, state_dim) batch of robot states. [image_features]: (B, n_cameras, C, H, W) batch of images. AND/OR [env_state_feature]: (B, env_dim) batch of environment states. [action_feature] (optional, only if training with VAE): (B, chunk_size, action dim) batch of actions. } Returns: (B, chunk_size, action_dim) batch of action sequences Tuple containing the latent PDF's parameters (mean, log(σ²)) both as (B, L) tensors where L is the latent dimension. """ if self.config.use_vae and self.training: assert ( "action" in batch ), "actions must be provided when using the variational objective in training mode." batch_size = ( batch["observation.images"] if "observation.images" in batch else batch["observation.environment_state"] ).shape[0] # Prepare the latent for input to the transformer encoder. if self.config.use_vae and "action" in batch: # Prepare the input to the VAE encoder: [cls, *joint_space_configuration, *action_sequence]. cls_embed = einops.repeat( self.vae_encoder_cls_embed.weight, "1 d -> b 1 d", b=batch_size ) # (B, 1, D) if self.config.robot_state_feature: robot_state_embed = self.vae_encoder_robot_state_input_proj(batch["observation.state"]) robot_state_embed = robot_state_embed.unsqueeze(1) # (B, 1, D) action_embed = self.vae_encoder_action_input_proj(batch["action"]) # (B, S, D) if self.config.robot_state_feature: vae_encoder_input = [cls_embed, robot_state_embed, action_embed] # (B, S+2, D) else: vae_encoder_input = [cls_embed, action_embed] vae_encoder_input = torch.cat(vae_encoder_input, axis=1) # Prepare fixed positional embedding. # Note: detach() shouldn't be necessary but leaving it the same as the original code just in case. 
pos_embed = self.vae_encoder_pos_enc.clone().detach() # (1, S+2, D) # Prepare key padding mask for the transformer encoder. We have 1 or 2 extra tokens at the start of the # sequence depending whether we use the input states or not (cls and robot state) # False means not a padding token. cls_joint_is_pad = torch.full( (batch_size, 2 if self.config.robot_state_feature else 1), False, device=batch["observation.state"].device, ) key_padding_mask = torch.cat( [cls_joint_is_pad, batch["action_is_pad"]], axis=1 ) # (bs, seq+1 or 2) # Forward pass through VAE encoder to get the latent PDF parameters. cls_token_out = self.vae_encoder( vae_encoder_input.permute(1, 0, 2), pos_embed=pos_embed.permute(1, 0, 2), key_padding_mask=key_padding_mask, )[0] # select the class token, with shape (B, D) latent_pdf_params = self.vae_encoder_latent_output_proj(cls_token_out) mu = latent_pdf_params[:, : self.config.latent_dim] # This is 2log(sigma). Done this way to match the original implementation. log_sigma_x2 = latent_pdf_params[:, self.config.latent_dim :] # Sample the latent with the reparameterization trick. latent_sample = mu + log_sigma_x2.div(2).exp() * torch.randn_like(mu) else: # When not using the VAE encoder, we set the latent to be all zeros. mu = log_sigma_x2 = None # TODO(rcadene, alexander-soare): remove call to `.to` to speedup forward ; precompute and use buffer latent_sample = torch.zeros([batch_size, self.config.latent_dim], dtype=torch.float32).to( batch["observation.state"].device ) # Prepare transformer encoder inputs. encoder_in_tokens = [self.encoder_latent_input_proj(latent_sample)] encoder_in_pos_embed = list(self.encoder_1d_feature_pos_embed.weight.unsqueeze(1)) # Robot state token. if self.config.robot_state_feature: encoder_in_tokens.append(self.encoder_robot_state_input_proj(batch["observation.state"])) # Environment state token. if self.config.env_state_feature: encoder_in_tokens.append( self.encoder_env_state_input_proj(batch["observation.environment_state"]) ) # Camera observation features and positional embeddings. if self.config.image_features: all_cam_features = [] all_cam_pos_embeds = [] for cam_index in range(batch["observation.images"].shape[-4]): cam_features = self.backbone(batch["observation.images"][:, cam_index])["feature_map"] # TODO(rcadene, alexander-soare): remove call to `.to` to speedup forward ; precompute and use # buffer cam_pos_embed = self.encoder_cam_feat_pos_embed(cam_features).to(dtype=cam_features.dtype) cam_features = self.encoder_img_feat_input_proj(cam_features) # (B, C, h, w) all_cam_features.append(cam_features) all_cam_pos_embeds.append(cam_pos_embed) # Concatenate camera observation feature maps and positional embeddings along the width dimension, # and move to (sequence, batch, dim). all_cam_features = torch.cat(all_cam_features, axis=-1) encoder_in_tokens.extend(einops.rearrange(all_cam_features, "b c h w -> (h w) b c")) all_cam_pos_embeds = torch.cat(all_cam_pos_embeds, axis=-1) encoder_in_pos_embed.extend(einops.rearrange(all_cam_pos_embeds, "b c h w -> (h w) b c")) # Stack all tokens along the sequence dimension. encoder_in_tokens = torch.stack(encoder_in_tokens, axis=0) encoder_in_pos_embed = torch.stack(encoder_in_pos_embed, axis=0) # Forward pass through the transformer modules. 
encoder_out = self.encoder(encoder_in_tokens, pos_embed=encoder_in_pos_embed) # TODO(rcadene, alexander-soare): remove call to `device` ; precompute and use buffer decoder_in = torch.zeros( (self.config.chunk_size, batch_size, self.config.dim_model), dtype=encoder_in_pos_embed.dtype, device=encoder_in_pos_embed.device, ) decoder_out = self.decoder( decoder_in, encoder_out, encoder_pos_embed=encoder_in_pos_embed, decoder_pos_embed=self.decoder_pos_embed.weight.unsqueeze(1), ) # Move back to (B, S, C). decoder_out = decoder_out.transpose(0, 1) actions = self.action_head(decoder_out) return actions, (mu, log_sigma_x2) class ACTEncoder(nn.Module): """Convenience module for running multiple encoder layers, maybe followed by normalization.""" def __init__(self, config: ACTConfig, is_vae_encoder: bool = False): super().__init__() self.is_vae_encoder = is_vae_encoder num_layers = config.n_vae_encoder_layers if self.is_vae_encoder else config.n_encoder_layers self.layers = nn.ModuleList([ACTEncoderLayer(config) for _ in range(num_layers)]) self.norm = nn.LayerNorm(config.dim_model) if config.pre_norm else nn.Identity() def forward( self, x: Tensor, pos_embed: Tensor | None = None, key_padding_mask: Tensor | None = None ) -> Tensor: for layer in self.layers: x = layer(x, pos_embed=pos_embed, key_padding_mask=key_padding_mask) x = self.norm(x) return x class ACTEncoderLayer(nn.Module): def __init__(self, config: ACTConfig): super().__init__() self.self_attn = nn.MultiheadAttention(config.dim_model, config.n_heads, dropout=config.dropout) # Feed forward layers. self.linear1 = nn.Linear(config.dim_model, config.dim_feedforward) self.dropout = nn.Dropout(config.dropout) self.linear2 = nn.Linear(config.dim_feedforward, config.dim_model) self.norm1 = nn.LayerNorm(config.dim_model) self.norm2 = nn.LayerNorm(config.dim_model) self.dropout1 = nn.Dropout(config.dropout) self.dropout2 = nn.Dropout(config.dropout) self.activation = get_activation_fn(config.feedforward_activation) self.pre_norm = config.pre_norm def forward(self, x, pos_embed: Tensor | None = None, key_padding_mask: Tensor | None = None) -> Tensor: skip = x if self.pre_norm: x = self.norm1(x) q = k = x if pos_embed is None else x + pos_embed x = self.self_attn(q, k, value=x, key_padding_mask=key_padding_mask) x = x[0] # note: [0] to select just the output, not the attention weights x = skip + self.dropout1(x) if self.pre_norm: skip = x x = self.norm2(x) else: x = self.norm1(x) skip = x x = self.linear2(self.dropout(self.activation(self.linear1(x)))) x = skip + self.dropout2(x) if not self.pre_norm: x = self.norm2(x) return x class ACTDecoder(nn.Module): def __init__(self, config: ACTConfig): """Convenience module for running multiple decoder layers followed by normalization.""" super().__init__() self.layers = nn.ModuleList([ACTDecoderLayer(config) for _ in range(config.n_decoder_layers)]) self.norm = nn.LayerNorm(config.dim_model) def forward( self, x: Tensor, encoder_out: Tensor, decoder_pos_embed: Tensor | None = None, encoder_pos_embed: Tensor | None = None, ) -> Tensor: for layer in self.layers: x = layer( x, encoder_out, decoder_pos_embed=decoder_pos_embed, encoder_pos_embed=encoder_pos_embed ) if self.norm is not None: x = self.norm(x) return x class ACTDecoderLayer(nn.Module): def __init__(self, config: ACTConfig): super().__init__() self.self_attn = nn.MultiheadAttention(config.dim_model, config.n_heads, dropout=config.dropout) self.multihead_attn = nn.MultiheadAttention(config.dim_model, config.n_heads, dropout=config.dropout) # Feed 
forward layers. self.linear1 = nn.Linear(config.dim_model, config.dim_feedforward) self.dropout = nn.Dropout(config.dropout) self.linear2 = nn.Linear(config.dim_feedforward, config.dim_model) self.norm1 = nn.LayerNorm(config.dim_model) self.norm2 = nn.LayerNorm(config.dim_model) self.norm3 = nn.LayerNorm(config.dim_model) self.dropout1 = nn.Dropout(config.dropout) self.dropout2 = nn.Dropout(config.dropout) self.dropout3 = nn.Dropout(config.dropout) self.activation = get_activation_fn(config.feedforward_activation) self.pre_norm = config.pre_norm def maybe_add_pos_embed(self, tensor: Tensor, pos_embed: Tensor | None) -> Tensor: return tensor if pos_embed is None else tensor + pos_embed def forward( self, x: Tensor, encoder_out: Tensor, decoder_pos_embed: Tensor | None = None, encoder_pos_embed: Tensor | None = None, ) -> Tensor: """ Args: x: (Decoder Sequence, Batch, Channel) tensor of input tokens. encoder_out: (Encoder Sequence, B, C) output features from the last layer of the encoder we are cross-attending with. decoder_pos_embed: (ES, 1, C) positional embedding for keys (from the encoder). encoder_pos_embed: (DS, 1, C) Positional_embedding for the queries (from the decoder). Returns: (DS, B, C) tensor of decoder output features. """ skip = x if self.pre_norm: x = self.norm1(x) q = k = self.maybe_add_pos_embed(x, decoder_pos_embed) x = self.self_attn(q, k, value=x)[0] # select just the output, not the attention weights x = skip + self.dropout1(x) if self.pre_norm: skip = x x = self.norm2(x) else: x = self.norm1(x) skip = x x = self.multihead_attn( query=self.maybe_add_pos_embed(x, decoder_pos_embed), key=self.maybe_add_pos_embed(encoder_out, encoder_pos_embed), value=encoder_out, )[0] # select just the output, not the attention weights x = skip + self.dropout2(x) if self.pre_norm: skip = x x = self.norm3(x) else: x = self.norm2(x) skip = x x = self.linear2(self.dropout(self.activation(self.linear1(x)))) x = skip + self.dropout3(x) if not self.pre_norm: x = self.norm3(x) return x def create_sinusoidal_pos_embedding(num_positions: int, dimension: int) -> Tensor: """1D sinusoidal positional embeddings as in Attention is All You Need. Args: num_positions: Number of token positions required. Returns: (num_positions, dimension) position embeddings (the first dimension is the batch dimension). """ def get_position_angle_vec(position): return [position / np.power(10000, 2 * (hid_j // 2) / dimension) for hid_j in range(dimension)] sinusoid_table = np.array([get_position_angle_vec(pos_i) for pos_i in range(num_positions)]) sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2]) # dim 2i sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2]) # dim 2i+1 return torch.from_numpy(sinusoid_table).float() class ACTSinusoidalPositionEmbedding2d(nn.Module): """2D sinusoidal positional embeddings similar to what's presented in Attention Is All You Need. The variation is that the position indices are normalized in [0, 2π] (not quite: the lower bound is 1/H for the vertical direction, and 1/W for the horizontal direction. """ def __init__(self, dimension: int): """ Args: dimension: The desired dimension of the embeddings. """ super().__init__() self.dimension = dimension self._two_pi = 2 * math.pi self._eps = 1e-6 # Inverse "common ratio" for the geometric progression in sinusoid frequencies. self._temperature = 10000 def forward(self, x: Tensor) -> Tensor: """ Args: x: A (B, C, H, W) batch of 2D feature map to generate the embeddings for. 
        Returns:
            A (1, C, H, W) batch of corresponding sinusoidal positional embeddings.
        """
        not_mask = torch.ones_like(x[0, :1])  # (1, H, W)
        # Note: These are like range(1, H+1) and range(1, W+1) respectively, but in most implementations
        # they would be range(0, H) and range(0, W). Keeping it as is to match the original code.
        y_range = not_mask.cumsum(1, dtype=torch.float32)
        x_range = not_mask.cumsum(2, dtype=torch.float32)

        # "Normalize" the position index such that it ranges in [0, 2π].
        # Note: Adding epsilon on the denominator should not be needed as all values of y_range and x_range
        # are non-zero by construction. This is an artifact of the original code.
        y_range = y_range / (y_range[:, -1:, :] + self._eps) * self._two_pi
        x_range = x_range / (x_range[:, :, -1:] + self._eps) * self._two_pi

        inverse_frequency = self._temperature ** (
            2 * (torch.arange(self.dimension, dtype=torch.float32, device=x.device) // 2) / self.dimension
        )

        x_range = x_range.unsqueeze(-1) / inverse_frequency  # (1, H, W, 1)
        y_range = y_range.unsqueeze(-1) / inverse_frequency  # (1, H, W, 1)

        # Note: this stack then flatten operation results in interleaved sine and cosine terms.
        # pos_embed_x and pos_embed_y are (1, H, W, C // 2).
        pos_embed_x = torch.stack((x_range[..., 0::2].sin(), x_range[..., 1::2].cos()), dim=-1).flatten(3)
        pos_embed_y = torch.stack((y_range[..., 0::2].sin(), y_range[..., 1::2].cos()), dim=-1).flatten(3)
        pos_embed = torch.cat((pos_embed_y, pos_embed_x), dim=3).permute(0, 3, 1, 2)  # (1, C, H, W)

        return pos_embed


def get_activation_fn(activation: str) -> Callable:
    """Return an activation function given a string."""
    if activation == "relu":
        return F.relu
    if activation == "gelu":
        return F.gelu
    if activation == "glu":
        return F.glu
    raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.")
lerobot/lerobot/common/policies/act/modeling_act.py/0
{ "file_path": "lerobot/lerobot/common/policies/act/modeling_act.py", "repo_id": "lerobot", "token_count": 15980 }
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from torch import nn def populate_queues(queues, batch): for key in batch: # Ignore keys not in the queues already (leaving the responsibility to the caller to make sure the # queues have the keys they want). if key not in queues: continue if len(queues[key]) != queues[key].maxlen: # initialize by copying the first observation several times until the queue is full while len(queues[key]) != queues[key].maxlen: queues[key].append(batch[key]) else: # add latest observation to the queue queues[key].append(batch[key]) return queues def get_device_from_parameters(module: nn.Module) -> torch.device: """Get a module's device by checking one of its parameters. Note: assumes that all parameters have the same device """ return next(iter(module.parameters())).device def get_dtype_from_parameters(module: nn.Module) -> torch.dtype: """Get a module's parameter dtype by checking one of its parameters. Note: assumes that all parameters have the same dtype. """ return next(iter(module.parameters())).dtype def get_output_shape(module: nn.Module, input_shape: tuple) -> tuple: """ Calculates the output shape of a PyTorch module given an input shape. Args: module (nn.Module): a PyTorch module input_shape (tuple): A tuple representing the input shape, e.g., (batch_size, channels, height, width) Returns: tuple: The output shape of the module. """ dummy_input = torch.zeros(size=input_shape) with torch.inference_mode(): output = module(dummy_input) return tuple(output.shape)
lerobot/lerobot/common/policies/utils.py/0
{ "file_path": "lerobot/lerobot/common/policies/utils.py", "repo_id": "lerobot", "token_count": 791 }
"""Logic to calibrate a robot arm built with feetech motors""" # TODO(rcadene, aliberts): move this logic into the robot code when refactoring import time import numpy as np from lerobot.common.robot_devices.motors.feetech import ( CalibrationMode, TorqueMode, convert_degrees_to_steps, ) from lerobot.common.robot_devices.motors.utils import MotorsBus URL_TEMPLATE = ( "https://raw.githubusercontent.com/huggingface/lerobot/main/media/{robot}/{arm}_{position}.webp" ) # The following positions are provided in nominal degree range ]-180, +180[ # For more info on these constants, see comments in the code where they get used. ZERO_POSITION_DEGREE = 0 ROTATED_POSITION_DEGREE = 90 def assert_drive_mode(drive_mode): # `drive_mode` is in [0,1] with 0 means original rotation direction for the motor, and 1 means inverted. if not np.all(np.isin(drive_mode, [0, 1])): raise ValueError(f"`drive_mode` contains values other than 0 or 1: ({drive_mode})") def apply_drive_mode(position, drive_mode): assert_drive_mode(drive_mode) # Convert `drive_mode` from [0, 1] with 0 indicates original rotation direction and 1 inverted, # to [-1, 1] with 1 indicates original rotation direction and -1 inverted. signed_drive_mode = -(drive_mode * 2 - 1) position *= signed_drive_mode return position def move_until_block(arm, motor_name, positive_direction=True, while_move_hook=None): count = 0 while True: present_pos = arm.read("Present_Position", motor_name) if positive_direction: # Move +100 steps every time. Lower the steps to lower the speed at which the arm moves. arm.write("Goal_Position", present_pos + 100, motor_name) else: arm.write("Goal_Position", present_pos - 100, motor_name) if while_move_hook is not None: while_move_hook() present_pos = arm.read("Present_Position", motor_name).item() present_speed = arm.read("Present_Speed", motor_name).item() present_current = arm.read("Present_Current", motor_name).item() # present_load = arm.read("Present_Load", motor_name).item() # present_voltage = arm.read("Present_Voltage", motor_name).item() # present_temperature = arm.read("Present_Temperature", motor_name).item() # print(f"{present_pos=}") # print(f"{present_speed=}") # print(f"{present_current=}") # print(f"{present_load=}") # print(f"{present_voltage=}") # print(f"{present_temperature=}") if present_speed == 0 and present_current > 40: count += 1 if count > 100 or present_current > 300: return present_pos else: count = 0 def move_to_calibrate( arm, motor_name, invert_drive_mode=False, positive_first=True, in_between_move_hook=None, while_move_hook=None, ): initial_pos = arm.read("Present_Position", motor_name) if positive_first: p_present_pos = move_until_block( arm, motor_name, positive_direction=True, while_move_hook=while_move_hook ) else: n_present_pos = move_until_block( arm, motor_name, positive_direction=False, while_move_hook=while_move_hook ) if in_between_move_hook is not None: in_between_move_hook() if positive_first: n_present_pos = move_until_block( arm, motor_name, positive_direction=False, while_move_hook=while_move_hook ) else: p_present_pos = move_until_block( arm, motor_name, positive_direction=True, while_move_hook=while_move_hook ) zero_pos = (n_present_pos + p_present_pos) / 2 calib_data = { "initial_pos": initial_pos, "homing_offset": zero_pos if invert_drive_mode else -zero_pos, "invert_drive_mode": invert_drive_mode, "drive_mode": -1 if invert_drive_mode else 0, "zero_pos": zero_pos, "start_pos": n_present_pos if invert_drive_mode else p_present_pos, "end_pos": p_present_pos if 
invert_drive_mode else n_present_pos, } return calib_data def apply_offset(calib, offset): calib["zero_pos"] += offset if calib["drive_mode"]: calib["homing_offset"] += offset else: calib["homing_offset"] -= offset return calib def run_arm_auto_calibration(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str): if robot_type == "so100": return run_arm_auto_calibration_so100(arm, robot_type, arm_name, arm_type) elif robot_type == "moss": return run_arm_auto_calibration_moss(arm, robot_type, arm_name, arm_type) else: raise ValueError(robot_type) def run_arm_auto_calibration_so100(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str): """All the offsets and magic numbers are hand tuned, and are unique to SO-100 follower arms""" if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any(): raise ValueError("To run calibration, the torque must be disabled on all motors.") if not (robot_type == "so100" and arm_type == "follower"): raise NotImplementedError("Auto calibration only supports the follower of so100 arms for now.") print(f"\nRunning calibration of {robot_type} {arm_name} {arm_type}...") print("\nMove arm to initial position") print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="initial")) input("Press Enter to continue...") # Lower the acceleration of the motors (in [0,254]) initial_acceleration = arm.read("Acceleration") arm.write("Lock", 0) arm.write("Acceleration", 10) time.sleep(1) arm.write("Torque_Enable", TorqueMode.ENABLED.value) print(f'{arm.read("Present_Position", "elbow_flex")=}') calib = {} init_wf_pos = arm.read("Present_Position", "wrist_flex") init_sl_pos = arm.read("Present_Position", "shoulder_lift") init_ef_pos = arm.read("Present_Position", "elbow_flex") arm.write("Goal_Position", init_wf_pos - 800, "wrist_flex") arm.write("Goal_Position", init_sl_pos + 150 + 1024, "shoulder_lift") arm.write("Goal_Position", init_ef_pos - 2048, "elbow_flex") time.sleep(2) print("Calibrate shoulder_pan") calib["shoulder_pan"] = move_to_calibrate(arm, "shoulder_pan") arm.write("Goal_Position", calib["shoulder_pan"]["zero_pos"], "shoulder_pan") time.sleep(1) print("Calibrate gripper") calib["gripper"] = move_to_calibrate(arm, "gripper", invert_drive_mode=True) time.sleep(1) print("Calibrate wrist_flex") calib["wrist_flex"] = move_to_calibrate(arm, "wrist_flex") calib["wrist_flex"] = apply_offset(calib["wrist_flex"], offset=80) def in_between_move_hook(): nonlocal arm, calib time.sleep(2) ef_pos = arm.read("Present_Position", "elbow_flex") sl_pos = arm.read("Present_Position", "shoulder_lift") arm.write("Goal_Position", ef_pos + 1024, "elbow_flex") arm.write("Goal_Position", sl_pos - 1024, "shoulder_lift") time.sleep(2) print("Calibrate elbow_flex") calib["elbow_flex"] = move_to_calibrate( arm, "elbow_flex", positive_first=False, in_between_move_hook=in_between_move_hook ) calib["elbow_flex"] = apply_offset(calib["elbow_flex"], offset=80 - 1024) arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] + 1024 + 512, "elbow_flex") time.sleep(1) def in_between_move_hook(): nonlocal arm, calib arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"], "elbow_flex") print("Calibrate shoulder_lift") calib["shoulder_lift"] = move_to_calibrate( arm, "shoulder_lift", invert_drive_mode=True, positive_first=False, in_between_move_hook=in_between_move_hook, ) # add an 30 steps as offset to align with body calib["shoulder_lift"] = apply_offset(calib["shoulder_lift"], offset=1024 - 50) def while_move_hook(): nonlocal arm, calib positions = { 
"shoulder_lift": round(calib["shoulder_lift"]["zero_pos"] - 1600), "elbow_flex": round(calib["elbow_flex"]["zero_pos"] + 1700), "wrist_flex": round(calib["wrist_flex"]["zero_pos"] + 800), "gripper": round(calib["gripper"]["end_pos"]), } arm.write("Goal_Position", list(positions.values()), list(positions.keys())) arm.write("Goal_Position", round(calib["shoulder_lift"]["zero_pos"] - 1600), "shoulder_lift") time.sleep(2) arm.write("Goal_Position", round(calib["elbow_flex"]["zero_pos"] + 1700), "elbow_flex") time.sleep(2) arm.write("Goal_Position", round(calib["wrist_flex"]["zero_pos"] + 800), "wrist_flex") time.sleep(2) arm.write("Goal_Position", round(calib["gripper"]["end_pos"]), "gripper") time.sleep(2) print("Calibrate wrist_roll") calib["wrist_roll"] = move_to_calibrate( arm, "wrist_roll", invert_drive_mode=True, positive_first=False, while_move_hook=while_move_hook ) arm.write("Goal_Position", calib["wrist_roll"]["zero_pos"], "wrist_roll") time.sleep(1) arm.write("Goal_Position", calib["gripper"]["start_pos"], "gripper") time.sleep(1) arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"], "wrist_flex") time.sleep(1) arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] + 2048, "elbow_flex") arm.write("Goal_Position", calib["shoulder_lift"]["zero_pos"] - 2048, "shoulder_lift") time.sleep(1) arm.write("Goal_Position", calib["shoulder_pan"]["zero_pos"], "shoulder_pan") time.sleep(1) calib_modes = [] for name in arm.motor_names: if name == "gripper": calib_modes.append(CalibrationMode.LINEAR.name) else: calib_modes.append(CalibrationMode.DEGREE.name) calib_dict = { "homing_offset": [calib[name]["homing_offset"] for name in arm.motor_names], "drive_mode": [calib[name]["drive_mode"] for name in arm.motor_names], "start_pos": [calib[name]["start_pos"] for name in arm.motor_names], "end_pos": [calib[name]["end_pos"] for name in arm.motor_names], "calib_mode": calib_modes, "motor_names": arm.motor_names, } # Re-enable original accerlation arm.write("Lock", 0) arm.write("Acceleration", initial_acceleration) time.sleep(1) return calib_dict def run_arm_auto_calibration_moss(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str): """All the offsets and magic numbers are hand tuned, and are unique to SO-100 follower arms""" if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any(): raise ValueError("To run calibration, the torque must be disabled on all motors.") if not (robot_type == "moss" and arm_type == "follower"): raise NotImplementedError("Auto calibration only supports the follower of moss arms for now.") print(f"\nRunning calibration of {robot_type} {arm_name} {arm_type}...") print("\nMove arm to initial position") print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="initial")) input("Press Enter to continue...") # Lower the acceleration of the motors (in [0,254]) initial_acceleration = arm.read("Acceleration") arm.write("Lock", 0) arm.write("Acceleration", 10) time.sleep(1) arm.write("Torque_Enable", TorqueMode.ENABLED.value) sl_pos = arm.read("Present_Position", "shoulder_lift") arm.write("Goal_Position", sl_pos - 1024 - 450, "shoulder_lift") ef_pos = arm.read("Present_Position", "elbow_flex") arm.write("Goal_Position", ef_pos + 1024 + 450, "elbow_flex") time.sleep(2) calib = {} print("Calibrate shoulder_pan") calib["shoulder_pan"] = move_to_calibrate(arm, "shoulder_pan") arm.write("Goal_Position", calib["shoulder_pan"]["zero_pos"], "shoulder_pan") time.sleep(1) print("Calibrate gripper") calib["gripper"] = move_to_calibrate(arm, 
"gripper", invert_drive_mode=True) time.sleep(1) print("Calibrate wrist_flex") calib["wrist_flex"] = move_to_calibrate(arm, "wrist_flex", invert_drive_mode=True) calib["wrist_flex"] = apply_offset(calib["wrist_flex"], offset=-210 + 1024) wr_pos = arm.read("Present_Position", "wrist_roll") arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex") time.sleep(1) arm.write("Goal_Position", wr_pos - 1024, "wrist_roll") time.sleep(1) arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 2048, "wrist_flex") time.sleep(1) arm.write("Goal_Position", calib["gripper"]["end_pos"], "gripper") time.sleep(1) print("Calibrate wrist_roll") calib["wrist_roll"] = move_to_calibrate(arm, "wrist_roll", invert_drive_mode=True) calib["wrist_roll"] = apply_offset(calib["wrist_roll"], offset=790) arm.write("Goal_Position", calib["wrist_roll"]["zero_pos"] - 1024, "wrist_roll") arm.write("Goal_Position", calib["gripper"]["start_pos"], "gripper") arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex") time.sleep(1) arm.write("Goal_Position", calib["wrist_roll"]["zero_pos"], "wrist_roll") arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 2048, "wrist_flex") def in_between_move_elbow_flex_hook(): nonlocal arm, calib arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"], "wrist_flex") print("Calibrate elbow_flex") calib["elbow_flex"] = move_to_calibrate( arm, "elbow_flex", invert_drive_mode=True, in_between_move_hook=in_between_move_elbow_flex_hook, ) arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex") def in_between_move_shoulder_lift_hook(): nonlocal arm, calib sl = arm.read("Present_Position", "shoulder_lift") arm.write("Goal_Position", sl - 1500, "shoulder_lift") time.sleep(1) arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] + 1536, "elbow_flex") time.sleep(1) arm.write("Goal_Position", calib["wrist_flex"]["start_pos"], "wrist_flex") time.sleep(1) print("Calibrate shoulder_lift") calib["shoulder_lift"] = move_to_calibrate( arm, "shoulder_lift", in_between_move_hook=in_between_move_shoulder_lift_hook ) calib["shoulder_lift"] = apply_offset(calib["shoulder_lift"], offset=-1024) arm.write("Goal_Position", calib["wrist_flex"]["zero_pos"] - 1024, "wrist_flex") time.sleep(1) arm.write("Goal_Position", calib["shoulder_lift"]["zero_pos"] + 2048, "shoulder_lift") arm.write("Goal_Position", calib["elbow_flex"]["zero_pos"] - 1024 - 400, "elbow_flex") time.sleep(2) calib_modes = [] for name in arm.motor_names: if name == "gripper": calib_modes.append(CalibrationMode.LINEAR.name) else: calib_modes.append(CalibrationMode.DEGREE.name) calib_dict = { "homing_offset": [calib[name]["homing_offset"] for name in arm.motor_names], "drive_mode": [calib[name]["drive_mode"] for name in arm.motor_names], "start_pos": [calib[name]["start_pos"] for name in arm.motor_names], "end_pos": [calib[name]["end_pos"] for name in arm.motor_names], "calib_mode": calib_modes, "motor_names": arm.motor_names, } # Re-enable original accerlation arm.write("Lock", 0) arm.write("Acceleration", initial_acceleration) time.sleep(1) return calib_dict def run_arm_manual_calibration(arm: MotorsBus, robot_type: str, arm_name: str, arm_type: str): """This function ensures that a neural network trained on data collected on a given robot can work on another robot. For instance before calibration, setting a same goal position for each motor of two different robots will get two very different positions. 
    But after calibration, the two robots will move to the same position. To this end, this function computes
    the homing offset and the drive mode for each motor of a given robot.

    Homing offset is used to shift the motor position to a ]-2048, +2048[ nominal range (when the motor uses 2048 steps
    to complete half a turn). This range is set around an arbitrary "zero position" corresponding to all motor positions
    being 0. During the calibration process, you will need to manually move the robot to this "zero position".

    Drive mode is used to invert the rotation direction of the motor. This is useful when some motors have been assembled
    in the opposite orientation for some robots. During the calibration process, you will need to manually move the robot
    to the "rotated position".

    After calibration, the homing offsets and drive modes are stored in a cache.

    Example of usage:
    ```python
    run_arm_calibration(arm, "so100", "left", "follower")
    ```
    """
    if (arm.read("Torque_Enable") != TorqueMode.DISABLED.value).any():
        raise ValueError("To run calibration, the torque must be disabled on all motors.")

    print(f"\nRunning calibration of {robot_type} {arm_name} {arm_type}...")

    print("\nMove arm to zero position")
    print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="zero"))
    input("Press Enter to continue...")

    # We arbitrarily chose our zero target position to be a straight horizontal position with gripper upwards and closed.
    # It is easy to identify and all motors are in a "quarter turn" position. Once calibration is done, this position will
    # correspond to every motor angle being 0. If you set all 0 as Goal Position, the arm will move to this position.
    zero_target_pos = convert_degrees_to_steps(ZERO_POSITION_DEGREE, arm.motor_models)

    # Compute homing offset so that `present_position + homing_offset ~= target_position`.
    zero_pos = arm.read("Present_Position")
    homing_offset = zero_target_pos - zero_pos

    # The rotated target position corresponds to a rotation of a quarter turn from the zero position.
    # This allows us to identify the rotation direction of each motor.
    # For instance, if the motor rotates 90 degrees, and its value is -90 after applying the homing offset, then we know
    # its rotation direction is inverted. However, for the calibration to be successful, every arm needs to follow the
    # same target position. Sometimes, there is only one possible rotation direction. For instance, if the gripper is
    # closed, there is only one direction which corresponds to opening the gripper. When the rotation direction is
    # ambiguous, we arbitrarily rotate clockwise from the point of view of the previous motor in the kinematic chain.
    print("\nMove arm to rotated target position")
    print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="rotated"))
    input("Press Enter to continue...")

    rotated_target_pos = convert_degrees_to_steps(ROTATED_POSITION_DEGREE, arm.motor_models)

    # Find drive mode by rotating each motor by a quarter of a turn.
    # Drive mode indicates if the motor rotation direction should be inverted (=1) or not (=0).
rotated_pos = arm.read("Present_Position") drive_mode = (rotated_pos < zero_pos).astype(np.int32) # Re-compute homing offset to take into account drive mode rotated_drived_pos = apply_drive_mode(rotated_pos, drive_mode) homing_offset = rotated_target_pos - rotated_drived_pos print("\nMove arm to rest position") print("See: " + URL_TEMPLATE.format(robot=robot_type, arm=arm_type, position="rest")) input("Press Enter to continue...") print() # Joints with rotational motions are expressed in degrees in nominal range of [-180, 180] calib_modes = [] for name in arm.motor_names: if name == "gripper": calib_modes.append(CalibrationMode.LINEAR.name) else: calib_modes.append(CalibrationMode.DEGREE.name) calib_dict = { "homing_offset": homing_offset.tolist(), "drive_mode": drive_mode.tolist(), "start_pos": zero_pos.tolist(), "end_pos": rotated_pos.tolist(), "calib_mode": calib_modes, "motor_names": arm.motor_names, } return calib_dict
lerobot/lerobot/common/robot_devices/robots/feetech_calibration.py/0
{ "file_path": "lerobot/lerobot/common/robot_devices/robots/feetech_calibration.py", "repo_id": "lerobot", "token_count": 7917 }
""" This script configure a single motor at a time to a given ID and baudrate. Example of usage: ```bash python lerobot/scripts/configure_motor.py \ --port /dev/tty.usbmodem585A0080521 \ --brand feetech \ --model sts3215 \ --baudrate 1000000 \ --ID 1 ``` """ import argparse import time def get_motor_bus_cls(brand: str) -> tuple: if brand == "feetech": from lerobot.common.robot_devices.motors.configs import FeetechMotorsBusConfig from lerobot.common.robot_devices.motors.feetech import ( MODEL_BAUDRATE_TABLE, SCS_SERIES_BAUDRATE_TABLE, FeetechMotorsBus, ) return FeetechMotorsBusConfig, FeetechMotorsBus, MODEL_BAUDRATE_TABLE, SCS_SERIES_BAUDRATE_TABLE elif brand == "dynamixel": from lerobot.common.robot_devices.motors.configs import DynamixelMotorsBusConfig from lerobot.common.robot_devices.motors.dynamixel import ( MODEL_BAUDRATE_TABLE, X_SERIES_BAUDRATE_TABLE, DynamixelMotorsBus, ) return DynamixelMotorsBusConfig, DynamixelMotorsBus, MODEL_BAUDRATE_TABLE, X_SERIES_BAUDRATE_TABLE else: raise ValueError( f"Currently we do not support this motor brand: {brand}. We currently support feetech and dynamixel motors." ) def configure_motor(port, brand, model, motor_idx_des, baudrate_des): motor_bus_config_cls, motor_bus_cls, model_baudrate_table, series_baudrate_table = get_motor_bus_cls( brand ) # Check if the provided model exists in the model_baud_rate_table if model not in model_baudrate_table: raise ValueError( f"Invalid model '{model}' for brand '{brand}'. Supported models: {list(model_baudrate_table.keys())}" ) # Setup motor names, indices, and models motor_name = "motor" motor_index_arbitrary = motor_idx_des # Use the motor ID passed via argument motor_model = model # Use the motor model passed via argument config = motor_bus_config_cls(port=port, motors={motor_name: (motor_index_arbitrary, motor_model)}) # Initialize the MotorBus with the correct port and motor configurations motor_bus = motor_bus_cls(config=config) # Try to connect to the motor bus and handle any connection-specific errors try: motor_bus.connect() print(f"Connected on port {motor_bus.port}") except OSError as e: print(f"Error occurred when connecting to the motor bus: {e}") return # Motor bus is connected, proceed with the rest of the operations try: print("Scanning all baudrates and motor indices") all_baudrates = set(series_baudrate_table.values()) motor_index = -1 # Set the motor index to an out-of-range value. for baudrate in all_baudrates: motor_bus.set_bus_baudrate(baudrate) present_ids = motor_bus.find_motor_indices(list(range(1, 10))) if len(present_ids) > 1: raise ValueError( "Error: More than one motor ID detected. This script is designed to only handle one motor at a time. Please disconnect all but one motor." ) if len(present_ids) == 1: if motor_index != -1: raise ValueError( "Error: More than one motor ID detected. This script is designed to only handle one motor at a time. Please disconnect all but one motor." ) motor_index = present_ids[0] break if motor_index == -1: raise ValueError("No motors detected. 
Please ensure you have one motor connected.") print(f"Motor index found at: {motor_index}") if brand == "feetech": # Allows ID and BAUDRATE to be written in memory motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "Lock", 0) if baudrate != baudrate_des: print(f"Setting its baudrate to {baudrate_des}") baudrate_idx = list(series_baudrate_table.values()).index(baudrate_des) # The write can fail, so we allow retries motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "Baud_Rate", baudrate_idx) time.sleep(0.5) motor_bus.set_bus_baudrate(baudrate_des) present_baudrate_idx = motor_bus.read_with_motor_ids( motor_bus.motor_models, motor_index, "Baud_Rate", num_retry=2 ) if present_baudrate_idx != baudrate_idx: raise OSError("Failed to write baudrate.") print(f"Setting its index to desired index {motor_idx_des}") if brand == "feetech": motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "Lock", 0) motor_bus.write_with_motor_ids(motor_bus.motor_models, motor_index, "ID", motor_idx_des) present_idx = motor_bus.read_with_motor_ids(motor_bus.motor_models, motor_idx_des, "ID", num_retry=2) if present_idx != motor_idx_des: raise OSError("Failed to write index.") if brand == "feetech": # Set Maximum_Acceleration to 254 to speedup acceleration and deceleration of # the motors. Note: this configuration is not in the official STS3215 Memory Table motor_bus.write("Lock", 0) motor_bus.write("Maximum_Acceleration", 254) motor_bus.write("Goal_Position", 2048) time.sleep(4) print("Present Position", motor_bus.read("Present_Position")) motor_bus.write("Offset", 0) time.sleep(4) print("Offset", motor_bus.read("Offset")) except Exception as e: print(f"Error occurred during motor configuration: {e}") finally: motor_bus.disconnect() print("Disconnected from motor bus.") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--port", type=str, required=True, help="Motors bus port (e.g. dynamixel,feetech)") parser.add_argument("--brand", type=str, required=True, help="Motor brand (e.g. dynamixel,feetech)") parser.add_argument("--model", type=str, required=True, help="Motor model (e.g. xl330-m077,sts3215)") parser.add_argument("--ID", type=int, required=True, help="Desired ID of the current motor (e.g. 1,2,3)") parser.add_argument( "--baudrate", type=int, default=1000000, help="Desired baudrate for the motor (default: 1000000)" ) args = parser.parse_args() configure_motor(args.port, args.brand, args.model, args.ID, args.baudrate)
lerobot/lerobot/scripts/configure_motor.py/0
{ "file_path": "lerobot/lerobot/scripts/configure_motor.py", "repo_id": "lerobot", "token_count": 2848 }
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import logging from copy import deepcopy from itertools import chain from pathlib import Path import einops import pytest import torch from datasets import Dataset from huggingface_hub import HfApi from safetensors.torch import load_file import lerobot from lerobot.common.datasets.compute_stats import ( aggregate_stats, compute_stats, get_stats_einops_patterns, ) from lerobot.common.datasets.factory import make_dataset from lerobot.common.datasets.lerobot_dataset import ( LeRobotDataset, MultiLeRobotDataset, ) from lerobot.common.datasets.utils import ( create_branch, flatten_dict, hf_transform_to_torch, unflatten_dict, ) from lerobot.common.envs.factory import make_env_config from lerobot.common.policies.factory import make_policy_config from lerobot.common.robot_devices.robots.utils import make_robot from lerobot.common.utils.utils import seeded_context from lerobot.configs.default import DatasetConfig from lerobot.configs.train import TrainPipelineConfig from tests.fixtures.constants import DUMMY_REPO_ID from tests.utils import DEVICE, require_x86_64_kernel def test_same_attributes_defined(lerobot_dataset_factory, tmp_path): """ Instantiate a LeRobotDataset both ways with '__init__()' and 'create()' and verify that instantiated objects have the same sets of attributes defined. 
""" # Instantiate both ways robot = make_robot("koch", mock=True) root_create = tmp_path / "create" dataset_create = LeRobotDataset.create(repo_id=DUMMY_REPO_ID, fps=30, robot=robot, root=root_create) root_init = tmp_path / "init" dataset_init = lerobot_dataset_factory(root=root_init) # Access the '_hub_version' cached_property in both instances to force its creation _ = dataset_init.meta._hub_version _ = dataset_create.meta._hub_version init_attr = set(vars(dataset_init).keys()) create_attr = set(vars(dataset_create).keys()) assert init_attr == create_attr def test_dataset_initialization(lerobot_dataset_factory, tmp_path): kwargs = { "repo_id": DUMMY_REPO_ID, "total_episodes": 10, "total_frames": 400, "episodes": [2, 5, 6], } dataset = lerobot_dataset_factory(root=tmp_path, **kwargs) assert dataset.repo_id == kwargs["repo_id"] assert dataset.meta.total_episodes == kwargs["total_episodes"] assert dataset.meta.total_frames == kwargs["total_frames"] assert dataset.episodes == kwargs["episodes"] assert dataset.num_episodes == len(kwargs["episodes"]) assert dataset.num_frames == len(dataset) # TODO(aliberts): # - [ ] test various attributes & state from init and create # - [ ] test init with episodes and check num_frames # - [ ] test add_frame # - [ ] test add_episode # - [ ] test consolidate # - [ ] test push_to_hub # - [ ] test smaller methods @pytest.mark.parametrize( "env_name, repo_id, policy_name", # Single dataset lerobot.env_dataset_policy_triplets, # Multi-dataset # TODO after fix multidataset # + [("aloha", ["lerobot/aloha_sim_insertion_human", "lerobot/aloha_sim_transfer_cube_human"], "act")], ) def test_factory(env_name, repo_id, policy_name): """ Tests that: - we can create a dataset with the factory. - for a commonly used set of data keys, the data dimensions are correct. """ cfg = TrainPipelineConfig( # TODO(rcadene, aliberts): remove dataset download dataset=DatasetConfig(repo_id=repo_id, episodes=[0]), env=make_env_config(env_name), policy=make_policy_config(policy_name), device=DEVICE, ) dataset = make_dataset(cfg) delta_timestamps = dataset.delta_timestamps camera_keys = dataset.meta.camera_keys item = dataset[0] keys_ndim_required = [ ("action", 1, True), ("episode_index", 0, True), ("frame_index", 0, True), ("timestamp", 0, True), # TODO(rcadene): should we rename it agent_pos? ("observation.state", 1, True), ("next.reward", 0, False), ("next.done", 0, False), ] # test number of dimensions for key, ndim, required in keys_ndim_required: if key not in item: if required: assert key in item, f"{key}" else: logging.warning(f'Missing key in dataset: "{key}" not in {dataset}.') continue if delta_timestamps is not None and key in delta_timestamps: assert item[key].ndim == ndim + 1, f"{key}" assert item[key].shape[0] == len(delta_timestamps[key]), f"{key}" else: assert item[key].ndim == ndim, f"{key}" if key in camera_keys: assert item[key].dtype == torch.float32, f"{key}" # TODO(rcadene): we assume for now that image normalization takes place in the model assert item[key].max() <= 1.0, f"{key}" assert item[key].min() >= 0.0, f"{key}" if delta_timestamps is not None and key in delta_timestamps: # test t,c,h,w assert item[key].shape[1] == 3, f"{key}" else: # test c,h,w assert item[key].shape[0] == 3, f"{key}" if delta_timestamps is not None: # test missing keys in delta_timestamps for key in delta_timestamps: assert key in item, f"{key}" # TODO(alexander-soare): If you're hunting for savings on testing time, this takes about 5 seconds. 
@pytest.mark.skip("TODO after fix multidataset") def test_multidataset_frames(): """Check that all dataset frames are incorporated.""" # Note: use the image variants of the dataset to make the test approx 3x faster. # Note: We really do need three repo_ids here as at some point this caught an issue with the chaining # logic that wouldn't be caught with two repo IDs. repo_ids = [ "lerobot/aloha_sim_insertion_human_image", "lerobot/aloha_sim_transfer_cube_human_image", "lerobot/aloha_sim_insertion_scripted_image", ] sub_datasets = [LeRobotDataset(repo_id) for repo_id in repo_ids] dataset = MultiLeRobotDataset(repo_ids) assert len(dataset) == sum(len(d) for d in sub_datasets) assert dataset.num_frames == sum(d.num_frames for d in sub_datasets) assert dataset.num_episodes == sum(d.num_episodes for d in sub_datasets) # Run through all items of the LeRobotDatasets in parallel with the items of the MultiLerobotDataset and # check they match. expected_dataset_indices = [] for i, sub_dataset in enumerate(sub_datasets): expected_dataset_indices.extend([i] * len(sub_dataset)) for expected_dataset_index, sub_dataset_item, dataset_item in zip( expected_dataset_indices, chain(*sub_datasets), dataset, strict=True ): dataset_index = dataset_item.pop("dataset_index") assert dataset_index == expected_dataset_index assert sub_dataset_item.keys() == dataset_item.keys() for k in sub_dataset_item: assert torch.equal(sub_dataset_item[k], dataset_item[k]) # TODO(aliberts, rcadene): Refactor and move this to a tests/test_compute_stats.py def test_compute_stats_on_xarm(): """Check that the statistics are computed correctly according to the stats_patterns property. We compare with taking a straight min, mean, max, std of all the data in one pass (which we can do because we are working with a small dataset). """ # TODO(rcadene, aliberts): remove dataset download dataset = LeRobotDataset("lerobot/xarm_lift_medium", episodes=[0]) # reduce size of dataset sample on which stats compute is tested to 10 frames dataset.hf_dataset = dataset.hf_dataset.select(range(10)) # Note: we set the batch size to be smaller than the whole dataset to make sure we are testing batched # computation of the statistics. While doing this, we also make sure it works when we don't divide the # dataset into even batches. 
computed_stats = compute_stats(dataset, batch_size=int(len(dataset) * 0.25), num_workers=0) # get einops patterns to aggregate batches and compute statistics stats_patterns = get_stats_einops_patterns(dataset) # get all frames from the dataset in the same dtype and range as during compute_stats dataloader = torch.utils.data.DataLoader( dataset, num_workers=0, batch_size=len(dataset), shuffle=False, ) full_batch = next(iter(dataloader)) # compute stats based on all frames from the dataset without any batching expected_stats = {} for k, pattern in stats_patterns.items(): full_batch[k] = full_batch[k].float() expected_stats[k] = {} expected_stats[k]["mean"] = einops.reduce(full_batch[k], pattern, "mean") expected_stats[k]["std"] = torch.sqrt( einops.reduce((full_batch[k] - expected_stats[k]["mean"]) ** 2, pattern, "mean") ) expected_stats[k]["min"] = einops.reduce(full_batch[k], pattern, "min") expected_stats[k]["max"] = einops.reduce(full_batch[k], pattern, "max") # test computed stats match expected stats for k in stats_patterns: assert torch.allclose(computed_stats[k]["mean"], expected_stats[k]["mean"]) assert torch.allclose(computed_stats[k]["std"], expected_stats[k]["std"]) assert torch.allclose(computed_stats[k]["min"], expected_stats[k]["min"]) assert torch.allclose(computed_stats[k]["max"], expected_stats[k]["max"]) # load stats used during training which are expected to match the ones returned by computed_stats loaded_stats = dataset.meta.stats # noqa: F841 # TODO(rcadene): we can't test this because expected_stats is computed on a subset # # test loaded stats match expected stats # for k in stats_patterns: # assert torch.allclose(loaded_stats[k]["mean"], expected_stats[k]["mean"]) # assert torch.allclose(loaded_stats[k]["std"], expected_stats[k]["std"]) # assert torch.allclose(loaded_stats[k]["min"], expected_stats[k]["min"]) # assert torch.allclose(loaded_stats[k]["max"], expected_stats[k]["max"]) # TODO(aliberts): Move to more appropriate location def test_flatten_unflatten_dict(): d = { "obs": { "min": 0, "max": 1, "mean": 2, "std": 3, }, "action": { "min": 4, "max": 5, "mean": 6, "std": 7, }, } original_d = deepcopy(d) d = unflatten_dict(flatten_dict(d)) # test equality between nested dicts assert json.dumps(original_d, sort_keys=True) == json.dumps(d, sort_keys=True), f"{original_d} != {d}" @pytest.mark.parametrize( "repo_id", [ "lerobot/pusht", "lerobot/aloha_sim_insertion_human", "lerobot/xarm_lift_medium", # (michel-aractingi) commenting the two datasets from openx as test is failing # "lerobot/nyu_franka_play_dataset", # "lerobot/cmu_stretch", ], ) @require_x86_64_kernel def test_backward_compatibility(repo_id): """The artifacts for this test have been generated by `tests/scripts/save_dataset_to_safetensors.py`.""" # TODO(rcadene, aliberts): remove dataset download dataset = LeRobotDataset(repo_id, episodes=[0]) test_dir = Path("tests/data/save_dataset_to_safetensors") / repo_id def load_and_compare(i): new_frame = dataset[i] # noqa: B023 old_frame = load_file(test_dir / f"frame_{i}.safetensors") # noqa: B023 # ignore language instructions (if exists) in language conditioned datasets # TODO (michel-aractingi): transform language obs to langauge embeddings via tokenizer new_frame.pop("language_instruction", None) old_frame.pop("language_instruction", None) new_frame.pop("task", None) old_frame.pop("task", None) # Remove task_index to allow for backward compatibility # TODO(rcadene): remove when new features have been generated if "task_index" not in old_frame: del 
new_frame["task_index"] new_keys = set(new_frame.keys()) old_keys = set(old_frame.keys()) assert new_keys == old_keys, f"{new_keys=} and {old_keys=} are not the same" for key in new_frame: assert torch.isclose( new_frame[key], old_frame[key] ).all(), f"{key=} for index={i} does not contain the same value" # test2 first frames of first episode i = dataset.episode_data_index["from"][0].item() load_and_compare(i) load_and_compare(i + 1) # test 2 frames at the middle of first episode i = int((dataset.episode_data_index["to"][0].item() - dataset.episode_data_index["from"][0].item()) / 2) load_and_compare(i) load_and_compare(i + 1) # test 2 last frames of first episode i = dataset.episode_data_index["to"][0].item() load_and_compare(i - 2) load_and_compare(i - 1) # TODO(rcadene): Enable testing on second and last episode # We currently cant because our test dataset only contains the first episode # # test 2 first frames of second episode # i = dataset.episode_data_index["from"][1].item() # load_and_compare(i) # load_and_compare(i + 1) # # test 2 last frames of second episode # i = dataset.episode_data_index["to"][1].item() # load_and_compare(i - 2) # load_and_compare(i - 1) # # test 2 last frames of last episode # i = dataset.episode_data_index["to"][-1].item() # load_and_compare(i - 2) # load_and_compare(i - 1) @pytest.mark.skip("TODO after fix multidataset") def test_multidataset_aggregate_stats(): """Makes 3 basic datasets and checks that aggregate stats are computed correctly.""" with seeded_context(0): data_a = torch.rand(30, dtype=torch.float32) data_b = torch.rand(20, dtype=torch.float32) data_c = torch.rand(20, dtype=torch.float32) hf_dataset_1 = Dataset.from_dict( {"a": data_a[:10], "b": data_b[:10], "c": data_c[:10], "index": torch.arange(10)} ) hf_dataset_1.set_transform(hf_transform_to_torch) hf_dataset_2 = Dataset.from_dict({"a": data_a[10:20], "b": data_b[10:], "index": torch.arange(10)}) hf_dataset_2.set_transform(hf_transform_to_torch) hf_dataset_3 = Dataset.from_dict({"a": data_a[20:], "c": data_c[10:], "index": torch.arange(10)}) hf_dataset_3.set_transform(hf_transform_to_torch) dataset_1 = LeRobotDataset.from_preloaded("d1", hf_dataset=hf_dataset_1) dataset_1.stats = compute_stats(dataset_1, batch_size=len(hf_dataset_1), num_workers=0) dataset_2 = LeRobotDataset.from_preloaded("d2", hf_dataset=hf_dataset_2) dataset_2.stats = compute_stats(dataset_2, batch_size=len(hf_dataset_2), num_workers=0) dataset_3 = LeRobotDataset.from_preloaded("d3", hf_dataset=hf_dataset_3) dataset_3.stats = compute_stats(dataset_3, batch_size=len(hf_dataset_3), num_workers=0) stats = aggregate_stats([dataset_1, dataset_2, dataset_3]) for data_key, data in zip(["a", "b", "c"], [data_a, data_b, data_c], strict=True): for agg_fn in ["mean", "min", "max"]: assert torch.allclose(stats[data_key][agg_fn], einops.reduce(data, "n -> 1", agg_fn)) assert torch.allclose(stats[data_key]["std"], torch.std(data, correction=0)) @pytest.mark.skip("Requires internet access") def test_create_branch(): api = HfApi() repo_id = "cadene/test_create_branch" repo_type = "dataset" branch = "test" ref = f"refs/heads/{branch}" # Prepare a repo with a test branch api.delete_repo(repo_id, repo_type=repo_type, missing_ok=True) api.create_repo(repo_id, repo_type=repo_type) create_branch(repo_id, repo_type=repo_type, branch=branch) # Make sure the test branch exists branches = api.list_repo_refs(repo_id, repo_type=repo_type).branches refs = [branch.ref for branch in branches] assert ref in refs # Overwrite it create_branch(repo_id, 
repo_type=repo_type, branch=branch) # Clean api.delete_repo(repo_id, repo_type=repo_type) def test_dataset_feature_with_forward_slash_raises_error(): # make sure dir does not exist from lerobot.common.datasets.lerobot_dataset import LEROBOT_HOME dataset_dir = LEROBOT_HOME / "lerobot/test/with/slash" # make sure does not exist if dataset_dir.exists(): dataset_dir.rmdir() with pytest.raises(ValueError): LeRobotDataset.create( repo_id="lerobot/test/with/slash", fps=30, features={"a/b": {"dtype": "float32", "shape": 2, "names": None}}, )
lerobot/tests/test_datasets.py/0
{ "file_path": "lerobot/tests/test_datasets.py", "repo_id": "lerobot", "token_count": 7109 }
<!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Configuration [`PeftConfigMixin`] is the base configuration class for storing the adapter configuration of a [`PeftModel`], and [`PromptLearningConfig`] is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads. ## PeftConfigMixin [[autodoc]] config.PeftConfigMixin - all ## PeftConfig [[autodoc]] PeftConfig - all ## PromptLearningConfig [[autodoc]] PromptLearningConfig - all
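To make the save/load round trip concrete, here is a minimal sketch of working with a configuration through the methods described above. The directory name is an illustrative placeholder; only `LoraConfig`, `save_pretrained`, and `PeftConfig.from_pretrained` from the public PEFT API are used, and the subclass resolution on reload assumes a recent PEFT version.

```python
from peft import LoraConfig, PeftConfig, TaskType

# Build a concrete adapter configuration (LoraConfig subclasses PeftConfig).
config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16, lora_dropout=0.1)

# Serialize it to a local directory as `adapter_config.json` (path is illustrative).
config.save_pretrained("./my-adapter-config")

# Reload it later; the stored `peft_type` field tells PEFT which concrete config class to build.
loaded = PeftConfig.from_pretrained("./my-adapter-config")
print(loaded.peft_type, loaded.task_type)
```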
peft/docs/source/package_reference/config.md/0
{ "file_path": "peft/docs/source/package_reference/config.md", "repo_id": "peft", "token_count": 224 }
<!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Models [`PeftModel`] is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base `PeftModel` contains methods for loading and saving models from the Hub. ## PeftModel [[autodoc]] PeftModel - all ## PeftModelForSequenceClassification A `PeftModel` for sequence classification tasks. [[autodoc]] PeftModelForSequenceClassification - all ## PeftModelForTokenClassification A `PeftModel` for token classification tasks. [[autodoc]] PeftModelForTokenClassification - all ## PeftModelForCausalLM A `PeftModel` for causal language modeling. [[autodoc]] PeftModelForCausalLM - all ## PeftModelForSeq2SeqLM A `PeftModel` for sequence-to-sequence language modeling. [[autodoc]] PeftModelForSeq2SeqLM - all ## PeftModelForQuestionAnswering A `PeftModel` for question answering. [[autodoc]] PeftModelForQuestionAnswering - all ## PeftModelForFeatureExtraction A `PeftModel` for extracting features/embeddings from transformer models. [[autodoc]] PeftModelForFeatureExtraction - all ## PeftMixedModel A `PeftModel` for mixing different adapter types (e.g. LoRA and LoHa). [[autodoc]] PeftMixedModel - all ## Utilities [[autodoc]] utils.cast_mixed_precision_params [[autodoc]] get_peft_model [[autodoc]] inject_adapter_in_model [[autodoc]] utils.get_peft_model_state_dict [[autodoc]] utils.prepare_model_for_kbit_training [[autodoc]] get_layer_status [[autodoc]] get_model_status
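As a quick orientation to how these classes fit together, the sketch below wraps a base Transformers model with `get_peft_model`, saves only the adapter weights, and reloads them with [`PeftModel`]. The model name and output directory are illustrative placeholders rather than values prescribed by this page.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model
config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16)

# Wrap the base model; the task type selects PeftModelForCausalLM under the hood.
peft_model = get_peft_model(base, config)
peft_model.print_trainable_parameters()

# ... train the adapter ...

# Only the adapter weights and config are written, not the base model.
peft_model.save_pretrained("./my-lora-adapter")

# Later: reload the adapter on top of a freshly loaded base model.
base = AutoModelForCausalLM.from_pretrained("gpt2")
restored = PeftModel.from_pretrained(base, "./my-lora-adapter")
```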
peft/docs/source/package_reference/peft_model.md/0
{ "file_path": "peft/docs/source/package_reference/peft_model.md", "repo_id": "peft", "token_count": 564 }
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Fine-tuning for controllable generation with BOFT (ControlNet) This guide demonstrates how to use BOFT, an orthogonal fine-tuning method, to fine-tune Stable Diffusion with either the `stabilityai/stable-diffusion-2-1` or the `runwayml/stable-diffusion-v1-5` model for controllable generation. By using BOFT from 🤗 PEFT, we can significantly reduce the number of trainable parameters while still achieving impressive results in various fine-tuning tasks across different foundation models. BOFT enhances model efficiency by integrating full-rank orthogonal matrices with a butterfly structure into specific model blocks, such as attention blocks, mirroring the approach used in LoRA. During fine-tuning, only these inserted matrices are trained, leaving the original model parameters untouched. During inference, the trainable BOFT parameters can be merged into the original model, eliminating any additional computational costs. As a member of the **orthogonal finetuning** class, BOFT presents a systematic and principled method for fine-tuning. It possesses several unique properties and has demonstrated superior performance compared to LoRA in a variety of scenarios. For further details on BOFT, please consult the [OFT concept guide in the PEFT documentation](https://huggingface.co/docs/peft/index), the [original BOFT paper](https://arxiv.org/abs/2311.06243) and the [original OFT paper](https://arxiv.org/abs/2306.07280). In this guide we provide a controllable generation (ControlNet) fine-tuning script that is available in [PEFT's GitHub repo examples](https://github.com/huggingface/peft/tree/main/examples/boft_controlnet). This implementation is adapted from [diffusers's ControlNet](https://github.com/huggingface/diffusers/tree/main/examples/controlnet) and [Hecong Wu's ControlLoRA](https://github.com/HighCWu/ControlLoRA). You can try it out and finetune on your custom images. ## Set up your environment Start by cloning the PEFT repository: ```bash
git clone https://github.com/huggingface/peft
``` Navigate to the directory containing the training scripts for fine-tuning ControlNet with BOFT: ```bash
cd peft/examples/boft_controlnet
``` Set up your environment: install PEFT, and all the required libraries. At the time of writing this guide we recommend installing PEFT from source. ```bash
conda create --name peft python=3.10
conda activate peft
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=11.8 -c pytorch -c nvidia
conda install xformers -c xformers
pip install -r requirements.txt
pip install git+https://github.com/huggingface/peft
``` ## Data We use the [control-celeba-hq](https://huggingface.co/datasets/oftverse/control-celeba-hq) dataset for landmark-to-face controllable generation. 
We also provide evaluation scripts to evaluate the controllable generation performance. This task can be used to quantitatively compare different fine-tuning techniques. ```bash
export DATASET_NAME="oftverse/control-celeba-hq"
``` ## Train controllable generation (ControlNet) with BOFT Start by setting some hyperparameters for BOFT: ```bash
PEFT_TYPE="boft"
BLOCK_NUM=8
BLOCK_SIZE=0
N_BUTTERFLY_FACTOR=0
``` Here, `BLOCK_NUM` is the number of BOFT blocks injected per target layer, `BLOCK_SIZE` is the size of each block (only one of the two needs to be set; the other can be left at 0 and is inferred from the layer dimension), and `N_BUTTERFLY_FACTOR` controls the number of butterfly factors in the orthogonal factorization. Then, from the `boft_controlnet` example directory, launch the training script for fine-tuning Stable Diffusion with BOFT for controllable generation: ```bash
./train_controlnet.sh
``` or ```bash
export MODEL_NAME="stabilityai/stable-diffusion-2-1"
# export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATASET_NAME="oftverse/control-celeba-hq"
export PROJECT_NAME="controlnet_${PEFT_TYPE}"
export RUN_NAME="${PEFT_TYPE}_${BLOCK_NUM}${BLOCK_SIZE}${N_BUTTERFLY_FACTOR}"
export CONTROLNET_PATH=""
export OUTPUT_DIR="./output/${DATASET_NAME}/${RUN_NAME}"
accelerate launch train_controlnet.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --resume_from_checkpoint=$RESUME_PATH \
  --controlnet_model_name_or_path=$CONTROLNET_PATH \
  --output_dir=$OUTPUT_DIR \
  --report_to="wandb" \
  --dataset_name=$DATASET_NAME \
  --resolution=512 \
  --learning_rate=1e-5 \
  --checkpointing_steps=5000 \
  --max_train_steps=50000 \
  --validation_steps=2000 \
  --num_validation_images=12 \
  --train_batch_size=4 \
  --dataloader_num_workers=2 \
  --seed="0" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --wandb_project_name=$PROJECT_NAME \
  --wandb_run_name=$RUN_NAME \
  --enable_xformers_memory_efficient_attention \
  --use_boft \
  --boft_block_num=$BLOCK_NUM \
  --boft_block_size=$BLOCK_SIZE \
  --boft_n_butterfly_factor=$N_BUTTERFLY_FACTOR \
  --boft_dropout=0.1 \
  --boft_bias="boft_only" \
  --report_to="wandb" \
``` Run inference on the saved model to sample new images from the validation set: ```bash
./test_controlnet.sh
``` or ```bash
ITER_NUM=50000
export MODEL_NAME="stabilityai/stable-diffusion-2-1"
# export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export RUN_NAME="${PEFT_TYPE}_${BLOCK_NUM}${BLOCK_SIZE}${N_BUTTERFLY_FACTOR}"
export DATASET_NAME="oftverse/control-celeba-hq"
export CKPT_NAME="checkpoint-${ITER_NUM}"
export OUTPUT_DIR="./output/${DATASET_NAME}/${RUN_NAME}/${CKPT_NAME}"
export CONTROLNET_PATH="${OUTPUT_DIR}/controlnet/model.safetensors"
export UNET_PATH="${OUTPUT_DIR}/unet/${RUN_NAME}"
export RESULTS_PATH="${OUTPUT_DIR}/results"
accelerate launch test_controlnet.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --controlnet_path=$CONTROLNET_PATH \
  --unet_path=$UNET_PATH \
  --adapter_name=$RUN_NAME \
  --output_dir=$RESULTS_PATH \
  --dataset_name=$DATASET_NAME \
``` Run evaluation on the sampled images to evaluate the landmark reprojection error: ```bash
./eval.sh
``` or ```bash
ITER_NUM=50000
export MODEL_NAME="stabilityai/stable-diffusion-2-1"
# export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export RUN_NAME="${PEFT_TYPE}_${BLOCK_NUM}${BLOCK_SIZE}${N_BUTTERFLY_FACTOR}"
export DATASET_NAME="oftverse/control-celeba-hq"
export CKPT_NAME="checkpoint-${ITER_NUM}"
export OUTPUT_DIR="./output/${DATASET_NAME}/${RUN_NAME}/${CKPT_NAME}"
export CONTROLNET_PATH="${OUTPUT_DIR}/controlnet/model.safetensors"
export UNET_PATH="${OUTPUT_DIR}/unet/${RUN_NAME}"
accelerate launch eval.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --controlnet_path=$CONTROLNET_PATH \
  --unet_path=$UNET_PATH \
  --adapter_name=$RUN_NAME \
  --output_dir=$OUTPUT_DIR \
  --dataset_name=$DATASET_NAME \
--vis_overlays \ ```
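The training script above wires BOFT in through command-line flags. To see roughly what those flags correspond to on the PEFT side, the following sketch builds a BOFT config and injects it into a Stable Diffusion UNet's attention projections. This is only an illustration: the target module names, the butterfly factor of 1, and the use of `get_peft_model` here are assumptions for a recent PEFT version exposing `BOFTConfig`, not the exact wiring done inside `train_controlnet.py`.

```python
from diffusers import UNet2DConditionModel
from peft import BOFTConfig, get_peft_model

# Load only the UNet of the base model (downloads weights from the Hub).
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet"
)

# Loosely mirrors the shell variables above: 8 blocks per layer, block size inferred.
boft_config = BOFTConfig(
    boft_block_num=8,
    boft_block_size=0,
    boft_n_butterfly_factor=1,          # assumed; the script passes its own value
    target_modules=["to_q", "to_k", "to_v"],  # assumed attention projection names
    boft_dropout=0.1,
    bias="boft_only",
)

# Wrap the UNet so only the BOFT parameters are trainable.
unet = get_peft_model(unet, boft_config)
unet.print_trainable_parameters()
```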
peft/examples/boft_controlnet/boft_controlnet.md/0
{ "file_path": "peft/examples/boft_controlnet/boft_controlnet.md", "repo_id": "peft", "token_count": 2426 }
<jupyter_start><jupyter_text>Training PEFT models with new tokens being added to the embedding layers and tokenizer In this example, we will learn how to train a LoRA model when adding new tokens to the tokenizer and model. This is a common use case when doing the following: 1. Instruction finetuning with new tokens being added such as ``, ``, ``, ``, `` to properly format the conversations 2. Finetuning on a specific language wherein language-specific tokens are added, e.g., Korean tokens being added to the vocabulary for finetuning an LLM on Korean datasets. 3. Instruction finetuning to return outputs in a certain format to enable agent behaviour, with new tokens such as ``, ``, ``, ``, ``, ``, ``. In such cases, you add the Embedding modules to the LoRA `target_modules`. PEFT will take care of saving the embedding layers with the newly added tokens along with the adapter weights that were trained on the specific initialization of the embedding weights of the added tokens. Let's import the necessary libraries<jupyter_code>import os os.environ["CUDA_VISIBLE_DEVICES"] = "3" os.environ["WANDB_PROJECT"] = "PeftExamples" import transformers from peft import ( LoraConfig, PeftConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training, ) from transformers import ( AutoModelForCausalLM, AutoTokenizer, HfArgumentParser, TrainingArguments, Trainer, default_data_collator, ) import torch from dataclasses import dataclass, field from typing import Optional from dataclass_csv import DataclassReader from torch.utils.data import Dataset, DataLoader from enum import Enum<jupyter_output><empty_output><jupyter_text>Prepare Model and Tokenizer Now, we will be adding 27 new tokens as well as replacing the existing pad, bos and eos tokens of the model.<jupyter_code>class SpecialTokens(str, Enum): begin_target = "<|begintarget|>" end_target = "<|endtarget|>" begin_context = "<|begincontext|>" end_context = "<|endcontext|>" system = "<|system|>" user = "<|user|>" begin_last_user_utterance = "<|beginlastuserutterance|>" end_last_user_utterance = "<|endlastuserutterance|>" begin_dsts = "<|begindsts|>" end_dsts = "<|enddsts|>" begin_dst = "<|begindst|>" end_dst = "<|enddst|>" begin_belief = "<|beginbelief|>" end_belief = "<|endbelief|>" begin_response = "<|beginresponse|>" end_response = "<|endresponse|>" begin_action = "<|beginaction|>" end_action = "<|endaction|>" begin_user_action = "<|beginuseraction|>" end_user_action = "<|enduseraction|>" sys_actions = "<|sysactions|>" begin_intent = "<|beginintent|>" end_intent = "<|endintent|>" begin_requested_slots = "<|beginrequestedslots|>" end_requested_slots = "<|endrequestedslots|>" pad_token = "<|pad|>" bos_token = "<|startoftext|>" @classmethod def list(cls): return [c.value for c in cls]<jupyter_output><empty_output><jupyter_text>We will be finetuning the Mistral-7B model. 
Let's load the tokenizer and add the special tokens, followed by loading the base model and resizing the embedding layers to accommodate the newly added tokens.<jupyter_code>model_name = "mistralai/Mistral-7B-v0.1" tokenizer = AutoTokenizer.from_pretrained( model_name, pad_token=SpecialTokens.pad_token.value, bos_token=SpecialTokens.bos_token.value, eos_token=SpecialTokens.end_target.value, additional_special_tokens=SpecialTokens.list(), ) model = AutoModelForCausalLM.from_pretrained( model_name, low_cpu_mem_usage=True # attn_implementation ="flash_attention_2", # leading to an error ) model.resize_token_embeddings(len(tokenizer))<jupyter_output><empty_output><jupyter_text>Apply LoRA<jupyter_code>config = LoraConfig( r=64, lora_alpha=128, lora_dropout=0.0, target_modules=["embed_tokens", "lm_head", "q_proj", "v_proj"] ) model = get_peft_model(model, config) print(model.print_trainable_parameters()) print(model)<jupyter_output>trainable params: 31,886,720 || all params: 7,273,840,000 || trainable%: 0.43837532857472805 None PeftModel( (base_model): LoraModel( (model): MistralForCausalLM( (model): MistralModel( (embed_tokens): lora.Embedding( (base_layer): Embedding(32027, 4096) (lora_dropout): ModuleDict( (default): Identity() ) (lora_A): ModuleDict() (lora_B): ModuleDict() (lora_embedding_A): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 64x32027]) (lora_embedding_B): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 4096x64]) ) (layers): ModuleList( (0-31): 32 x MistralDecoderLayer( (self_attn): MistralAttention( (q_proj): lora.Linear( (base_layer): Linear(in_features=4096, out_features=4096, bias=False) (lora_dropout): ModuleDict( (default): Identity([...]<jupyter_text>Prepare Dataset<jupyter_code>from datasets import load_dataset dataset = load_dataset("smangrul/assistant_chatbot_dataset") dataset = dataset["train"].train_test_split(0.2) text_column = "context" label_column = "target" max_length = 512 def preprocess_function(examples): batch_size = len(examples[text_column]) targets = [str(x) for x in examples[label_column]] model_inputs = tokenizer(examples[text_column]) labels = tokenizer(targets, add_special_tokens=False) # don't add bos token because we concatenate with inputs for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] + [tokenizer.eos_token_id] # print(i, sample_input_ids, label_input_ids) model_inputs["input_ids"][i] = sample_input_ids + label_input_ids labels["input_ids"][i] = [-100] * len(sample_input_ids) + label_input_ids model_inputs["attention_mask"][i] = [1] * len(model_inputs["input_ids"][i]) # print(model_inputs) for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * ( max_length - len(sample_input_ids) ) + sample_input_ids model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[ "attention_mask" ][i] labels["input_ids"][i] = [-100] * (max_length - len(sample_input_ids)) + label_input_ids model_inputs["input_ids"][i] = model_inputs["input_ids"][i][:max_length] model_inputs["attention_mask"][i] = model_inputs["attention_mask"][i][:max_length] labels["input_ids"][i] = labels["input_ids"][i][:max_length] model_inputs["labels"] = labels["input_ids"] return model_inputs processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, 
load_from_cache_file=False, desc="Running tokenizer on dataset", ) train_dataset = processed_datasets["train"] train_dataset train_dataloader = DataLoader( train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=8, pin_memory=True ) next(iter(train_dataloader)) tokenizer.decode(train_dataset[0]["input_ids"])<jupyter_output><empty_output><jupyter_text>Train the model<jupyter_code>training_args = TrainingArguments( output_dir="mistral_lora_clm_with_added_tokens", num_train_epochs=2, save_total_limit=5, per_device_train_batch_size=8, warmup_steps=10, weight_decay=0.0001, dataloader_drop_last=True, bf16=True, logging_steps=10, learning_rate=1e-5, gradient_checkpointing=True, gradient_checkpointing_kwargs={"use_reentrant": False}, remove_unused_columns=False, hub_model_id="smangrul/mistral_lora_clm_with_added_tokens", push_to_hub=True, hub_private_repo=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, data_collator=default_data_collator, ) # model.config.use_cache = False trainer.train()<jupyter_output>Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher. Failed to detect the name of this notebook, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable to enable code saving. wandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin<jupyter_text>Check the model output on a sample from evaluation dataset<jupyter_code>import random i = random.randint(0, len(dataset["test"])) context = dataset["test"][i]["context"] batch = tokenizer(context, return_tensors="pt") batch = {k: v.to("cuda") for k, v in batch.items()} model.eval() output_tokens = model.generate( **batch, max_new_tokens=256, do_sample=True, temperature=0.2, top_p=0.95, top_k=50, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, ) target_predicted = tokenizer.decode(output_tokens[0], skip_special_tokens=False).split("<|endcontext|>")[1] target = dataset["test"][i]["target"] print(f"{context=} \n\n {target_predicted=} \n\n {target=}")<jupyter_output>context="<|begincontext|><|user|>Can you find me a place to eat please?<|system|>Where at? And what kind of cuisine are you craving?<|user|>Somewhere in SF, and I am really craving Thai food at the moment!<|system|>I found a bunch of restaurants, there's actually 10 that you might like in San Francisco, one of them being Baan Thai House & Wine Bar<|user|>How can I reach them? 
And what's their address?<|system|>You can reach them by phone at 415-379-4505 and visit them at 534 Irving Street<|beginlastuserutterance|>Great, that restaurant sounds good<|endlastuserutterance|><|endcontext|>" target_predicted='<|begintarget|><|begindsts|><|begindst|><|beginintent|> FindRestaurants<|endintent|><|beginbelief|> Restaurants^city->SF~San Francisco|Restaurants^cuisine->Thai|Restaurants^restaurant_name->Baan Thai House & Wine Bar<|endbelief|><|enddst|><|enddsts|><|beginuseraction|> REQUEST->Restaurants^phone_number~|REQUEST->Restaurants^street_address~<|enduseraction|><|beginaction|> INFORM->Rest[...]<jupyter_text>Save the Adapter model When the lora layers are applied to embedding layers, the corresponding base model embedding layers are also saved.<jupyter_code>trainer.push_to_hub() trainer.model.push_to_hub(training_args.output_dir)<jupyter_output>/raid/sourab/peft/src/peft/utils/save_and_load.py:128: UserWarning: Setting `is_embedding_layer_resized` to `True` as embedding layers found in `target_modules` warnings.warn("Setting `is_embedding_layer_resized` to `True` as embedding layers found in `target_modules`")<jupyter_text>Check the model loading is working as expected and generating plausible outputs.<jupyter_code>from peft import PeftModel inference_model = AutoModelForCausalLM.from_pretrained( model_name, low_cpu_mem_usage=True, # attn_implementation ="flash_attention_2", ) inference_model.resize_token_embeddings(len(tokenizer)) inference_model = PeftModel.from_pretrained(inference_model, "smangrul/mistral_lora_clm_with_added_tokens") inference_model.to("cuda") inference_model.eval() output_tokens = inference_model.generate( **batch, max_new_tokens=256, do_sample=True, temperature=0.2, top_p=0.95, top_k=50, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, ) target_predicted = tokenizer.decode(output_tokens[0], skip_special_tokens=False).split("<|endcontext|>")[1] print(f"{context=} \n\n {target_predicted=} \n\n {target=}")<jupyter_output><empty_output>
peft/examples/causal_language_modeling/peft_lora_clm_with_additional_tokens.ipynb/0
{ "file_path": "peft/examples/causal_language_modeling/peft_lora_clm_with_additional_tokens.ipynb", "repo_id": "peft", "token_count": 4577 }
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import logging import math import os import random from pathlib import Path import datasets import evaluate import torch import transformers from accelerate import Accelerator from accelerate.logging import get_logger from accelerate.utils import set_seed from datasets import DatasetDict, load_dataset from huggingface_hub import HfApi from torch import nn from torch.utils.data import DataLoader from tqdm import tqdm from transformers import AutoModel, AutoTokenizer, SchedulerType, default_data_collator, get_scheduler from peft import LoraConfig, TaskType, get_peft_model logger = get_logger(__name__) def parse_args(): parser = argparse.ArgumentParser(description="Training a PEFT model for Semantic Search task") parser.add_argument("--dataset_name", type=str, default=None, help="dataset name on HF hub") parser.add_argument( "--max_length", type=int, default=128, help=( "The maximum total input sequence length after tokenization. Sequences longer than this will be truncated," " sequences shorter will be padded if `--pad_to_max_length` is passed." ), ) parser.add_argument( "--model_name_or_path", type=str, help="Path to pretrained model or model identifier from huggingface.co/models.", required=True, ) parser.add_argument( "--per_device_train_batch_size", type=int, default=8, help="Batch size (per device) for the training dataloader.", ) parser.add_argument( "--per_device_eval_batch_size", type=int, default=8, help="Batch size (per device) for the evaluation dataloader.", ) parser.add_argument( "--learning_rate", type=float, default=5e-5, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.") parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.") parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--lr_scheduler_type", type=SchedulerType, default="linear", help="The scheduler type to use.", choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"], ) parser.add_argument( "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler." 
) parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.") parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument( "--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`." ) parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.") parser.add_argument( "--checkpointing_steps", type=str, default=None, help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.", ) parser.add_argument( "--resume_from_checkpoint", type=str, default=None, help="If the training should continue from a checkpoint folder.", ) parser.add_argument( "--with_tracking", action="store_true", help="Whether to enable experiment trackers for logging.", ) parser.add_argument( "--report_to", type=str, default="all", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,' ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` (default) to report to all integrations.' "Only applicable when `--with_tracking` is passed." ), ) parser.add_argument( "--sanity_test", action="store_true", help="Whether to enable sanity test.", ) parser.add_argument( "--use_peft", action="store_true", help="Whether to use PEFT.", ) args = parser.parse_args() if args.push_to_hub: assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed." return args def save_model_hook(models, weights, output_dir): for i, model in enumerate(models): model.save_pretrained(output_dir, state_dict=weights[i]) # make sure to pop weight so that corresponding model is not saved again weights.pop() def load_model_hook(models, input_dir): while len(models) > 0: model = models.pop() # pop models so that they are not loaded again if hasattr(model, "active_adapter") and hasattr(model, "load_adapter"): model.load_adapter(input_dir, model.active_adapter, is_trainable=True) class AutoModelForSentenceEmbedding(nn.Module): def __init__(self, model_name, tokenizer, normalize=True): super().__init__() self.model = AutoModel.from_pretrained( model_name ) # , quantizaton_config=BitsAndBytesConfig(load_in_8bit=True), device_map={"":0}) self.normalize = normalize self.tokenizer = tokenizer def forward(self, **kwargs): model_output = self.model(**kwargs) embeddings = self.mean_pooling(model_output, kwargs["attention_mask"]) if self.normalize: embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1) return embeddings def mean_pooling(self, model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) def __getattr__(self, name: str): """Forward missing attributes to the wrapped module.""" try: return super().__getattr__(name) # defer to nn.Module's logic except AttributeError: if name == "model": # see #1892: prevent infinite recursion if class is not initialized raise return getattr(self.model, name) def get_cosing_embeddings(query_embs, product_embs): return torch.sum(query_embs * product_embs, axis=1) def get_loss(cosine_score, labels): return torch.mean(torch.square(labels * (1 - 
cosine_score) + torch.clamp((1 - labels) * cosine_score, min=0.0))) def main(): args = parse_args() accelerator_kwargs = {"gradient_accumulation_steps": args.gradient_accumulation_steps} if args.with_tracking: accelerator_kwargs["log_with"] = args.report_to accelerator_kwargs["project_dir"] = args.output_dir accelerator = Accelerator(**accelerator_kwargs) # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger.info(accelerator.state, main_process_only=False) if accelerator.is_local_main_process: datasets.utils.logging.set_verbosity_warning() transformers.utils.logging.set_verbosity_info() else: datasets.utils.logging.set_verbosity_error() transformers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. if args.seed is not None: set_seed(args.seed) # Handle the repository creation if accelerator.is_main_process: if args.push_to_hub: api = HfApi(token=args.hub_token) # Create repo (repo_name from args or inferred) repo_name = args.hub_model_id if repo_name is None: repo_name = Path(args.output_dir).absolute().name repo_id = api.create_repo(repo_name, exist_ok=True).repo_id with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: if "step_*" not in gitignore: gitignore.write("step_*\n") if "epoch_*" not in gitignore: gitignore.write("epoch_*\n") elif args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) accelerator.wait_for_everyone() # get the tokenizer tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path) # dataset download and preprocessing if args.sanity_test: train_dataset = load_dataset("smangrul/amazon_esci", split="train[:1024]") val_dataset = load_dataset("smangrul/amazon_esci", split="validation[:1024]") dataset = DatasetDict({"train": train_dataset, "validation": val_dataset}) else: dataset = load_dataset(args.dataset_name, revision="main") def preprocess_function(examples): queries = examples["query"] result = tokenizer(queries, padding="max_length", max_length=70, truncation=True) result = {f"query_{k}": v for k, v in result.items()} products = examples["product_title"] result_products = tokenizer(products, padding="max_length", max_length=70, truncation=True) for k, v in result_products.items(): result[f"product_{k}"] = v result["labels"] = examples["relevance_label"] return result processed_datasets = dataset.map( preprocess_function, batched=True, remove_columns=dataset["train"].column_names, desc="Running tokenizer on dataset", ) # Log a few random samples from the training set: for index in random.sample(range(len(processed_datasets["train"])), 3): logger.info(f"Sample {index} of the training set: {processed_datasets['train'][index]}.") # base model model = AutoModelForSentenceEmbedding(args.model_name_or_path, tokenizer) if args.use_peft: # peft config and wrapping peft_config = LoraConfig( r=8, lora_alpha=16, bias="none", task_type=TaskType.FEATURE_EXTRACTION, target_modules=["key", "query", "value"], ) model = get_peft_model(model, peft_config) model.print_trainable_parameters() accelerator.print(model) # get dataloaders train_dataloader = DataLoader( processed_datasets["train"], shuffle=True, collate_fn=default_data_collator, batch_size=args.per_device_train_batch_size, pin_memory=True, ) eval_dataloader = DataLoader( processed_datasets["validation"], shuffle=False, collate_fn=default_data_collator, 
batch_size=args.per_device_eval_batch_size, pin_memory=True, ) optimizer = torch.optim.Adam(model.parameters(), lr=args.learning_rate) # Scheduler and math around the number of training steps. overrode_max_train_steps = False num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch overrode_max_train_steps = True lr_scheduler = get_scheduler( name=args.lr_scheduler_type, optimizer=optimizer, num_warmup_steps=args.num_warmup_steps, num_training_steps=args.max_train_steps, ) # Prepare everything with our `accelerator`. model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader, lr_scheduler ) # We need to recalculate our total training steps as the size of the training dataloader may have changed num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if overrode_max_train_steps: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # Figure out how many steps we should save the Accelerator states checkpointing_steps = args.checkpointing_steps if checkpointing_steps is not None and checkpointing_steps.isdigit(): checkpointing_steps = int(checkpointing_steps) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if args.with_tracking: experiment_config = vars(args) # TensorBoard cannot log Enums, need the raw value experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value accelerator.init_trackers("peft_semantic_search", experiment_config) metric = evaluate.load("roc_auc") total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps if args.use_peft: # saving and loading checkpoints for resuming training accelerator.register_save_state_pre_hook(save_model_hook) accelerator.register_load_state_pre_hook(load_model_hook) logger.info("***** Running training *****") logger.info(f" Num examples = {len(processed_datasets['train'])}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}") logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") # Only show the progress bar once on each machine. 
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) completed_steps = 0 starting_epoch = 0 # Potentially load in the weights and states from a previous save if args.resume_from_checkpoint: if args.resume_from_checkpoint is not None or args.resume_from_checkpoint != "": accelerator.print(f"Resumed from checkpoint: {args.resume_from_checkpoint}") accelerator.load_state(args.resume_from_checkpoint) path = os.path.basename(args.resume_from_checkpoint) else: # Get the most recent checkpoint dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()] dirs.sort(key=os.path.getctime) path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last # Extract `epoch_{i}` or `step_{i}` training_difference = os.path.splitext(path)[0] if "epoch" in training_difference: starting_epoch = int(training_difference.replace("epoch_", "")) + 1 resume_step = None completed_steps = starting_epoch * num_update_steps_per_epoch else: # need to multiply `gradient_accumulation_steps` to reflect real steps resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps starting_epoch = resume_step // len(train_dataloader) resume_step -= starting_epoch * len(train_dataloader) completed_steps = resume_step // args.gradient_accumulation_steps # update the progress_bar if load from checkpoint progress_bar.update(completed_steps) for epoch in range(starting_epoch, args.num_train_epochs): model.train() if args.with_tracking: total_loss = 0 if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None: # We skip the first `n` batches in the dataloader when resuming from a checkpoint active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step) else: active_dataloader = train_dataloader for step, batch in enumerate(active_dataloader): with accelerator.accumulate(model): query_embs = model(**{k.replace("query_", ""): v for k, v in batch.items() if "query" in k}) product_embs = model(**{k.replace("product_", ""): v for k, v in batch.items() if "product" in k}) loss = get_loss(get_cosing_embeddings(query_embs, product_embs), batch["labels"]) total_loss += accelerator.reduce(loss.detach().float(), reduction="sum") accelerator.backward(loss) optimizer.step() lr_scheduler.step() model.zero_grad() # Checks if the accelerator has performed an optimization step behind the scenes if accelerator.sync_gradients: progress_bar.update(1) completed_steps += 1 if (step + 1) % 100 == 0: logger.info(f"Step: {step + 1}, Loss: {total_loss / (step + 1)}") if args.with_tracking: accelerator.log({"train/loss": total_loss / (step + 1)}, step=completed_steps) if isinstance(checkpointing_steps, int): if completed_steps % checkpointing_steps == 0: output_dir = f"step_{completed_steps}" if args.output_dir is not None: output_dir = os.path.join(args.output_dir, output_dir) accelerator.save_state(output_dir) if completed_steps >= args.max_train_steps: break model.eval() for step, batch in enumerate(eval_dataloader): with torch.no_grad(): query_embs = model(**{k.replace("query_", ""): v for k, v in batch.items() if "query" in k}) product_embs = model(**{k.replace("product_", ""): v for k, v in batch.items() if "product" in k}) prediction_scores = get_cosing_embeddings(query_embs, product_embs) prediction_scores, references = accelerator.gather_for_metrics((prediction_scores, batch["labels"])) metric.add_batch( prediction_scores=prediction_scores, references=references, ) result = metric.compute() result = 
{f"eval/{k}": v for k, v in result.items()} # Use accelerator.print to print only on the main process. accelerator.print(f"epoch {epoch}:", result) if args.with_tracking: result["train/epoch_loss"] = total_loss.item() / len(train_dataloader) accelerator.log(result, step=completed_steps) if args.output_dir is not None: accelerator.wait_for_everyone() if accelerator.is_main_process: if isinstance(checkpointing_steps, str): accelerator.save_state(os.path.join(args.output_dir, f"epoch_{epoch}")) accelerator.unwrap_model(model).save_pretrained( args.output_dir, state_dict=accelerator.get_state_dict(accelerator.unwrap_model(model)) ) tokenizer.save_pretrained(args.output_dir) if args.push_to_hub: commit_message = ( f"Training in progress epoch {epoch}" if epoch < args.num_train_epochs - 1 else "End of training" ) api.upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message=commit_message, run_as_future=True, ) accelerator.wait_for_everyone() accelerator.end_training() if __name__ == "__main__": main()
peft/examples/feature_extraction/peft_lora_embedding_semantic_search.py/0
{ "file_path": "peft/examples/feature_extraction/peft_lora_embedding_semantic_search.py", "repo_id": "peft", "token_count": 8775 }
<jupyter_start><jupyter_text>Using PEFT with timm `peft` allows us to train any model with LoRA as long as the layer type is supported. Since `Conv2D` is one of the supported layer types, it makes sense to test it on image models.In this short notebook, we will demonstrate this with an image classification task using [`timm`](https://huggingface.co/docs/timm/index). Imports Make sure that you have the latest version of `peft` installed. To ensure that, run this in your Python environment: python -m pip install --upgrade peft Also, ensure that `timm` is installed: python -m pip install --upgrade timm<jupyter_code>import timm import torch from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform import peft from datasets import load_dataset torch.manual_seed(0)<jupyter_output><empty_output><jupyter_text>Loading the pre-trained base model We use a small pretrained `timm` model, `PoolFormer`. Find more info on its [model card](https://huggingface.co/timm/poolformer_m36.sail_in1k).<jupyter_code>model_id_timm = "timm/poolformer_m36.sail_in1k"<jupyter_output><empty_output><jupyter_text>We tell `timm` that we deal with 3 classes, to ensure that the classification layer has the correct size.<jupyter_code>model = timm.create_model(model_id_timm, pretrained=True, num_classes=3)<jupyter_output><empty_output><jupyter_text>These are the transformations steps necessary to process the image.<jupyter_code>transform = create_transform(**resolve_data_config(model.pretrained_cfg, model=model))<jupyter_output><empty_output><jupyter_text>Data For this exercise, we use the "beans" dataset. More details on the dataset can be found on [its datasets page](https://huggingface.co/datasets/beans). 
For our purposes, what's important is that we have image inputs and the target we're trying to predict is one of three classes for each image.<jupyter_code>ds = load_dataset("beans") ds_train = ds["train"] ds_valid = ds["validation"] ds_train[0]["image"]<jupyter_output><empty_output><jupyter_text>We define a small processing function which is responsible for loading and transforming the images, as well as extracting the labels.<jupyter_code>def process(batch): x = torch.cat([transform(img).unsqueeze(0) for img in batch["image"]]) y = torch.tensor(batch["labels"]) return {"x": x, "y": y} ds_train.set_transform(process) ds_valid.set_transform(process) train_loader = torch.utils.data.DataLoader(ds_train, batch_size=32) valid_loader = torch.utils.data.DataLoader(ds_valid, batch_size=32)<jupyter_output><empty_output><jupyter_text>Training This is just a function that performs the train loop, nothing fancy happening.<jupyter_code>def train(model, optimizer, criterion, train_dataloader, valid_dataloader, epochs): for epoch in range(epochs): model.train() train_loss = 0 for batch in train_dataloader: xb, yb = batch["x"], batch["y"] xb, yb = xb.to(device), yb.to(device) outputs = model(xb) lsm = torch.nn.functional.log_softmax(outputs, dim=-1) loss = criterion(lsm, yb) train_loss += loss.detach().float() loss.backward() optimizer.step() optimizer.zero_grad() model.eval() valid_loss = 0 correct = 0 n_total = 0 for batch in valid_dataloader: xb, yb = batch["x"], batch["y"] xb, yb = xb.to(device), yb.to(device) with torch.no_grad(): outputs = model(xb) lsm = torch.nn.functional.log_softmax(outputs, dim=-1) loss = criterion(lsm, yb) valid_loss += loss.detach().float() correct += (outputs.argmax(-1) == yb).sum().item() n_total += len(yb) train_loss_total = (train_loss / len(train_dataloader)).item() valid_loss_total = (valid_loss / len(valid_dataloader)).item() valid_acc_total = correct / n_total print(f"{epoch=:<2} {train_loss_total=:.4f} {valid_loss_total=:.4f} {valid_acc_total=:.4f}")<jupyter_output><empty_output><jupyter_text>Selecting which layers to fine-tune with LoRA Let's take a look at the layers of our model. We only print the first 30, since there are quite a few:<jupyter_code>[(n, type(m)) for n, m in model.named_modules()][:30]<jupyter_output><empty_output><jupyter_text>Most of these layers are not good targets for LoRA, but we see a couple that should interest us. Their names are `'stages.0.blocks.0.mlp.fc1'`, etc. With a bit of regex, we can match them easily.Also, we should inspect the name of the classification layer, since we want to train that one too!<jupyter_code>[(n, type(m)) for n, m in model.named_modules()][-5:]<jupyter_output><empty_output><jupyter_text>config = peft.LoraConfig( r=8, target_modules=r".*\.mlp\.fc\d|head\.fc", ) Okay, this gives us all the information we need to fine-tune this model. With a bit of regex, we match the convolutional layers that should be targeted for LoRA. We also want to train the classification layer `'head.fc'` (without LoRA), so we add it to the `modules_to_save`.<jupyter_code>config = peft.LoraConfig(r=8, target_modules=r".*\.mlp\.fc\d", modules_to_save=["head.fc"])<jupyter_output><empty_output><jupyter_text>Finally, let's create the `peft` model, the optimizer and criterion, and we can get started. 
As shown below, less than 2% of the model's total parameters are updated thanks to `peft`.<jupyter_code>device = "cuda" if torch.cuda.is_available() else "cpu" peft_model = peft.get_peft_model(model, config).to(device) optimizer = torch.optim.Adam(peft_model.parameters(), lr=2e-4) criterion = torch.nn.CrossEntropyLoss() peft_model.print_trainable_parameters() %time train(peft_model, optimizer, criterion, train_loader, valid_dataloader=valid_loader, epochs=10)<jupyter_output>epoch=0 train_loss_total=1.2999 valid_loss_total=1.0624 valid_acc_total=0.4436 epoch=1 train_loss_total=1.0200 valid_loss_total=0.8906 valid_acc_total=0.7594 epoch=2 train_loss_total=0.8874 valid_loss_total=0.6894 valid_acc_total=0.8045 epoch=3 train_loss_total=0.7440 valid_loss_total=0.4797 valid_acc_total=0.8045 epoch=4 train_loss_total=0.6025 valid_loss_total=0.3419 valid_acc_total=0.8120 epoch=5 train_loss_total=0.4820 valid_loss_total=0.2589 valid_acc_total=0.8421 epoch=6 train_loss_total=0.3567 valid_loss_total=0.2101 valid_acc_total=0.8722 epoch=7 train_loss_total=0.2835 valid_loss_total=0.1385 valid_acc_total=0.9098 epoch=8 train_loss_total=0.1815 valid_loss_total=0.1108 valid_acc_total=0.9474 epoch=9 train_loss_total=0.1341 valid_loss_total=0.0785 valid_acc_total=0.9699 CPU times: user 4min 3s, sys: 36.3 s, total: 4min 40s Wall time: 3min 32s<jupyter_text>We get an accuracy of ~0.97, despite only training a tiny amount of parameters. That's a really nice result. Sharing the model through Hugging Face Hub Pushing the model to Hugging Face Hub If we want to share the fine-tuned weights with the world, we can upload them to Hugging Face Hub like this:<jupyter_code>user = "BenjaminB" # put your user name here model_name = "peft-lora-with-timm-model" model_id = f"{user}/{model_name}" peft_model.push_to_hub(model_id);<jupyter_output><empty_output><jupyter_text>As we can see, the adapter size is only 4.3 MB. The original model was 225 MB. That's a very big saving. Loading the model from HF Hub Now, it only takes one step to load the model from HF Hub. To do this, we can use `PeftModel.from_pretrained`, passing our base model and the model ID:<jupyter_code>base_model = timm.create_model(model_id_timm, pretrained=True, num_classes=3) loaded = peft.PeftModel.from_pretrained(base_model, model_id) x = ds_train[:1]["x"] y_peft = peft_model(x.to(device)) y_loaded = loaded(x) torch.allclose(y_peft.cpu(), y_loaded)<jupyter_output><empty_output><jupyter_text>Clean up Finally, as a clean up step, you may want to delete the repo.<jupyter_code>from huggingface_hub import delete_repo delete_repo(model_id)<jupyter_output><empty_output>
peft/examples/image_classification/image_classification_timm_peft_lora.ipynb/0
{ "file_path": "peft/examples/image_classification/image_classification_timm_peft_lora.ipynb", "repo_id": "peft", "token_count": 3067 }
<jupyter_start><jupyter_text>Using FourierFT for sequence classification In this example, we fine-tune Roberta (base) on a sequence classification task using FourierFT. Imports<jupyter_code># To run this notebook, please run `pip install evaluate` to install additional dependencies not covered by PEFT. import torch from torch.optim import AdamW from torch.utils.data import DataLoader from peft import ( get_peft_model, FourierFTConfig, PeftType, ) import evaluate from datasets import load_dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed, AutoConfig from tqdm import tqdm<jupyter_output>/home/zgaoat/anaconda3/envs/pr2/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm<jupyter_text>Parameters<jupyter_code>batch_size = 32 model_name_or_path = "roberta-base" task = "mrpc" peft_type = PeftType.FOURIERFT device = "cuda" if torch.cuda.is_available() else "cpu" num_epochs = 5 # for better results, increase this number n_frequency = 1000 # for better results, increase this number scaling = 150.0 max_length = 512 torch.manual_seed(0) peft_config = FourierFTConfig( task_type="SEQ_CLS", n_frequency=n_frequency, target_modules=["query", "value"], scaling = scaling, ) head_lr = 6e-3 # the learning rate for the classification head for NLU tasks fft_lr = 6e-2 # the learning rate for the parameters other than the classification head (q,v in this case)<jupyter_output><empty_output><jupyter_text>Loading data<jupyter_code>if any(k in model_name_or_path for k in ("gpt", "opt", "bloom")): padding_side = "left" else: padding_side = "right" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side=padding_side) if getattr(tokenizer, "pad_token_id") is None: tokenizer.pad_token_id = tokenizer.eos_token_id datasets = load_dataset("glue", task) metric = evaluate.load("glue", task) def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=max_length) return outputs tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): return tokenizer.pad(examples, padding="longest", return_tensors="pt") # Instantiate dataloaders. train_dataloader = DataLoader(tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=batch_size )<jupyter_output><empty_output><jupyter_text>Preparing the FourierFT model<jupyter_code>model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path, return_dict=True, max_length=None) model = get_peft_model(model, peft_config) model.print_trainable_parameters() head_param = list(map(id, model.classifier.parameters())) others_param = filter(lambda p: id(p) not in head_param, model.parameters()) optimizer = AdamW([ {"params": model.classifier.parameters(), "lr": head_lr}, {"params": others_param, "lr": fft_lr} ],weight_decay=0.) 
# Instantiate scheduler lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0.06 * (len(train_dataloader) * num_epochs), num_training_steps=(len(train_dataloader) * num_epochs), )<jupyter_output><empty_output><jupyter_text>Training<jupyter_code>model.to(device) for epoch in range(num_epochs): model.train() for step, batch in enumerate(tqdm(train_dataloader)): batch.to(device) outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() for step, batch in enumerate(tqdm(eval_dataloader)): batch.to(device) with torch.no_grad(): outputs = model(**batch) predictions = outputs.logits.argmax(dim=-1) predictions, references = predictions, batch["labels"] metric.add_batch( predictions=predictions, references=references, ) eval_metric = metric.compute() print(f"epoch {epoch}:", eval_metric)<jupyter_output>0%| | 0/115 [00:00<?, ?it/s]You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. 100%|██████████| 115/115 [00:06<00:00, 19.03it/s] 100%|██████████| 13/13 [00:00<00:00, 41.72it/s]<jupyter_text>Share adapters on the 🤗 Hub<jupyter_code>account_id = ... # your Hugging Face Hub account ID model.push_to_hub(f"{account_id}/roberta-base-mrpc-peft-fourierft")<jupyter_output>/home/zgaoat/anaconda3/envs/pr2/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. warnings.warn(<jupyter_text>Load adapters from the HubYou can also directly load adapters from the Hub using the commands below:<jupyter_code>import torch from peft import PeftModel, PeftConfig from transformers import AutoTokenizer peft_model_id = f"{account_id}/roberta-base-mrpc-peft-fourierft" config = PeftConfig.from_pretrained(peft_model_id) inference_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path) tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the FourierFT model inference_model = PeftModel.from_pretrained(inference_model, peft_model_id, config=config) inference_model.to(device) inference_model.eval() for step, batch in enumerate(tqdm(eval_dataloader)): batch.to(device) with torch.no_grad(): outputs = inference_model(**batch) predictions = outputs.logits.argmax(dim=-1) predictions, references = predictions, batch["labels"] metric.add_batch( predictions=predictions, references=references, ) eval_metric = metric.compute() print(eval_metric)<jupyter_output>0%| | 0/13 [00:00<?, ?it/s]You're using a RobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. 100%|██████████| 13/13 [00:00<00:00, 43.06it/s]
peft/examples/sequence_classification/FourierFT.ipynb/0
{ "file_path": "peft/examples/sequence_classification/FourierFT.ipynb", "repo_id": "peft", "token_count": 2571 }
# Copyright 2024-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import warnings from typing import Optional from transformers import PreTrainedModel from .auto import MODEL_TYPE_TO_PEFT_MODEL_MAPPING from .config import PeftConfig from .mapping import PEFT_TYPE_TO_CONFIG_MAPPING, PEFT_TYPE_TO_PREFIX_MAPPING from .mixed_model import PeftMixedModel from .peft_model import PeftModel from .tuners.tuners_utils import BaseTuner, BaseTunerLayer from .utils import _prepare_prompt_learning_config def get_peft_model( model: PreTrainedModel, peft_config: PeftConfig, adapter_name: str = "default", mixed: bool = False, autocast_adapter_dtype: bool = True, revision: Optional[str] = None, low_cpu_mem_usage: bool = False, ) -> PeftModel | PeftMixedModel: """ Returns a Peft model object from a model and a config, where the model will be modified in-place. Args: model ([`transformers.PreTrainedModel`]): Model to be wrapped. peft_config ([`PeftConfig`]): Configuration object containing the parameters of the Peft model. adapter_name (`str`, `optional`, defaults to `"default"`): The name of the adapter to be injected, if not provided, the default adapter name is used ("default"). mixed (`bool`, `optional`, defaults to `False`): Whether to allow mixing different (compatible) adapter types. autocast_adapter_dtype (`bool`, *optional*): Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights using float16 or bfloat16 to float32, as this is typically required for stable training, and only affect select PEFT tuners. revision (`str`, `optional`, defaults to `main`): The revision of the base model. If this isn't set, the saved peft model will load the `main` revision for the base model low_cpu_mem_usage (`bool`, `optional`, defaults to `False`): Create empty adapter weights on meta device. Useful to speed up the loading process. Leave this setting as False if you intend on training the model, unless the adapter weights will be replaced by different weights before training starts. """ model_config = BaseTuner.get_model_config(model) old_name = peft_config.base_model_name_or_path new_name = model.__dict__.get("name_or_path", None) peft_config.base_model_name_or_path = new_name # Especially in notebook environments there could be a case that a user wants to experiment with different # configuration values. However, it is likely that there won't be any changes for new configs on an already # initialized PEFT model. The best we can do is warn the user about it. if any(isinstance(module, BaseTunerLayer) for module in model.modules()): warnings.warn( "You are trying to modify a model with PEFT for a second time. If you want to reload the model with a " "different config, make sure to call `.unload()` before." ) if (old_name is not None) and (old_name != new_name): warnings.warn( f"The PEFT config's `base_model_name_or_path` was renamed from '{old_name}' to '{new_name}'. 
" "Please ensure that the correct base model is loaded when loading this checkpoint." ) if revision is not None: if peft_config.revision is not None and peft_config.revision != revision: warnings.warn( f"peft config has already set base model revision to {peft_config.revision}, overwriting with revision {revision}" ) peft_config.revision = revision if ( (isinstance(peft_config, PEFT_TYPE_TO_CONFIG_MAPPING["LORA"])) and (peft_config.init_lora_weights == "eva") and not low_cpu_mem_usage ): warnings.warn( "lora with eva initialization used with low_cpu_mem_usage=False. " "Setting low_cpu_mem_usage=True can improve the maximum batch size possible for eva initialization." ) prefix = PEFT_TYPE_TO_PREFIX_MAPPING.get(peft_config.peft_type) if prefix and adapter_name in prefix: warnings.warn( f"Adapter name {adapter_name} should not be contained in the prefix {prefix}." "This may lead to reinitialization of the adapter weights during loading." ) if mixed: # note: PeftMixedModel does not support autocast_adapter_dtype, so don't pass it return PeftMixedModel(model, peft_config, adapter_name=adapter_name) if peft_config.task_type not in MODEL_TYPE_TO_PEFT_MODEL_MAPPING.keys() and not peft_config.is_prompt_learning: return PeftModel( model, peft_config, adapter_name=adapter_name, autocast_adapter_dtype=autocast_adapter_dtype, low_cpu_mem_usage=low_cpu_mem_usage, ) if peft_config.is_prompt_learning: peft_config = _prepare_prompt_learning_config(peft_config, model_config) return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type]( model, peft_config, adapter_name=adapter_name, autocast_adapter_dtype=autocast_adapter_dtype, low_cpu_mem_usage=low_cpu_mem_usage, )
peft/src/peft/mapping_func.py/0
{ "file_path": "peft/src/peft/mapping_func.py", "repo_id": "peft", "token_count": 2208 }
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math import torch import torch.nn as nn import torch.nn.functional as F from .config import TRANSFORMERS_MODEL_CONFIG class AdaptedAttention(nn.Module): """This module wraps a LLamaAttention module and injects adaption prompts.""" def __init__(self, model_type: str, adapter_len: int, model): """ Initialize object. Args: model_type: The transformer model type. This is used to retrieve the right method to compute query states. adapter_len: The length of the adaption prompt to insert. model: The original transformer attention module that is being wrapped. """ assert not isinstance(model, AdaptedAttention) super().__init__() self.model_type = model_type self.model = model self.adapter_len = adapter_len # Assume all parameters of the attention model we are wrapping are on the same device. device = next(model.parameters()).device # Don't think this was specified in the paper, but we follow the official repo which used an Embedding # which initializes the tokens with standard normal values. # https://github.com/ZrrSkywalker/LLaMA-Adapter/blob/41c3546fe1997ab8a65809dc8d8f9252b19d9faf/llama/model.py#L234 # (bsz, adapter_len, hidden_size) target_dtype = ( model.q_proj.weight.dtype if model.q_proj.weight.dtype not in [torch.int8, torch.uint8] else torch.float32 ) if hasattr(self.model, "hidden_size"): # TODO: remove this clause after 2026-01-01 hidden_size = self.model.hidden_size else: # changed in https://github.com/huggingface/transformers/pull/35235 hidden_size = self.model.config.hidden_size self.adaption_prompt = nn.Parameter( torch.empty(1, adapter_len, hidden_size, device=device, dtype=target_dtype).normal_() ) # Initialize the gate to 0 as this is "zero-init". self.adaption_gate = nn.Parameter(torch.zeros(1, device=device, dtype=target_dtype)) def forward(self, **kwargs): """ Forward pass for the adapter which wraps the original LlamaAttention module. "Official" paper implementation: https://github.com/ZrrSkywalker/LLaMA-Adapter/blob/41c3546fe1997ab8a65809dc8d8f9252b19d9faf/llama/model.py#L141 Args: kwargs: See the original LlamaAttention module. 
""" if kwargs.get("output_attention", False): raise NotImplementedError("output_attention is not currently supported.") output, *_ = self.model(**kwargs) bsz = output.shape[0] q_len = output.shape[1] embed_dim = output.shape[2] k_proj_layer = TRANSFORMERS_MODEL_CONFIG[self.model_type].k_proj_layer v_proj_layer = TRANSFORMERS_MODEL_CONFIG[self.model_type].v_proj_layer o_proj_layer = TRANSFORMERS_MODEL_CONFIG[self.model_type].o_proj_layer factor = ( self.model.k_proj.in_features // self.model.k_proj.out_features ) # Mistral has different input and output dimension for k_proj and v_proj layers if k_proj_layer == v_proj_layer: _, key, value = getattr(self.model, k_proj_layer)(self.adaption_prompt).split(embed_dim, dim=2) else: key = getattr(self.model, k_proj_layer)(self.adaption_prompt) value = getattr(self.model, v_proj_layer)(self.adaption_prompt) if hasattr(self.model, "num_heads"): # TODO: remove this clause after 2026-01-01 num_heads = self.model.num_heads else: # changed in https://github.com/huggingface/transformers/pull/35235 num_heads = self.model.config.num_attention_heads # (bsz, num_key_value_heads, adapter_len, head_dim) adapter_k = ( key.view(1, self.adapter_len, (num_heads // factor), self.model.head_dim) .repeat(bsz, 1, 1, 1) .transpose(1, 2) ) adapter_v = ( value.view(1, self.adapter_len, (num_heads // factor), self.model.head_dim) .repeat(bsz, 1, 1, 1) .transpose(1, 2) ) # Below is taken from https://github.com/huggingface/transformers/blob/e547458c43dfdbbb8f6a7757237e234c44e20a8f/src/transformers/models/mistral/modeling_mistral.py#L181 # (bsz, num_heads, adapter_len, head_dim) adapter_k = torch.repeat_interleave(adapter_k, repeats=factor, dim=1) adapter_v = torch.repeat_interleave(adapter_v, repeats=factor, dim=1) # Recompute query states. compute_query_states = TRANSFORMERS_MODEL_CONFIG[self.model_type].compute_query_states # (bsz, num_heads, q_len, head_dim) query_states = compute_query_states(model=self.model, **kwargs) previous_dtype = query_states.dtype # (bsz, num_heads, q_len, adapter_len) scores = torch.matmul(query_states, adapter_k.transpose(2, 3).to(previous_dtype)) / math.sqrt( self.model.head_dim ) # Upcast attention to fp32 # (bsz, num_heads, q_len, adapter_len) scores = self.adaption_gate * F.softmax(scores, dim=-1, dtype=torch.float32).to(previous_dtype) # (bsz, q_len, num_heads * head_dim) adapter_output = torch.matmul(scores, adapter_v).transpose(1, 2).reshape(bsz, q_len, -1) # (bsz, q_len, hidden_size) if o_proj_layer is not None: adapter_output = getattr(self.model, o_proj_layer)(adapter_output) # Add adaption prompt output to original output. output = output + adapter_output # Restore original dtype. output = output.to(previous_dtype) return output, *_
peft/src/peft/tuners/adaption_prompt/layer.py/0
{ "file_path": "peft/src/peft/tuners/adaption_prompt/layer.py", "repo_id": "peft", "token_count": 2705 }
# Copyright 2024-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import torch from torch.nn import CrossEntropyLoss from peft.utils.integrations import gather_params_ctx class CPTEmbedding(torch.nn.Module): """ CPTEmbedding is a custom embedding layer designed for Context-aware Prompt Tuning (CPT) in PEFT. It initializes embeddings, applies prompt-specific projections, and computes loss using label masks. """ def __init__(self, config, word_embeddings): """ Initializes the CPTEmbedding module. Args: config (Namespace): Configuration object containing model hyperparameters and CPT-specific settings. word_embeddings (torch.nn.Embedding): The base word embedding layer used to initialize CPT embeddings. """ super().__init__() self.config = copy.deepcopy(config) num_virtual_tokens = config.num_virtual_tokens # Initialize embeddings with virtual token dimensions self.embedding = torch.nn.Embedding(num_virtual_tokens, config.token_dim) # Initialize embeddings using text-based prompt tuning, if configured if not config.inference_mode: assert config.num_virtual_tokens == len(config.cpt_token_ids) init_token_ids = torch.LongTensor(config.cpt_token_ids).to(word_embeddings.weight.device) with gather_params_ctx(word_embeddings.parameters()): word_embedding_weights = word_embeddings(init_token_ids).detach().clone() word_embedding_weights = word_embedding_weights.to(torch.float32) self.embedding.weight = torch.nn.Parameter(word_embedding_weights) # Initialize delta embedding with zero weights self.delta_embedding = torch.nn.Embedding(num_virtual_tokens, config.token_dim) self.delta_embedding.weight.data = torch.zeros_like(self.delta_embedding.weight).to(torch.float32) # Apply hook for backward gradient updates self.set_updated_tokens() def forward(self, indices): """ Computes the prompt embeddings and applies delta adjustments. Args: indices (torch.Tensor): Indices of the tokens to be embedded. Returns: torch.Tensor: Sum of prompt embeddings and delta embeddings. """ with torch.no_grad(): prompt_embeddings = self.embedding(indices) self.delta_embedding.weight.data = self.get_projection() # Apply epsilon-based projection delta_prompt_embeddings = self.delta_embedding(indices) return prompt_embeddings + delta_prompt_embeddings def set_updated_tokens(self): """ Sets up a backward hook to selectively update token gradients based on the CPT token type mask. 
""" tensor_ICL_mask = torch.Tensor(self.config.cpt_tokens_type_mask).long() mask_input_template = torch.remainder(tensor_ICL_mask, 4) == 1 mask_input = torch.remainder(tensor_ICL_mask, 4) == 2 mask_output_template = torch.remainder(tensor_ICL_mask, 4) == 3 mask = mask_input_template | mask_input | mask_output_template mask = mask.view(-1, 1) def backward_hook(grad): grad = grad * mask.to(grad.device) # Apply mask to gradients return grad self.delta_embedding.weight.register_hook(backward_hook) def get_epsilon(self): cpt_tokens_type_mask = self.config.cpt_tokens_type_mask MIN_VALUE = 1e-10 # Calculate normalized epsilon values for input, output, and format tokens normalized_format_eps = self.config.opt_projection_format_epsilon * torch.sqrt( torch.Tensor([self.config.token_dim / 2048]) ) normalized_input_eps = self.config.opt_projection_epsilon * torch.sqrt( torch.Tensor([self.config.token_dim / 2048]) ) epsilon = torch.ones_like(torch.Tensor(cpt_tokens_type_mask)).to(torch.float32) * MIN_VALUE cpt_tokens_type_mask = torch.Tensor(cpt_tokens_type_mask).long() epsilon[(cpt_tokens_type_mask > 0) & (torch.remainder(cpt_tokens_type_mask, 4) == 1)] = normalized_format_eps epsilon[(cpt_tokens_type_mask > 0) & (torch.remainder(cpt_tokens_type_mask, 4) == 3)] = normalized_format_eps epsilon[(cpt_tokens_type_mask > 0) & (torch.remainder(cpt_tokens_type_mask, 4) == 2)] = normalized_input_eps return epsilon def get_projection(self): """ Applies epsilon-based projection to the delta embeddings to control their norm. """ # Apply projection to control delta embedding norm with torch.no_grad(): new_embeddings_weights = self.delta_embedding.weight.clone().to(self.delta_embedding.weight.device) token_norm = torch.norm(new_embeddings_weights, p=2, dim=1) projection_mask = token_norm > 0 if torch.any(projection_mask): epsilon = self.get_epsilon().to(self.delta_embedding.weight.device) new_embeddings_weights[projection_mask] *= ( epsilon[projection_mask] / (token_norm[projection_mask].clamp(min=epsilon[projection_mask])) ).view(-1, 1) return new_embeddings_weights @staticmethod def calculate_loss(base_model_output, labels, cpt_type_mask, config): """ Computes the loss for CPT models with optional exponential decay. Args: base_model_output (ModelOutput): Output from the base model containing logits. labels (torch.Tensor): Ground-truth labels for the input tokens. cpt_type_mask (torch.Tensor): Token type mask used for filtering valid loss terms. config (Namespace): Configuration object containing loss-related hyperparameters. Returns: ModelOutput: The base model output with computed loss. 
""" device = base_model_output.logits.device lm_logits = base_model_output.logits labels = labels.to(device) # Shift logits and labels for token prediction shift_logits = lm_logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() shift_cpt_type_mask = cpt_type_mask[..., 1:].contiguous() shift_labels_bool = (shift_labels.clone().detach() != -100).bool() batch_size, seq_length, vocab_size = shift_logits.shape # Compute cross-entropy loss loss_fct = CrossEntropyLoss(reduction="none", ignore_index=-100) loss = loss_fct( shift_logits.view(batch_size * seq_length, vocab_size), shift_labels.view(batch_size * seq_length) ) loss = loss.view(batch_size, seq_length) # Apply exponential decay weights to the loss shift_labels_weights = shift_labels_bool.clone().detach().float() for i in range(batch_size): idx_labels = (shift_cpt_type_mask[i] > 0) & (shift_cpt_type_mask[i] % 4 == 0) labels_ids = shift_cpt_type_mask[i][idx_labels].unique() exponential_decay = torch.ones_like(shift_cpt_type_mask[i]).to(device=device).float() decay_value = 1 for label_mask_idx in torch.flip(labels_ids, [0]): exponential_decay[shift_cpt_type_mask[i] == label_mask_idx] = decay_value decay_value *= config.opt_loss_decay_factor if config.opt_weighted_loss_type == "decay": shift_labels_weights[i] *= exponential_decay # Compute the weighted mean loss loss = (loss[shift_labels_bool] * shift_labels_weights[shift_labels_bool]).mean() base_model_output.loss = loss return base_model_output
peft/src/peft/tuners/cpt/model.py/0
{ "file_path": "peft/src/peft/tuners/cpt/model.py", "repo_id": "peft", "token_count": 3563 }
# Copyright 2024-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import warnings from copy import deepcopy from typing import List, Optional import torch import torch.nn as nn from peft.tuners.tuners_utils import BaseTunerLayer, check_adapters_to_merge class LNTuningLayer(nn.Module, BaseTunerLayer): """ Selects a layer from the model. """ adapter_layer_names = ("ln_tuning_layers",) def __init__(self, base_layer: nn.Module, adapter_name: str): super().__init__() self.base_layer = base_layer self.ln_tuning_layers = nn.ModuleDict({}) self.update_layer(self.base_layer, adapter_name) self._active_adapter = adapter_name self.merged_adapters = [] def update_layer(self, layer: nn.Module, adapter_name: str): self.ln_tuning_layers[adapter_name] = deepcopy(layer) def enable_adapters(self, enabled: bool) -> None: """Toggle the enabling and disabling of adapters Takes care of setting the requires_grad flag for the adapter weights. Args: enabled (bool): True to enable adapters, False to disable adapters """ if enabled: self.set_adapter(self.active_adapters) self._disable_adapters = False else: if self.merged: self.unmerge() # disable grads on all adapter layers for layer_name in self.adapter_layer_names: layer = getattr(self, layer_name) layer.requires_grad_(False) self._disable_adapters = True def merge(self, adapter_names: Optional[List[str]] = None): adapter_names = check_adapters_to_merge(self, adapter_names) if not adapter_names: # no adapter to merge return if len(adapter_names) > 1: raise ValueError( f"Trying to merge {len(adapter_names)} adapters, but LN " f"tuning does not allow merging more than one adapter at a time" ) merged_adapters = set(self.merged_adapters) if merged_adapters: warnings.warn(f"Already merged with {merged_adapters}. Unmerging first.") self.unmerge() self.base_layer, self.ln_tuning_layers[adapter_names[0]] = ( self.ln_tuning_layers[adapter_names[0]], self.base_layer, ) self.merged_adapters.append(adapter_names[0]) def unmerge(self): if not self.merged: warnings.warn("Already unmerged. Nothing to do.") return # popping one element is sufficient because LN # tuning does not allow merging more than one adapter at a time. merged_name = self.merged_adapters.pop() self.base_layer, self.ln_tuning_layers[merged_name] = ( self.ln_tuning_layers[merged_name], self.base_layer, ) def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor: if self.disable_adapters: if self.merged: self.unmerge() result = self.base_layer(x, *args, **kwargs) elif self.merged: result = self.base_layer(x, *args, **kwargs) else: if len(self.active_adapters) != 1: raise ValueError( f"Trying to run forward with {len(self.active_adapters)} active " f"adapters, but LN tuning does not allow inference with more than one adapter at a time" ) active_adapter = self.active_adapters[0] result = self.ln_tuning_layers[active_adapter](x, *args, **kwargs) return result def __repr__(self) -> str: rep = super().__repr__() return "ln_tuning." + rep
peft/src/peft/tuners/ln_tuning/layer.py/0
{ "file_path": "peft/src/peft/tuners/ln_tuning/layer.py", "repo_id": "peft", "token_count": 1852 }
# Copyright 2024-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from copy import deepcopy import torch import torch.nn.functional as F from torch import nn from peft.utils.integrations import dequantize_module_weight, gather_params_ctx from peft.utils.other import transpose class DoraLinearLayer(nn.Module): def __init__(self, fan_in_fan_out): super().__init__() self.fan_in_fan_out = fan_in_fan_out def get_weight_norm(self, weight, lora_weight, scaling) -> torch.Tensor: # calculate L2 norm of weight matrix, column-wise weight = transpose(weight, self.fan_in_fan_out) weight = weight + scaling * lora_weight weight_norm = torch.linalg.norm(weight, dim=1).to(weight.dtype) return weight_norm def update_layer(self, *, base_layer, lora_A, lora_B, scaling, place_on_cpu=False) -> None: # temporarily convert fp16 to fp32, as fp16 can cause trouble on CPU with PyTorch < 2.2 dtype_is_fp16 = lora_A.dtype == torch.float16 if dtype_is_fp16: lora_A = lora_A.float() lora_B = lora_B.float() with gather_params_ctx(base_layer.parameters()): if base_layer.__class__.__name__ == "Linear4bit": # We have to create a copy of the base layer, otherwise, FSDP will throw an error. 8bit does not work # yet because Int8Params cannot be correctly deep-copied (attributes vanish) base_layer = deepcopy(base_layer) weight = dequantize_module_weight(base_layer) if weight.data.ndim >= 4: # For handling LoRAs applied to Conv layers. lora_weight = torch.mm(lora_B.flatten(start_dim=1), lora_A.flatten(start_dim=1)) lora_weight = lora_weight.reshape(weight.shape) else: lora_weight = lora_B @ lora_A if dtype_is_fp16: lora_weight = lora_weight.half() weight_norm = self.get_weight_norm(weight.to(lora_A.device), lora_weight, scaling) if place_on_cpu: weight_norm = weight_norm.to("cpu") self.weight = nn.Parameter(weight_norm, requires_grad=True) def forward(self, x, *, lora_A, lora_B, scaling, base_layer, base_result=None): """ For DoRA, calculate the extra output from LoRA with DoRA applied. This should be added on top of the base layer output. """ # Don't use `lora_weight = lora_B.weight @ lora_A.weight` because this causes errors with FSDP. Instead, # calculate the same but using forward. x_eye = torch.eye(lora_A.weight.shape[1], device=lora_A.weight.device, dtype=x.dtype) lora_weight = lora_B(lora_A(x_eye)).T magnitude = self.weight weight = dequantize_module_weight(base_layer) weight = weight.to(x.dtype) weight_norm = self.get_weight_norm(weight, lora_weight.detach(), scaling) # see section 4.3 of DoRA (https://arxiv.org/abs/2402.09353) # "[...] we suggest treating ||V +∆V ||_c in # Eq. (5) as a constant, thereby detaching it from the gradient # graph. 
This means that while ||V + ∆V ||_c dynamically # reflects the updates of ∆V , it won’t receive any gradient # during backpropagation" weight_norm = weight_norm.detach() mag_norm_scale = (magnitude / weight_norm).view(1, -1) lora_result = lora_B(lora_A(x)) bias = None if base_result is not None: bias = base_layer.bias if bias is not None: base_result = base_result - bias else: base_result = F.linear(x, transpose(weight, self.fan_in_fan_out)) result_dora = (mag_norm_scale - 1) * base_result + mag_norm_scale * lora_result * scaling return result_dora def __repr__(self) -> str: rep = super().__repr__() return "lora.dora." + rep class DoraEmbeddingLayer(DoraLinearLayer): def forward(self, x, *, lora_A, lora_B, scaling, base_layer, embed_fn): """ For DoRA, calculate the extra output from LoRA with DoRA applied. This should be added on top of the base layer output. """ lora_weight = (lora_A @ lora_B).T magnitude = self.weight weight = base_layer.weight weight_norm = self.get_weight_norm(weight, lora_weight.detach(), scaling) # see section 4.3 of DoRA (https://arxiv.org/abs/2402.09353) # "[...] we suggest treating ||V +∆V ||_c in # Eq. (5) as a constant, thereby detaching it from the gradient # graph. This means that while ||V + ∆V ||_c dynamically # reflects the updates of ∆V , it won’t receive any gradient # during backpropagation" weight_norm = weight_norm.detach() mag_norm_scale = magnitude / weight_norm result_dora = mag_norm_scale * (embed_fn(x, lora_A) @ lora_B) * scaling return mag_norm_scale, result_dora def __repr__(self) -> str: rep = super().__repr__() return "lora.dora." + rep class _DoraConvNdLayer(DoraLinearLayer): def get_weight_norm(self, weight, lora_weight, scaling) -> torch.Tensor: # calculate L2 norm of weight matrix, column-wise weight = weight + scaling * lora_weight # the following is needed to have compatibility with the 4/5D weight tensors of Conv2D/3D dim = tuple(range(1, weight.dim())) weight_norm = weight.norm(p=2, dim=dim, keepdim=True).transpose(1, 0) return weight_norm def forward(self, x, *, lora_A, lora_B, scaling, base_layer): """ For DoRA, calculate the extra output from LoRA with DoRA applied. This should be added on top of the base layer output. """ weight = base_layer.weight lora_weight = torch.mm(lora_B.weight.flatten(start_dim=1), lora_A.weight.flatten(start_dim=1)) lora_weight = lora_weight.reshape(weight.shape) magnitude = self.weight weight_norm = self.get_weight_norm(weight, lora_weight.detach(), scaling) # see section 4.3 of DoRA (https://arxiv.org/abs/2402.09353) # "[...] we suggest treating ||V +∆V ||_c in # Eq. (5) as a constant, thereby detaching it from the gradient # graph. This means that while ||V + ∆V ||_c dynamically # reflects the updates of ∆V , it won’t receive any gradient # during backpropagation" weight_norm = weight_norm.detach() mag_norm_scale = magnitude / weight_norm result_dora = (mag_norm_scale - 1) * ( self.conv_fn( x, weight, bias=None, stride=base_layer.stride, padding=base_layer.padding, dilation=base_layer.dilation, groups=base_layer.groups, ) ) + mag_norm_scale * lora_B(lora_A(x)) * scaling return result_dora def __repr__(self) -> str: rep = super().__repr__() return "lora.dora." + rep class DoraConv2dLayer(_DoraConvNdLayer): def __init__(self, fan_in_fan_out): super().__init__(fan_in_fan_out) self.conv_fn = F.conv2d class DoraConv3dLayer(_DoraConvNdLayer): def __init__(self, fan_in_fan_out): super().__init__(fan_in_fan_out) self.conv_fn = F.conv3d
peft/src/peft/tuners/lora/dora.py/0
{ "file_path": "peft/src/peft/tuners/lora/dora.py", "repo_id": "peft", "token_count": 3424 }
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations from dataclasses import dataclass, field from typing import Literal, Optional, Union from peft.config import PeftConfig from peft.utils import PeftType @dataclass class OFTConfig(PeftConfig): """ This is the configuration class to store the configuration of a [`OFTModel`]. Args: r (`int`): OFT rank, number of OFT blocks per injected layer. oft_block_size (`int`): OFT block size across different layers. module_dropout (`float`): The multiplicative dropout probability, by setting OFT blocks to identity during training, similar to the dropout layer in LoRA. target_modules (`Optional[Union[list[str], str]]`): The names of the modules to apply the adapter to. If this is specified, only the modules with the specified names will be replaced. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. If this is specified as 'all-linear', then all linear modules are chosen, excluding the output layer. If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised -- in this case, you should specify the target modules manually. fan_in_fan_out (`bool`): Set this to True if the layer to replace stores weight like (fan_in, fan_out). bias (`str`): Bias type for OFT. Can be 'none', 'all' or 'oft_only'. If 'all' or 'oft_only', the corresponding biases will be updated during training. Be aware that this means that, even when disabling the adapters, the model will not produce the same output as the base model would have without adaptation. exclude_modules (`Optional[Union[List[str], str]]`): The names of the modules to not apply the adapter. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. init_weights (`bool`): Whether to perform initialization of OFT weights. layers_to_transform (`Union[List[int], int]`): The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices that are specified in this list. If a single integer is passed, it will apply the transformations on the layer at this index. layers_pattern (`Optional[Union[List[str], str]]`): The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`. rank_pattern (`dict`): The mapping from layer names or regexp expression to ranks which are different from the default rank specified by `r`. modules_to_save (`List[str]`): List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint. 
coft (`bool`): Whether to use the constrained variant of OFT or not, off by default. eps (`float`): The control strength of COFT. The freedom of rotation. Only has an effect if `coft` is set to True. block_share (`bool`): Whether to share the OFT parameters between blocks or not. This is `False` by default. """ r: int = field(default=8, metadata={"help": "OFT rank, number of OFT blocks per injected layer."}) oft_block_size: int = field( default=0, metadata={ "help": "OFT block size across different layers.", "note": "You can only specify either r or oft_block_size, but not both simultaneously, because r x oft_block_size = layer dimension.", }, ) module_dropout: float = field( default=0.0, metadata={ "help": "OFT multiplicative dropout, randomly setting blocks of OFT to be identity matrix, similar to the dropout layer in LoRA." }, ) target_modules: Optional[Union[list[str], str]] = field( default=None, metadata={ "help": "List of module names or regex expression of the module names to replace with OFT." "For example, ['q', 'v'] or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$' " "This can also be a wildcard 'all-linear' which matches all linear/Conv1D layers except the output layer." }, ) fan_in_fan_out: bool = field( default=False, metadata={"help": "Set this to True if the layer to replace stores weight like (fan_in, fan_out)"}, ) bias: Literal["none", "all", "oft_only"] = field( default="none", metadata={"help": "Bias type for OFT. Can be 'none', 'all' or 'oft_only'"} ) exclude_modules: Optional[Union[list[str], str]] = field( default=None, metadata={"help": "List of module names or regex expression of the module names to exclude from OFT."}, ) init_weights: bool = field( default=True, metadata={ "help": ( "Whether to initialize the weights of the OFT layers with their default initialization. Don't change " "this setting, except if you know exactly what you're doing." ), }, ) layers_to_transform: Optional[Union[list[int], int]] = field( default=None, metadata={ "help": "The layer indexes to transform, is this argument is specified, PEFT will transform only the layers indexes that are specified inside this list. If a single integer is passed, PEFT will transform only the layer at this index." }, ) layers_pattern: Optional[Union[list[str], str]] = field( default=None, metadata={ "help": "The layer pattern name, used only if `layers_to_transform` is different to None and if the layer pattern is not in the common layers pattern. " "This should target the `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`." }, ) modules_to_save: Optional[list[str]] = field( default=None, metadata={ "help": "List of modules apart from OFT layers to be set as trainable and saved in the final checkpoint. " "For example, in Sequence Classification or Token Classification tasks, " "the final layer `classifier/score` are randomly initialized and as such need to be trainable and saved." }, ) coft: bool = field( default=False, metadata={"help": "Whether to use the constrained variant of OFT or not."}, ) eps: float = field( default=6e-5, metadata={ "help": "The control strength of COFT. The freedom of rotation. Only has an effect if `coft` is set to True." }, ) block_share: bool = field( default=False, metadata={"help": "Whether to share the OFT parameters between blocks or not."}, ) rank_pattern: Optional[dict] = field( default_factory=dict, metadata={ "help": ( "The mapping from layer names or regexp expression to ranks which are different from the default rank specified by `r`. 
" "For example, `{model.decoder.layers.0.encoder_attn.k_proj: 8`}" "Important: the rank pattern won't be applied to the layers after 0.12.1.dev0!" ) }, ) alpha_pattern: Optional[dict] = field( default_factory=dict, metadata={ "help": ( "The mapping from layer names or regexp expression to alphas which are different from the default alpha specified by `alpha`. " "For example, `{model.decoder.layers.0.encoder_attn.k_proj: 32`}" "Important: the alpha pattern won't be applied to the layers after 0.12.1.dev0!" ) }, ) def __post_init__(self): super().__post_init__() self.peft_type = PeftType.OFT self.target_modules = ( set(self.target_modules) if isinstance(self.target_modules, list) else self.target_modules ) self.exclude_modules = ( set(self.exclude_modules) if isinstance(self.exclude_modules, list) else self.exclude_modules ) # check for layers_to_transform and layers_pattern if self.layers_pattern and not self.layers_to_transform: raise ValueError("When `layers_pattern` is specified, `layers_to_transform` must also be specified. ") if self.r == 0 and self.oft_block_size == 0: raise ValueError( f"Either `r` or `oft_block_size` must be non-zero. Currently, r = {self.r} and oft_block_size = {self.oft_block_size}." ) if not (self.r != 0) ^ (self.oft_block_size != 0): raise ValueError( f"You can only specify either r ({self.r}) or oft_block_size ({self.oft_block_size}), but not both simultaneously, because r x oft_block_size == in_features." ) @classmethod def check_kwargs(cls, **kwargs): r""" Check if the kwargs are valid for the configuration. Args: kwargs (additional keyword arguments, *optional*): Additional keyword arguments passed along to the child class initialization. """ if "oft_block_size" not in kwargs: raise ValueError( "OFT has been updated since PEFT 0.14.0. Your trained adapter weights are incompatible " "with the latest version of OFT. Please retrain your adapter weights with newer PEFT versions. " "Alternatively, downgrade PEFT to version 0.13.0 to use the old adapter weights." ) return super().check_kwargs(**kwargs)
peft/src/peft/tuners/oft/config.py/0
{ "file_path": "peft/src/peft/tuners/oft/config.py", "repo_id": "peft", "token_count": 4034 }
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math import torch from peft.utils.integrations import gather_params_ctx from .config import PromptTuningInit class PromptEmbedding(torch.nn.Module): """ The model to encode virtual tokens into prompt embeddings. Args: config ([`PromptTuningConfig`]): The configuration of the prompt embedding. word_embeddings (`torch.nn.Module`): The word embeddings of the base transformer model. **Attributes**: - **embedding** (`torch.nn.Embedding`) -- The embedding layer of the prompt embedding. Example: ```py >>> from peft import PromptEmbedding, PromptTuningConfig >>> config = PromptTuningConfig( ... peft_type="PROMPT_TUNING", ... task_type="SEQ_2_SEQ_LM", ... num_virtual_tokens=20, ... token_dim=768, ... num_transformer_submodules=1, ... num_attention_heads=12, ... num_layers=12, ... prompt_tuning_init="TEXT", ... prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral", ... tokenizer_name_or_path="t5-base", ... ) >>> # t5_model.shared is the word embeddings of the base model >>> prompt_embedding = PromptEmbedding(config, t5_model.shared) ``` Input Shape: (`batch_size`, `total_virtual_tokens`) Output Shape: (`batch_size`, `total_virtual_tokens`, `token_dim`) """ def __init__(self, config, word_embeddings): super().__init__() total_virtual_tokens = config.num_virtual_tokens * config.num_transformer_submodules self.embedding = torch.nn.Embedding(total_virtual_tokens, config.token_dim) if config.prompt_tuning_init == PromptTuningInit.TEXT and not config.inference_mode: from transformers import AutoTokenizer tokenizer_kwargs = config.tokenizer_kwargs or {} tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name_or_path, **tokenizer_kwargs) init_text = config.prompt_tuning_init_text init_token_ids = tokenizer(init_text)["input_ids"] # Trim or iterate until num_text_tokens matches total_virtual_tokens num_text_tokens = len(init_token_ids) if num_text_tokens > total_virtual_tokens: init_token_ids = init_token_ids[:total_virtual_tokens] elif num_text_tokens < total_virtual_tokens: num_reps = math.ceil(total_virtual_tokens / num_text_tokens) init_token_ids = init_token_ids * num_reps init_token_ids = init_token_ids[:total_virtual_tokens] init_token_ids = torch.LongTensor(init_token_ids).to(word_embeddings.weight.device) with gather_params_ctx(word_embeddings.parameters()): word_embedding_weights = word_embeddings(init_token_ids).detach().clone() word_embedding_weights = word_embedding_weights.to(torch.float32) self.embedding.weight = torch.nn.Parameter(word_embedding_weights) def forward(self, indices): # Just get embeddings prompt_embeddings = self.embedding(indices) return prompt_embeddings
peft/src/peft/tuners/prompt_tuning/model.py/0
{ "file_path": "peft/src/peft/tuners/prompt_tuning/model.py", "repo_id": "peft", "token_count": 1486 }
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from .integrations import map_cache_to_layer_device_map from .loftq_utils import replace_lora_weights_loftq from .other import ( CONFIG_NAME, INCLUDE_LINEAR_LAYERS_SHORTHAND, SAFETENSORS_WEIGHTS_NAME, TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING, TRANSFORMERS_MODELS_TO_FOURIERFT_TARGET_MODULES_MAPPING, TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING, TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING, TRANSFORMERS_MODELS_TO_LNTUNING_TARGET_MODULES_MAPPING, TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING, TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING, TRANSFORMERS_MODELS_TO_VBLORA_TARGET_MODULES_MAPPING, TRANSFORMERS_MODELS_TO_VERA_TARGET_MODULES_MAPPING, WEIGHTS_NAME, ModulesToSaveWrapper, _freeze_adapter, _get_batch_size, _get_submodules, _is_valid_match, _prepare_prompt_learning_config, _set_adapter, _set_trainable, bloom_model_postprocess_past_key_value, cast_mixed_precision_params, get_auto_gptq_quant_linear, get_gptqmodel_quant_linear, get_quantization_config, id_tensor_storage, infer_device, prepare_model_for_kbit_training, shift_tokens_right, transpose, ) from .peft_types import PeftType, TaskType, register_peft_method from .save_and_load import get_peft_model_state_dict, load_peft_weights, set_peft_model_state_dict __all__ = [ "CONFIG_NAME", "INCLUDE_LINEAR_LAYERS_SHORTHAND", "SAFETENSORS_WEIGHTS_NAME", "TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING", "TRANSFORMERS_MODELS_TO_FOURIERFT_TARGET_MODULES_MAPPING", "TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING", "TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING", "TRANSFORMERS_MODELS_TO_LNTUNING_TARGET_MODULES_MAPPING", "TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING", "TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING", "TRANSFORMERS_MODELS_TO_VBLORA_TARGET_MODULES_MAPPING", "TRANSFORMERS_MODELS_TO_VERA_TARGET_MODULES_MAPPING", "WEIGHTS_NAME", "ModulesToSaveWrapper", "PeftType", "TaskType", "_freeze_adapter", "_get_batch_size", "_get_submodules", "_is_valid_match", "_prepare_prompt_learning_config", "_set_adapter", "_set_trainable", "bloom_model_postprocess_past_key_value", "cast_mixed_precision_params", "get_auto_gptq_quant_linear", "get_gptqmodel_quant_linear", "get_peft_model_state_dict", "get_quantization_config", "id_tensor_storage", "infer_device", "load_peft_weights", "map_cache_to_layer_device_map", "prepare_model_for_kbit_training", "register_peft_method", "replace_lora_weights_loftq", "set_peft_model_state_dict", "shift_tokens_right", "transpose", ]
peft/src/peft/utils/__init__.py/0
{ "file_path": "peft/src/peft/utils/__init__.py", "repo_id": "peft", "token_count": 1463 }
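A hedged usage sketch for a couple of the helpers re-exported above; the tiny test model id and default LoRA settings are arbitrary choices for illustration.

```python
from transformers import AutoModelForCausalLM

from peft import LoraConfig, get_peft_model
from peft.utils import get_peft_model_state_dict, infer_device

device = infer_device()  # "cuda", "xpu", "mps" or "cpu", depending on availability

base = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM")).to(device)

# Only the adapter weights are returned, i.e. what ends up in adapter_model.safetensors.
adapter_state = get_peft_model_state_dict(model)
print(len(adapter_state), "adapter tensors on", device)
```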
# Copyright 2024-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from safetensors.torch import load_file from transformers import AutoModelForCausalLM from peft import BOFTConfig, PeftModel, get_peft_model from peft.utils import infer_device class TestBoft: device = infer_device() def test_boft_state_dict(self, tmp_path): # see #2050 # ensure that the boft_P buffer is not stored in the checkpoint file and is not necessary to load the model # correctly torch.manual_seed(0) inputs = torch.arange(10).view(-1, 1).to(self.device) model_id = "hf-internal-testing/tiny-random-OPTForCausalLM" model = AutoModelForCausalLM.from_pretrained(model_id).to(self.device) model.eval() output_base = model(inputs).logits config = BOFTConfig(init_weights=False) model = get_peft_model(model, config) model.eval() output_peft = model(inputs).logits atol, rtol = 1e-5, 1e-8 # sanity check: loading boft changed the output assert not torch.allclose(output_base, output_peft, atol=atol, rtol=rtol) model.save_pretrained(tmp_path) del model # check that the boft_P buffer is not present state_dict = load_file(tmp_path / "adapter_model.safetensors") assert not any("boft_P" in key for key in state_dict) # sanity check: the model still produces the same output after loading model = AutoModelForCausalLM.from_pretrained(model_id).to(self.device) model = PeftModel.from_pretrained(model, tmp_path) output_loaded = model(inputs).logits assert torch.allclose(output_peft, output_loaded, atol=atol, rtol=rtol) def test_boft_old_checkpoint_including_boft_P(self, tmp_path): # see #2050 # This test exists to ensure that after the boft_P buffer was made non-persistent, old checkpoints can still be # loaded successfully. torch.manual_seed(0) inputs = torch.arange(10).view(-1, 1).to(self.device) model_id = "hf-internal-testing/tiny-random-OPTForCausalLM" model = AutoModelForCausalLM.from_pretrained(model_id).to(self.device) # first create the expected output config = BOFTConfig(init_weights=False) model = get_peft_model(model, config) model.eval() output_peft = model(inputs).logits del model model = AutoModelForCausalLM.from_pretrained(model_id).to(self.device) # checkpoint from before the PR whose state_dict still contains boft_P hub_id = "peft-internal-testing/boft-tiny-opt-peft-v0.12" model = PeftModel.from_pretrained(model, hub_id) output_old = model(inputs).logits atol, rtol = 1e-5, 1e-8 assert torch.allclose(output_peft, output_old, atol=atol, rtol=rtol)
peft/tests/test_boft.py/0
{ "file_path": "peft/tests/test_boft.py", "repo_id": "peft", "token_count": 1325 }
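In the same spirit as the tests above, a hedged save/reload round trip for a BOFT adapter; the local output directory is a placeholder path.

```python
import torch
from transformers import AutoModelForCausalLM

from peft import BOFTConfig, PeftModel, get_peft_model

model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = get_peft_model(AutoModelForCausalLM.from_pretrained(model_id), BOFTConfig(init_weights=False))
model.save_pretrained("boft-adapter")  # adapter_model.safetensors should not contain boft_P buffers

# Reload onto a fresh base model and run a forward pass.
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(model_id), "boft-adapter")
print(model(torch.arange(10).view(-1, 1)).logits.shape)
```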
# Copyright 2024-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import torch from torch import nn from peft.import_utils import is_bnb_available from peft.optimizers import create_loraplus_optimizer from .testing_utils import require_bitsandbytes if is_bnb_available(): import bitsandbytes as bnb class SimpleNet(nn.Module): def __init__(self, bias=True): super().__init__() self.embedding = nn.Embedding(100, 20) self.layer_norm = nn.LayerNorm(20) self.lin0 = nn.Linear(20, 20, bias=bias) self.relu = nn.ReLU() self.lin1 = nn.Linear(20, 16, bias=bias) def forward(self, X): X = self.lin0(self.layer_norm(self.embedding(X))) X = self.relu(X) X = self.lin1(X) return X @require_bitsandbytes def test_lora_plus_helper_sucess(): model = SimpleNet() optimizer_cls = bnb.optim.Adam8bit lr = 5e-5 optim_config = { "eps": 1e-6, "betas": (0.9, 0.999), "loraplus_weight_decay": 0.0, } loraplus_lr_ratio = 1.2 loraplus_lr_embedding = 1e-6 optim = create_loraplus_optimizer( model=model, optimizer_cls=optimizer_cls, lr=lr, loraplus_lr_ratio=loraplus_lr_ratio, loraplus_lr_embedding=loraplus_lr_embedding, **optim_config, ) assert optim is not None assert len(optim.param_groups) == 4 assert optim.param_groups[0]["lr"] == lr assert optim.param_groups[1]["lr"] == loraplus_lr_embedding assert optim.param_groups[2]["lr"] == optim.param_groups[3]["lr"] == (lr * loraplus_lr_ratio) @require_bitsandbytes def test_lora_plus_optimizer_sucess(): """ Test if the optimizer is correctly created and step function runs without any exception """ optimizer_cls = bnb.optim.Adam8bit optim_config = { "eps": 1e-6, "betas": (0.9, 0.999), "loraplus_weight_decay": 0.0, } model: SimpleNet = SimpleNet().cuda() optim = create_loraplus_optimizer( model=model, optimizer_cls=optimizer_cls, lr=5e-5, loraplus_lr_ratio=1.2, loraplus_lr_embedding=1e-6, **optim_config, ) loss = torch.nn.CrossEntropyLoss() bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters()) x = torch.randint(100, (2, 4, 10)).cuda() output = model(x).permute(0, 3, 1, 2) label = torch.randint(16, (2, 4, 10)).cuda() loss_value = loss(output, label) loss_value.backward() optim.step()
peft/tests/test_loraplus.py/0
{ "file_path": "peft/tests/test_loraplus.py", "repo_id": "peft", "token_count": 1312 }
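A hedged variant of the tests above that uses a stock `torch.optim.AdamW`, so it can run without bitsandbytes or a GPU; the toy network mirrors `SimpleNet` and the learning-rate values are arbitrary.

```python
import torch
from torch import nn

from peft.optimizers import create_loraplus_optimizer


class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(100, 20)
        self.layer_norm = nn.LayerNorm(20)
        self.lin0 = nn.Linear(20, 20)
        self.lin1 = nn.Linear(20, 16)

    def forward(self, x):
        return self.lin1(torch.relu(self.lin0(self.layer_norm(self.embedding(x)))))


optimizer = create_loraplus_optimizer(
    model=TinyNet(),
    optimizer_cls=torch.optim.AdamW,
    lr=5e-5,
    loraplus_lr_ratio=16.0,      # "B" parameter groups are stepped at lr * ratio
    loraplus_lr_embedding=1e-6,  # embedding parameters get their own, smaller lr
)
print([group["lr"] for group in optimizer.param_groups])
```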
#!/usr/bin/env python3 """ Checkpoint Cleaning Script Takes training checkpoints with GPU tensors, optimizer state, extra dict keys, etc. and outputs a CPU tensor checkpoint with only the `state_dict` along with SHA256 calculation for model zoo compatibility. Hacked together by / Copyright 2020 Ross Wightman (https://github.com/rwightman) """ import torch import argparse import os import hashlib import shutil import tempfile from timm.models import load_state_dict try: import safetensors.torch _has_safetensors = True except ImportError: _has_safetensors = False parser = argparse.ArgumentParser(description='PyTorch Checkpoint Cleaner') parser.add_argument('--checkpoint', default='', type=str, metavar='PATH', help='path to latest checkpoint (default: none)') parser.add_argument('--output', default='', type=str, metavar='PATH', help='output path') parser.add_argument('--no-use-ema', dest='no_use_ema', action='store_true', help='use ema version of weights if present') parser.add_argument('--no-hash', dest='no_hash', action='store_true', help='no hash in output filename') parser.add_argument('--clean-aux-bn', dest='clean_aux_bn', action='store_true', help='remove auxiliary batch norm layers (from SplitBN training) from checkpoint') parser.add_argument('--safetensors', action='store_true', help='Save weights using safetensors instead of the default torch way (pickle).') def main(): args = parser.parse_args() if os.path.exists(args.output): print("Error: Output filename ({}) already exists.".format(args.output)) exit(1) clean_checkpoint( args.checkpoint, args.output, not args.no_use_ema, args.no_hash, args.clean_aux_bn, safe_serialization=args.safetensors, ) def clean_checkpoint( checkpoint, output, use_ema=True, no_hash=False, clean_aux_bn=False, safe_serialization: bool=False, ): # Load an existing checkpoint to CPU, strip everything but the state_dict and re-save if checkpoint and os.path.isfile(checkpoint): print("=> Loading checkpoint '{}'".format(checkpoint)) state_dict = load_state_dict(checkpoint, use_ema=use_ema) new_state_dict = {} for k, v in state_dict.items(): if clean_aux_bn and 'aux_bn' in k: # If all aux_bn keys are removed, the SplitBN layers will end up as normal and # load with the unmodified model using BatchNorm2d. continue name = k[7:] if k.startswith('module.') else k new_state_dict[name] = v print("=> Loaded state_dict from '{}'".format(checkpoint)) ext = '' if output: checkpoint_root, checkpoint_base = os.path.split(output) checkpoint_base, ext = os.path.splitext(checkpoint_base) else: checkpoint_root = '' checkpoint_base = os.path.split(checkpoint)[1] checkpoint_base = os.path.splitext(checkpoint_base)[0] temp_filename = '__' + checkpoint_base if safe_serialization: assert _has_safetensors, "`pip install safetensors` to use .safetensors" safetensors.torch.save_file(new_state_dict, temp_filename) else: torch.save(new_state_dict, temp_filename) with open(temp_filename, 'rb') as f: sha_hash = hashlib.sha256(f.read()).hexdigest() if ext: final_ext = ext else: final_ext = ('.safetensors' if safe_serialization else '.pth') if no_hash: final_filename = checkpoint_base + final_ext else: final_filename = '-'.join([checkpoint_base, sha_hash[:8]]) + final_ext shutil.move(temp_filename, os.path.join(checkpoint_root, final_filename)) print("=> Saved state_dict to '{}, SHA256: {}'".format(final_filename, sha_hash)) return final_filename else: print("Error: Checkpoint ({}) doesn't exist".format(checkpoint)) return '' if __name__ == '__main__': main()
pytorch-image-models/clean_checkpoint.py/0
{ "file_path": "pytorch-image-models/clean_checkpoint.py", "repo_id": "pytorch-image-models", "token_count": 1771 }
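A hedged sketch of calling the cleaning routine from Python instead of the CLI; the checkpoint and output paths are placeholders you would replace with real ones, and the script must be importable (e.g. run from the repository root).

```python
import os

from clean_checkpoint import clean_checkpoint  # top-level script, importable from the repo root

os.makedirs("cleaned", exist_ok=True)
final_name = clean_checkpoint(
    checkpoint="output/train/resnet50/model_best.pth.tar",  # assumed training checkpoint path
    output="cleaned/resnet50.pth",                          # destination for the stripped state_dict
    use_ema=True,              # prefer EMA weights when the checkpoint carries them
    no_hash=False,             # keep the short SHA256 suffix in the filename
    safe_serialization=False,  # set True to write .safetensors (needs `pip install safetensors`)
)
print("wrote", final_name)
```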
# RegNetX **RegNetX** is a convolutional network design space with simple, regular models with parameters: depth \\( d \\), initial width \\( w_{0} > 0 \\), and slope \\( w_{a} > 0 \\), and generates a different block width \\( u_{j} \\) for each block \\( j < d \\). The key restriction for the RegNet types of model is that there is a linear parameterisation of block widths (the design space only contains models with this linear structure): \\( u_{j} = w_{0} + w_{a}\cdot{j} \\) For **RegNetX** we have additional restrictions: we set \\( b = 1 \\) (the bottleneck ratio), \\( 12 \leq d \leq 28 \\), and \\( w_{m} \geq 2 \\) (the width multiplier). ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('regnetx_002', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `regnetx_002`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('regnetx_002', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../training_script) for training a new model afresh. 
## Citation ```BibTeX @misc{radosavovic2020designing, title={Designing Network Design Spaces}, author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár}, year={2020}, eprint={2003.13678}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: RegNetX Paper: Title: Designing Network Design Spaces URL: https://paperswithcode.com/paper/designing-network-design-spaces Models: - Name: regnetx_002 In Collection: RegNetX Metadata: FLOPs: 255276032 Parameters: 2680000 File Size: 10862199 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_002 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L337 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_002-e7e85e5c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 68.75% Top 5 Accuracy: 88.56% - Name: regnetx_004 In Collection: RegNetX Metadata: FLOPs: 510619136 Parameters: 5160000 File Size: 20841309 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_004 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L343 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_004-7d0e9424.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 72.39% Top 5 Accuracy: 90.82% - Name: regnetx_006 In Collection: RegNetX Metadata: FLOPs: 771659136 Parameters: 6200000 File Size: 24965172 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_006 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L349 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_006-85ec1baa.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 73.84% Top 5 Accuracy: 91.68% - Name: regnetx_008 In Collection: RegNetX Metadata: FLOPs: 1027038208 Parameters: 7260000 File Size: 29235944 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 
GPUs ID: regnetx_008 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L355 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_008-d8b470eb.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.05% Top 5 Accuracy: 92.34% - Name: regnetx_016 In Collection: RegNetX Metadata: FLOPs: 2059337856 Parameters: 9190000 File Size: 36988158 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_016 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1024 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L361 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_016-65ca972a.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.95% Top 5 Accuracy: 93.43% - Name: regnetx_032 In Collection: RegNetX Metadata: FLOPs: 4082555904 Parameters: 15300000 File Size: 61509573 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_032 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L367 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_032-ed0c7f7e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.15% Top 5 Accuracy: 94.09% - Name: regnetx_040 In Collection: RegNetX Metadata: FLOPs: 5095167744 Parameters: 22120000 File Size: 88844824 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_040 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L373 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_040-73c2a654.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.48% Top 5 Accuracy: 94.25% - Name: regnetx_064 In Collection: RegNetX Metadata: FLOPs: 8303405824 Parameters: 26210000 File Size: 105184854 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - 
ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_064 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L379 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_064-29278baa.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.06% Top 5 Accuracy: 94.47% - Name: regnetx_080 In Collection: RegNetX Metadata: FLOPs: 10276726784 Parameters: 39570000 File Size: 158720042 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_080 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L385 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_080-7c7fcab1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.21% Top 5 Accuracy: 94.55% - Name: regnetx_120 In Collection: RegNetX Metadata: FLOPs: 15536378368 Parameters: 46110000 File Size: 184866342 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_120 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L391 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_120-65d5521e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.61% Top 5 Accuracy: 94.73% - Name: regnetx_160 In Collection: RegNetX Metadata: FLOPs: 20491740672 Parameters: 54280000 File Size: 217623862 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_160 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 512 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L397 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_160-c98c4112.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.84% Top 5 Accuracy: 94.82% - Name: regnetx_320 In Collection: RegNetX Metadata: FLOPs: 40798958592 Parameters: 107810000 File Size: 431962133 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Dense Connections - Global Average Pooling - Grouped Convolution - ReLU Tasks: - Image Classification Training Techniques: - 
SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x NVIDIA V100 GPUs ID: regnetx_320 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 5.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L403 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_320-8ea38b93.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.25% Top 5 Accuracy: 95.03% -->
pytorch-image-models/hfdocs/source/models/regnetx.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/regnetx.mdx", "repo_id": "pytorch-image-models", "token_count": 6565 }
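A small, hedged addition in the spirit of the guide above: listing the available RegNetX variants and pulling multi-scale features with public `timm` APIs (pretrained download is skipped here to keep it offline-friendly).

```python
import timm
import torch

# All RegNetX variants that ship pretrained weights.
print(timm.list_models("regnetx_*", pretrained=True))

# Feature-extractor form: one feature map per stage instead of classification logits.
model = timm.create_model("regnetx_002", pretrained=False, features_only=True)
model.eval()
with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))
print([f.shape for f in feats])
```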
# Results CSV files containing an ImageNet-1K and out-of-distribution (OOD) test set validation results for all models with pretrained weights is located in the repository [results folder](https://github.com/rwightman/pytorch-image-models/tree/master/results). ## Self-trained Weights The table below includes ImageNet-1k validation results of model weights that I've trained myself. It is not updated as frequently as the csv results outputs linked above. |Model | Acc@1 (Err) | Acc@5 (Err) | Param # (M) | Interpolation | Image Size | |---|---|---|---|---|---| | efficientnet_b3a | 82.242 (17.758) | 96.114 (3.886) | 12.23 | bicubic | 320 (1.0 crop) | | efficientnet_b3 | 82.076 (17.924) | 96.020 (3.980) | 12.23 | bicubic | 300 | | regnet_32 | 82.002 (17.998) | 95.906 (4.094) | 19.44 | bicubic | 224 | | skresnext50d_32x4d | 81.278 (18.722) | 95.366 (4.634) | 27.5 | bicubic | 288 (1.0 crop) | | seresnext50d_32x4d | 81.266 (18.734) | 95.620 (4.380) | 27.6 | bicubic | 224 | | efficientnet_b2a | 80.608 (19.392) | 95.310 (4.690) | 9.11 | bicubic | 288 (1.0 crop) | | resnet50d | 80.530 (19.470) | 95.160 (4.840) | 25.6 | bicubic | 224 | | mixnet_xl | 80.478 (19.522) | 94.932 (5.068) | 11.90 | bicubic | 224 | | efficientnet_b2 | 80.402 (19.598) | 95.076 (4.924) | 9.11 | bicubic | 260 | | seresnet50 | 80.274 (19.726) | 95.070 (4.930) | 28.1 | bicubic | 224 | | skresnext50d_32x4d | 80.156 (19.844) | 94.642 (5.358) | 27.5 | bicubic | 224 | | cspdarknet53 | 80.058 (19.942) | 95.084 (4.916) | 27.6 | bicubic | 256 | | cspresnext50 | 80.040 (19.960) | 94.944 (5.056) | 20.6 | bicubic | 224 | | resnext50_32x4d | 79.762 (20.238) | 94.600 (5.400) | 25 | bicubic | 224 | | resnext50d_32x4d | 79.674 (20.326) | 94.868 (5.132) | 25.1 | bicubic | 224 | | cspresnet50 | 79.574 (20.426) | 94.712 (5.288) | 21.6 | bicubic | 256 | | ese_vovnet39b | 79.320 (20.680) | 94.710 (5.290) | 24.6 | bicubic | 224 | | resnetblur50 | 79.290 (20.710) | 94.632 (5.368) | 25.6 | bicubic | 224 | | dpn68b | 79.216 (20.784) | 94.414 (5.586) | 12.6 | bicubic | 224 | | resnet50 | 79.038 (20.962) | 94.390 (5.610) | 25.6 | bicubic | 224 | | mixnet_l | 78.976 (21.024 | 94.184 (5.816) | 7.33 | bicubic | 224 | | efficientnet_b1 | 78.692 (21.308) | 94.086 (5.914) | 7.79 | bicubic | 240 | | efficientnet_es | 78.066 (21.934) | 93.926 (6.074) | 5.44 | bicubic | 224 | | seresnext26t_32x4d | 77.998 (22.002) | 93.708 (6.292) | 16.8 | bicubic | 224 | | seresnext26tn_32x4d | 77.986 (22.014) | 93.746 (6.254) | 16.8 | bicubic | 224 | | efficientnet_b0 | 77.698 (22.302) | 93.532 (6.468) | 5.29 | bicubic | 224 | | seresnext26d_32x4d | 77.602 (22.398) | 93.608 (6.392) | 16.8 | bicubic | 224 | | mobilenetv2_120d | 77.294 (22.706 | 93.502 (6.498) | 5.8 | bicubic | 224 | | mixnet_m | 77.256 (22.744) | 93.418 (6.582) | 5.01 | bicubic | 224 | | resnet34d | 77.116 (22.884) | 93.382 (6.618) | 21.8 | bicubic | 224 | | seresnext26_32x4d | 77.104 (22.896) | 93.316 (6.684) | 16.8 | bicubic | 224 | | skresnet34 | 76.912 (23.088) | 93.322 (6.678) | 22.2 | bicubic | 224 | | ese_vovnet19b_dw | 76.798 (23.202) | 93.268 (6.732) | 6.5 | bicubic | 224 | | resnet26d | 76.68 (23.32) | 93.166 (6.834) | 16 | bicubic | 224 | | densenetblur121d | 76.576 (23.424) | 93.190 (6.810) | 8.0 | bicubic | 224 | | mobilenetv2_140 | 76.524 (23.476) | 92.990 (7.010) | 6.1 | bicubic | 224 | | mixnet_s | 75.988 (24.012) | 92.794 (7.206) | 4.13 | bicubic | 224 | | mobilenetv3_large_100 | 75.766 (24.234) | 92.542 (7.458) | 5.5 | bicubic | 224 | | mobilenetv3_rw | 75.634 (24.366) | 92.708 (7.292) | 5.5 | 
bicubic | 224 | | mnasnet_a1 | 75.448 (24.552) | 92.604 (7.396) | 3.89 | bicubic | 224 | | resnet26 | 75.292 (24.708) | 92.57 (7.43) | 16 | bicubic | 224 | | fbnetc_100 | 75.124 (24.876) | 92.386 (7.614) | 5.6 | bilinear | 224 | | resnet34 | 75.110 (24.890) | 92.284 (7.716) | 22 | bilinear | 224 | | mobilenetv2_110d | 75.052 (24.948) | 92.180 (7.820) | 4.5 | bicubic | 224 | | seresnet34 | 74.808 (25.192) | 92.124 (7.876) | 22 | bilinear | 224 | | mnasnet_b1 | 74.658 (25.342) | 92.114 (7.886) | 4.38 | bicubic | 224 | | spnasnet_100 | 74.084 (25.916) | 91.818 (8.182) | 4.42 | bilinear | 224 | | skresnet18 | 73.038 (26.962) | 91.168 (8.832) | 11.9 | bicubic | 224 | | mobilenetv2_100 | 72.978 (27.022) | 91.016 (8.984) | 3.5 | bicubic | 224 | | resnet18d | 72.260 (27.740) | 90.696 (9.304) | 11.7 | bicubic | 224 | | seresnet18 | 71.742 (28.258) | 90.334 (9.666) | 11.8 | bicubic | 224 | ## Ported and Other Weights For weights ported from other deep learning frameworks (Tensorflow, MXNet GluonCV) or copied from other PyTorch sources, please see the full results tables for ImageNet and various OOD test sets at in the [results tables](https://github.com/rwightman/pytorch-image-models/tree/master/results). Model code .py files contain links to original sources of models and weights.
pytorch-image-models/hfdocs/source/results.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/results.mdx", "repo_id": "pytorch-image-models", "token_count": 2259 }
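A hedged sketch for exploring the CSV results referenced above with pandas; the file name and column names (`results-imagenet.csv`, `model`, `top1`, `top5`, `param_count`) are assumptions about the results folder layout rather than something stated on this page.

```python
import pandas as pd

url = (
    "https://raw.githubusercontent.com/rwightman/pytorch-image-models/"
    "master/results/results-imagenet.csv"
)
df = pd.read_csv(url)

# Show the ten strongest models by top-1 accuracy.
top = df.sort_values("top1", ascending=False).head(10)
print(top[["model", "top1", "top5", "param_count"]])
```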
import logging from .constants import * _logger = logging.getLogger(__name__) def resolve_data_config( args=None, pretrained_cfg=None, model=None, use_test_size=False, verbose=False ): assert model or args or pretrained_cfg, "At least one of model, args, or pretrained_cfg required for data config." args = args or {} pretrained_cfg = pretrained_cfg or {} if not pretrained_cfg and model is not None and hasattr(model, 'pretrained_cfg'): pretrained_cfg = model.pretrained_cfg data_config = {} # Resolve input/image size in_chans = 3 if args.get('in_chans', None) is not None: in_chans = args['in_chans'] elif args.get('chans', None) is not None: in_chans = args['chans'] input_size = (in_chans, 224, 224) if args.get('input_size', None) is not None: assert isinstance(args['input_size'], (tuple, list)) assert len(args['input_size']) == 3 input_size = tuple(args['input_size']) in_chans = input_size[0] # input_size overrides in_chans elif args.get('img_size', None) is not None: assert isinstance(args['img_size'], int) input_size = (in_chans, args['img_size'], args['img_size']) else: if use_test_size and pretrained_cfg.get('test_input_size', None) is not None: input_size = pretrained_cfg['test_input_size'] elif pretrained_cfg.get('input_size', None) is not None: input_size = pretrained_cfg['input_size'] data_config['input_size'] = input_size # resolve interpolation method data_config['interpolation'] = 'bicubic' if args.get('interpolation', None): data_config['interpolation'] = args['interpolation'] elif pretrained_cfg.get('interpolation', None): data_config['interpolation'] = pretrained_cfg['interpolation'] # resolve dataset + model mean for normalization data_config['mean'] = IMAGENET_DEFAULT_MEAN if args.get('mean', None) is not None: mean = tuple(args['mean']) if len(mean) == 1: mean = tuple(list(mean) * in_chans) else: assert len(mean) == in_chans data_config['mean'] = mean elif pretrained_cfg.get('mean', None): data_config['mean'] = pretrained_cfg['mean'] # resolve dataset + model std deviation for normalization data_config['std'] = IMAGENET_DEFAULT_STD if args.get('std', None) is not None: std = tuple(args['std']) if len(std) == 1: std = tuple(list(std) * in_chans) else: assert len(std) == in_chans data_config['std'] = std elif pretrained_cfg.get('std', None): data_config['std'] = pretrained_cfg['std'] # resolve default inference crop crop_pct = DEFAULT_CROP_PCT if args.get('crop_pct', None): crop_pct = args['crop_pct'] else: if use_test_size and pretrained_cfg.get('test_crop_pct', None): crop_pct = pretrained_cfg['test_crop_pct'] elif pretrained_cfg.get('crop_pct', None): crop_pct = pretrained_cfg['crop_pct'] data_config['crop_pct'] = crop_pct # resolve default crop percentage crop_mode = DEFAULT_CROP_MODE if args.get('crop_mode', None): crop_mode = args['crop_mode'] elif pretrained_cfg.get('crop_mode', None): crop_mode = pretrained_cfg['crop_mode'] data_config['crop_mode'] = crop_mode if verbose: _logger.info('Data processing configuration for current model + dataset:') for n, v in data_config.items(): _logger.info('\t%s: %s' % (n, str(v))) return data_config def resolve_model_data_config( model, args=None, pretrained_cfg=None, use_test_size=False, verbose=False, ): """ Resolve Model Data Config This is equivalent to resolve_data_config() but with arguments re-ordered to put model first. 
Args: model (nn.Module): the model instance args (dict): command line arguments / configuration in dict form (overrides pretrained_cfg) pretrained_cfg (dict): pretrained model config (overrides pretrained_cfg attached to model) use_test_size (bool): use the test time input resolution (if one exists) instead of default train resolution verbose (bool): enable extra logging of resolved values Returns: dictionary of config """ return resolve_data_config( args=args, pretrained_cfg=pretrained_cfg, model=model, use_test_size=use_test_size, verbose=verbose, )
pytorch-image-models/timm/data/config.py/0
{ "file_path": "pytorch-image-models/timm/data/config.py", "repo_id": "pytorch-image-models", "token_count": 1927 }
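A usage sketch for the resolvers above: derive a model's preprocessing config and build the matching eval transform. The model name is just an example.

```python
import timm
from timm.data import create_transform, resolve_model_data_config

model = timm.create_model("resnet50", pretrained=False)

# use_test_size picks up test_input_size / test_crop_pct when the pretrained cfg defines them.
cfg = resolve_model_data_config(model, use_test_size=True, verbose=True)
print(cfg)  # input_size, interpolation, mean, std, crop_pct, crop_mode

transform = create_transform(**cfg, is_training=False)
print(transform)
```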
""" Dataset reader for HF IterableDataset """ import math import os from itertools import repeat, chain from typing import Optional import torch import torch.distributed as dist from PIL import Image try: import datasets from datasets.distributed import split_dataset_by_node from datasets.splits import SplitInfo except ImportError as e: print("Please install Hugging Face datasets package `pip install datasets`.") raise e from .class_map import load_class_map from .reader import Reader from .shared_count import SharedCount SHUFFLE_SIZE = int(os.environ.get('HFIDS_SHUFFLE_SIZE', 4096)) class ReaderHfids(Reader): def __init__( self, name: str, root: Optional[str] = None, split: str = 'train', is_training: bool = False, batch_size: int = 1, download: bool = False, repeats: int = 0, seed: int = 42, class_map: Optional[dict] = None, input_key: str = 'image', input_img_mode: str = 'RGB', target_key: str = 'label', target_img_mode: str = '', shuffle_size: Optional[int] = None, num_samples: Optional[int] = None, trust_remote_code: bool = False ): super().__init__() self.root = root self.split = split self.is_training = is_training self.batch_size = batch_size self.download = download self.repeats = repeats self.common_seed = seed # a seed that's fixed across all worker / distributed instances self.shuffle_size = shuffle_size or SHUFFLE_SIZE self.input_key = input_key self.input_img_mode = input_img_mode self.target_key = target_key self.target_img_mode = target_img_mode self.builder = datasets.load_dataset_builder( name, cache_dir=root, trust_remote_code=trust_remote_code, ) if download: self.builder.download_and_prepare() split_info: Optional[SplitInfo] = None if self.builder.info.splits and split in self.builder.info.splits: if isinstance(self.builder.info.splits[split], SplitInfo): split_info: Optional[SplitInfo] = self.builder.info.splits[split] if num_samples: self.num_samples = num_samples elif split_info and split_info.num_examples: self.num_samples = split_info.num_examples else: raise ValueError( "Dataset length is unknown, please pass `num_samples` explicitly. " "The number of steps needs to be known in advance for the learning rate scheduler." 
) self.remap_class = False if class_map: self.class_to_idx = load_class_map(class_map) self.remap_class = True else: self.class_to_idx = {} # Distributed world state self.dist_rank = 0 self.dist_num_replicas = 1 if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1: self.dist_rank = dist.get_rank() self.dist_num_replicas = dist.get_world_size() # Attributes that are updated in _lazy_init self.worker_info = None self.worker_id = 0 self.num_workers = 1 self.global_worker_id = 0 self.global_num_workers = 1 # Initialized lazily on each dataloader worker process self.ds: Optional[datasets.IterableDataset] = None self.epoch = SharedCount() def set_epoch(self, count): # to update the shuffling effective_seed = seed + epoch self.epoch.value = count def set_loader_cfg( self, num_workers: Optional[int] = None, ): if self.ds is not None: return if num_workers is not None: self.num_workers = num_workers self.global_num_workers = self.dist_num_replicas * self.num_workers def _lazy_init(self): """ Lazily initialize worker (in worker processes) """ if self.worker_info is None: worker_info = torch.utils.data.get_worker_info() if worker_info is not None: self.worker_info = worker_info self.worker_id = worker_info.id self.num_workers = worker_info.num_workers self.global_num_workers = self.dist_num_replicas * self.num_workers self.global_worker_id = self.dist_rank * self.num_workers + self.worker_id if self.download: dataset = self.builder.as_dataset(split=self.split) # to distribute evenly to workers ds = dataset.to_iterable_dataset(num_shards=self.global_num_workers) else: # in this case the number of shard is determined by the number of remote files ds = self.builder.as_streaming_dataset(split=self.split) if self.is_training: # will shuffle the list of shards and use a shuffle buffer ds = ds.shuffle(seed=self.common_seed, buffer_size=self.shuffle_size) # Distributed: # The dataset has a number of shards that is a factor of `dist_num_replicas` (i.e. if `ds.n_shards % dist_num_replicas == 0`), # so the shards are evenly assigned across the nodes. # If it's not the case for dataset streaming, each node keeps 1 example out of `dist_num_replicas`, skipping the other examples. # Workers: # In a node, datasets.IterableDataset assigns the shards assigned to the node as evenly as possible to workers. 
self.ds = split_dataset_by_node(ds, rank=self.dist_rank, world_size=self.dist_num_replicas) def _num_samples_per_worker(self): num_worker_samples = \ max(1, self.repeats) * self.num_samples / max(self.global_num_workers, self.dist_num_replicas) if self.is_training or self.dist_num_replicas > 1: num_worker_samples = math.ceil(num_worker_samples) if self.is_training and self.batch_size is not None: num_worker_samples = math.ceil(num_worker_samples / self.batch_size) * self.batch_size return int(num_worker_samples) def __iter__(self): if self.ds is None: self._lazy_init() self.ds.set_epoch(self.epoch.value) target_sample_count = self._num_samples_per_worker() sample_count = 0 if self.is_training: ds_iter = chain.from_iterable(repeat(self.ds)) else: ds_iter = iter(self.ds) for sample in ds_iter: input_data: Image.Image = sample[self.input_key] if self.input_img_mode and input_data.mode != self.input_img_mode: input_data = input_data.convert(self.input_img_mode) target_data = sample[self.target_key] if self.target_img_mode: assert isinstance(target_data, Image.Image), "target_img_mode is specified but target is not an image" if target_data.mode != self.target_img_mode: target_data = target_data.convert(self.target_img_mode) elif self.remap_class: target_data = self.class_to_idx[target_data] yield input_data, target_data sample_count += 1 if self.is_training and sample_count >= target_sample_count: break def __len__(self): num_samples = self._num_samples_per_worker() * self.num_workers return num_samples def _filename(self, index, basename=False, absolute=False): assert False, "Not supported" # no random access to examples def filenames(self, basename=False, absolute=False): """ Return all filenames in dataset, overrides base""" if self.ds is None: self._lazy_init() names = [] for sample in self.ds: if 'file_name' in sample: name = sample['file_name'] elif 'filename' in sample: name = sample['filename'] elif 'id' in sample: name = sample['id'] elif 'image_id' in sample: name = sample['image_id'] else: assert False, "No supported name field present" names.append(name) return names
pytorch-image-models/timm/data/readers/reader_hfids.py/0
{ "file_path": "pytorch-image-models/timm/data/readers/reader_hfids.py", "repo_id": "pytorch-image-models", "token_count": 3798 }
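A hedged sketch of the usual entry point for this reader: `create_dataset` with the `hfids/` prefix streams a Hugging Face dataset instead of downloading it first. The dataset name is an illustrative choice; any HF dataset exposing `image`/`label` columns and a `validation` split should behave similarly.

```python
from timm.data import create_dataset

dataset = create_dataset(
    "hfids/food101",   # "hfids/" routes to ReaderHfids (streaming IterableDataset)
    root=None,
    split="validation",
    is_training=False,
)

# Without a transform attached, iteration yields (PIL image, label) pairs.
for img, label in dataset:
    print(img.size, label)
    break
```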
from typing import Optional import torch import torch.nn as nn import torch.nn.functional as F from .config import use_fused_attn from .mlp import Mlp from .weight_init import trunc_normal_tf_ class AttentionPoolLatent(nn.Module): """ Attention pooling w/ latent query """ fused_attn: torch.jit.Final[bool] def __init__( self, in_features: int, out_features: int = None, embed_dim: int = None, num_heads: int = 8, feat_size: Optional[int] = None, mlp_ratio: float = 4.0, qkv_bias: bool = True, qk_norm: bool = False, latent_len: int = 1, latent_dim: int = None, pos_embed: str = '', pool_type: str = 'token', norm_layer: Optional[nn.Module] = None, drop: float = 0.0, ): super().__init__() embed_dim = embed_dim or in_features out_features = out_features or in_features assert embed_dim % num_heads == 0 self.num_heads = num_heads self.head_dim = embed_dim // num_heads self.feat_size = feat_size self.scale = self.head_dim ** -0.5 self.pool = pool_type self.fused_attn = use_fused_attn() if pos_embed == 'abs': assert feat_size is not None self.pos_embed = nn.Parameter(torch.zeros(feat_size, in_features)) else: self.pos_embed = None self.latent_dim = latent_dim or embed_dim self.latent_len = latent_len self.latent = nn.Parameter(torch.zeros(1, self.latent_len, embed_dim)) self.q = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) self.kv = nn.Linear(embed_dim, embed_dim * 2, bias=qkv_bias) self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.proj = nn.Linear(embed_dim, embed_dim) self.proj_drop = nn.Dropout(drop) self.norm = norm_layer(out_features) if norm_layer is not None else nn.Identity() self.mlp = Mlp(embed_dim, int(embed_dim * mlp_ratio)) self.init_weights() def init_weights(self): if self.pos_embed is not None: trunc_normal_tf_(self.pos_embed, std=self.pos_embed.shape[1] ** -0.5) trunc_normal_tf_(self.latent, std=self.latent_dim ** -0.5) def forward(self, x): B, N, C = x.shape if self.pos_embed is not None: # FIXME interpolate x = x + self.pos_embed.unsqueeze(0).to(x.dtype) q_latent = self.latent.expand(B, -1, -1) q = self.q(q_latent).reshape(B, self.latent_len, self.num_heads, self.head_dim).transpose(1, 2) kv = self.kv(x).reshape(B, N, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4) k, v = kv.unbind(0) q, k = self.q_norm(q), self.k_norm(k) if self.fused_attn: x = F.scaled_dot_product_attention(q, k, v) else: q = q * self.scale attn = q @ k.transpose(-2, -1) attn = attn.softmax(dim=-1) x = attn @ v x = x.transpose(1, 2).reshape(B, self.latent_len, C) x = self.proj(x) x = self.proj_drop(x) x = x + self.mlp(self.norm(x)) # optional pool if latent seq_len > 1 and pooled output is desired if self.pool == 'token': x = x[:, 0] elif self.pool == 'avg': x = x.mean(1) return x
pytorch-image-models/timm/layers/attention_pool.py/0
{ "file_path": "pytorch-image-models/timm/layers/attention_pool.py", "repo_id": "pytorch-image-models", "token_count": 1795 }
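A standalone sketch of the latent attention pool above, reducing a ViT-style token sequence to one vector per image; the dimensions are arbitrary.

```python
import torch
from timm.layers import AttentionPoolLatent

pool = AttentionPoolLatent(in_features=256, num_heads=8, latent_len=1, pool_type="token")
tokens = torch.randn(2, 196, 256)  # e.g. 14x14 patch tokens from a ViT
pooled = pool(tokens)
print(pooled.shape)  # torch.Size([2, 256])
```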
""" ECA module from ECAnet paper: ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks https://arxiv.org/abs/1910.03151 Original ECA model borrowed from https://github.com/BangguWu/ECANet Modified circular ECA implementation and adaption for use in timm package by Chris Ha https://github.com/VRandme Original License: MIT License Copyright (c) 2019 BangguWu, Qilong Wang Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """ import math from torch import nn import torch.nn.functional as F from .create_act import create_act_layer from .helpers import make_divisible class EcaModule(nn.Module): """Constructs an ECA module. Args: channels: Number of channels of the input feature map for use in adaptive kernel sizes for actual calculations according to channel. gamma, beta: when channel is given parameters of mapping function refer to original paper https://arxiv.org/pdf/1910.03151.pdf (default=None. if channel size not given, use k_size given for kernel size.) kernel_size: Adaptive selection of kernel size (default=3) gamm: used in kernel_size calc, see above beta: used in kernel_size calc, see above act_layer: optional non-linearity after conv, enables conv bias, this is an experiment gate_layer: gating non-linearity to use """ def __init__( self, channels=None, kernel_size=3, gamma=2, beta=1, act_layer=None, gate_layer='sigmoid', rd_ratio=1/8, rd_channels=None, rd_divisor=8, use_mlp=False): super(EcaModule, self).__init__() if channels is not None: t = int(abs(math.log(channels, 2) + beta) / gamma) kernel_size = max(t if t % 2 else t + 1, 3) assert kernel_size % 2 == 1 padding = (kernel_size - 1) // 2 if use_mlp: # NOTE 'mlp' mode is a timm experiment, not in paper assert channels is not None if rd_channels is None: rd_channels = make_divisible(channels * rd_ratio, divisor=rd_divisor) act_layer = act_layer or nn.ReLU self.conv = nn.Conv1d(1, rd_channels, kernel_size=1, padding=0, bias=True) self.act = create_act_layer(act_layer) self.conv2 = nn.Conv1d(rd_channels, 1, kernel_size=kernel_size, padding=padding, bias=True) else: self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=padding, bias=False) self.act = None self.conv2 = None self.gate = create_act_layer(gate_layer) def forward(self, x): y = x.mean((2, 3)).view(x.shape[0], 1, -1) # view for 1d conv y = self.conv(y) if self.conv2 is not None: y = self.act(y) y = self.conv2(y) y = self.gate(y).view(x.shape[0], -1, 1, 1) return x * y.expand_as(x) EfficientChannelAttn = EcaModule # alias class CecaModule(nn.Module): """Constructs a circular ECA module. 
ECA module where the conv uses circular padding rather than zero padding. Unlike the spatial dimension, the channels do not have inherent ordering nor locality. Although this module in essence, applies such an assumption, it is unnecessary to limit the channels on either "edge" from being circularly adapted to each other. This will fundamentally increase connectivity and possibly increase performance metrics (accuracy, robustness), without significantly impacting resource metrics (parameter size, throughput,latency, etc) Args: channels: Number of channels of the input feature map for use in adaptive kernel sizes for actual calculations according to channel. gamma, beta: when channel is given parameters of mapping function refer to original paper https://arxiv.org/pdf/1910.03151.pdf (default=None. if channel size not given, use k_size given for kernel size.) kernel_size: Adaptive selection of kernel size (default=3) gamm: used in kernel_size calc, see above beta: used in kernel_size calc, see above act_layer: optional non-linearity after conv, enables conv bias, this is an experiment gate_layer: gating non-linearity to use """ def __init__(self, channels=None, kernel_size=3, gamma=2, beta=1, act_layer=None, gate_layer='sigmoid'): super(CecaModule, self).__init__() if channels is not None: t = int(abs(math.log(channels, 2) + beta) / gamma) kernel_size = max(t if t % 2 else t + 1, 3) has_act = act_layer is not None assert kernel_size % 2 == 1 # PyTorch circular padding mode is buggy as of pytorch 1.4 # see https://github.com/pytorch/pytorch/pull/17240 # implement manual circular padding self.padding = (kernel_size - 1) // 2 self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=0, bias=has_act) self.gate = create_act_layer(gate_layer) def forward(self, x): y = x.mean((2, 3)).view(x.shape[0], 1, -1) # Manually implement circular padding, F.pad does not seemed to be bugged y = F.pad(y, (self.padding, self.padding), mode='circular') y = self.conv(y) y = self.gate(y).view(x.shape[0], -1, 1, 1) return x * y.expand_as(x) CircularEfficientChannelAttn = CecaModule
pytorch-image-models/timm/layers/eca.py/0
{ "file_path": "pytorch-image-models/timm/layers/eca.py", "repo_id": "pytorch-image-models", "token_count": 2411 }
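A quick sketch of dropping `EcaModule` into a conv block; when `channels` is given, the 1D kernel size is derived adaptively from the channel count as described in the docstring above.

```python
import torch
from torch import nn
from timm.layers import EcaModule

block = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    EcaModule(channels=64),  # kernel_size chosen from log2(channels)
)
print(block(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```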
""" Linear layer (alternate definition) """ import torch import torch.nn.functional as F from torch import nn as nn class Linear(nn.Linear): r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b` Wraps torch.nn.Linear to support AMP + torchscript usage by manually casting weight & bias to input.dtype to work around an issue w/ torch.addmm in this use case. """ def forward(self, input: torch.Tensor) -> torch.Tensor: if torch.jit.is_scripting(): bias = self.bias.to(dtype=input.dtype) if self.bias is not None else None return F.linear(input, self.weight.to(dtype=input.dtype), bias=bias) else: return F.linear(input, self.weight, self.bias)
pytorch-image-models/timm/layers/linear.py/0
{ "file_path": "pytorch-image-models/timm/layers/linear.py", "repo_id": "pytorch-image-models", "token_count": 282 }
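A minimal sketch of why this wrapper exists: inside `torch.jit.script`, the weight and bias are cast to the input dtype so a lower-precision activation does not trip `torch.addmm`. The bfloat16 input here just stands in for activations produced under autocast.

```python
import torch
from timm.layers import Linear

layer = torch.jit.script(Linear(16, 8))        # parameters stay float32
x = torch.randn(4, 16, dtype=torch.bfloat16)   # e.g. activations produced under autocast
out = layer(x)                                 # weight/bias are cast to x.dtype inside forward
print(out.dtype)                               # torch.bfloat16
```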
""" Depthwise Separable Conv Modules Basic DWS convs. Other variations of DWS exist with batch norm or activations between the DW and PW convs such as the Depthwise modules in MobileNetV2 / EfficientNet and Xception. Hacked together by / Copyright 2020 Ross Wightman """ from torch import nn as nn from .create_conv2d import create_conv2d from .create_norm_act import get_norm_act_layer class SeparableConvNormAct(nn.Module): """ Separable Conv w/ trailing Norm and Activation """ def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, dilation=1, padding='', bias=False, channel_multiplier=1.0, pw_kernel_size=1, norm_layer=nn.BatchNorm2d, act_layer=nn.ReLU, apply_act=True, drop_layer=None): super(SeparableConvNormAct, self).__init__() self.conv_dw = create_conv2d( in_channels, int(in_channels * channel_multiplier), kernel_size, stride=stride, dilation=dilation, padding=padding, depthwise=True) self.conv_pw = create_conv2d( int(in_channels * channel_multiplier), out_channels, pw_kernel_size, padding=padding, bias=bias) norm_act_layer = get_norm_act_layer(norm_layer, act_layer) norm_kwargs = dict(drop_layer=drop_layer) if drop_layer is not None else {} self.bn = norm_act_layer(out_channels, apply_act=apply_act, **norm_kwargs) @property def in_channels(self): return self.conv_dw.in_channels @property def out_channels(self): return self.conv_pw.out_channels def forward(self, x): x = self.conv_dw(x) x = self.conv_pw(x) x = self.bn(x) return x SeparableConvBnAct = SeparableConvNormAct class SeparableConv2d(nn.Module): """ Separable Conv """ def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, dilation=1, padding='', bias=False, channel_multiplier=1.0, pw_kernel_size=1): super(SeparableConv2d, self).__init__() self.conv_dw = create_conv2d( in_channels, int(in_channels * channel_multiplier), kernel_size, stride=stride, dilation=dilation, padding=padding, depthwise=True) self.conv_pw = create_conv2d( int(in_channels * channel_multiplier), out_channels, pw_kernel_size, padding=padding, bias=bias) @property def in_channels(self): return self.conv_dw.in_channels @property def out_channels(self): return self.conv_pw.out_channels def forward(self, x): x = self.conv_dw(x) x = self.conv_pw(x) return x
pytorch-image-models/timm/layers/separable_conv.py/0
{ "file_path": "pytorch-image-models/timm/layers/separable_conv.py", "repo_id": "pytorch-image-models", "token_count": 1138 }
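A shape-check sketch for the separable conv blocks above; channel counts and input size are arbitrary.

```python
import torch
from timm.layers import SeparableConv2d, SeparableConvNormAct

dw_pw = SeparableConv2d(32, 64, kernel_size=3, stride=2)        # depthwise 3x3 + pointwise 1x1
block = SeparableConvNormAct(32, 64, kernel_size=3, stride=2)   # same, plus BatchNorm + ReLU

x = torch.randn(2, 32, 56, 56)
print(dw_pw(x).shape, block(x).shape)  # both torch.Size([2, 64, 28, 28])
```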
import dataclasses import logging import os from copy import deepcopy from pathlib import Path from typing import Any, Callable, Dict, List, Optional, Tuple, Union from torch import nn as nn from torch.hub import load_state_dict_from_url from timm.models._features import FeatureListNet, FeatureDictNet, FeatureHookNet, FeatureGetterNet from timm.models._features_fx import FeatureGraphNet from timm.models._helpers import load_state_dict from timm.models._hub import has_hf_hub, download_cached_file, check_cached_file, load_state_dict_from_hf,\ load_custom_from_hf from timm.models._manipulate import adapt_input_conv from timm.models._pretrained import PretrainedCfg from timm.models._prune import adapt_model_from_file from timm.models._registry import get_pretrained_cfg _logger = logging.getLogger(__name__) # Global variables for rarely used pretrained checkpoint download progress and hash check. # Use set_pretrained_download_progress / set_pretrained_check_hash functions to toggle. _DOWNLOAD_PROGRESS = False _CHECK_HASH = False _USE_OLD_CACHE = int(os.environ.get('TIMM_USE_OLD_CACHE', 0)) > 0 __all__ = ['set_pretrained_download_progress', 'set_pretrained_check_hash', 'load_custom_pretrained', 'load_pretrained', 'pretrained_cfg_for_features', 'resolve_pretrained_cfg', 'build_model_with_cfg'] def _resolve_pretrained_source(pretrained_cfg): cfg_source = pretrained_cfg.get('source', '') pretrained_url = pretrained_cfg.get('url', None) pretrained_file = pretrained_cfg.get('file', None) pretrained_sd = pretrained_cfg.get('state_dict', None) hf_hub_id = pretrained_cfg.get('hf_hub_id', None) # resolve where to load pretrained weights from load_from = '' pretrained_loc = '' if cfg_source == 'hf-hub' and has_hf_hub(necessary=True): # hf-hub specified as source via model identifier load_from = 'hf-hub' assert hf_hub_id pretrained_loc = hf_hub_id else: # default source == timm or unspecified if pretrained_sd: # direct state_dict pass through is the highest priority load_from = 'state_dict' pretrained_loc = pretrained_sd assert isinstance(pretrained_loc, dict) elif pretrained_file: # file load override is the second-highest priority if set load_from = 'file' pretrained_loc = pretrained_file else: old_cache_valid = False if _USE_OLD_CACHE: # prioritized old cached weights if exists and env var enabled old_cache_valid = check_cached_file(pretrained_url) if pretrained_url else False if not old_cache_valid and hf_hub_id and has_hf_hub(necessary=True): # hf-hub available as alternate weight source in default_cfg load_from = 'hf-hub' pretrained_loc = hf_hub_id elif pretrained_url: load_from = 'url' pretrained_loc = pretrained_url if load_from == 'hf-hub' and pretrained_cfg.get('hf_hub_filename', None): # if a filename override is set, return tuple for location w/ (hub_id, filename) pretrained_loc = pretrained_loc, pretrained_cfg['hf_hub_filename'] return load_from, pretrained_loc def set_pretrained_download_progress(enable=True): """ Set download progress for pretrained weights on/off (globally). """ global _DOWNLOAD_PROGRESS _DOWNLOAD_PROGRESS = enable def set_pretrained_check_hash(enable=True): """ Set hash checking for pretrained weights on/off (globally). 
""" global _CHECK_HASH _CHECK_HASH = enable def load_custom_pretrained( model: nn.Module, pretrained_cfg: Optional[Dict] = None, load_fn: Optional[Callable] = None, cache_dir: Optional[Union[str, Path]] = None, ): r"""Loads a custom (read non .pth) weight file Downloads checkpoint file into cache-dir like torch.hub based loaders, but calls a passed in custom load fun, or the `load_pretrained` model member fn. If the object is already present in `model_dir`, it's deserialized and returned. The default value of `model_dir` is ``<hub_dir>/checkpoints`` where `hub_dir` is the directory returned by :func:`~torch.hub.get_dir`. Args: model: The instantiated model to load weights into pretrained_cfg: Default pretrained model cfg load_fn: An external standalone fn that loads weights into provided model, otherwise a fn named 'load_pretrained' on the model will be called if it exists cache_dir: Override model checkpoint cache dir for this load """ pretrained_cfg = pretrained_cfg or getattr(model, 'pretrained_cfg', None) if not pretrained_cfg: _logger.warning("Invalid pretrained config, cannot load weights.") return load_from, pretrained_loc = _resolve_pretrained_source(pretrained_cfg) if not load_from: _logger.warning("No pretrained weights exist for this model. Using random initialization.") return if load_from == 'hf-hub': _logger.warning("Hugging Face hub not currently supported for custom load pretrained models.") elif load_from == 'url': pretrained_loc = download_cached_file( pretrained_loc, check_hash=_CHECK_HASH, progress=_DOWNLOAD_PROGRESS, cache_dir=cache_dir, ) if load_fn is not None: load_fn(model, pretrained_loc) elif hasattr(model, 'load_pretrained'): model.load_pretrained(pretrained_loc) else: _logger.warning("Valid function to load pretrained weights is not available, using random initialization.") def load_pretrained( model: nn.Module, pretrained_cfg: Optional[Dict] = None, num_classes: int = 1000, in_chans: int = 3, filter_fn: Optional[Callable] = None, strict: bool = True, cache_dir: Optional[Union[str, Path]] = None, ): """ Load pretrained checkpoint Args: model: PyTorch module pretrained_cfg: Configuration for pretrained weights / target dataset num_classes: Number of classes for target model. Will adapt pretrained if different. in_chans: Number of input chans for target model. Will adapt pretrained if different. filter_fn: state_dict filter fn for load (takes state_dict, model as args) strict: Strict load of checkpoint cache_dir: Override model checkpoint cache dir for this load """ pretrained_cfg = pretrained_cfg or getattr(model, 'pretrained_cfg', None) if not pretrained_cfg: raise RuntimeError("Invalid pretrained config, cannot load weights. 
Use `pretrained=False` for random init.") load_from, pretrained_loc = _resolve_pretrained_source(pretrained_cfg) if load_from == 'state_dict': _logger.info(f'Loading pretrained weights from state dict') state_dict = pretrained_loc # pretrained_loc is the actual state dict for this override elif load_from == 'file': _logger.info(f'Loading pretrained weights from file ({pretrained_loc})') if pretrained_cfg.get('custom_load', False): model.load_pretrained(pretrained_loc) return else: state_dict = load_state_dict(pretrained_loc) elif load_from == 'url': _logger.info(f'Loading pretrained weights from url ({pretrained_loc})') if pretrained_cfg.get('custom_load', False): pretrained_loc = download_cached_file( pretrained_loc, progress=_DOWNLOAD_PROGRESS, check_hash=_CHECK_HASH, cache_dir=cache_dir, ) model.load_pretrained(pretrained_loc) return else: try: state_dict = load_state_dict_from_url( pretrained_loc, map_location='cpu', progress=_DOWNLOAD_PROGRESS, check_hash=_CHECK_HASH, weights_only=True, model_dir=cache_dir, ) except TypeError: state_dict = load_state_dict_from_url( pretrained_loc, map_location='cpu', progress=_DOWNLOAD_PROGRESS, check_hash=_CHECK_HASH, model_dir=cache_dir, ) elif load_from == 'hf-hub': _logger.info(f'Loading pretrained weights from Hugging Face hub ({pretrained_loc})') if isinstance(pretrained_loc, (list, tuple)): custom_load = pretrained_cfg.get('custom_load', False) if isinstance(custom_load, str) and custom_load == 'hf': load_custom_from_hf(*pretrained_loc, model, cache_dir=cache_dir) return else: state_dict = load_state_dict_from_hf(*pretrained_loc, cache_dir=cache_dir) else: state_dict = load_state_dict_from_hf(pretrained_loc, weights_only=True, cache_dir=cache_dir) else: model_name = pretrained_cfg.get('architecture', 'this model') raise RuntimeError(f"No pretrained weights exist for {model_name}. 
Use `pretrained=False` for random init.") if filter_fn is not None: try: state_dict = filter_fn(state_dict, model) except TypeError as e: # for backwards compat with filter fn that take one arg state_dict = filter_fn(state_dict) input_convs = pretrained_cfg.get('first_conv', None) if input_convs is not None and in_chans != 3: if isinstance(input_convs, str): input_convs = (input_convs,) for input_conv_name in input_convs: weight_name = input_conv_name + '.weight' try: state_dict[weight_name] = adapt_input_conv(in_chans, state_dict[weight_name]) _logger.info( f'Converted input conv {input_conv_name} pretrained weights from 3 to {in_chans} channel(s)') except NotImplementedError as e: del state_dict[weight_name] strict = False _logger.warning( f'Unable to convert pretrained {input_conv_name} weights, using random init for this layer.') classifiers = pretrained_cfg.get('classifier', None) label_offset = pretrained_cfg.get('label_offset', 0) if classifiers is not None: if isinstance(classifiers, str): classifiers = (classifiers,) if num_classes != pretrained_cfg['num_classes']: for classifier_name in classifiers: # completely discard fully connected if model num_classes doesn't match pretrained weights state_dict.pop(classifier_name + '.weight', None) state_dict.pop(classifier_name + '.bias', None) strict = False elif label_offset > 0: for classifier_name in classifiers: # special case for pretrained weights with an extra background class in pretrained weights classifier_weight = state_dict[classifier_name + '.weight'] state_dict[classifier_name + '.weight'] = classifier_weight[label_offset:] classifier_bias = state_dict[classifier_name + '.bias'] state_dict[classifier_name + '.bias'] = classifier_bias[label_offset:] load_result = model.load_state_dict(state_dict, strict=strict) if load_result.missing_keys: _logger.info( f'Missing keys ({", ".join(load_result.missing_keys)}) discovered while loading pretrained weights.' f' This is expected if model is being adapted.') if load_result.unexpected_keys: _logger.warning( f'Unexpected keys ({", ".join(load_result.unexpected_keys)}) found while loading pretrained weights.' f' This may be expected if model is being adapted.') def pretrained_cfg_for_features(pretrained_cfg): pretrained_cfg = deepcopy(pretrained_cfg) # remove default pretrained cfg fields that don't have much relevance for feature backbone to_remove = ('num_classes', 'classifier', 'global_pool') # add default final pool size? 
for tr in to_remove: pretrained_cfg.pop(tr, None) return pretrained_cfg def _filter_kwargs(kwargs, names): if not kwargs or not names: return for n in names: kwargs.pop(n, None) def _update_default_model_kwargs(pretrained_cfg, kwargs, kwargs_filter): """ Update the default_cfg and kwargs before passing to model Args: pretrained_cfg: input pretrained cfg (updated in-place) kwargs: keyword args passed to model build fn (updated in-place) kwargs_filter: keyword arg keys that must be removed before model __init__ """ # Set model __init__ args that can be determined by default_cfg (if not already passed as kwargs) default_kwarg_names = ('num_classes', 'global_pool', 'in_chans') if pretrained_cfg.get('fixed_input_size', False): # if fixed_input_size exists and is True, model takes an img_size arg that fixes its input size default_kwarg_names += ('img_size',) for n in default_kwarg_names: # for legacy reasons, model __init__args uses img_size + in_chans as separate args while # pretrained_cfg has one input_size=(C, H ,W) entry if n == 'img_size': input_size = pretrained_cfg.get('input_size', None) if input_size is not None: assert len(input_size) == 3 kwargs.setdefault(n, input_size[-2:]) elif n == 'in_chans': input_size = pretrained_cfg.get('input_size', None) if input_size is not None: assert len(input_size) == 3 kwargs.setdefault(n, input_size[0]) elif n == 'num_classes': default_val = pretrained_cfg.get(n, None) # if default is < 0, don't pass through to model if default_val is not None and default_val >= 0: kwargs.setdefault(n, pretrained_cfg[n]) else: default_val = pretrained_cfg.get(n, None) if default_val is not None: kwargs.setdefault(n, pretrained_cfg[n]) # Filter keyword args for task specific model variants (some 'features only' models, etc.) _filter_kwargs(kwargs, names=kwargs_filter) def resolve_pretrained_cfg( variant: str, pretrained_cfg: Optional[Union[str, Dict[str, Any]]] = None, pretrained_cfg_overlay: Optional[Dict[str, Any]] = None, ) -> PretrainedCfg: model_with_tag = variant pretrained_tag = None if pretrained_cfg: if isinstance(pretrained_cfg, dict): # pretrained_cfg dict passed as arg, validate by converting to PretrainedCfg pretrained_cfg = PretrainedCfg(**pretrained_cfg) elif isinstance(pretrained_cfg, str): pretrained_tag = pretrained_cfg pretrained_cfg = None # fallback to looking up pretrained cfg in model registry by variant identifier if not pretrained_cfg: if pretrained_tag: model_with_tag = '.'.join([variant, pretrained_tag]) pretrained_cfg = get_pretrained_cfg(model_with_tag) if not pretrained_cfg: _logger.warning( f"No pretrained configuration specified for {model_with_tag} model. Using a default." 
f" Please add a config to the model pretrained_cfg registry or pass explicitly.") pretrained_cfg = PretrainedCfg() # instance with defaults pretrained_cfg_overlay = pretrained_cfg_overlay or {} if not pretrained_cfg.architecture: pretrained_cfg_overlay.setdefault('architecture', variant) pretrained_cfg = dataclasses.replace(pretrained_cfg, **pretrained_cfg_overlay) return pretrained_cfg def build_model_with_cfg( model_cls: Callable, variant: str, pretrained: bool, pretrained_cfg: Optional[Dict] = None, pretrained_cfg_overlay: Optional[Dict] = None, model_cfg: Optional[Any] = None, feature_cfg: Optional[Dict] = None, pretrained_strict: bool = True, pretrained_filter_fn: Optional[Callable] = None, cache_dir: Optional[Union[str, Path]] = None, kwargs_filter: Optional[Tuple[str]] = None, **kwargs, ): """ Build model with specified default_cfg and optional model_cfg This helper fn aids in the construction of a model including: * handling default_cfg and associated pretrained weight loading * passing through optional model_cfg for models with config based arch spec * features_only model adaptation * pruning config / model adaptation Args: model_cls: Model class variant: Model variant name pretrained: Load the pretrained weights pretrained_cfg: Model's pretrained weight/task config pretrained_cfg_overlay: Entries that will override those in pretrained_cfg model_cfg: Model's architecture config feature_cfg: Feature extraction adapter config pretrained_strict: Load pretrained weights strictly pretrained_filter_fn: Filter callable for pretrained weights cache_dir: Override model cache dir for Hugging Face Hub and Torch checkpoints kwargs_filter: Kwargs keys to filter (remove) before passing to model **kwargs: Model args passed through to model __init__ """ pruned = kwargs.pop('pruned', False) features = False feature_cfg = feature_cfg or {} # resolve and update model pretrained config and model kwargs pretrained_cfg = resolve_pretrained_cfg( variant, pretrained_cfg=pretrained_cfg, pretrained_cfg_overlay=pretrained_cfg_overlay ) pretrained_cfg = pretrained_cfg.to_dict() _update_default_model_kwargs(pretrained_cfg, kwargs, kwargs_filter) # Setup for feature extraction wrapper done at end of this fn if kwargs.pop('features_only', False): features = True feature_cfg.setdefault('out_indices', (0, 1, 2, 3, 4)) if 'out_indices' in kwargs: feature_cfg['out_indices'] = kwargs.pop('out_indices') if 'feature_cls' in kwargs: feature_cfg['feature_cls'] = kwargs.pop('feature_cls') # Instantiate the model if model_cfg is None: model = model_cls(**kwargs) else: model = model_cls(cfg=model_cfg, **kwargs) model.pretrained_cfg = pretrained_cfg model.default_cfg = model.pretrained_cfg # alias for backwards compat if pruned: model = adapt_model_from_file(model, variant) # For classification models, check class attr, then kwargs, then default to 1k, otherwise 0 for feats num_classes_pretrained = 0 if features else getattr(model, 'num_classes', kwargs.get('num_classes', 1000)) if pretrained: load_pretrained( model, pretrained_cfg=pretrained_cfg, num_classes=num_classes_pretrained, in_chans=kwargs.get('in_chans', 3), filter_fn=pretrained_filter_fn, strict=pretrained_strict, cache_dir=cache_dir, ) # Wrap the model in a feature extraction module if enabled if features: use_getter = False if 'feature_cls' in feature_cfg: feature_cls = feature_cfg.pop('feature_cls') if isinstance(feature_cls, str): feature_cls = feature_cls.lower() # flatten_sequential only valid for some feature extractors if feature_cls not in ('dict', 
'list', 'hook'): feature_cfg.pop('flatten_sequential', None) if 'hook' in feature_cls: feature_cls = FeatureHookNet elif feature_cls == 'list': feature_cls = FeatureListNet elif feature_cls == 'dict': feature_cls = FeatureDictNet elif feature_cls == 'fx': feature_cls = FeatureGraphNet elif feature_cls == 'getter': use_getter = True feature_cls = FeatureGetterNet else: assert False, f'Unknown feature class {feature_cls}' else: feature_cls = FeatureListNet output_fmt = getattr(model, 'output_fmt', None) if output_fmt is not None and not use_getter: # don't set default for intermediate feat getter feature_cfg.setdefault('output_fmt', output_fmt) model = feature_cls(model, **feature_cfg) model.pretrained_cfg = pretrained_cfg_for_features(pretrained_cfg) # add back pretrained cfg model.default_cfg = model.pretrained_cfg # alias for rename backwards compat (default_cfg -> pretrained_cfg) return model
pytorch-image-models/timm/models/_builder.py/0
{ "file_path": "pytorch-image-models/timm/models/_builder.py", "repo_id": "pytorch-image-models", "token_count": 8732 }
""" Model Registry Hacked together by / Copyright 2020 Ross Wightman """ import fnmatch import re import sys import warnings from collections import defaultdict, deque from copy import deepcopy from dataclasses import replace from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Sequence, Union, Tuple from ._pretrained import PretrainedCfg, DefaultCfg __all__ = [ 'split_model_name_tag', 'get_arch_name', 'register_model', 'generate_default_cfgs', 'list_models', 'list_pretrained', 'is_model', 'model_entrypoint', 'list_modules', 'is_model_in_modules', 'get_pretrained_cfg_value', 'is_model_pretrained', 'get_arch_pretrained_cfgs' ] _module_to_models: Dict[str, Set[str]] = defaultdict(set) # dict of sets to check membership of model in module _model_to_module: Dict[str, str] = {} # mapping of model names to module names _model_entrypoints: Dict[str, Callable[..., Any]] = {} # mapping of model names to architecture entrypoint fns _model_has_pretrained: Set[str] = set() # set of model names that have pretrained weight url present _model_default_cfgs: Dict[str, PretrainedCfg] = {} # central repo for model arch -> default cfg objects _model_pretrained_cfgs: Dict[str, PretrainedCfg] = {} # central repo for model arch.tag -> pretrained cfgs _model_with_tags: Dict[str, List[str]] = defaultdict(list) # shortcut to map each model arch to all model + tag names _module_to_deprecated_models: Dict[str, Dict[str, Optional[str]]] = defaultdict(dict) _deprecated_models: Dict[str, Optional[str]] = {} def split_model_name_tag(model_name: str, no_tag: str = '') -> Tuple[str, str]: model_name, *tag_list = model_name.split('.', 1) tag = tag_list[0] if tag_list else no_tag return model_name, tag def get_arch_name(model_name: str) -> str: return split_model_name_tag(model_name)[0] def generate_default_cfgs(cfgs: Dict[str, Union[Dict[str, Any], PretrainedCfg]]): out = defaultdict(DefaultCfg) default_set = set() # no tag and tags ending with * are prioritized as default for k, v in cfgs.items(): if isinstance(v, dict): v = PretrainedCfg(**v) has_weights = v.has_weights model, tag = split_model_name_tag(k) is_default_set = model in default_set priority = (has_weights and not tag) or (tag.endswith('*') and not is_default_set) tag = tag.strip('*') default_cfg = out[model] if priority: default_cfg.tags.appendleft(tag) default_set.add(model) elif has_weights and not default_cfg.is_pretrained: default_cfg.tags.appendleft(tag) else: default_cfg.tags.append(tag) if has_weights: default_cfg.is_pretrained = True default_cfg.cfgs[tag] = v return out def register_model(fn: Callable[..., Any]) -> Callable[..., Any]: # lookup containing module mod = sys.modules[fn.__module__] module_name_split = fn.__module__.split('.') module_name = module_name_split[-1] if len(module_name_split) else '' # add model to __all__ in module model_name = fn.__name__ if hasattr(mod, '__all__'): mod.__all__.append(model_name) else: mod.__all__ = [model_name] # type: ignore # add entries to registry dict/sets if model_name in _model_entrypoints: warnings.warn( f'Overwriting {model_name} in registry with {fn.__module__}.{model_name}. This is because the name being ' 'registered conflicts with an existing name. 
Please check if this is not expected.', stacklevel=2, ) _model_entrypoints[model_name] = fn _model_to_module[model_name] = module_name _module_to_models[module_name].add(model_name) if hasattr(mod, 'default_cfgs') and model_name in mod.default_cfgs: # this will catch all models that have entrypoint matching cfg key, but miss any aliasing # entrypoints or non-matching combos default_cfg = mod.default_cfgs[model_name] if not isinstance(default_cfg, DefaultCfg): # new style default cfg dataclass w/ multiple entries per model-arch assert isinstance(default_cfg, dict) # old style cfg dict per model-arch pretrained_cfg = PretrainedCfg(**default_cfg) default_cfg = DefaultCfg(tags=deque(['']), cfgs={'': pretrained_cfg}) for tag_idx, tag in enumerate(default_cfg.tags): is_default = tag_idx == 0 pretrained_cfg = default_cfg.cfgs[tag] model_name_tag = '.'.join([model_name, tag]) if tag else model_name replace_items = dict(architecture=model_name, tag=tag if tag else None) if pretrained_cfg.hf_hub_id and pretrained_cfg.hf_hub_id == 'timm/': # auto-complete hub name w/ architecture.tag replace_items['hf_hub_id'] = pretrained_cfg.hf_hub_id + model_name_tag pretrained_cfg = replace(pretrained_cfg, **replace_items) if is_default: _model_pretrained_cfgs[model_name] = pretrained_cfg if pretrained_cfg.has_weights: # add tagless entry if it's default and has weights _model_has_pretrained.add(model_name) if tag: _model_pretrained_cfgs[model_name_tag] = pretrained_cfg if pretrained_cfg.has_weights: # add model w/ tag if tag is valid _model_has_pretrained.add(model_name_tag) _model_with_tags[model_name].append(model_name_tag) else: _model_with_tags[model_name].append(model_name) # has empty tag (to slowly remove these instances) _model_default_cfgs[model_name] = default_cfg return fn def _deprecated_model_shim(deprecated_name: str, current_fn: Callable = None, current_tag: str = ''): def _fn(pretrained=False, **kwargs): assert current_fn is not None, f'Model {deprecated_name} has been removed with no replacement.' 
current_name = '.'.join([current_fn.__name__, current_tag]) if current_tag else current_fn.__name__ warnings.warn(f'Mapping deprecated model name {deprecated_name} to current {current_name}.', stacklevel=2) pretrained_cfg = kwargs.pop('pretrained_cfg', None) return current_fn(pretrained=pretrained, pretrained_cfg=pretrained_cfg or current_tag, **kwargs) return _fn def register_model_deprecations(module_name: str, deprecation_map: Dict[str, Optional[str]]): mod = sys.modules[module_name] module_name_split = module_name.split('.') module_name = module_name_split[-1] if len(module_name_split) else '' for deprecated, current in deprecation_map.items(): if hasattr(mod, '__all__'): mod.__all__.append(deprecated) current_fn = None current_tag = '' if current: current_name, current_tag = split_model_name_tag(current) current_fn = getattr(mod, current_name) deprecated_entrypoint_fn = _deprecated_model_shim(deprecated, current_fn, current_tag) setattr(mod, deprecated, deprecated_entrypoint_fn) _model_entrypoints[deprecated] = deprecated_entrypoint_fn _model_to_module[deprecated] = module_name _module_to_models[module_name].add(deprecated) _deprecated_models[deprecated] = current _module_to_deprecated_models[module_name][deprecated] = current def _natural_key(string_: str) -> List[Union[int, str]]: """See https://blog.codinghorror.com/sorting-for-humans-natural-sort-order/""" return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())] def _expand_filter(filter: str): """ expand a 'base_filter' to 'base_filter.*' if no tag portion""" filter_base, filter_tag = split_model_name_tag(filter) if not filter_tag: return ['.'.join([filter_base, '*']), filter] else: return [filter] def list_models( filter: Union[str, List[str]] = '', module: Union[str, List[str]] = '', pretrained: bool = False, exclude_filters: Union[str, List[str]] = '', name_matches_cfg: bool = False, include_tags: Optional[bool] = None, ) -> List[str]: """ Return list of available model names, sorted alphabetically Args: filter - Wildcard filter string that works with fnmatch module - Limit model selection to a specific submodule (ie 'vision_transformer') pretrained - Include only models with valid pretrained weights if True exclude_filters - Wildcard filters to exclude models after including them with filter name_matches_cfg - Include only models w/ model_name matching default_cfg name (excludes some aliases) include_tags - Include pretrained tags in model names (model.tag). If None, defaults set to True when pretrained=True else False (default: None) Returns: models - The sorted list of models Example: model_list('gluon_resnet*') -- returns all models starting with 'gluon_resnet' model_list('*resnext*, 'resnet') -- returns all models with 'resnext' in 'resnet' module """ if filter: include_filters = filter if isinstance(filter, (tuple, list)) else [filter] else: include_filters = [] if include_tags is None: # FIXME should this be default behaviour? or default to include_tags=True? 
include_tags = pretrained if not module: all_models: Set[str] = set(_model_entrypoints.keys()) else: if isinstance(module, str): all_models: Set[str] = _module_to_models[module] else: assert isinstance(module, Sequence) all_models: Set[str] = set() for m in module: all_models.update(_module_to_models[m]) all_models = all_models - _deprecated_models.keys() # remove deprecated models from listings if include_tags: # expand model names to include names w/ pretrained tags models_with_tags: Set[str] = set() for m in all_models: models_with_tags.update(_model_with_tags[m]) all_models = models_with_tags # expand include and exclude filters to include a '.*' for proper match if no tags in filter include_filters = [ef for f in include_filters for ef in _expand_filter(f)] exclude_filters = [ef for f in exclude_filters for ef in _expand_filter(f)] if include_filters: models: Set[str] = set() for f in include_filters: include_models = fnmatch.filter(all_models, f) # include these models if len(include_models): models = models.union(include_models) else: models = all_models if exclude_filters: if not isinstance(exclude_filters, (tuple, list)): exclude_filters = [exclude_filters] for xf in exclude_filters: exclude_models = fnmatch.filter(models, xf) # exclude these models if len(exclude_models): models = models.difference(exclude_models) if pretrained: models = _model_has_pretrained.intersection(models) if name_matches_cfg: models = set(_model_pretrained_cfgs).intersection(models) return sorted(models, key=_natural_key) def list_pretrained( filter: Union[str, List[str]] = '', exclude_filters: str = '', ) -> List[str]: return list_models( filter=filter, pretrained=True, exclude_filters=exclude_filters, include_tags=True, ) def get_deprecated_models(module: str = '') -> Dict[str, str]: all_deprecated = _module_to_deprecated_models[module] if module else _deprecated_models return deepcopy(all_deprecated) def is_model(model_name: str) -> bool: """ Check if a model name exists """ arch_name = get_arch_name(model_name) return arch_name in _model_entrypoints def model_entrypoint(model_name: str, module_filter: Optional[str] = None) -> Callable[..., Any]: """Fetch a model entrypoint for specified model name """ arch_name = get_arch_name(model_name) if module_filter and arch_name not in _module_to_models.get(module_filter, {}): raise RuntimeError(f'Model ({model_name} not found in module {module_filter}.') return _model_entrypoints[arch_name] def list_modules() -> List[str]: """ Return list of module names that contain models / model entrypoints """ modules = _module_to_models.keys() return sorted(modules) def is_model_in_modules( model_name: str, module_names: Union[Tuple[str, ...], List[str], Set[str]] ) -> bool: """Check if a model exists within a subset of modules Args: model_name - name of model to check module_names - names of modules to search in """ arch_name = get_arch_name(model_name) assert isinstance(module_names, (tuple, list, set)) return any(arch_name in _module_to_models[n] for n in module_names) def is_model_pretrained(model_name: str) -> bool: return model_name in _model_has_pretrained def get_pretrained_cfg(model_name: str, allow_unregistered: bool = True) -> Optional[PretrainedCfg]: if model_name in _model_pretrained_cfgs: return deepcopy(_model_pretrained_cfgs[model_name]) arch_name, tag = split_model_name_tag(model_name) if arch_name in _model_default_cfgs: # if model arch exists, but the tag is wrong, error out raise RuntimeError(f'Invalid pretrained tag ({tag}) for {arch_name}.') if 
allow_unregistered: # if model arch doesn't exist, it has no pretrained_cfg registered, allow a default to be created return None raise RuntimeError(f'Model architecture ({arch_name}) has no pretrained cfg registered.') def get_pretrained_cfg_value(model_name: str, cfg_key: str) -> Optional[Any]: """ Get a specific model default_cfg value by key. None if key doesn't exist. """ cfg = get_pretrained_cfg(model_name, allow_unregistered=False) return getattr(cfg, cfg_key, None) def get_arch_pretrained_cfgs(model_name: str) -> Dict[str, PretrainedCfg]: """ Get all pretrained cfgs for a given architecture. """ arch_name, _ = split_model_name_tag(model_name) model_names = _model_with_tags[arch_name] cfgs = {m: _model_pretrained_cfgs[m] for m in model_names} return cfgs
pytorch-image-models/timm/models/_registry.py/0
{ "file_path": "pytorch-image-models/timm/models/_registry.py", "repo_id": "pytorch-image-models", "token_count": 5725 }
""" EdgeNeXt Paper: `EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications` - https://arxiv.org/abs/2206.10589 Original code and weights from https://github.com/mmaaz60/EdgeNeXt Modifications and additions for timm by / Copyright 2022, Ross Wightman """ import math from functools import partial from typing import Optional, Tuple import torch import torch.nn.functional as F from torch import nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import trunc_normal_tf_, DropPath, LayerNorm2d, Mlp, create_conv2d, \ NormMlpClassifierHead, ClassifierHead from ._builder import build_model_with_cfg from ._features_fx import register_notrace_module from ._manipulate import named_apply, checkpoint_seq from ._registry import register_model, generate_default_cfgs __all__ = ['EdgeNeXt'] # model_registry will add each entrypoint fn to this @register_notrace_module # reason: FX can't symbolically trace torch.arange in forward method class PositionalEncodingFourier(nn.Module): def __init__(self, hidden_dim=32, dim=768, temperature=10000): super().__init__() self.token_projection = nn.Conv2d(hidden_dim * 2, dim, kernel_size=1) self.scale = 2 * math.pi self.temperature = temperature self.hidden_dim = hidden_dim self.dim = dim def forward(self, shape: Tuple[int, int, int]): device = self.token_projection.weight.device dtype = self.token_projection.weight.dtype inv_mask = ~torch.zeros(shape).to(device=device, dtype=torch.bool) y_embed = inv_mask.cumsum(1, dtype=torch.float32) x_embed = inv_mask.cumsum(2, dtype=torch.float32) eps = 1e-6 y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale dim_t = torch.arange(self.hidden_dim, dtype=torch.int64, device=device).to(torch.float32) dim_t = self.temperature ** (2 * torch.div(dim_t, 2, rounding_mode='floor') / self.hidden_dim) pos_x = x_embed[:, :, :, None] / dim_t pos_y = y_embed[:, :, :, None] / dim_t pos_x = torch.stack( (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) pos_y = torch.stack( (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) pos = self.token_projection(pos.to(dtype)) return pos class ConvBlock(nn.Module): def __init__( self, dim, dim_out=None, kernel_size=7, stride=1, conv_bias=True, expand_ratio=4, ls_init_value=1e-6, norm_layer=partial(nn.LayerNorm, eps=1e-6), act_layer=nn.GELU, drop_path=0., ): super().__init__() dim_out = dim_out or dim self.shortcut_after_dw = stride > 1 or dim != dim_out self.conv_dw = create_conv2d( dim, dim_out, kernel_size=kernel_size, stride=stride, depthwise=True, bias=conv_bias) self.norm = norm_layer(dim_out) self.mlp = Mlp(dim_out, int(expand_ratio * dim_out), act_layer=act_layer) self.gamma = nn.Parameter(ls_init_value * torch.ones(dim_out)) if ls_init_value > 0 else None self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() def forward(self, x): shortcut = x x = self.conv_dw(x) if self.shortcut_after_dw: shortcut = x x = x.permute(0, 2, 3, 1) # (N, C, H, W) -> (N, H, W, C) x = self.norm(x) x = self.mlp(x) if self.gamma is not None: x = self.gamma * x x = x.permute(0, 3, 1, 2) # (N, H, W, C) -> (N, C, H, W) x = shortcut + self.drop_path(x) return x class CrossCovarianceAttn(nn.Module): def __init__( self, dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0. 
): super().__init__() self.num_heads = num_heads self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1)) self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) self.attn_drop = nn.Dropout(attn_drop) self.proj = nn.Linear(dim, dim) self.proj_drop = nn.Dropout(proj_drop) def forward(self, x): B, N, C = x.shape qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 4, 1) q, k, v = qkv.unbind(0) # NOTE, this is NOT spatial attn, q, k, v are B, num_heads, C, L --> C x C attn map attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)) * self.temperature attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = (attn @ v) x = x.permute(0, 3, 1, 2).reshape(B, N, C) x = self.proj(x) x = self.proj_drop(x) return x @torch.jit.ignore def no_weight_decay(self): return {'temperature'} class SplitTransposeBlock(nn.Module): def __init__( self, dim, num_scales=1, num_heads=8, expand_ratio=4, use_pos_emb=True, conv_bias=True, qkv_bias=True, ls_init_value=1e-6, norm_layer=partial(nn.LayerNorm, eps=1e-6), act_layer=nn.GELU, drop_path=0., attn_drop=0., proj_drop=0. ): super().__init__() width = max(int(math.ceil(dim / num_scales)), int(math.floor(dim // num_scales))) self.width = width self.num_scales = max(1, num_scales - 1) convs = [] for i in range(self.num_scales): convs.append(create_conv2d(width, width, kernel_size=3, depthwise=True, bias=conv_bias)) self.convs = nn.ModuleList(convs) self.pos_embd = None if use_pos_emb: self.pos_embd = PositionalEncodingFourier(dim=dim) self.norm_xca = norm_layer(dim) self.gamma_xca = nn.Parameter(ls_init_value * torch.ones(dim)) if ls_init_value > 0 else None self.xca = CrossCovarianceAttn( dim, num_heads=num_heads, qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=proj_drop) self.norm = norm_layer(dim, eps=1e-6) self.mlp = Mlp(dim, int(expand_ratio * dim), act_layer=act_layer) self.gamma = nn.Parameter(ls_init_value * torch.ones(dim)) if ls_init_value > 0 else None self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() def forward(self, x): shortcut = x # scales code re-written for torchscript as per my res2net fixes -rw # NOTE torch.split(x, self.width, 1) causing issues with ONNX export spx = x.chunk(len(self.convs) + 1, dim=1) spo = [] sp = spx[0] for i, conv in enumerate(self.convs): if i > 0: sp = sp + spx[i] sp = conv(sp) spo.append(sp) spo.append(spx[-1]) x = torch.cat(spo, 1) # XCA B, C, H, W = x.shape x = x.reshape(B, C, H * W).permute(0, 2, 1) if self.pos_embd is not None: pos_encoding = self.pos_embd((B, H, W)).reshape(B, -1, x.shape[1]).permute(0, 2, 1) x = x + pos_encoding x = x + self.drop_path(self.gamma_xca * self.xca(self.norm_xca(x))) x = x.reshape(B, H, W, C) # Inverted Bottleneck x = self.norm(x) x = self.mlp(x) if self.gamma is not None: x = self.gamma * x x = x.permute(0, 3, 1, 2) # (N, H, W, C) -> (N, C, H, W) x = shortcut + self.drop_path(x) return x class EdgeNeXtStage(nn.Module): def __init__( self, in_chs, out_chs, stride=2, depth=2, num_global_blocks=1, num_heads=4, scales=2, kernel_size=7, expand_ratio=4, use_pos_emb=False, downsample_block=False, conv_bias=True, ls_init_value=1.0, drop_path_rates=None, norm_layer=LayerNorm2d, norm_layer_cl=partial(nn.LayerNorm, eps=1e-6), act_layer=nn.GELU ): super().__init__() self.grad_checkpointing = False if downsample_block or stride == 1: self.downsample = nn.Identity() else: self.downsample = nn.Sequential( norm_layer(in_chs), nn.Conv2d(in_chs, out_chs, kernel_size=2, stride=2, bias=conv_bias) ) in_chs = out_chs stage_blocks = [] for i in range(depth): if i < depth - num_global_blocks: stage_blocks.append( ConvBlock( dim=in_chs, dim_out=out_chs, stride=stride if downsample_block and i == 0 else 1, conv_bias=conv_bias, kernel_size=kernel_size, expand_ratio=expand_ratio, ls_init_value=ls_init_value, drop_path=drop_path_rates[i], norm_layer=norm_layer_cl, act_layer=act_layer, ) ) else: stage_blocks.append( SplitTransposeBlock( dim=in_chs, num_scales=scales, num_heads=num_heads, expand_ratio=expand_ratio, use_pos_emb=use_pos_emb, conv_bias=conv_bias, ls_init_value=ls_init_value, drop_path=drop_path_rates[i], norm_layer=norm_layer_cl, act_layer=act_layer, ) ) in_chs = out_chs self.blocks = nn.Sequential(*stage_blocks) def forward(self, x): x = self.downsample(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x) else: x = self.blocks(x) return x class EdgeNeXt(nn.Module): def __init__( self, in_chans=3, num_classes=1000, global_pool='avg', dims=(24, 48, 88, 168), depths=(3, 3, 9, 3), global_block_counts=(0, 1, 1, 1), kernel_sizes=(3, 5, 7, 9), heads=(8, 8, 8, 8), d2_scales=(2, 2, 3, 4), use_pos_emb=(False, True, False, False), ls_init_value=1e-6, head_init_scale=1., expand_ratio=4, downsample_block=False, conv_bias=True, stem_type='patch', head_norm_first=False, act_layer=nn.GELU, drop_path_rate=0., drop_rate=0., ): super().__init__() self.num_classes = num_classes self.global_pool = global_pool self.drop_rate = drop_rate norm_layer = partial(LayerNorm2d, eps=1e-6) norm_layer_cl = partial(nn.LayerNorm, eps=1e-6) self.feature_info = [] assert stem_type in ('patch', 'overlap') if stem_type == 'patch': self.stem = nn.Sequential( nn.Conv2d(in_chans, dims[0], kernel_size=4, stride=4, bias=conv_bias), norm_layer(dims[0]), ) else: self.stem = nn.Sequential( nn.Conv2d(in_chans, dims[0], kernel_size=9, stride=4, padding=9 // 2, bias=conv_bias), norm_layer(dims[0]), ) curr_stride = 4 stages = [] dp_rates = [x.tolist() for x in torch.linspace(0, drop_path_rate, 
sum(depths)).split(depths)] in_chs = dims[0] for i in range(4): stride = 2 if curr_stride == 2 or i > 0 else 1 # FIXME support dilation / output_stride curr_stride *= stride stages.append(EdgeNeXtStage( in_chs=in_chs, out_chs=dims[i], stride=stride, depth=depths[i], num_global_blocks=global_block_counts[i], num_heads=heads[i], drop_path_rates=dp_rates[i], scales=d2_scales[i], expand_ratio=expand_ratio, kernel_size=kernel_sizes[i], use_pos_emb=use_pos_emb[i], ls_init_value=ls_init_value, downsample_block=downsample_block, conv_bias=conv_bias, norm_layer=norm_layer, norm_layer_cl=norm_layer_cl, act_layer=act_layer, )) # NOTE feature_info use currently assumes stage 0 == stride 1, rest are stride 2 in_chs = dims[i] self.feature_info += [dict(num_chs=in_chs, reduction=curr_stride, module=f'stages.{i}')] self.stages = nn.Sequential(*stages) self.num_features = self.head_hidden_size = dims[-1] if head_norm_first: self.norm_pre = norm_layer(self.num_features) self.head = ClassifierHead( self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate, ) else: self.norm_pre = nn.Identity() self.head = NormMlpClassifierHead( self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate, norm_layer=norm_layer, ) named_apply(partial(_init_weights, head_init_scale=head_init_scale), self) @torch.jit.ignore def group_matcher(self, coarse=False): return dict( stem=r'^stem', blocks=r'^stages\.(\d+)' if coarse else [ (r'^stages\.(\d+)\.downsample', (0,)), # blocks (r'^stages\.(\d+)\.blocks\.(\d+)', None), (r'^norm_pre', (99999,)) ] ) @torch.jit.ignore def set_grad_checkpointing(self, enable=True): for s in self.stages: s.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self) -> nn.Module: return self.head.fc def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None): self.num_classes = num_classes self.head.reset(num_classes, global_pool) def forward_features(self, x): x = self.stem(x) x = self.stages(x) x = self.norm_pre(x) return x def forward_head(self, x, pre_logits: bool = False): return self.head(x, pre_logits=True) if pre_logits else self.head(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _init_weights(module, name=None, head_init_scale=1.0): if isinstance(module, nn.Conv2d): trunc_normal_tf_(module.weight, std=.02) if module.bias is not None: nn.init.zeros_(module.bias) elif isinstance(module, nn.Linear): trunc_normal_tf_(module.weight, std=.02) nn.init.zeros_(module.bias) if name and 'head.' in name: module.weight.data.mul_(head_init_scale) module.bias.data.mul_(head_init_scale) def checkpoint_filter_fn(state_dict, model): """ Remap FB checkpoints -> timm """ if 'head.norm.weight' in state_dict or 'norm_pre.weight' in state_dict: return state_dict # non-FB checkpoint # models were released as train checkpoints... 
:/ if 'model_ema' in state_dict: state_dict = state_dict['model_ema'] elif 'model' in state_dict: state_dict = state_dict['model'] elif 'state_dict' in state_dict: state_dict = state_dict['state_dict'] out_dict = {} import re for k, v in state_dict.items(): k = k.replace('downsample_layers.0.', 'stem.') k = re.sub(r'stages.([0-9]+).([0-9]+)', r'stages.\1.blocks.\2', k) k = re.sub(r'downsample_layers.([0-9]+).([0-9]+)', r'stages.\1.downsample.\2', k) k = k.replace('dwconv', 'conv_dw') k = k.replace('pwconv', 'mlp.fc') k = k.replace('head.', 'head.fc.') if k.startswith('norm.'): k = k.replace('norm', 'head.norm') if v.ndim == 2 and 'head' not in k: model_shape = model.state_dict()[k].shape v = v.reshape(model_shape) out_dict[k] = v return out_dict def _create_edgenext(variant, pretrained=False, **kwargs): model = build_model_with_cfg( EdgeNeXt, variant, pretrained, pretrained_filter_fn=checkpoint_filter_fn, feature_cfg=dict(out_indices=(0, 1, 2, 3), flatten_sequential=True), **kwargs) return model def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 256, 256), 'pool_size': (8, 8), 'crop_pct': 0.9, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.0', 'classifier': 'head.fc', **kwargs } default_cfgs = generate_default_cfgs({ 'edgenext_xx_small.in1k': _cfg( hf_hub_id='timm/', test_input_size=(3, 288, 288), test_crop_pct=1.0), 'edgenext_x_small.in1k': _cfg( hf_hub_id='timm/', test_input_size=(3, 288, 288), test_crop_pct=1.0), 'edgenext_small.usi_in1k': _cfg( # USI weights hf_hub_id='timm/', crop_pct=0.95, test_input_size=(3, 320, 320), test_crop_pct=1.0, ), 'edgenext_base.usi_in1k': _cfg( # USI weights hf_hub_id='timm/', crop_pct=0.95, test_input_size=(3, 320, 320), test_crop_pct=1.0, ), 'edgenext_base.in21k_ft_in1k': _cfg( # USI weights hf_hub_id='timm/', crop_pct=0.95, test_input_size=(3, 320, 320), test_crop_pct=1.0, ), 'edgenext_small_rw.sw_in1k': _cfg( hf_hub_id='timm/', test_input_size=(3, 320, 320), test_crop_pct=1.0, ), }) @register_model def edgenext_xx_small(pretrained=False, **kwargs) -> EdgeNeXt: # 1.33M & 260.58M @ 256 resolution # 71.23% Top-1 accuracy # No AA, Color Jitter=0.4, No Mixup & Cutmix, DropPath=0.0, BS=4096, lr=0.006, multi-scale-sampler # Jetson FPS=51.66 versus 47.67 for MobileViT_XXS # For A100: FPS @ BS=1: 212.13 & @ BS=256: 7042.06 versus FPS @ BS=1: 96.68 & @ BS=256: 4624.71 for MobileViT_XXS model_args = dict(depths=(2, 2, 6, 2), dims=(24, 48, 88, 168), heads=(4, 4, 4, 4)) return _create_edgenext('edgenext_xx_small', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def edgenext_x_small(pretrained=False, **kwargs) -> EdgeNeXt: # 2.34M & 538.0M @ 256 resolution # 75.00% Top-1 accuracy # No AA, No Mixup & Cutmix, DropPath=0.0, BS=4096, lr=0.006, multi-scale-sampler # Jetson FPS=31.61 versus 28.49 for MobileViT_XS # For A100: FPS @ BS=1: 179.55 & @ BS=256: 4404.95 versus FPS @ BS=1: 94.55 & @ BS=256: 2361.53 for MobileViT_XS model_args = dict(depths=(3, 3, 9, 3), dims=(32, 64, 100, 192), heads=(4, 4, 4, 4)) return _create_edgenext('edgenext_x_small', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def edgenext_small(pretrained=False, **kwargs) -> EdgeNeXt: # 5.59M & 1260.59M @ 256 resolution # 79.43% Top-1 accuracy # AA=True, No Mixup & Cutmix, DropPath=0.1, BS=4096, lr=0.006, multi-scale-sampler # Jetson FPS=20.47 versus 18.86 for MobileViT_S # For A100: FPS @ BS=1: 172.33 & @ BS=256: 3010.25 versus FPS @ BS=1: 93.84 & @ BS=256: 
1785.92 for MobileViT_S model_args = dict(depths=(3, 3, 9, 3), dims=(48, 96, 160, 304)) return _create_edgenext('edgenext_small', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def edgenext_base(pretrained=False, **kwargs) -> EdgeNeXt: # 18.51M & 3840.93M @ 256 resolution # 82.5% (normal) 83.7% (USI) Top-1 accuracy # AA=True, Mixup & Cutmix, DropPath=0.1, BS=4096, lr=0.006, multi-scale-sampler # Jetson FPS=xx.xx versus xx.xx for MobileViT_S # For A100: FPS @ BS=1: xxx.xx & @ BS=256: xxxx.xx model_args = dict(depths=[3, 3, 9, 3], dims=[80, 160, 288, 584]) return _create_edgenext('edgenext_base', pretrained=pretrained, **dict(model_args, **kwargs)) @register_model def edgenext_small_rw(pretrained=False, **kwargs) -> EdgeNeXt: model_args = dict( depths=(3, 3, 9, 3), dims=(48, 96, 192, 384), downsample_block=True, conv_bias=False, stem_type='overlap') return _create_edgenext('edgenext_small_rw', pretrained=pretrained, **dict(model_args, **kwargs))
pytorch-image-models/timm/models/edgenext.py/0
{ "file_path": "pytorch-image-models/timm/models/edgenext.py", "repo_id": "pytorch-image-models", "token_count": 11055 }
""" PP-HGNet (V1 & V2) Reference: https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/ImageNet1k/PP-HGNetV2.md The Paddle Implement of PP-HGNet (https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5.1/docs/en/models/PP-HGNet_en.md) PP-HGNet: https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5.1/ppcls/arch/backbone/legendary_models/pp_hgnet.py PP-HGNetv2: https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5.1/ppcls/arch/backbone/legendary_models/pp_hgnet_v2.py """ from typing import Dict, Optional import torch import torch.nn as nn import torch.nn.functional as F from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import SelectAdaptivePool2d, DropPath, create_conv2d from ._builder import build_model_with_cfg from ._registry import register_model, generate_default_cfgs from ._manipulate import checkpoint_seq __all__ = ['HighPerfGpuNet'] class LearnableAffineBlock(nn.Module): def __init__( self, scale_value=1.0, bias_value=0.0 ): super().__init__() self.scale = nn.Parameter(torch.tensor([scale_value]), requires_grad=True) self.bias = nn.Parameter(torch.tensor([bias_value]), requires_grad=True) def forward(self, x): return self.scale * x + self.bias class ConvBNAct(nn.Module): def __init__( self, in_chs, out_chs, kernel_size, stride=1, groups=1, padding='', use_act=True, use_lab=False ): super().__init__() self.use_act = use_act self.use_lab = use_lab self.conv = create_conv2d( in_chs, out_chs, kernel_size, stride=stride, padding=padding, groups=groups, ) self.bn = nn.BatchNorm2d(out_chs) if self.use_act: self.act = nn.ReLU() else: self.act = nn.Identity() if self.use_act and self.use_lab: self.lab = LearnableAffineBlock() else: self.lab = nn.Identity() def forward(self, x): x = self.conv(x) x = self.bn(x) x = self.act(x) x = self.lab(x) return x class LightConvBNAct(nn.Module): def __init__( self, in_chs, out_chs, kernel_size, groups=1, use_lab=False ): super().__init__() self.conv1 = ConvBNAct( in_chs, out_chs, kernel_size=1, use_act=False, use_lab=use_lab, ) self.conv2 = ConvBNAct( out_chs, out_chs, kernel_size=kernel_size, groups=out_chs, use_act=True, use_lab=use_lab, ) def forward(self, x): x = self.conv1(x) x = self.conv2(x) return x class EseModule(nn.Module): def __init__(self, chs): super().__init__() self.conv = nn.Conv2d( chs, chs, kernel_size=1, stride=1, padding=0, ) self.sigmoid = nn.Sigmoid() def forward(self, x): identity = x x = x.mean((2, 3), keepdim=True) x = self.conv(x) x = self.sigmoid(x) return torch.mul(identity, x) class StemV1(nn.Module): # for PP-HGNet def __init__(self, stem_chs): super().__init__() self.stem = nn.Sequential(*[ ConvBNAct( stem_chs[i], stem_chs[i + 1], kernel_size=3, stride=2 if i == 0 else 1) for i in range( len(stem_chs) - 1) ]) self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) def forward(self, x): x = self.stem(x) x = self.pool(x) return x class StemV2(nn.Module): # for PP-HGNetv2 def __init__(self, in_chs, mid_chs, out_chs, use_lab=False): super().__init__() self.stem1 = ConvBNAct( in_chs, mid_chs, kernel_size=3, stride=2, use_lab=use_lab, ) self.stem2a = ConvBNAct( mid_chs, mid_chs // 2, kernel_size=2, stride=1, use_lab=use_lab, ) self.stem2b = ConvBNAct( mid_chs // 2, mid_chs, kernel_size=2, stride=1, use_lab=use_lab, ) self.stem3 = ConvBNAct( mid_chs * 2, mid_chs, kernel_size=3, stride=2, use_lab=use_lab, ) self.stem4 = ConvBNAct( mid_chs, out_chs, kernel_size=1, stride=1, use_lab=use_lab, ) self.pool = nn.MaxPool2d(kernel_size=2, stride=1, ceil_mode=True) 
def forward(self, x): x = self.stem1(x) x = F.pad(x, (0, 1, 0, 1)) x2 = self.stem2a(x) x2 = F.pad(x2, (0, 1, 0, 1)) x2 = self.stem2b(x2) x1 = self.pool(x) x = torch.cat([x1, x2], dim=1) x = self.stem3(x) x = self.stem4(x) return x class HighPerfGpuBlock(nn.Module): def __init__( self, in_chs, mid_chs, out_chs, layer_num, kernel_size=3, residual=False, light_block=False, use_lab=False, agg='ese', drop_path=0., ): super().__init__() self.residual = residual self.layers = nn.ModuleList() for i in range(layer_num): if light_block: self.layers.append( LightConvBNAct( in_chs if i == 0 else mid_chs, mid_chs, kernel_size=kernel_size, use_lab=use_lab, ) ) else: self.layers.append( ConvBNAct( in_chs if i == 0 else mid_chs, mid_chs, kernel_size=kernel_size, stride=1, use_lab=use_lab, ) ) # feature aggregation total_chs = in_chs + layer_num * mid_chs if agg == 'se': aggregation_squeeze_conv = ConvBNAct( total_chs, out_chs // 2, kernel_size=1, stride=1, use_lab=use_lab, ) aggregation_excitation_conv = ConvBNAct( out_chs // 2, out_chs, kernel_size=1, stride=1, use_lab=use_lab, ) self.aggregation = nn.Sequential( aggregation_squeeze_conv, aggregation_excitation_conv, ) else: aggregation_conv = ConvBNAct( total_chs, out_chs, kernel_size=1, stride=1, use_lab=use_lab, ) att = EseModule(out_chs) self.aggregation = nn.Sequential( aggregation_conv, att, ) self.drop_path = DropPath(drop_path) if drop_path else nn.Identity() def forward(self, x): identity = x output = [x] for layer in self.layers: x = layer(x) output.append(x) x = torch.cat(output, dim=1) x = self.aggregation(x) if self.residual: x = self.drop_path(x) + identity return x class HighPerfGpuStage(nn.Module): def __init__( self, in_chs, mid_chs, out_chs, block_num, layer_num, downsample=True, stride=2, light_block=False, kernel_size=3, use_lab=False, agg='ese', drop_path=0., ): super().__init__() self.downsample = downsample if downsample: self.downsample = ConvBNAct( in_chs, in_chs, kernel_size=3, stride=stride, groups=in_chs, use_act=False, use_lab=use_lab, ) else: self.downsample = nn.Identity() blocks_list = [] for i in range(block_num): blocks_list.append( HighPerfGpuBlock( in_chs if i == 0 else out_chs, mid_chs, out_chs, layer_num, residual=False if i == 0 else True, kernel_size=kernel_size, light_block=light_block, use_lab=use_lab, agg=agg, drop_path=drop_path[i] if isinstance(drop_path, (list, tuple)) else drop_path, ) ) self.blocks = nn.Sequential(*blocks_list) self.grad_checkpointing= False def forward(self, x): x = self.downsample(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x, flatten=False) else: x = self.blocks(x) return x class ClassifierHead(nn.Module): def __init__( self, in_features: int, num_classes: int, pool_type: str = 'avg', drop_rate: float = 0., hidden_size: Optional[int] = 2048, use_lab: bool = False ): super(ClassifierHead, self).__init__() self.num_features = in_features if pool_type is not None: if not pool_type: assert num_classes == 0, 'Classifier head must be removed if pooling is disabled' self.global_pool = SelectAdaptivePool2d(pool_type=pool_type) if hidden_size is not None: self.num_features = hidden_size last_conv = nn.Conv2d( in_features, hidden_size, kernel_size=1, stride=1, padding=0, bias=False, ) act = nn.ReLU() if use_lab: lab = LearnableAffineBlock() self.last_conv = nn.Sequential(last_conv, act, lab) else: self.last_conv = nn.Sequential(last_conv, act) else: self.last_conv = nn.Identity() self.dropout = nn.Dropout(drop_rate) self.flatten = nn.Flatten(1) 
if pool_type else nn.Identity() # don't flatten if pooling disabled self.fc = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() def reset(self, num_classes: int, pool_type: Optional[str] = None): if pool_type is not None: if not pool_type: assert num_classes == 0, 'Classifier head must be removed if pooling is disabled' self.global_pool = SelectAdaptivePool2d(pool_type=pool_type) self.flatten = nn.Flatten(1) if pool_type else nn.Identity() # don't flatten if pooling disabled self.fc = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() def forward(self, x, pre_logits: bool = False): x = self.global_pool(x) x = self.last_conv(x) x = self.dropout(x) x = self.flatten(x) if pre_logits: return x x = self.fc(x) return x class HighPerfGpuNet(nn.Module): def __init__( self, cfg: Dict, in_chans: int = 3, num_classes: int = 1000, global_pool: str = 'avg', head_hidden_size: Optional[int] = 2048, drop_rate: float = 0., drop_path_rate: float = 0., use_lab: bool = False, **kwargs, ): super(HighPerfGpuNet, self).__init__() stem_type = cfg["stem_type"] stem_chs = cfg["stem_chs"] stages_cfg = [cfg["stage1"], cfg["stage2"], cfg["stage3"], cfg["stage4"]] self.num_classes = num_classes self.drop_rate = drop_rate self.use_lab = use_lab assert stem_type in ['v1', 'v2'] if stem_type == 'v2': self.stem = StemV2( in_chs=in_chans, mid_chs=stem_chs[0], out_chs=stem_chs[1], use_lab=use_lab) else: self.stem = StemV1([in_chans] + stem_chs) current_stride = 4 stages = [] self.feature_info = [] block_depths = [c[3] for c in stages_cfg] dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(block_depths)).split(block_depths)] for i, stage_config in enumerate(stages_cfg): in_chs, mid_chs, out_chs, block_num, downsample, light_block, kernel_size, layer_num = stage_config stages += [HighPerfGpuStage( in_chs=in_chs, mid_chs=mid_chs, out_chs=out_chs, block_num=block_num, layer_num=layer_num, downsample=downsample, light_block=light_block, kernel_size=kernel_size, use_lab=use_lab, agg='ese' if stem_type == 'v1' else 'se', drop_path=dpr[i], )] self.num_features = out_chs if downsample: current_stride *= 2 self.feature_info += [dict(num_chs=self.num_features, reduction=current_stride, module=f'stages.{i}')] self.stages = nn.Sequential(*stages) self.head = ClassifierHead( self.num_features, num_classes=num_classes, pool_type=global_pool, drop_rate=drop_rate, hidden_size=head_hidden_size, use_lab=use_lab ) self.head_hidden_size = self.head.num_features for n, m in self.named_modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') elif isinstance(m, nn.BatchNorm2d): nn.init.ones_(m.weight) nn.init.zeros_(m.bias) elif isinstance(m, nn.Linear): nn.init.zeros_(m.bias) @torch.jit.ignore def group_matcher(self, coarse=False): return dict( stem=r'^stem', blocks=r'^stages\.(\d+)' if coarse else r'^stages\.(\d+).blocks\.(\d+)', ) @torch.jit.ignore def set_grad_checkpointing(self, enable=True): for s in self.stages: s.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self) -> nn.Module: return self.head.fc def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None): self.num_classes = num_classes self.head.reset(num_classes, global_pool) def forward_features(self, x): x = self.stem(x) return self.stages(x) def forward_head(self, x, pre_logits: bool = False): return self.head(x, pre_logits=pre_logits) if pre_logits else self.head(x) def forward(self, x): x = self.forward_features(x) x = 
self.forward_head(x) return x model_cfgs = dict( # PP-HGNet hgnet_tiny={ "stem_type": 'v1', "stem_chs": [48, 48, 96], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [96, 96, 224, 1, False, False, 3, 5], "stage2": [224, 128, 448, 1, True, False, 3, 5], "stage3": [448, 160, 512, 2, True, False, 3, 5], "stage4": [512, 192, 768, 1, True, False, 3, 5], }, hgnet_small={ "stem_type": 'v1', "stem_chs": [64, 64, 128], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [128, 128, 256, 1, False, False, 3, 6], "stage2": [256, 160, 512, 1, True, False, 3, 6], "stage3": [512, 192, 768, 2, True, False, 3, 6], "stage4": [768, 224, 1024, 1, True, False, 3, 6], }, hgnet_base={ "stem_type": 'v1', "stem_chs": [96, 96, 160], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [160, 192, 320, 1, False, False, 3, 7], "stage2": [320, 224, 640, 2, True, False, 3, 7], "stage3": [640, 256, 960, 3, True, False, 3, 7], "stage4": [960, 288, 1280, 2, True, False, 3, 7], }, # PP-HGNetv2 hgnetv2_b0={ "stem_type": 'v2', "stem_chs": [16, 16], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [16, 16, 64, 1, False, False, 3, 3], "stage2": [64, 32, 256, 1, True, False, 3, 3], "stage3": [256, 64, 512, 2, True, True, 5, 3], "stage4": [512, 128, 1024, 1, True, True, 5, 3], }, hgnetv2_b1={ "stem_type": 'v2', "stem_chs": [24, 32], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [32, 32, 64, 1, False, False, 3, 3], "stage2": [64, 48, 256, 1, True, False, 3, 3], "stage3": [256, 96, 512, 2, True, True, 5, 3], "stage4": [512, 192, 1024, 1, True, True, 5, 3], }, hgnetv2_b2={ "stem_type": 'v2', "stem_chs": [24, 32], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [32, 32, 96, 1, False, False, 3, 4], "stage2": [96, 64, 384, 1, True, False, 3, 4], "stage3": [384, 128, 768, 3, True, True, 5, 4], "stage4": [768, 256, 1536, 1, True, True, 5, 4], }, hgnetv2_b3={ "stem_type": 'v2', "stem_chs": [24, 32], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [32, 32, 128, 1, False, False, 3, 5], "stage2": [128, 64, 512, 1, True, False, 3, 5], "stage3": [512, 128, 1024, 3, True, True, 5, 5], "stage4": [1024, 256, 2048, 1, True, True, 5, 5], }, hgnetv2_b4={ "stem_type": 'v2', "stem_chs": [32, 48], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [48, 48, 128, 1, False, False, 3, 6], "stage2": [128, 96, 512, 1, True, False, 3, 6], "stage3": [512, 192, 1024, 3, True, True, 5, 6], "stage4": [1024, 384, 2048, 1, True, True, 5, 6], }, hgnetv2_b5={ "stem_type": 'v2', "stem_chs": [32, 64], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [64, 64, 128, 1, False, False, 3, 6], "stage2": [128, 128, 512, 2, True, False, 3, 6], "stage3": [512, 256, 1024, 5, True, True, 5, 6], "stage4": [1024, 512, 2048, 2, True, True, 5, 6], }, hgnetv2_b6={ "stem_type": 'v2', "stem_chs": [48, 96], # in_chs, mid_chs, out_chs, blocks, downsample, light_block, kernel_size, layer_num "stage1": [96, 96, 192, 2, False, False, 3, 6], "stage2": [192, 192, 512, 3, True, False, 3, 6], "stage3": [512, 384, 1024, 6, True, True, 5, 6], "stage4": [1024, 768, 2048, 3, True, True, 5, 6], }, ) def _create_hgnet(variant, pretrained=False, **kwargs): out_indices = kwargs.pop('out_indices', (0, 1, 2, 3)) 
return build_model_with_cfg( HighPerfGpuNet, variant, pretrained, model_cfg=model_cfgs[variant], feature_cfg=dict(flatten_sequential=True, out_indices=out_indices), **kwargs, ) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.965, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'classifier': 'head.fc', 'first_conv': 'stem.stem1.conv', 'test_crop_pct': 1.0, 'test_input_size': (3, 288, 288), **kwargs, } default_cfgs = generate_default_cfgs({ 'hgnet_tiny.paddle_in1k': _cfg( first_conv='stem.stem.0.conv', hf_hub_id='timm/'), 'hgnet_tiny.ssld_in1k': _cfg( first_conv='stem.stem.0.conv', hf_hub_id='timm/'), 'hgnet_small.paddle_in1k': _cfg( first_conv='stem.stem.0.conv', hf_hub_id='timm/'), 'hgnet_small.ssld_in1k': _cfg( first_conv='stem.stem.0.conv', hf_hub_id='timm/'), 'hgnet_base.ssld_in1k': _cfg( first_conv='stem.stem.0.conv', hf_hub_id='timm/'), 'hgnetv2_b0.ssld_stage2_ft_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b0.ssld_stage1_in22k_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b1.ssld_stage2_ft_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b1.ssld_stage1_in22k_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b2.ssld_stage2_ft_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b2.ssld_stage1_in22k_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b3.ssld_stage2_ft_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b3.ssld_stage1_in22k_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b4.ssld_stage2_ft_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b4.ssld_stage1_in22k_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b5.ssld_stage2_ft_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b5.ssld_stage1_in22k_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b6.ssld_stage2_ft_in1k': _cfg( hf_hub_id='timm/'), 'hgnetv2_b6.ssld_stage1_in22k_in1k': _cfg( hf_hub_id='timm/'), }) @register_model def hgnet_tiny(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnet_tiny', pretrained=pretrained, **kwargs) @register_model def hgnet_small(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnet_small', pretrained=pretrained, **kwargs) @register_model def hgnet_base(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnet_base', pretrained=pretrained, **kwargs) @register_model def hgnetv2_b0(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnetv2_b0', pretrained=pretrained, use_lab=True, **kwargs) @register_model def hgnetv2_b1(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnetv2_b1', pretrained=pretrained, use_lab=True, **kwargs) @register_model def hgnetv2_b2(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnetv2_b2', pretrained=pretrained, use_lab=True, **kwargs) @register_model def hgnetv2_b3(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnetv2_b3', pretrained=pretrained, use_lab=True, **kwargs) @register_model def hgnetv2_b4(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnetv2_b4', pretrained=pretrained, **kwargs) @register_model def hgnetv2_b5(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnetv2_b5', pretrained=pretrained, **kwargs) @register_model def hgnetv2_b6(pretrained=False, **kwargs) -> HighPerfGpuNet: return _create_hgnet('hgnetv2_b6', pretrained=pretrained, **kwargs)
pytorch-image-models/timm/models/hgnet.py/0
{ "file_path": "pytorch-image-models/timm/models/hgnet.py", "repo_id": "pytorch-image-models", "token_count": 13177 }
""" MobileViT Paper: V1: `MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer` - https://arxiv.org/abs/2110.02178 V2: `Separable Self-attention for Mobile Vision Transformers` - https://arxiv.org/abs/2206.02680 MobileVitBlock and checkpoints adapted from https://github.com/apple/ml-cvnets (original copyright below) License: https://github.com/apple/ml-cvnets/blob/main/LICENSE (Apple open source) Rest of code, ByobNet, and Transformer block hacked together by / Copyright 2022, Ross Wightman """ # # For licensing see accompanying LICENSE file. # Copyright (C) 2020 Apple Inc. All Rights Reserved. # import math from typing import Callable, Tuple, Optional import torch import torch.nn.functional as F from torch import nn from timm.layers import to_2tuple, make_divisible, GroupNorm1, ConvMlp, DropPath, is_exportable from ._builder import build_model_with_cfg from ._features_fx import register_notrace_module from ._registry import register_model, generate_default_cfgs, register_model_deprecations from .byobnet import register_block, ByoBlockCfg, ByoModelCfg, ByobNet, LayerFn, num_groups from .vision_transformer import Block as TransformerBlock __all__ = [] def _inverted_residual_block(d, c, s, br=4.0): # inverted residual is a bottleneck block with bottle_ratio > 1 applied to in_chs, linear output, gs=1 (depthwise) return ByoBlockCfg( type='bottle', d=d, c=c, s=s, gs=1, br=br, block_kwargs=dict(bottle_in=True, linear_out=True)) def _mobilevit_block(d, c, s, transformer_dim, transformer_depth, patch_size=4, br=4.0): # inverted residual + mobilevit blocks as per MobileViT network return ( _inverted_residual_block(d=d, c=c, s=s, br=br), ByoBlockCfg( type='mobilevit', d=1, c=c, s=1, block_kwargs=dict( transformer_dim=transformer_dim, transformer_depth=transformer_depth, patch_size=patch_size) ) ) def _mobilevitv2_block(d, c, s, transformer_depth, patch_size=2, br=2.0, transformer_br=0.5): # inverted residual + mobilevit blocks as per MobileViT network return ( _inverted_residual_block(d=d, c=c, s=s, br=br), ByoBlockCfg( type='mobilevit2', d=1, c=c, s=1, br=transformer_br, gs=1, block_kwargs=dict( transformer_depth=transformer_depth, patch_size=patch_size) ) ) def _mobilevitv2_cfg(multiplier=1.0): chs = (64, 128, 256, 384, 512) if multiplier != 1.0: chs = tuple([int(c * multiplier) for c in chs]) cfg = ByoModelCfg( blocks=( _inverted_residual_block(d=1, c=chs[0], s=1, br=2.0), _inverted_residual_block(d=2, c=chs[1], s=2, br=2.0), _mobilevitv2_block(d=1, c=chs[2], s=2, transformer_depth=2), _mobilevitv2_block(d=1, c=chs[3], s=2, transformer_depth=4), _mobilevitv2_block(d=1, c=chs[4], s=2, transformer_depth=3), ), stem_chs=int(32 * multiplier), stem_type='3x3', stem_pool='', downsample='', act_layer='silu', ) return cfg model_cfgs = dict( mobilevit_xxs=ByoModelCfg( blocks=( _inverted_residual_block(d=1, c=16, s=1, br=2.0), _inverted_residual_block(d=3, c=24, s=2, br=2.0), _mobilevit_block(d=1, c=48, s=2, transformer_dim=64, transformer_depth=2, patch_size=2, br=2.0), _mobilevit_block(d=1, c=64, s=2, transformer_dim=80, transformer_depth=4, patch_size=2, br=2.0), _mobilevit_block(d=1, c=80, s=2, transformer_dim=96, transformer_depth=3, patch_size=2, br=2.0), ), stem_chs=16, stem_type='3x3', stem_pool='', downsample='', act_layer='silu', num_features=320, ), mobilevit_xs=ByoModelCfg( blocks=( _inverted_residual_block(d=1, c=32, s=1), _inverted_residual_block(d=3, c=48, s=2), _mobilevit_block(d=1, c=64, s=2, transformer_dim=96, transformer_depth=2, patch_size=2), 
_mobilevit_block(d=1, c=80, s=2, transformer_dim=120, transformer_depth=4, patch_size=2), _mobilevit_block(d=1, c=96, s=2, transformer_dim=144, transformer_depth=3, patch_size=2), ), stem_chs=16, stem_type='3x3', stem_pool='', downsample='', act_layer='silu', num_features=384, ), mobilevit_s=ByoModelCfg( blocks=( _inverted_residual_block(d=1, c=32, s=1), _inverted_residual_block(d=3, c=64, s=2), _mobilevit_block(d=1, c=96, s=2, transformer_dim=144, transformer_depth=2, patch_size=2), _mobilevit_block(d=1, c=128, s=2, transformer_dim=192, transformer_depth=4, patch_size=2), _mobilevit_block(d=1, c=160, s=2, transformer_dim=240, transformer_depth=3, patch_size=2), ), stem_chs=16, stem_type='3x3', stem_pool='', downsample='', act_layer='silu', num_features=640, ), semobilevit_s=ByoModelCfg( blocks=( _inverted_residual_block(d=1, c=32, s=1), _inverted_residual_block(d=3, c=64, s=2), _mobilevit_block(d=1, c=96, s=2, transformer_dim=144, transformer_depth=2, patch_size=2), _mobilevit_block(d=1, c=128, s=2, transformer_dim=192, transformer_depth=4, patch_size=2), _mobilevit_block(d=1, c=160, s=2, transformer_dim=240, transformer_depth=3, patch_size=2), ), stem_chs=16, stem_type='3x3', stem_pool='', downsample='', attn_layer='se', attn_kwargs=dict(rd_ratio=1/8), num_features=640, ), mobilevitv2_050=_mobilevitv2_cfg(.50), mobilevitv2_075=_mobilevitv2_cfg(.75), mobilevitv2_125=_mobilevitv2_cfg(1.25), mobilevitv2_100=_mobilevitv2_cfg(1.0), mobilevitv2_150=_mobilevitv2_cfg(1.5), mobilevitv2_175=_mobilevitv2_cfg(1.75), mobilevitv2_200=_mobilevitv2_cfg(2.0), ) @register_notrace_module class MobileVitBlock(nn.Module): """ MobileViT block Paper: https://arxiv.org/abs/2110.02178?context=cs.LG """ def __init__( self, in_chs: int, out_chs: Optional[int] = None, kernel_size: int = 3, stride: int = 1, bottle_ratio: float = 1.0, group_size: Optional[int] = None, dilation: Tuple[int, int] = (1, 1), mlp_ratio: float = 2.0, transformer_dim: Optional[int] = None, transformer_depth: int = 2, patch_size: int = 8, num_heads: int = 4, attn_drop: float = 0., drop: int = 0., no_fusion: bool = False, drop_path_rate: float = 0., layers: LayerFn = None, transformer_norm_layer: Callable = nn.LayerNorm, **kwargs, # eat unused args ): super(MobileVitBlock, self).__init__() layers = layers or LayerFn() groups = num_groups(group_size, in_chs) out_chs = out_chs or in_chs transformer_dim = transformer_dim or make_divisible(bottle_ratio * in_chs) self.conv_kxk = layers.conv_norm_act( in_chs, in_chs, kernel_size=kernel_size, stride=stride, groups=groups, dilation=dilation[0]) self.conv_1x1 = nn.Conv2d(in_chs, transformer_dim, kernel_size=1, bias=False) self.transformer = nn.Sequential(*[ TransformerBlock( transformer_dim, mlp_ratio=mlp_ratio, num_heads=num_heads, qkv_bias=True, attn_drop=attn_drop, proj_drop=drop, drop_path=drop_path_rate, act_layer=layers.act, norm_layer=transformer_norm_layer, ) for _ in range(transformer_depth) ]) self.norm = transformer_norm_layer(transformer_dim) self.conv_proj = layers.conv_norm_act(transformer_dim, out_chs, kernel_size=1, stride=1) if no_fusion: self.conv_fusion = None else: self.conv_fusion = layers.conv_norm_act(in_chs + out_chs, out_chs, kernel_size=kernel_size, stride=1) self.patch_size = to_2tuple(patch_size) self.patch_area = self.patch_size[0] * self.patch_size[1] def forward(self, x: torch.Tensor) -> torch.Tensor: shortcut = x # Local representation x = self.conv_kxk(x) x = self.conv_1x1(x) # Unfold (feature map -> patches) patch_h, patch_w = self.patch_size B, C, H, W = x.shape 
new_h, new_w = math.ceil(H / patch_h) * patch_h, math.ceil(W / patch_w) * patch_w num_patch_h, num_patch_w = new_h // patch_h, new_w // patch_w # n_h, n_w num_patches = num_patch_h * num_patch_w # N interpolate = False if new_h != H or new_w != W: # Note: Padding can be done, but then it needs to be handled in attention function. x = F.interpolate(x, size=(new_h, new_w), mode="bilinear", align_corners=False) interpolate = True # [B, C, H, W] --> [B * C * n_h, n_w, p_h, p_w] x = x.reshape(B * C * num_patch_h, patch_h, num_patch_w, patch_w).transpose(1, 2) # [B * C * n_h, n_w, p_h, p_w] --> [BP, N, C] where P = p_h * p_w and N = n_h * n_w x = x.reshape(B, C, num_patches, self.patch_area).transpose(1, 3).reshape(B * self.patch_area, num_patches, -1) # Global representations x = self.transformer(x) x = self.norm(x) # Fold (patch -> feature map) # [B, P, N, C] --> [B*C*n_h, n_w, p_h, p_w] x = x.contiguous().view(B, self.patch_area, num_patches, -1) x = x.transpose(1, 3).reshape(B * C * num_patch_h, num_patch_w, patch_h, patch_w) # [B*C*n_h, n_w, p_h, p_w] --> [B*C*n_h, p_h, n_w, p_w] --> [B, C, H, W] x = x.transpose(1, 2).reshape(B, C, num_patch_h * patch_h, num_patch_w * patch_w) if interpolate: x = F.interpolate(x, size=(H, W), mode="bilinear", align_corners=False) x = self.conv_proj(x) if self.conv_fusion is not None: x = self.conv_fusion(torch.cat((shortcut, x), dim=1)) return x class LinearSelfAttention(nn.Module): """ This layer applies a self-attention with linear complexity, as described in `https://arxiv.org/abs/2206.02680` This layer can be used for self- as well as cross-attention. Args: embed_dim (int): :math:`C` from an expected input of size :math:`(N, C, H, W)` attn_drop (float): Dropout value for context scores. Default: 0.0 bias (bool): Use bias in learnable layers. Default: True Shape: - Input: :math:`(N, C, P, N)` where :math:`N` is the batch size, :math:`C` is the input channels, :math:`P` is the number of pixels in the patch, and :math:`N` is the number of patches - Output: same as the input .. note:: For MobileViTv2, we unfold the feature map [B, C, H, W] into [B, C, P, N] where P is the number of pixels in a patch and N is the number of patches. Because channel is the first dimension in this unfolded tensor, we use point-wise convolution (instead of a linear layer). This avoids a transpose operation (which may be expensive on resource-constrained devices) that may be required to convert the unfolded tensor from channel-first to channel-last format in case of a linear layer. 
""" def __init__( self, embed_dim: int, attn_drop: float = 0.0, proj_drop: float = 0.0, bias: bool = True, ) -> None: super().__init__() self.embed_dim = embed_dim self.qkv_proj = nn.Conv2d( in_channels=embed_dim, out_channels=1 + (2 * embed_dim), bias=bias, kernel_size=1, ) self.attn_drop = nn.Dropout(attn_drop) self.out_proj = nn.Conv2d( in_channels=embed_dim, out_channels=embed_dim, bias=bias, kernel_size=1, ) self.out_drop = nn.Dropout(proj_drop) def _forward_self_attn(self, x: torch.Tensor) -> torch.Tensor: # [B, C, P, N] --> [B, h + 2d, P, N] qkv = self.qkv_proj(x) # Project x into query, key and value # Query --> [B, 1, P, N] # value, key --> [B, d, P, N] query, key, value = qkv.split([1, self.embed_dim, self.embed_dim], dim=1) # apply softmax along N dimension context_scores = F.softmax(query, dim=-1) context_scores = self.attn_drop(context_scores) # Compute context vector # [B, d, P, N] x [B, 1, P, N] -> [B, d, P, N] --> [B, d, P, 1] context_vector = (key * context_scores).sum(dim=-1, keepdim=True) # combine context vector with values # [B, d, P, N] * [B, d, P, 1] --> [B, d, P, N] out = F.relu(value) * context_vector.expand_as(value) out = self.out_proj(out) out = self.out_drop(out) return out @torch.jit.ignore() def _forward_cross_attn(self, x: torch.Tensor, x_prev: Optional[torch.Tensor] = None) -> torch.Tensor: # x --> [B, C, P, N] # x_prev = [B, C, P, M] batch_size, in_dim, kv_patch_area, kv_num_patches = x.shape q_patch_area, q_num_patches = x.shape[-2:] assert ( kv_patch_area == q_patch_area ), "The number of pixels in a patch for query and key_value should be the same" # compute query, key, and value # [B, C, P, M] --> [B, 1 + d, P, M] qk = F.conv2d( x_prev, weight=self.qkv_proj.weight[:self.embed_dim + 1], bias=self.qkv_proj.bias[:self.embed_dim + 1], ) # [B, 1 + d, P, M] --> [B, 1, P, M], [B, d, P, M] query, key = qk.split([1, self.embed_dim], dim=1) # [B, C, P, N] --> [B, d, P, N] value = F.conv2d( x, weight=self.qkv_proj.weight[self.embed_dim + 1], bias=self.qkv_proj.bias[self.embed_dim + 1] if self.qkv_proj.bias is not None else None, ) # apply softmax along M dimension context_scores = F.softmax(query, dim=-1) context_scores = self.attn_drop(context_scores) # compute context vector # [B, d, P, M] * [B, 1, P, M] -> [B, d, P, M] --> [B, d, P, 1] context_vector = (key * context_scores).sum(dim=-1, keepdim=True) # combine context vector with values # [B, d, P, N] * [B, d, P, 1] --> [B, d, P, N] out = F.relu(value) * context_vector.expand_as(value) out = self.out_proj(out) out = self.out_drop(out) return out def forward(self, x: torch.Tensor, x_prev: Optional[torch.Tensor] = None) -> torch.Tensor: if x_prev is None: return self._forward_self_attn(x) else: return self._forward_cross_attn(x, x_prev=x_prev) class LinearTransformerBlock(nn.Module): """ This class defines the pre-norm transformer encoder with linear self-attention in `MobileViTv2 paper <>`_ Args: embed_dim (int): :math:`C_{in}` from an expected input of size :math:`(B, C_{in}, P, N)` mlp_ratio (float): Inner dimension ratio of the FFN relative to embed_dim drop (float): Dropout rate. Default: 0.0 attn_drop (float): Dropout rate for attention in multi-head attention. Default: 0.0 drop_path (float): Stochastic depth rate Default: 0.0 norm_layer (Callable): Normalization layer. 
Default: layer_norm_2d Shape: - Input: :math:`(B, C_{in}, P, N)` where :math:`B` is batch size, :math:`C_{in}` is input embedding dim, :math:`P` is number of pixels in a patch, and :math:`N` is number of patches, - Output: same shape as the input """ def __init__( self, embed_dim: int, mlp_ratio: float = 2.0, drop: float = 0.0, attn_drop: float = 0.0, drop_path: float = 0.0, act_layer=None, norm_layer=None, ) -> None: super().__init__() act_layer = act_layer or nn.SiLU norm_layer = norm_layer or GroupNorm1 self.norm1 = norm_layer(embed_dim) self.attn = LinearSelfAttention(embed_dim=embed_dim, attn_drop=attn_drop, proj_drop=drop) self.drop_path1 = DropPath(drop_path) self.norm2 = norm_layer(embed_dim) self.mlp = ConvMlp( in_features=embed_dim, hidden_features=int(embed_dim * mlp_ratio), act_layer=act_layer, drop=drop) self.drop_path2 = DropPath(drop_path) def forward(self, x: torch.Tensor, x_prev: Optional[torch.Tensor] = None) -> torch.Tensor: if x_prev is None: # self-attention x = x + self.drop_path1(self.attn(self.norm1(x))) else: # cross-attention res = x x = self.norm1(x) # norm x = self.attn(x, x_prev) # attn x = self.drop_path1(x) + res # residual # Feed forward network x = x + self.drop_path2(self.mlp(self.norm2(x))) return x @register_notrace_module class MobileVitV2Block(nn.Module): """ This class defines the `MobileViTv2 block <>`_ """ def __init__( self, in_chs: int, out_chs: Optional[int] = None, kernel_size: int = 3, bottle_ratio: float = 1.0, group_size: Optional[int] = 1, dilation: Tuple[int, int] = (1, 1), mlp_ratio: float = 2.0, transformer_dim: Optional[int] = None, transformer_depth: int = 2, patch_size: int = 8, attn_drop: float = 0., drop: int = 0., drop_path_rate: float = 0., layers: LayerFn = None, transformer_norm_layer: Callable = GroupNorm1, **kwargs, # eat unused args ): super(MobileVitV2Block, self).__init__() layers = layers or LayerFn() groups = num_groups(group_size, in_chs) out_chs = out_chs or in_chs transformer_dim = transformer_dim or make_divisible(bottle_ratio * in_chs) self.conv_kxk = layers.conv_norm_act( in_chs, in_chs, kernel_size=kernel_size, stride=1, groups=groups, dilation=dilation[0]) self.conv_1x1 = nn.Conv2d(in_chs, transformer_dim, kernel_size=1, bias=False) self.transformer = nn.Sequential(*[ LinearTransformerBlock( transformer_dim, mlp_ratio=mlp_ratio, attn_drop=attn_drop, drop=drop, drop_path=drop_path_rate, act_layer=layers.act, norm_layer=transformer_norm_layer ) for _ in range(transformer_depth) ]) self.norm = transformer_norm_layer(transformer_dim) self.conv_proj = layers.conv_norm_act(transformer_dim, out_chs, kernel_size=1, stride=1, apply_act=False) self.patch_size = to_2tuple(patch_size) self.patch_area = self.patch_size[0] * self.patch_size[1] self.coreml_exportable = is_exportable() def forward(self, x: torch.Tensor) -> torch.Tensor: B, C, H, W = x.shape patch_h, patch_w = self.patch_size new_h, new_w = math.ceil(H / patch_h) * patch_h, math.ceil(W / patch_w) * patch_w num_patch_h, num_patch_w = new_h // patch_h, new_w // patch_w # n_h, n_w num_patches = num_patch_h * num_patch_w # N if new_h != H or new_w != W: x = F.interpolate(x, size=(new_h, new_w), mode="bilinear", align_corners=True) # Local representation x = self.conv_kxk(x) x = self.conv_1x1(x) # Unfold (feature map -> patches), [B, C, H, W] -> [B, C, P, N] C = x.shape[1] if self.coreml_exportable: x = F.unfold(x, kernel_size=(patch_h, patch_w), stride=(patch_h, patch_w)) else: x = x.reshape(B, C, num_patch_h, patch_h, num_patch_w, patch_w).permute(0, 1, 3, 5, 2, 4) x 
= x.reshape(B, C, -1, num_patches) # Global representations x = self.transformer(x) x = self.norm(x) # Fold (patches -> feature map), [B, C, P, N] --> [B, C, H, W] if self.coreml_exportable: # adopted from https://github.com/apple/ml-cvnets/blob/main/cvnets/modules/mobilevit_block.py#L609-L624 x = x.reshape(B, C * patch_h * patch_w, num_patch_h, num_patch_w) x = F.pixel_shuffle(x, upscale_factor=patch_h) else: x = x.reshape(B, C, patch_h, patch_w, num_patch_h, num_patch_w).permute(0, 1, 4, 2, 5, 3) x = x.reshape(B, C, num_patch_h * patch_h, num_patch_w * patch_w) x = self.conv_proj(x) return x register_block('mobilevit', MobileVitBlock) register_block('mobilevit2', MobileVitV2Block) def _create_mobilevit(variant, cfg_variant=None, pretrained=False, **kwargs): return build_model_with_cfg( ByobNet, variant, pretrained, model_cfg=model_cfgs[variant] if not cfg_variant else model_cfgs[cfg_variant], feature_cfg=dict(flatten_sequential=True), **kwargs) def _create_mobilevit2(variant, cfg_variant=None, pretrained=False, **kwargs): return build_model_with_cfg( ByobNet, variant, pretrained, model_cfg=model_cfgs[variant] if not cfg_variant else model_cfgs[cfg_variant], feature_cfg=dict(flatten_sequential=True), **kwargs) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 256, 256), 'pool_size': (8, 8), 'crop_pct': 0.9, 'interpolation': 'bicubic', 'mean': (0., 0., 0.), 'std': (1., 1., 1.), 'first_conv': 'stem.conv', 'classifier': 'head.fc', 'fixed_input_size': False, **kwargs } default_cfgs = generate_default_cfgs({ 'mobilevit_xxs.cvnets_in1k': _cfg(hf_hub_id='timm/'), 'mobilevit_xs.cvnets_in1k': _cfg(hf_hub_id='timm/'), 'mobilevit_s.cvnets_in1k': _cfg(hf_hub_id='timm/'), 'mobilevitv2_050.cvnets_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_075.cvnets_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_100.cvnets_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_125.cvnets_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_150.cvnets_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_175.cvnets_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_200.cvnets_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_150.cvnets_in22k_ft_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_175.cvnets_in22k_ft_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_200.cvnets_in22k_ft_in1k': _cfg( hf_hub_id='timm/', crop_pct=0.888), 'mobilevitv2_150.cvnets_in22k_ft_in1k_384': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'mobilevitv2_175.cvnets_in22k_ft_in1k_384': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), 'mobilevitv2_200.cvnets_in22k_ft_in1k_384': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0), }) @register_model def mobilevit_xxs(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevit_xxs', pretrained=pretrained, **kwargs) @register_model def mobilevit_xs(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevit_xs', pretrained=pretrained, **kwargs) @register_model def mobilevit_s(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevit_s', pretrained=pretrained, **kwargs) @register_model def mobilevitv2_050(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevitv2_050', pretrained=pretrained, **kwargs) @register_model def mobilevitv2_075(pretrained=False, **kwargs) -> ByobNet: return 
_create_mobilevit('mobilevitv2_075', pretrained=pretrained, **kwargs) @register_model def mobilevitv2_100(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevitv2_100', pretrained=pretrained, **kwargs) @register_model def mobilevitv2_125(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevitv2_125', pretrained=pretrained, **kwargs) @register_model def mobilevitv2_150(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevitv2_150', pretrained=pretrained, **kwargs) @register_model def mobilevitv2_175(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevitv2_175', pretrained=pretrained, **kwargs) @register_model def mobilevitv2_200(pretrained=False, **kwargs) -> ByobNet: return _create_mobilevit('mobilevitv2_200', pretrained=pretrained, **kwargs) register_model_deprecations(__name__, { 'mobilevitv2_150_in22ft1k': 'mobilevitv2_150.cvnets_in22k_ft_in1k', 'mobilevitv2_175_in22ft1k': 'mobilevitv2_175.cvnets_in22k_ft_in1k', 'mobilevitv2_200_in22ft1k': 'mobilevitv2_200.cvnets_in22k_ft_in1k', 'mobilevitv2_150_384_in22ft1k': 'mobilevitv2_150.cvnets_in22k_ft_in1k_384', 'mobilevitv2_175_384_in22ft1k': 'mobilevitv2_175.cvnets_in22k_ft_in1k_384', 'mobilevitv2_200_384_in22ft1k': 'mobilevitv2_200.cvnets_in22k_ft_in1k_384', })
pytorch-image-models/timm/models/mobilevit.py/0
{ "file_path": "pytorch-image-models/timm/models/mobilevit.py", "repo_id": "pytorch-image-models", "token_count": 12812 }
"""PyTorch ResNet This started as a copy of https://github.com/pytorch/vision 'resnet.py' (BSD-3-Clause) with additional dropout and dynamic global avg/max pool. ResNeXt, SE-ResNeXt, SENet, and MXNet Gluon stem/downsample variants, tiered stems added by Ross Wightman Copyright 2019, Ross Wightman """ import math from functools import partial from typing import Any, Dict, List, Optional, Tuple, Type, Union import torch import torch.nn as nn import torch.nn.functional as F from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import DropBlock2d, DropPath, AvgPool2dSame, BlurPool2d, LayerType, create_attn, \ get_attn, get_act_layer, get_norm_layer, create_classifier, create_aa, to_ntuple from ._builder import build_model_with_cfg from ._features import feature_take_indices from ._manipulate import checkpoint_seq from ._registry import register_model, generate_default_cfgs, register_model_deprecations __all__ = ['ResNet', 'BasicBlock', 'Bottleneck'] # model_registry will add each entrypoint fn to this def get_padding(kernel_size: int, stride: int, dilation: int = 1) -> int: padding = ((stride - 1) + dilation * (kernel_size - 1)) // 2 return padding class BasicBlock(nn.Module): expansion = 1 def __init__( self, inplanes: int, planes: int, stride: int = 1, downsample: Optional[nn.Module] = None, cardinality: int = 1, base_width: int = 64, reduce_first: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, act_layer: Type[nn.Module] = nn.ReLU, norm_layer: Type[nn.Module] = nn.BatchNorm2d, attn_layer: Optional[Type[nn.Module]] = None, aa_layer: Optional[Type[nn.Module]] = None, drop_block: Optional[Type[nn.Module]] = None, drop_path: Optional[nn.Module] = None, ): """ Args: inplanes: Input channel dimensionality. planes: Used to determine output channel dimensionalities. stride: Stride used in convolution layers. downsample: Optional downsample layer for residual path. cardinality: Number of convolution groups. base_width: Base width used to determine output channel dimensionality. reduce_first: Reduction factor for first convolution output width of residual blocks. dilation: Dilation rate for convolution layers. first_dilation: Dilation rate for first convolution layer. act_layer: Activation layer. norm_layer: Normalization layer. attn_layer: Attention layer. aa_layer: Anti-aliasing layer. drop_block: Class for DropBlock layer. drop_path: Optional DropPath layer. 
""" super(BasicBlock, self).__init__() assert cardinality == 1, 'BasicBlock only supports cardinality of 1' assert base_width == 64, 'BasicBlock does not support changing base width' first_planes = planes // reduce_first outplanes = planes * self.expansion first_dilation = first_dilation or dilation use_aa = aa_layer is not None and (stride == 2 or first_dilation != dilation) self.conv1 = nn.Conv2d( inplanes, first_planes, kernel_size=3, stride=1 if use_aa else stride, padding=first_dilation, dilation=first_dilation, bias=False) self.bn1 = norm_layer(first_planes) self.drop_block = drop_block() if drop_block is not None else nn.Identity() self.act1 = act_layer(inplace=True) self.aa = create_aa(aa_layer, channels=first_planes, stride=stride, enable=use_aa) self.conv2 = nn.Conv2d( first_planes, outplanes, kernel_size=3, padding=dilation, dilation=dilation, bias=False) self.bn2 = norm_layer(outplanes) self.se = create_attn(attn_layer, outplanes) self.act2 = act_layer(inplace=True) self.downsample = downsample self.stride = stride self.dilation = dilation self.drop_path = drop_path def zero_init_last(self): if getattr(self.bn2, 'weight', None) is not None: nn.init.zeros_(self.bn2.weight) def forward(self, x: torch.Tensor) -> torch.Tensor: shortcut = x x = self.conv1(x) x = self.bn1(x) x = self.drop_block(x) x = self.act1(x) x = self.aa(x) x = self.conv2(x) x = self.bn2(x) if self.se is not None: x = self.se(x) if self.drop_path is not None: x = self.drop_path(x) if self.downsample is not None: shortcut = self.downsample(shortcut) x += shortcut x = self.act2(x) return x class Bottleneck(nn.Module): expansion = 4 def __init__( self, inplanes: int, planes: int, stride: int = 1, downsample: Optional[nn.Module] = None, cardinality: int = 1, base_width: int = 64, reduce_first: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, act_layer: Type[nn.Module] = nn.ReLU, norm_layer: Type[nn.Module] = nn.BatchNorm2d, attn_layer: Optional[Type[nn.Module]] = None, aa_layer: Optional[Type[nn.Module]] = None, drop_block: Optional[Type[nn.Module]] = None, drop_path: Optional[nn.Module] = None, ): """ Args: inplanes: Input channel dimensionality. planes: Used to determine output channel dimensionalities. stride: Stride used in convolution layers. downsample: Optional downsample layer for residual path. cardinality: Number of convolution groups. base_width: Base width used to determine output channel dimensionality. reduce_first: Reduction factor for first convolution output width of residual blocks. dilation: Dilation rate for convolution layers. first_dilation: Dilation rate for first convolution layer. act_layer: Activation layer. norm_layer: Normalization layer. attn_layer: Attention layer. aa_layer: Anti-aliasing layer. drop_block: Class for DropBlock layer. drop_path: Optional DropPath layer. 
""" super(Bottleneck, self).__init__() width = int(math.floor(planes * (base_width / 64)) * cardinality) first_planes = width // reduce_first outplanes = planes * self.expansion first_dilation = first_dilation or dilation use_aa = aa_layer is not None and (stride == 2 or first_dilation != dilation) self.conv1 = nn.Conv2d(inplanes, first_planes, kernel_size=1, bias=False) self.bn1 = norm_layer(first_planes) self.act1 = act_layer(inplace=True) self.conv2 = nn.Conv2d( first_planes, width, kernel_size=3, stride=1 if use_aa else stride, padding=first_dilation, dilation=first_dilation, groups=cardinality, bias=False) self.bn2 = norm_layer(width) self.drop_block = drop_block() if drop_block is not None else nn.Identity() self.act2 = act_layer(inplace=True) self.aa = create_aa(aa_layer, channels=width, stride=stride, enable=use_aa) self.conv3 = nn.Conv2d(width, outplanes, kernel_size=1, bias=False) self.bn3 = norm_layer(outplanes) self.se = create_attn(attn_layer, outplanes) self.act3 = act_layer(inplace=True) self.downsample = downsample self.stride = stride self.dilation = dilation self.drop_path = drop_path def zero_init_last(self): if getattr(self.bn3, 'weight', None) is not None: nn.init.zeros_(self.bn3.weight) def forward(self, x: torch.Tensor) -> torch.Tensor: shortcut = x x = self.conv1(x) x = self.bn1(x) x = self.act1(x) x = self.conv2(x) x = self.bn2(x) x = self.drop_block(x) x = self.act2(x) x = self.aa(x) x = self.conv3(x) x = self.bn3(x) if self.se is not None: x = self.se(x) if self.drop_path is not None: x = self.drop_path(x) if self.downsample is not None: shortcut = self.downsample(shortcut) x += shortcut x = self.act3(x) return x def downsample_conv( in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, norm_layer: Optional[Type[nn.Module]] = None, ) -> nn.Module: norm_layer = norm_layer or nn.BatchNorm2d kernel_size = 1 if stride == 1 and dilation == 1 else kernel_size first_dilation = (first_dilation or dilation) if kernel_size > 1 else 1 p = get_padding(kernel_size, stride, first_dilation) return nn.Sequential(*[ nn.Conv2d( in_channels, out_channels, kernel_size, stride=stride, padding=p, dilation=first_dilation, bias=False), norm_layer(out_channels) ]) def downsample_avg( in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, dilation: int = 1, first_dilation: Optional[int] = None, norm_layer: Optional[Type[nn.Module]] = None, ) -> nn.Module: norm_layer = norm_layer or nn.BatchNorm2d avg_stride = stride if dilation == 1 else 1 if stride == 1 and dilation == 1: pool = nn.Identity() else: avg_pool_fn = AvgPool2dSame if avg_stride == 1 and dilation > 1 else nn.AvgPool2d pool = avg_pool_fn(2, avg_stride, ceil_mode=True, count_include_pad=False) return nn.Sequential(*[ pool, nn.Conv2d(in_channels, out_channels, 1, stride=1, padding=0, bias=False), norm_layer(out_channels) ]) def drop_blocks(drop_prob: float = 0.): return [ None, None, partial(DropBlock2d, drop_prob=drop_prob, block_size=5, gamma_scale=0.25) if drop_prob else None, partial(DropBlock2d, drop_prob=drop_prob, block_size=3, gamma_scale=1.00) if drop_prob else None] def make_blocks( block_fns: Tuple[Union[BasicBlock, Bottleneck]], channels: Tuple[int, ...], block_repeats: Tuple[int, ...], inplanes: int, reduce_first: int = 1, output_stride: int = 32, down_kernel_size: int = 1, avg_down: bool = False, drop_block_rate: float = 0., drop_path_rate: float = 0., **kwargs, ) -> Tuple[List[Tuple[str, nn.Module]], List[Dict[str, Any]]]: 
stages = [] feature_info = [] net_num_blocks = sum(block_repeats) net_block_idx = 0 net_stride = 4 dilation = prev_dilation = 1 for stage_idx, (block_fn, planes, num_blocks, db) in enumerate(zip(block_fns, channels, block_repeats, drop_blocks(drop_block_rate))): stage_name = f'layer{stage_idx + 1}' # never liked this name, but weight compat requires it stride = 1 if stage_idx == 0 else 2 if net_stride >= output_stride: dilation *= stride stride = 1 else: net_stride *= stride downsample = None if stride != 1 or inplanes != planes * block_fn.expansion: down_kwargs = dict( in_channels=inplanes, out_channels=planes * block_fn.expansion, kernel_size=down_kernel_size, stride=stride, dilation=dilation, first_dilation=prev_dilation, norm_layer=kwargs.get('norm_layer'), ) downsample = downsample_avg(**down_kwargs) if avg_down else downsample_conv(**down_kwargs) block_kwargs = dict(reduce_first=reduce_first, dilation=dilation, drop_block=db, **kwargs) blocks = [] for block_idx in range(num_blocks): downsample = downsample if block_idx == 0 else None stride = stride if block_idx == 0 else 1 block_dpr = drop_path_rate * net_block_idx / (net_num_blocks - 1) # stochastic depth linear decay rule blocks.append(block_fn( inplanes, planes, stride, downsample, first_dilation=prev_dilation, drop_path=DropPath(block_dpr) if block_dpr > 0. else None, **block_kwargs, )) prev_dilation = dilation inplanes = planes * block_fn.expansion net_block_idx += 1 stages.append((stage_name, nn.Sequential(*blocks))) feature_info.append(dict(num_chs=inplanes, reduction=net_stride, module=stage_name)) return stages, feature_info class ResNet(nn.Module): """ResNet / ResNeXt / SE-ResNeXt / SE-Net This class implements all variants of ResNet, ResNeXt, SE-ResNeXt, and SENet that * have > 1 stride in the 3x3 conv layer of bottleneck * have conv-bn-act ordering This ResNet impl supports a number of stem and downsample options based on the v1c, v1d, v1e, and v1s variants included in the MXNet Gluon ResNetV1b model. The C and D variants are also discussed in the 'Bag of Tricks' paper: https://arxiv.org/pdf/1812.01187. The B variant is equivalent to torchvision default. 
ResNet variants (the same modifications can be used in SE/ResNeXt models as well): * normal, b - 7x7 stem, stem_width = 64, same as torchvision ResNet, NVIDIA ResNet 'v1.5', Gluon v1b * c - 3 layer deep 3x3 stem, stem_width = 32 (32, 32, 64) * d - 3 layer deep 3x3 stem, stem_width = 32 (32, 32, 64), average pool in downsample * e - 3 layer deep 3x3 stem, stem_width = 64 (64, 64, 128), average pool in downsample * s - 3 layer deep 3x3 stem, stem_width = 64 (64, 64, 128) * t - 3 layer deep 3x3 stem, stem width = 32 (24, 48, 64), average pool in downsample * tn - 3 layer deep 3x3 stem, stem width = 32 (24, 32, 64), average pool in downsample ResNeXt * normal - 7x7 stem, stem_width = 64, standard cardinality and base widths * same c,d, e, s variants as ResNet can be enabled SE-ResNeXt * normal - 7x7 stem, stem_width = 64 * same c, d, e, s variants as ResNet can be enabled SENet-154 - 3 layer deep 3x3 stem (same as v1c-v1s), stem_width = 64, cardinality=64, reduction by 2 on width of first bottleneck convolution, 3x3 downsample convs after first block """ def __init__( self, block: Union[BasicBlock, Bottleneck], layers: Tuple[int, ...], num_classes: int = 1000, in_chans: int = 3, output_stride: int = 32, global_pool: str = 'avg', cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = '', replace_stem_pool: bool = False, block_reduce_first: int = 1, down_kernel_size: int = 1, avg_down: bool = False, channels: Optional[Tuple[int, ...]] = (64, 128, 256, 512), act_layer: LayerType = nn.ReLU, norm_layer: LayerType = nn.BatchNorm2d, aa_layer: Optional[Type[nn.Module]] = None, drop_rate: float = 0.0, drop_path_rate: float = 0., drop_block_rate: float = 0., zero_init_last: bool = True, block_args: Optional[Dict[str, Any]] = None, ): """ Args: block (nn.Module): class for the residual block. Options are BasicBlock, Bottleneck. layers (List[int]) : number of layers in each block num_classes (int): number of classification classes (default 1000) in_chans (int): number of input (color) channels. (default 3) output_stride (int): output stride of the network, 32, 16, or 8. (default 32) global_pool (str): Global pooling type. One of 'avg', 'max', 'avgmax', 'catavgmax' (default 'avg') cardinality (int): number of convolution groups for 3x3 conv in Bottleneck. (default 1) base_width (int): bottleneck channels factor. `planes * base_width / 64 * cardinality` (default 64) stem_width (int): number of channels in stem convolutions (default 64) stem_type (str): The type of stem (default ''): * '', default - a single 7x7 conv with a width of stem_width * 'deep' - three 3x3 convolution layers of widths stem_width, stem_width, stem_width * 2 * 'deep_tiered' - three 3x3 conv layers of widths stem_width//4 * 3, stem_width, stem_width * 2 replace_stem_pool (bool): replace stem max-pooling layer with a 3x3 stride-2 convolution block_reduce_first (int): Reduction factor for first convolution output width of residual blocks, 1 for all archs except senets, where 2 (default 1) down_kernel_size (int): kernel size of residual block downsample path, 1x1 for most, 3x3 for senets (default: 1) avg_down (bool): use avg pooling for projection skip connection between stages/downsample (default False) act_layer (str, nn.Module): activation layer norm_layer (str, nn.Module): normalization layer aa_layer (nn.Module): anti-aliasing layer drop_rate (float): Dropout probability before classifier, for training (default 0.) drop_path_rate (float): Stochastic depth drop-path rate (default 0.) 
drop_block_rate (float): Drop block rate (default 0.) zero_init_last (bool): zero-init the last weight in residual path (usually last BN affine weight) block_args (dict): Extra kwargs to pass through to block module """ super(ResNet, self).__init__() block_args = block_args or dict() assert output_stride in (8, 16, 32) self.num_classes = num_classes self.drop_rate = drop_rate self.grad_checkpointing = False act_layer = get_act_layer(act_layer) norm_layer = get_norm_layer(norm_layer) # Stem deep_stem = 'deep' in stem_type inplanes = stem_width * 2 if deep_stem else 64 if deep_stem: stem_chs = (stem_width, stem_width) if 'tiered' in stem_type: stem_chs = (3 * (stem_width // 4), stem_width) self.conv1 = nn.Sequential(*[ nn.Conv2d(in_chans, stem_chs[0], 3, stride=2, padding=1, bias=False), norm_layer(stem_chs[0]), act_layer(inplace=True), nn.Conv2d(stem_chs[0], stem_chs[1], 3, stride=1, padding=1, bias=False), norm_layer(stem_chs[1]), act_layer(inplace=True), nn.Conv2d(stem_chs[1], inplanes, 3, stride=1, padding=1, bias=False)]) else: self.conv1 = nn.Conv2d(in_chans, inplanes, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = norm_layer(inplanes) self.act1 = act_layer(inplace=True) self.feature_info = [dict(num_chs=inplanes, reduction=2, module='act1')] # Stem pooling. The name 'maxpool' remains for weight compatibility. if replace_stem_pool: self.maxpool = nn.Sequential(*filter(None, [ nn.Conv2d(inplanes, inplanes, 3, stride=1 if aa_layer else 2, padding=1, bias=False), create_aa(aa_layer, channels=inplanes, stride=2) if aa_layer is not None else None, norm_layer(inplanes), act_layer(inplace=True), ])) else: if aa_layer is not None: if issubclass(aa_layer, nn.AvgPool2d): self.maxpool = aa_layer(2) else: self.maxpool = nn.Sequential(*[ nn.MaxPool2d(kernel_size=3, stride=1, padding=1), aa_layer(channels=inplanes, stride=2)]) else: self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) # Feature Blocks block_fns = to_ntuple(len(channels))(block) stage_modules, stage_feature_info = make_blocks( block_fns, channels, layers, inplanes, cardinality=cardinality, base_width=base_width, output_stride=output_stride, reduce_first=block_reduce_first, avg_down=avg_down, down_kernel_size=down_kernel_size, act_layer=act_layer, norm_layer=norm_layer, aa_layer=aa_layer, drop_block_rate=drop_block_rate, drop_path_rate=drop_path_rate, **block_args, ) for stage in stage_modules: self.add_module(*stage) # layer1, layer2, etc self.feature_info.extend(stage_feature_info) # Head (Pooling and Classifier) self.num_features = self.head_hidden_size = channels[-1] * block_fns[-1].expansion self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) self.init_weights(zero_init_last=zero_init_last) @torch.jit.ignore def init_weights(self, zero_init_last: bool = True): for n, m in self.named_modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') if zero_init_last: for m in self.modules(): if hasattr(m, 'zero_init_last'): m.zero_init_last() @torch.jit.ignore def group_matcher(self, coarse: bool = False): matcher = dict(stem=r'^conv1|bn1|maxpool', blocks=r'^layer(\d+)' if coarse else r'^layer(\d+)\.(\d+)') return matcher @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True): self.grad_checkpointing = enable @torch.jit.ignore def get_classifier(self, name_only: bool = False): return 'fc' if name_only else self.fc def reset_classifier(self, num_classes: int, global_pool: str = 'avg'): 
self.num_classes = num_classes self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) def forward_intermediates( self, x: torch.Tensor, indices: Optional[Union[int, List[int]]] = None, norm: bool = False, stop_early: bool = False, output_fmt: str = 'NCHW', intermediates_only: bool = False, ) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]: """ Forward features that returns intermediates. Args: x: Input image tensor indices: Take last n blocks if int, all if None, select matching indices if sequence norm: Apply norm layer to compatible intermediates stop_early: Stop iterating over blocks when last desired intermediate hit output_fmt: Shape of intermediate feature outputs intermediates_only: Only return intermediate features Returns: """ assert output_fmt in ('NCHW',), 'Output shape must be NCHW.' intermediates = [] take_indices, max_index = feature_take_indices(5, indices) # forward pass feat_idx = 0 x = self.conv1(x) x = self.bn1(x) x = self.act1(x) if feat_idx in take_indices: intermediates.append(x) x = self.maxpool(x) layer_names = ('layer1', 'layer2', 'layer3', 'layer4') if stop_early: layer_names = layer_names[:max_index] for n in layer_names: feat_idx += 1 x = getattr(self, n)(x) # won't work with torchscript, but keeps code reasonable, FML if feat_idx in take_indices: intermediates.append(x) if intermediates_only: return intermediates return x, intermediates def prune_intermediate_layers( self, indices: Union[int, List[int]] = 1, prune_norm: bool = False, prune_head: bool = True, ): """ Prune layers not required for specified intermediates. """ take_indices, max_index = feature_take_indices(5, indices) layer_names = ('layer1', 'layer2', 'layer3', 'layer4') layer_names = layer_names[max_index:] for n in layer_names: setattr(self, n, nn.Identity()) if prune_head: self.reset_classifier(0, '') return take_indices def forward_features(self, x: torch.Tensor) -> torch.Tensor: x = self.conv1(x) x = self.bn1(x) x = self.act1(x) x = self.maxpool(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq([self.layer1, self.layer2, self.layer3, self.layer4], x, flatten=True) else: x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) return x def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor: x = self.global_pool(x) if self.drop_rate: x = F.dropout(x, p=float(self.drop_rate), training=self.training) return x if pre_logits else self.fc(x) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.forward_features(x) x = self.forward_head(x) return x def _create_resnet(variant, pretrained: bool = False, **kwargs) -> ResNet: return build_model_with_cfg(ResNet, variant, pretrained, **kwargs) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.875, 'interpolation': 'bilinear', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'conv1', 'classifier': 'fc', **kwargs } def _tcfg(url='', **kwargs): return _cfg(url=url, **dict({'interpolation': 'bicubic'}, **kwargs)) def _ttcfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'test_input_size': (3, 288, 288), 'test_crop_pct': 0.95, 'origin_url': 'https://github.com/huggingface/pytorch-image-models', }, **kwargs)) def _rcfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'crop_pct': 0.95, 'test_input_size': (3, 288, 288), 'test_crop_pct': 1.0, 
'origin_url': 'https://github.com/huggingface/pytorch-image-models', 'paper_ids': 'arXiv:2110.00476' }, **kwargs)) def _r3cfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'input_size': (3, 160, 160), 'pool_size': (5, 5), 'crop_pct': 0.95, 'test_input_size': (3, 224, 224), 'test_crop_pct': 0.95, 'origin_url': 'https://github.com/huggingface/pytorch-image-models', 'paper_ids': 'arXiv:2110.00476', }, **kwargs)) def _gcfg(url='', **kwargs): return _cfg(url=url, **dict({ 'interpolation': 'bicubic', 'origin_url': 'https://cv.gluon.ai/model_zoo/classification.html', }, **kwargs)) default_cfgs = generate_default_cfgs({ # ResNet and Wide ResNet trained w/ timm (RSB paper and others) 'resnet10t.c3_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet10t_176_c3-f3215ab1.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_crop_pct=0.95, test_input_size=(3, 224, 224), first_conv='conv1.0'), 'resnet14t.c3_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet14t_176_c3-c4ed2c37.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_crop_pct=0.95, test_input_size=(3, 224, 224), first_conv='conv1.0'), 'resnet18.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet18_a1_0-d63eafa0.pth'), 'resnet18.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet18_a2_0-b61bd467.pth'), 'resnet18.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet18_a3_0-40c531c8.pth'), 'resnet18d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet18d_ra2-48a79e06.pth', first_conv='conv1.0'), 'resnet18d.ra4_e3600_r224_in1k': _rcfg( hf_hub_id='timm/', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=0.9, first_conv='conv1.0'), 'resnet34.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet34_a1_0-46f8f793.pth'), 'resnet34.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet34_a2_0-82d47d71.pth'), 'resnet34.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet34_a3_0-a20cabb6.pth', crop_pct=0.95), 'resnet34.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34-43635321.pth'), 'resnet34.ra4_e3600_r224_in1k': _rcfg( hf_hub_id='timm/', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=0.9), 'resnet34d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34d_ra2-f8dcfcaf.pth', first_conv='conv1.0'), 'resnet26.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26-9aa10e23.pth'), 'resnet26d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26d-69e92c46.pth', first_conv='conv1.0'), 'resnet26t.ra2_in1k': _ttcfg( hf_hub_id='timm/', 
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/resnet26t_256_ra2-6f6fa748.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94, test_input_size=(3, 320, 320), test_crop_pct=1.0), 'resnet50.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a1_0-14fe96d1.pth'), 'resnet50.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a1h2_176-001a1197.pth', input_size=(3, 176, 176), pool_size=(6, 6), crop_pct=0.9, test_input_size=(3, 224, 224), test_crop_pct=1.0), 'resnet50.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a2_0-a2746f79.pth'), 'resnet50.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_a3_0-59cae1ef.pth'), 'resnet50.b1k_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_b1k-532a802a.pth'), 'resnet50.b2k_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_b2k-1ba180c1.pth'), 'resnet50.c1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_c1-5ba5e060.pth'), 'resnet50.c2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_c2-d01e05b2.pth'), 'resnet50.d_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_d-f39db8af.pth'), 'resnet50.ram_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnet50_ram-a26f946b.pth'), 'resnet50.am_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnet50_am-6c502b37.pth'), 'resnet50.ra_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnet50_ra-85ebb6e5.pth'), 'resnet50.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/rw_resnet50-86acaeed.pth'), 'resnet50d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth', first_conv='conv1.0'), 'resnet50d.ra4_e3600_r224_in1k': _rcfg( hf_hub_id='timm/', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=0.95, test_input_size=(3, 288, 288), test_crop_pct=1.0, first_conv='conv1.0'), 'resnet50d.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50d_a1_0-e20cff14.pth', first_conv='conv1.0'), 'resnet50d.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50d_a2_0-a3adc64d.pth', first_conv='conv1.0'), 'resnet50d.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50d_a3_0-403fdfad.pth', first_conv='conv1.0'), 'resnet50t.untrained': _ttcfg(first_conv='conv1.0'), 'resnet101.a1h_in1k': 
_rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a1h-36d3f2aa.pth'), 'resnet101.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a1_0-cdcb52a9.pth'), 'resnet101.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a2_0-6edb36c7.pth'), 'resnet101.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet101_a3_0-1db14157.pth'), 'resnet101d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet101d_ra2-2803ffab.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320)), 'resnet152.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a1h-dc400468.pth'), 'resnet152.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a1_0-2eee8a7a.pth'), 'resnet152.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a2_0-b4c6978f.pth'), 'resnet152.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet152_a3_0-134d4688.pth'), 'resnet152d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet152d_ra2-5cac0439.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320)), 'resnet200.untrained': _ttcfg(), 'resnet200d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet200d_ra2-bdba9bf9.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320)), 'wide_resnet50_2.racm_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/wide_resnet50_racm-8234f177.pth'), # torchvision resnet weights 'resnet18.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet18-f37072fd.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet34.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet34-b627a593.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet50.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet50-0676ba61.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet50.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet50-11ad3fa6.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet101.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet101-63fe2227.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet101.tv2_in1k': _cfg( hf_hub_id='timm/', 
url='https://download.pytorch.org/models/resnet101-cd907fc2.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet152.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet152-394f9c45.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnet152.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnet152-f82ba261.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet50_2.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet50_2.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet50_2-9ba9bcbe.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet101_2.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'wide_resnet101_2.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/wide_resnet101_2-d733dc28.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), # ResNets w/ alternative norm layers 'resnet50_gn.a1h_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnet50_gn_a1h2-8fe6c4d0.pth', crop_pct=0.94), # ResNeXt trained in timm (RSB paper and others) 'resnext50_32x4d.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a1h-0146ab0a.pth'), 'resnext50_32x4d.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a1_0-b5a91a1d.pth'), 'resnext50_32x4d.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a2_0-efc76add.pth'), 'resnext50_32x4d.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/resnext50_32x4d_a3_0-3e450271.pth'), 'resnext50_32x4d.ra_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-weights/resnext50_32x4d_ra-d733960d.pth'), 'resnext50d_32x4d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnext50d_32x4d-103e99f8.pth', first_conv='conv1.0'), 'resnext101_32x4d.untrained': _ttcfg(), 'resnext101_64x4d.c1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/resnext101_64x4d_c-0d0e0cc0.pth'), # torchvision ResNeXt weights 'resnext50_32x4d.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext101_32x8d.tv_in1k': _cfg( 
hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext101_64x4d.tv_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext101_64x4d-173b62eb.pth', license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext50_32x4d.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext50_32x4d-1a0047aa.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), 'resnext101_32x8d.tv2_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/resnext101_32x8d-110c445d.pth', input_size=(3, 176, 176), pool_size=(6, 6), test_input_size=(3, 224, 224), test_crop_pct=0.965, license='bsd-3-clause', origin_url='https://github.com/pytorch/vision'), # ResNeXt models - Weakly Supervised Pretraining on Instagram Hashtags # from https://github.com/facebookresearch/WSL-Images # Please note the CC-BY-NC 4.0 license on these weights, non-commercial use only. 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), 'resnext101_32x16d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x16-c6f796b0.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), 'resnext101_32x32d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x32-e4b90b00.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), 'resnext101_32x48d.fb_wsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://download.pytorch.org/models/ig_resnext101_32x48-3e41cc8a.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/WSL-Images'), # Semi-Supervised ResNe*t models from https://github.com/facebookresearch/semi-supervised-ImageNet1K-models # Please note the CC-BY-NC 4.0 license on theses weights, non-commercial use only. 
'resnet18.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnet18-d92f0530.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnet50.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnet50-08389792.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext50_32x4-ddb3e555.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x4-dc43570a.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x8-2cfe2f8b.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_supervised_resnext101_32x16-15fffa57.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), # Semi-Weakly Supervised ResNe*t models from https://github.com/facebookresearch/semi-supervised-ImageNet1K-models # Please note the CC-BY-NC 4.0 license on theses weights, non-commercial use only. 
'resnet18.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnet18-118f1556.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnet50.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnet50-16a12f1b.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext50_32x4d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext50_32x4-72679e44.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x4d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext101_32x4-3f87e46b.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x8d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext101_32x8-b4712904.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), 'resnext101_32x16d.fb_swsl_ig1b_ft_in1k': _cfg( hf_hub_id='timm/', url='https://dl.fbaipublicfiles.com/semiweaksupervision/model_files/semi_weakly_supervised_resnext101_32x16-f3559a9c.pth', license='cc-by-nc-4.0', origin_url='https://github.com/facebookresearch/semi-supervised-ImageNet1K-models'), # Efficient Channel Attention ResNets 'ecaresnet26t.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecaresnet26t_ra2-46609757.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), test_crop_pct=0.95, test_input_size=(3, 320, 320)), 'ecaresnetlight.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnetlight-75a9c627.pth', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet50d.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet50d-93c81e3b.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet50d_pruned.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet50d_p-e4fa23c2.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet50t.ra2_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecaresnet50t_ra2-f7ac63c4.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), test_crop_pct=0.95, test_input_size=(3, 320, 320)), 'ecaresnet50t.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/ecaresnet50t_a1_0-99bd76a8.pth', first_conv='conv1.0'), 'ecaresnet50t.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/ecaresnet50t_a2_0-b1c7b745.pth', first_conv='conv1.0'), 'ecaresnet50t.a3_in1k': _r3cfg( hf_hub_id='timm/', 
url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/ecaresnet50t_a3_0-8cc311f1.pth', first_conv='conv1.0'), 'ecaresnet101d.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet101d-153dad65.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet101d_pruned.miil_in1k': _tcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/ecaresnet101d_p-9e74cb91.pth', first_conv='conv1.0', test_crop_pct=0.95, test_input_size=(3, 288, 288)), 'ecaresnet200d.untrained': _ttcfg( first_conv='conv1.0', input_size=(3, 256, 256), crop_pct=0.95, pool_size=(8, 8)), 'ecaresnet269d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecaresnet269d_320_ra2-7baa55cb.pth', first_conv='conv1.0', input_size=(3, 320, 320), pool_size=(10, 10), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 352, 352)), # Efficient Channel Attention ResNeXts 'ecaresnext26t_32x4d.untrained': _tcfg(first_conv='conv1.0'), 'ecaresnext50t_32x4d.untrained': _tcfg(first_conv='conv1.0'), # Squeeze-Excitation ResNets, to eventually replace the models in senet.py 'seresnet18.untrained': _ttcfg(), 'seresnet34.untrained': _ttcfg(), 'seresnet50.a1_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/seresnet50_a1_0-ffa00869.pth', crop_pct=0.95), 'seresnet50.a2_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/seresnet50_a2_0-850de0d9.pth', crop_pct=0.95), 'seresnet50.a3_in1k': _r3cfg( hf_hub_id='timm/', url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-rsb-weights/seresnet50_a3_0-317ecd56.pth', crop_pct=0.95), 'seresnet50.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet50_ra_224-8efdb4bb.pth'), 'seresnet50t.untrained': _ttcfg( first_conv='conv1.0'), 'seresnet101.untrained': _ttcfg(), 'seresnet152.untrained': _ttcfg(), 'seresnet152d.ra2_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet152d_ra2-04464dd2.pth', first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 320, 320) ), 'seresnet200d.untrained': _ttcfg( first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8)), 'seresnet269d.untrained': _ttcfg( first_conv='conv1.0', input_size=(3, 256, 256), pool_size=(8, 8)), # Squeeze-Excitation ResNeXts, to eventually replace the models in senet.py 'seresnext26d_32x4d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext26d_32x4d-80fa48a3.pth', first_conv='conv1.0'), 'seresnext26t_32x4d.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext26tn_32x4d-569cb627.pth', first_conv='conv1.0'), 'seresnext50_32x4d.racm_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext50_32x4d_racm-a304a460.pth'), 'seresnext101_32x4d.untrained': _ttcfg(), 'seresnext101_32x8d.ah_in1k': _rcfg( hf_hub_id='timm/', 
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/seresnext101_32x8d_ah-e6bc4c0a.pth'), 'seresnext101d_32x8d.ah_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/seresnext101d_32x8d_ah-191d7b94.pth', first_conv='conv1.0'), # ResNets with anti-aliasing / blur pool 'resnetaa50d.sw_in12k_ft_in1k': _ttcfg( hf_hub_id='timm/', first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetaa101d.sw_in12k_ft_in1k': _ttcfg( hf_hub_id='timm/', first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'seresnextaa101d_32x8d.sw_in12k_ft_in1k_288': _ttcfg( hf_hub_id='timm/', crop_pct=0.95, input_size=(3, 288, 288), pool_size=(9, 9), test_input_size=(3, 320, 320), test_crop_pct=1.0, first_conv='conv1.0'), 'seresnextaa101d_32x8d.sw_in12k_ft_in1k': _ttcfg( hf_hub_id='timm/', first_conv='conv1.0', test_crop_pct=1.0), 'seresnextaa201d_32x8d.sw_in12k_ft_in1k_384': _cfg( hf_hub_id='timm/', interpolation='bicubic', first_conv='conv1.0', pool_size=(12, 12), input_size=(3, 384, 384), crop_pct=1.0), 'seresnextaa201d_32x8d.sw_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, interpolation='bicubic', first_conv='conv1.0', crop_pct=0.95, input_size=(3, 320, 320), pool_size=(10, 10), test_input_size=(3, 384, 384), test_crop_pct=1.0), 'resnetaa50d.sw_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetaa50d.d_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetaa101d.sw_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'seresnextaa101d_32x8d.sw_in12k': _ttcfg( hf_hub_id='timm/', num_classes=11821, first_conv='conv1.0', crop_pct=0.95, test_crop_pct=1.0), 'resnetblur18.untrained': _ttcfg(), 'resnetblur50.bt_in1k': _ttcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnetblur50-84f4748f.pth'), 'resnetblur50d.untrained': _ttcfg(first_conv='conv1.0'), 'resnetblur101d.untrained': _ttcfg(first_conv='conv1.0'), 'resnetaa34d.untrained': _ttcfg(first_conv='conv1.0'), 'resnetaa50.a1h_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rsb-weights/resnetaa50_a1h-4cf422b3.pth'), 'seresnetaa50d.untrained': _ttcfg(first_conv='conv1.0'), 'seresnextaa101d_32x8d.ah_in1k': _rcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/seresnextaa101d_32x8d_ah-83c8ae12.pth', first_conv='conv1.0'), # ResNet-RS models 'resnetrs50.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs50_ema-6b53758b.pth', input_size=(3, 160, 160), pool_size=(5, 5), crop_pct=0.91, test_input_size=(3, 224, 224), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs101.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs101_i192_ema-1509bbf6.pth', input_size=(3, 192, 192), pool_size=(6, 6), crop_pct=0.94, test_input_size=(3, 288, 288), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs152.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs152_i256_ema-a9aff7f9.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, 
test_input_size=(3, 320, 320), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs200.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/resnetrs200_c-6b698b88.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, test_input_size=(3, 320, 320), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs270.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs270_ema-b40e674c.pth', input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=1.0, test_input_size=(3, 352, 352), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs350.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs350_i256_ema-5a1aa8f1.pth', input_size=(3, 288, 288), pool_size=(9, 9), crop_pct=1.0, test_input_size=(3, 384, 384), interpolation='bicubic', first_conv='conv1.0'), 'resnetrs420.tf_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rs-weights/resnetrs420_ema-972dee69.pth', input_size=(3, 320, 320), pool_size=(10, 10), crop_pct=1.0, test_input_size=(3, 416, 416), interpolation='bicubic', first_conv='conv1.0'), # gluon resnet weights 'resnet18.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet18_v1b-0757602b.pth'), 'resnet34.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet34_v1b-c6d82d59.pth'), 'resnet50.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1b-0ebe02e2.pth'), 'resnet101.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1b-3b017079.pth'), 'resnet152.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1b-c1edb0dd.pth'), 'resnet50c.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1c-48092f55.pth', first_conv='conv1.0'), 'resnet101c.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1c-1f26822a.pth', first_conv='conv1.0'), 'resnet152c.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1c-a3bb0b98.pth', first_conv='conv1.0'), 'resnet50d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1d-818a1b1b.pth', first_conv='conv1.0'), 'resnet101d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1d-0f9c8644.pth', first_conv='conv1.0'), 'resnet152d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1d-bd354e12.pth', first_conv='conv1.0'), 'resnet50s.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1s-1762acc0.pth', 
first_conv='conv1.0'), 'resnet101s.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1s-60fe0cc1.pth', first_conv='conv1.0'), 'resnet152s.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1s-dcc41b81.pth', first_conv='conv1.0'), 'resnext50_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext50_32x4d-e6a097c1.pth'), 'resnext101_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext101_32x4d-b253c8c4.pth'), 'resnext101_64x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext101_64x4d-f9a8e184.pth'), 'seresnext50_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext50_32x4d-90cf2d6e.pth'), 'seresnext101_32x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext101_32x4d-cf52900d.pth'), 'seresnext101_64x4d.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext101_64x4d-f9926f93.pth'), 'senet154.gluon_in1k': _gcfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_senet154-70a1a3c0.pth', first_conv='conv1.0'), 'test_resnet.r160_in1k': _cfg( hf_hub_id='timm/', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=0.95, input_size=(3, 160, 160), pool_size=(5, 5), first_conv='conv1.0'), }) @register_model def resnet10t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-10-T model. """ model_args = dict(block=BasicBlock, layers=(1, 1, 1, 1), stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet10t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet14t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-14-T model. """ model_args = dict(block=Bottleneck, layers=(1, 1, 1, 1), stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet14t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet18(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-18 model. """ model_args = dict(block=BasicBlock, layers=(2, 2, 2, 2)) return _create_resnet('resnet18', pretrained, **dict(model_args, **kwargs)) @register_model def resnet18d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-18-D model. """ model_args = dict(block=BasicBlock, layers=(2, 2, 2, 2), stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet18d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet34(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-34 model. """ model_args = dict(block=BasicBlock, layers=(3, 4, 6, 3)) return _create_resnet('resnet34', pretrained, **dict(model_args, **kwargs)) @register_model def resnet34d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-34-D model. 
""" model_args = dict(block=BasicBlock, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet34d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet26(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-26 model. """ model_args = dict(block=Bottleneck, layers=(2, 2, 2, 2)) return _create_resnet('resnet26', pretrained, **dict(model_args, **kwargs)) @register_model def resnet26t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-26-T model. """ model_args = dict(block=Bottleneck, layers=(2, 2, 2, 2), stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet26t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet26d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-26-D model. """ model_args = dict(block=Bottleneck, layers=(2, 2, 2, 2), stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet26d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50 model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3)) return _create_resnet('resnet50', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50c(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-C model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep') return _create_resnet('resnet50c', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet50d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50s(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-S model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), stem_width=64, stem_type='deep') return _create_resnet('resnet50s', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-T model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep_tiered', avg_down=True) return _create_resnet('resnet50t', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101 model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3)) return _create_resnet('resnet101', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101c(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-C model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), stem_width=32, stem_type='deep') return _create_resnet('resnet101c', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet101d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet101s(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-S model. 
""" model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), stem_width=64, stem_type='deep') return _create_resnet('resnet101s', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152 model. """ model_args = dict(block=Bottleneck, layers=(3, 8, 36, 3)) return _create_resnet('resnet152', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152c(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152-C model. """ model_args = dict(block=Bottleneck, layers=(3, 8, 36, 3), stem_width=32, stem_type='deep') return _create_resnet('resnet152c', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152-D model. """ model_args = dict(block=Bottleneck, layers=(3, 8, 36, 3), stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet152d', pretrained, **dict(model_args, **kwargs)) @register_model def resnet152s(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-152-S model. """ model_args = dict(block=Bottleneck, layers=(3, 8, 36, 3), stem_width=64, stem_type='deep') return _create_resnet('resnet152s', pretrained, **dict(model_args, **kwargs)) @register_model def resnet200(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200 model. """ model_args = dict(block=Bottleneck, layers=(3, 24, 36, 3)) return _create_resnet('resnet200', pretrained, **dict(model_args, **kwargs)) @register_model def resnet200d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200-D model. """ model_args = dict(block=Bottleneck, layers=(3, 24, 36, 3), stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnet200d', pretrained, **dict(model_args, **kwargs)) @register_model def wide_resnet50_2(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a Wide ResNet-50-2 model. The model is the same as ResNet except for the bottleneck number of channels which is twice larger in every block. The number of channels in outer 1x1 convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 channels, and in Wide ResNet-50-2 has 2048-1024-2048. """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), base_width=128) return _create_resnet('wide_resnet50_2', pretrained, **dict(model_args, **kwargs)) @register_model def wide_resnet101_2(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a Wide ResNet-101-2 model. The model is the same as ResNet except for the bottleneck number of channels which is twice larger in every block. The number of channels in outer 1x1 convolutions is the same. """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), base_width=128) return _create_resnet('wide_resnet101_2', pretrained, **dict(model_args, **kwargs)) @register_model def resnet50_gn(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50 model w/ GroupNorm """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), norm_layer='groupnorm') return _create_resnet('resnet50_gn', pretrained, **dict(model_args, **kwargs)) @register_model def resnext50_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt50-32x4d model. 
""" model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), cardinality=32, base_width=4) return _create_resnet('resnext50_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext50d_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt50d-32x4d model. ResNext50 w/ deep stem & avg pool downsample """ model_args = dict( block=Bottleneck, layers=(3, 4, 6, 3), cardinality=32, base_width=4, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnext50d_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x4d model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=4) return _create_resnet('resnext101_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x8d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x8d model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=8) return _create_resnet('resnext101_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x16d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x16d model """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=16) return _create_resnet('resnext101_32x16d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_32x32d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt-101 32x32d model """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=32) return _create_resnet('resnext101_32x32d', pretrained, **dict(model_args, **kwargs)) @register_model def resnext101_64x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNeXt101-64x4d model. """ model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), cardinality=64, base_width=4) return _create_resnet('resnext101_64x4d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet26t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNeXt-26-T model. This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem and ECA attn. """ model_args = dict( block=Bottleneck, layers=(2, 2, 2, 2), stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet26t', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model with eca. """ model_args = dict( block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet50d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet50d_pruned(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model pruned with eca. The pruning has been obtained using https://arxiv.org/pdf/2002.08258.pdf """ model_args = dict( block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet50d_pruned', pretrained, pruned=True, **dict(model_args, **kwargs)) @register_model def ecaresnet50t(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNet-50-T model. 
Like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem and ECA attn. """ model_args = dict( block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet50t', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnetlight(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D light model with eca. """ model_args = dict( block=Bottleneck, layers=(1, 1, 11, 3), stem_width=32, avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnetlight', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet101d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model with eca. """ model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet101d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet101d_pruned(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model pruned with eca. The pruning has been obtained using https://arxiv.org/pdf/2002.08258.pdf """ model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet101d_pruned', pretrained, pruned=True, **dict(model_args, **kwargs)) @register_model def ecaresnet200d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200-D model with ECA. """ model_args = dict( block=Bottleneck, layers=(3, 24, 36, 3), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet200d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnet269d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-269-D model with ECA. """ model_args = dict( block=Bottleneck, layers=(3, 30, 48, 8), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnet269d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnext26t_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNeXt-26-T model. This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem. This model replaces SE module with the ECA module """ model_args = dict( block=Bottleneck, layers=(2, 2, 2, 2), cardinality=32, base_width=4, stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnext26t_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def ecaresnext50t_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs an ECA-ResNeXt-50-T model. This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem. 
This model replaces SE module with the ECA module """ model_args = dict( block=Bottleneck, layers=(2, 2, 2, 2), cardinality=32, base_width=4, stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='eca')) return _create_resnet('ecaresnext50t_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet18(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=BasicBlock, layers=(2, 2, 2, 2), block_args=dict(attn_layer='se')) return _create_resnet('seresnet18', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet34(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=BasicBlock, layers=(3, 4, 6, 3), block_args=dict(attn_layer='se')) return _create_resnet('seresnet34', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet50(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), block_args=dict(attn_layer='se')) return _create_resnet('seresnet50', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet50t(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet50t', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet101(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=Bottleneck, layers=(3, 4, 23, 3), block_args=dict(attn_layer='se')) return _create_resnet('seresnet101', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet152(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict(block=Bottleneck, layers=(3, 8, 36, 3), block_args=dict(attn_layer='se')) return _create_resnet('seresnet152', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet152d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 8, 36, 3), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet152d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet200d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-200-D model with SE attn. """ model_args = dict( block=Bottleneck, layers=(3, 24, 36, 3), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet200d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnet269d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-269-D model with SE attn. """ model_args = dict( block=Bottleneck, layers=(3, 30, 48, 8), stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnet269d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext26d_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a SE-ResNeXt-26-D model.` This is technically a 28 layer ResNet, using the 'D' modifier from Gluon / bag-of-tricks for combination of deep stem and avg_pool in downsample. """ model_args = dict( block=Bottleneck, layers=(2, 2, 2, 2), cardinality=32, base_width=4, stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnext26d_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext26t_32x4d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a SE-ResNet-26-T model. 
This is technically a 28 layer ResNet, like a 'D' bag-of-tricks model but with tiered 24, 32, 64 channels in the deep stem. """ model_args = dict( block=Bottleneck, layers=(2, 2, 2, 2), cardinality=32, base_width=4, stem_width=32, stem_type='deep_tiered', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnext26t_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext50_32x4d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 4, 6, 3), cardinality=32, base_width=4, block_args=dict(attn_layer='se')) return _create_resnet('seresnext50_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101_32x4d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=4, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101_32x4d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101_32x8d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=8, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101d_32x8d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=8, stem_width=32, stem_type='deep', avg_down=True, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101d_32x8d', pretrained, **dict(model_args, **kwargs)) @register_model def seresnext101_64x4d(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), cardinality=64, base_width=4, block_args=dict(attn_layer='se')) return _create_resnet('seresnext101_64x4d', pretrained, **dict(model_args, **kwargs)) @register_model def senet154(pretrained: bool = False, **kwargs) -> ResNet: model_args = dict( block=Bottleneck, layers=(3, 8, 36, 3), cardinality=64, base_width=4, stem_type='deep', down_kernel_size=3, block_reduce_first=2, block_args=dict(attn_layer='se')) return _create_resnet('senet154', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur18(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-18 model with blur anti-aliasing """ model_args = dict(block=BasicBlock, layers=(2, 2, 2, 2), aa_layer=BlurPool2d) return _create_resnet('resnetblur18', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur50(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50 model with blur anti-aliasing """ model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), aa_layer=BlurPool2d) return _create_resnet('resnetblur50', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur50d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-50-D model with blur anti-aliasing """ model_args = dict( block=Bottleneck, layers=(3, 4, 6, 3), aa_layer=BlurPool2d, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnetblur50d', pretrained, **dict(model_args, **kwargs)) @register_model def resnetblur101d(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-101-D model with blur anti-aliasing """ model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), aa_layer=BlurPool2d, stem_width=32, stem_type='deep', avg_down=True) return _create_resnet('resnetblur101d', pretrained, **dict(model_args, **kwargs)) 
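
# --- Editor's note: illustrative usage sketch, not part of the upstream timm source. ---
# The anti-aliased ResNet entrypoints above (and every other @register_model function in
# this file) are normally consumed through timm's model factory rather than called
# directly. The helper below is only a hedged example: it assumes `timm` and `torch` are
# installed and that the 'resnetblur50' entrypoint defined above is visible in the registry.
def _example_usage_resnetblur50():
    import torch
    import timm

    # Instantiate the registered architecture by name (random init, no weight download).
    model = timm.create_model('resnetblur50', pretrained=False, num_classes=10)
    model.eval()
    logits = model(torch.randn(1, 3, 224, 224))  # -> tensor of shape (1, 10)

    # The same entrypoint also works as a multi-scale feature backbone.
    backbone = timm.create_model(
        'resnetblur50', pretrained=False, features_only=True, out_indices=(2, 3, 4))
    features = backbone(torch.randn(1, 3, 224, 224))  # -> list of 3 feature maps
    return logits, features
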
@register_model
def resnetaa34d(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a ResNet-34-D model w/ avgpool anti-aliasing
    """
    model_args = dict(
        block=BasicBlock, layers=(3, 4, 6, 3), aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep', avg_down=True)
    return _create_resnet('resnetaa34d', pretrained, **dict(model_args, **kwargs))


@register_model
def resnetaa50(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a ResNet-50 model with avgpool anti-aliasing
    """
    model_args = dict(block=Bottleneck, layers=(3, 4, 6, 3), aa_layer=nn.AvgPool2d)
    return _create_resnet('resnetaa50', pretrained, **dict(model_args, **kwargs))


@register_model
def resnetaa50d(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a ResNet-50-D model with avgpool anti-aliasing
    """
    model_args = dict(
        block=Bottleneck, layers=(3, 4, 6, 3), aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep', avg_down=True)
    return _create_resnet('resnetaa50d', pretrained, **dict(model_args, **kwargs))


@register_model
def resnetaa101d(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a ResNet-101-D model with avgpool anti-aliasing
    """
    model_args = dict(
        block=Bottleneck, layers=(3, 4, 23, 3), aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep', avg_down=True)
    return _create_resnet('resnetaa101d', pretrained, **dict(model_args, **kwargs))


@register_model
def seresnetaa50d(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a SE-ResNet-50-D model with avgpool anti-aliasing
    """
    model_args = dict(
        block=Bottleneck, layers=(3, 4, 6, 3), aa_layer=nn.AvgPool2d, stem_width=32, stem_type='deep',
        avg_down=True, block_args=dict(attn_layer='se'))
    return _create_resnet('seresnetaa50d', pretrained, **dict(model_args, **kwargs))


@register_model
def seresnextaa101d_32x8d(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a SE-ResNeXt-101-D 32x8d model with avgpool anti-aliasing
    """
    model_args = dict(
        block=Bottleneck, layers=(3, 4, 23, 3), cardinality=32, base_width=8, stem_width=32, stem_type='deep',
        avg_down=True, aa_layer=nn.AvgPool2d, block_args=dict(attn_layer='se'))
    return _create_resnet('seresnextaa101d_32x8d', pretrained, **dict(model_args, **kwargs))


@register_model
def seresnextaa201d_32x8d(pretrained: bool = False, **kwargs):
    """Constructs a SE-ResNeXt-201-D 32x8d model with avgpool anti-aliasing
    """
    model_args = dict(
        block=Bottleneck, layers=(3, 24, 36, 4), cardinality=32, base_width=8, stem_width=64, stem_type='deep',
        avg_down=True, aa_layer=nn.AvgPool2d, block_args=dict(attn_layer='se'))
    return _create_resnet('seresnextaa201d_32x8d', pretrained, **dict(model_args, **kwargs))


@register_model
def resnetrs50(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a ResNet-RS-50 model.

    Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579
    Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs
    """
    attn_layer = partial(get_attn('se'), rd_ratio=0.25)
    model_args = dict(
        block=Bottleneck, layers=(3, 4, 6, 3), stem_width=32, stem_type='deep', replace_stem_pool=True,
        avg_down=True, block_args=dict(attn_layer=attn_layer))
    return _create_resnet('resnetrs50', pretrained, **dict(model_args, **kwargs))


@register_model
def resnetrs101(pretrained: bool = False, **kwargs) -> ResNet:
    """Constructs a ResNet-RS-101 model.
Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=(3, 4, 23, 3), stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs101', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs152(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-152 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=(3, 8, 36, 3), stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs152', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs200(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-200 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=(3, 24, 36, 3), stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs200', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs270(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-270 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=(4, 29, 53, 4), stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs270', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs350(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-350 model. Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=(4, 36, 72, 4), stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs350', pretrained, **dict(model_args, **kwargs)) @register_model def resnetrs420(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a ResNet-RS-420 model Paper: Revisiting ResNets - https://arxiv.org/abs/2103.07579 Pretrained weights from https://github.com/tensorflow/tpu/tree/bee9c4f6/models/official/resnet/resnet_rs """ attn_layer = partial(get_attn('se'), rd_ratio=0.25) model_args = dict( block=Bottleneck, layers=(4, 44, 87, 4), stem_width=32, stem_type='deep', replace_stem_pool=True, avg_down=True, block_args=dict(attn_layer=attn_layer)) return _create_resnet('resnetrs420', pretrained, **dict(model_args, **kwargs)) @register_model def test_resnet(pretrained: bool = False, **kwargs) -> ResNet: """Constructs a tiny ResNet test model. 
""" model_args = dict( block=[BasicBlock, BasicBlock, Bottleneck, BasicBlock], layers=(1, 1, 1, 1), stem_width=16, stem_type='deep', avg_down=True, channels=(32, 48, 48, 96)) return _create_resnet('test_resnet', pretrained, **dict(model_args, **kwargs)) register_model_deprecations(__name__, { 'tv_resnet34': 'resnet34.tv_in1k', 'tv_resnet50': 'resnet50.tv_in1k', 'tv_resnet101': 'resnet101.tv_in1k', 'tv_resnet152': 'resnet152.tv_in1k', 'tv_resnext50_32x4d' : 'resnext50_32x4d.tv_in1k', 'ig_resnext101_32x8d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ig_resnext101_32x16d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ig_resnext101_32x32d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ig_resnext101_32x48d': 'resnext101_32x8d.fb_wsl_ig1b_ft_in1k', 'ssl_resnet18': 'resnet18.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnet50': 'resnet50.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext50_32x4d': 'resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext101_32x4d': 'resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext101_32x8d': 'resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k', 'ssl_resnext101_32x16d': 'resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k', 'swsl_resnet18': 'resnet18.fb_swsl_ig1b_ft_in1k', 'swsl_resnet50': 'resnet50.fb_swsl_ig1b_ft_in1k', 'swsl_resnext50_32x4d': 'resnext50_32x4d.fb_swsl_ig1b_ft_in1k', 'swsl_resnext101_32x4d': 'resnext101_32x4d.fb_swsl_ig1b_ft_in1k', 'swsl_resnext101_32x8d': 'resnext101_32x8d.fb_swsl_ig1b_ft_in1k', 'swsl_resnext101_32x16d': 'resnext101_32x16d.fb_swsl_ig1b_ft_in1k', 'gluon_resnet18_v1b': 'resnet18.gluon_in1k', 'gluon_resnet34_v1b': 'resnet34.gluon_in1k', 'gluon_resnet50_v1b': 'resnet50.gluon_in1k', 'gluon_resnet101_v1b': 'resnet101.gluon_in1k', 'gluon_resnet152_v1b': 'resnet152.gluon_in1k', 'gluon_resnet50_v1c': 'resnet50c.gluon_in1k', 'gluon_resnet101_v1c': 'resnet101c.gluon_in1k', 'gluon_resnet152_v1c': 'resnet152c.gluon_in1k', 'gluon_resnet50_v1d': 'resnet50d.gluon_in1k', 'gluon_resnet101_v1d': 'resnet101d.gluon_in1k', 'gluon_resnet152_v1d': 'resnet152d.gluon_in1k', 'gluon_resnet50_v1s': 'resnet50s.gluon_in1k', 'gluon_resnet101_v1s': 'resnet101s.gluon_in1k', 'gluon_resnet152_v1s': 'resnet152s.gluon_in1k', 'gluon_resnext50_32x4d': 'resnext50_32x4d.gluon_in1k', 'gluon_resnext101_32x4d': 'resnext101_32x4d.gluon_in1k', 'gluon_resnext101_64x4d': 'resnext101_64x4d.gluon_in1k', 'gluon_seresnext50_32x4d': 'seresnext50_32x4d.gluon_in1k', 'gluon_seresnext101_32x4d': 'seresnext101_32x4d.gluon_in1k', 'gluon_seresnext101_64x4d': 'seresnext101_64x4d.gluon_in1k', 'gluon_senet154': 'senet154.gluon_in1k', 'seresnext26tn_32x4d': 'seresnext26t_32x4d', })
pytorch-image-models/timm/models/resnet.py/0
{ "file_path": "pytorch-image-models/timm/models/resnet.py", "repo_id": "pytorch-image-models", "token_count": 45896 }
""" Vision Transformer (ViT) in PyTorch A PyTorch implement of Vision Transformers as described in: 'An Image Is Worth 16 x 16 Words: Transformers for Image Recognition at Scale' - https://arxiv.org/abs/2010.11929 `How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers` - https://arxiv.org/abs/2106.10270 `FlexiViT: One Model for All Patch Sizes` - https://arxiv.org/abs/2212.08013 The official jax code is released and available at * https://github.com/google-research/vision_transformer * https://github.com/google-research/big_vision Acknowledgments: * The paper authors for releasing code and weights, thanks! * I fixed my class token impl based on Phil Wang's https://github.com/lucidrains/vit-pytorch * Simple transformer style inspired by Andrej Karpathy's https://github.com/karpathy/minGPT * Bert reference code checks against Huggingface Transformers and Tensorflow Bert Hacked together by / Copyright 2020, Ross Wightman """ import copy import logging import math from collections import OrderedDict from functools import partial from typing import Any, Callable, Dict, Optional, Set, Tuple, Type, Union, List try: from typing import Literal except ImportError: from typing_extensions import Literal import torch import torch.nn as nn import torch.nn.functional as F from torch.jit import Final from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD, \ OPENAI_CLIP_MEAN, OPENAI_CLIP_STD from timm.layers import PatchEmbed, Mlp, DropPath, AttentionPoolLatent, RmsNorm, PatchDropout, SwiGLUPacked, SwiGLU, \ trunc_normal_, lecun_normal_, resample_patch_embed, resample_abs_pos_embed, use_fused_attn, \ get_act_layer, get_norm_layer, LayerType from ._builder import build_model_with_cfg from ._features import feature_take_indices from ._manipulate import named_apply, checkpoint_seq, adapt_input_conv from ._registry import generate_default_cfgs, register_model, register_model_deprecations __all__ = ['VisionTransformer'] # model_registry will add each entrypoint fn to this _logger = logging.getLogger(__name__) class Attention(nn.Module): fused_attn: Final[bool] def __init__( self, dim: int, num_heads: int = 8, qkv_bias: bool = False, qk_norm: bool = False, proj_bias: bool = True, attn_drop: float = 0., proj_drop: float = 0., norm_layer: Type[nn.Module] = nn.LayerNorm, ) -> None: super().__init__() assert dim % num_heads == 0, 'dim should be divisible by num_heads' self.num_heads = num_heads self.head_dim = dim // num_heads self.scale = self.head_dim ** -0.5 self.fused_attn = use_fused_attn() self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.attn_drop = nn.Dropout(attn_drop) self.proj = nn.Linear(dim, dim, bias=proj_bias) self.proj_drop = nn.Dropout(proj_drop) def forward(self, x: torch.Tensor) -> torch.Tensor: B, N, C = x.shape qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4) q, k, v = qkv.unbind(0) q, k = self.q_norm(q), self.k_norm(k) if self.fused_attn: x = F.scaled_dot_product_attention( q, k, v, dropout_p=self.attn_drop.p if self.training else 0., ) else: q = q * self.scale attn = q @ k.transpose(-2, -1) attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = attn @ v x = x.transpose(1, 2).reshape(B, N, C) x = self.proj(x) x = self.proj_drop(x) return x class LayerScale(nn.Module): def __init__( self, dim: int, init_values: float = 
1e-5, inplace: bool = False, ) -> None: super().__init__() self.inplace = inplace self.gamma = nn.Parameter(init_values * torch.ones(dim)) def forward(self, x: torch.Tensor) -> torch.Tensor: return x.mul_(self.gamma) if self.inplace else x * self.gamma class Block(nn.Module): def __init__( self, dim: int, num_heads: int, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, proj_bias: bool = True, proj_drop: float = 0., attn_drop: float = 0., init_values: Optional[float] = None, drop_path: float = 0., act_layer: Type[nn.Module] = nn.GELU, norm_layer: Type[nn.Module] = nn.LayerNorm, mlp_layer: Type[nn.Module] = Mlp, ) -> None: super().__init__() self.norm1 = norm_layer(dim) self.attn = Attention( dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_norm=qk_norm, proj_bias=proj_bias, attn_drop=attn_drop, proj_drop=proj_drop, norm_layer=norm_layer, ) self.ls1 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity() self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.norm2 = norm_layer(dim) self.mlp = mlp_layer( in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, bias=proj_bias, drop=proj_drop, ) self.ls2 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity() self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity() def forward(self, x: torch.Tensor) -> torch.Tensor: x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x)))) x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x)))) return x class ResPostBlock(nn.Module): def __init__( self, dim: int, num_heads: int, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, proj_bias: bool = True, proj_drop: float = 0., attn_drop: float = 0., init_values: Optional[float] = None, drop_path: float = 0., act_layer: Type[nn.Module] = nn.GELU, norm_layer: Type[nn.Module] = nn.LayerNorm, mlp_layer: Type[nn.Module] = Mlp, ) -> None: super().__init__() self.init_values = init_values self.attn = Attention( dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_norm=qk_norm, proj_bias=proj_bias, attn_drop=attn_drop, proj_drop=proj_drop, norm_layer=norm_layer, ) self.norm1 = norm_layer(dim) self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity() self.mlp = mlp_layer( in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, bias=proj_bias, drop=proj_drop, ) self.norm2 = norm_layer(dim) self.drop_path2 = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() self.init_weights() def init_weights(self) -> None: # NOTE this init overrides that base model init with specific changes for the block type if self.init_values is not None: nn.init.constant_(self.norm1.weight, self.init_values) nn.init.constant_(self.norm2.weight, self.init_values) def forward(self, x: torch.Tensor) -> torch.Tensor: x = x + self.drop_path1(self.norm1(self.attn(x))) x = x + self.drop_path2(self.norm2(self.mlp(x))) return x class ParallelScalingBlock(nn.Module): """ Parallel ViT block (MLP & Attention in parallel) Based on: 'Scaling Vision Transformers to 22 Billion Parameters` - https://arxiv.org/abs/2302.05442 """ fused_attn: Final[bool] def __init__( self, dim: int, num_heads: int, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, proj_bias: bool = True, proj_drop: float = 0., attn_drop: float = 0., init_values: Optional[float] = None, drop_path: float = 0., act_layer: Type[nn.Module] = nn.GELU, norm_layer: Type[nn.Module] = nn.LayerNorm, mlp_layer: Optional[Type[nn.Module]] = None, ) -> None: super().__init__() assert dim % num_heads == 0, 'dim should be divisible by num_heads' self.num_heads = num_heads self.head_dim = dim // num_heads self.scale = self.head_dim ** -0.5 self.fused_attn = use_fused_attn() mlp_hidden_dim = int(mlp_ratio * dim) in_proj_out_dim = mlp_hidden_dim + 3 * dim self.in_norm = norm_layer(dim) self.in_proj = nn.Linear(dim, in_proj_out_dim, bias=qkv_bias) self.in_split = [mlp_hidden_dim] + [dim] * 3 if qkv_bias: self.register_buffer('qkv_bias', None) self.register_parameter('mlp_bias', None) else: self.register_buffer('qkv_bias', torch.zeros(3 * dim), persistent=False) self.mlp_bias = nn.Parameter(torch.zeros(mlp_hidden_dim)) self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity() self.attn_drop = nn.Dropout(attn_drop) self.attn_out_proj = nn.Linear(dim, dim, bias=proj_bias) self.mlp_drop = nn.Dropout(proj_drop) self.mlp_act = act_layer() self.mlp_out_proj = nn.Linear(mlp_hidden_dim, dim, bias=proj_bias) self.ls = LayerScale(dim, init_values=init_values) if init_values is not None else nn.Identity() self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() def forward(self, x: torch.Tensor) -> torch.Tensor: B, N, C = x.shape # Combined MLP fc1 & qkv projections y = self.in_norm(x) if self.mlp_bias is not None: # Concat constant zero-bias for qkv w/ trainable mlp_bias. 
# Appears faster than adding to x_mlp separately y = F.linear(y, self.in_proj.weight, torch.cat((self.qkv_bias, self.mlp_bias))) else: y = self.in_proj(y) x_mlp, q, k, v = torch.split(y, self.in_split, dim=-1) # Dot product attention w/ qk norm q = self.q_norm(q.view(B, N, self.num_heads, self.head_dim)).transpose(1, 2) k = self.k_norm(k.view(B, N, self.num_heads, self.head_dim)).transpose(1, 2) v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2) if self.fused_attn: x_attn = F.scaled_dot_product_attention( q, k, v, dropout_p=self.attn_drop.p if self.training else 0., ) else: q = q * self.scale attn = q @ k.transpose(-2, -1) attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x_attn = attn @ v x_attn = x_attn.transpose(1, 2).reshape(B, N, C) x_attn = self.attn_out_proj(x_attn) # MLP activation, dropout, fc2 x_mlp = self.mlp_act(x_mlp) x_mlp = self.mlp_drop(x_mlp) x_mlp = self.mlp_out_proj(x_mlp) # Add residual w/ drop path & layer scale applied y = self.drop_path(self.ls(x_attn + x_mlp)) x = x + y return x class ParallelThingsBlock(nn.Module): """ Parallel ViT block (N parallel attention followed by N parallel MLP) Based on: `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 """ def __init__( self, dim: int, num_heads: int, num_parallel: int = 2, mlp_ratio: float = 4., qkv_bias: bool = False, qk_norm: bool = False, proj_bias: bool = True, init_values: Optional[float] = None, proj_drop: float = 0., attn_drop: float = 0., drop_path: float = 0., act_layer: Type[nn.Module] = nn.GELU, norm_layer: Type[nn.Module] = nn.LayerNorm, mlp_layer: Type[nn.Module] = Mlp, ) -> None: super().__init__() self.num_parallel = num_parallel self.attns = nn.ModuleList() self.ffns = nn.ModuleList() for _ in range(num_parallel): self.attns.append(nn.Sequential(OrderedDict([ ('norm', norm_layer(dim)), ('attn', Attention( dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_norm=qk_norm, proj_bias=proj_bias, attn_drop=attn_drop, proj_drop=proj_drop, norm_layer=norm_layer, )), ('ls', LayerScale(dim, init_values=init_values) if init_values else nn.Identity()), ('drop_path', DropPath(drop_path) if drop_path > 0. else nn.Identity()) ]))) self.ffns.append(nn.Sequential(OrderedDict([ ('norm', norm_layer(dim)), ('mlp', mlp_layer( dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, bias=proj_bias, drop=proj_drop, )), ('ls', LayerScale(dim, init_values=init_values) if init_values else nn.Identity()), ('drop_path', DropPath(drop_path) if drop_path > 0. 
else nn.Identity()) ]))) def _forward_jit(self, x: torch.Tensor) -> torch.Tensor: x = x + torch.stack([attn(x) for attn in self.attns]).sum(dim=0) x = x + torch.stack([ffn(x) for ffn in self.ffns]).sum(dim=0) return x @torch.jit.ignore def _forward(self, x: torch.Tensor) -> torch.Tensor: x = x + sum(attn(x) for attn in self.attns) x = x + sum(ffn(x) for ffn in self.ffns) return x def forward(self, x: torch.Tensor) -> torch.Tensor: if torch.jit.is_scripting() or torch.jit.is_tracing(): return self._forward_jit(x) else: return self._forward(x) def global_pool_nlc( x: torch.Tensor, pool_type: str = 'token', num_prefix_tokens: int = 1, reduce_include_prefix: bool = False, ): if not pool_type: return x if pool_type == 'token': x = x[:, 0] # class token else: x = x if reduce_include_prefix else x[:, num_prefix_tokens:] if pool_type == 'avg': x = x.mean(dim=1) elif pool_type == 'avgmax': x = 0.5 * (x.amax(dim=1) + x.mean(dim=1)) elif pool_type == 'max': x = x.amax(dim=1) else: assert not pool_type, f'Unknown pool type {pool_type}' return x class VisionTransformer(nn.Module): """ Vision Transformer A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - https://arxiv.org/abs/2010.11929 """ dynamic_img_size: Final[bool] def __init__( self, img_size: Union[int, Tuple[int, int]] = 224, patch_size: Union[int, Tuple[int, int]] = 16, in_chans: int = 3, num_classes: int = 1000, global_pool: Literal['', 'avg', 'avgmax', 'max', 'token', 'map'] = 'token', embed_dim: int = 768, depth: int = 12, num_heads: int = 12, mlp_ratio: float = 4., qkv_bias: bool = True, qk_norm: bool = False, proj_bias: bool = True, init_values: Optional[float] = None, class_token: bool = True, pos_embed: str = 'learn', no_embed_class: bool = False, reg_tokens: int = 0, pre_norm: bool = False, final_norm: bool = True, fc_norm: Optional[bool] = None, dynamic_img_size: bool = False, dynamic_img_pad: bool = False, drop_rate: float = 0., pos_drop_rate: float = 0., patch_drop_rate: float = 0., proj_drop_rate: float = 0., attn_drop_rate: float = 0., drop_path_rate: float = 0., weight_init: Literal['skip', 'jax', 'jax_nlhb', 'moco', ''] = '', fix_init: bool = False, embed_layer: Callable = PatchEmbed, embed_norm_layer: Optional[LayerType] = None, norm_layer: Optional[LayerType] = None, act_layer: Optional[LayerType] = None, block_fn: Type[nn.Module] = Block, mlp_layer: Type[nn.Module] = Mlp, ) -> None: """ Args: img_size: Input image size. patch_size: Patch size. in_chans: Number of image input channels. num_classes: Number of classes for classification head. global_pool: Type of global pooling for final sequence (default: 'token'). embed_dim: Transformer embedding dimension. depth: Depth of transformer. num_heads: Number of attention heads. mlp_ratio: Ratio of mlp hidden dim to embedding dim. qkv_bias: Enable bias for qkv projections if True. init_values: Layer-scale init values (layer-scale enabled if not None). class_token: Use class token. no_embed_class: Don't include position embeddings for class (or reg) tokens. reg_tokens: Number of register tokens. pre_norm: Enable norm after embeddings, before transformer blocks (standard in CLIP ViT). final_norm: Enable norm after transformer blocks, before head (standard in most ViT). fc_norm: Move final norm after pool (instead of before), if None, enabled when global_pool == 'avg'. drop_rate: Head dropout rate. pos_drop_rate: Position embedding dropout rate. attn_drop_rate: Attention dropout rate. drop_path_rate: Stochastic depth rate. 
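            patch_drop_rate: Patch (input token) dropout rate, applied after the position embedding.
            proj_drop_rate: Dropout rate for attention output and MLP projections.
            qk_norm: Apply normalization to query and key vectors in attention.
            proj_bias: Enable bias for attention output and MLP projection layers.
            pos_embed: Type of position embedding: 'learn' for a learned embedding, '' or 'none' to disable.
            dynamic_img_size: Allow variable input image sizes by resampling the position embedding at runtime.
            dynamic_img_pad: Pad inputs so height/width are divisible by the patch size.
            mlp_layer: MLP block layer.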
weight_init: Weight initialization scheme. fix_init: Apply weight initialization fix (scaling w/ layer index). embed_layer: Patch embedding layer. embed_norm_layer: Normalization layer to use / override in patch embed module. norm_layer: Normalization layer. act_layer: MLP activation layer. block_fn: Transformer block layer. """ super().__init__() assert global_pool in ('', 'avg', 'avgmax', 'max', 'token', 'map') assert class_token or global_pool != 'token' assert pos_embed in ('', 'none', 'learn') use_fc_norm = global_pool in ('avg', 'avgmax', 'max') if fc_norm is None else fc_norm norm_layer = get_norm_layer(norm_layer) or partial(nn.LayerNorm, eps=1e-6) embed_norm_layer = get_norm_layer(embed_norm_layer) act_layer = get_act_layer(act_layer) or nn.GELU self.num_classes = num_classes self.global_pool = global_pool self.num_features = self.head_hidden_size = self.embed_dim = embed_dim # for consistency with other models self.num_prefix_tokens = 1 if class_token else 0 self.num_prefix_tokens += reg_tokens self.num_reg_tokens = reg_tokens self.has_class_token = class_token self.no_embed_class = no_embed_class # don't embed prefix positions (includes reg) self.dynamic_img_size = dynamic_img_size self.grad_checkpointing = False embed_args = {} if dynamic_img_size: # flatten deferred until after pos embed embed_args.update(dict(strict_img_size=False, output_fmt='NHWC')) if embed_norm_layer is not None: embed_args['norm_layer'] = embed_norm_layer self.patch_embed = embed_layer( img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, bias=not pre_norm, # disable bias if pre-norm is used (e.g. CLIP) dynamic_img_pad=dynamic_img_pad, **embed_args, ) num_patches = self.patch_embed.num_patches reduction = self.patch_embed.feat_ratio() if hasattr(self.patch_embed, 'feat_ratio') else patch_size self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if class_token else None self.reg_token = nn.Parameter(torch.zeros(1, reg_tokens, embed_dim)) if reg_tokens else None embed_len = num_patches if no_embed_class else num_patches + self.num_prefix_tokens if not pos_embed or pos_embed == 'none': self.pos_embed = None else: self.pos_embed = nn.Parameter(torch.randn(1, embed_len, embed_dim) * .02) self.pos_drop = nn.Dropout(p=pos_drop_rate) if patch_drop_rate > 0: self.patch_drop = PatchDropout( patch_drop_rate, num_prefix_tokens=self.num_prefix_tokens, ) else: self.patch_drop = nn.Identity() self.norm_pre = norm_layer(embed_dim) if pre_norm else nn.Identity() dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule self.blocks = nn.Sequential(*[ block_fn( dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_norm=qk_norm, proj_bias=proj_bias, init_values=init_values, proj_drop=proj_drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer, mlp_layer=mlp_layer, ) for i in range(depth)]) self.feature_info = [ dict(module=f'blocks.{i}', num_chs=embed_dim, reduction=reduction) for i in range(depth)] self.norm = norm_layer(embed_dim) if final_norm and not use_fc_norm else nn.Identity() # Classifier Head if global_pool == 'map': self.attn_pool = AttentionPoolLatent( self.embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, norm_layer=norm_layer, ) else: self.attn_pool = None self.fc_norm = norm_layer(embed_dim) if final_norm and use_fc_norm else nn.Identity() self.head_drop = nn.Dropout(drop_rate) self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() 
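        # Illustrative sketch (comments only, not executed): with the defaults above (img_size=224,
        # patch_size=16, class_token=True, reg_tokens=0), the token sequence seen by the blocks has
        #   num_patches + num_prefix_tokens = (224 // 16) ** 2 + 1 = 196 + 1 = 197
        # entries, and pos_embed has shape (1, 197, embed_dim), or (1, 196, embed_dim) when
        # no_embed_class=True. A hypothetical minimal build for reference:
        #   model = VisionTransformer(img_size=224, patch_size=16, embed_dim=768, depth=12, num_heads=12)
        #   model(torch.randn(1, 3, 224, 224)).shape  # -> (1, 1000) with the default classifier head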
if weight_init != 'skip': self.init_weights(weight_init) if fix_init: self.fix_init_weight() def fix_init_weight(self): def rescale(param, _layer_id): param.div_(math.sqrt(2.0 * _layer_id)) for layer_id, layer in enumerate(self.blocks): rescale(layer.attn.proj.weight.data, layer_id + 1) rescale(layer.mlp.fc2.weight.data, layer_id + 1) def init_weights(self, mode: str = '') -> None: assert mode in ('jax', 'jax_nlhb', 'moco', '') head_bias = -math.log(self.num_classes) if 'nlhb' in mode else 0. if self.pos_embed is not None: trunc_normal_(self.pos_embed, std=.02) if self.cls_token is not None: nn.init.normal_(self.cls_token, std=1e-6) if self.reg_token is not None: nn.init.normal_(self.reg_token, std=1e-6) named_apply(get_init_weights_vit(mode, head_bias), self) def _init_weights(self, m: nn.Module) -> None: # this fn left here for compat with downstream users init_weights_vit_timm(m) @torch.jit.ignore() def load_pretrained(self, checkpoint_path: str, prefix: str = '') -> None: _load_weights(self, checkpoint_path, prefix) @torch.jit.ignore def no_weight_decay(self) -> Set: return {'pos_embed', 'cls_token', 'dist_token'} @torch.jit.ignore def group_matcher(self, coarse: bool = False) -> Dict: return dict( stem=r'^cls_token|pos_embed|patch_embed', # stem and embed blocks=[(r'^blocks\.(\d+)', None), (r'^norm', (99999,))] ) @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True) -> None: self.grad_checkpointing = enable if hasattr(self.patch_embed, 'set_grad_checkpointing'): self.patch_embed.set_grad_checkpointing(enable) @torch.jit.ignore def get_classifier(self) -> nn.Module: return self.head def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None): self.num_classes = num_classes if global_pool is not None: assert global_pool in ('', 'avg', 'avgmax', 'max', 'token', 'map') if global_pool == 'map' and self.attn_pool is None: assert False, "Cannot currently add attention pooling in reset_classifier()." 
elif global_pool != 'map' and self.attn_pool is not None: self.attn_pool = None # remove attention pooling self.global_pool = global_pool self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() def set_input_size( self, img_size: Optional[Tuple[int, int]] = None, patch_size: Optional[Tuple[int, int]] = None, ): """Method updates the input image resolution, patch size Args: img_size: New input resolution, if None current resolution is used patch_size: New patch size, if None existing patch size is used """ prev_grid_size = self.patch_embed.grid_size self.patch_embed.set_input_size(img_size=img_size, patch_size=patch_size) if self.pos_embed is not None: num_prefix_tokens = 0 if self.no_embed_class else self.num_prefix_tokens num_new_tokens = self.patch_embed.num_patches + num_prefix_tokens if num_new_tokens != self.pos_embed.shape[1]: self.pos_embed = nn.Parameter(resample_abs_pos_embed( self.pos_embed, new_size=self.patch_embed.grid_size, old_size=prev_grid_size, num_prefix_tokens=num_prefix_tokens, verbose=True, )) def _pos_embed(self, x: torch.Tensor) -> torch.Tensor: if self.pos_embed is None: return x.view(x.shape[0], -1, x.shape[-1]) if self.dynamic_img_size: B, H, W, C = x.shape prev_grid_size = self.patch_embed.grid_size pos_embed = resample_abs_pos_embed( self.pos_embed, new_size=(H, W), old_size=prev_grid_size, num_prefix_tokens=0 if self.no_embed_class else self.num_prefix_tokens, ) x = x.view(B, -1, C) else: pos_embed = self.pos_embed to_cat = [] if self.cls_token is not None: to_cat.append(self.cls_token.expand(x.shape[0], -1, -1)) if self.reg_token is not None: to_cat.append(self.reg_token.expand(x.shape[0], -1, -1)) if self.no_embed_class: # deit-3, updated JAX (big vision) # position embedding does not overlap with class token, add then concat x = x + pos_embed if to_cat: x = torch.cat(to_cat + [x], dim=1) else: # original timm, JAX, and deit vit impl # pos_embed has entry for class token, concat then add if to_cat: x = torch.cat(to_cat + [x], dim=1) x = x + pos_embed return self.pos_drop(x) def forward_intermediates( self, x: torch.Tensor, indices: Optional[Union[int, List[int]]] = None, return_prefix_tokens: bool = False, norm: bool = False, stop_early: bool = False, output_fmt: str = 'NCHW', intermediates_only: bool = False, ) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]: """ Forward features that returns intermediates. Args: x: Input image tensor indices: Take last n blocks if int, all if None, select matching indices if sequence return_prefix_tokens: Return both prefix and spatial intermediate tokens norm: Apply norm layer to all intermediates stop_early: Stop iterating over blocks when last desired intermediate hit output_fmt: Shape of intermediate feature outputs intermediates_only: Only return intermediate features Returns: """ assert output_fmt in ('NCHW', 'NLC'), 'Output format must be one of NCHW or NLC.' 
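        # Usage sketch (illustrative, assuming a patch16 model and a 224x224 input):
        #   final, feats = model.forward_intermediates(x, indices=[2, 5, 8, 11])
        #   feats[0].shape  # -> (B, embed_dim, 14, 14) with the default output_fmt='NCHW'
        #   feats = model.forward_intermediates(x, indices=4, intermediates_only=True)  # last 4 blocks only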
reshape = output_fmt == 'NCHW' intermediates = [] take_indices, max_index = feature_take_indices(len(self.blocks), indices) # forward pass B, _, height, width = x.shape x = self.patch_embed(x) x = self._pos_embed(x) x = self.patch_drop(x) x = self.norm_pre(x) if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript blocks = self.blocks else: blocks = self.blocks[:max_index + 1] for i, blk in enumerate(blocks): x = blk(x) if i in take_indices: # normalize intermediates with final norm layer if enabled intermediates.append(self.norm(x) if norm else x) # process intermediates if self.num_prefix_tokens: # split prefix (e.g. class, distill) and spatial feature tokens prefix_tokens = [y[:, 0:self.num_prefix_tokens] for y in intermediates] intermediates = [y[:, self.num_prefix_tokens:] for y in intermediates] if reshape: # reshape to BCHW output format H, W = self.patch_embed.dynamic_feat_size((height, width)) intermediates = [y.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() for y in intermediates] if not torch.jit.is_scripting() and return_prefix_tokens: # return_prefix not support in torchscript due to poor type handling intermediates = list(zip(intermediates, prefix_tokens)) if intermediates_only: return intermediates x = self.norm(x) return x, intermediates def prune_intermediate_layers( self, indices: Union[int, List[int]] = 1, prune_norm: bool = False, prune_head: bool = True, ): """ Prune layers not required for specified intermediates. """ take_indices, max_index = feature_take_indices(len(self.blocks), indices) self.blocks = self.blocks[:max_index + 1] # truncate blocks if prune_norm: self.norm = nn.Identity() if prune_head: self.fc_norm = nn.Identity() self.reset_classifier(0, '') return take_indices def get_intermediate_layers( self, x: torch.Tensor, n: Union[int, List[int], Tuple[int]] = 1, reshape: bool = False, return_prefix_tokens: bool = False, norm: bool = False, ) -> List[torch.Tensor]: """ Intermediate layer accessor inspired by DINO / DINOv2 interface. NOTE: This API is for backwards compat, favour using forward_intermediates() directly. 
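        A minimal usage sketch (illustrative, assuming a 224x224 input to a patch16, embed_dim=768 model):
            feats = model.get_intermediate_layers(x, n=4, reshape=True)
            # list of 4 tensors from the last 4 blocks, each shaped (B, 768, 14, 14)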
""" return self.forward_intermediates( x, n, return_prefix_tokens=return_prefix_tokens, norm=norm, output_fmt='NCHW' if reshape else 'NLC', intermediates_only=True, ) def forward_features(self, x: torch.Tensor) -> torch.Tensor: x = self.patch_embed(x) x = self._pos_embed(x) x = self.patch_drop(x) x = self.norm_pre(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.blocks, x) else: x = self.blocks(x) x = self.norm(x) return x def pool(self, x: torch.Tensor, pool_type: Optional[str] = None) -> torch.Tensor: if self.attn_pool is not None: x = self.attn_pool(x) return x pool_type = self.global_pool if pool_type is None else pool_type x = global_pool_nlc(x, pool_type=pool_type, num_prefix_tokens=self.num_prefix_tokens) return x def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor: x = self.pool(x) x = self.fc_norm(x) x = self.head_drop(x) return x if pre_logits else self.head(x) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.forward_features(x) x = self.forward_head(x) return x def init_weights_vit_timm(module: nn.Module, name: str = '') -> None: """ ViT weight initialization, original timm impl (for reproducibility) """ if isinstance(module, nn.Linear): trunc_normal_(module.weight, std=.02) if module.bias is not None: nn.init.zeros_(module.bias) elif hasattr(module, 'init_weights'): module.init_weights() def init_weights_vit_jax(module: nn.Module, name: str = '', head_bias: float = 0.0) -> None: """ ViT weight initialization, matching JAX (Flax) impl """ if isinstance(module, nn.Linear): if name.startswith('head'): nn.init.zeros_(module.weight) nn.init.constant_(module.bias, head_bias) else: nn.init.xavier_uniform_(module.weight) if module.bias is not None: nn.init.normal_(module.bias, std=1e-6) if 'mlp' in name else nn.init.zeros_(module.bias) elif isinstance(module, nn.Conv2d): lecun_normal_(module.weight) if module.bias is not None: nn.init.zeros_(module.bias) elif hasattr(module, 'init_weights'): module.init_weights() def init_weights_vit_moco(module: nn.Module, name: str = '') -> None: """ ViT weight initialization, matching moco-v3 impl minus fixed PatchEmbed """ if isinstance(module, nn.Linear): if 'qkv' in name: # treat the weights of Q, K, V separately val = math.sqrt(6. / float(module.weight.shape[0] // 3 + module.weight.shape[1])) nn.init.uniform_(module.weight, -val, val) else: nn.init.xavier_uniform_(module.weight) if module.bias is not None: nn.init.zeros_(module.bias) elif hasattr(module, 'init_weights'): module.init_weights() def get_init_weights_vit(mode: str = 'jax', head_bias: float = 0.0) -> Callable: if 'jax' in mode: return partial(init_weights_vit_jax, head_bias=head_bias) elif 'moco' in mode: return init_weights_vit_moco else: return init_weights_vit_timm def resize_pos_embed( posemb: torch.Tensor, posemb_new: torch.Tensor, num_prefix_tokens: int = 1, gs_new: Tuple[int, int] = (), interpolation: str = 'bicubic', antialias: bool = False, ) -> torch.Tensor: """ Rescale the grid of position embeddings when loading from state_dict. 
*DEPRECATED* This function is being deprecated in favour of using resample_abs_pos_embed """ ntok_new = posemb_new.shape[1] - num_prefix_tokens ntok_old = posemb.shape[1] - num_prefix_tokens gs_old = [int(math.sqrt(ntok_old))] * 2 if not len(gs_new): # backwards compatibility gs_new = [int(math.sqrt(ntok_new))] * 2 return resample_abs_pos_embed( posemb, gs_new, gs_old, num_prefix_tokens=num_prefix_tokens, interpolation=interpolation, antialias=antialias, verbose=True, ) @torch.no_grad() def _load_weights(model: VisionTransformer, checkpoint_path: str, prefix: str = '', load_bfloat16: bool = False) -> None: """ Load weights from .npz checkpoints for official Google Brain Flax implementation """ import numpy as np if load_bfloat16: import jax.numpy as jnp import ml_dtypes def _n2p(_w, t=True, idx=None): if idx is not None: _w = _w[idx] if load_bfloat16: _w = _w.view(ml_dtypes.bfloat16).astype(jnp.float32) _w = np.array(_w) if _w.ndim == 4 and _w.shape[0] == _w.shape[1] == _w.shape[2] == 1: _w = _w.flatten() if t: if _w.ndim == 4: _w = _w.transpose([3, 2, 0, 1]) elif _w.ndim == 3: _w = _w.transpose([2, 0, 1]) elif _w.ndim == 2: _w = _w.transpose([1, 0]) _w = torch.from_numpy(_w) return _w if load_bfloat16: w = jnp.load(checkpoint_path) else: w = np.load(checkpoint_path) interpolation = 'bilinear' antialias = False big_vision = False if not prefix: if 'opt/target/embedding/kernel' in w: prefix = 'opt/target/' elif 'params/embedding/kernel' in w: prefix = 'params/' big_vision = True elif 'params/img/embedding/kernel' in w: prefix = 'params/img/' big_vision = True if hasattr(model.patch_embed, 'backbone'): # hybrid backbone = model.patch_embed.backbone stem_only = not hasattr(backbone, 'stem') stem = backbone if stem_only else backbone.stem stem.conv.weight.copy_(adapt_input_conv(stem.conv.weight.shape[1], _n2p(w[f'{prefix}conv_root/kernel']))) stem.norm.weight.copy_(_n2p(w[f'{prefix}gn_root/scale'])) stem.norm.bias.copy_(_n2p(w[f'{prefix}gn_root/bias'])) if not stem_only: for i, stage in enumerate(backbone.stages): for j, block in enumerate(stage.blocks): bp = f'{prefix}block{i + 1}/unit{j + 1}/' for r in range(3): getattr(block, f'conv{r + 1}').weight.copy_(_n2p(w[f'{bp}conv{r + 1}/kernel'])) getattr(block, f'norm{r + 1}').weight.copy_(_n2p(w[f'{bp}gn{r + 1}/scale'])) getattr(block, f'norm{r + 1}').bias.copy_(_n2p(w[f'{bp}gn{r + 1}/bias'])) if block.downsample is not None: block.downsample.conv.weight.copy_(_n2p(w[f'{bp}conv_proj/kernel'])) block.downsample.norm.weight.copy_(_n2p(w[f'{bp}gn_proj/scale'])) block.downsample.norm.bias.copy_(_n2p(w[f'{bp}gn_proj/bias'])) embed_conv_w = _n2p(w[f'{prefix}embedding/kernel']) else: embed_conv_w = adapt_input_conv( model.patch_embed.proj.weight.shape[1], _n2p(w[f'{prefix}embedding/kernel'])) if embed_conv_w.shape[-2:] != model.patch_embed.proj.weight.shape[-2:]: embed_conv_w = resample_patch_embed( embed_conv_w, model.patch_embed.proj.weight.shape[-2:], interpolation=interpolation, antialias=antialias, verbose=True, ) model.patch_embed.proj.weight.copy_(embed_conv_w) model.patch_embed.proj.bias.copy_(_n2p(w[f'{prefix}embedding/bias'])) if model.cls_token is not None: model.cls_token.copy_(_n2p(w[f'{prefix}cls'], t=False)) if big_vision: pos_embed_w = _n2p(w[f'{prefix}pos_embedding'], t=False) else: pos_embed_w = _n2p(w[f'{prefix}Transformer/posembed_input/pos_embedding'], t=False) if pos_embed_w.shape != model.pos_embed.shape: num_prefix_tokens = 0 if getattr(model, 'no_embed_class', False) else getattr(model, 'num_prefix_tokens', 1) pos_embed_w = 
resample_abs_pos_embed( # resize pos embedding when different size from pretrained weights pos_embed_w, new_size=model.patch_embed.grid_size, num_prefix_tokens=num_prefix_tokens, interpolation=interpolation, antialias=antialias, verbose=True, ) model.pos_embed.copy_(pos_embed_w) model.norm.weight.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/scale'])) model.norm.bias.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/bias'])) if (isinstance(model.head, nn.Linear) and f'{prefix}head/bias' in w and model.head.bias.shape[0] == w[f'{prefix}head/bias'].shape[-1]): model.head.weight.copy_(_n2p(w[f'{prefix}head/kernel'])) model.head.bias.copy_(_n2p(w[f'{prefix}head/bias'])) # NOTE representation layer has been removed, not used in latest 21k/1k pretrained weights # if isinstance(getattr(model.pre_logits, 'fc', None), nn.Linear) and f'{prefix}pre_logits/bias' in w: # model.pre_logits.fc.weight.copy_(_n2p(w[f'{prefix}pre_logits/kernel'])) # model.pre_logits.fc.bias.copy_(_n2p(w[f'{prefix}pre_logits/bias'])) if model.attn_pool is not None: block_prefix = f'{prefix}MAPHead_0/' mha_prefix = block_prefix + f'MultiHeadDotProductAttention_0/' model.attn_pool.latent.copy_(_n2p(w[f'{block_prefix}probe'], t=False)) model.attn_pool.kv.weight.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/kernel'], t=False).flatten(1).T for n in ('key', 'value')])) model.attn_pool.kv.bias.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/bias'], t=False).reshape(-1) for n in ('key', 'value')])) model.attn_pool.q.weight.copy_(_n2p(w[f'{mha_prefix}query/kernel'], t=False).flatten(1).T) model.attn_pool.q.bias.copy_(_n2p(w[f'{mha_prefix}query/bias'], t=False).reshape(-1)) model.attn_pool.proj.weight.copy_(_n2p(w[f'{mha_prefix}out/kernel']).flatten(1)) model.attn_pool.proj.bias.copy_(_n2p(w[f'{mha_prefix}out/bias'])) model.attn_pool.norm.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/scale'])) model.attn_pool.norm.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/bias'])) for r in range(2): getattr(model.attn_pool.mlp, f'fc{r + 1}').weight.copy_(_n2p(w[f'{block_prefix}MlpBlock_0/Dense_{r}/kernel'])) getattr(model.attn_pool.mlp, f'fc{r + 1}').bias.copy_(_n2p(w[f'{block_prefix}MlpBlock_0/Dense_{r}/bias'])) mha_sub, b_sub, ln1_sub = (0, 0, 1) if big_vision else (1, 3, 2) for i, block in enumerate(model.blocks.children()): if f'{prefix}Transformer/encoderblock/LayerNorm_0/scale' in w: block_prefix = f'{prefix}Transformer/encoderblock/' idx = i else: block_prefix = f'{prefix}Transformer/encoderblock_{i}/' idx = None mha_prefix = block_prefix + f'MultiHeadDotProductAttention_{mha_sub}/' block.norm1.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/scale'], idx=idx)) block.norm1.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/bias'], idx=idx)) block.attn.qkv.weight.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/kernel'], t=False, idx=idx).flatten(1).T for n in ('query', 'key', 'value')])) block.attn.qkv.bias.copy_(torch.cat([ _n2p(w[f'{mha_prefix}{n}/bias'], t=False, idx=idx).reshape(-1) for n in ('query', 'key', 'value')])) block.attn.proj.weight.copy_(_n2p(w[f'{mha_prefix}out/kernel'], idx=idx).flatten(1)) block.attn.proj.bias.copy_(_n2p(w[f'{mha_prefix}out/bias'], idx=idx)) block.norm2.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_{ln1_sub}/scale'], idx=idx)) block.norm2.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_{ln1_sub}/bias'], idx=idx)) for r in range(2): getattr(block.mlp, f'fc{r + 1}').weight.copy_( _n2p(w[f'{block_prefix}MlpBlock_{b_sub}/Dense_{r}/kernel'], idx=idx)) getattr(block.mlp, f'fc{r + 1}').bias.copy_( 
_n2p(w[f'{block_prefix}MlpBlock_{b_sub}/Dense_{r}/bias'], idx=idx)) def _convert_openai_clip( state_dict: Dict[str, torch.Tensor], model: VisionTransformer, prefix: str = 'visual.', ) -> Dict[str, torch.Tensor]: out_dict = {} swaps = [ ('conv1', 'patch_embed.proj'), ('positional_embedding', 'pos_embed'), ('transformer.resblocks.', 'blocks.'), ('ln_pre', 'norm_pre'), ('ln_post', 'norm'), ('ln_', 'norm'), ('in_proj_', 'qkv.'), ('out_proj', 'proj'), ('mlp.c_fc', 'mlp.fc1'), ('mlp.c_proj', 'mlp.fc2'), ] for k, v in state_dict.items(): if not k.startswith(prefix): continue k = k.replace(prefix, '') for sp in swaps: k = k.replace(sp[0], sp[1]) if k == 'proj': k = 'head.weight' v = v.transpose(0, 1) out_dict['head.bias'] = torch.zeros(v.shape[0]) elif k == 'class_embedding': k = 'cls_token' v = v.unsqueeze(0).unsqueeze(1) elif k == 'pos_embed': v = v.unsqueeze(0) out_dict[k] = v return out_dict def _convert_dinov2( state_dict: Dict[str, torch.Tensor], model: VisionTransformer, ) -> Dict[str, torch.Tensor]: import re out_dict = {} state_dict.pop("mask_token", None) if 'register_tokens' in state_dict: # convert dinov2 w/ registers to no_embed_class timm model (neither cls or reg tokens overlap pos embed) out_dict['reg_token'] = state_dict.pop('register_tokens') out_dict['cls_token'] = state_dict.pop('cls_token') + state_dict['pos_embed'][:, 0] out_dict['pos_embed'] = state_dict.pop('pos_embed')[:, 1:] for k, v in state_dict.items(): if re.match(r"blocks\.(\d+)\.mlp\.w12\.(?:weight|bias)", k): out_dict[k.replace("w12", "fc1")] = v continue elif re.match(r"blocks\.(\d+)\.mlp\.w3\.(?:weight|bias)", k): out_dict[k.replace("w3", "fc2")] = v continue out_dict[k] = v return out_dict def _convert_aimv2( state_dict: Dict[str, torch.Tensor], model: VisionTransformer, ) -> Dict[str, torch.Tensor]: out_dict = {} for k, v in state_dict.items(): k = k.replace('norm_1', 'norm1') k = k.replace('norm_2', 'norm2') k = k.replace('preprocessor.patchifier.', 'patch_embed.') k = k.replace('preprocessor.pos_embed', 'pos_embed') k = k.replace('trunk.', '') k = k.replace('post_trunk_norm.', 'norm.') k = k.replace('mlp.fc1', 'mlp.fc1_g') k = k.replace('mlp.fc3', 'mlp.fc1_x') out_dict[k] = v return out_dict def checkpoint_filter_fn( state_dict: Dict[str, torch.Tensor], model: VisionTransformer, adapt_layer_scale: bool = False, interpolation: str = 'bicubic', antialias: bool = True, ) -> Dict[str, torch.Tensor]: """ convert patch embedding weight from manual patchify + linear proj to conv""" import re out_dict = {} state_dict = state_dict.get('model', state_dict) state_dict = state_dict.get('state_dict', state_dict) prefix = '' if 'visual.class_embedding' in state_dict: state_dict = _convert_openai_clip(state_dict, model) elif 'module.visual.class_embedding' in state_dict: state_dict = _convert_openai_clip(state_dict, model, prefix='module.visual.') elif "mask_token" in state_dict: state_dict = _convert_dinov2(state_dict, model) elif "encoder" in state_dict: # IJEPA, vit in an 'encoder' submodule state_dict = state_dict['encoder'] prefix = 'module.' elif 'visual.trunk.pos_embed' in state_dict or 'visual.trunk.blocks.0.norm1.weight' in state_dict: # OpenCLIP model with timm vision encoder prefix = 'visual.trunk.' 
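        # Key remap sketch (illustrative): for an OpenCLIP checkpoint, a key such as
        # 'visual.trunk.blocks.0.attn.qkv.weight' has the 'visual.trunk.' prefix stripped below,
        # becoming 'blocks.0.attn.qkv.weight'; an optional projection stored at
        # 'visual.head.proj.weight' is remapped to the timm classifier 'head.weight'.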
if 'visual.head.proj.weight' in state_dict and isinstance(model.head, nn.Linear): # remap final nn.Linear if it exists outside of the timm .trunk (ie in visual.head.proj) out_dict['head.weight'] = state_dict['visual.head.proj.weight'] out_dict['head.bias'] = torch.zeros(state_dict['visual.head.proj.weight'].shape[0]) elif 'preprocessor.patchifier.proj.weight' in state_dict: state_dict = _convert_aimv2(state_dict, model) if prefix: # filter on & remove prefix string from keys state_dict = {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)} for k, v in state_dict.items(): if 'patch_embed.proj.weight' in k: O, I, H, W = model.patch_embed.proj.weight.shape if len(v.shape) < 4: # For old models that I trained prior to conv based patchification O, I, H, W = model.patch_embed.proj.weight.shape v = v.reshape(O, -1, H, W) if v.shape[-1] != W or v.shape[-2] != H: v = resample_patch_embed( v, (H, W), interpolation=interpolation, antialias=antialias, verbose=True, ) elif k == 'pos_embed' and v.shape[1] != model.pos_embed.shape[1]: # To resize pos embedding when using model at different size from pretrained weights num_prefix_tokens = 0 if getattr(model, 'no_embed_class', False) else getattr(model, 'num_prefix_tokens', 1) v = resample_abs_pos_embed( v, new_size=model.patch_embed.grid_size, num_prefix_tokens=num_prefix_tokens, interpolation=interpolation, antialias=antialias, verbose=True, ) elif adapt_layer_scale and 'gamma_' in k: # remap layer-scale gamma into sub-module (deit3 models) k = re.sub(r'gamma_([0-9])', r'ls\1.gamma', k) elif 'pre_logits' in k: # NOTE representation layer removed as not used in latest 21k/1k pretrained weights continue out_dict[k] = v return out_dict def _cfg(url: str = '', **kwargs) -> Dict[str, Any]: return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'crop_pct': 0.9, 'interpolation': 'bicubic', 'fixed_input_size': True, 'mean': IMAGENET_INCEPTION_MEAN, 'std': IMAGENET_INCEPTION_STD, 'first_conv': 'patch_embed.proj', 'classifier': 'head', **kwargs, } default_cfgs = { # re-finetuned augreg 21k FT on in1k weights 'vit_base_patch16_224.augreg2_in21k_ft_in1k': _cfg( hf_hub_id='timm/'), 'vit_base_patch16_384.augreg2_in21k_ft_in1k': _cfg(), 'vit_base_patch8_224.augreg2_in21k_ft_in1k': _cfg( hf_hub_id='timm/'), # How to train your ViT (augreg) weights, pretrained on 21k FT on in1k 'vit_tiny_patch16_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_tiny_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_small_patch32_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_small_patch32_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_small_patch16_224.augreg_in21k_ft_in1k': _cfg( 
url='https://storage.googleapis.com/vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_small_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch32_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch32_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch16_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch8_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_8-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_large_patch16_224.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_large_patch16_384.augreg_in21k_ft_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), # patch models (weights from official Google JAX impl) pretrained on in21k FT on in1k 'vit_base_patch16_224.orig_in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth', hf_hub_id='timm/'), 'vit_base_patch16_384.orig_in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_384-83fb41ba.pth', hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0), 'vit_large_patch32_384.orig_in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p32_384-9b920ba8.pth', hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0), # How to train your ViT (augreg) weights trained on in1k only 'vit_small_patch16_224.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_small_patch16_384.augreg_in1k': _cfg( 
url='https://storage.googleapis.com/vit_models/augreg/S_16-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch32_224.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch32_384.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i1k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch16_224.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i1k-300ep-lr_0.001-aug_strong2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz', hf_hub_id='timm/', custom_load=True), 'vit_base_patch16_384.augreg_in1k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_16-i1k-300ep-lr_0.001-aug_strong2-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz', hf_hub_id='timm/', custom_load=True, input_size=(3, 384, 384), crop_pct=1.0), 'vit_large_patch14_224.untrained': _cfg(url=''), 'vit_huge_patch14_224.untrained': _cfg(url=''), 'vit_giant_patch14_224.untrained': _cfg(url=''), 'vit_gigantic_patch14_224.untrained': _cfg(url=''), # patch models, imagenet21k (weights from official Google JAX impl), classifier not valid 'vit_base_patch32_224.orig_in21k': _cfg( #url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth', hf_hub_id='timm/', num_classes=0), 'vit_base_patch16_224.orig_in21k': _cfg( #url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth', hf_hub_id='timm/', num_classes=0), 'vit_large_patch32_224.orig_in21k': _cfg( #url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth', hf_hub_id='timm/', num_classes=0), 'vit_large_patch16_224.orig_in21k': _cfg( #url='https://github.com/huggingface/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth', hf_hub_id='timm/', num_classes=0), 'vit_huge_patch14_224.orig_in21k': _cfg( hf_hub_id='timm/', num_classes=0), # How to train your ViT (augreg) weights, pretrained on in21k 'vit_tiny_patch16_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_small_patch32_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_small_patch16_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_base_patch32_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.03-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_base_patch16_224.augreg_in21k': _cfg( 
url='https://storage.googleapis.com/vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_base_patch8_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/B_8-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), 'vit_large_patch16_224.augreg_in21k': _cfg( url='https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1.npz', hf_hub_id='timm/', custom_load=True, num_classes=21843), # SAM trained models (https://arxiv.org/abs/2106.01548) 'vit_base_patch32_224.sam_in1k': _cfg( url='https://storage.googleapis.com/vit_models/sam/ViT-B_32.npz', custom_load=True, hf_hub_id='timm/'), 'vit_base_patch16_224.sam_in1k': _cfg( url='https://storage.googleapis.com/vit_models/sam/ViT-B_16.npz', custom_load=True, hf_hub_id='timm/'), # DINO pretrained - https://arxiv.org/abs/2104.14294 (no classifier head, for fine-tune only) 'vit_small_patch16_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_small_patch8_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/dino_deitsmall8_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_base_patch16_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_base_patch8_224.dino': _cfg( url='https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain.pth', hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), # DINOv2 pretrained - https://arxiv.org/abs/2304.07193 (no classifier head, for fine-tune/features only) 'vit_small_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_base_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_large_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_giant_patch14_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), # DINOv2 pretrained w/ registers - https://arxiv.org/abs/2309.16588 (no classifier head, for fine-tune/features only) 'vit_small_patch14_reg4_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 
'vit_base_patch14_reg4_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_large_patch14_reg4_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), 'vit_giant_patch14_reg4_dinov2.lvd142m': _cfg( url='https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_reg4_pretrain.pth', hf_hub_id='timm/', license='apache-2.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0, input_size=(3, 518, 518), crop_pct=1.0), # ViT ImageNet-21K-P pretraining by MILL 'vit_base_patch16_224_miil.in21k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/vit_base_patch16_224_in21k_miil-887286df.pth', hf_hub_id='timm/', mean=(0., 0., 0.), std=(1., 1., 1.), crop_pct=0.875, interpolation='bilinear', num_classes=11221), 'vit_base_patch16_224_miil.in21k_ft_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/vit_base_patch16_224_1k_miil_84_4-2deb18e3.pth', hf_hub_id='timm/', mean=(0., 0., 0.), std=(1., 1., 1.), crop_pct=0.875, interpolation='bilinear'), # Custom timm variants 'vit_base_patch16_rpn_224.sw_in1k': _cfg( url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_base_patch16_rpn_224-sw-3b07e89d.pth', hf_hub_id='timm/'), 'vit_medium_patch16_gap_240.sw_in12k': _cfg( hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=11821), 'vit_medium_patch16_gap_256.sw_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_medium_patch16_gap_384.sw_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=0.95, crop_mode='squash'), 'vit_base_patch16_gap_224': _cfg(), # CLIP pretrained image tower and related fine-tuned weights 'vit_base_patch32_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch32_clip_384.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384)), 'vit_base_patch32_clip_448.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 448, 448)), 'vit_base_patch16_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95), 'vit_base_patch16_clip_384.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0), 'vit_large_patch14_clip_336.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_huge_patch14_clip_336.laion2b_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, 
input_size=(3, 336, 336), crop_mode='squash'), 'vit_base_patch32_clip_224.openai_ft_in12k_in1k': _cfg( # hf_hub_id='timm/vit_base_patch32_clip_224.openai_ft_in12k_in1k', # FIXME weight exists, need to push mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch32_clip_384.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95, input_size=(3, 384, 384), crop_mode='squash'), 'vit_base_patch16_clip_224.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95), 'vit_base_patch16_clip_384.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=0.95, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_large_patch14_clip_336.openai_ft_in12k_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_base_patch32_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch16_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_base_patch16_clip_384.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0), 'vit_large_patch14_clip_336.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_huge_patch14_clip_224.laion2b_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_huge_patch14_clip_336.laion2b_ft_in1k': _cfg( hf_hub_id='', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 336, 336), crop_mode='squash'), 'vit_base_patch32_clip_224.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch16_clip_224.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD), 'vit_base_patch16_clip_384.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 384, 384), crop_mode='squash'), 'vit_large_patch14_clip_224.openai_ft_in1k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0), 'vit_base_patch16_clip_224.laion2b_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=11821), 'vit_large_patch14_clip_224.laion2b_ft_in12k': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, num_classes=11821), 'vit_huge_patch14_clip_224.laion2b_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=11821), 'vit_base_patch16_clip_224.openai_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=11821), 'vit_large_patch14_clip_224.openai_ft_in12k': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=11821), 'vit_base_patch32_clip_224.laion2b': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_base_patch16_clip_224.laion2b': _cfg( 
hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.laion2b': _cfg( hf_hub_id='timm/', mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD, crop_pct=1.0, num_classes=768), 'vit_huge_patch14_clip_224.laion2b': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_giant_patch14_clip_224.laion2b': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_gigantic_patch14_clip_224.laion2b': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1280), 'vit_base_patch32_clip_224.laion400m_e32': _cfg( hf_hub_id='timm/', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_base_patch16_clip_224.laion400m_e32': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_base_patch16_plus_clip_240.laion400m_e32': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 240, 240), crop_pct=1.0, num_classes=640), 'vit_large_patch14_clip_224.laion400m_e32': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_base_patch32_clip_224.datacompxl': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_base_patch32_clip_256.datacompxl': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 256, 256), num_classes=512), 'vit_base_patch16_clip_224.datacompxl': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.datacompxl': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_base_patch16_clip_224.dfn2b': _cfg( hf_hub_id='timm/', license='apple-ascl', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.dfn2b_s39b': _cfg( hf_hub_id='timm/', license='apple-ascl', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_large_patch14_clip_224.dfn2b': _cfg( hf_hub_id='timm/', license='apple-ascl', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_huge_patch14_clip_224.dfn5b': _cfg( hf_hub_id='timm/', license='apple-ascl', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_huge_patch14_clip_378.dfn5b': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', notes=('natively QuickGELU, use quickgelu model variant for original results',), crop_pct=1.0, input_size=(3, 378, 378), num_classes=1024), 'vit_base_patch32_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_base_patch16_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.metaclip_2pt5b': _cfg( 
hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_huge_patch14_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_huge_patch14_clip_224.metaclip_altogether': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1024), 'vit_gigantic_patch14_clip_224.metaclip_2pt5b': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=1280), 'vit_base_patch32_clip_224.metaclip_400m': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_base_patch16_clip_224.metaclip_400m': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=512), 'vit_large_patch14_clip_224.metaclip_400m': _cfg( hf_hub_id='timm/', license='cc-by-nc-4.0', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_base_patch32_clip_224.openai': _cfg( hf_hub_id='timm/', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_base_patch16_clip_224.openai': _cfg( hf_hub_id='timm/', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_large_patch14_clip_224.openai': _cfg( hf_hub_id='timm/', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, num_classes=768), 'vit_large_patch14_clip_336.openai': _cfg( hf_hub_id='timm/', notes=('natively QuickGELU, use quickgelu model variant for original results',), mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, crop_pct=1.0, input_size=(3, 336, 336), num_classes=768), # experimental (may be removed) 'vit_base_patch32_plus_256.untrained': _cfg(url='', input_size=(3, 256, 256), crop_pct=0.95), 'vit_base_patch16_plus_240.untrained': _cfg(url='', input_size=(3, 240, 240), crop_pct=0.95), 'vit_small_patch16_36x1_224.untrained': _cfg(url=''), 'vit_small_patch16_18x2_224.untrained': _cfg(url=''), 'vit_base_patch16_18x2_224.untrained': _cfg(url=''), # EVA fine-tuned weights from MAE style MIM - EVA-CLIP target pretrain # https://github.com/baaivision/EVA/blob/7ecf2c0a370d97967e86d047d7af9188f78d2df3/eva/README.md#eva-l-learning-better-mim-representations-from-eva-clip 'eva_large_patch14_196.in22k_ft_in22k_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_196px_21k_to_1k_ft_88p6.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 196, 196), crop_pct=1.0), 'eva_large_patch14_336.in22k_ft_in22k_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_336px_21k_to_1k_ft_89p2.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, 
std=OPENAI_CLIP_STD, input_size=(3, 336, 336), crop_pct=1.0, crop_mode='squash'), 'eva_large_patch14_196.in22k_ft_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_196px_1k_ft_88p0.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 196, 196), crop_pct=1.0), 'eva_large_patch14_336.in22k_ft_in1k': _cfg( # hf_hub_id='BAAI/EVA', hf_hub_filename='eva_l_psz14_336px_1k_ft_88p65.pt', hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, input_size=(3, 336, 336), crop_pct=1.0, crop_mode='squash'), 'flexivit_small.1200ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_s_i1k.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_small.600ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_s_i1k_600ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_small.300ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_s_i1k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.1200ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i1k.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.600ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i1k_600ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.300ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i1k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.1000ep_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i21k_1000ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'flexivit_base.300ep_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_b_i21k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'flexivit_large.1200ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_l_i1k.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_large.600ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_l_i1k_600ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_large.300ep_in1k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/flexivit_l_i1k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95), 'flexivit_base.patch16_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/vit_b16_i21k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'flexivit_base.patch30_in21k': _cfg( url='https://storage.googleapis.com/big_vision/flexivit/vit_b30_i21k_300ep.npz', custom_load=True, hf_hub_id='timm/', input_size=(3, 240, 240), crop_pct=0.95, num_classes=21843), 'vit_base_patch16_xp_224.untrained': _cfg(url=''), 'vit_large_patch14_xp_224.untrained': _cfg(url=''), 'vit_huge_patch14_xp_224.untrained': _cfg(url=''), 'vit_base_patch16_224.mae': _cfg( url='https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth', hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, 
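    # num_classes=0 in the MAE and I-JEPA cfgs here means only the self-supervised encoder is shipped,
    # with no classifier head. Feature-extraction sketch (assumes `timm`, `torch` and Hub access):
    #   import timm, torch
    #   enc = timm.create_model('vit_base_patch16_224.mae', pretrained=True)
    #   tokens = enc.forward_features(torch.randn(1, 3, 224, 224))  # (1, 197, 768) class + patch tokens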
std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_large_patch16_224.mae': _cfg( url='https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_large.pth', hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_huge_patch14_224.mae': _cfg( url='https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_huge.pth', hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_huge_patch14_gap_224.in1k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN1K-vit.h.14-300e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_huge_patch14_gap_224.in22k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN22K-vit.h.14-900e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_huge_patch16_gap_448.in1k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN1K-vit.h.16-448px-300e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', input_size=(3, 448, 448), crop_pct=1.0, mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_giant_patch16_gap_224.in22k_ijepa': _cfg( url='https://dl.fbaipublicfiles.com/ijepa/IN22K-vit.g.16-600e.pth.tar', # hf_hub_id='timm/', license='cc-by-nc-4.0', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, num_classes=0), 'vit_base_patch16_siglip_224.webli': _cfg( hf_hub_id='timm/', num_classes=0), 'vit_base_patch16_siglip_256.webli': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_base_patch16_siglip_256.webli_i18n': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_base_patch16_siglip_384.webli': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), num_classes=0), 'vit_base_patch16_siglip_512.webli': _cfg( hf_hub_id='timm/', input_size=(3, 512, 512), num_classes=0), 'vit_large_patch16_siglip_256.webli': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_large_patch16_siglip_384.webli': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), num_classes=0), 'vit_so400m_patch14_siglip_224.webli': _cfg( hf_hub_id='timm/', num_classes=0), 'vit_so400m_patch16_siglip_256.webli_i18n': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_so400m_patch14_siglip_378.webli': _cfg( hf_hub_id='timm/', input_size=(3, 378, 378), num_classes=0), 'vit_so400m_patch14_siglip_384.webli': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), num_classes=0), 'vit_base_patch16_siglip_gap_224.webli': _cfg( hf_hub_id='timm/', num_classes=0), 'vit_base_patch16_siglip_gap_256.webli': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_base_patch16_siglip_gap_256.webli_i18n': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_base_patch16_siglip_gap_384.webli': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), num_classes=0), 'vit_base_patch16_siglip_gap_512.webli': _cfg( hf_hub_id='timm/', input_size=(3, 512, 512), num_classes=0), 'vit_large_patch16_siglip_gap_256.webli': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_large_patch16_siglip_gap_384.webli': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), num_classes=0), 'vit_so400m_patch14_siglip_gap_224.webli': _cfg( hf_hub_id='timm/', num_classes=0), 'vit_so400m_patch14_siglip_gap_224.pali_mix': _cfg( hf_hub_id='timm/', num_classes=0), 'vit_so400m_patch14_siglip_gap_224.pali_pt': _cfg( 
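    # The SigLIP entries in this block are image towers only (hence num_classes=0). The '*_gap_*'
    # variants defined below swap the attention-pool ('map') head for global average pooling, which
    # is how the PaLI / PaliGemma checkpoints tagged 'pali*' consume the tower.
    # Loading sketch (assumes `timm` and Hub access):
    #   import timm
    #   tower = timm.create_model('vit_so400m_patch14_siglip_224.webli', pretrained=True)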
hf_hub_id='timm/', num_classes=0), 'vit_so400m_patch14_siglip_gap_224.pali2_3b_pt': _cfg( hf_hub_id='timm/', num_classes=0), 'vit_so400m_patch14_siglip_gap_224.pali2_10b_pt': _cfg( hf_hub_id='timm/', num_classes=0), # 'vit_so400m_patch14_siglip_gap_224.pali2_28b_pt': _cfg( # hf_hub_id='google/paligemma2-28b-pt-224-jax', # hf_hub_filename='pt_27b_224.npz', # custom_load='hf', # num_classes=0), 'vit_so400m_patch16_siglip_gap_256.webli_i18n': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), num_classes=0), 'vit_so400m_patch14_siglip_gap_378.webli': _cfg( hf_hub_id='timm/', input_size=(3, 378, 378), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_384.webli': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali_mix': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali_pt': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali_refcoco_seg': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali_ocrvqa': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali2_3b_pt': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali2_10b_pt': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), # 'vit_so400m_patch14_siglip_gap_448.pali2_28b_pt': _cfg( # hf_hub_id='google/paligemma2-28b-pt-448-jax', # hf_hub_filename='pt_27b_448.npz', # custom_load='hf', # input_size=(3, 448, 448), crop_pct=1.0, # num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali2_3b_docci': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_448.pali2_10b_docci': _cfg( hf_hub_id='timm/', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_896.pali_pt': _cfg( hf_hub_id='timm/', input_size=(3, 896, 896), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_896.pali_refcoco_seg': _cfg( hf_hub_id='timm/', input_size=(3, 896, 896), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_896.pali_ocrvqa': _cfg( hf_hub_id='timm/', input_size=(3, 896, 896), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_896.pali2_3b_pt': _cfg( hf_hub_id='timm/', input_size=(3, 896, 896), crop_pct=1.0, num_classes=0), 'vit_so400m_patch14_siglip_gap_896.pali2_10b_pt': _cfg( hf_hub_id='timm/', input_size=(3, 896, 896), crop_pct=1.0, num_classes=0), # 'vit_so400m_patch14_siglip_gap_896.pali2_28b_pt': _cfg( # hf_hub_id='google/paligemma2-28b-pt-896-jax', # hf_hub_filename='pt_27b_896.npz', # custom_load='hf', # input_size=(3, 896, 896), crop_pct=1.0, # num_classes=0), 'vit_so400m_patch14_siglip_378.webli_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 378, 378), crop_pct=1.0, crop_mode='squash', ), 'vit_so400m_patch14_siglip_gap_378.webli_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 378, 378), crop_pct=1.0, crop_mode='squash', ), 'vit_xsmall_patch16_clip_224.tinyclip_yfcc15m': _cfg( hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_medium_patch32_clip_224.tinyclip_laion400m': _cfg( hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_medium_patch16_clip_224.tinyclip_yfcc15m': _cfg( hf_hub_id='timm/', 
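    # For these CLIP-style cfgs, num_classes is the width of the contrastive image projection
    # (512 here), not an ImageNet class count; the projection is loaded as the classifier head.
    # Embedding sketch (assumes `timm`, `torch` and Hub access):
    #   import timm, torch
    #   m = timm.create_model('vit_xsmall_patch16_clip_224.tinyclip_yfcc15m', pretrained=True)
    #   emb = m(torch.randn(1, 3, 224, 224))  # (1, 512) image embedding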
license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_betwixt_patch32_clip_224.tinyclip_laion400m': _cfg( hf_hub_id='timm/', license='mit', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, num_classes=512), 'vit_wee_patch16_reg1_gap_256.sbb_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_pwee_patch16_reg1_gap_256.sbb_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_little_patch16_reg1_gap_256.sbb_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, input_size=(3, 256, 256), crop_pct=0.95), 'vit_little_patch16_reg4_gap_256.sbb_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_medium_patch16_reg1_gap_256.sbb_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_medium_patch16_reg4_gap_256.sbb_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_medium_patch16_reg4_gap_256.sbb_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, input_size=(3, 256, 256), crop_pct=0.95), 'vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, input_size=(3, 256, 256), crop_pct=0.95), 'vit_mediumd_patch16_reg4_gap_256.sbb_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, input_size=(3, 256, 256), crop_pct=0.95), 'vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0), 'vit_betwixt_patch16_reg1_gap_256.sbb_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_betwixt_patch16_reg4_gap_256.sbb_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, input_size=(3, 256, 256), crop_pct=0.95), 'vit_betwixt_patch16_reg4_gap_256.sbb_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, input_size=(3, 256, 256), crop_pct=0.95), 'vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0), 'vit_base_patch16_reg4_gap_256.untrained': _cfg( input_size=(3, 256, 256)), 'vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 256, 256), crop_pct=0.95), 'vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k': _cfg( hf_hub_id='timm/', num_classes=11821, input_size=(3, 256, 256), crop_pct=0.95), 'vit_so150m_patch16_reg4_gap_384.sbb_e250_in12k_ft_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 384, 384), crop_pct=1.0), 'vit_so150m_patch16_reg4_map_256.untrained': _cfg( input_size=(3, 256, 256)), 'vit_so150m2_patch16_reg1_gap_256.untrained': _cfg( input_size=(3, 256, 256), crop_pct=0.95), 'vit_intern300m_patch14_448.ogvl_dist': _cfg( hf_hub_id='timm/', mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, 
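    # The '.sbb*' tags above come from timm's "searching for better (ViT) baselines" runs; the
    # in12k checkpoints keep an 11821-class ImageNet-12k head that is typically replaced when
    # fine-tuning. Head-reset sketch (assumes `timm` and Hub access):
    #   import timm
    #   m = timm.create_model('vit_mediumd_patch16_reg4_gap_256.sbb_in12k', pretrained=True, num_classes=10)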
input_size=(3, 448, 448), crop_pct=1.0, num_classes=0, ), 'aimv2_large_patch14_224.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', crop_pct=1.0, num_classes=0), 'aimv2_large_patch14_224.apple_pt_dist': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', crop_pct=1.0, num_classes=0), 'aimv2_huge_patch14_224.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', crop_pct=1.0, num_classes=0), 'aimv2_1b_patch14_224.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', crop_pct=1.0, num_classes=0), 'aimv2_3b_patch14_224.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', crop_pct=1.0, num_classes=0), 'aimv2_large_patch14_336.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 336, 336), crop_pct=1.0, num_classes=0), 'aimv2_large_patch14_336.apple_pt_dist': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 336, 336), crop_pct=1.0, num_classes=0), 'aimv2_huge_patch14_336.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 336, 336), crop_pct=1.0, num_classes=0), 'aimv2_1b_patch14_336.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 336, 336), crop_pct=1.0, num_classes=0), 'aimv2_3b_patch14_336.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 336, 336), crop_pct=1.0, num_classes=0), 'aimv2_large_patch14_448.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'aimv2_huge_patch14_448.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'aimv2_1b_patch14_448.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'aimv2_3b_patch14_448.apple_pt': _cfg( hf_hub_id='timm/', mean=OPENAI_CLIP_MEAN, std=OPENAI_CLIP_STD, license='apple-ascl', input_size=(3, 448, 448), crop_pct=1.0, num_classes=0), 'test_vit.r160_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 160, 160), crop_pct=0.95), 'test_vit2.r160_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 160, 160), crop_pct=0.95), 'test_vit3.r160_in1k': _cfg( hf_hub_id='timm/', input_size=(3, 160, 160), crop_pct=0.95), 'test_vit4.r160_in1k': _cfg( input_size=(3, 160, 160), crop_pct=0.95), } _quick_gelu_cfgs = [n for n, c in default_cfgs.items() if c.get('notes', ()) and 'quickgelu' in c['notes'][0]] for n in _quick_gelu_cfgs: # generate quickgelu default cfgs based on contents of notes field c = copy.deepcopy(default_cfgs[n]) if c['hf_hub_id'] == 'timm/': c['hf_hub_id'] = 'timm/' + n # need to use non-quickgelu model name for hub id default_cfgs[n.replace('_clip_', '_clip_quickgelu_')] = c default_cfgs = generate_default_cfgs(default_cfgs) def _create_vision_transformer(variant: str, pretrained: bool = False, **kwargs) -> VisionTransformer: out_indices = kwargs.pop('out_indices', 3) if 'flexi' in variant: # FIXME Google FlexiViT pretrained models have a strong preference for bilinear patch / embed # 
interpolation, other pretrained models resize better w/ anti-aliased bicubic interpolation. _filter_fn = partial(checkpoint_filter_fn, interpolation='bilinear', antialias=False) else: _filter_fn = checkpoint_filter_fn # FIXME attn pool (currently only in siglip) params removed if pool disabled, is there a better soln? strict = kwargs.pop('pretrained_strict', True) if 'siglip' in variant and kwargs.get('global_pool', None) != 'map': strict = False return build_model_with_cfg( VisionTransformer, variant, pretrained, pretrained_filter_fn=_filter_fn, pretrained_strict=strict, feature_cfg=dict(out_indices=out_indices, feature_cls='getter'), **kwargs, ) @register_model def vit_tiny_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Tiny (Vit-Ti/16) """ model_args = dict(patch_size=16, embed_dim=192, depth=12, num_heads=3) model = _create_vision_transformer('vit_tiny_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_tiny_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Tiny (Vit-Ti/16) @ 384x384. """ model_args = dict(patch_size=16, embed_dim=192, depth=12, num_heads=3) model = _create_vision_transformer('vit_tiny_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch32_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/32) """ model_args = dict(patch_size=32, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch32_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch32_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/32) at 384x384. """ model_args = dict(patch_size=32, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch32_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/16) """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/16) """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch8_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small (ViT-S/8) """ model_args = dict(patch_size=8, embed_dim=384, depth=12, num_heads=6) model = _create_vision_transformer('vit_small_patch8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k, source https://github.com/google-research/vision_transformer. 
""" model_args = dict(patch_size=32, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch32_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=32, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch32_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch8_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/8) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=8, embed_dim=768, depth=12, num_heads=12) model = _create_vision_transformer('vit_base_patch8_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch32_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929). No pretrained weights. """ model_args = dict(patch_size=32, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch32_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch32_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=32, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch32_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer. 
""" model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929). ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer. """ model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch16_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16) model = _create_vision_transformer('vit_large_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929). """ model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16) model = _create_vision_transformer('vit_huge_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Giant (little-g) model (ViT-g/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 """ model_args = dict(patch_size=14, embed_dim=1408, mlp_ratio=48/11, depth=40, num_heads=16) model = _create_vision_transformer('vit_giant_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_gigantic_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Gigantic (big-G) model (ViT-G/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 """ model_args = dict(patch_size=14, embed_dim=1664, mlp_ratio=64/13, depth=48, num_heads=16) model = _create_vision_transformer( 'vit_gigantic_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_224_miil(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929). 
Weights taken from: https://github.com/Alibaba-MIIL/ImageNet21K """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False) model = _create_vision_transformer( 'vit_base_patch16_224_miil', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_gap_240(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Medium (ViT-M/16) w/o class token, w/ avg-pool @ 240x240 """ model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=False, global_pool='avg', qkv_bias=False, init_values=1e-6, fc_norm=False) model = _create_vision_transformer( 'vit_medium_patch16_gap_240', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Medium (ViT-M/16) w/o class token, w/ avg-pool @ 256x256 """ model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=False, global_pool='avg', qkv_bias=False, init_values=1e-6, fc_norm=False) model = _create_vision_transformer( 'vit_medium_patch16_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Medium (ViT-M/16) w/o class token, w/ avg-pool @ 384x384 """ model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, class_token=False, global_pool='avg', qkv_bias=False, init_values=1e-6, fc_norm=False) model = _create_vision_transformer( 'vit_medium_patch16_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_betwixt_patch16_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Betwixt (ViT-b/16) w/o class token, w/ avg-pool @ 256x256 """ model_args = dict( patch_size=16, embed_dim=640, depth=12, num_heads=10, class_token=False, global_pool='avg', qkv_bias=False, init_values=1e-6, fc_norm=False) model = _create_vision_transformer( 'vit_medium_patch16_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) w/o class token, w/ avg-pool @ 224x224 """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=16, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_base_patch16_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) w/ no class token, avg pool """ model_args = dict( patch_size=14, embed_dim=1280, depth=32, num_heads=16, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_huge_patch14_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch16_gap_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/16) w/ no class token, avg pool @ 448x448 """ model_args = dict( patch_size=16, embed_dim=1280, depth=32, num_heads=16, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_huge_patch16_gap_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch16_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Giant 
(little-gg) model (ViT-g/16) w/ no class token, avg pool """ model_args = dict( patch_size=16, embed_dim=1408, depth=40, num_heads=16, mlp_ratio=48/11, class_token=False, global_pool='avg', fc_norm=False) model = _create_vision_transformer( 'vit_giant_patch16_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_xsmall_patch16_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 8M model_args = dict(embed_dim=256, depth=10, num_heads=4, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_xsmall_patch16_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch32_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 40M model_args = dict( patch_size=32, embed_dim=512, depth=12, num_heads=8, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_medium_patch32_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 39M model_args = dict(embed_dim=512, depth=12, num_heads=8, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_medium_patch16_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_betwixt_patch32_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: # TinyCLIP 61M model_args = dict( patch_size=32, embed_dim=640, depth=12, num_heads=10, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_betwixt_patch32_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 224x224 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 256x256 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 384x384 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 448x448 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch32_clip_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/16 CLIP image tower """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, 
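    # Each entrypoint merges its defaults as dict(model_args, **kwargs), so caller kwargs override the
    # registered architecture values; e.g. the input resolution can be changed at create time and the
    # pretrained pos-embed is resized on load. Sketch (assumes `timm` and Hub access):
    #   import timm
    #   m = timm.create_model('vit_base_patch16_clip_224.openai', pretrained=True, img_size=336, num_classes=0)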
pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch16_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_clip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/16 CLIP image tower @ 384x384 """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch16_clip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_plus_clip_240(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16+) CLIP image tower @ 240x240 """ model_args = dict(patch_size=16, embed_dim=896, depth=12, num_heads=14, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_base_patch16_plus_clip_240', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_large_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower @ 336x336 """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_large_patch14_clip_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower. """ model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_huge_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower @ 336x336 """ model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_huge_patch14_clip_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_378(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower @ 378x378 """ model_args = dict(patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_huge_patch14_clip_378', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Giant (little-g) model (ViT-g/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 Pretrained weights from CLIP image tower. 
""" model_args = dict( patch_size=14, embed_dim=1408, mlp_ratio=48/11, depth=40, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_giant_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_gigantic_patch14_clip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-bigG model (ViT-G/14) from `Scaling Vision Transformers` - https://arxiv.org/abs/2106.04560 Pretrained weights from CLIP image tower. """ model_args = dict( patch_size=14, embed_dim=1664, mlp_ratio=64/13, depth=48, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm) model = _create_vision_transformer( 'vit_gigantic_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch32_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/32 CLIP image tower @ 224x224 """ model_args = dict( patch_size=32, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_base_patch32_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/16 CLIP image tower w/ QuickGELU act """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_base_patch16_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower w/ QuickGELU act """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_large_patch14_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_clip_quickgelu_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) CLIP image tower @ 336x336 w/ QuickGELU act """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_large_patch14_clip_quickgelu_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower w/ QuickGELU act. 
""" model_args = dict( patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_huge_patch14_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_clip_quickgelu_378(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) CLIP image tower @ 378x378 w/ QuickGELU act """ model_args = dict( patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_huge_patch14_clip_quickgelu_378', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_gigantic_patch14_clip_quickgelu_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-bigG model (ViT-G/14) w/ QuickGELU act """ model_args = dict( patch_size=14, embed_dim=1664, mlp_ratio=64/13, depth=48, num_heads=16, pre_norm=True, norm_layer=nn.LayerNorm, act_layer='quick_gelu') model = _create_vision_transformer( 'vit_gigantic_patch14_clip_quickgelu_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model # Experimental models below @register_model def vit_base_patch32_plus_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/32+) """ model_args = dict(patch_size=32, embed_dim=896, depth=12, num_heads=14, init_values=1e-5) model = _create_vision_transformer( 'vit_base_patch32_plus_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_plus_240(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16+) """ model_args = dict(patch_size=16, embed_dim=896, depth=12, num_heads=14, init_values=1e-5) model = _create_vision_transformer( 'vit_base_patch16_plus_240', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_rpn_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base (ViT-B/16) w/ residual post-norm """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False, init_values=1e-5, class_token=False, block_fn=ResPostBlock, global_pool='avg') model = _create_vision_transformer( 'vit_base_patch16_rpn_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_36x1_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base w/ LayerScale + 36 x 1 (36 block serial) config. Experimental, may remove. Based on `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 Paper focuses on 24x2 + 48x1 for 'Small' width but those are extremely slow. """ model_args = dict(patch_size=16, embed_dim=384, depth=36, num_heads=6, init_values=1e-5) model = _create_vision_transformer( 'vit_small_patch16_36x1_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch16_18x2_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Small w/ LayerScale + 18 x 2 (36 block parallel) config. Experimental, may remove. Based on `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 Paper focuses on 24x2 + 48x1 for 'Small' width but those are extremely slow. 
""" model_args = dict( patch_size=16, embed_dim=384, depth=18, num_heads=6, init_values=1e-5, block_fn=ParallelThingsBlock) model = _create_vision_transformer( 'vit_small_patch16_18x2_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_18x2_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Base w/ LayerScale + 18 x 2 (36 block parallel) config. Experimental, may remove. Based on `Three things everyone should know about Vision Transformers` - https://arxiv.org/abs/2203.09795 """ model_args = dict( patch_size=16, embed_dim=768, depth=18, num_heads=12, init_values=1e-5, block_fn=ParallelThingsBlock) model = _create_vision_transformer( 'vit_base_patch16_18x2_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def eva_large_patch14_196(pretrained: bool = False, **kwargs) -> VisionTransformer: """ EVA-large model https://arxiv.org/abs/2211.07636 /via MAE MIM pretrain""" model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, global_pool='avg') model = _create_vision_transformer( 'eva_large_patch14_196', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def eva_large_patch14_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ EVA-large model https://arxiv.org/abs/2211.07636 via MAE MIM pretrain""" model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, global_pool='avg') model = _create_vision_transformer('eva_large_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def flexivit_small(pretrained: bool = False, **kwargs) -> VisionTransformer: """ FlexiViT-Small """ model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6, no_embed_class=True) model = _create_vision_transformer('flexivit_small', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def flexivit_base(pretrained: bool = False, **kwargs) -> VisionTransformer: """ FlexiViT-Base """ model_args = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, no_embed_class=True) model = _create_vision_transformer('flexivit_base', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def flexivit_large(pretrained: bool = False, **kwargs) -> VisionTransformer: """ FlexiViT-Large """ model_args = dict(patch_size=16, embed_dim=1024, depth=24, num_heads=16, no_embed_class=True) model = _create_vision_transformer('flexivit_large', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_xp_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) w/ parallel blocks and qk norm enabled. """ model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, pre_norm=True, no_embed_class=True, norm_layer=RmsNorm, block_fn=ParallelScalingBlock, qkv_bias=False, qk_norm=True, ) model = _create_vision_transformer( 'vit_base_patch16_xp_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_xp_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Large model (ViT-L/14) w/ parallel blocks and qk norm enabled. 
""" model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=16, pre_norm=True, no_embed_class=True, norm_layer=RmsNorm, block_fn=ParallelScalingBlock, qkv_bias=False, qk_norm=True, ) model = _create_vision_transformer( 'vit_large_patch14_xp_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_huge_patch14_xp_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-Huge model (ViT-H/14) w/ parallel blocks and qk norm enabled. """ model_args = dict( patch_size=14, embed_dim=1280, depth=32, num_heads=16, pre_norm=True, no_embed_class=True, norm_layer=RmsNorm, block_fn=ParallelScalingBlock, qkv_bias=False, qk_norm=True, ) model = _create_vision_transformer( 'vit_huge_patch14_xp_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-S/14 for DINOv2 """ model_args = dict(patch_size=14, embed_dim=384, depth=12, num_heads=6, init_values=1e-5) model = _create_vision_transformer( 'vit_small_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/14 for DINOv2 """ model_args = dict(patch_size=14, embed_dim=768, depth=12, num_heads=12, init_values=1e-5) model = _create_vision_transformer( 'vit_base_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-L/14 for DINOv2 """ model_args = dict(patch_size=14, embed_dim=1024, depth=24, num_heads=16, init_values=1e-5) model = _create_vision_transformer( 'vit_large_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-G/14 for DINOv2 """ # The hidden_features of SwiGLU is calculated by: # hidden_features = (int(hidden_features * 2 / 3) + 7) // 8 * 8 # When embed_dim=1536, hidden_features=4096 # With SwiGLUPacked, we need to set hidden_features = 2 * 4096 = 8192 model_args = dict( patch_size=14, embed_dim=1536, depth=40, num_heads=24, init_values=1e-5, mlp_ratio=2.66667 * 2, mlp_layer=SwiGLUPacked, act_layer=nn.SiLU ) model = _create_vision_transformer( 'vit_giant_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_small_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-S/14 for DINOv2 w/ 4 registers """ model_args = dict( patch_size=14, embed_dim=384, depth=12, num_heads=6, init_values=1e-5, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_small_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-B/14 for DINOv2 w/ 4 registers """ model_args = dict( patch_size=14, embed_dim=768, depth=12, num_heads=12, init_values=1e-5, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_base_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-L/14 for DINOv2 w/ 4 registers """ model_args = dict( patch_size=14, embed_dim=1024, 
depth=24, num_heads=16, init_values=1e-5, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_large_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_giant_patch14_reg4_dinov2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT-G/14 for DINOv2 """ # The hidden_features of SwiGLU is calculated by: # hidden_features = (int(hidden_features * 2 / 3) + 7) // 8 * 8 # When embed_dim=1536, hidden_features=4096 # With SwiGLUPacked, we need to set hidden_features = 2 * 4096 = 8192 model_args = dict( patch_size=14, embed_dim=1536, depth=40, num_heads=24, init_values=1e-5, mlp_ratio=2.66667 * 2, mlp_layer=SwiGLUPacked, act_layer=nn.SiLU, reg_tokens=4, no_embed_class=True, ) model = _create_vision_transformer( 'vit_giant_patch14_reg4_dinov2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_512(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_base_patch16_siglip_512', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_siglip_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=1024, depth=24, num_heads=16, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_large_patch16_siglip_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_siglip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=1024, depth=24, num_heads=16, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_large_patch16_siglip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_224(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch16_siglip_256(pretrained: bool = False, **kwargs) -> VisionTransformer: # this is a corrected variant of the 384 with 
a res properly divisible by patch size (no padding/truncation) model_args = dict( patch_size=16, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_so400m_patch16_siglip_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_378(pretrained: bool = False, **kwargs) -> VisionTransformer: # this is a corrected variant of the 384 with a res properly divisible by patch size (no padding/truncation) model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_378', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='map', ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_base_patch16_siglip_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_base_patch16_siglip_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_base_patch16_siglip_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_siglip_gap_512(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_base_patch16_siglip_gap_512', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_large_patch16_siglip_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=16, embed_dim=1024, depth=24, num_heads=16, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_large_patch16_siglip_gap_256', pretrained=pretrained, **dict(model_args, 
**kwargs)) return model @register_model def vit_large_patch16_siglip_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=16, embed_dim=1024, depth=24, num_heads=16, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_large_patch16_siglip_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_gap_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_gap_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch16_siglip_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=16, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so400m_patch16_siglip_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_gap_378(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_gap_378', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_gap_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_gap_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so400m_patch14_siglip_gap_896(pretrained: bool = False, **kwargs) -> VisionTransformer: """ A SigLIP variant of ViT with global average pooling (GAP) instead of attention pooling (MAP).""" model_args = dict( patch_size=14, embed_dim=1152, depth=27, num_heads=16, mlp_ratio=3.7362, class_token=False, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so400m_patch14_siglip_gap_896', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def 
vit_wee_patch16_reg1_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=256, depth=14, num_heads=4, init_values=1e-5, mlp_ratio=5, class_token=False, no_embed_class=True, reg_tokens=1, global_pool='avg', ) model = _create_vision_transformer( 'vit_wee_patch16_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_pwee_patch16_reg1_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=256, depth=16, num_heads=4, init_values=1e-5, mlp_ratio=5, class_token=False, no_embed_class=True, reg_tokens=1, global_pool='avg', block_fn=ParallelScalingBlock, ) model = _create_vision_transformer( 'vit_pwee_patch16_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_little_patch16_reg1_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=320, depth=14, num_heads=5, init_values=1e-5, mlp_ratio=5.6, class_token=False, no_embed_class=True, reg_tokens=1, global_pool='avg', ) model = _create_vision_transformer( 'vit_little_patch16_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_little_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=320, depth=14, num_heads=5, init_values=1e-5, mlp_ratio=5.6, class_token=False, no_embed_class=True, reg_tokens=4, global_pool='avg', ) model = _create_vision_transformer( 'vit_little_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_reg1_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, init_values=1e-5, class_token=False, no_embed_class=True, reg_tokens=1, global_pool='avg', ) model = _create_vision_transformer( 'vit_medium_patch16_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_medium_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=512, depth=12, num_heads=8, init_values=1e-5, class_token=False, no_embed_class=True, reg_tokens=4, global_pool='avg', ) model = _create_vision_transformer( 'vit_medium_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_mediumd_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=512, depth=20, num_heads=8, init_values=1e-5, class_token=False, no_embed_class=True, reg_tokens=4, global_pool='avg', ) model = _create_vision_transformer( 'vit_mediumd_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_mediumd_patch16_reg4_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=512, depth=20, num_heads=8, init_values=1e-5, class_token=False, no_embed_class=True, reg_tokens=4, global_pool='avg', ) model = _create_vision_transformer( 'vit_mediumd_patch16_reg4_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_betwixt_patch16_reg1_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=640, depth=12, num_heads=10, 
init_values=1e-5, class_token=False, no_embed_class=True, reg_tokens=1, global_pool='avg', ) model = _create_vision_transformer( 'vit_betwixt_patch16_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_betwixt_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=640, depth=12, num_heads=10, init_values=1e-5, class_token=False, no_embed_class=True, reg_tokens=4, global_pool='avg', ) model = _create_vision_transformer( 'vit_betwixt_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_betwixt_patch16_reg4_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=640, depth=12, num_heads=10, init_values=1e-5, class_token=False, no_embed_class=True, reg_tokens=4, global_pool='avg', ) model = _create_vision_transformer( 'vit_betwixt_patch16_reg4_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_base_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, class_token=False, no_embed_class=True, global_pool='avg', reg_tokens=4, ) model = _create_vision_transformer( 'vit_base_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so150m_patch16_reg4_map_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ SO150M (shape optimized, but diff than paper def, optimized for GPU) """ model_args = dict( patch_size=16, embed_dim=896, depth=18, num_heads=14, mlp_ratio=2.572, class_token=False, reg_tokens=4, global_pool='map', ) model = _create_vision_transformer( 'vit_so150m_patch16_reg4_map_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so150m_patch16_reg4_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ SO150M (shape optimized, but diff than paper def, optimized for GPU) """ model_args = dict( patch_size=16, embed_dim=896, depth=18, num_heads=14, mlp_ratio=2.572, class_token=False, reg_tokens=4, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so150m_patch16_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so150m_patch16_reg4_gap_384(pretrained: bool = False, **kwargs) -> VisionTransformer: """ SO150M (shape optimized, but diff than paper def, optimized for GPU) """ model_args = dict( patch_size=16, embed_dim=896, depth=18, num_heads=14, mlp_ratio=2.572, class_token=False, reg_tokens=4, global_pool='avg', fc_norm=False, ) model = _create_vision_transformer( 'vit_so150m_patch16_reg4_gap_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_so150m2_patch16_reg1_gap_256(pretrained: bool = False, **kwargs) -> VisionTransformer: """ SO150M v2 (shape optimized, but diff than paper def, optimized for GPU) """ model_args = dict( patch_size=16, embed_dim=832, depth=21, num_heads=13, mlp_ratio=34/13, init_values=1e-5, qkv_bias=False, class_token=False, reg_tokens=1, global_pool='avg', ) model = _create_vision_transformer( 'vit_so150m2_patch16_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vit_intern300m_patch14_448(pretrained: bool = False, **kwargs) -> VisionTransformer: model_args = dict( patch_size=14, embed_dim=1024, 
depth=24, num_heads=16, init_values=0.1, final_norm=False, dynamic_img_size=True, ) model = _create_vision_transformer( 'vit_intern300m_patch14_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_large_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Large AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=8, class_token=False, fc_norm=False, mlp_ratio=2.75, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_large_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_huge_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Huge AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=1536, depth=24, num_heads=12, class_token=False, fc_norm=False, mlp_ratio=2.6667, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_huge_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_1b_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT 1B AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=2048, depth=24, num_heads=16, class_token=False, fc_norm=False, mlp_ratio=2.75, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_1b_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_3b_patch14_224(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT 3B AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=3072, depth=24, num_heads=24, class_token=False, fc_norm=False, mlp_ratio=2.6667, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_3b_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_large_patch14_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Large AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=8, class_token=False, fc_norm=False, mlp_ratio=2.75, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_large_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_huge_patch14_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Huge AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=1536, depth=24, num_heads=12, class_token=False, fc_norm=False, mlp_ratio=2.6667, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_huge_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs)) return 
model @register_model def aimv2_1b_patch14_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT 1B AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=2048, depth=24, num_heads=16, class_token=False, fc_norm=False, mlp_ratio=2.75, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_1b_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_3b_patch14_336(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT 3B AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=3072, depth=24, num_heads=24, class_token=False, fc_norm=False, mlp_ratio=2.6667, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_3b_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_large_patch14_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Large AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=1024, depth=24, num_heads=8, class_token=False, fc_norm=False, mlp_ratio=2.75, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_large_patch14_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_huge_patch14_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Huge AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=1536, depth=24, num_heads=12, class_token=False, fc_norm=False, mlp_ratio=2.6667, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_huge_patch14_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_1b_patch14_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT 1B AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=2048, depth=24, num_heads=16, class_token=False, fc_norm=False, mlp_ratio=2.75, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_1b_patch14_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def aimv2_3b_patch14_448(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT 3B AIM-v2 model """ model_args = dict( patch_size=14, embed_dim=3072, depth=24, num_heads=24, class_token=False, fc_norm=False, mlp_ratio=2.6667, global_pool='avg', qkv_bias=False, proj_bias=False, act_layer='silu', norm_layer=partial(RmsNorm, eps=1e-5), embed_norm_layer=partial(RmsNorm, eps=1e-5), mlp_layer=SwiGLU, ) model = _create_vision_transformer( 'aimv2_3b_patch14_448', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def test_vit(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Test """ model_args = dict(patch_size=16, embed_dim=64, depth=6, num_heads=2, mlp_ratio=3, dynamic_img_size=True) model = 
_create_vision_transformer('test_vit', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def test_vit2(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Test """ model_args = dict( patch_size=16, embed_dim=64, depth=8, num_heads=2, mlp_ratio=3, class_token=False, reg_tokens=1, global_pool='avg', init_values=1e-5, dynamic_img_size=True) model = _create_vision_transformer('test_vit2', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def test_vit3(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Test """ model_args = dict( patch_size=16, embed_dim=96, depth=9, num_heads=3, mlp_ratio=2, class_token=False, reg_tokens=1, global_pool='map', init_values=1e-5) model = _create_vision_transformer('test_vit3', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def test_vit4(pretrained: bool = False, **kwargs) -> VisionTransformer: """ ViT Test """ model_args = dict( patch_size=16, embed_dim=96, depth=9, num_heads=3, mlp_ratio=3, class_token=False, reg_tokens=1, global_pool='avg', init_values=1e-5, dynamic_img_size=True, norm_layer='rmsnorm', ) model = _create_vision_transformer('test_vit4', pretrained=pretrained, **dict(model_args, **kwargs)) return model register_model_deprecations(__name__, { 'vit_tiny_patch16_224_in21k': 'vit_tiny_patch16_224.augreg_in21k', 'vit_small_patch32_224_in21k': 'vit_small_patch32_224.augreg_in21k', 'vit_small_patch16_224_in21k': 'vit_small_patch16_224.augreg_in21k', 'vit_base_patch32_224_in21k': 'vit_base_patch32_224.augreg_in21k', 'vit_base_patch16_224_in21k': 'vit_base_patch16_224.augreg_in21k', 'vit_base_patch8_224_in21k': 'vit_base_patch8_224.augreg_in21k', 'vit_large_patch32_224_in21k': 'vit_large_patch32_224.orig_in21k', 'vit_large_patch16_224_in21k': 'vit_large_patch16_224.augreg_in21k', 'vit_huge_patch14_224_in21k': 'vit_huge_patch14_224.orig_in21k', 'vit_base_patch32_224_sam': 'vit_base_patch32_224.sam', 'vit_base_patch16_224_sam': 'vit_base_patch16_224.sam', 'vit_small_patch16_224_dino': 'vit_small_patch16_224.dino', 'vit_small_patch8_224_dino': 'vit_small_patch8_224.dino', 'vit_base_patch16_224_dino': 'vit_base_patch16_224.dino', 'vit_base_patch8_224_dino': 'vit_base_patch8_224.dino', 'vit_base_patch16_224_miil_in21k': 'vit_base_patch16_224_miil.in21k', 'vit_base_patch32_224_clip_laion2b': 'vit_base_patch32_clip_224.laion2b', 'vit_large_patch14_224_clip_laion2b': 'vit_large_patch14_clip_224.laion2b', 'vit_huge_patch14_224_clip_laion2b': 'vit_huge_patch14_clip_224.laion2b', 'vit_giant_patch14_224_clip_laion2b': 'vit_giant_patch14_clip_224.laion2b', })
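# --- Usage sketch (editor addition, not part of the original timm source) ---
# Illustrates how the variants registered above are typically instantiated through the
# `timm.create_model` factory. The model name comes from the registrations in this file;
# the input resolution and `num_classes=0` (feature extraction) are illustrative assumptions.
if __name__ == '__main__':
    import timm
    import torch

    model = timm.create_model('vit_base_patch16_siglip_gap_224', pretrained=False, num_classes=0)
    model.eval()
    with torch.no_grad():
        features = model(torch.randn(1, 3, 224, 224))  # pooled features of shape (1, embed_dim)
    print(features.shape)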
pytorch-image-models/timm/models/vision_transformer.py/0
{ "file_path": "pytorch-image-models/timm/models/vision_transformer.py", "repo_id": "pytorch-image-models", "token_count": 79510 }
""" Adafactor (Big Vision variant) for PyTorch Adapted from the implementation in big vision: https://github.com/google-research/big_vision Described in 'Scaling Vision Transformers': https://arxiv.org/abs/2106.04560 Adaptation and PyTorch modifications by Ross Wightman """ from typing import List, Optional, Tuple, Union import torch from torch import Tensor from torch.optim import Optimizer from ._types import ParamsT def _get_scalar_dtype(): """Get the scalar dtype that the optimizer uses for state""" return torch.float64 def _factored_dims( shape: Tuple[int, ...], factored: bool, min_dim_size_to_factor: int ) -> Optional[tuple[int, int]]: """Whether to use a factored second moment estimator. This function returns a tuple with the two largest axes to reduce over. If no two dimensions have size >= min_dim_size_to_factor, return None. Args: shape: an input shape factored: whether to use factored second-moment estimator for > 2d vars. min_dim_size_to_factor: only factor accumulator if two array dimensions have at least this size. Returns: None or a tuple of ints """ if not factored or len(shape) < 2: return None sorted_dims = sorted(((x, i) for i, x in enumerate(shape))) if shape[sorted_dims[-2][1]] < min_dim_size_to_factor: return None return int(sorted_dims[-2][1]), int(sorted_dims[-1][1]) class AdafactorBigVision(Optimizer): """ PyTorch implementation of BigVision's Adafactor variant with both single and multi tensor implementations. Adapted from https://github.com/google-research/big_vision by Ross Wightman """ def __init__( self, params: ParamsT, lr: float = 1.0, min_dim_size_to_factor: int = 16, decay_rate: float = 0.8, decay_offset: int = 0, beta2_cap: float = 0.999, momentum: Optional[float] = 0.9, momentum_dtype: Union[str, torch.dtype] = torch.bfloat16, eps: Optional[float] = None, weight_decay: float = 0.0, clipping_threshold: Optional[float] = None, unscaled_wd: bool = False, caution: bool = False, *, foreach: Optional[bool] = False, ): if isinstance(momentum_dtype, str): if momentum_dtype == 'float16': momentum_dtype = torch.float16 elif momentum_dtype == 'bfloat16': momentum_dtype = torch.bfloat16 else: assert momentum_dtype == 'float32', f'{momentum_dtype} dtype not supported' momentum_dtype = torch.float32 # FIXME try to check if momentum dtype is appropriate for device? Torch API not great for this. defaults = dict( lr=lr, min_dim_size_to_factor=min_dim_size_to_factor, decay_rate=decay_rate, decay_offset=decay_offset, beta2_cap=beta2_cap, momentum=momentum, momentum_dtype=momentum_dtype, eps=eps, weight_decay=weight_decay, clipping_threshold=clipping_threshold, unscaled_wd=unscaled_wd, caution=caution, foreach=foreach, ) super().__init__(params, defaults) def __setstate__(self, state): super().__setstate__(state) for group in self.param_groups: group.setdefault('caution', False) group.setdefault('foreach', None) for p in group['params']: p_state = self.state.get(p, {}) if len(p_state) != 0 and not torch.is_tensor(p_state['step']): p_state['step'] = torch.tensor(float(p_state['step']), dtype=_get_scalar_dtype()) if 'exp_avg' in p_state and torch.is_tensor(p_state['exp_avg']): # FIXME this is a bit of a hack, optimizer.load_state_dict appears to upcast # the momentum to float32 (it's half precision in the state_dict), need to # look into this further. Better to override _process_value_according_to_param_policy? 
p_state['exp_avg'] = p_state['exp_avg'].to(dtype=self.defaults['momentum_dtype']) @torch.no_grad() def step(self, closure=None): loss = None if closure is not None: with torch.enable_grad(): loss = closure() for group in self.param_groups: params_with_grad = [] grads = [] exp_avg_sq_rs = [] exp_avg_sq_cs = [] exp_avg_sqs = [] state_steps = [] exp_avgs = [] # For momentum for p in group['params']: if p.grad is None: continue if p.grad.is_sparse: raise RuntimeError("Sparse gradients not supported") params_with_grad.append(p) grads.append(p.grad) state = self.state[p] if len(state) == 0: # NOTE step on CPU, probably need some more though to make capturable state['step'] = torch.tensor(0.0, dtype=_get_scalar_dtype()) shape = p.grad.shape factored_dims = _factored_dims( shape, factored=True, min_dim_size_to_factor=self.defaults['min_dim_size_to_factor'] ) if factored_dims is not None: dc, dr = factored_dims row_shape = list(p.grad.shape) row_shape[dr] = 1 col_shape = list(p.grad.shape) col_shape[dc] = 1 state['exp_avg_sq_r'] = p.grad.new_zeros(row_shape) state['exp_avg_sq_c'] = p.grad.new_zeros(col_shape) else: state['exp_avg_sq'] = torch.zeros_like(p.grad, memory_format=torch.preserve_format) if self.defaults['momentum'] is not None: state['exp_avg'] = torch.zeros_like(p.grad, dtype=self.defaults['momentum_dtype']) state_steps.append(state['step']) exp_avg_sq_rs.append(state.get('exp_avg_sq_r', None)) exp_avg_sq_cs.append(state.get('exp_avg_sq_c', None)) exp_avg_sqs.append(state.get('exp_avg_sq', None)) exp_avgs.append(state.get('exp_avg', None)) if group['foreach']: func = _multi_tensor_adafactor else: func = _single_tensor_adafactor func( params=params_with_grad, grads=grads, exp_avg_sq_rs=exp_avg_sq_rs, exp_avg_sq_cs=exp_avg_sq_cs, exp_avg_sqs=exp_avg_sqs, exp_avgs=exp_avgs, state_steps=state_steps, beta2_decay=group['decay_rate'], beta2_cap=group['beta2_cap'], min_dim_size_to_factor=group['min_dim_size_to_factor'], eps=group['eps'], lr=group['lr'], weight_decay=group['weight_decay'], momentum=group['momentum'], momentum_dtype=group['momentum_dtype'], clipping_threshold=group['clipping_threshold'], unscaled_wd=group['unscaled_wd'], caution=group['caution'], ) return loss def _single_tensor_adafactor( params: List[Tensor], grads: List[Tensor], exp_avg_sq_rs: List[Optional[Tensor]], exp_avg_sq_cs: List[Optional[Tensor]], exp_avg_sqs: List[Optional[Tensor]], exp_avgs: List[Optional[Tensor]], state_steps: List[Tensor], *, beta2_decay: float, beta2_cap: float, min_dim_size_to_factor: int, eps: float, lr: float, weight_decay: float, momentum: Optional[float], momentum_dtype: Union[str, torch.dtype], clipping_threshold: Optional[float], unscaled_wd: bool, caution: bool, ): for i, param in enumerate(params): grad = grads[i] exp_avg_sq_r = exp_avg_sq_rs[i] exp_avg_sq_c = exp_avg_sq_cs[i] exp_avg_sq = exp_avg_sqs[i] exp_avg = exp_avgs[i] step_t = state_steps[i] if eps is None: # default eps for avoiding div by zero, diff from float type eps eps = 1e-7 if grad.dtype == torch.float16 else 1e-30 # Update step step_t += 1 beta2_t = min(beta2_cap, 1.0 - float(step_t) ** (-beta2_decay)) one_minus_beta2_t = 1 - beta2_t grad_sqr = torch.square(grad) + eps # NOTE application of eps (epsilon1) mirrors the optax/big vision/t5x approach if exp_avg_sq is None: # factorized second moment dc, dr = _factored_dims(grad.shape, True, min_dim_size_to_factor=min_dim_size_to_factor) exp_avg_sq_r.lerp_(grad_sqr.mean(dim=dr, keepdim=True), one_minus_beta2_t) exp_avg_sq_c.lerp_(grad_sqr.mean(dim=dc, keepdim=True), 
one_minus_beta2_t) reduce_dc = dc - 1 if dc > dr else dc row_col_mean = exp_avg_sq_r.mean(dim=reduce_dc, keepdim=True) row_factor = (exp_avg_sq_r / row_col_mean).rsqrt() col_factor = exp_avg_sq_c.rsqrt() update = grad * row_factor * col_factor else: # non-factorized second moment assert exp_avg_sq_r is None and exp_avg_sq_c is None exp_avg_sq.lerp_(grad_sqr, one_minus_beta2_t) update = grad * exp_avg_sq.rsqrt() # Clip by RMS value if clipping_threshold is not None: denom = (update.norm(2) / ((update.numel() ** 0.5) / clipping_threshold)).clamp_(max=1.0) update.div_(denom) # Apply momentum (in different dtype) if momentum is not None and exp_avg is not None: if momentum_dtype != grad.dtype: exp_avg.lerp_(update.to(momentum_dtype), 1 - momentum) # ema update = exp_avg.to(grad.dtype) else: exp_avg.lerp_(update, 1 - momentum) # ema update = exp_avg.clone() if caution: # apply caution as per 'Cautious Optimizers': https://arxiv.org/abs/2411.16085 mask = (update * grad > 0).to(grad.dtype) mask.div_(mask.mean().clamp_(min=1e-3)) update.mul_(mask) # Scale by learning rate update.mul_(lr) # Perform weight decay if weight_decay != 0: if unscaled_wd: # match big vision impl, 'fully decoupled' decay w/o LR scaling param.mul_(1. - weight_decay) else: # match typical pytorch behaviour for decoupled decay, eg adamw where wd is scaled by LR param.mul_(1. - lr * weight_decay) # Update parameters param.add_(update, alpha=-1.0) def _multi_tensor_adafactor( params: List[Tensor], grads: List[Tensor], exp_avg_sq_rs: List[Optional[Tensor]], exp_avg_sq_cs: List[Optional[Tensor]], exp_avg_sqs: List[Optional[Tensor]], exp_avgs: List[Optional[Tensor]], state_steps: List[Tensor], *, beta2_decay: float, beta2_cap: float, min_dim_size_to_factor: int, eps: float, lr: float, weight_decay: float, momentum: Optional[float], momentum_dtype: Union[str, torch.dtype], clipping_threshold: Optional[float], unscaled_wd: bool, caution: bool, ): # FIXME TODO assert False, 'multi-tensor fn (foreach=True) not implemented yet'
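# --- Usage sketch (editor addition, not part of the original big vision port) ---
# Minimal illustration of constructing and stepping the optimizer defined above. The toy
# model and hyper-parameter values are assumptions for illustration; only the
# AdafactorBigVision signature comes from this file.
if __name__ == '__main__':
    import torch.nn as nn

    toy_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    optimizer = AdafactorBigVision(
        toy_model.parameters(),
        lr=1.0,           # big vision style: lr scales the adaptive update directly
        momentum=0.9,     # momentum buffer kept in bfloat16 by default to save memory
        weight_decay=1e-4,
        foreach=False,    # multi-tensor path is not implemented yet (see assert above)
    )
    loss = toy_model(torch.randn(8, 16)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()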
pytorch-image-models/timm/optim/adafactor_bv.py/0
{ "file_path": "pytorch-image-models/timm/optim/adafactor_bv.py", "repo_id": "pytorch-image-models", "token_count": 6213 }
""" Nvidia NovoGrad Optimizer. Original impl by Nvidia from Jasper example: - https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechRecognition/Jasper Paper: `Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks` - https://arxiv.org/abs/1905.11286 """ import torch from torch.optim.optimizer import Optimizer import math class NvNovoGrad(Optimizer): """ Implements Novograd algorithm. Args: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): learning rate (default: 1e-3) betas (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square (default: (0.95, 0.98)) eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-8) weight_decay (float, optional): weight decay (L2 penalty) (default: 0) grad_averaging: gradient averaging amsgrad (boolean, optional): whether to use the AMSGrad variant of this algorithm from the paper `On the Convergence of Adam and Beyond`_ (default: False) """ def __init__( self, params, lr=1e-3, betas=(0.95, 0.98), eps=1e-8, weight_decay=0, grad_averaging=False, amsgrad=False, ): if not 0.0 <= lr: raise ValueError("Invalid learning rate: {}".format(lr)) if not 0.0 <= eps: raise ValueError("Invalid epsilon value: {}".format(eps)) if not 0.0 <= betas[0] < 1.0: raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) if not 0.0 <= betas[1] < 1.0: raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) defaults = dict( lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, grad_averaging=grad_averaging, amsgrad=amsgrad, ) super(NvNovoGrad, self).__init__(params, defaults) def __setstate__(self, state): super(NvNovoGrad, self).__setstate__(state) for group in self.param_groups: group.setdefault('amsgrad', False) @torch.no_grad() def step(self, closure=None): """Performs a single optimization step. Arguments: closure (callable, optional): A closure that reevaluates the model and returns the loss. """ loss = None if closure is not None: with torch.enable_grad(): loss = closure() for group in self.param_groups: for p in group['params']: if p.grad is None: continue grad = p.grad if grad.is_sparse: raise RuntimeError('Sparse gradients are not supported.') amsgrad = group['amsgrad'] state = self.state[p] # State initialization if len(state) == 0: state['step'] = 0 # Exponential moving average of gradient values state['exp_avg'] = torch.zeros_like(p) # Exponential moving average of squared gradient values state['exp_avg_sq'] = torch.zeros([]).to(state['exp_avg'].device) if amsgrad: # Maintains max of all exp. moving avg. of sq. grad. values state['max_exp_avg_sq'] = torch.zeros([]).to(state['exp_avg'].device) exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] if amsgrad: max_exp_avg_sq = state['max_exp_avg_sq'] beta1, beta2 = group['betas'] state['step'] += 1 norm = torch.sum(torch.pow(grad, 2)) if exp_avg_sq == 0: exp_avg_sq.copy_(norm) else: exp_avg_sq.mul_(beta2).add_(norm, alpha=1 - beta2) if amsgrad: # Maintains the maximum of all 2nd moment running avg. till now torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq) # Use the max. for normalizing running avg. 
of gradient denom = max_exp_avg_sq.sqrt().add_(group['eps']) else: denom = exp_avg_sq.sqrt().add_(group['eps']) grad.div_(denom) if group['weight_decay'] != 0: grad.add_(p, alpha=group['weight_decay']) if group['grad_averaging']: grad.mul_(1 - beta1) exp_avg.mul_(beta1).add_(grad) p.add_(exp_avg, alpha=-group['lr']) return loss
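# --- Usage sketch (editor addition) ---
# How NvNovoGrad is typically wired into a training step. The toy model and
# hyper-parameters below are illustrative assumptions; only the constructor signature
# comes from this file.
if __name__ == '__main__':
    import torch.nn as nn

    model = nn.Linear(10, 2)
    optimizer = NvNovoGrad(model.parameters(), lr=1e-3, betas=(0.95, 0.98), weight_decay=1e-4)
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()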
pytorch-image-models/timm/optim/nvnovograd.py/0
{ "file_path": "pytorch-image-models/timm/optim/nvnovograd.py", "repo_id": "pytorch-image-models", "token_count": 2509 }
from .agc import adaptive_clip_grad
from .attention_extract import AttentionExtract
from .checkpoint_saver import CheckpointSaver
from .clip_grad import dispatch_clip_grad
from .cuda import ApexScaler, NativeScaler
from .decay_batch import decay_batch_step, check_batch_size_retry
from .distributed import distribute_bn, reduce_tensor, init_distributed_device,\
    world_info_from_env, is_distributed_env, is_primary
from .jit import set_jit_legacy, set_jit_fuser
from .log import setup_default_logging, FormatterNoInfo
from .metrics import AverageMeter, accuracy
from .misc import natural_key, add_bool_arg, ParseKwargs
from .model import unwrap_model, get_state_dict, freeze, unfreeze, reparameterize_model
from .model_ema import ModelEma, ModelEmaV2, ModelEmaV3
from .random import random_seed
from .summary import update_summary, get_outdir
pytorch-image-models/timm/utils/__init__.py/0
{ "file_path": "pytorch-image-models/timm/utils/__init__.py", "repo_id": "pytorch-image-models", "token_count": 264 }
""" Summary utilities Hacked together by / Copyright 2020 Ross Wightman """ import csv import os from collections import OrderedDict try: import wandb except ImportError: pass def get_outdir(path, *paths, inc=False): outdir = os.path.join(path, *paths) if not os.path.exists(outdir): os.makedirs(outdir) elif inc: count = 1 outdir_inc = outdir + '-' + str(count) while os.path.exists(outdir_inc): count = count + 1 outdir_inc = outdir + '-' + str(count) assert count < 100 outdir = outdir_inc os.makedirs(outdir) return outdir def update_summary( epoch, train_metrics, eval_metrics, filename, lr=None, write_header=False, log_wandb=False, ): rowd = OrderedDict(epoch=epoch) rowd.update([('train_' + k, v) for k, v in train_metrics.items()]) if eval_metrics: rowd.update([('eval_' + k, v) for k, v in eval_metrics.items()]) if lr is not None: rowd['lr'] = lr if log_wandb: wandb.log(rowd) with open(filename, mode='a') as cf: dw = csv.DictWriter(cf, fieldnames=rowd.keys()) if write_header: # first iteration (epoch == 1 can't be used) dw.writeheader() dw.writerow(rowd)
pytorch-image-models/timm/utils/summary.py/0
{ "file_path": "pytorch-image-models/timm/utils/summary.py", "repo_id": "pytorch-image-models", "token_count": 633 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Secure code execution

[[open-in-colab]]

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

### Code agents

[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code works much better than the current industry standard for tool calling, which consists of various flavors of "writing actions as a JSON of tool names and arguments to use".

Why is code better? Well, because we crafted our programming languages specifically to be great at expressing actions performed by a computer. If JSON snippets were a better way, this package would have been written in JSON snippets and the devil would be laughing at us.

Code is simply a better way to express actions on a computer. It has better:
- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to express anything you can have a computer do.
- **Representation in LLM training corpus:** why not leverage the fact that plenty of high-quality code actions are already included in LLM training corpora?

This is illustrated in the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">

This is why we put emphasis on proposing code agents, in this case Python agents, which meant putting more effort into building secure Python interpreters.

### Local python interpreter

By default, the `CodeAgent` runs LLM-generated code in your environment.
This execution is not done by the vanilla Python interpreter: we've re-built a more secure `LocalPythonInterpreter` from the ground up.
This interpreter is designed for security by:
- Restricting imports to a list explicitly passed by the user
- Capping the number of operations to prevent infinite loops and resource bloat
- Refusing any operation that isn't pre-defined

We've used this on many use cases without ever observing any damage to the environment.

However, this solution is not watertight: one could imagine occasions where LLMs fine-tuned for malignant actions could still hurt your environment. For instance, if you've allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of image saves and bloat your hard drive.
It's certainly not likely if you've chosen the LLM engine yourself, but it could happen.

So if you want to be extra cautious, you can use the remote code execution option described below.

### E2B code executor

For maximum security, you can use our integration with E2B to run code in a sandboxed environment. This is a remote execution service that runs your code in an isolated container, making it impossible for the code to affect your local environment.

For this, you will need to set up your E2B account and set your `E2B_API_KEY` in your environment variables. Head to [E2B's quickstart documentation](https://e2b.dev/docs/quickstart) for more information.

Then you can install it with `pip install "smolagents[e2b]"`.

Now you're set!

To set the code executor to E2B, simply pass the flag `use_e2b_executor=True` when initializing your `CodeAgent`.
Note that you should add all the tool's dependencies in `additional_authorized_imports`, so that the executor installs them.

```py
from smolagents import CodeAgent, VisitWebpageTool, HfApiModel
agent = CodeAgent(
    tools = [VisitWebpageTool()],
    model=HfApiModel(),
    additional_authorized_imports=["requests", "markdownify"],
    use_e2b_executor=True
)

agent.run("What was Abraham Lincoln's preferred pet?")
```

E2B code execution is not compatible with multi-agents at the moment, because having an agent call that should be executed remotely packed inside a code blob gets messy. But we're working on adding it!
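For comparison, here is what the same kind of agent looks like when you stay on the default local interpreter and simply whitelist the imports you trust. This is a minimal sketch: the model choice, the authorized import, and the task are illustrative assumptions, not a recommendation.

```py
from smolagents import CodeAgent, HfApiModel

# No `use_e2b_executor=True` here: generated code runs in the restricted local interpreter.
agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    additional_authorized_imports=["numpy"],  # only explicitly whitelisted imports are allowed
)

agent.run("Compute the mean of the first 100 square numbers.")
```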
smolagents/docs/source/en/tutorials/secure_code_execution.md/0
{ "file_path": "smolagents/docs/source/en/tutorials/secure_code_execution.md", "repo_id": "smolagents", "token_count": 1304 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Tools [[open-in-colab]] यहाँ, हम एडवांस्ड tools उपयोग देखेंगे। > [!TIP] > यदि आप एजेंट्स बनाने में नए हैं, तो सबसे पहले [एजेंट्स का परिचय](../conceptual_guides/intro_agents) और [smolagents की गाइडेड टूर](../guided_tour) पढ़ना सुनिश्चित करें। - [Tools](#tools) - [टूल क्या है, और इसे कैसे बनाएं?](#टूल-क्या-है-और-इसे-कैसे-बनाएं) - [अपना टूल हब पर शेयर करें](#अपना-टूल-हब-पर-शेयर-करें) - [स्पेस को टूल के रूप में इम्पोर्ट करें](#स्पेस-को-टूल-के-रूप-में-इम्पोर्ट-करें) - [LangChain टूल्स का उपयोग करें](#LangChain-टूल्स-का-उपयोग-करें) - [अपने एजेंट के टूलबॉक्स को मैनेज करें](#अपने-एजेंट-के-टूलबॉक्स-को-मैनेज-करें) - [टूल्स का कलेक्शन उपयोग करें](#टूल्स-का-कलेक्शन-उपयोग-करें) ### टूल क्या है और इसे कैसे बनाएं टूल मुख्य रूप से एक फ़ंक्शन है जिसे एक LLM एजेंटिक सिस्टम में उपयोग कर सकता है। लेकिन इसका उपयोग करने के लिए, LLM को एक API दी जाएगी: नाम, टूल विवरण, इनपुट प्रकार और विवरण, आउटपुट प्रकार। इसलिए यह केवल एक फ़ंक्शन नहीं हो सकता। यह एक क्लास होनी चाहिए। तो मूल रूप से, टूल एक क्लास है जो एक फ़ंक्शन को मेटाडेटा के साथ रैप करती है जो LLM को समझने में मदद करती है कि इसका उपयोग कैसे करें। यह कैसा दिखता है: ```python from smolagents import Tool class HFModelDownloadsTool(Tool): name = "model_download_counter" description = """ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint.""" inputs = { "task": { "type": "string", "description": "the task category (such as text-classification, depth-estimation, etc)", } } output_type = "string" def forward(self, task: str): from huggingface_hub import list_models model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return model.id model_downloads_tool = HFModelDownloadsTool() ``` कस्टम टूल `Tool` को सबक्लास करता है उपयोगी मेथड्स को इनहेरिट करने के लिए। चाइल्ड क्लास भी परिभाषित करती है: - एक `name` एट्रिब्यूट, जो टूल के नाम से संबंधित है। नाम आमतौर पर बताता है कि टूल क्या करता है। चूंकि कोड एक टास्क के लिए सबसे अधिक डाउनलोड वाले मॉडल को रिटर्न करता है, इसलिए इसे `model_download_counter` नाम दें। - एक `description` एट्रिब्यूट एजेंट के सिस्टम प्रॉम्प्ट को पॉपुलेट करने के लिए उपयोग किया जाता है। - एक `inputs` एट्रिब्यूट, जो `"type"` और `"description"` keys वाला डिक्शनरी है। इसमें जानकारी होती है जो पायथन इंटरप्रेटर को इनपुट के बारे में शिक्षित विकल्प चुनने में मदद करती है। - एक `output_type` एट्रिब्यूट, जो आउटपुट टाइप को निर्दिष्ट करता है। `inputs` और `output_type` दोनों के लिए टाइप [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema) होने चाहिए, वे इनमें से कोई भी हो सकते हैं: [`~AUTHORIZED_TYPES`]। - एक `forward` मेथड जिसमें एक्जीक्यूट किया जाने वाला इन्फरेंस कोड होता है। एजेंट में उपयोग किए जाने के लिए इतना ही चाहिए! 
टूल बनाने का एक और तरीका है। [guided_tour](../guided_tour) में, हमने `@tool` डेकोरेटर का उपयोग करके एक टूल को लागू किया। [`tool`] डेकोरेटर सरल टूल्स को परिभाषित करने का अनुशंसित तरीका है, लेकिन कभी-कभी आपको इससे अधिक की आवश्यकता होती है: अधिक स्पष्टता के लिए एक क्लास में कई मेथड्स का उपयोग करना, या अतिरिक्त क्लास एट्रिब्यूट्स का उपयोग करना। इस स्थिति में, आप ऊपर बताए अनुसार [`Tool`] को सबक्लास करके अपना टूल बना सकते हैं। ### अपना टूल हब पर शेयर करें आप टूल पर [`~Tool.push_to_hub`] को कॉल करके अपना कस्टम टूल हब पर शेयर कर सकते हैं। सुनिश्चित करें कि आपने हब पर इसके लिए एक रिपॉजिटरी बनाई है और आप रीड एक्सेस वाला टोकन उपयोग कर रहे हैं। ```python model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") ``` हब पर पुश करने के लिए काम करने के लिए, आपके टूल को कुछ नियमों का पालन करना होगा: - सभी मेथड्स सेल्फ-कंटेन्ड हैं, यानी उनके आर्ग्स से आने वाले वेरिएबल्स का उपयोग करें। - उपरोक्त बिंदु के अनुसार, **सभी इम्पोर्ट्स को सीधे टूल के फ़ंक्शंस के भीतर परिभाषित किया जाना चाहिए**, अन्यथा आपको अपने कस्टम टूल के साथ [`~Tool.save`] या [`~Tool.push_to_hub`] को कॉल करने का प्रयास करते समय एरर मिलेगा। - यदि आप `__init__` विधि को सबक्लास करते हैं, तो आप इसे `self` के अलावा कोई अन्य आर्ग्यूमेंट नहीं दे सकते। ऐसा इसलिए है क्योंकि किसी विशिष्ट टूल इंस्टेंस के इनिशियलाइजेशन के दौरान सेट किए गए तर्कों को आर्ग्यूमेंट्स करना कठिन होता है, जो उन्हें हब पर ठीक से साझा करने से रोकता है। और वैसे भी, एक विशिष्ट क्लास बनाने का विचार यह है कि आप हार्ड-कोड के लिए आवश्यक किसी भी चीज़ के लिए क्लास विशेषताएँ पहले से ही सेट कर सकते हैं (बस `your_variable=(...)` को सीधे `class YourTool(Tool):` पंक्ति के अंतर्गत सेट करें ). और निश्चित रूप से आप अभी भी `self.your_variable` को असाइन करके अपने कोड में कहीं भी एक क्लास विशेषता बना सकते हैं। एक बार जब आपका टूल हब पर पुश हो जाता है, तो आप इसे विज़ुअलाइज़ कर सकते हैं। [यहाँ](https://huggingface.co/spaces/m-ric/hf-model-downloads) `model_downloads_tool` है जिसे मैंने पुश किया है। इसमें एक अच्छा ग्रेडियो इंटरफ़ेस है। टूल फ़ाइलों में गहराई से जाने पर, आप पा सकते हैं कि सारी टूल लॉजिक [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py) के अंतर्गत है। यहीं आप किसी और द्वारा शेयर किए गए टूल का निरीक्षण कर सकते हैं। फिर आप टूल को [`load_tool`] के साथ लोड कर सकते हैं या [`~Tool.from_hub`] के साथ बना सकते हैं और इसे अपने एजेंट में `tools` पैरामीटर में पास कर सकते हैं। चूंकि टूल्स को चलाने का मतलब कस्टम कोड चलाना है, आपको यह सुनिश्चित करना होगा कि आप रिपॉजिटरी पर भरोसा करते हैं, इसलिए हम हब से टूल लोड करने के लिए `trust_remote_code=True` पास करने की आवश्यकता रखते हैं। ```python from smolagents import load_tool, CodeAgent model_download_tool = load_tool( "{your_username}/hf-model-downloads", trust_remote_code=True ) ``` ### स्पेस को टूल के रूप में इम्पोर्ट करें आप [`Tool.from_space`] मेथड का उपयोग करके हब से एक स्पेस को सीधे टूल के रूप में इम्पोर्ट कर सकते हैं! 
आपको केवल हब पर स्पेस की ID, इसका नाम, और एक विवरण प्रदान करने की आवश्यकता है जो आपके एजेंट को समझने में मदद करेगा कि टूल क्या करता है। अंदर से, यह स्पेस को कॉल करने के लिए [`gradio-client`](https://pypi.org/project/gradio-client/) लाइब्रेरी का उपयोग करेगा। उदाहरण के लिए, चलिए हब से [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) स्पेस को इम्पोर्ट करें और इसका उपयोग एक इमेज जनरेट करने के लिए करें। ```python image_generation_tool = Tool.from_space( "black-forest-labs/FLUX.1-schnell", name="image_generator", description="Generate an image from a prompt" ) image_generation_tool("A sunny beach") ``` और देखो, यह तुम्हारी छवि है! 🏖️ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sunny_beach.webp"> फिर आप इस टूल का उपयोग किसी अन्य टूल की तरह कर सकते हैं। उदाहरण के लिए, चलिए प्रॉम्प्ट `a rabbit wearing a space suit` को सुधारें और इसकी एक इमेज जनरेट करें। यह उदाहरण यह भी दिखाता है कि आप एजेंट को अतिरिक्त आर्ग्यूमेंट्स कैसे पास कर सकते हैं। ```python from smolagents import CodeAgent, HfApiModel model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct") agent = CodeAgent(tools=[image_generation_tool], model=model) agent.run( "Improve this prompt, then generate an image of it.", additional_args={'user_prompt': 'A rabbit wearing a space suit'} ) ``` ```text === Agent thoughts: improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background" Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt. >>> Agent is executing the code below: image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background") final_answer(image) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit_spacesuit_flux.webp"> यह कितना कूल है? 🤩 ### LangChain टूल्स का उपयोग करें हम LangChain को पसंद करते हैं और मानते हैं कि इसके पास टूल्स का एक बहुत आकर्षक संग्रह है। LangChain से एक टूल इम्पोर्ट करने के लिए, `from_langchain()` मेथड का उपयोग करें। यहाँ बताया गया है कि आप LangChain वेब सर्च टूल का उपयोग करके परिचय के सर्च रिजल्ट को कैसे फिर से बना सकते हैं। इस टूल को काम करने के लिए `pip install langchain google-search-results -q` की आवश्यकता होगी। ```python from langchain.agents import load_tools search_tool = Tool.from_langchain(load_tools(["serpapi"])[0]) agent = CodeAgent(tools=[search_tool], model=model) agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?") ``` ### अपने एजेंट के टूलबॉक्स को मैनेज करें आप एजेंट के टूलबॉक्स को `agent.tools` एट्रिब्यूट में एक टूल जोड़कर या बदलकर मैनेज कर सकते हैं, क्योंकि यह एक स्टैंडर्ड डिक्शनरी है। चलिए केवल डिफ़ॉल्ट टूलबॉक्स के साथ इनिशियलाइज़ किए गए मौजूदा एजेंट में `model_download_tool` जोड़ें। ```python from smolagents import HfApiModel model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct") agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.tools[model_download_tool.name] = model_download_tool ``` अब हम नए टूल का लाभ उठा सकते हैं। ```python agent.run( "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?" 
) ``` > [!TIP] > एजेंट में बहुत अधिक टूल्स न जोड़ने से सावधान रहें: यह कमजोर LLM इंजन को ओवरव्हेल्म कर सकता है। ### टूल्स का कलेक्शन उपयोग करें आप `ToolCollection` ऑब्जेक्ट का उपयोग करके टूल कलेक्शंस का लाभ उठा सकते हैं। यह या तो हब से एक कलेक्शन या MCP सर्वर टूल्स को लोड करने का समर्थन करता है। #### हब में कलेक्शन से टूल कलेक्शन आप उस कलेक्शन के स्लग के साथ इसका लाभ उठा सकते हैं जिसका आप उपयोग करना चाहते हैं। फिर उन्हें अपने एजेंट को इनिशियलाइज़ करने के लिए एक लिस्ट के रूप में पास करें, और उनका उपयोग शुरू करें! ```py from smolagents import ToolCollection, CodeAgent image_tool_collection = ToolCollection.from_hub( collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>" ) agent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True) agent.run("Please draw me a picture of rivers and lakes.") ``` स्टार्ट को तेज करने के लिए, टूल्स केवल तभी लोड होते हैं जब एजेंट द्वारा कॉल किए जाते हैं। #### किसी भी MCP सर्वर से टूल कलेक्शन [glama.ai](https://glama.ai/mcp/servers) या [smithery.ai](https://smithery.ai/) पर उपलब्ध सैकड़ों MCP सर्वर्स से टूल्स का लाभ उठाएं। MCP सर्वर्स टूल्स को निम्नानुसार `ToolCollection` ऑब्जेक्ट में लोड किया जा सकता है: ```py from smolagents import ToolCollection, CodeAgent from mcp import StdioServerParameters server_parameters = StdioServerParameters( command="uv", args=["--quiet", "[email protected]"], env={"UV_PYTHON": "3.12", **os.environ}, ) with ToolCollection.from_mcp(server_parameters) as tool_collection: agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True) agent.run("Please find a remedy for hangover.") ```
smolagents/docs/source/hi/tutorials/tools.md/0
{ "file_path": "smolagents/docs/source/hi/tutorials/tools.md", "repo_id": "smolagents", "token_count": 10845 }
# This is a config for E2B sandbox template.
# You can use template ID (qywp2ctmu2q7jzprcf4j) to create a sandbox:

# Python SDK
# from e2b import Sandbox, AsyncSandbox
# sandbox = Sandbox("qywp2ctmu2q7jzprcf4j")  # Sync sandbox
# sandbox = await AsyncSandbox.create("qywp2ctmu2q7jzprcf4j")  # Async sandbox

# JS SDK
# import { Sandbox } from 'e2b'
# const sandbox = await Sandbox.create('qywp2ctmu2q7jzprcf4j')

team_id = "f8776d3a-df2f-4a1d-af48-68c2e13b3b87"
start_cmd = "/root/.jupyter/start-up.sh"
dockerfile = "e2b.Dockerfile"
template_id = "qywp2ctmu2q7jzprcf4j"
smolagents/e2b.toml/0
{ "file_path": "smolagents/e2b.toml", "repo_id": "smolagents", "token_count": 254 }
from typing import Optional from smolagents import Tool from smolagents.models import MessageRole, Model from .mdconvert import MarkdownConverter class TextInspectorTool(Tool): name = "inspect_file_as_text" description = """ You cannot load files yourself: instead call this tool to read a file as markdown text and ask questions about it. This tool handles the following file extensions: [".html", ".htm", ".xlsx", ".pptx", ".wav", ".mp3", ".flac", ".pdf", ".docx"], and all other types of text files. IT DOES NOT HANDLE IMAGES.""" inputs = { "file_path": { "description": "The path to the file you want to read as text. Must be a '.something' file, like '.pdf'. If it is an image, use the visualizer tool instead! DO NOT use this tool for an HTML webpage: use the web_search tool instead!", "type": "string", }, "question": { "description": "[Optional]: Your question, as a natural language sentence. Provide as much context as possible. Do not pass this parameter if you just want to directly return the content of the file.", "type": "string", "nullable": True, }, } output_type = "string" md_converter = MarkdownConverter() def __init__(self, model: Model, text_limit: int): super().__init__() self.model = model self.text_limit = text_limit def forward_initial_exam_mode(self, file_path, question): result = self.md_converter.convert(file_path) if file_path[-4:] in [".png", ".jpg"]: raise Exception("Cannot use inspect_file_as_text tool with images: use visualizer instead!") if ".zip" in file_path: return result.text_content if not question: return result.text_content if len(result.text_content) < 4000: return "Document content: " + result.text_content messages = [ { "role": MessageRole.SYSTEM, "content": [ { "type": "text", "text": "Here is a file:\n### " + str(result.title) + "\n\n" + result.text_content[: self.text_limit], } ], }, { "role": MessageRole.USER, "content": [ { "type": "text", "text": "Now please write a short, 5 sentence caption for this document, that could help someone asking this question: " + question + "\n\nDon't answer the question yourself! Just provide useful notes on the document", } ], }, ] return self.model(messages).content def forward(self, file_path, question: Optional[str] = None) -> str: result = self.md_converter.convert(file_path) if file_path[-4:] in [".png", ".jpg"]: raise Exception("Cannot use inspect_file_as_text tool with images: use visualizer instead!") if ".zip" in file_path: return result.text_content if not question: return result.text_content messages = [ { "role": MessageRole.SYSTEM, "content": [ { "type": "text", "text": "You will have to write a short caption for this file, then answer this question:" + question, } ], }, { "role": MessageRole.USER, "content": [ { "type": "text", "text": "Here is the complete file:\n### " + str(result.title) + "\n\n" + result.text_content[: self.text_limit], } ], }, { "role": MessageRole.USER, "content": [ { "type": "text", "text": "Now answer the question below. Use these three headings: '1. Short answer', '2. Extremely detailed answer', '3. Additional Context on the document and question asked'." + question, } ], }, ] return self.model(messages).content
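# --- Usage sketch (editor addition) ---
# How the tool defined above might be instantiated and called directly. The model choice,
# text limit, file path and question are assumptions for illustration; running this for
# real requires an actual file at that path and access to the chosen inference backend.
if __name__ == "__main__":
    from smolagents import HfApiModel

    inspector = TextInspectorTool(model=HfApiModel(), text_limit=4000)
    # With no question, the raw markdown conversion of the file is returned;
    # with a question, the model writes a grounded answer about the document.
    print(inspector.forward("report.pdf", question="What is the main conclusion?"))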
smolagents/examples/open_deep_research/scripts/text_inspector_tool.py/0
{ "file_path": "smolagents/examples/open_deep_research/scripts/text_inspector_tool.py", "repo_id": "smolagents", "token_count": 2300 }