Returns
~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple
~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation. |
Examples: |
>>> import torch |
>>> from diffusers import AltDiffusionPipeline |
>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16) |
>>> pipe = pipe.to("cuda") |
>>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" |
>>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" |
>>> image = pipe(prompt).images[0] |
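For reference, a minimal sketch of the return handling described above (reusing the pipe and prompt from this example; the nsfw_content_detected attribute name follows the Stable Diffusion output class and the variable names are illustrative):
>>> # return_dict=True (the default) yields a pipeline output object
>>> output = pipe(prompt)
>>> image, nsfw_flags = output.images[0], output.nsfw_content_detected
>>> # return_dict=False yields a plain tuple: (images, nsfw flags)
>>> images, nsfw_flags = pipe(prompt, return_dict=False)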
disable_vae_slicing()
Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to |
computing decoding in one step. |
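A short usage sketch of the toggle (assuming the pipe object from the example above):
>>> pipe.enable_vae_slicing()   # decode latents slice by slice
>>> images = pipe(prompt).images
>>> pipe.disable_vae_slicing()  # restore single-step VAE decoding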
disable_vae_tiling()
Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to |
computing decoding in one step. |
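A short usage sketch of the tiling toggle (assuming the pipe object from the example above; the 1024x1024 resolution is only illustrative):
>>> pipe.enable_vae_tiling()    # decode large images tile by tile
>>> image = pipe(prompt, height=1024, width=1024).images[0]
>>> pipe.disable_vae_tiling()   # go back to decoding in one step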
enable_model_cpu_offload(gpu_id=0)
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward
method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
enable_sequential_cpu_offload, but performance is much better because the iteratively executed unet stays on the GPU for the whole denoising loop.
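A minimal usage sketch, assuming the accelerate package is installed (the pipeline should not be moved to the GPU manually first, since offloading manages device placement itself):
>>> import torch
>>> from diffusers import AltDiffusionPipeline

>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
>>> pipe.enable_model_cpu_offload()  # each whole model is moved to the GPU only when it is needed
>>> image = pipe("黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图").images[0]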
enable_sequential_cpu_offload(gpu_id=0)
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to
torch.device('meta'), loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
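A minimal usage sketch, assuming accelerate is installed (peak GPU memory stays very low at the cost of slower inference):
>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
>>> pipe.enable_sequential_cpu_offload()  # submodules are loaded to the GPU one forward call at a time
>>> image = pipe(prompt).images[0]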
enable_vae_slicing()
Enable sliced VAE decoding. |
When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several |
steps. This is useful to save some memory and allow larger batch sizes. |
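A short sketch of sliced decoding with a larger batch (the batch size of 8 is only illustrative):
>>> pipe.enable_vae_slicing()
>>> # the VAE decodes one image slice at a time, so a larger batch fits in memory
>>> images = pipe([prompt] * 8).images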
enable_vae_tiling()
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding in several steps. This is useful to save a large amount of memory and to allow processing larger images.