I use SD-Turbo to test SD2 support in sd.cpp, so here are the GGUF conversions.

The "old" q8_0 was a direct conversion; converting to f16 first and then to q8_0 produced an equivalently performing model with a smaller file size.

Use `--cfg-scale 1 --steps 8`, and optionally `--schedule karras`.

The model only really produces acceptable output at 512x512.
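For reference, a full sd.cpp invocation with these flags might look like the sketch below. The binary name `sd`, the model filename, and the prompt are assumptions; adjust them to your build and to whichever quantization you downloaded.

```shell
# Hypothetical sd.cpp invocation for SD-Turbo (filenames/prompt are placeholders).
# --cfg-scale 1 effectively disables classifier-free guidance, which is what
# the distilled turbo model expects; 8 steps is plenty for a turbo model.
./sd -m sd_turbo-q8_0.gguf \
  -p "a photo of a cat" \
  --cfg-scale 1 --steps 8 --schedule karras \
  -W 512 -H 512 \
  -o output.png
```

Staying at 512x512 (`-W 512 -H 512`) matters here, since the model degrades at other resolutions.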

Model size: 1.3B params (GGUF)