FP8-quantized version of Step1X-Edit, with some layers kept in BF16 for higher accuracy. Use it with this fork, which adds memory optimizations: https://github.com/rkfg/Step1X-Edit
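You can check which tensors were quantized to FP8 and which were left in BF16 directly from the checkpoint. A minimal sketch, assuming a local safetensors file (the filename below is a placeholder; the actual shard names in this repo may differ):

```python
# Inspect the per-tensor dtypes of the quantized checkpoint.
# "step1x-edit-fp8.safetensors" is a placeholder path, not the repo's actual filename.
from collections import Counter

from safetensors import safe_open

dtype_counts = Counter()
bf16_tensors = []

with safe_open("step1x-edit-fp8.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        # Tensors are loaded one at a time and dropped after reading the dtype,
        # so peak memory stays at a single tensor.
        dtype = f.get_tensor(name).dtype
        dtype_counts[dtype] += 1
        if dtype == __import__("torch").bfloat16:
            bf16_tensors.append(name)

print(dtype_counts)          # rough split of FP8 vs BF16 tensors
print(bf16_tensors[:10])     # sample of layers kept in higher precision
```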
Base model: stepfun-ai/Step1X-Edit