v2.5 WebUI: The Wait is On!
Is there an approximate timeframe for when v2.5 will be released for webui?
First they need to release their scheduler; the inference API still shows this error:
`module diffusers has no attribute EDMDPMSolverMultistepScheduler`
Once they do, even I can make a webui for it.
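If you want to check locally whether your installed diffusers build exposes the missing scheduler, a stdlib-only probe like this can tell you. This is just an illustrative sketch: the module and attribute names come from the error above, and the helper function name is my own.

```python
import importlib


def module_has_attr(module_name: str, attr_name: str) -> bool:
    """Return True if `module_name` can be imported and exposes `attr_name`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)


# False would mean the same situation as the inference-API error above:
# a diffusers build without EDMDPMSolverMultistepScheduler (or no diffusers at all).
print(module_has_attr("diffusers", "EDMDPMSolverMultistepScheduler"))
```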
I see, weird that it comes down to an unreleased scheduler.
I also look forward to it being available in WebUI and Forge as soon as possible.
I'm looking forward to it too.
The current pip module of diffusers is version 0.27.2, and that version of diffusers seems to have EDMDPMSolverMultistepScheduler.
https://huggingface.co/docs/diffusers/v0.27.2/api/schedulers/edm_multistep_dpm_solver
Or is that not what is needed, @Yntec ?
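Since the answer hinges on which diffusers version is actually installed, a small version check may help. This is a plain-Python sketch with no third-party imports; the 0.27.2 threshold is taken from the post above, and the helper names are my own.

```python
from importlib import metadata


def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '0.27.2' into a comparable tuple (0, 27, 2).

    Non-numeric segments (e.g. 'dev0') are dropped for a rough comparison.
    """
    return tuple(int(part) for part in version.split(".") if part.isdigit())


def installed_at_least(package: str, minimum: str) -> bool:
    """True if `package` is installed and its version is at least `minimum`."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return False
    return parse_version(installed) >= parse_version(minimum)


# Per the post above, diffusers 0.27.2 is the release whose docs list the scheduler,
# so this is the check that matters for this discussion.
print(installed_at_least("diffusers", "0.27.2"))
```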
@Hesajon: In theory, yes; in practice, this is what I'm getting with it:
So it's not working. The ball is in Hugging Face's court, because they're the ones that aren't supporting Playground v2.5 correctly at the moment.
If someone knows what the problem is, they can already implement it in a WebUI, though!
> The ball is in Hugging Face's court, because they're the ones that aren't supporting Playground v2.5 correctly at the moment.

I was wrong; it's playgroundai who haven't supported WebUIs yet.
But I just checked https://huggingface.co/spaces/TIGER-Lab/GenAI-Arena, and playground-v2.5 does indeed top the charts. They didn't preserve backward compatibility with existing technology, but their technology is better, so the wait will be worth it.
If SD.Next (Vladmandic) supports it, what is missing in Automatic1111? SD.Next is a fork of Automatic. Is it because SD.Next supports diffusers?
Even though it is a fork, they don't have that much in common anymore; there are 6000-plus commits of difference between the two.
Also, SD.Next uses diffusers; I don't think A1111 does.
I'm beginning to wonder if Playground 2.5 will ever be supported in Auto1111. Are devs generally abandoning Auto for Comfy?
I'm beginning to wonder if there's a conflict of interest. Most people currently use playground-v2.5 on their platform, with limited generations, and raising that limit requires buying their Pro plan. Supporting A1111 would mean users could make as many generations as they want locally, with full control, so the longer they delay support, the higher their revenue will be. This is pure speculation on my part, though.
Except I think it works in ComfyUI, and SD.Next (Vladmandic)... I'm sure plenty of people are using it there.
no forge yet....
SD.Next is so bad it can't fit SD1 models into 8 GB of VRAM when generating above 512x512, so I can't imagine anyone using that poorly optimized crap.
Let's get real, guys: I don't think A1111 support is ever happening. The "Support coming soon" part of the model card is going to stay there forever, and there's a better chance of getting a Playground v3 that claims to beat ChatGPT 4o native image generation than of getting this.