Run Llama, Qwen, Gemma, Mistral, or any warm/cold LLM. No GPU required.
Chat with AI models using text input
FLUX.1-Schnell on serverless inference, no GPU required
SDXL on serverless inference, no GPU required
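Since all of the demos above route through a serverless inference backend, the client only needs to assemble a request body; the model runs server-side. A minimal sketch of building an OpenAI-style chat-completion payload, assuming such a schema is accepted by the backend (the model ID and helper name here are illustrative, not from the source):

```python
import json

def build_chat_payload(model: str, prompt: str) -> str:
    """Assemble a JSON chat-completion request body.

    No GPU or local weights are involved: this payload is what a client
    would POST to a serverless inference endpoint.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# Example: a text chat request for a hosted Llama model (illustrative ID).
payload = build_chat_payload("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
```

Image demos such as FLUX.1-Schnell or SDXL work the same way, with a text-to-image request body in place of the chat messages.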