⚠️ Work in Progress
This repo is under active development, especially the model card. Use it as you see fit, but be aware that some information may not be accurate or up to date yet.

ComfyUI-Starter-Packs

A curated vault of the most essential models for ComfyUI users: Flux1, SDXL, ControlNets, CLIP encoders, and GGUF variants, all in one place and carefully organized. If you find it useful, hit the heart at the top next to this repo's name. A small action done by many lets me know that working on this repo is helpful.


🪜 What's Inside

This repo is a purposeful collection of the most important models, organized into folders so that everything you need for a given task lives in one place:

Flux1

  • Unet Models: Dev, Schnell, Depth, Canny, Fill
  • GGUF Versions: Q3, Q5, Q6 for each major branch
  • Clip + T5XXL encoders (standard + GGUF versions)
  • Loras: Only those that are especially useful or that improve the base model.

SDXL

  • Recommended checkpoints to get you started: Pony Realism and Juggernaut
  • Base + Refiner official models
  • ControlNets: Depth, Canny, OpenPose, Normal, etc.

Extra

  • VAE, upscalers, and anything required to support workflows

πŸ‹οΈ Unet Recommendations (Based on VRAM)

| VRAM  | Use Case             | Model Type                 |
|-------|----------------------|----------------------------|
| 16GB+ | Full FP8             | flux1-dev-fp8.safetensors  |
| 12GB  | Balanced Q5_K_S GGUF | flux1-dev-Q5_K_S.gguf      |
| 8GB   | Light Q3_K_S GGUF    | flux1-dev-Q3_K_S.gguf      |

GGUF models are significantly lighter and designed for low-VRAM systems.
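As a quick sanity check, the tiers above can be expressed as a tiny helper. This is a sketch; `pick_flux_unet` is an illustrative name, and the thresholds and filenames come straight from the table. (Note that loading `.gguf` UNets in ComfyUI typically requires a GGUF loader custom node such as ComfyUI-GGUF.)

```python
def pick_flux_unet(vram_gb: float) -> str:
    """Suggest a Flux1 Dev UNet file for a given amount of GPU VRAM,
    following the tier table above."""
    if vram_gb >= 16:
        return "flux1-dev-fp8.safetensors"   # full FP8
    if vram_gb >= 12:
        return "flux1-dev-Q5_K_S.gguf"       # balanced GGUF
    return "flux1-dev-Q3_K_S.gguf"           # light GGUF for ~8GB cards
```

For example, `pick_flux_unet(12)` returns the Q5_K_S GGUF, while anything at 16GB or above gets the full FP8 file.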

🧠 T5XXL Recommendations (Based on RAM)

| System RAM | Use Case                               | Model Type                                              |
|------------|----------------------------------------|---------------------------------------------------------|
| 64GB       | Max quality                            | t5xxl_fp16.safetensors                                  |
| 32GB       | High quality (can crash if multitasking) | t5xxl_fp16.safetensors or t5xxl_fp8_scaled.safetensors |
| 16GB       | Balanced                               | t5xxl_fp8_scaled.safetensors                            |
| <16GB      | Low-memory / safe mode                 | GGUF Q5_K_S or Q3_K_S                                   |

Quantizing T5XXL directly affects only prompt adherence, not raw image quality.

⚠️ These are recommended tiers, not hard rules. RAM usage depends on your active processes, ComfyUI extensions, batch sizes, and other factors.
If you're getting random crashes, try scaling down one tier.
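The RAM tiers, including the "scale down one tier" advice, can be sketched the same way. `pick_t5xxl` and `T5XXL_TIERS` are illustrative names; the GGUF-tier filename is taken from this repo's `clip/GGUF/` folder.

```python
# Tiers ordered from highest to lowest minimum RAM (per the table above).
T5XXL_TIERS = [
    (64, "t5xxl_fp16.safetensors"),
    (32, "t5xxl_fp16.safetensors"),           # or t5xxl_fp8_scaled if multitasking
    (16, "t5xxl_fp8_scaled.safetensors"),
    (0,  "t5-v1_1-xxl-encoder-Q5_K_M.gguf"),  # low-memory GGUF tier (see clip/GGUF/)
]

def pick_t5xxl(ram_gb: float, scale_down: bool = False) -> str:
    """Pick a T5XXL encoder file for the given system RAM.
    Pass scale_down=True to drop one tier if you see random crashes."""
    idx = next(i for i, (min_gb, _) in enumerate(T5XXL_TIERS) if ram_gb >= min_gb)
    if scale_down:
        idx = min(idx + 1, len(T5XXL_TIERS) - 1)
    return T5XXL_TIERS[idx][1]
```

For example, a 16GB machine that keeps crashing would call `pick_t5xxl(16, scale_down=True)` and land on the GGUF tier.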


πŸ› Folder Structure

Adetailer/
├─ Ultralytics/bbox/
│  ├─ face_yolov8m.pt
│  └─ hand_yolov8s.pt
└─ sams/
   └─ sam_vit_b_01ec64.pth

Flux1/
├─ PuLID/
│  └─ pulid_flux_v0.9.1.safetensors
├─ Style_Models/
│  └─ flux1-redux-dev.safetensors
├─ clip/
│  ├─ ViT-L-14-TEXT-detail-improved-hiT-GmP-HF.safetensors
│  ├─ clip_l.safetensors
│  ├─ t5xxl_fp16.safetensors
│  ├─ t5xxl_fp8_e4m3fn_scaled.safetensors
│  └─ GGUF/
│     ├─ t5-v1_1-xxl-encoder-Q3_K_L.gguf
│     └─ t5-v1_1-xxl-encoder-Q5_K_M.gguf
├─ clip_vision/
│  └─ sigclip_vision_patch14_384.safetensors
├─ vae/
│  └─ ae.safetensors
└─ unet/
   ├─ Dev/
   │  ├─ flux1-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-dev-Q3_K_S.gguf
   │     └─ flux1-dev-Q5_K_S.gguf
   ├─ Fill/
   │  ├─ flux1-fill-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-fill-dev-Q3_K_S.gguf
   │     └─ flux1-fill-dev-Q5_K_S.gguf
   ├─ Canny/
   │  ├─ flux1-canny-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-canny-dev-Q4_0-GGUF.gguf
   │     └─ flux1-canny-dev-Q5_0-GGUF.gguf
   ├─ Depth/
   │  ├─ flux1-depth-dev-fp8.safetensors
   │  └─ GGUF/
   │     ├─ flux1-depth-dev-Q4_0-GGUF.gguf
   │     └─ flux1-depth-dev-Q5_0-GGUF.gguf
   └─ Schnell/
      ├─ flux1-schnell-fp8-e4m3fn.safetensors
      └─ GGUF/
         ├─ flux1-schnell-Q3_K_S.gguf
         └─ flux1-schnell-Q5_K_S.gguf
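The layout above is designed to map onto ComfyUI's standard model folders. A minimal sketch of that mapping, assuming the common ComfyUI default directory names (`dest_for` and `DEST_MAP` are illustrative; adjust destinations for your own install and custom nodes):

```python
from pathlib import PurePosixPath

# Repo folder name -> typical ComfyUI models/ subfolder.
# These destinations are the usual ComfyUI defaults, not guaranteed paths.
DEST_MAP = {
    "unet": "models/unet",
    "clip": "models/clip",
    "clip_vision": "models/clip_vision",
    "vae": "models/vae",
    "Style_Models": "models/style_models",
    "PuLID": "models/pulid",
    "Ultralytics": "models/ultralytics",
    "sams": "models/sams",
}

def dest_for(repo_path: str) -> str:
    """Map a file path in this repo to its ComfyUI destination folder."""
    for part in PurePosixPath(repo_path).parts:
        if part in DEST_MAP:
            return DEST_MAP[part]
    raise ValueError(f"No known destination for {repo_path!r}")
```

For example, `dest_for("Flux1/unet/Dev/GGUF/flux1-dev-Q5_K_S.gguf")` returns `"models/unet"`; the GGUF encoders under `clip/GGUF/` land in `models/clip` the same way.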

📈 Model Previews (Coming Soon)

I might add a single grid-style graphic showing example outputs:

  • Dev vs Schnell: Quality vs Speed
  • Depth / Canny / Fill: Source image → processed map → output
  • SDXL examples: Realism, Stylized, etc.

All preview images will be grouped into a single, efficient visual block per category.


📒 Want It Even Easier?

Skip the manual downloads.

🎁 Patreon.com/MaxedOut (Coming Soon). Get:

  • One-click installers for all major Flux & SDXL workflows
  • ComfyUI workflows built for beginners and pros
  • Behind-the-scenes model picks and tips

❓ FAQ

Q: Why not every GGUF?
A: Because Q3, Q5, and Q6 cover the most meaningful range. No bloat.

Q: Are these the official models?
A: Yes. Most are sourced directly from creators, or validated mirrors.

Q: Will this grow?
A: Yes. But only with purpose. Not a collection of every model off the face of the earth.

Q: Why aren't there more Loras here?
A: Stylized or niche Loras are showcased on Patreon, where we do deeper dives and examples. Some may get added here later if they become foundational.


✨ Final Thoughts

You shouldn't need to hunt through 12 Civitai pages and 6 Hugging Face repos just to build your ComfyUI folder.

This repo fixes that.
