The Ultra-Scale Playbook 🌌 • The ultimate guide to training LLMs on large GPU clusters
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model • Paper • 2502.02737 • Published Feb 4, 2025
Balancing Pipeline Parallelism with Vocabulary Parallelism • Paper • 2411.05288 • Published Nov 8, 2024