CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training • Paper • 2504.13161 • Published 21 days ago • 88
Running • 110 • TxT360: Trillion Extracted Text • Create a large, deduplicated dataset for LLM pre-training
Running • 2.56k • The Ultra-Scale Playbook • The ultimate guide to training LLMs on large GPU clusters
Running • 63 • Scaling FineWeb to 1000+ languages: Step 1: finding signal in 100s of evaluation tasks • Evaluate multilingual models using FineTasks
Running • 936 • FineWeb: decanting the web for the finest text data at scale • Generate high-quality web text data for LLM training
Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler • Paper • 2408.13359 • Published Aug 23, 2024 • 25