- Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning
  Paper • 2211.04325 • Published
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 18
- On the Opportunities and Risks of Foundation Models
  Paper • 2108.07258 • Published
- Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
  Paper • 2204.07705 • Published • 1
Collections including paper arxiv:1810.04805
- Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on Hugging Face
  Paper • 2401.13822 • Published • 1
- Attention Is All You Need
  Paper • 1706.03762 • Published • 61
- HuggingFace's Transformers: State-of-the-art Natural Language Processing
  Paper • 1910.03771 • Published • 19
- Model Cards for Model Reporting
  Paper • 1810.03993 • Published • 5
- CLEAR: Character Unlearning in Textual and Visual Modalities
  Paper • 2410.18057 • Published • 210
- CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmentation Generation
  Paper • 2410.23090 • Published • 56
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
  Paper • 2410.23743 • Published • 64
- "Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization
  Paper • 2411.02355 • Published • 51
- Attention Is All You Need
  Paper • 1706.03762 • Published • 61
- Playing Atari with Deep Reinforcement Learning
  Paper • 1312.5602 • Published
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 18
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 13
- Attention Is All You Need
  Paper • 1706.03762 • Published • 61
- LoRA Learns Less and Forgets Less
  Paper • 2405.09673 • Published • 89
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
  Paper • 2401.02954 • Published • 48
- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 73
- Attention Is All You Need
  Paper • 1706.03762 • Published • 61
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 14
- Efficient Tool Use with Chain-of-Abstraction Reasoning
  Paper • 2401.17464 • Published • 21
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  Paper • 2407.21770 • Published • 23