-
43
Compare Siglip1 Siglip2
Compare SigLIP1 and SigLIP2 on zero-shot classification
-
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
Paper • 2502.14786 • Published • 144 -
google/siglip2-base-patch16-224
Zero-Shot Image Classification • Updated • 50.9k • 39 -
google/siglip2-base-patch16-256
Zero-Shot Image Classification • Updated • 78.5k • 4
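The two SigLIP 2 checkpoints above can be tried for zero-shot image classification with the transformers pipeline. A minimal sketch, assuming the `google/siglip2-base-patch16-224` model id from the listing (weights are downloaded from the Hub on first use; the solid-color image is a placeholder for a real photo):

```python
from PIL import Image
from transformers import pipeline

# Zero-shot image classifier backed by the SigLIP 2 base checkpoint listed above.
classifier = pipeline(
    "zero-shot-image-classification",
    model="google/siglip2-base-patch16-224",
)

# A solid-color placeholder image stands in for a real photo here.
image = Image.new("RGB", (224, 224), color="red")

# Score the image against free-form candidate labels.
results = classifier(image, candidate_labels=["a red square", "a photo of a dog"])
for r in results:
    print(f"{r['label']}: {r['score']:.3f}")
```

The pipeline returns one `{"label", "score"}` dict per candidate label, sorted by score; swapping in `google/siglip2-base-patch16-256` only changes the input resolution the processor resizes to.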
Collections
Discover the best community collections!
Collections including paper arxiv:2502.14786
-
MLLM-as-a-Judge for Image Safety without Human Labeling
Paper • 2501.00192 • Published • 31 -
2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining
Paper • 2501.00958 • Published • 107 -
Xmodel-2 Technical Report
Paper • 2412.19638 • Published • 27 -
HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs
Paper • 2412.18925 • Published • 102
-
EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
Paper • 2402.04252 • Published • 28 -
Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
Paper • 2402.03749 • Published • 13 -
ScreenAI: A Vision-Language Model for UI and Infographics Understanding
Paper • 2402.04615 • Published • 44 -
EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
Paper • 2402.05008 • Published • 23
-
seanghay/khmer_mpwt_speech
Viewer • Updated • 2.06k • 144 • 7 -
DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
Paper • 2401.02954 • Published • 48 -
openai/whisper-large-v3-turbo
Automatic Speech Recognition • Updated • 7.07M • 2.35k -
2.56k
The Ultra-Scale Playbook
The ultimate guide to training LLMs on large GPU clusters
-
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
Paper • 2502.14786 • Published • 144 -
LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models
Paper • 2502.14834 • Published • 24 -
Qwen2.5-VL Technical Report
Paper • 2502.13923 • Published • 186 -
DICEPTION: A Generalist Diffusion Model for Visual Perceptual Tasks
Paper • 2502.17157 • Published • 53
-
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
Paper • 2502.14786 • Published • 144 -
Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation
Paper • 2502.14846 • Published • 13 -
RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers
Paper • 2502.14377 • Published • 12
-
QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation
Paper • 2502.05178 • Published • 10 -
Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation
Paper • 2502.14846 • Published • 13 -
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
Paper • 2502.14786 • Published • 144 -
Efficient LLaMA-3.2-Vision by Trimming Cross-attended Visual Features
Paper • 2504.00557 • Published • 15