---
license: apache-2.0
---
# Vision-Language Pairs Dataset
This dataset contains metadata about image-text pairs from various popular vision-language datasets.
## Contents
- `vision_language_data/all_vision_language_images.csv`: Combined metadata for all images (75,629 records)
- `vision_language_data/all_vision_language_captions.csv`: Combined captions for all images (86,676 records)
- `dataset_statistics.csv`: Summary statistics for each dataset
- `category_distribution.csv`: Distribution of image categories across datasets
- `caption_length_distribution.csv`: Distribution of caption lengths
- `caption_style_distribution.csv`: Distribution of caption styles
- `category_caption_statistics.csv`: Caption statistics by category
- `vision_language_catalog.json`: Searchable catalog with sample image-caption pairs
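
The files can be loaded directly with pandas and the standard library. A minimal sketch, assuming the files have been downloaded to the paths listed above:

```python
import json

import pandas as pd

# Combined metadata tables (paths relative to the repository root).
images = pd.read_csv("vision_language_data/all_vision_language_images.csv")
captions = pd.read_csv("vision_language_data/all_vision_language_captions.csv")

# Searchable catalog with sample image-caption pairs.
with open("vision_language_catalog.json") as f:
    catalog = json.load(f)

print(len(images))    # 75,629 image records
print(len(captions))  # 86,676 caption records
```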
## Datasets Included
- COCO (Common Objects in Context): COCO is a large-scale object detection, segmentation, and captioning dataset with multiple captions per image. (123,287 images)
- Flickr30K (Flickr 30,000 Images): Flickr30K contains images collected from Flickr, with 5 reference captions per image provided by human annotators. (31,783 images)
- Visual Genome: Visual Genome connects structured image concepts to language with detailed region descriptions and question-answer pairs. (108,077 images)
- Conceptual Captions: Conceptual Captions is a large-scale dataset of image-caption pairs harvested from the web and automatically filtered. (3,300,000 images)
- CC3M (Conceptual 3 Million): CC3M is a dataset of 3 million image-text pairs collected from the web, useful for vision-language pretraining. (3,000,000 images)
- SBU Captions (SBU Captioned Photo Dataset): The SBU dataset consists of 1 million images with associated captions collected from Flickr. (1,000,000 images)
## Field Descriptions
### Images Table
- `image_id`: Unique identifier for the image
- `dataset`: Source dataset name
- `image_url`: URL to the image (simulated)
- `primary_category`: Main content category
- `width`: Image width in pixels
- `height`: Image height in pixels
- `aspect_ratio`: Width divided by height
- `caption_count`: Number of captions for this image
- `license`: License under which the image is available
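
These fields support simple queries over the images table. A short sketch, assuming only the column names documented here:

```python
import pandas as pd

images = pd.read_csv("vision_language_data/all_vision_language_images.csv")

# aspect_ratio is defined as width / height, so it can be re-derived directly.
derived_ratio = images["width"] / images["height"]

# Example query: landscape-orientation images with at least 5 captions.
landscape = images[(images["aspect_ratio"] > 1.0) & (images["caption_count"] >= 5)]
print(landscape[["image_id", "dataset", "primary_category"]].head())
```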
### Captions Table
- `caption_id`: Unique identifier for the caption
- `image_id`: ID of the associated image
- `dataset`: Source dataset name
- `text`: Caption text
- `language`: Caption language (default: `en`)
- `style`: Caption style (`descriptive`, `short`, or `detailed`)
- `length`: Number of characters in the caption
- `word_count`: Number of words in the caption
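
Because both tables carry `image_id` and `dataset`, captions can be joined to their image metadata. A minimal sketch using the column names documented above:

```python
import pandas as pd

images = pd.read_csv("vision_language_data/all_vision_language_images.csv")
captions = pd.read_csv("vision_language_data/all_vision_language_captions.csv")

# Attach image metadata to every caption via the shared keys.
pairs = captions.merge(images, on=["image_id", "dataset"])

# Example: mean word count of "detailed" captions, per source dataset.
detailed = pairs[pairs["style"] == "detailed"]
print(detailed.groupby("dataset")["word_count"].mean())
```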
## Usage Examples
This metadata can be used for:
- Analyzing the composition of vision-language datasets
- Comparing caption characteristics across different datasets
- Training and evaluating image captioning models
- Studying linguistic patterns in image descriptions
- Developing multimodal AI systems
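
As a concrete starting point, caption characteristics can be compared across datasets with a single groupby over the captions table (a sketch; similar figures may already be precomputed in `dataset_statistics.csv`):

```python
import pandas as pd

captions = pd.read_csv("vision_language_data/all_vision_language_captions.csv")

# Per-dataset caption statistics: count, mean character length, mean word count.
summary = captions.groupby("dataset").agg(
    n_captions=("caption_id", "count"),
    mean_length=("length", "mean"),
    mean_words=("word_count", "mean"),
)
print(summary.sort_values("n_captions", ascending=False))
```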
## Data Generation Note
This dataset contains synthetic metadata that mirrors the structure and characteristics of real vision-language pair collections; the specific image and caption details are generated for demonstration purposes.
Created: 2025-04-26