---
language:
- ar
configs:
- config_name: default
  data_files:
  - split: Amiri
    path: Amiri/*.csv
  - split: Sakkal_Majalla
    path: Sakkal_Majalla/*.csv
  - split: Arial
    path: Arial/*.csv
  - split: Calibri
    path: Calibri/*.csv
  - split: Scheherazade_New
    path: Scheherazade_New/*.csv
  features:
    text:
      dtype: string
  csv_options:
    delimiter: ','
    quotechar: '"'
    encoding: utf-8
tags:
- dataset
- OCR
- Arabic
- Image_To_Text
license: apache-2.0
task_categories:
- image-to-text
pretty_name: 'SARD: A Large-Scale Synthetic Arabic OCR Corpus for Vision-Language Models'
size_categories:
- 100K<n<1M
---
# SARD: Synthetic Arabic Recognition Dataset

## Overview
SARD (Synthetic Arabic Recognition Dataset) is a large-scale, synthetically generated dataset designed for training and evaluating Optical Character Recognition (OCR) models for Arabic text. This dataset addresses the critical need for comprehensive Arabic text recognition resources by providing controlled, diverse, and scalable training data that simulates real-world book layouts.
## Key Features
- Massive Scale: 743,000 document images containing 662.15 million words
- Typographic Diversity: Five distinct Arabic fonts (Amiri, Sakkal Majalla, Arial, Calibri, and Scheherazade New)
- Structured Formatting: Designed to mimic real-world book layouts with consistent typography
- Clean Data: Synthetically generated with no scanning artifacts, blur, or distortions
- Content Diversity: Text spans multiple domains including culture, literature, Shariah, social topics, and more
## Dataset Structure

The dataset is divided into five splits, one per font (a loading sketch follows the list):
- Amiri: ~148,541 document images
- Sakkal Majalla: ~148,541 document images
- Arial: ~148,541 document images
- Calibri: ~148,541 document images
- Scheherazade New: ~148,541 document images
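
Each split can also be loaded on its own. A minimal sketch, assuming the split names from the config above (note the underscores in `Sakkal_Majalla` and `Scheherazade_New`):

```python
from datasets import load_dataset

# Stream a single font split so nothing is downloaded up front.
# Config split names: Amiri, Sakkal_Majalla, Arial, Calibri, Scheherazade_New.
amiri = load_dataset("riotu-lab/SARD", split="Amiri", streaming=True)

# Peek at the first record without materializing the whole split.
first = next(iter(amiri))
print(first.keys())
```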
## 📋 Sample Images

*Sample page images rendered in the dataset's fonts.*
Each split contains data specific to a single font, with the following fields:

- `image_name`: Unique identifier for each image
- `chunk`: The text content associated with the image
- `font_name`: The font used in text rendering
- `image_base64`: Base64-encoded image representation
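
For illustration, the helper below (the name `decode_sample` is ours, not part of the dataset) turns one raw record into a `(PIL.Image, str)` pair:

```python
import base64
from io import BytesIO

from PIL import Image


def decode_sample(sample: dict) -> tuple:
    """Convert one SARD record into a (page image, transcription) pair.

    Assumes the schema above: `image_base64` holds a Base64-encoded image
    and `chunk` holds the matching Arabic text.
    """
    image = Image.open(BytesIO(base64.b64decode(sample["image_base64"])))
    return image, sample["chunk"]
```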
## Content Distribution

| Category | Number of Articles |
|---|---|
| Culture | 13,253 |
| Fatawa & Counsels | 8,096 |
| Literature & Language | 11,581 |
| Bibliography | 26,393 |
| Publications & Competitions | 1,123 |
| Shariah | 46,665 |
| Social | 8,827 |
| Translations | 443 |
| Muslim's News | 16,725 |
| **Total Articles** | **133,105** |
## Font Specifications

| Font | Words per Page | Font Size |
|---|---|---|
| Sakkal Majalla | 50–300 | 14 pt |
| Arial | 50–500 | 12 pt |
| Calibri | 50–500 | 12 pt |
| Amiri | 50–300 | 12 pt |
| Scheherazade New | 50–250 | 12 pt |
## Page Layout

| Specification | Measurement |
|---|---|
| Left Margin | 0.9 inches |
| Right Margin | 0.9 inches |
| Top Margin | 1.0 inch |
| Bottom Margin | 1.0 inch |
| Gutter Margin | 0.2 inches |
| Page Width | 8.27 inches (A4) |
| Page Height | 11.69 inches (A4) |
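
The usable text area implied by these measurements is straightforward to compute. A back-of-the-envelope sketch, assuming the gutter is added to the binding-side margin (the usual convention):

```python
# A4 page and margins from the table above, in inches.
PAGE_W, PAGE_H = 8.27, 11.69
LEFT, RIGHT = 0.9, 0.9
TOP, BOTTOM = 1.0, 1.0
GUTTER = 0.2  # assumed added to the binding-side margin

text_width = PAGE_W - LEFT - RIGHT - GUTTER  # 6.27 in
text_height = PAGE_H - TOP - BOTTOM          # 9.69 in
print(f"Text area: {text_width:.2f} x {text_height:.2f} inches")
```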
## Usage Example

```python
from datasets import load_dataset
import base64
from io import BytesIO
from PIL import Image
import matplotlib.pyplot as plt

# Load the dataset with streaming enabled
ds = load_dataset("riotu-lab/SARD", streaming=True)
print(ds)

# Iterate over a specific font split (e.g., Amiri)
for sample in ds["Amiri"]:
    image_name = sample["image_name"]
    chunk = sample["chunk"]  # Arabic text transcription
    font_name = sample["font_name"]

    # Decode the Base64-encoded image
    image_data = base64.b64decode(sample["image_base64"])
    image = Image.open(BytesIO(image_data))

    # Display the image
    plt.figure(figsize=(10, 10))
    plt.imshow(image)
    plt.axis("off")
    plt.title(f"Font: {font_name}")
    plt.show()

    # Print the details
    print(f"Image Name: {image_name}")
    print(f"Font Name: {font_name}")
    print(f"Text Chunk: {chunk}")

    # Break after one sample for testing
    break
```
## Applications

SARD is designed to support a variety of Arabic text recognition tasks (a simple evaluation sketch follows the list):
- Training and evaluating OCR models for Arabic text
- Developing vision-language models for document understanding
- Fine-tuning existing OCR models for better Arabic script recognition
- Benchmarking OCR performance across different fonts and layouts
- Research in Arabic natural language processing and computer vision
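
As one concrete take on the benchmarking use case, character error rate (CER) against the ground-truth `chunk` can be computed with a plain Levenshtein distance. This is an illustrative sketch; `ocr_model` is a hypothetical stand-in for whatever system is being evaluated:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]


def cer(prediction: str, reference: str) -> float:
    """Character error rate: edit distance normalized by reference length."""
    return levenshtein(prediction, reference) / max(len(reference), 1)


# Hypothetical usage against one SARD sample:
# prediction = ocr_model(image)  # your OCR system here
# print(f"CER: {cer(prediction, sample['chunk']):.4f}")
```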
## Acknowledgments
The authors thank Prince Sultan University for their support in developing this dataset.