mazafard committed · Commit 6fa0b1c · verified · 1 Parent(s): c31f0dd

Create README.md

Files changed (1)
  1. README.md  +94  -0
README.md ADDED
@@ -0,0 +1,94 @@
---
annotations_creators:
- manual
language:
- pt
- en
license: mit
multilinguality: monolingual
pretty_name: Portuguese OCR Dataset
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- image-to-text
tags:
- ocr
- Portuguese
---

# 📘 Portuguese OCR Dataset

This dataset contains synthetically rendered image-text pairs in **European Portuguese**, curated manually for Optical Character Recognition (OCR) tasks. It includes literary sentences and historical excerpts.

## 📦 Dataset Overview

- **Total Samples**: 10,000
- **Image Shape**: `(10000, 100, 1200, 3)` (color images, 100 pixels high by 1200 pixels wide)
- **Text Example**:
  `"E mais avante o Estreito que se arreia"`

## 📂 Dataset Structure

The dataset is stored in a **single HDF5 file** (`dataset.h5`), which includes:

- `images`: a NumPy array of RGB image data (from PNG files)
- `texts`: a list of UTF-8 encoded strings, one per image

Each sample pairs an image with its corresponding transcription; a minimal sketch of this layout follows.
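
To make the layout concrete, here is a minimal, hypothetical sketch of how a file with the same two datasets could be written with `h5py`. The file name, array contents, and second example string are illustrative only and are not taken from the real data:

```python
import h5py
import numpy as np

# Illustrative samples with the layout described above (not the real data).
images = np.zeros((2, 100, 1200, 3), dtype=np.uint8)  # 2 color images, 100x1200
texts = [
    "E mais avante o Estreito que se arreia",
    "Segunda linha de exemplo",  # hypothetical second transcription
]

with h5py.File("toy_dataset.h5", "w") as f:
    f.create_dataset("images", data=images)
    # Store transcriptions as variable-length UTF-8 strings
    f.create_dataset("texts", data=texts, dtype=h5py.string_dtype(encoding="utf-8"))
```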

## 📊 Dataset Statistics

- Format: HDF5 (`dataset.h5`)
- Total samples: ~10,000
- Stored image size: 100x1200 pixels (commonly resized, e.g. to 224x224, during preprocessing)
- Language: European Portuguese
- Source: literary and historical texts

## 💾 How to Load

Because the dataset ships as a single HDF5 file, the most direct route is to download `dataset.h5` from the Hub with `huggingface_hub` and open it with `h5py` (`datasets.load_dataset` has no built-in HDF5 loader, so the raw file is opened directly):

```python
from huggingface_hub import hf_hub_download
import h5py

# Download dataset.h5 from the Hugging Face Hub
h5_path = hf_hub_download(
    repo_id="mazafard/portugues_ocr_dataset",
    filename="dataset.h5",
    repo_type="dataset",
)

# Open the HDF5 file directly
with h5py.File(h5_path, "r") as f:
    images = f["images"][:]  # NumPy array of images
    texts = f["texts"][:]    # transcriptions (returned as UTF-8 bytes)
```
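
If you prefer to work with the Hugging Face Datasets API afterwards, the arrays can be wrapped in an in-memory `datasets.Dataset`. The sketch below is one assumed way to do this; the column names and the 100-sample subset are arbitrary choices, not part of the dataset:

```python
from datasets import Dataset, Features, Image, Value
from PIL import Image as PILImage

# Assumes `images` and `texts` were loaded as in the previous block.
records = {
    "image": [PILImage.fromarray(img) for img in images[:100]],
    "text": [t.decode("utf-8") if isinstance(t, bytes) else t for t in texts[:100]],
}
ds = Dataset.from_dict(
    records,
    features=Features({"image": Image(), "text": Value("string")}),
)
print(ds)  # Dataset with columns ['image', 'text'] and 100 rows
```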

## 🧠 Use Cases

- Fine-tuning OCR models like [`microsoft/trocr-base-printed`](https://huggingface.co/microsoft/trocr-base-printed) (see the sketch after this list)
- Document digitization
- Research in Portuguese language modeling and handwritten/printed text recognition
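
As a starting point for the first use case, here is a hedged sketch of running one (image, text) pair through TrOCR and computing a training loss. It reuses the `images` and `texts` variables from the loading example and is not a complete training loop:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

# One sample: HDF5 array -> PIL image, bytes -> str
image = Image.fromarray(images[0]).convert("RGB")
text = texts[0].decode("utf-8") if isinstance(texts[0], bytes) else texts[0]

pixel_values = processor(images=image, return_tensors="pt").pixel_values
labels = processor.tokenizer(text, return_tensors="pt").input_ids

outputs = model(pixel_values=pixel_values, labels=labels)
print("loss:", outputs.loss.item())  # a real loop would call outputs.loss.backward()
```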

## 📜 License

MIT License: free for academic and commercial use with attribution.

## 📦 How to Inspect `dataset.h5`

You can extract metadata from the HDF5 file using `h5py`:

```python
import h5py

with h5py.File("dataset.h5", "r") as f:
    print("Keys:", list(f.keys()))                # ['images', 'texts']
    print("Number of samples:", len(f["texts"]))
    print("Image shape:", f["images"].shape)
    print("Example text:", f["texts"][0])         # stored as UTF-8 bytes
```
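
For a quick visual spot check, a small sketch along these lines (the output filename is arbitrary, and uint8 pixel data is assumed) exports the first sample as a PNG and prints its transcription:

```python
import h5py
from PIL import Image

with h5py.File("dataset.h5", "r") as f:
    img = Image.fromarray(f["images"][0])   # first sample as a PIL image
    txt = f["texts"][0].decode("utf-8")     # first transcription

img.save("sample_0.png")
print(txt)
```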

### ⚠️ Note

This dataset uses synthetic text-image pairs for printed OCR. Handwritten or scanned documents are not included.