Hoixi committed on
Commit 94abe3a · verified · 1 Parent(s): 1592b8f

dataset uploaded by roboflow2huggingface package

README.dataset.txt ADDED
@@ -0,0 +1,6 @@
+ # PDF Exporter > 2025-04-16 4:33pm
+ https://universe.roboflow.com/tables-cfun2/pdf-exporter
+
+ Provided by a Roboflow user
+ License: CC BY 4.0
+
README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ task_categories:
+ - object-detection
+ tags:
+ - roboflow
+ - roboflow2huggingface
+
+ ---
+
+ <div align="center">
+   <img width="640" alt="Hoixi/TR-FinTable" src="https://huggingface.co/datasets/Hoixi/TR-FinTable/resolve/main/thumbnail.jpg">
+ </div>
+
+ ### Dataset Labels
+
+ ```
+ ['table', 'table column', 'table row', 'table spanning cell']
+ ```
+
+
+ ### Number of Images
+
+ ```json
+ {"valid": 6, "train": 15}
+ ```
+
+
+ ### How to Use
+
+ - Install [datasets](https://pypi.org/project/datasets/):
+
+ ```bash
+ pip install datasets
+ ```
+
+ - Load the dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("Hoixi/TR-FinTable", name="full")
+ example = ds['train'][0]
+ ```
+
+ ### Roboflow Dataset Page
+ [https://universe.roboflow.com/tables-cfun2/pdf-exporter/dataset/8](https://universe.roboflow.com/tables-cfun2/pdf-exporter/dataset/8?ref=roboflow2huggingface)
+
+ ### Citation
+
+ ```
+ @misc{
+ pdf-exporter_dataset,
+ title = { PDF Exporter Dataset },
+ type = { Open Source Dataset },
+ author = { Tables },
+ howpublished = { \url{ https://universe.roboflow.com/tables-cfun2/pdf-exporter } },
+ url = { https://universe.roboflow.com/tables-cfun2/pdf-exporter },
+ journal = { Roboflow Universe },
+ publisher = { Roboflow },
+ year = { 2025 },
+ month = { apr },
+ note = { visited on 2025-04-16 },
+ }
+ ```
+
+ ### License
+ CC BY 4.0
+
+ ### Dataset Summary
+ This dataset was exported via roboflow.com on April 16, 2025 at 1:40 PM GMT.
+
+ Roboflow is an end-to-end computer vision platform that helps you:
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate images and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state-of-the-art computer vision training notebooks you can use with this dataset,
+ visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 21 images.
+ Tables are annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Resize to 640x640 (Stretch)
+
+ No image augmentation techniques were applied.
+
+
+
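Building on the How to Use snippet in the README above, here is a minimal sketch of reading the annotations back out of a loaded example. The field names (`objects`, `bbox`, `category`) come from the feature schema in `TR-FinTable.py`; the `trust_remote_code=True` flag is only needed on newer `datasets` releases that gate script-based loaders, and the printed values depend on the actual data.

```python
from datasets import load_dataset

# Load the "full" config defined by TR-FinTable.py; newer `datasets` versions
# require trust_remote_code=True for repositories that ship a loading script.
ds = load_dataset("Hoixi/TR-FinTable", name="full", trust_remote_code=True)

example = ds["train"][0]
print(example["image_id"], example["width"], example["height"])

# "objects" is a Sequence feature: a dict of parallel lists (ids, areas, bboxes, category ids).
objects = example["objects"]
label_names = ds["train"].features["objects"].feature["category"].names
for bbox, cat_id in zip(objects["bbox"], objects["category"]):
    # COCO-style bbox: [x_min, y_min, width, height] in pixels of the 640x640 image
    print(label_names[cat_id], bbox)
```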
README.roboflow.txt ADDED
@@ -0,0 +1,28 @@
+
+ PDF Exporter - v8 2025-04-16 4:33pm
+ ==============================
+
+ This dataset was exported via roboflow.com on April 16, 2025 at 1:40 PM GMT.
+
+ Roboflow is an end-to-end computer vision platform that helps you:
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate images and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state-of-the-art computer vision training notebooks you can use with this dataset,
+ visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 21 images.
+ Tables are annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Resize to 640x640 (Stretch)
+
+ No image augmentation techniques were applied.
+
+
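The only pre-processing step listed is a stretch resize to 640x640. As a rough sketch of what that amounts to for a source page image (PIL's fixed-size resize ignores aspect ratio, which matches "Stretch"; the exact resampling filter Roboflow used is not stated, and the file names here are placeholders):

```python
from PIL import Image

# Stretch-resize a source page image to 640x640, ignoring aspect ratio.
# "page.png" / "page_640.png" are placeholder file names; the default
# resampling filter is used since the original export does not specify one.
img = Image.open("page.png").convert("RGB")
img_640 = img.resize((640, 640))
img_640.save("page_640.png")
```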
TR-FinTable.py ADDED
@@ -0,0 +1,145 @@
+ import collections
+ import json
+ import os
+
+ import datasets
+
+
+ _HOMEPAGE = "https://universe.roboflow.com/tables-cfun2/pdf-exporter/dataset/8"
+ _LICENSE = "CC BY 4.0"
+ _CITATION = """\
+ @misc{
+ pdf-exporter_dataset,
+ title = { PDF Exporter Dataset },
+ type = { Open Source Dataset },
+ author = { Tables },
+ howpublished = { \\url{ https://universe.roboflow.com/tables-cfun2/pdf-exporter } },
+ url = { https://universe.roboflow.com/tables-cfun2/pdf-exporter },
+ journal = { Roboflow Universe },
+ publisher = { Roboflow },
+ year = { 2025 },
+ month = { apr },
+ note = { visited on 2025-04-16 },
+ }
+ """
+ _CATEGORIES = ['table', 'table column', 'table row', 'table spanning cell']
+ _ANNOTATION_FILENAME = "_annotations.coco.json"
+
+
+ class TRFINTABLEConfig(datasets.BuilderConfig):
+     """Builder Config for TR-FinTable"""
+
+     def __init__(self, data_urls, **kwargs):
+         """
+         BuilderConfig for TR-FinTable.
+
+         Args:
+             data_urls: `dict`, name to url to download the zip file from.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(TRFINTABLEConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
+         self.data_urls = data_urls
+
+
+ class TRFINTABLE(datasets.GeneratorBasedBuilder):
+     """TR-FinTable object detection dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         TRFINTABLEConfig(
+             name="full",
+             description="Full version of TR-FinTable dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/Hoixi/TR-FinTable/resolve/main/data/train.zip",
+                 "validation": "https://huggingface.co/datasets/Hoixi/TR-FinTable/resolve/main/data/valid.zip",
+             },
+         ),
+         TRFINTABLEConfig(
+             name="mini",
+             description="Mini version of TR-FinTable dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/Hoixi/TR-FinTable/resolve/main/data/valid-mini.zip",
+                 "validation": "https://huggingface.co/datasets/Hoixi/TR-FinTable/resolve/main/data/valid-mini.zip",
+             },
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "image_id": datasets.Value("int64"),
+                 "image": datasets.Image(),
+                 "width": datasets.Value("int32"),
+                 "height": datasets.Value("int32"),
+                 "objects": datasets.Sequence(
+                     {
+                         "id": datasets.Value("int64"),
+                         "area": datasets.Value("int64"),
+                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+                         "category": datasets.ClassLabel(names=_CATEGORIES),
+                     }
+                 ),
+             }
+         )
+         return datasets.DatasetInfo(
+             features=features,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download_and_extract(self.config.data_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "folder_dir": data_files["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "folder_dir": data_files["validation"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, folder_dir):
+         def process_annot(annot, category_id_to_category):
+             return {
+                 "id": annot["id"],
+                 "area": annot["area"],
+                 "bbox": annot["bbox"],
+                 "category": category_id_to_category[annot["category_id"]],
+             }
+
+         image_id_to_image = {}
+         idx = 0
+
+         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
+         with open(annotation_filepath, "r") as f:
+             annotations = json.load(f)
+         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
+         image_id_to_annotations = collections.defaultdict(list)
+         for annot in annotations["annotations"]:
+             image_id_to_annotations[annot["image_id"]].append(annot)
+         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
+
+         for filename in os.listdir(folder_dir):
+             filepath = os.path.join(folder_dir, filename)
+             if filename in filename_to_image:
+                 image = filename_to_image[filename]
+                 objects = [
+                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
+                 ]
+                 with open(filepath, "rb") as f:
+                     image_bytes = f.read()
+                 yield idx, {
+                     "image_id": image["id"],
+                     "image": {"path": filepath, "bytes": image_bytes},
+                     "width": image["width"],
+                     "height": image["height"],
+                     "objects": objects,
+                 }
+                 idx += 1
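The `_generate_examples` method above is essentially a join between `_annotations.coco.json` and the image files in the same folder. For readers who want to inspect an extracted split without going through the `datasets` builder, here is a minimal standalone sketch of the same join (the `folder_dir` value is a placeholder for wherever a split zip has been extracted):

```python
import collections
import json
import os

# Standalone sketch of what _generate_examples does for one extracted split
# folder (e.g. the unzipped data/train.zip). "train" is a placeholder path.
folder_dir = "train"

with open(os.path.join(folder_dir, "_annotations.coco.json")) as f:
    coco = json.load(f)

id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
per_image = collections.defaultdict(list)
for ann in coco["annotations"]:
    per_image[ann["image_id"]].append(ann)

for img in coco["images"]:
    boxes = [(id_to_name[a["category_id"]], a["bbox"]) for a in per_image[img["id"]]]
    print(img["file_name"], img["width"], img["height"], boxes)
```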
data/train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aca1d85a85928a9599bd367ed33bc2094ebe283747c10cbd82de3512f0362f80
+ size 881114
data/valid-mini.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29a6ab80d2f5e961091619605b23e2f7616681da3ee90747136307142c8dae5a
+ size 162655
data/valid.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f708d0257f558e2df2a1c214e6f0082e50492a79fac274dd4fa17278fb8853a9
+ size 347484
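Each `data/*.zip` entry above is stored as a Git LFS pointer: the repository itself tracks only the `oid sha256` and `size`, while the archive bytes live in LFS storage. A small sketch for checking a downloaded archive against its pointer (the local filename is a placeholder; the oid and size are copied from the `data/train.zip` pointer above):

```python
import hashlib
import os

# Verify a downloaded split archive against its Git LFS pointer.
path = "train.zip"  # placeholder local path
expected_oid = "aca1d85a85928a9599bd367ed33bc2094ebe283747c10cbd82de3512f0362f80"
expected_size = 881114

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest == expected_oid, "sha256 mismatch"
print("archive matches its LFS pointer")
```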
split_name_to_num_samples.json ADDED
@@ -0,0 +1 @@
+ {"valid": 6, "train": 15}
thumbnail.jpg ADDED

Git LFS Details

  • SHA256: 4a428e6989d82fd3ef4aea8ea91c3f88d905ea6ecf86d296288b32e2f8294253
  • Pointer size: 131 Bytes
  • Size of remote file: 106 kB