---
license: cc-by-sa-3.0
task_categories:
- table-question-answering
- question-answering
language:
- en
tags:
- documents
- tables
- VQA
pretty_name: WikiDT
size_categories:
- 100K<n<1M
---
# WikiDT: Wikipedia Table Document dataset for table extraction and visual question answering

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

WikiDT provides multi-level annotations and labels for question answering over document images. Because every question is answered from a table shown on the image, and the corresponding table annotations are included to help diagnose models and decompose the problem, WikiDT can also be used directly as a table recognition dataset.

The dataset contains 16,887 Wikipedia screenshots, segmented into 54,032 subpages since the full screenshots can be very long. In total there are 159,905 tables and 70,652 question-answer samples. Each QA sample is a `<question, answer, full-page screenshot filename>` triplet, additionally annotated with retrieval labels (which subpage and which table contain the answer); 53,698 QA samples also carry a SQL annotation.

For each subpage, OCR and table extraction annotations are available from two sources. The ground-truth table annotations were recorded while rendering the screenshots. To keep the dataset realistic, we also requested OCR and table extraction results from [AWS Textract](https://aws.amazon.com/textract/) for each subpage (results obtained between Feb 28 and Mar 6, 2023).

### Supported Tasks and Leaderboards

- **Table/visual question answering** (`table-question-answering`, `question-answering`): answer a question from a table shown in a page screenshot.
- **Table detection and table structure recognition**, using the annotations under `WikiTableExtraction`.

### Languages

The dataset is in English (`en`).

## Dataset Structure

The WikiDT dataset has the following file structure:

```
+--WikiDT-dataset
|  +--WikiTableExtraction
|  |  +--detection
|  |  |  +--images               # subpage images
|  |  |  +--train                # xml table bbox annotations
|  |  |  +--test                 # xml table bbox annotations
|  |  |  +--val                  # xml table bbox annotations
|  |  |  images_filelist.txt     # index of 54032 images
|  |  |  test_filelist.txt       # index of 5410 test samples
|  |  |  train_filelist.txt      # index of 43248 train samples
|  |  |  val_filelist.txt        # index of 5347 val samples
|  |  +--structure
|  |  |  +--images               # images cropped to the table region
|  |  |  +--train                # xml table bbox annotations
|  |  |  +--test                 # xml table bbox annotations
|  |  |  +--val                  # xml table bbox annotations
|  |  |  images_filelist.txt     # index of 159898 images
|  |  |  test_filelist.txt       # index of 15989 test samples
|  |  |  train_filelist.txt      # index of 129980 train samples
|  |  |  val_filelist.txt        # index of 15991 val samples
|  +--sample                     # TableVQA samples
|     +--images                  # full-page images
|     +--ocr                     # text and bboxes for the table content
|     +--tsv                     # extracted tables in tsv format
```
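
The `*_filelist.txt` indices make it easy to enumerate a split without walking directories. Below is a minimal sketch of reading an index and one extracted table; it assumes the filelists contain one filename per line and that the dataset sits under a local `WikiDT-dataset/` directory, so adjust paths to your copy.

```python
import pandas as pd

root = "WikiDT-dataset"  # assumed local download location

# Enumerate the detection train split (assumption: one image filename per line).
with open(f"{root}/WikiTableExtraction/detection/train_filelist.txt") as f:
    train_images = [line.strip() for line in f if line.strip()]
print(len(train_images), "train images")  # 43248 according to the index above

# Load one extracted table; the tsv files are plain tab-separated text.
table = pd.read_csv(f"{root}/sample/tsv/web/16301437_1.tsv", sep="\t")
print(table.head())
```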

### Table VQA annotation example

```
{'all_ocr_files_textract': ['ocr/textract/16301437_page_seg_0.json',
                            'ocr/textract/16301437_page_seg_1.json'],
 'all_ocr_files_web': ['ocr/web/16301437_page_seg_0.json',
                       'ocr/web/16301437_page_seg_1.json'],
 'all_table_files_textract': ['tsv/textract/16301437_page_0.tsv',
                              'tsv/textract/16301437_page_1.tsv'],
 'all_table_files_web': ['tsv/web/16301437_1.tsv', 'tsv/web/16301437_0.tsv'],
 'answer': [['don johnson buckeye st. classic']],
 'image': '16301437_page.png',
 'ocr_retrieval_file_textract': 'ocr/textract/16301437_page_seg_0.json',
 'ocr_retrieval_file_web': 'ocr/web/16301437_page_seg_0.json',
 'question': 'Name the Event which has a Score of 209-197?',
 'sample_id': '14190',
 'sql_str': "SELECT `event` FROM cur_table WHERE `score` = '209-197' ",
 'sub_page': ['16301437_page_seg_0.png', '16301437_page_seg_1.png'],
 'sub_page_retrieved': '16301437_page_seg_0.png',
 'subset': 'TFC',
 'table_id': '2-16301437-1',
 'table_retrieval_file_textract': 'tsv/textract/16301437_page_0.tsv',
 'table_retrieval_file_web': 'tsv/web/16301437_1.tsv'}
```
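
Because `sql_str` is executable SQL over the retrieved table, a sample can be checked end to end by loading the tsv into an in-memory SQLite database (SQLite accepts the backtick-quoted identifiers). This is only a sketch, not the official evaluation code: it assumes the annotation paths are relative to the `sample` directory, that the tsv column headers match the lowercase identifiers in `sql_str` (e.g. `event`, `score`), and that cell strings are normalized as in `answer`.

```python
import sqlite3
import pandas as pd

# Fields taken from the annotation example above.
sample = {
    "table_retrieval_file_web": "tsv/web/16301437_1.tsv",
    "sql_str": "SELECT `event` FROM cur_table WHERE `score` = '209-197' ",
    "answer": [["don johnson buckeye st. classic"]],
}

# Load the retrieved ground-truth ("web") table into SQLite as cur_table.
table = pd.read_csv("sample/" + sample["table_retrieval_file_web"], sep="\t")
conn = sqlite3.connect(":memory:")
table.to_sql("cur_table", conn, index=False)

rows = conn.execute(sample["sql_str"]).fetchall()
print(rows)  # expected to match sample["answer"]
```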

### Table Detection annotation example

```xml
<annotation>
  <folder />
  <filename>204_147_page_crop_5.png</filename>
  <source>WikiDT Dataset</source>
  <size>
    <width>788</width>
    <height>540.0</height>
    <depth>3</depth>
  </size>
  <object>
    <name>table</name>
    <rowspan />
    <colspan />
    <bndbox>
      <xmin>10</xmin>
      <ymin>10</ymin>
      <xmax>778</xmax>
      <ymax>530</ymax>
    </bndbox>
  </object>
  <object>
    <name>header row</name>
    <rowspan />
    <colspan />
    <bndbox>
      <xmin>10</xmin>
      <ymin>10</ymin>
      <xmax>778</xmax>
      <ymax>33</ymax>
    </bndbox>
  </object>
  <object>
    <name>header cell</name>
    <rowspan />
    <colspan>10</colspan>
    <bndbox>
      <xmin>12</xmin>
      <ymin>35</ymin>
      <xmax>776</xmax>
      <ymax>58</ymax>
    </bndbox>
  </object>
  <object>
    <name>table row</name>
    <rowspan />
    <colspan />
    <bndbox>
      <xmin>10</xmin>
      <ymin>60</ymin>
      <xmax>778</xmax>
      <ymax>530</ymax>
    </bndbox>
  </object>
</annotation>
```
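
The XML annotations follow a PASCAL VOC-style layout, so the standard library is enough to read them. A minimal parsing sketch (the annotation path below is illustrative):

```python
import xml.etree.ElementTree as ET

# Illustrative path; detection and structure splits use the same schema.
tree = ET.parse("WikiTableExtraction/structure/train/204_147_page_crop_5.xml")
root = tree.getroot()

width = int(float(root.findtext("size/width")))
height = int(float(root.findtext("size/height")))  # sizes may be floats, e.g. 540.0

for obj in root.iter("object"):
    name = obj.findtext("name")  # "table", "header row", "header cell", "table row", ...
    bb = obj.find("bndbox")
    box = [int(float(bb.findtext(tag))) for tag in ("xmin", "ymin", "xmax", "ymax")]
    colspan = obj.findtext("colspan") or None  # empty element -> "" -> None
    print(name, box, colspan)
```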

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license (`cc-by-sa-3.0`).

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]