vansh9878 committed on
Commit 825e978 · 1 Parent(s): 96f4217

files added

.gitignore ADDED
@@ -0,0 +1,12 @@
1
+ /getFiles/kaggleApi.py
2
+ .env
3
+ test.py
4
+ langchain_folder/__pycache__
5
+ /downloads
6
+ lang_model.py
7
+ /final
8
+ __pycache__
9
+ getOpenml.py
10
+ /input_folder
11
+ /processed
12
+ /static
LICENSE ADDED
@@ -0,0 +1,674 @@
1
+ GNU GENERAL PUBLIC LICENSE
2
+ Version 3, 29 June 2007
3
+
4
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5
+ Everyone is permitted to copy and distribute verbatim copies
6
+ of this license document, but changing it is not allowed.
7
+
8
+ Preamble
9
+
10
+ The GNU General Public License is a free, copyleft license for
11
+ software and other kinds of works.
12
+
13
+ The licenses for most software and other practical works are designed
14
+ to take away your freedom to share and change the works. By contrast,
15
+ the GNU General Public License is intended to guarantee your freedom to
16
+ share and change all versions of a program--to make sure it remains free
17
+ software for all its users. We, the Free Software Foundation, use the
18
+ GNU General Public License for most of our software; it applies also to
19
+ any other work released this way by its authors. You can apply it to
20
+ your programs, too.
21
+
22
+ When we speak of free software, we are referring to freedom, not
23
+ price. Our General Public Licenses are designed to make sure that you
24
+ have the freedom to distribute copies of free software (and charge for
25
+ them if you wish), that you receive source code or can get it if you
26
+ want it, that you can change the software or use pieces of it in new
27
+ free programs, and that you know you can do these things.
28
+
29
+ To protect your rights, we need to prevent others from denying you
30
+ these rights or asking you to surrender the rights. Therefore, you have
31
+ certain responsibilities if you distribute copies of the software, or if
32
+ you modify it: responsibilities to respect the freedom of others.
33
+
34
+ For example, if you distribute copies of such a program, whether
35
+ gratis or for a fee, you must pass on to the recipients the same
36
+ freedoms that you received. You must make sure that they, too, receive
37
+ or can get the source code. And you must show them these terms so they
38
+ know their rights.
39
+
40
+ Developers that use the GNU GPL protect your rights with two steps:
41
+ (1) assert copyright on the software, and (2) offer you this License
42
+ giving you legal permission to copy, distribute and/or modify it.
43
+
44
+ For the developers' and authors' protection, the GPL clearly explains
45
+ that there is no warranty for this free software. For both users' and
46
+ authors' sake, the GPL requires that modified versions be marked as
47
+ changed, so that their problems will not be attributed erroneously to
48
+ authors of previous versions.
49
+
50
+ Some devices are designed to deny users access to install or run
51
+ modified versions of the software inside them, although the manufacturer
52
+ can do so. This is fundamentally incompatible with the aim of
53
+ protecting users' freedom to change the software. The systematic
54
+ pattern of such abuse occurs in the area of products for individuals to
55
+ use, which is precisely where it is most unacceptable. Therefore, we
56
+ have designed this version of the GPL to prohibit the practice for those
57
+ products. If such problems arise substantially in other domains, we
58
+ stand ready to extend this provision to those domains in future versions
59
+ of the GPL, as needed to protect the freedom of users.
60
+
61
+ Finally, every program is threatened constantly by software patents.
62
+ States should not allow patents to restrict development and use of
63
+ software on general-purpose computers, but in those that do, we wish to
64
+ avoid the special danger that patents applied to a free program could
65
+ make it effectively proprietary. To prevent this, the GPL assures that
66
+ patents cannot be used to render the program non-free.
67
+
68
+ The precise terms and conditions for copying, distribution and
69
+ modification follow.
70
+
71
+ TERMS AND CONDITIONS
72
+
73
+ 0. Definitions.
74
+
75
+ "This License" refers to version 3 of the GNU General Public License.
76
+
77
+ "Copyright" also means copyright-like laws that apply to other kinds of
78
+ works, such as semiconductor masks.
79
+
80
+ "The Program" refers to any copyrightable work licensed under this
81
+ License. Each licensee is addressed as "you". "Licensees" and
82
+ "recipients" may be individuals or organizations.
83
+
84
+ To "modify" a work means to copy from or adapt all or part of the work
85
+ in a fashion requiring copyright permission, other than the making of an
86
+ exact copy. The resulting work is called a "modified version" of the
87
+ earlier work or a work "based on" the earlier work.
88
+
89
+ A "covered work" means either the unmodified Program or a work based
90
+ on the Program.
91
+
92
+ To "propagate" a work means to do anything with it that, without
93
+ permission, would make you directly or secondarily liable for
94
+ infringement under applicable copyright law, except executing it on a
95
+ computer or modifying a private copy. Propagation includes copying,
96
+ distribution (with or without modification), making available to the
97
+ public, and in some countries other activities as well.
98
+
99
+ To "convey" a work means any kind of propagation that enables other
100
+ parties to make or receive copies. Mere interaction with a user through
101
+ a computer network, with no transfer of a copy, is not conveying.
102
+
103
+ An interactive user interface displays "Appropriate Legal Notices"
104
+ to the extent that it includes a convenient and prominently visible
105
+ feature that (1) displays an appropriate copyright notice, and (2)
106
+ tells the user that there is no warranty for the work (except to the
107
+ extent that warranties are provided), that licensees may convey the
108
+ work under this License, and how to view a copy of this License. If
109
+ the interface presents a list of user commands or options, such as a
110
+ menu, a prominent item in the list meets this criterion.
111
+
112
+ 1. Source Code.
113
+
114
+ The "source code" for a work means the preferred form of the work
115
+ for making modifications to it. "Object code" means any non-source
116
+ form of a work.
117
+
118
+ A "Standard Interface" means an interface that either is an official
119
+ standard defined by a recognized standards body, or, in the case of
120
+ interfaces specified for a particular programming language, one that
121
+ is widely used among developers working in that language.
122
+
123
+ The "System Libraries" of an executable work include anything, other
124
+ than the work as a whole, that (a) is included in the normal form of
125
+ packaging a Major Component, but which is not part of that Major
126
+ Component, and (b) serves only to enable use of the work with that
127
+ Major Component, or to implement a Standard Interface for which an
128
+ implementation is available to the public in source code form. A
129
+ "Major Component", in this context, means a major essential component
130
+ (kernel, window system, and so on) of the specific operating system
131
+ (if any) on which the executable work runs, or a compiler used to
132
+ produce the work, or an object code interpreter used to run it.
133
+
134
+ The "Corresponding Source" for a work in object code form means all
135
+ the source code needed to generate, install, and (for an executable
136
+ work) run the object code and to modify the work, including scripts to
137
+ control those activities. However, it does not include the work's
138
+ System Libraries, or general-purpose tools or generally available free
139
+ programs which are used unmodified in performing those activities but
140
+ which are not part of the work. For example, Corresponding Source
141
+ includes interface definition files associated with source files for
142
+ the work, and the source code for shared libraries and dynamically
143
+ linked subprograms that the work is specifically designed to require,
144
+ such as by intimate data communication or control flow between those
145
+ subprograms and other parts of the work.
146
+
147
+ The Corresponding Source need not include anything that users
148
+ can regenerate automatically from other parts of the Corresponding
149
+ Source.
150
+
151
+ The Corresponding Source for a work in source code form is that
152
+ same work.
153
+
154
+ 2. Basic Permissions.
155
+
156
+ All rights granted under this License are granted for the term of
157
+ copyright on the Program, and are irrevocable provided the stated
158
+ conditions are met. This License explicitly affirms your unlimited
159
+ permission to run the unmodified Program. The output from running a
160
+ covered work is covered by this License only if the output, given its
161
+ content, constitutes a covered work. This License acknowledges your
162
+ rights of fair use or other equivalent, as provided by copyright law.
163
+
164
+ You may make, run and propagate covered works that you do not
165
+ convey, without conditions so long as your license otherwise remains
166
+ in force. You may convey covered works to others for the sole purpose
167
+ of having them make modifications exclusively for you, or provide you
168
+ with facilities for running those works, provided that you comply with
169
+ the terms of this License in conveying all material for which you do
170
+ not control copyright. Those thus making or running the covered works
171
+ for you must do so exclusively on your behalf, under your direction
172
+ and control, on terms that prohibit them from making any copies of
173
+ your copyrighted material outside their relationship with you.
174
+
175
+ Conveying under any other circumstances is permitted solely under
176
+ the conditions stated below. Sublicensing is not allowed; section 10
177
+ makes it unnecessary.
178
+
179
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180
+
181
+ No covered work shall be deemed part of an effective technological
182
+ measure under any applicable law fulfilling obligations under article
183
+ 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184
+ similar laws prohibiting or restricting circumvention of such
185
+ measures.
186
+
187
+ When you convey a covered work, you waive any legal power to forbid
188
+ circumvention of technological measures to the extent such circumvention
189
+ is effected by exercising rights under this License with respect to
190
+ the covered work, and you disclaim any intention to limit operation or
191
+ modification of the work as a means of enforcing, against the work's
192
+ users, your or third parties' legal rights to forbid circumvention of
193
+ technological measures.
194
+
195
+ 4. Conveying Verbatim Copies.
196
+
197
+ You may convey verbatim copies of the Program's source code as you
198
+ receive it, in any medium, provided that you conspicuously and
199
+ appropriately publish on each copy an appropriate copyright notice;
200
+ keep intact all notices stating that this License and any
201
+ non-permissive terms added in accord with section 7 apply to the code;
202
+ keep intact all notices of the absence of any warranty; and give all
203
+ recipients a copy of this License along with the Program.
204
+
205
+ You may charge any price or no price for each copy that you convey,
206
+ and you may offer support or warranty protection for a fee.
207
+
208
+ 5. Conveying Modified Source Versions.
209
+
210
+ You may convey a work based on the Program, or the modifications to
211
+ produce it from the Program, in the form of source code under the
212
+ terms of section 4, provided that you also meet all of these conditions:
213
+
214
+ a) The work must carry prominent notices stating that you modified
215
+ it, and giving a relevant date.
216
+
217
+ b) The work must carry prominent notices stating that it is
218
+ released under this License and any conditions added under section
219
+ 7. This requirement modifies the requirement in section 4 to
220
+ "keep intact all notices".
221
+
222
+ c) You must license the entire work, as a whole, under this
223
+ License to anyone who comes into possession of a copy. This
224
+ License will therefore apply, along with any applicable section 7
225
+ additional terms, to the whole of the work, and all its parts,
226
+ regardless of how they are packaged. This License gives no
227
+ permission to license the work in any other way, but it does not
228
+ invalidate such permission if you have separately received it.
229
+
230
+ d) If the work has interactive user interfaces, each must display
231
+ Appropriate Legal Notices; however, if the Program has interactive
232
+ interfaces that do not display Appropriate Legal Notices, your
233
+ work need not make them do so.
234
+
235
+ A compilation of a covered work with other separate and independent
236
+ works, which are not by their nature extensions of the covered work,
237
+ and which are not combined with it such as to form a larger program,
238
+ in or on a volume of a storage or distribution medium, is called an
239
+ "aggregate" if the compilation and its resulting copyright are not
240
+ used to limit the access or legal rights of the compilation's users
241
+ beyond what the individual works permit. Inclusion of a covered work
242
+ in an aggregate does not cause this License to apply to the other
243
+ parts of the aggregate.
244
+
245
+ 6. Conveying Non-Source Forms.
246
+
247
+ You may convey a covered work in object code form under the terms
248
+ of sections 4 and 5, provided that you also convey the
249
+ machine-readable Corresponding Source under the terms of this License,
250
+ in one of these ways:
251
+
252
+ a) Convey the object code in, or embodied in, a physical product
253
+ (including a physical distribution medium), accompanied by the
254
+ Corresponding Source fixed on a durable physical medium
255
+ customarily used for software interchange.
256
+
257
+ b) Convey the object code in, or embodied in, a physical product
258
+ (including a physical distribution medium), accompanied by a
259
+ written offer, valid for at least three years and valid for as
260
+ long as you offer spare parts or customer support for that product
261
+ model, to give anyone who possesses the object code either (1) a
262
+ copy of the Corresponding Source for all the software in the
263
+ product that is covered by this License, on a durable physical
264
+ medium customarily used for software interchange, for a price no
265
+ more than your reasonable cost of physically performing this
266
+ conveying of source, or (2) access to copy the
267
+ Corresponding Source from a network server at no charge.
268
+
269
+ c) Convey individual copies of the object code with a copy of the
270
+ written offer to provide the Corresponding Source. This
271
+ alternative is allowed only occasionally and noncommercially, and
272
+ only if you received the object code with such an offer, in accord
273
+ with subsection 6b.
274
+
275
+ d) Convey the object code by offering access from a designated
276
+ place (gratis or for a charge), and offer equivalent access to the
277
+ Corresponding Source in the same way through the same place at no
278
+ further charge. You need not require recipients to copy the
279
+ Corresponding Source along with the object code. If the place to
280
+ copy the object code is a network server, the Corresponding Source
281
+ may be on a different server (operated by you or a third party)
282
+ that supports equivalent copying facilities, provided you maintain
283
+ clear directions next to the object code saying where to find the
284
+ Corresponding Source. Regardless of what server hosts the
285
+ Corresponding Source, you remain obligated to ensure that it is
286
+ available for as long as needed to satisfy these requirements.
287
+
288
+ e) Convey the object code using peer-to-peer transmission, provided
289
+ you inform other peers where the object code and Corresponding
290
+ Source of the work are being offered to the general public at no
291
+ charge under subsection 6d.
292
+
293
+ A separable portion of the object code, whose source code is excluded
294
+ from the Corresponding Source as a System Library, need not be
295
+ included in conveying the object code work.
296
+
297
+ A "User Product" is either (1) a "consumer product", which means any
298
+ tangible personal property which is normally used for personal, family,
299
+ or household purposes, or (2) anything designed or sold for incorporation
300
+ into a dwelling. In determining whether a product is a consumer product,
301
+ doubtful cases shall be resolved in favor of coverage. For a particular
302
+ product received by a particular user, "normally used" refers to a
303
+ typical or common use of that class of product, regardless of the status
304
+ of the particular user or of the way in which the particular user
305
+ actually uses, or expects or is expected to use, the product. A product
306
+ is a consumer product regardless of whether the product has substantial
307
+ commercial, industrial or non-consumer uses, unless such uses represent
308
+ the only significant mode of use of the product.
309
+
310
+ "Installation Information" for a User Product means any methods,
311
+ procedures, authorization keys, or other information required to install
312
+ and execute modified versions of a covered work in that User Product from
313
+ a modified version of its Corresponding Source. The information must
314
+ suffice to ensure that the continued functioning of the modified object
315
+ code is in no case prevented or interfered with solely because
316
+ modification has been made.
317
+
318
+ If you convey an object code work under this section in, or with, or
319
+ specifically for use in, a User Product, and the conveying occurs as
320
+ part of a transaction in which the right of possession and use of the
321
+ User Product is transferred to the recipient in perpetuity or for a
322
+ fixed term (regardless of how the transaction is characterized), the
323
+ Corresponding Source conveyed under this section must be accompanied
324
+ by the Installation Information. But this requirement does not apply
325
+ if neither you nor any third party retains the ability to install
326
+ modified object code on the User Product (for example, the work has
327
+ been installed in ROM).
328
+
329
+ The requirement to provide Installation Information does not include a
330
+ requirement to continue to provide support service, warranty, or updates
331
+ for a work that has been modified or installed by the recipient, or for
332
+ the User Product in which it has been modified or installed. Access to a
333
+ network may be denied when the modification itself materially and
334
+ adversely affects the operation of the network or violates the rules and
335
+ protocols for communication across the network.
336
+
337
+ Corresponding Source conveyed, and Installation Information provided,
338
+ in accord with this section must be in a format that is publicly
339
+ documented (and with an implementation available to the public in
340
+ source code form), and must require no special password or key for
341
+ unpacking, reading or copying.
342
+
343
+ 7. Additional Terms.
344
+
345
+ "Additional permissions" are terms that supplement the terms of this
346
+ License by making exceptions from one or more of its conditions.
347
+ Additional permissions that are applicable to the entire Program shall
348
+ be treated as though they were included in this License, to the extent
349
+ that they are valid under applicable law. If additional permissions
350
+ apply only to part of the Program, that part may be used separately
351
+ under those permissions, but the entire Program remains governed by
352
+ this License without regard to the additional permissions.
353
+
354
+ When you convey a copy of a covered work, you may at your option
355
+ remove any additional permissions from that copy, or from any part of
356
+ it. (Additional permissions may be written to require their own
357
+ removal in certain cases when you modify the work.) You may place
358
+ additional permissions on material, added by you to a covered work,
359
+ for which you have or can give appropriate copyright permission.
360
+
361
+ Notwithstanding any other provision of this License, for material you
362
+ add to a covered work, you may (if authorized by the copyright holders of
363
+ that material) supplement the terms of this License with terms:
364
+
365
+ a) Disclaiming warranty or limiting liability differently from the
366
+ terms of sections 15 and 16 of this License; or
367
+
368
+ b) Requiring preservation of specified reasonable legal notices or
369
+ author attributions in that material or in the Appropriate Legal
370
+ Notices displayed by works containing it; or
371
+
372
+ c) Prohibiting misrepresentation of the origin of that material, or
373
+ requiring that modified versions of such material be marked in
374
+ reasonable ways as different from the original version; or
375
+
376
+ d) Limiting the use for publicity purposes of names of licensors or
377
+ authors of the material; or
378
+
379
+ e) Declining to grant rights under trademark law for use of some
380
+ trade names, trademarks, or service marks; or
381
+
382
+ f) Requiring indemnification of licensors and authors of that
383
+ material by anyone who conveys the material (or modified versions of
384
+ it) with contractual assumptions of liability to the recipient, for
385
+ any liability that these contractual assumptions directly impose on
386
+ those licensors and authors.
387
+
388
+ All other non-permissive additional terms are considered "further
389
+ restrictions" within the meaning of section 10. If the Program as you
390
+ received it, or any part of it, contains a notice stating that it is
391
+ governed by this License along with a term that is a further
392
+ restriction, you may remove that term. If a license document contains
393
+ a further restriction but permits relicensing or conveying under this
394
+ License, you may add to a covered work material governed by the terms
395
+ of that license document, provided that the further restriction does
396
+ not survive such relicensing or conveying.
397
+
398
+ If you add terms to a covered work in accord with this section, you
399
+ must place, in the relevant source files, a statement of the
400
+ additional terms that apply to those files, or a notice indicating
401
+ where to find the applicable terms.
402
+
403
+ Additional terms, permissive or non-permissive, may be stated in the
404
+ form of a separately written license, or stated as exceptions;
405
+ the above requirements apply either way.
406
+
407
+ 8. Termination.
408
+
409
+ You may not propagate or modify a covered work except as expressly
410
+ provided under this License. Any attempt otherwise to propagate or
411
+ modify it is void, and will automatically terminate your rights under
412
+ this License (including any patent licenses granted under the third
413
+ paragraph of section 11).
414
+
415
+ However, if you cease all violation of this License, then your
416
+ license from a particular copyright holder is reinstated (a)
417
+ provisionally, unless and until the copyright holder explicitly and
418
+ finally terminates your license, and (b) permanently, if the copyright
419
+ holder fails to notify you of the violation by some reasonable means
420
+ prior to 60 days after the cessation.
421
+
422
+ Moreover, your license from a particular copyright holder is
423
+ reinstated permanently if the copyright holder notifies you of the
424
+ violation by some reasonable means, this is the first time you have
425
+ received notice of violation of this License (for any work) from that
426
+ copyright holder, and you cure the violation prior to 30 days after
427
+ your receipt of the notice.
428
+
429
+ Termination of your rights under this section does not terminate the
430
+ licenses of parties who have received copies or rights from you under
431
+ this License. If your rights have been terminated and not permanently
432
+ reinstated, you do not qualify to receive new licenses for the same
433
+ material under section 10.
434
+
435
+ 9. Acceptance Not Required for Having Copies.
436
+
437
+ You are not required to accept this License in order to receive or
438
+ run a copy of the Program. Ancillary propagation of a covered work
439
+ occurring solely as a consequence of using peer-to-peer transmission
440
+ to receive a copy likewise does not require acceptance. However,
441
+ nothing other than this License grants you permission to propagate or
442
+ modify any covered work. These actions infringe copyright if you do
443
+ not accept this License. Therefore, by modifying or propagating a
444
+ covered work, you indicate your acceptance of this License to do so.
445
+
446
+ 10. Automatic Licensing of Downstream Recipients.
447
+
448
+ Each time you convey a covered work, the recipient automatically
449
+ receives a license from the original licensors, to run, modify and
450
+ propagate that work, subject to this License. You are not responsible
451
+ for enforcing compliance by third parties with this License.
452
+
453
+ An "entity transaction" is a transaction transferring control of an
454
+ organization, or substantially all assets of one, or subdividing an
455
+ organization, or merging organizations. If propagation of a covered
456
+ work results from an entity transaction, each party to that
457
+ transaction who receives a copy of the work also receives whatever
458
+ licenses to the work the party's predecessor in interest had or could
459
+ give under the previous paragraph, plus a right to possession of the
460
+ Corresponding Source of the work from the predecessor in interest, if
461
+ the predecessor has it or can get it with reasonable efforts.
462
+
463
+ You may not impose any further restrictions on the exercise of the
464
+ rights granted or affirmed under this License. For example, you may
465
+ not impose a license fee, royalty, or other charge for exercise of
466
+ rights granted under this License, and you may not initiate litigation
467
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
468
+ any patent claim is infringed by making, using, selling, offering for
469
+ sale, or importing the Program or any portion of it.
470
+
471
+ 11. Patents.
472
+
473
+ A "contributor" is a copyright holder who authorizes use under this
474
+ License of the Program or a work on which the Program is based. The
475
+ work thus licensed is called the contributor's "contributor version".
476
+
477
+ A contributor's "essential patent claims" are all patent claims
478
+ owned or controlled by the contributor, whether already acquired or
479
+ hereafter acquired, that would be infringed by some manner, permitted
480
+ by this License, of making, using, or selling its contributor version,
481
+ but do not include claims that would be infringed only as a
482
+ consequence of further modification of the contributor version. For
483
+ purposes of this definition, "control" includes the right to grant
484
+ patent sublicenses in a manner consistent with the requirements of
485
+ this License.
486
+
487
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
488
+ patent license under the contributor's essential patent claims, to
489
+ make, use, sell, offer for sale, import and otherwise run, modify and
490
+ propagate the contents of its contributor version.
491
+
492
+ In the following three paragraphs, a "patent license" is any express
493
+ agreement or commitment, however denominated, not to enforce a patent
494
+ (such as an express permission to practice a patent or covenant not to
495
+ sue for patent infringement). To "grant" such a patent license to a
496
+ party means to make such an agreement or commitment not to enforce a
497
+ patent against the party.
498
+
499
+ If you convey a covered work, knowingly relying on a patent license,
500
+ and the Corresponding Source of the work is not available for anyone
501
+ to copy, free of charge and under the terms of this License, through a
502
+ publicly available network server or other readily accessible means,
503
+ then you must either (1) cause the Corresponding Source to be so
504
+ available, or (2) arrange to deprive yourself of the benefit of the
505
+ patent license for this particular work, or (3) arrange, in a manner
506
+ consistent with the requirements of this License, to extend the patent
507
+ license to downstream recipients. "Knowingly relying" means you have
508
+ actual knowledge that, but for the patent license, your conveying the
509
+ covered work in a country, or your recipient's use of the covered work
510
+ in a country, would infringe one or more identifiable patents in that
511
+ country that you have reason to believe are valid.
512
+
513
+ If, pursuant to or in connection with a single transaction or
514
+ arrangement, you convey, or propagate by procuring conveyance of, a
515
+ covered work, and grant a patent license to some of the parties
516
+ receiving the covered work authorizing them to use, propagate, modify
517
+ or convey a specific copy of the covered work, then the patent license
518
+ you grant is automatically extended to all recipients of the covered
519
+ work and works based on it.
520
+
521
+ A patent license is "discriminatory" if it does not include within
522
+ the scope of its coverage, prohibits the exercise of, or is
523
+ conditioned on the non-exercise of one or more of the rights that are
524
+ specifically granted under this License. You may not convey a covered
525
+ work if you are a party to an arrangement with a third party that is
526
+ in the business of distributing software, under which you make payment
527
+ to the third party based on the extent of your activity of conveying
528
+ the work, and under which the third party grants, to any of the
529
+ parties who would receive the covered work from you, a discriminatory
530
+ patent license (a) in connection with copies of the covered work
531
+ conveyed by you (or copies made from those copies), or (b) primarily
532
+ for and in connection with specific products or compilations that
533
+ contain the covered work, unless you entered into that arrangement,
534
+ or that patent license was granted, prior to 28 March 2007.
535
+
536
+ Nothing in this License shall be construed as excluding or limiting
537
+ any implied license or other defenses to infringement that may
538
+ otherwise be available to you under applicable patent law.
539
+
540
+ 12. No Surrender of Others' Freedom.
541
+
542
+ If conditions are imposed on you (whether by court order, agreement or
543
+ otherwise) that contradict the conditions of this License, they do not
544
+ excuse you from the conditions of this License. If you cannot convey a
545
+ covered work so as to satisfy simultaneously your obligations under this
546
+ License and any other pertinent obligations, then as a consequence you may
547
+ not convey it at all. For example, if you agree to terms that obligate you
548
+ to collect a royalty for further conveying from those to whom you convey
549
+ the Program, the only way you could satisfy both those terms and this
550
+ License would be to refrain entirely from conveying the Program.
551
+
552
+ 13. Use with the GNU Affero General Public License.
553
+
554
+ Notwithstanding any other provision of this License, you have
555
+ permission to link or combine any covered work with a work licensed
556
+ under version 3 of the GNU Affero General Public License into a single
557
+ combined work, and to convey the resulting work. The terms of this
558
+ License will continue to apply to the part which is the covered work,
559
+ but the special requirements of the GNU Affero General Public License,
560
+ section 13, concerning interaction through a network will apply to the
561
+ combination as such.
562
+
563
+ 14. Revised Versions of this License.
564
+
565
+ The Free Software Foundation may publish revised and/or new versions of
566
+ the GNU General Public License from time to time. Such new versions will
567
+ be similar in spirit to the present version, but may differ in detail to
568
+ address new problems or concerns.
569
+
570
+ Each version is given a distinguishing version number. If the
571
+ Program specifies that a certain numbered version of the GNU General
572
+ Public License "or any later version" applies to it, you have the
573
+ option of following the terms and conditions either of that numbered
574
+ version or of any later version published by the Free Software
575
+ Foundation. If the Program does not specify a version number of the
576
+ GNU General Public License, you may choose any version ever published
577
+ by the Free Software Foundation.
578
+
579
+ If the Program specifies that a proxy can decide which future
580
+ versions of the GNU General Public License can be used, that proxy's
581
+ public statement of acceptance of a version permanently authorizes you
582
+ to choose that version for the Program.
583
+
584
+ Later license versions may give you additional or different
585
+ permissions. However, no additional obligations are imposed on any
586
+ author or copyright holder as a result of your choosing to follow a
587
+ later version.
588
+
589
+ 15. Disclaimer of Warranty.
590
+
591
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599
+
600
+ 16. Limitation of Liability.
601
+
602
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610
+ SUCH DAMAGES.
611
+
612
+ 17. Interpretation of Sections 15 and 16.
613
+
614
+ If the disclaimer of warranty and limitation of liability provided
615
+ above cannot be given local legal effect according to their terms,
616
+ reviewing courts shall apply local law that most closely approximates
617
+ an absolute waiver of all civil liability in connection with the
618
+ Program, unless a warranty or assumption of liability accompanies a
619
+ copy of the Program in return for a fee.
620
+
621
+ END OF TERMS AND CONDITIONS
622
+
623
+ How to Apply These Terms to Your New Programs
624
+
625
+ If you develop a new program, and you want it to be of the greatest
626
+ possible use to the public, the best way to achieve this is to make it
627
+ free software which everyone can redistribute and change under these terms.
628
+
629
+ To do so, attach the following notices to the program. It is safest
630
+ to attach them to the start of each source file to most effectively
631
+ state the exclusion of warranty; and each file should have at least
632
+ the "copyright" line and a pointer to where the full notice is found.
633
+
634
+ <one line to give the program's name and a brief idea of what it does.>
635
+ Copyright (C) <year> <name of author>
636
+
637
+ This program is free software: you can redistribute it and/or modify
638
+ it under the terms of the GNU General Public License as published by
639
+ the Free Software Foundation, either version 3 of the License, or
640
+ (at your option) any later version.
641
+
642
+ This program is distributed in the hope that it will be useful,
643
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
644
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645
+ GNU General Public License for more details.
646
+
647
+ You should have received a copy of the GNU General Public License
648
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
649
+
650
+ Also add information on how to contact you by electronic and paper mail.
651
+
652
+ If the program does terminal interaction, make it output a short
653
+ notice like this when it starts in an interactive mode:
654
+
655
+ <program> Copyright (C) <year> <name of author>
656
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657
+ This is free software, and you are welcome to redistribute it
658
+ under certain conditions; type `show c' for details.
659
+
660
+ The hypothetical commands `show w' and `show c' should show the appropriate
661
+ parts of the General Public License. Of course, your program's commands
662
+ might be different; for a GUI interface, you would use an "about box".
663
+
664
+ You should also get your employer (if you work as a programmer) or school,
665
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
666
+ For more information on this, and how to apply and follow the GNU GPL, see
667
+ <https://www.gnu.org/licenses/>.
668
+
669
+ The GNU General Public License does not permit incorporating your program
670
+ into proprietary programs. If your program is a subroutine library, you
671
+ may consider it more useful to permit linking proprietary applications with
672
+ the library. If this is what you want to do, use the GNU Lesser General
673
+ Public License instead of this License. But first, please read
674
+ <https://www.gnu.org/licenses/why-not-lgpl.html>.
README.md CHANGED
@@ -1,10 +1 @@
1
- ---
2
- title: Dataset
3
- emoji: 👀
4
- colorFrom: yellow
5
- colorTo: indigo
6
- sdk: docker
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
1
+ # DatasetCreator
backend.py ADDED
@@ -0,0 +1,101 @@
1
+ from flask import Flask, request, jsonify, send_file
2
+ from flask_cors import CORS
3
+ import os
4
+ from langchain_folder.main import ReturnKeywordsfromPrompt
5
+ from dotenv import load_dotenv
6
+ import pandas as pd
7
+ import matplotlib.pyplot as plt
8
+ import seaborn as sns
9
+
10
+ load_dotenv()
11
+
12
+ app = Flask(__name__)
13
+ CORS(app)
14
+
15
+ CSV_FILE_PATH = os.getenv('file_path')
16
+ GRAPH_DIR = './static/graphs'
17
+
18
+
19
+ @app.route('/api/search', methods=['POST'])
20
+ def search():
21
+ data = request.get_json()
22
+ query = data.get('query', '')
23
+ keywords = ReturnKeywordsfromPrompt(query)
24
+ return jsonify({"status": "success", "queryReceived": query})
25
+
26
+
27
+ def generate_graphs_from_csv(csv_path, output_dir):
28
+ df = pd.read_csv(csv_path)
29
+ os.makedirs(output_dir, exist_ok=True)
30
+
31
+ numeric_cols = df.select_dtypes(include='number').columns.tolist()
32
+ categorical_cols = df.select_dtypes(include='object').columns.tolist()
33
+ graph_paths = []
34
+ print(categorical_cols)
35
+
36
+ if len(numeric_cols) >= 1:
37
+ plt.figure(figsize=(4, 3))
38
+ sns.histplot(df[numeric_cols[0]], kde=True)
39
+ plt.title(f'{numeric_cols[0]} Distribution')
40
+ plt.tight_layout()
41
+ path = f'{output_dir}/graph_1_hist.png'
42
+ plt.savefig(path)
43
+ graph_paths.append(path)
44
+
45
+ if len(numeric_cols) >= 2 and categorical_cols:
46
+ plt.figure(figsize=(4, 3))
47
+ sns.boxplot(data=df, x=categorical_cols[0], y=numeric_cols[1])
48
+ plt.title(f'{numeric_cols[1]} by {categorical_cols[0]}')
49
+ plt.tight_layout()
50
+ path = f'{output_dir}/graph_2_box.png'
51
+ plt.savefig(path)
52
+ graph_paths.append(path)
53
+
54
+ if categorical_cols:
55
+ plt.figure(figsize=(4, 3))
56
+ sns.countplot(data=df, x=categorical_cols[0])
57
+ plt.title(f'{categorical_cols[0]} Distribution')
58
+ plt.tight_layout()
59
+ path = f'{output_dir}/graph_3_count.png'
60
+ plt.savefig(path)
61
+ graph_paths.append(path)
62
+
63
+ if len(numeric_cols) >= 2:
64
+ plt.figure(figsize=(4, 3))
65
+ sns.scatterplot(data=df, x=numeric_cols[0], y=numeric_cols[1])
66
+ plt.title(f'{numeric_cols[0]} vs {numeric_cols[1]}')
67
+ plt.tight_layout()
68
+ path = f'{output_dir}/graph_4_scatter.png'
69
+ plt.savefig(path)
70
+ graph_paths.append(path)
71
+
72
+ return graph_paths
73
+
74
+
75
+ @app.route('/api/get_csv', methods=['GET'])
76
+ def get_csv():
77
+ try:
78
+ if not os.path.exists(CSV_FILE_PATH):
79
+ return jsonify({"error": "CSV file not found"}), 404
80
+ with open(CSV_FILE_PATH, "r", encoding="utf-8") as f:
81
+ return f.read(), 200, {
82
+ "Content-Type": "text/csv",
83
+ "Content-Disposition": "inline; filename=dataset.csv"
84
+ }
85
+ except Exception as e:
86
+ return jsonify({"error": str(e)}), 500
87
+
88
+
89
+ @app.route('/api/download_csv', methods=['GET'])
90
+ def download_csv():
91
+ return send_file(CSV_FILE_PATH, as_attachment=True)
92
+
93
+
94
+ @app.route('/api/get_graphs', methods=['GET'])
95
+ def get_graphs():
96
+ paths = generate_graphs_from_csv(CSV_FILE_PATH, GRAPH_DIR)
97
+ return jsonify({"graphs": [p.replace("./static", "/static") for p in paths]})
98
+
99
+
100
+ if __name__ == '__main__':
101
+ app.run(debug=True)
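A minimal client-side sketch for exercising the routes defined in backend.py, assuming the dev server is running at Flask's default address (http://127.0.0.1:5000) and that the requests package is installed (it is not part of this commit):

import requests

BASE = "http://127.0.0.1:5000"  # Flask default for app.run(debug=True); adjust if needed

# POST a natural-language query; the route extracts keywords via the LangChain helper
# and currently echoes the query back
resp = requests.post(f"{BASE}/api/search", json={"query": "twitter sentiment analysis"})
print(resp.json())  # {"status": "success", "queryReceived": "twitter sentiment analysis"}

# Fetch the CSV pointed to by the file_path environment variable
csv_resp = requests.get(f"{BASE}/api/get_csv")
if csv_resp.ok:
    print(csv_resp.text.splitlines()[0])  # header row of the dataset

# Render the summary graphs and list their static paths
graphs = requests.get(f"{BASE}/api/get_graphs").json()
print(graphs["graphs"])  # e.g. ["/static/graphs/graph_1_hist.png", ...]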
cleanDataset.py ADDED
@@ -0,0 +1,165 @@
1
+ import pandas as pd
2
+ import glob
3
+ import re
4
+ from itertools import combinations
5
+ import os
6
+ from rapidfuzz import process, fuzz
7
+ import getLabels
8
+
9
+
10
+ def get_fuzzy_common_columns(cols_list, threshold=75):
11
+ """
12
+ Given a list of sets of column names (normalized),
13
+ return the set of column names that are 'fuzzy common'
14
+ across all lists.
15
+ """
16
+ # Start with columns from the first dataset
17
+ base = cols_list[0]
18
+ common = set()
19
+
20
+ for col in base:
21
+ match_all = True
22
+ for other in cols_list[1:]:
23
+ match, score, _ = process.extractOne(col, other, scorer=fuzz.token_sort_ratio)
24
+ if score < threshold:
25
+ match_all = False
26
+ break
27
+ if match_all:
28
+ common.add(col)
29
+ return common
30
+
31
+
32
+ def sortFiles(dfs):
33
+ unique_dfs = []
34
+ seen = []
35
+
36
+ for i, df1 in enumerate(dfs):
37
+ duplicate = False
38
+ for j in seen:
39
+ df2 = dfs[j]
40
+
41
+ # Check if same shape
42
+ if df1.shape != df2.shape:
43
+ continue
44
+
45
+ if df1.reset_index(drop=True).equals(df2.reset_index(drop=True)):
46
+ duplicate = True
47
+ break
48
+
49
+ if not duplicate:
50
+ unique_dfs.append(df1)
51
+ seen.append(i)
52
+
53
+ return unique_dfs
54
+
55
+
56
+ def normalize(col):
57
+ return re.sub(r'[^a-z0-9]', '', col.lower())
58
+
59
+ def clean(query):
60
+ os.makedirs("./final", exist_ok=True)
61
+
62
+ csv_files = glob.glob("downloads/"+query+"/*.csv")
63
+ if len(csv_files)<1:
64
+ print("No csv file found!!")
65
+ exit(0)
66
+ dfs=[]
67
+ skip=[]
68
+ for i,f in enumerate(csv_files):
69
+ try:
70
+ print(f"Reading {f}")
71
+ df = pd.read_csv(f)
72
+ dfs.append(df)
73
+ except Exception as e:
74
+ skip.append(i)
75
+ print(f"Failed to read {f}: {e}")
76
+ print(len(dfs))
77
+ dfs=sortFiles(dfs)
78
+ print(len(dfs))
79
+
80
+ labelList=getLabels.LabelsExtraction2(query,dfs,csv_files,skip)
81
+ print(labelList)
82
+ for i,df in enumerate(dfs):
83
+ if labelList[i] in df.columns:
84
+ df.rename(columns={labelList[i]:"label"},inplace=True)
85
+
86
+ # Step 2: Store normalized-to-original column mappings
87
+ normalized_cols = []
88
+ orig_col_maps = []
89
+
90
+ for df in dfs:
91
+ norm_to_orig = {}
92
+ norm_cols = []
93
+ for col in df.columns:
94
+ norm = normalize(col)
95
+ norm_cols.append(norm)
96
+ norm_to_orig[norm] = col
97
+ normalized_cols.append(set(norm_cols))
98
+ orig_col_maps.append(norm_to_orig)
99
+
100
+ # Step 3: Find combination with max common columns
101
+ max_common = set()
102
+ best_combo = []
103
+
104
+ for i in range(2, len(dfs) + 1):
105
+ for combo in combinations(range(len(dfs)), i):
106
+ selected_cols = [normalized_cols[j] for j in combo]
107
+ fuzzy_common = get_fuzzy_common_columns(selected_cols)
108
+ if len(fuzzy_common) >= len(max_common):
109
+ max_common = fuzzy_common
110
+ best_combo = combo
111
+
112
+
113
+ # Step 4: Harmonize columns and subset
114
+ aligned_dfs = []
115
+
116
+ for idx in best_combo:
117
+ df = dfs[idx]
118
+ original_cols = list(df.columns)
119
+ new_columns = {}
120
+
121
+ for std_col in max_common:
122
+ # Match this standard col to the most similar original column in this DataFrame
123
+ match, score, _ = process.extractOne(std_col, [normalize(col) for col in original_cols], scorer=fuzz.token_sort_ratio)
124
+
125
+ # Find the original column that corresponds to the matched normalized name
126
+ for col in original_cols:
127
+ if normalize(col) == match:
128
+ new_columns[col] = std_col # Map original -> standard
129
+ break
130
+
131
+ # Subset and rename
132
+ df_subset = df[list(new_columns.keys())].copy()
133
+ df_subset.rename(columns=new_columns, inplace=True)
134
+ aligned_dfs.append(df_subset)
135
+
136
+ # Step 5: Combine
137
+ combined_df = pd.concat(aligned_dfs, ignore_index=True)
138
+ print(best_combo)
139
+ # print(combined_df.head())
140
+
141
+ maxCount=0
142
+ idx=-1
143
+ for i in range(len(dfs)):
144
+ if dfs[i].index.size > maxCount:
145
+ maxCount=dfs[i].index.size
146
+ idx=i
147
+
148
+ flag=False
149
+ if maxCount>combined_df.index.size and len(dfs[idx].columns)>2:
150
+ # print("11")
151
+ flag=True
152
+ elif combined_df.index.size>maxCount and (len(dfs[idx].columns)-len(combined_df.columns))>3 and len(dfs[idx].columns)<7:
153
+ # print(len(dfs[idx].columns)-len(combined_df.columns))
154
+ flag=True
155
+
156
+ if flag:
157
+ dfs[idx].to_csv("./final/"+query+".csv", index=False)
158
+ print("The merged file was not up to the mark, so saved a single file... " + str(idx))
159
+ else:
160
+ combined_df.to_csv("./final/"+query+".csv", index=False)
161
+ print("Saved Merged file...")
162
+
163
+
164
+
165
+ clean("twitter sentiment analysis")
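A self-contained sketch of the fuzzy column matching that get_fuzzy_common_columns performs, using hypothetical column sets; it restates the loop inline rather than importing cleanDataset, because importing that module would trigger the module-level clean("twitter sentiment analysis") call above:

from rapidfuzz import process, fuzz

# Normalized column sets from three hypothetical datasets
cols_a = {"tweetid", "tweettext", "label"}
cols_b = {"tweetid", "text", "label"}
cols_c = {"id", "tweettext", "label"}

common = set()
for col in cols_a:
    # Keep a column only if every other dataset has a match scoring >= 75
    if all(process.extractOne(col, other, scorer=fuzz.token_sort_ratio)[1] >= 75
           for other in (cols_b, cols_c)):
        common.add(col)

print(common)  # {"label"} here; near-miss names like "tweettext" vs "text" score below 75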
clean_openml.py ADDED
@@ -0,0 +1,49 @@
1
+ import os
2
+ import csv
3
+
4
+
5
+
6
+ def clean(user_prompt):
7
+ input_directory = os.path.join("input_folder", user_prompt)
8
+ output_directory = "downloads/"+user_prompt
9
+
10
+ # Create the output directory if it doesn't exist
11
+ os.makedirs(output_directory, exist_ok=True)
12
+
13
+ # Loop through all files in the user-specified input directory
14
+ for filename in os.listdir(input_directory):
15
+ file_path = os.path.join(input_directory, filename)
16
+
17
+ # Skip directories or hidden files
18
+ if os.path.isdir(file_path) or filename.startswith("."):
19
+ continue
20
+
21
+ # Output file path (.csv extension added)
22
+ output_file = os.path.join(output_directory, filename + ".csv")
23
+
24
+ with open(file_path, "r", encoding="utf-8", errors="ignore") as file:
25
+ lines = file.readlines()
26
+
27
+ headers = []
28
+ data_rows = []
29
+ data_started = False
30
+
31
+ for line in lines:
32
+ line = line.strip()
33
+ if line.startswith("@ATTRIBUTE"):
34
+ parts = line.split()
35
+ if len(parts) >= 2:
36
+ headers.append(parts[1])
37
+ elif line.startswith("@DATA"):
38
+ data_started = True
39
+ elif data_started and line:
40
+ data_rows.append(line.split(","))
41
+
42
+ # Write to CSV
43
+ with open(output_file, "w", newline="") as csvfile:
44
+ writer = csv.writer(csvfile)
45
+ writer.writerow(headers)
46
+ writer.writerows(data_rows)
47
+
48
+ print(f"✅ CSV file created for: {filename} → {output_file}")
49
+
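A quick way to try the ARFF-style parser in clean_openml.py is to drop a small uppercase-keyword file into input_folder/<query> and call clean(); this is only a sketch, and the query name "demo" and file name are made up:

import os
import clean_openml

sample = """@RELATION demo
@ATTRIBUTE sepal_length NUMERIC
@ATTRIBUTE class {a,b}
@DATA
5.1,a
4.9,b
"""

os.makedirs("input_folder/demo", exist_ok=True)
with open("input_folder/demo/iris_subset", "w", encoding="utf-8") as f:
    f.write(sample)

clean_openml.clean("demo")
# -> downloads/demo/iris_subset.csv with header "sepal_length,class" and two data rows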
csv_merge_openai.py ADDED
@@ -0,0 +1,133 @@
1
+ # import os
2
+ # import pandas as pd
3
+ # from openai import OpenAI
4
+ # from dotenv import load_dotenv
5
+
6
+ # # Load environment variables
7
+ # load_dotenv()
8
+
9
+ # # Setup OpenAI client
10
+ # client = OpenAI(
11
+ # api_key=os.getenv("OPENAI_API_KEY"),
12
+ # base_url=os.getenv("OPENAI_API_BASE", "")
13
+ # )
14
+
15
+ # # Set the folder containing CSV files
16
+ # folder_path = 'house'
17
+ # csv_files = [f for f in os.listdir(folder_path) if f.endswith(".csv")]
18
+
19
+ # # Loop through each CSV file and process
20
+ # for file in csv_files:
21
+ # file_path = os.path.join(folder_path, file)
22
+ # try:
23
+ # df = pd.read_csv(file_path, nrows=1) # Read just the header row
24
+ # columns = df.columns.tolist()
25
+ # columns_str = ", ".join(columns)
26
+
27
+ # # Formulate prompt
28
+ # prompt = (
29
+ # f"The following are column labels from a dataset: {columns_str}.\n"
30
+ # "Among all the labels, return a list of labels which can be clubbed and merged into 1 label. Note that the labels which can be merged must belong to different datasets; do not merge labels of the same dataset."
31
+ # )
32
+
33
+ # # Ask OpenAI for insight
34
+ # response = client.chat.completions.create(
35
+ # model="gpt-4",
36
+ # messages=[
37
+ # {"role": "system", "content": "You are a helpful data analyst."},
38
+ # {"role": "user", "content": prompt}
39
+ # ],
40
+ # temperature=0.3
41
+ # )
42
+
43
+ # print(f"\n🔍 File: {file}")
44
+ # print("📋 Columns:", columns_str)
45
+ # print("💡 AI Insight:", response.choices[0].message.content)
46
+
47
+ # except Exception as e:
48
+ # print(f"❌ Error processing {file}: {e}")
49
+
50
+ import os
51
+ import json
52
+ import pandas as pd
53
+ from openai import OpenAI
54
+ from dotenv import load_dotenv
55
+
56
+ load_dotenv()
57
+
58
+ client = OpenAI(
59
+ api_key=os.getenv("OPENAI_API_KEY"),
60
+ base_url=os.getenv("OPENAI_API_BASE", "")
61
+ )
62
+
63
+ folder_path = 'downloads/house'
64
+ csv_files = [f for f in os.listdir(folder_path) if f.endswith(".csv")]
65
+
66
+ # Collect all column headers
67
+ all_columns = {}
68
+ for file in csv_files:
69
+ try:
70
+ df = pd.read_csv(os.path.join(folder_path, file), nrows=1)
71
+ all_columns[file] = df.columns.tolist()
72
+ except Exception as e:
73
+ print(f"❌ Could not read {file}: {e}")
74
+
75
+ flattened_cols = [f"{file}: {', '.join(cols)}" for file, cols in all_columns.items()]
76
+ prompt = (
77
+ "The following are column headers from multiple CSV datasets:\n\n"
78
+ + f"{flattened_cols}"
79
+ + "\n\nIdentify which labels across different datasets can be considered equivalent and merged. "
80
+ "Return only a valid JSON dictionary where keys are standard labels and values are lists of corresponding labels to rename. No explanation."
81
+ )
82
+
83
+ response = client.chat.completions.create(
84
+ model="gpt-4",
85
+ messages=[
86
+ {"role": "system", "content": "You are a helpful data analyst."},
87
+ {"role": "user", "content": prompt}
88
+ ],
89
+ temperature=0.3
90
+ )
91
+
92
+ # Parse JSON dictionary
93
+ merge_map_text = response.choices[0].message.content.strip()
94
+ try:
95
+ start = merge_map_text.find("{")
96
+ end = merge_map_text.rfind("}") + 1
97
+ json_text = merge_map_text[start:end]
98
+ merge_map = json.loads(json_text)
99
+ print("\n🧠 Parsed Merge Map:")
100
+ print(json.dumps(merge_map, indent=2))
101
+ except Exception as e:
102
+ print("❌ Could not parse merge map from GPT:", e)
103
+ merge_map = {}
104
+
105
+ # Merge DataFrames
106
+ merged_df = pd.DataFrame()
107
+
108
+ for file in csv_files:
109
+ try:
110
+ df = pd.read_csv(os.path.join(folder_path, file), on_bad_lines='skip')
111
+
112
+ # Rename columns to standard labels
113
+ for standard_label, variants in merge_map.items():
114
+ for variant in variants:
115
+ if variant in df.columns:
116
+ df[standard_label] = df[variant]
117
+
118
+ # Retain only the standard columns we care about
119
+ df = df[list(set(df.columns) & set(merge_map.keys()))]
120
+
121
+ if not df.empty:
122
+ merged_df = pd.concat([merged_df, df], ignore_index=True)
123
+
124
+ except Exception as e:
125
+ print(f"❌ Error processing {file}: {e}")
126
+
127
+ # Final clean-up
128
+ if not merged_df.empty:
129
+ merged_df.drop_duplicates(inplace=True)
130
+ merged_df.to_csv("merged_cleaned_dataset.csv", index=False)
131
+ print("\n✅ Merged and cleaned dataset saved as 'merged_cleaned_dataset.csv'")
132
+ else:
133
+ print("⚠️ No data was merged. Check if the merge map matches the actual columns.")
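For reference, the merge map the prompt asks GPT-4 to return is a plain JSON dictionary from a standard label to its per-file variants. A hypothetical example, together with the same renaming logic used in the loop above:

import pandas as pd

# Hypothetical merge map in the shape the prompt requests
merge_map = {
    "price": ["SalePrice", "price_usd"],
    "area": ["GrLivArea", "sqft_living"],
}

df = pd.DataFrame({"SalePrice": [200000], "GrLivArea": [1500], "Street": ["Pave"]})

# Copy each matching variant into its standard column, then keep only the standard columns
for standard_label, variants in merge_map.items():
    for variant in variants:
        if variant in df.columns:
            df[standard_label] = df[variant]
df = df[list(set(df.columns) & set(merge_map.keys()))]

print(df)  # columns reduced to "price" and "area"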
generate_graphs.py ADDED
@@ -0,0 +1,51 @@
1
+ import pandas as pd
2
+ import matplotlib.pyplot as plt
3
+ import os
4
+
5
+ def generate_graphs(csv_path, output_dir):
6
+ df = pd.read_csv(csv_path)
7
+
8
+ os.makedirs(output_dir, exist_ok=True)
9
+
10
+ # Graph 1: Line plot of Open vs Close
11
+ plt.figure(figsize=(6, 4))
12
+ plt.plot(df['Open'], label='Open', color='blue')
13
+ plt.plot(df['Close'], label='Close', color='green')
14
+ plt.title('Open vs Close Price')
15
+ plt.xlabel('Index')
16
+ plt.ylabel('Price')
17
+ plt.legend()
18
+ plt.tight_layout()
19
+ plt.savefig(os.path.join(output_dir, 'open_close.png'))
20
+ plt.close()
21
+
22
+ # Graph 2: Scatter plot of RSI vs MACD
23
+ plt.figure(figsize=(6, 4))
24
+ plt.scatter(df['RSI'], df['MACD'], alpha=0.7, color='purple')
25
+ plt.title('RSI vs MACD')
26
+ plt.xlabel('RSI')
27
+ plt.ylabel('MACD')
28
+ plt.tight_layout()
29
+ plt.savefig(os.path.join(output_dir, 'rsi_macd.png'))
30
+ plt.close()
31
+
32
+ # Graph 3: Bar chart of Volume for the first 10 rows
33
+ volume_head = df['Volume'].head(10) # first 10 rows, not an average
34
+ plt.figure(figsize=(6, 4))
35
+ plt.bar(volume_head.index, volume_head.values, color='orange')
36
+ plt.title('Volume (First 10 Rows)')
37
+ plt.xlabel('Index')
38
+ plt.ylabel('Volume')
39
+ plt.tight_layout()
40
+ plt.savefig(os.path.join(output_dir, 'volume_bar.png'))
41
+ plt.close()
42
+
43
+ # Graph 4: Histogram of Target values
44
+ plt.figure(figsize=(6, 4))
45
+ plt.hist(df['Target'], bins=10, color='red', alpha=0.8)
46
+ plt.title('Distribution of Target')
47
+ plt.xlabel('Target Value')
48
+ plt.ylabel('Frequency')
49
+ plt.tight_layout()
50
+ plt.savefig(os.path.join(output_dir, 'target_hist.png'))
51
+ plt.close()
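A minimal usage sketch (paths are placeholders, not from the commit): generate_graphs assumes the CSV already holds the stock-style columns referenced above (Open, Close, RSI, MACD, Volume, Target).

from generate_graphs import generate_graphs

# Writes open_close.png, rsi_macd.png, volume_bar.png and target_hist.png into output_dir.
generate_graphs("processed/stock price prediction.csv", "static/graphs")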
getDatasets.py ADDED
@@ -0,0 +1,39 @@
1
+ import sys
2
+ import os
3
+ import getFiles.getKaggle as getKaggle
4
+ import getFiles.getGoogle as getGoogle
5
+ import getFiles.getGithub as getGithub
6
+ import getFiles.getHuggingFace as gh
7
+ import cleanDataset
8
+ import openml_search
9
+ import clean_openml
10
+
11
+ sys.path.append(os.path.abspath(os.path.join(os.getcwd(), 'langchain_folder')))
12
+
13
+ from langchain_folder import main as m
14
+ import json
15
+
16
+ def downloadDatasets():
17
+ data=input("enter query : ")
18
+ kag,git,hug=getGoogle.googleDatasets(data)
19
+ print(kag)
20
+ print("this is github : ")
21
+ print(git)
22
+ print(hug)
23
+ if(len(kag)>0):
24
+ for url in kag:
25
+ getKaggle.kaggleDataset(url,data)
26
+ if(len(git)>0):
27
+ for url in git:
28
+ getGithub.githubDataset(url,data)
29
+ if(len(hug)>0):
30
+ for url in hug:
31
+ gh.huggingDataset(url,data)
32
+
33
+ openml_search.openDataset(data)
34
+ clean_openml.clean(data)
35
+
36
+
37
+ cleanDataset.clean(data)
38
+
39
+ downloadDatasets()
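Because downloadDatasets() is called at module level, importing getDatasets triggers the whole download flow. A small, hedged variant (an assumption about intent, not part of the commit) keeps the module importable without side effects:

if __name__ == "__main__":
    downloadDatasets()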
getFiles/getGithub.py ADDED
@@ -0,0 +1,57 @@
1
+ import os
2
+ import time
3
+ from selenium import webdriver
4
+ from selenium.webdriver.common.by import By
5
+ from selenium.webdriver.chrome.options import Options
6
+ from selenium.webdriver.common.action_chains import ActionChains
7
+
8
+
9
+
10
+
11
+ def githubDataset(url,query):
12
+ # time.sleep(3)
13
+ download_folder = os.path.abspath(f"./downloads/{query}")
14
+ os.makedirs(download_folder, exist_ok=True)
15
+ chrome_options = Options()
16
+ chrome_options.add_argument("--headless") # Run headless (no UI); remove this line to show the browser window
17
+ chrome_options.add_experimental_option("prefs", {
18
+ "download.default_directory": download_folder, # Set the custom download folder
19
+ "download.prompt_for_download": False, # Don't ask for confirmation to download
20
+ "download.directory_upgrade": True, # Allow downloading into the custom folder
21
+ "safebrowsing.enabled": True # Enable safe browsing (to avoid warnings during download)
22
+ })
23
+ driver = webdriver.Chrome(options=chrome_options)
24
+
25
+ driver.get(url)
26
+ try:
27
+ csv_links = driver.find_elements(By.XPATH, "//a[contains(@href, '.csv')]")
28
+ for link in csv_links:
29
+ csv_file_name = link.text
30
+ if csv_file_name.endswith(".csv"):
31
+ print(f"Found CSV file: {csv_file_name}")
32
+ href=link.get_attribute("href")
33
+ # print("hello : "+href)
34
+ driver.get(href)
35
+ time.sleep(5)
36
+
37
+ download_button = driver.find_element(By.XPATH, "//button[contains(@class, 'Box-sc-g0xbh4-0 ivobqY prc-Button-ButtonBase-c50BI prc-Button-IconButton-szpyj')]")
38
+ href2=download_button.get_attribute("href")
39
+ if href2:
40
+ driver.get(href2)
41
+ print("Button clicked!!")
42
+ else:
43
+ download_button.click()
44
+ time.sleep(7)
45
+ break
46
+ else:
47
+ print("No CSV file found.")
48
+ except Exception as e:
49
+ print("No CSV File")
50
+ print(e)
51
+ finally:
52
+ driver.quit()
53
+
54
+ # print(f"CSV file should be downloaded to {download_folder}")
55
+
56
+ # githubDataset("https://github.com/ageron/handson-ml2/tree/master/datasets/housing","housing")
57
+ # githubDataset("https://github.com/nytimes/covid-19-data","housing")
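The download-button lookup above depends on a generated GitHub CSS class, which is brittle. As a hedged alternative sketch (not what this commit does), the blob URL of a located CSV can usually be rewritten to its raw form and fetched directly; the URL below is hypothetical:

import requests

blob_url = "https://github.com/owner/repo/blob/main/data/example.csv"  # hypothetical
raw_url = blob_url.replace("github.com", "raw.githubusercontent.com").replace("/blob/", "/")
resp = requests.get(raw_url, timeout=30)
with open("downloads/example.csv", "wb") as f:
    f.write(resp.content)  # saves the CSV without driving a browser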
getFiles/getGoogle.py ADDED
@@ -0,0 +1,119 @@
1
+ import re
2
+ from selenium import webdriver
3
+ from selenium.webdriver.common.by import By
4
+ from selenium.webdriver.common.keys import Keys
5
+ from selenium.webdriver.support.ui import WebDriverWait
6
+ from selenium.webdriver.support import expected_conditions as EC
7
+ from selenium.webdriver.chrome.options import Options
8
+ import time
9
+ import getFiles.getKaggle as getKaggle
10
+ import getFiles.getGithub as getGithub
11
+ import os
12
+
13
+ times=15
14
+
15
+
16
+ def googleDatasets(query):
17
+ # query="Covid 19"
18
+ download_folder = "./downloads/"+query
19
+ kag=[]
20
+ git=[]
21
+ hug=[]
22
+ count=0
23
+ if not os.path.exists(download_folder):
24
+ os.makedirs(download_folder)
25
+
26
+ chrome_options = Options()
27
+ chrome_options.add_argument("--headless") # Run headless (no UI); remove this line to show the browser window
28
+ chrome_options.add_experimental_option("prefs", {
29
+ "download.default_directory": download_folder, # Set the custom download folder
30
+ "download.prompt_for_download": False, # Don't ask for confirmation to download
31
+ "download.directory_upgrade": True, # Allow downloading into the custom folder
32
+ "safebrowsing.enabled": True # Enable safe browsing (to avoid warnings during download)
33
+ })
34
+ driver = webdriver.Chrome(options=chrome_options)
35
+
36
+ driver.get("https://datasetsearch.research.google.com/")
37
+
38
+ try:
39
+ WebDriverWait(driver, 20).until(
40
+ EC.presence_of_element_located((By.TAG_NAME, "c-wiz"))
41
+ )
42
+
43
+ search = WebDriverWait(driver, 20).until(
44
+ EC.element_to_be_clickable((By.CSS_SELECTOR, "input[aria-label='Dataset Search']"))
45
+ )
46
+
47
+ search.send_keys(query)
48
+ search.send_keys(Keys.RETURN)
49
+
50
+ WebDriverWait(driver, 10).until(
51
+ EC.presence_of_element_located((By.CSS_SELECTOR, "div[jscontroller]"))
52
+ )
53
+
54
+ WebDriverWait(driver,10).until(
55
+ EC.presence_of_element_located((By.TAG_NAME,"c-wiz"))
56
+ )
57
+
58
+ WebDriverWait(driver,10).until(
59
+ EC.presence_of_element_located((By.CSS_SELECTOR,"ol.VAt4"))
60
+ )
61
+
62
+ links=driver.find_elements(By.CSS_SELECTOR,"li.UnWQ5")
63
+ for link in links:
64
+ if count==times:
65
+ break
66
+ # print(link)
67
+ link.click()
68
+ time.sleep(2)
69
+ WebDriverWait(driver,10).until(
70
+ EC.presence_of_element_located((By.CSS_SELECTOR,"ul.eEUDce"))
71
+ )
72
+
73
+ downloads=driver.find_elements(By.CSS_SELECTOR,"li.dy4aPc")
74
+ dataset=downloads[0]
75
+ # dataset_url = dataset.get_attribute("href")
76
+ # print("Dataset URL:", dataset_url)
77
+ # print(dataset.get_attribute("href"))
78
+ tag=dataset.find_element(By.TAG_NAME,"a")
79
+ url=tag.get_attribute("href")
80
+ # print(url)
81
+ # print(driver.current_url)
82
+ try:
83
+ if "kaggle" in url:
84
+ match=re.search(r'datasets\/(.*)',url)
85
+ print(match)
86
+ string=match.group(1)
87
+ print("This is "+string)
88
+ kag.append(string)
89
+ # getKaggle.kaggleDataset(string,query)
90
+ # time.sleep(5)
91
+ continue
92
+ elif "github" in url:
93
+ print("This is "+url)
94
+ git.append(url)
95
+ # getGithub.githubDataset(url,query)
96
+ # time.sleep(5)
97
+ continue
98
+ elif "huggingface" in url and "turkish" not in url and "spanish" not in url:
99
+ match=re.search(r'datasets\/(.*)',url)
100
+ string=match.group(1)
101
+ print("Again "+string)
102
+ hug.append(string)
103
+ continue
104
+ except:
105
+ continue
106
+ dataset.click()
107
+ count+=1
108
+ time.sleep(5)
109
+ time.sleep(5)
110
+
111
+ except Exception as e:
112
+ print("Error:", e)
113
+
114
+ finally:
115
+ driver.quit()
116
+
117
+ return kag,git,hug
118
+ #
119
+ # googleDatasets("house predictions")
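A usage sketch of the collector (the query string is a placeholder): the function returns three lists that getDatasets.py then feeds to the per-source downloaders.

kag, git, hug = googleDatasets("house price prediction")
# kag: Kaggle dataset slugs, git: GitHub repository URLs, hug: Hugging Face dataset ids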
getFiles/getHuggingFace.py ADDED
@@ -0,0 +1,18 @@
1
+ from datasets import load_dataset
2
+ import os
3
+
4
+ def huggingDataset(url,query):
5
+ try:
6
+ # query="sentiment analysis"
7
+ # url="cornell-movie-review-data/rotten_tomatoes"
8
+ dataset = load_dataset(url)
9
+ print("Started downloading.....")
10
+ os.makedirs("./downloads/"+query,exist_ok=True)
11
+
12
+ dataset['train'].to_csv("./downloads/"+query+"/"+url[:5]+"train.csv")
13
+ dataset['test'].to_csv("./downloads/"+query+"/"+url[:5]+"test.csv")
14
+ except Exception as e:
15
+ print(f"couldn't download hugging face dataset: {e}")
16
+
17
+ # huggingDataset("cornell-movie-review-data/rotten_tomatoes","sentiment analysis")
18
+
getFiles/getKaggle.py ADDED
@@ -0,0 +1,19 @@
1
+ import os
2
+ from kaggle.api.kaggle_api_extended import KaggleApi
3
+ import sys
4
+
5
+
6
+ # Initialize
7
+ api = KaggleApi()
8
+ api.authenticate()
9
+
10
+ # Download dataset to current dir
11
+ def kaggleDataset(url,query):
12
+ try:
13
+ print(f"{url} started downloading")
14
+ os.makedirs("./downloads/"+query, exist_ok=True)
15
+ api.dataset_download_files(url, path='./downloads/'+query, unzip=True)
16
+ except Exception as e:
17
+ print(f"Dataset not found: {e}")
18
+
19
+ # kaggleDataset('zynicide/wine-reviews')
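Note that KaggleApi().authenticate() runs at import time and needs Kaggle credentials (typically ~/.kaggle/kaggle.json or the KAGGLE_USERNAME / KAGGLE_KEY environment variables). A hedged pre-check sketch, not part of the commit:

import os

has_creds = (
    os.path.exists(os.path.expanduser("~/.kaggle/kaggle.json"))
    or ("KAGGLE_USERNAME" in os.environ and "KAGGLE_KEY" in os.environ)
)
if not has_creds:
    print("Kaggle credentials not found; Kaggle downloads will fail")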
getFiles/getSklearn.py ADDED
@@ -0,0 +1,33 @@
1
+ import os
2
+ import pandas as pd
3
+ import sklearn.datasets as skd
4
+ from pathlib import Path
5
+
6
+ def sklearnDatasets(url,query):
7
+ try:
8
+ current_directory = os.getcwd()
9
+ downloads_folder = os.path.join(current_directory, query)
10
+ print(f"{url} started downloading")
11
+ os.makedirs(downloads_folder, exist_ok=True)
12
+
13
+ dataset_func = getattr(skd, url, None)
14
+
15
+ if dataset_func:
16
+ dataset = dataset_func(data_home=downloads_folder)
17
+
18
+ X, y = dataset.data, dataset.target
19
+
20
+ df = pd.DataFrame(X, columns=dataset.feature_names)
21
+ df['Target'] = y
22
+
23
+ csv_file_path = os.path.join(downloads_folder, f"{url}.csv")
24
+ df.to_csv(csv_file_path, index=False)
25
+
26
+ # print(f"Dataset '{url}' downloaded and saved to {csv_file_path}")
27
+ else:
28
+ print(f"Unknown dataset function: {url}. Please check the URL.")
29
+ except Exception as e:
30
+ print(f"Dataset not found: {e}")
31
+
32
+
33
+ # sklearnDatasets("fetch_california_housing")
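A usage sketch: the first argument must be the name of an sklearn.datasets loader or fetcher function, and the query string just names the output folder.

sklearnDatasets("fetch_california_housing", "housing")
# writes ./housing/fetch_california_housing.csv with a Target column appended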
getLabels.py ADDED
@@ -0,0 +1,97 @@
1
+ import os
2
+ import pandas as pd
3
+ from openai import OpenAI
4
+ from dotenv import load_dotenv
5
+ import google.generativeai as genai
6
+ import ast
7
+ load_dotenv()
8
+
9
+ def LabelsExtraction(query,dfs,csv_files,skip):
10
+
11
+ columnNames={}
12
+ j=0
13
+ for i,df in enumerate(dfs):
14
+ if j in skip:
15
+ j+=1
16
+ name=os.path.basename(csv_files[j]).lower()
17
+ columnNames[name]=df.columns
18
+ j+=1
19
+
20
+
21
+ # print(columnNames)
22
+ client = OpenAI(
23
+ api_key=os.getenv("OPENAI_API_KEY"),
24
+ base_url=os.getenv("OPENAI_API_BASE_URL", "")
25
+ )
26
+ prompt = (
27
+ "The following is a dictionary with key as name of csv file and value as array of their column headers:\n\n"
28
+ "Eg: {'21754539_dataset.csv': Index(['Id', 'MSSubClass', 'MSZoning', 'LotFrontage', 'LotArea', 'Street','Alley', 'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType','HouseStyle', 'OverallQual', 'OverallCond', 'YearBuilt'])}"
29
+ + "The content in the Index([]) is the list of the headers"
30
+ + f"{columnNames}"
31
+ + "\n\nwith this info try to figure out which column in each file represents its label and also identify if there is any file with no label"
32
+ "Return an array with the column name that represents the label in each file and if there is no label return 0 in that place. Match the indexes as given in the dictionary, do not return any other content, just return the array"
33
+ + f"the labels which you will return must be in context with this query {query}, so return the most relevant labels"
34
+ )
35
+
36
+ response = client.chat.completions.create(
37
+ model="gpt-4",
38
+ messages=[
39
+ {"role": "system", "content": "You are a helpful data analyst."},
40
+ {"role": "user", "content": prompt}
41
+ ],
42
+ temperature=0.3
43
+ )
44
+
45
+ merge_map_text = response.choices[0].message.content.strip()
46
+ stripped=merge_map_text.split("```python")[1].replace("[","").replace("]","").replace("```","").split(",")
47
+ array=[str1.replace("\n","").strip() for str1 in stripped]
48
+ print(array)
49
+ arr2=[arr.strip("'") for arr in array]
50
+ print(arr2)
51
+ # print(arr2)
52
+ return arr2
53
+
54
+
55
+ def LabelsExtraction2(query,dfs,csv_files,skip):
56
+
57
+ columnNames={}
58
+ j=0
59
+ for i,df in enumerate(dfs):
60
+ if j in skip:
61
+ j+=1
62
+ name=os.path.basename(csv_files[j]).lower()
63
+ columnNames[name]=df.columns
64
+ j+=1
65
+
66
+
67
+ # print(columnNames)
68
+ prompt = (
69
+ "You are given a dictionary where each key is the name of a CSV file, and the value is an array (in pandas Index format) representing the column headers of that file.\n\n"
70
+ "Example format:\n"
71
+ "{'21754539_dataset.csv': Index(['Id', 'MSSubClass', 'MSZoning', 'LotFrontage', 'LotArea', 'Street', 'Alley', "
72
+ "'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', "
73
+ "'BldgType', 'HouseStyle', 'OverallQual', 'OverallCond', 'YearBuilt'])}\n\n"
74
+ "Your task is to analyze this dictionary and, for each file, determine which column is most likely to represent the "
75
+ "**label** (i.e., the target variable) relevant to the following query:\n\n"
76
+ f"{query}\n\n"
77
+ "If you believe a file has **no clear label** based on the column names and the query, return **0** for that file.\n\n"
78
+ "Return your response as a **Python list**, maintaining the **same order as the keys in the input dictionary**. Each "
79
+ "entry in the list should be either:\n"
80
+ "- the column name (string) that most likely represents the label for that file, or\n"
81
+ "- the integer `0` if no label can be identified.\n\n"
82
+ "⚠️ Do not return any explanation, reasoning, or code. Only return the final list of labels, e.g.:\n"
83
+ "```python\n['SalePrice', 0, 'target_column_name']\n```\n\n"
84
+ "Now use the following data to generate your answer:\n"
85
+ f"{columnNames}"
86
+ )
87
+
88
+ genai.configure(api_key=os.getenv("gemini_api"))
89
+
90
+ model = genai.GenerativeModel("gemini-2.0-flash")
91
+ response = model.generate_content(prompt)
92
+ merge_map_text = response.text.strip()
93
+ print(merge_map_text)
94
+ str1=merge_map_text.replace("```","").replace("python","")
95
+ actual_list = ast.literal_eval(str1)
96
+ return actual_list
97
+
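A toy invocation sketch (frames and file names are made up; assumes the gemini_api key is present in .env):

import pandas as pd
from getLabels import LabelsExtraction2

dfs = [pd.DataFrame(columns=["Open", "Close", "Target"]),
       pd.DataFrame(columns=["Date", "Notes"])]
csv_files = ["downloads/stocks/a.csv", "downloads/stocks/b.csv"]
labels = LabelsExtraction2("stock price prediction", dfs, csv_files, skip=[])
# expected shape of the result: something like ['Target', 0]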
langchain_folder/llm_helper.py ADDED
@@ -0,0 +1,8 @@
1
+ from dotenv import load_dotenv
2
+ from langchain_groq import ChatGroq
3
+ import os
4
+
5
+ load_dotenv()
6
+ llm = ChatGroq(groq_api_key = os.getenv('groq_api'), model_name = "llama-3.1-8b-instant")
7
+
8
+ #llm = ChatGroq(groq_api_key = os.getenv('groq_deepseek_api'), model_name = "DeepSeek R1 Distill Llama 70B")
langchain_folder/main.py ADDED
@@ -0,0 +1,14 @@
1
+ from langchain_core.prompts import PromptTemplate
2
+ from langchain_folder.llm_helper import llm
3
+
4
+ def ReturnKeywordsfromPrompt(query):
5
+ prompt = """You are a model and your job is to extract the keywords from the query which will be passed to you.
6
+ For Eg: If the query is -> Provide me with a combined dataset for house price prediction.
7
+ Your answer should be 'house price'. Do not return anything else, just the keywords, note that the queries can be for various datasets,
8
+ not just for house price prediction, got it?
9
+ Query: {query}"""
10
+
11
+ pt = PromptTemplate.from_template(prompt)
12
+ chain = pt | llm
13
+ response = chain.invoke({'query': query})
14
+ return response.content
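A quick usage sketch (assumes the groq_api key from .env is available to llm_helper and that the script runs from the repository root):

from langchain_folder.main import ReturnKeywordsfromPrompt

keywords = ReturnKeywordsfromPrompt("Provide me with a combined dataset for house price prediction")
print(keywords)  # expected: something like "house price"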
openai_openml.py ADDED
@@ -0,0 +1,48 @@
1
+ import os
2
+ from selenium import webdriver
3
+ from selenium.webdriver.common.by import By
4
+ from selenium.webdriver.common.keys import Keys
5
+ from selenium.webdriver.support.ui import WebDriverWait
6
+ from selenium.webdriver.support import expected_conditions as EC
7
+ from selenium.webdriver.chrome.options import Options
8
+ from selenium.webdriver.chrome.service import Service
9
+ import requests
10
+ import time
11
+
12
+ count=4
13
+
14
+ def openDataset(user_prompt):
15
+ chrome_options = Options()
16
+ chrome_options.add_argument("--headless") # Run headless (no UI); remove this line to show the browser window
17
+ driver = webdriver.Chrome(options=chrome_options)
18
+ try:
19
+ driver.get('https://www.openml.org/search?type=data&status=active')
20
+
21
+ time.sleep(5)
22
+
23
+ search = WebDriverWait(driver, 20).until(
24
+ EC.presence_of_element_located((By.CSS_SELECTOR, "input.MuiInputBase-input.css-mnn31")))
25
+ search.send_keys(user_prompt)
26
+ search.send_keys(Keys.RETURN)
27
+ time.sleep(4)
28
+
29
+ divs = driver.find_elements(By.CSS_SELECTOR, "div.MuiPaper-root.MuiPaper-elevation.MuiPaper-elevation1.MuiCard-root.sc-gFAWRd.gJoEXx.css-1xol7fw")
30
+ a=0
31
+ urls=[]
32
+ for div in divs:
33
+ a+=1
34
+ if a>count:
35
+ break
36
+ div.click()
37
+ urls.append(driver.current_url)
38
+ # print(driver.current_url)
39
+ driver.back()
40
+ time.sleep(2)
41
+
42
+ driver.quit()
43
+ return urls
44
+ except:
45
+ print("Internet Issue driver crashed")
46
+ return []
47
+
48
+ # openDataset("stock price prediction")
openml_search.py ADDED
@@ -0,0 +1,96 @@
1
+ import openml
2
+ from langchain_core.prompts import PromptTemplate
3
+ from langchain_folder.llm_helper import llm
4
+ from dotenv import load_dotenv
5
+ import os
6
+ from selenium import webdriver
7
+ from selenium.webdriver.common.by import By
8
+ from selenium.webdriver.common.keys import Keys
9
+ from selenium.webdriver.support.ui import WebDriverWait
10
+ from selenium.webdriver.support import expected_conditions as EC
11
+ from selenium.webdriver.chrome.options import Options
12
+ from selenium.webdriver.chrome.service import Service
13
+ import requests
14
+ import openai_openml as oo
15
+
16
+
17
+ load_dotenv()
18
+ url_list = []
19
+ api_key = os.getenv('openml_api')
20
+ openml.config.apikey = api_key
21
+
22
+ def extract_keywords(query):
23
+ prompt = PromptTemplate.from_template("""
24
+ You are an assistant whose job is to extract the keywords from the query and return it:
25
+ Query = "{query}"
26
+ For example, if the query is Generate a list of links to datasets related to house price prediction
27
+ your response should be -> "house price".
28
+ Note that the query might not always be related to house price predictions, it can be related to other things as well.
29
+ Return only the keywords; do not return anything else.
30
+ """)
31
+ rendered_prompt = prompt.format(query=query)
32
+ response = llm.invoke(rendered_prompt)
33
+ return response.content
34
+
35
+ def fetch_dataset_urls(query, limit=4):
36
+ print(f"Searching for datasets related to: {query}")
37
+ # datasets = openml.datasets.list_datasets(output_format="dataframe")
38
+ # matching_datasets = datasets[datasets['name'].str.contains(query, case=False, na=False)]
39
+ # if matching_datasets.empty:
40
+ # keywords = query.lower().split()
41
+ # mask = datasets['name'].apply(lambda name: all(kw in str(name).lower() for kw in keywords))
42
+ # matching_datasets = datasets[mask]
43
+
44
+ # if matching_datasets.empty:
45
+ # print("No datasets found for the query.")
46
+ # else:
47
+ # matching_datasets = matching_datasets.head(limit)
48
+ # for index, row in matching_datasets.iterrows():
49
+ # print(f"📌 Dataset: {row['name']}")
50
+ # dataset_url = f"https://www.openml.org/d/{row['did']}"
51
+ # url_list.append(dataset_url)
52
+ # print(f"🔗 URL: https://www.openml.org/d/{row['did']}\n")
53
+ global url_list
54
+ url_list=oo.openDataset(query)
55
+
56
+
57
+ def openDataset(user_prompt):
58
+ # user_prompt = input("Enter user prompt: ")
59
+ extracted_keywords = extract_keywords(user_prompt)
60
+ print(extracted_keywords)
61
+ fetch_dataset_urls(extracted_keywords)
62
+
63
+ download_folder = "./input_folder/"+user_prompt
64
+ if not os.path.exists(download_folder):
65
+ os.makedirs(download_folder)
66
+
67
+ chrome_options = Options()
68
+ chrome_options.add_argument("--headless") # Run headless (no UI); remove this line to show the browser window
69
+ chrome_options.add_experimental_option("prefs", {
70
+ "download.default_directory": download_folder, # Set the custom download folder
71
+ "download.prompt_for_download": False, # Don't ask for confirmation to download
72
+ "download.directory_upgrade": True, # Allow downloading into the custom folder
73
+ "safebrowsing.enabled": True # Enable safe browsing (to avoid warnings during download)
74
+ })
75
+ driver = webdriver.Chrome(options=chrome_options)
76
+ for url in url_list:
77
+ driver.get(url)
78
+ try:
79
+ download_button = WebDriverWait(driver, 10).until(
80
+ EC.presence_of_element_located((By.CSS_SELECTOR, "a[aria-label='Download dataset']"))
81
+ )
82
+ actual_download_url = download_button.get_attribute("href")
83
+ filename = actual_download_url.split("/")[-2] + "_" + actual_download_url.split("/")[-1]
84
+ file_path = os.path.join(download_folder, filename)
85
+
86
+ print(f"⬇️ Downloading from {actual_download_url}")
87
+ response = requests.get(actual_download_url)
88
+ with open(file_path, "wb") as f:
89
+ f.write(response.content)
90
+ print(f"✅ Saved to {file_path}\n")
91
+
92
+ except Exception as e:
93
+ print(f"❌ Failed to fetch or download from {url}: {e}")
94
+
95
+ # openDataset("stock market predictions")
96
+
preprocessing/getNLP.py ADDED
@@ -0,0 +1,25 @@
1
+ import spacy
2
+ from tensorflow.keras.preprocessing.sequence import pad_sequences
3
+ nlp=spacy.load("en_core_web_lg")
4
+ MAX_LENGTH=10
5
+
6
+ def preprocess_texts(texts):
7
+ sentence_vectors = []
8
+
9
+ # Use nlp.pipe() to process texts in batch (much faster)
10
+ for doc in nlp.pipe(texts, batch_size=1000):
11
+ vectors = [token.vector for token in doc if token.has_vector] # Extract word vectors
12
+ sentence_vectors.append(vectors)
13
+
14
+ # Pad all vectors to fixed length
15
+ return pad_sequences(sentence_vectors, maxlen=MAX_LENGTH, dtype='float32', padding='post', truncating='post')
16
+
17
+ def wordEmbed(df,columns):
18
+ for col in columns:
19
+ processed_array = preprocess_texts(df[col].tolist())
20
+ df["processed"+col] = [processed_array[i] for i in range(len(df))]
21
+ df.drop(columns=columns,inplace=True)
22
+ # print(df.head())
23
+ return df
24
+
25
+
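A toy sketch of the embedding helper (assumes the en_core_web_lg spaCy model is installed, since it is loaded at import time, and that preprocessing/ is on sys.path; the dataframe is made up):

import pandas as pd
import getNLP

df = pd.DataFrame({"review": ["great movie", "terrible plot"], "score": [1, 0]})
df = getNLP.wordEmbed(df, ["review"])
# "review" is dropped and replaced by "processedreview", holding padded word-vector arrays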
preprocessing/getString.py ADDED
@@ -0,0 +1,72 @@
1
+ import os
2
+ import pandas as pd
3
+ import ast
4
+ import google.generativeai as genai
5
+ from dotenv import load_dotenv
6
+
7
+ load_dotenv()
8
+
9
+ def extract_column_samples(df, n=5):
10
+ samples = {}
11
+ for col in df.columns:
12
+ samples[col] = df[col].head(n).tolist()
13
+ return samples
14
+
15
+ def getCodes(query):
16
+ # query = "covid 19"
17
+ path="final/"+query+".csv"
18
+ df = pd.read_csv(path)
19
+
20
+ samples = extract_column_samples(df)
21
+
22
+ prompt = (
23
+ "You are a data analyst. I will give you a dictionary containing column names with example values from a dataset.\n\n"
24
+ "Your task is to:\n"
25
+ "1. Identify columns where one-hot encoding is *not suitable*.\n"
26
+ "2. For each of these, determine if it requires:\n"
27
+ " - feature extraction (e.g., from datetime or strings), or\n"
28
+ " - use of word embeddings (e.g., for free text or high-cardinality text).\n\n"
29
+ "For feature extraction columns:\n"
30
+ "- Create a **Python dictionary** where:\n"
31
+ " * Each key is a new, meaningful column name.\n"
32
+ " * Each value is a **valid Pandas expression string** that derives the new column from the original `df` DataFrame.\n"
33
+ "- Also return a **Python list** of original column names that were used in this dictionary.\n\n"
34
+ "For columns requiring word embeddings:\n"
35
+ "- Return a separate **Python list** of these column names.\n"
36
+ "- If any column appears in both cases, include it *only* in the word embedding list.\n\n"
37
+ "Your output **must follow this exact format** with no additional explanation or markdown. Only return the following inside a single Python code block:\n"
38
+ "```python\n"
39
+ "# Dictionary of transformations\n"
40
+ "{'new_col1': \"some pandas expression\", 'new_col2': \"some other pandas expression\"}\n\n"
41
+ "# Array of columns used in the dictionary\n"
42
+ "['col1', 'col2']\n"
43
+ "# Array of columns that require the use of word embeddings\n"
44
+ "['col3', 'col4']\n"
45
+ "```\n\n"
46
+ "**DO NOT** include any explanation, reasoning, extra code, or markdown outside of the code block. Only return the exact format shown above. Do not generate or describe functions.\n\n"
47
+ f"Here is the input :\n{samples}\n"
48
+ )
49
+
50
+
51
+ genai.configure(api_key=os.getenv("gemini_api"))
52
+
53
+ model = genai.GenerativeModel("gemini-2.0-flash")
54
+ response = model.generate_content(prompt)
55
+
56
+ merge_map_text = response.text.strip()
57
+ print(merge_map_text)
58
+
59
+ str1 = merge_map_text.split("```python")[1].split("# Array of columns used in the dictionary")[0].strip()
60
+ str2 = merge_map_text.split("# Array of columns used in the dictionary")[1].split("# Array of columns that require the use of word embeddings")[0].strip()
61
+ str3 = merge_map_text.split("# Array of columns used in the dictionary")[1].split("# Array of columns that require the use of word embeddings")[1].replace("```","").strip()
62
+
63
+ preprocessing_code = ast.literal_eval(str1)
64
+ actual_list = ast.literal_eval(str2)
65
+ nlp=ast.literal_eval(str3)
66
+ # print("Parsed dict:\n", preprocessing_code)
67
+ # print("Columns changed:\n", actual_list)
68
+ # print("for nlp : ",nlp)
69
+ return preprocessing_code,actual_list,nlp
70
+
71
+ # getCodes(extract_column_samples)
72
+
preprocessing/process.py ADDED
@@ -0,0 +1,48 @@
1
+ import pandas as pd
2
+ import getString
3
+ import getNLP
4
+ import os
5
+
6
+ def one_hot_encode_objects(df,nlp,columns):
7
+ object_cols = df.select_dtypes(include='object').columns
8
+ for col in object_cols:
9
+ if col not in nlp and col not in columns and "label" not in col:
10
+ if df[col].apply(lambda x: isinstance(x, (list, tuple, dict, set)) or hasattr(x, '__array__')).any():
11
+ print(f"Skipping column '{col}' due to unhashable values.")
12
+ continue
13
+ dummies = pd.get_dummies(df[col], prefix=col).astype(int)
14
+ df = pd.concat([df, dummies], axis=1)
15
+ df = df.drop(columns=[col])
16
+ print(df.columns)
17
+ return df
18
+
19
+ def fixEmpty(df):
20
+ df.replace(['undefined', 'null', 'NaN', 'None'], pd.NA, inplace=True)
21
+ for col in df.columns:
22
+ if df[col].dtype == 'object':
23
+ df[col] = df[col].fillna('Unknown')
24
+ else:
25
+ df[col] = df[col].fillna(df[col].mean())
26
+
27
+ return df
28
+
29
+ def preprocessing(query):
30
+ os.makedirs("./processed",exist_ok=True)
31
+ df=pd.read_csv("final/"+query+".csv")
32
+ # print(df.head())
33
+ df=fixEmpty(df)
34
+ preDict,col,nlp=getString.getCodes(query)
35
+ if len(col)>0:
36
+ for new_col, expr in preDict.items():
37
+ df[new_col] = eval(expr)  # evaluate the model-generated pandas expression against df
38
+ df.drop(columns=col, inplace=True)
39
+ if len(nlp)>0:
40
+ df=getNLP.wordEmbed(df,nlp)
41
+ # print(df.columns)
42
+ df=one_hot_encode_objects(df,nlp,col)
43
+ # df = df.astype('float32')
44
+ df.to_csv("./processed/"+query+".csv", index=False)
45
+ # print(df.head())
46
+ # print(df.info())
47
+
48
+ # preprocessing("twitter sentiment analysis")
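A usage sketch, with the caveat that process.py imports getString and getNLP as plain module names, so the preprocessing/ folder is assumed to be on sys.path and final/<query>.csv to exist already:

import sys
sys.path.append("preprocessing")  # assumption: running from the repository root

import process

process.preprocessing("stock price prediction")  # writes ./processed/stock price prediction.csv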
test.ipynb ADDED
@@ -0,0 +1,1115 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 1,
6
+ "id": "4b227ed2",
7
+ "metadata": {},
8
+ "outputs": [],
9
+ "source": [
10
+ "import pandas as pd\n",
11
+ "import glob\n",
12
+ "import re\n",
13
+ "# from collections import defaultdict\n",
14
+ "from itertools import combinations\n",
15
+ "import os"
16
+ ]
17
+ },
18
+ {
19
+ "cell_type": "code",
20
+ "execution_count": 2,
21
+ "id": "9a976256",
22
+ "metadata": {},
23
+ "outputs": [],
24
+ "source": [
25
+ "query=\"house predictions\"\n",
26
+ "\n",
27
+ "\n",
28
+ "# Helper: Standardize column names\n",
29
+ "def normalize(col):\n",
30
+ " return re.sub(r'[^a-z0-9]', '', col.lower())\n",
31
+ "\n",
32
+ "# Step 1: Read CSVs\n",
33
+ "csv_files = glob.glob(\"downloads/covid 19/*.csv\")\n",
34
+ "dfs = [pd.read_csv(f) for f in csv_files]\n",
35
+ "\n",
36
+ "# Step 2: Store normalized-to-original column mappings\n",
37
+ "normalized_cols = []\n",
38
+ "orig_col_maps = []\n",
39
+ "\n",
40
+ "for df in dfs:\n",
41
+ " norm_to_orig = {}\n",
42
+ " norm_cols = []\n",
43
+ " for col in df.columns:\n",
44
+ " norm = normalize(col)\n",
45
+ " norm_cols.append(norm)\n",
46
+ " norm_to_orig[norm] = col\n",
47
+ " normalized_cols.append(set(norm_cols))\n",
48
+ " orig_col_maps.append(norm_to_orig)\n",
49
+ "\n",
50
+ "\n",
51
+ "# normalized_cols,orig_col_maps"
52
+ ]
53
+ },
54
+ {
55
+ "cell_type": "code",
56
+ "execution_count": null,
57
+ "id": "d82953b5",
58
+ "metadata": {},
59
+ "outputs": [],
60
+ "source": [
61
+ "from rapidfuzz import process, fuzz\n",
62
+ "\n",
63
+ "def get_fuzzy_common_columns(cols_list, threshold=85):\n",
64
+ " \"\"\"\n",
65
+ " Given a list of sets of column names (normalized),\n",
66
+ " return the set of column names that are 'fuzzy common'\n",
67
+ " across all lists.\n",
68
+ " \"\"\"\n",
69
+ " # Start with columns from the first dataset\n",
70
+ " base = cols_list[0]\n",
71
+ " common = set()\n",
72
+ "\n",
73
+ " for col in base:\n",
74
+ " match_all = True\n",
75
+ " for other in cols_list[1:]:\n",
76
+ " match, score, _ = process.extractOne(col, other, scorer=fuzz.token_sort_ratio)\n",
77
+ " if score < threshold:\n",
78
+ " match_all = False\n",
79
+ " break\n",
80
+ " if match_all:\n",
81
+ " common.add(col)\n",
82
+ " return common\n"
83
+ ]
84
+ },
85
+ {
86
+ "cell_type": "code",
87
+ "execution_count": null,
88
+ "id": "0eb8e3d5",
89
+ "metadata": {},
90
+ "outputs": [
91
+ {
92
+ "name": "stdout",
93
+ "output_type": "stream",
94
+ "text": [
95
+ "[{'max', 'prcp', 'min', 'lat', 'long', 'temp', 'provincestate', 'dayfromjanfirst', 'date', 'wdsp', 'fog', 'fatalities', 'ah', 'stp', 'confirmedcases', 'countryregion', 'rh', 'slp', 'dewp', 'id'}, {'max', 'prcp', 'min', 'lat', 'long', 'temp', 'provincestate', 'dayfromjanfirst', 'date', 'wdsp', 'fog', 'fatalities', 'ah', 'stp', 'countryprovince', 'confirmedcases', 'countryregion', 'rh', 'slp', 'dewp', 'id'}]\n",
96
+ "[{'max', 'prcp', 'min', 'lat', 'long', 'temp', 'provincestate', 'dayfromjanfirst', 'date', 'wdsp', 'fog', 'fatalities', 'ah', 'stp', 'confirmedcases', 'countryregion', 'rh', 'slp', 'dewp', 'id'}, {'cumulativecases', 'datereported', 'newdeaths', 'countrycode', 'cumulativedeaths', 'whoregion', 'country', 'newcases'}]\n",
97
+ "[{'max', 'prcp', 'min', 'lat', 'long', 'temp', 'provincestate', 'dayfromjanfirst', 'date', 'wdsp', 'fog', 'fatalities', 'ah', 'stp', 'countryprovince', 'confirmedcases', 'countryregion', 'rh', 'slp', 'dewp', 'id'}, {'cumulativecases', 'datereported', 'newdeaths', 'countrycode', 'cumulativedeaths', 'whoregion', 'country', 'newcases'}]\n",
98
+ "[{'max', 'prcp', 'min', 'lat', 'long', 'temp', 'provincestate', 'dayfromjanfirst', 'date', 'wdsp', 'fog', 'fatalities', 'ah', 'stp', 'confirmedcases', 'countryregion', 'rh', 'slp', 'dewp', 'id'}, {'max', 'prcp', 'min', 'lat', 'long', 'temp', 'provincestate', 'dayfromjanfirst', 'date', 'wdsp', 'fog', 'fatalities', 'ah', 'stp', 'countryprovince', 'confirmedcases', 'countryregion', 'rh', 'slp', 'dewp', 'id'}, {'cumulativecases', 'datereported', 'newdeaths', 'countrycode', 'cumulativedeaths', 'whoregion', 'country', 'newcases'}]\n"
99
+ ]
100
+ },
101
+ {
102
+ "data": {
103
+ "text/plain": [
104
+ "(set(), [])"
105
+ ]
106
+ },
107
+ "execution_count": 4,
108
+ "metadata": {},
109
+ "output_type": "execute_result"
110
+ }
111
+ ],
112
+ "source": [
113
+ "max_common = set()\n",
114
+ "best_combo = []\n",
115
+ "\n",
116
+ "for i in range(2, len(dfs) + 1):\n",
117
+ " for combo in combinations(range(len(dfs)), i):\n",
118
+ " common = set.intersection(*[normalized_cols[i] for i in combo])\n",
119
+ " # some=[normalized_cols[i] for i in combo]\n",
120
+ " # print(some)\n",
121
+ " if len(common) > len(max_common):\n",
122
+ " max_common = common\n",
123
+ " best_combo = combo\n",
124
+ "\n",
125
+ "max_common,best_combo"
126
+ ]
127
+ },
128
+ {
129
+ "cell_type": "code",
130
+ "execution_count": 17,
131
+ "id": "b72d4152",
132
+ "metadata": {},
133
+ "outputs": [],
134
+ "source": [
135
+ "aligned_dfs = []\n",
136
+ "for idx in best_combo:\n",
137
+ " df = dfs[idx]\n",
138
+ " norm_to_orig = orig_col_maps[idx]\n",
139
+ " selected_cols = [norm_to_orig[col] for col in max_common]\n",
140
+ " df_subset = df[selected_cols].copy()\n",
141
+ " df_subset.columns = [col for col in max_common] # unify column names\n",
142
+ " aligned_dfs.append(df_subset)\n",
143
+ "\n",
144
+ "# Step 5: Combine\n",
145
+ "combined_df = pd.concat(aligned_dfs, ignore_index=True)"
146
+ ]
147
+ },
148
+ {
149
+ "cell_type": "code",
150
+ "execution_count": null,
151
+ "id": "b0768203",
152
+ "metadata": {},
153
+ "outputs": [],
154
+ "source": []
155
+ },
156
+ {
157
+ "cell_type": "code",
158
+ "execution_count": 9,
159
+ "id": "f315dfad",
160
+ "metadata": {},
161
+ "outputs": [
162
+ {
163
+ "name": "stdout",
164
+ "output_type": "stream",
165
+ "text": [
166
+ "17892\n",
167
+ "24414\n",
168
+ "54960\n"
169
+ ]
170
+ }
171
+ ],
172
+ "source": [
173
+ "for df in dfs:\n",
174
+ " print(df.index.size)"
175
+ ]
176
+ },
177
+ {
178
+ "cell_type": "code",
179
+ "execution_count": 11,
180
+ "id": "7972a696",
181
+ "metadata": {},
182
+ "outputs": [
183
+ {
184
+ "name": "stdout",
185
+ "output_type": "stream",
186
+ "text": [
187
+ "[17892, 24414]\n",
188
+ "[17892, 54960]\n",
189
+ "[24414, 54960]\n",
190
+ "[17892, 24414, 54960]\n"
191
+ ]
192
+ }
193
+ ],
194
+ "source": [
195
+ "for i in range(2, len(dfs) + 1):\n",
196
+ " for combo in combinations(range(len(dfs)), i):\n",
197
+ " counts=[dfs[i].index.size for i in combo]\n",
198
+ " print(counts)"
199
+ ]
200
+ },
201
+ {
202
+ "cell_type": "code",
203
+ "execution_count": 12,
204
+ "id": "1a0f8006",
205
+ "metadata": {},
206
+ "outputs": [
207
+ {
208
+ "data": {
209
+ "text/plain": [
210
+ "(54960, 2)"
211
+ ]
212
+ },
213
+ "execution_count": 12,
214
+ "metadata": {},
215
+ "output_type": "execute_result"
216
+ }
217
+ ],
218
+ "source": [
219
+ "maxCount=0\n",
220
+ "idx=-1\n",
221
+ "for i in range(len(dfs)):\n",
222
+ " if dfs[i].index.size > maxCount:\n",
223
+ " maxCount=dfs[i].index.size\n",
224
+ " idx=i\n",
225
+ "\n",
226
+ "maxCount,idx"
227
+ ]
228
+ },
229
+ {
230
+ "cell_type": "code",
231
+ "execution_count": 18,
232
+ "id": "240d4fd1",
233
+ "metadata": {},
234
+ "outputs": [
235
+ {
236
+ "data": {
237
+ "text/plain": [
238
+ "(42306, 54960)"
239
+ ]
240
+ },
241
+ "execution_count": 18,
242
+ "metadata": {},
243
+ "output_type": "execute_result"
244
+ }
245
+ ],
246
+ "source": [
247
+ "combined_df.index.size,maxCount"
248
+ ]
249
+ },
250
+ {
251
+ "cell_type": "code",
252
+ "execution_count": 20,
253
+ "id": "5eba3fe9",
254
+ "metadata": {},
255
+ "outputs": [
256
+ {
257
+ "data": {
258
+ "text/plain": [
259
+ "'hello'"
260
+ ]
261
+ },
262
+ "execution_count": 20,
263
+ "metadata": {},
264
+ "output_type": "execute_result"
265
+ }
266
+ ],
267
+ "source": [
268
+ "str=\"hello and \"\n",
269
+ "str[:5]"
270
+ ]
271
+ },
272
+ {
273
+ "cell_type": "code",
274
+ "execution_count": 2,
275
+ "id": "94dac715",
276
+ "metadata": {},
277
+ "outputs": [
278
+ {
279
+ "data": {
280
+ "text/html": [
281
+ "<div>\n",
282
+ "<style scoped>\n",
283
+ " .dataframe tbody tr th:only-of-type {\n",
284
+ " vertical-align: middle;\n",
285
+ " }\n",
286
+ "\n",
287
+ " .dataframe tbody tr th {\n",
288
+ " vertical-align: top;\n",
289
+ " }\n",
290
+ "\n",
291
+ " .dataframe thead th {\n",
292
+ " text-align: right;\n",
293
+ " }\n",
294
+ "</style>\n",
295
+ "<table border=\"1\" class=\"dataframe\">\n",
296
+ " <thead>\n",
297
+ " <tr style=\"text-align: right;\">\n",
298
+ " <th></th>\n",
299
+ " <th>fixed acidity</th>\n",
300
+ " <th>volatile acidity</th>\n",
301
+ " <th>citric acid</th>\n",
302
+ " <th>residual sugar</th>\n",
303
+ " <th>chlorides</th>\n",
304
+ " <th>free sulfur dioxide</th>\n",
305
+ " <th>total sulfur dioxide</th>\n",
306
+ " <th>density</th>\n",
307
+ " <th>pH</th>\n",
308
+ " <th>sulphates</th>\n",
309
+ " <th>alcohol</th>\n",
310
+ " <th>quality</th>\n",
311
+ " </tr>\n",
312
+ " </thead>\n",
313
+ " <tbody>\n",
314
+ " <tr>\n",
315
+ " <th>0</th>\n",
316
+ " <td>7.4</td>\n",
317
+ " <td>0.70</td>\n",
318
+ " <td>0.00</td>\n",
319
+ " <td>1.9</td>\n",
320
+ " <td>0.076</td>\n",
321
+ " <td>11.0</td>\n",
322
+ " <td>34.0</td>\n",
323
+ " <td>0.9978</td>\n",
324
+ " <td>3.51</td>\n",
325
+ " <td>0.56</td>\n",
326
+ " <td>9.4</td>\n",
327
+ " <td>5</td>\n",
328
+ " </tr>\n",
329
+ " <tr>\n",
330
+ " <th>1</th>\n",
331
+ " <td>7.8</td>\n",
332
+ " <td>0.88</td>\n",
333
+ " <td>0.00</td>\n",
334
+ " <td>2.6</td>\n",
335
+ " <td>0.098</td>\n",
336
+ " <td>25.0</td>\n",
337
+ " <td>67.0</td>\n",
338
+ " <td>0.9968</td>\n",
339
+ " <td>3.20</td>\n",
340
+ " <td>0.68</td>\n",
341
+ " <td>9.8</td>\n",
342
+ " <td>5</td>\n",
343
+ " </tr>\n",
344
+ " <tr>\n",
345
+ " <th>2</th>\n",
346
+ " <td>7.8</td>\n",
347
+ " <td>0.76</td>\n",
348
+ " <td>0.04</td>\n",
349
+ " <td>2.3</td>\n",
350
+ " <td>0.092</td>\n",
351
+ " <td>15.0</td>\n",
352
+ " <td>54.0</td>\n",
353
+ " <td>0.9970</td>\n",
354
+ " <td>3.26</td>\n",
355
+ " <td>0.65</td>\n",
356
+ " <td>9.8</td>\n",
357
+ " <td>5</td>\n",
358
+ " </tr>\n",
359
+ " <tr>\n",
360
+ " <th>3</th>\n",
361
+ " <td>11.2</td>\n",
362
+ " <td>0.28</td>\n",
363
+ " <td>0.56</td>\n",
364
+ " <td>1.9</td>\n",
365
+ " <td>0.075</td>\n",
366
+ " <td>17.0</td>\n",
367
+ " <td>60.0</td>\n",
368
+ " <td>0.9980</td>\n",
369
+ " <td>3.16</td>\n",
370
+ " <td>0.58</td>\n",
371
+ " <td>9.8</td>\n",
372
+ " <td>6</td>\n",
373
+ " </tr>\n",
374
+ " <tr>\n",
375
+ " <th>4</th>\n",
376
+ " <td>7.4</td>\n",
377
+ " <td>0.70</td>\n",
378
+ " <td>0.00</td>\n",
379
+ " <td>1.9</td>\n",
380
+ " <td>0.076</td>\n",
381
+ " <td>11.0</td>\n",
382
+ " <td>34.0</td>\n",
383
+ " <td>0.9978</td>\n",
384
+ " <td>3.51</td>\n",
385
+ " <td>0.56</td>\n",
386
+ " <td>9.4</td>\n",
387
+ " <td>5</td>\n",
388
+ " </tr>\n",
389
+ " </tbody>\n",
390
+ "</table>\n",
391
+ "</div>"
392
+ ],
393
+ "text/plain": [
394
+ " fixed acidity volatile acidity citric acid residual sugar chlorides \\\n",
395
+ "0 7.4 0.70 0.00 1.9 0.076 \n",
396
+ "1 7.8 0.88 0.00 2.6 0.098 \n",
397
+ "2 7.8 0.76 0.04 2.3 0.092 \n",
398
+ "3 11.2 0.28 0.56 1.9 0.075 \n",
399
+ "4 7.4 0.70 0.00 1.9 0.076 \n",
400
+ "\n",
401
+ " free sulfur dioxide total sulfur dioxide density pH sulphates \\\n",
402
+ "0 11.0 34.0 0.9978 3.51 0.56 \n",
403
+ "1 25.0 67.0 0.9968 3.20 0.68 \n",
404
+ "2 15.0 54.0 0.9970 3.26 0.65 \n",
405
+ "3 17.0 60.0 0.9980 3.16 0.58 \n",
406
+ "4 11.0 34.0 0.9978 3.51 0.56 \n",
407
+ "\n",
408
+ " alcohol quality \n",
409
+ "0 9.4 5 \n",
410
+ "1 9.8 5 \n",
411
+ "2 9.8 5 \n",
412
+ "3 9.8 6 \n",
413
+ "4 9.4 5 "
414
+ ]
415
+ },
416
+ "execution_count": 2,
417
+ "metadata": {},
418
+ "output_type": "execute_result"
419
+ }
420
+ ],
421
+ "source": [
422
+ "import pandas as pd\n",
423
+ "\n",
424
+ "df=pd.read_csv(\"downloads/wine quality prediction/redwine.csv\")\n",
425
+ "df.head()"
426
+ ]
427
+ },
428
+ {
429
+ "cell_type": "code",
430
+ "execution_count": 9,
431
+ "id": "d0947632",
432
+ "metadata": {},
433
+ "outputs": [
434
+ {
435
+ "data": {
436
+ "text/plain": [
437
+ "'downloads/wine quality prediction\\\\redwine.csv'"
438
+ ]
439
+ },
440
+ "execution_count": 9,
441
+ "metadata": {},
442
+ "output_type": "execute_result"
443
+ }
444
+ ],
445
+ "source": [
446
+ "import glob\n",
447
+ "csv_files = glob.glob(\"downloads/\"+\"wine quality prediction\"+\"/*.csv\")\n",
448
+ "csv_files[0]"
449
+ ]
450
+ },
451
+ {
452
+ "cell_type": "code",
453
+ "execution_count": 11,
454
+ "id": "22e2e148",
455
+ "metadata": {},
456
+ "outputs": [
457
+ {
458
+ "data": {
459
+ "text/html": [
460
+ "<div>\n",
461
+ "<style scoped>\n",
462
+ " .dataframe tbody tr th:only-of-type {\n",
463
+ " vertical-align: middle;\n",
464
+ " }\n",
465
+ "\n",
466
+ " .dataframe tbody tr th {\n",
467
+ " vertical-align: top;\n",
468
+ " }\n",
469
+ "\n",
470
+ " .dataframe thead th {\n",
471
+ " text-align: right;\n",
472
+ " }\n",
473
+ "</style>\n",
474
+ "<table border=\"1\" class=\"dataframe\">\n",
475
+ " <thead>\n",
476
+ " <tr style=\"text-align: right;\">\n",
477
+ " <th></th>\n",
478
+ " <th>fixed acidity</th>\n",
479
+ " <th>volatile acidity</th>\n",
480
+ " <th>citric acid</th>\n",
481
+ " <th>residual sugar</th>\n",
482
+ " <th>chlorides</th>\n",
483
+ " <th>free sulfur dioxide</th>\n",
484
+ " <th>total sulfur dioxide</th>\n",
485
+ " <th>density</th>\n",
486
+ " <th>pH</th>\n",
487
+ " <th>sulphates</th>\n",
488
+ " <th>alcohol</th>\n",
489
+ " <th>quality</th>\n",
490
+ " </tr>\n",
491
+ " </thead>\n",
492
+ " <tbody>\n",
493
+ " <tr>\n",
494
+ " <th>0</th>\n",
495
+ " <td>7.4</td>\n",
496
+ " <td>0.70</td>\n",
497
+ " <td>0.00</td>\n",
498
+ " <td>1.9</td>\n",
499
+ " <td>0.076</td>\n",
500
+ " <td>11.0</td>\n",
501
+ " <td>34.0</td>\n",
502
+ " <td>0.9978</td>\n",
503
+ " <td>3.51</td>\n",
504
+ " <td>0.56</td>\n",
505
+ " <td>9.4</td>\n",
506
+ " <td>5</td>\n",
507
+ " </tr>\n",
508
+ " <tr>\n",
509
+ " <th>1</th>\n",
510
+ " <td>7.8</td>\n",
511
+ " <td>0.88</td>\n",
512
+ " <td>0.00</td>\n",
513
+ " <td>2.6</td>\n",
514
+ " <td>0.098</td>\n",
515
+ " <td>25.0</td>\n",
516
+ " <td>67.0</td>\n",
517
+ " <td>0.9968</td>\n",
518
+ " <td>3.20</td>\n",
519
+ " <td>0.68</td>\n",
520
+ " <td>9.8</td>\n",
521
+ " <td>5</td>\n",
522
+ " </tr>\n",
523
+ " <tr>\n",
524
+ " <th>2</th>\n",
525
+ " <td>7.8</td>\n",
526
+ " <td>0.76</td>\n",
527
+ " <td>0.04</td>\n",
528
+ " <td>2.3</td>\n",
529
+ " <td>0.092</td>\n",
530
+ " <td>15.0</td>\n",
531
+ " <td>54.0</td>\n",
532
+ " <td>0.9970</td>\n",
533
+ " <td>3.26</td>\n",
534
+ " <td>0.65</td>\n",
535
+ " <td>9.8</td>\n",
536
+ " <td>5</td>\n",
537
+ " </tr>\n",
538
+ " <tr>\n",
539
+ " <th>3</th>\n",
540
+ " <td>11.2</td>\n",
541
+ " <td>0.28</td>\n",
542
+ " <td>0.56</td>\n",
543
+ " <td>1.9</td>\n",
544
+ " <td>0.075</td>\n",
545
+ " <td>17.0</td>\n",
546
+ " <td>60.0</td>\n",
547
+ " <td>0.9980</td>\n",
548
+ " <td>3.16</td>\n",
549
+ " <td>0.58</td>\n",
550
+ " <td>9.8</td>\n",
551
+ " <td>6</td>\n",
552
+ " </tr>\n",
553
+ " <tr>\n",
554
+ " <th>4</th>\n",
555
+ " <td>7.4</td>\n",
556
+ " <td>0.70</td>\n",
557
+ " <td>0.00</td>\n",
558
+ " <td>1.9</td>\n",
559
+ " <td>0.076</td>\n",
560
+ " <td>11.0</td>\n",
561
+ " <td>34.0</td>\n",
562
+ " <td>0.9978</td>\n",
563
+ " <td>3.51</td>\n",
564
+ " <td>0.56</td>\n",
565
+ " <td>9.4</td>\n",
566
+ " <td>5</td>\n",
567
+ " </tr>\n",
568
+ " </tbody>\n",
569
+ "</table>\n",
570
+ "</div>"
571
+ ],
572
+ "text/plain": [
573
+ " fixed acidity volatile acidity citric acid residual sugar chlorides \\\n",
574
+ "0 7.4 0.70 0.00 1.9 0.076 \n",
575
+ "1 7.8 0.88 0.00 2.6 0.098 \n",
576
+ "2 7.8 0.76 0.04 2.3 0.092 \n",
577
+ "3 11.2 0.28 0.56 1.9 0.075 \n",
578
+ "4 7.4 0.70 0.00 1.9 0.076 \n",
579
+ "\n",
580
+ " free sulfur dioxide total sulfur dioxide density pH sulphates \\\n",
581
+ "0 11.0 34.0 0.9978 3.51 0.56 \n",
582
+ "1 25.0 67.0 0.9968 3.20 0.68 \n",
583
+ "2 15.0 54.0 0.9970 3.26 0.65 \n",
584
+ "3 17.0 60.0 0.9980 3.16 0.58 \n",
585
+ "4 11.0 34.0 0.9978 3.51 0.56 \n",
586
+ "\n",
587
+ " alcohol quality \n",
588
+ "0 9.4 5 \n",
589
+ "1 9.8 5 \n",
590
+ "2 9.8 5 \n",
591
+ "3 9.8 6 \n",
592
+ "4 9.4 5 "
593
+ ]
594
+ },
595
+ "execution_count": 11,
596
+ "metadata": {},
597
+ "output_type": "execute_result"
598
+ }
599
+ ],
600
+ "source": [
601
+ "df=pd.read_csv(csv_files[0])\n",
602
+ "df.head()"
603
+ ]
604
+ },
605
+ {
606
+ "cell_type": "code",
607
+ "execution_count": null,
608
+ "id": "1553de09",
609
+ "metadata": {},
610
+ "outputs": [
611
+ {
612
+ "data": {
613
+ "text/html": [
614
+ "<div>\n",
615
+ "<style scoped>\n",
616
+ " .dataframe tbody tr th:only-of-type {\n",
617
+ " vertical-align: middle;\n",
618
+ " }\n",
619
+ "\n",
620
+ " .dataframe tbody tr th {\n",
621
+ " vertical-align: top;\n",
622
+ " }\n",
623
+ "\n",
624
+ " .dataframe thead th {\n",
625
+ " text-align: right;\n",
626
+ " }\n",
627
+ "</style>\n",
628
+ "<table border=\"1\" class=\"dataframe\">\n",
629
+ " <thead>\n",
630
+ " <tr style=\"text-align: right;\">\n",
631
+ " <th></th>\n",
632
+ " <th>fixed acidity</th>\n",
633
+ " <th>volatile acidity</th>\n",
634
+ " <th>citric acid</th>\n",
635
+ " <th>residual sugar</th>\n",
636
+ " <th>chlorides</th>\n",
637
+ " <th>free sulfur dioxide</th>\n",
638
+ " <th>total sulfur dioxide</th>\n",
639
+ " <th>density</th>\n",
640
+ " <th>pH</th>\n",
641
+ " <th>sulphates</th>\n",
642
+ " <th>alcohol</th>\n",
643
+ " <th>quality</th>\n",
644
+ " <th>label</th>\n",
645
+ " </tr>\n",
646
+ " </thead>\n",
647
+ " <tbody>\n",
648
+ " <tr>\n",
649
+ " <th>0</th>\n",
650
+ " <td>7.4</td>\n",
651
+ " <td>0.70</td>\n",
652
+ " <td>0.00</td>\n",
653
+ " <td>1.9</td>\n",
654
+ " <td>0.076</td>\n",
655
+ " <td>11.0</td>\n",
656
+ " <td>34.0</td>\n",
657
+ " <td>0.9978</td>\n",
658
+ " <td>3.51</td>\n",
659
+ " <td>0.56</td>\n",
660
+ " <td>9.4</td>\n",
661
+ " <td>5</td>\n",
662
+ " <td>red</td>\n",
663
+ " </tr>\n",
664
+ " <tr>\n",
665
+ " <th>1</th>\n",
666
+ " <td>7.8</td>\n",
667
+ " <td>0.88</td>\n",
668
+ " <td>0.00</td>\n",
669
+ " <td>2.6</td>\n",
670
+ " <td>0.098</td>\n",
671
+ " <td>25.0</td>\n",
672
+ " <td>67.0</td>\n",
673
+ " <td>0.9968</td>\n",
674
+ " <td>3.20</td>\n",
675
+ " <td>0.68</td>\n",
676
+ " <td>9.8</td>\n",
677
+ " <td>5</td>\n",
678
+ " <td>red</td>\n",
679
+ " </tr>\n",
680
+ " <tr>\n",
681
+ " <th>2</th>\n",
682
+ " <td>7.8</td>\n",
683
+ " <td>0.76</td>\n",
684
+ " <td>0.04</td>\n",
685
+ " <td>2.3</td>\n",
686
+ " <td>0.092</td>\n",
687
+ " <td>15.0</td>\n",
688
+ " <td>54.0</td>\n",
689
+ " <td>0.9970</td>\n",
690
+ " <td>3.26</td>\n",
691
+ " <td>0.65</td>\n",
692
+ " <td>9.8</td>\n",
693
+ " <td>5</td>\n",
694
+ " <td>red</td>\n",
695
+ " </tr>\n",
696
+ " <tr>\n",
697
+ " <th>3</th>\n",
698
+ " <td>11.2</td>\n",
699
+ " <td>0.28</td>\n",
700
+ " <td>0.56</td>\n",
701
+ " <td>1.9</td>\n",
702
+ " <td>0.075</td>\n",
703
+ " <td>17.0</td>\n",
704
+ " <td>60.0</td>\n",
705
+ " <td>0.9980</td>\n",
706
+ " <td>3.16</td>\n",
707
+ " <td>0.58</td>\n",
708
+ " <td>9.8</td>\n",
709
+ " <td>6</td>\n",
710
+ " <td>red</td>\n",
711
+ " </tr>\n",
712
+ " <tr>\n",
713
+ " <th>4</th>\n",
714
+ " <td>7.4</td>\n",
715
+ " <td>0.70</td>\n",
716
+ " <td>0.00</td>\n",
717
+ " <td>1.9</td>\n",
718
+ " <td>0.076</td>\n",
719
+ " <td>11.0</td>\n",
720
+ " <td>34.0</td>\n",
721
+ " <td>0.9978</td>\n",
722
+ " <td>3.51</td>\n",
723
+ " <td>0.56</td>\n",
724
+ " <td>9.4</td>\n",
725
+ " <td>5</td>\n",
726
+ " <td>red</td>\n",
727
+ " </tr>\n",
728
+ " </tbody>\n",
729
+ "</table>\n",
730
+ "</div>"
731
+ ],
732
+ "text/plain": [
733
+ " fixed acidity volatile acidity citric acid residual sugar chlorides \\\n",
734
+ "0 7.4 0.70 0.00 1.9 0.076 \n",
735
+ "1 7.8 0.88 0.00 2.6 0.098 \n",
736
+ "2 7.8 0.76 0.04 2.3 0.092 \n",
737
+ "3 11.2 0.28 0.56 1.9 0.075 \n",
738
+ "4 7.4 0.70 0.00 1.9 0.076 \n",
739
+ "\n",
740
+ " free sulfur dioxide total sulfur dioxide density pH sulphates \\\n",
741
+ "0 11.0 34.0 0.9978 3.51 0.56 \n",
742
+ "1 25.0 67.0 0.9968 3.20 0.68 \n",
743
+ "2 15.0 54.0 0.9970 3.26 0.65 \n",
744
+ "3 17.0 60.0 0.9980 3.16 0.58 \n",
745
+ "4 11.0 34.0 0.9978 3.51 0.56 \n",
746
+ "\n",
747
+ " alcohol quality label \n",
748
+ "0 9.4 5 red \n",
749
+ "1 9.8 5 red \n",
750
+ "2 9.8 5 red \n",
751
+ "3 9.8 6 red \n",
752
+ "4 9.4 5 red "
753
+ ]
754
+ },
755
+ "execution_count": 14,
756
+ "metadata": {},
757
+ "output_type": "execute_result"
758
+ }
759
+ ],
760
+ "source": [
761
+ "import os\n",
762
+ "newName=os.path.basename(csv_files[0]).lower().split(\".\")[0]\n",
763
+ "query=\"wine quality prediction\"\n",
764
+ "\n",
765
+ "words=set(query.lower().split())\n",
766
+ "\n",
767
+ "for word in words:\n",
768
+ " if word in newName:\n",
769
+ " newName=newName.replace(word,\"\")\n",
770
+ "\n",
771
+ "df['label']=newName\n",
772
+ "\n",
773
+ "df.head()"
774
+ ]
775
+ },
776
+ {
777
+ "cell_type": "code",
778
+ "execution_count": 2,
779
+ "id": "8c258b22",
780
+ "metadata": {},
781
+ "outputs": [
782
+ {
783
+ "data": {
784
+ "text/html": [
785
+ "<div>\n",
786
+ "<style scoped>\n",
787
+ " .dataframe tbody tr th:only-of-type {\n",
788
+ " vertical-align: middle;\n",
789
+ " }\n",
790
+ "\n",
791
+ " .dataframe tbody tr th {\n",
792
+ " vertical-align: top;\n",
793
+ " }\n",
794
+ "\n",
795
+ " .dataframe thead th {\n",
796
+ " text-align: right;\n",
797
+ " }\n",
798
+ "</style>\n",
799
+ "<table border=\"1\" class=\"dataframe\">\n",
800
+ " <thead>\n",
801
+ " <tr style=\"text-align: right;\">\n",
802
+ " <th></th>\n",
803
+ " <th>Date_reported</th>\n",
804
+ " <th>Country_code</th>\n",
805
+ " <th>Country</th>\n",
806
+ " <th>WHO_region</th>\n",
807
+ " <th>New_cases</th>\n",
808
+ " <th>Cumulative_cases</th>\n",
809
+ " <th>New_deaths</th>\n",
810
+ " <th>Cumulative_deaths</th>\n",
811
+ " </tr>\n",
812
+ " </thead>\n",
813
+ " <tbody>\n",
814
+ " <tr>\n",
815
+ " <th>0</th>\n",
816
+ " <td>2020-01-05</td>\n",
817
+ " <td>AF</td>\n",
818
+ " <td>Afghanistan</td>\n",
819
+ " <td>EMRO</td>\n",
820
+ " <td>NaN</td>\n",
821
+ " <td>0</td>\n",
822
+ " <td>NaN</td>\n",
823
+ " <td>0</td>\n",
824
+ " </tr>\n",
825
+ " <tr>\n",
826
+ " <th>1</th>\n",
827
+ " <td>2020-01-12</td>\n",
828
+ " <td>AF</td>\n",
829
+ " <td>Afghanistan</td>\n",
830
+ " <td>EMRO</td>\n",
831
+ " <td>NaN</td>\n",
832
+ " <td>0</td>\n",
833
+ " <td>NaN</td>\n",
834
+ " <td>0</td>\n",
835
+ " </tr>\n",
836
+ " <tr>\n",
837
+ " <th>2</th>\n",
838
+ " <td>2020-01-19</td>\n",
839
+ " <td>AF</td>\n",
840
+ " <td>Afghanistan</td>\n",
841
+ " <td>EMRO</td>\n",
842
+ " <td>NaN</td>\n",
843
+ " <td>0</td>\n",
844
+ " <td>NaN</td>\n",
845
+ " <td>0</td>\n",
846
+ " </tr>\n",
847
+ " <tr>\n",
848
+ " <th>3</th>\n",
849
+ " <td>2020-01-26</td>\n",
850
+ " <td>AF</td>\n",
851
+ " <td>Afghanistan</td>\n",
852
+ " <td>EMRO</td>\n",
853
+ " <td>NaN</td>\n",
854
+ " <td>0</td>\n",
855
+ " <td>NaN</td>\n",
856
+ " <td>0</td>\n",
857
+ " </tr>\n",
858
+ " <tr>\n",
859
+ " <th>4</th>\n",
860
+ " <td>2020-02-02</td>\n",
861
+ " <td>AF</td>\n",
862
+ " <td>Afghanistan</td>\n",
863
+ " <td>EMRO</td>\n",
864
+ " <td>NaN</td>\n",
865
+ " <td>0</td>\n",
866
+ " <td>NaN</td>\n",
867
+ " <td>0</td>\n",
868
+ " </tr>\n",
869
+ " </tbody>\n",
870
+ "</table>\n",
871
+ "</div>"
872
+ ],
873
+ "text/plain": [
874
+ " Date_reported Country_code Country WHO_region New_cases \\\n",
875
+ "0 2020-01-05 AF Afghanistan EMRO NaN \n",
876
+ "1 2020-01-12 AF Afghanistan EMRO NaN \n",
877
+ "2 2020-01-19 AF Afghanistan EMRO NaN \n",
878
+ "3 2020-01-26 AF Afghanistan EMRO NaN \n",
879
+ "4 2020-02-02 AF Afghanistan EMRO NaN \n",
880
+ "\n",
881
+ " Cumulative_cases New_deaths Cumulative_deaths \n",
882
+ "0 0 NaN 0 \n",
883
+ "1 0 NaN 0 \n",
884
+ "2 0 NaN 0 \n",
885
+ "3 0 NaN 0 \n",
886
+ "4 0 NaN 0 "
887
+ ]
888
+ },
889
+ "execution_count": 2,
890
+ "metadata": {},
891
+ "output_type": "execute_result"
892
+ }
893
+ ],
894
+ "source": [
895
+ "import pandas as pd\n",
896
+ "\n",
897
+ "df=pd.read_csv(\"final/covid 19.csv\")\n",
898
+ "df.head()"
899
+ ]
900
+ },
901
+ {
902
+ "cell_type": "code",
903
+ "execution_count": 3,
904
+ "id": "6b43c357",
905
+ "metadata": {},
906
+ "outputs": [
907
+ {
908
+ "name": "stdout",
909
+ "output_type": "stream",
910
+ "text": [
911
+ "<class 'pandas.core.frame.DataFrame'>\n",
912
+ "RangeIndex: 54960 entries, 0 to 54959\n",
913
+ "Data columns (total 8 columns):\n",
914
+ " # Column Non-Null Count Dtype \n",
915
+ "--- ------ -------------- ----- \n",
916
+ " 0 Date_reported 54960 non-null object \n",
917
+ " 1 Country_code 54731 non-null object \n",
918
+ " 2 Country 54960 non-null object \n",
919
+ " 3 WHO_region 50838 non-null object \n",
920
+ " 4 New_cases 38082 non-null float64\n",
921
+ " 5 Cumulative_cases 54960 non-null int64 \n",
922
+ " 6 New_deaths 24747 non-null float64\n",
923
+ " 7 Cumulative_deaths 54960 non-null int64 \n",
924
+ "dtypes: float64(2), int64(2), object(4)\n",
925
+ "memory usage: 3.4+ MB\n"
926
+ ]
927
+ }
928
+ ],
929
+ "source": [
930
+ "df.info()"
931
+ ]
932
+ },
933
+ {
934
+ "cell_type": "code",
935
+ "execution_count": 4,
936
+ "id": "ab7a92d8",
937
+ "metadata": {},
938
+ "outputs": [
939
+ {
940
+ "data": {
941
+ "text/plain": [
942
+ "['Date_reported', 'Country_code', 'Country', 'WHO_region']"
943
+ ]
944
+ },
945
+ "execution_count": 4,
946
+ "metadata": {},
947
+ "output_type": "execute_result"
948
+ }
949
+ ],
950
+ "source": [
951
+ "object_columns = df.dtypes[df.dtypes == 'object'].index.tolist()\n",
952
+ "object_columns"
953
+ ]
954
+ },
955
+ {
956
+ "cell_type": "code",
957
+ "execution_count": 5,
958
+ "id": "ae0b8edb",
959
+ "metadata": {},
960
+ "outputs": [
961
+ {
962
+ "name": "stdout",
963
+ "output_type": "stream",
964
+ "text": [
965
+ " New_cases Cumulative_cases New_deaths Cumulative_deaths \\\n",
966
+ "0 NaN 0 NaN 0 \n",
967
+ "1 NaN 0 NaN 0 \n",
968
+ "2 NaN 0 NaN 0 \n",
969
+ "3 NaN 0 NaN 0 \n",
970
+ "4 NaN 0 NaN 0 \n",
971
+ "\n",
972
+ " Date_reported_2020-01-05 Date_reported_2020-01-12 \\\n",
973
+ "0 1 0 \n",
974
+ "1 0 1 \n",
975
+ "2 0 0 \n",
976
+ "3 0 0 \n",
977
+ "4 0 0 \n",
978
+ "\n",
979
+ " Date_reported_2020-01-19 Date_reported_2020-01-26 \\\n",
980
+ "0 0 0 \n",
981
+ "1 0 0 \n",
982
+ "2 1 0 \n",
983
+ "3 0 1 \n",
984
+ "4 0 0 \n",
985
+ "\n",
986
+ " Date_reported_2020-02-02 Date_reported_2020-02-09 ... Country_Zambia \\\n",
987
+ "0 0 0 ... 0 \n",
988
+ "1 0 0 ... 0 \n",
989
+ "2 0 0 ... 0 \n",
990
+ "3 0 0 ... 0 \n",
991
+ "4 1 0 ... 0 \n",
992
+ "\n",
993
+ " Country_Zimbabwe \\\n",
994
+ "0 0 \n",
995
+ "1 0 \n",
996
+ "2 0 \n",
997
+ "3 0 \n",
998
+ "4 0 \n",
999
+ "\n",
1000
+ " Country_occupied Palestinian territory, including east Jerusalem \\\n",
1001
+ "0 0 \n",
1002
+ "1 0 \n",
1003
+ "2 0 \n",
1004
+ "3 0 \n",
1005
+ "4 0 \n",
1006
+ "\n",
1007
+ " WHO_region_AFRO WHO_region_AMRO WHO_region_EMRO WHO_region_EURO \\\n",
1008
+ "0 0 0 1 0 \n",
1009
+ "1 0 0 1 0 \n",
1010
+ "2 0 0 1 0 \n",
1011
+ "3 0 0 1 0 \n",
1012
+ "4 0 0 1 0 \n",
1013
+ "\n",
1014
+ " WHO_region_OTHER WHO_region_SEARO WHO_region_WPRO \n",
1015
+ "0 0 0 0 \n",
1016
+ "1 0 0 0 \n",
1017
+ "2 0 0 0 \n",
1018
+ "3 0 0 0 \n",
1019
+ "4 0 0 0 \n",
1020
+ "\n",
1021
+ "[5 rows x 719 columns]\n"
1022
+ ]
1023
+ }
1024
+ ],
1025
+ "source": [
1026
+ "import pandas as pd\n",
1027
+ "\n",
1028
+ "def one_hot_encode_objects(df):\n",
1029
+ " object_cols = df.select_dtypes(include='object').columns\n",
1030
+ "\n",
1031
+ " for col in object_cols:\n",
1032
+ " if \"date\" in col:\n",
1033
+ " continue\n",
1034
+ "\n",
1035
+ " # Perform one-hot encoding\n",
1036
+ " dummies = pd.get_dummies(df[col], prefix=col).astype(int)\n",
1037
+ " df = pd.concat([df, dummies], axis=1)\n",
1038
+ " \n",
1039
+ " df = df.drop(columns=object_cols)\n",
1040
+ " return df\n",
1041
+ "\n",
1042
+ "\n",
1043
+ "def preprocessing(query):\n",
1044
+ " df=pd.read_csv(\"final/\"+query+\".csv\")\n",
1045
+ " # print(df.head())\n",
1046
+ " df=one_hot_encode_objects(df)\n",
1047
+ " print(df.head())\n",
1048
+ " \n",
1049
+ " \n",
1050
+ "preprocessing(\"covid 19\")"
1051
+ ]
1052
+ },
1053
+ {
1054
+ "cell_type": "code",
1055
+ "execution_count": 2,
1056
+ "id": "f4ab7ad9",
1057
+ "metadata": {},
1058
+ "outputs": [
1059
+ {
1060
+ "name": "stdout",
1061
+ "output_type": "stream",
1062
+ "text": [
1063
+ "Reduced file saved to: final/twitter sentiment analysis.csv\n"
1064
+ ]
1065
+ }
1066
+ ],
1067
+ "source": [
1068
+ "import pandas as pd\n",
1069
+ "\n",
1070
+ "def reduce_csv_to_10_percent(file_path):\n",
1071
+ " # Read the original CSV\n",
1072
+ " df = pd.read_csv(file_path)\n",
1073
+ "\n",
1074
+ " # Sample 10% of the rows\n",
1075
+ " reduced_df = df.sample(frac=0.1, random_state=42)\n",
1076
+ "\n",
1077
+ " # Save back to the original file path, overwriting it\n",
1078
+ " reduced_df.to_csv(file_path, index=False)\n",
1079
+ " print(f\"Reduced file saved to: {file_path}\")\n",
1080
+ "\n",
1081
+ "# Example usage\n",
1082
+ "reduce_csv_to_10_percent(\"final/twitter sentiment analysis.csv\")"
1083
+ ]
1084
+ },
1085
+ {
1086
+ "cell_type": "code",
1087
+ "execution_count": null,
1088
+ "id": "5644317d",
1089
+ "metadata": {},
1090
+ "outputs": [],
1091
+ "source": []
1092
+ }
1093
+ ],
1094
+ "metadata": {
1095
+ "kernelspec": {
1096
+ "display_name": "base",
1097
+ "language": "python",
1098
+ "name": "python3"
1099
+ },
1100
+ "language_info": {
1101
+ "codemirror_mode": {
1102
+ "name": "ipython",
1103
+ "version": 3
1104
+ },
1105
+ "file_extension": ".py",
1106
+ "mimetype": "text/x-python",
1107
+ "name": "python",
1108
+ "nbconvert_exporter": "python",
1109
+ "pygments_lexer": "ipython3",
1110
+ "version": "3.12.7"
1111
+ }
1112
+ },
1113
+ "nbformat": 4,
1114
+ "nbformat_minor": 5
1115
+ }
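
A note on the one-hot encoding cell above: the guard `if "date" in col:` is case-sensitive, so `Date_reported` is not skipped and gets expanded into one dummy column per reporting date, which is where the 719 columns in the printed output come from. Below is a minimal sketch of a case-insensitive variant (an editor's illustration, not code from this commit) that also keeps the raw date column instead of dropping it along with the other object columns.

import pandas as pd

def one_hot_encode_objects(df: pd.DataFrame) -> pd.DataFrame:
    # Object (string) columns are the candidates for one-hot encoding
    object_cols = df.select_dtypes(include="object").columns

    encoded_cols = []
    for col in object_cols:
        # Case-insensitive check so 'Date_reported' is skipped, not only 'date'
        if "date" in col.lower():
            continue
        dummies = pd.get_dummies(df[col], prefix=col).astype(int)
        df = pd.concat([df, dummies], axis=1)
        encoded_cols.append(col)

    # Drop only the columns that were actually encoded; date columns stay as-is
    return df.drop(columns=encoded_cols)
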
workflow.txt ADDED
@@ -0,0 +1,377 @@
1
+ 1. separate attributes and data
2
+ 2. remove the datatypes from the attributes
3
+ C:\Users\Niall Dcunha\DatasetCreator\house price prediction\21754539_dataset
4
+
5
+ # import os
6
+ # import glob
7
+ # import pandas as pd
8
+ # import openai
9
+ # from openai import OpenAI
10
+ # from dotenv import load_dotenv
11
+ # import ast
12
+ # import re
13
+
14
+ # def extract_dict_from_response(response: str) -> dict:
15
+ # # Try extracting code block content containing the dictionary
16
+ # match = re.search(r"```(?:python)?\s*(\{.*?\})\s*```", response, re.DOTALL)
17
+ # if match:
18
+ # mapping_str = match.group(1)
19
+ # else:
20
+ # # Try extracting dictionary directly if it's not in code block
21
+ # match = re.search(r"(\{.*\})", response, re.DOTALL)
22
+ # if not match:
23
+ # raise ValueError("❌ Could not find a Python dictionary in the response.")
24
+ # mapping_str = match.group(1)
25
+
26
+ # try:
27
+ # return ast.literal_eval(mapping_str)
28
+ # except Exception as e:
29
+ # print("⚠️ Failed to evaluate extracted dictionary string.")
30
+ # print("String:", mapping_str)
31
+ # raise e
32
+
33
+ # # Load environment variables
34
+ # load_dotenv()
35
+ # client = OpenAI(
36
+ # api_key=os.getenv("OPENAI_API_KEY"),
37
+ # base_url=os.getenv("OPENAI_API_BASE") # Optional: for Azure or self-hosted
38
+ # )
39
+
40
+ # def load_csv_files(folder_path):
41
+ # csv_files = glob.glob(os.path.join(folder_path, "*.csv"))
42
+ # dataframes = []
43
+ # column_sets = []
44
+ # valid_paths = []
45
+
46
+ # print("📥 Reading CSV files...")
47
+
48
+ # for file in csv_files:
49
+ # try:
50
+ # df = pd.read_csv(file)
51
+ # dataframes.append(df)
52
+ # column_sets.append(list(df.columns))
53
+ # valid_paths.append(file)
54
+ # print(f"✅ Loaded: {os.path.basename(file)}")
55
+ # except pd.errors.ParserError as e:
56
+ # print(f"❌ Skipping file due to parsing error: {os.path.basename(file)}")
57
+ # print(f" ↳ {e}")
58
+ # except Exception as e:
59
+ # print(f"⚠️ Unexpected error with file {os.path.basename(file)}: {e}")
60
+
61
+ # return dataframes, column_sets, valid_paths
62
+
63
+ # def generate_mapping_prompt(column_sets):
64
+ # prompt = (
65
+ # "You are a data scientist helping to merge multiple ML prediction datasets. "
66
+ # "Each CSV may have different or similar column names. I need a unified mapping to standardize these datasets. "
67
+ # "Also, please identify likely prediction label columns (e.g., price, quality, outcome).\n\n"
68
+ # "Here are the column headers from each CSV:\n"
69
+ # )
70
+ # for i, columns in enumerate(column_sets):
71
+ # prompt += f"CSV {i+1}: {columns}\n"
72
+ # prompt += (
73
+ # "\nPlease provide:\n"
74
+ # "1. A Python dictionary mapping similar columns across these CSVs.\n"
75
+ # "2. A list of columns most likely to represent prediction labels.\n\n"
76
+ # "Format your response as:\n"
77
+ # "```python\n"
78
+ # "column_mapping = { ... }\n"
79
+ # "label_columns = [ ... ]\n"
80
+ # "```"
81
+ # )
82
+ # return prompt
83
+
84
+ # def get_column_mapping_from_openai(column_sets):
85
+ # prompt = generate_mapping_prompt(column_sets)
86
+
87
+ # response = client.chat.completions.create(
88
+ # model="gpt-4",
89
+ # messages=[
90
+ # {"role": "system", "content": "You are a helpful data scientist."},
91
+ # {"role": "user", "content": prompt}
92
+ # ],
93
+ # temperature=0.3
94
+ # )
95
+
96
+ # content = response.choices[0].message.content
97
+ # print("\n📩 Received response from OpenAI.")
98
+
99
+ # try:
100
+ # # Try parsing both dictionary and label list from the response
101
+ # column_mapping_match = re.search(r"column_mapping\s*=\s*(\{.*?\})", content, re.DOTALL)
102
+ # label_columns_match = re.search(r"label_columns\s*=\s*(\[.*?\])", content, re.DOTALL)
103
+
104
+ # if column_mapping_match:
105
+ # mapping = ast.literal_eval(column_mapping_match.group(1))
106
+ # else:
107
+ # raise ValueError("❌ Could not find `column_mapping` in the response.")
108
+
109
+ # if label_columns_match:
110
+ # label_columns = ast.literal_eval(label_columns_match.group(1))
111
+ # else:
112
+ # label_columns = []
113
+
114
+ # except Exception as e:
115
+ # print("⚠️ Error parsing OpenAI response:")
116
+ # print(content)
117
+ # raise e
118
+
119
+ # return mapping, label_columns
120
+
121
+ # def standardize_columns(df, mapping):
122
+ # new_columns = {col: mapping.get(col, col) for col in df.columns}
123
+ # return df.rename(columns=new_columns)
124
+
125
+ # def merge_csvs(folder_path, output_file="merged_dataset.csv"):
126
+ # dfs, column_sets, csv_paths = load_csv_files(folder_path)
127
+
128
+ # if not dfs:
129
+ # print("❌ No valid CSVs found to merge.")
130
+ # return
131
+
132
+ # print("\n🧠 Requesting column mapping from OpenAI...")
133
+ # mapping, label_columns = get_column_mapping_from_openai(column_sets)
134
+
135
+ # print("\n📌 Column Mapping:")
136
+ # for k, v in mapping.items():
137
+ # print(f" '{k}' -> '{v}'")
138
+
139
+ # print("\n🏷️ Suggested Label Columns:")
140
+ # for label in label_columns:
141
+ # print(f" - {label}")
142
+
143
+ # standardized_dfs = [standardize_columns(df, mapping) for df in dfs]
144
+ # merged_df = pd.concat(standardized_dfs, ignore_index=True, sort=False)
145
+
146
+ # merged_df.to_csv(output_file, index=False)
147
+ # print(f"\n✅ Merged dataset saved as '{output_file}'")
148
+
149
+ # if __name__ == "__main__":
150
+ # folder_path = "house"
151
+
152
+
153
+ import os
154
+ import glob
155
+ import pandas as pd
156
+ import ast
157
+ import re
158
+ from itertools import combinations
159
+ from rapidfuzz import fuzz, process
160
+ from dotenv import load_dotenv
161
+ from openai import OpenAI
162
+
163
+ # Manual rename map to standardize some known variations
164
+ manual_rename_map = {
165
+ "review": "text",
166
+ "text": "text",
167
+ "NumBedrooms": "bedrooms",
168
+ "HousePrice": "price",
169
+ "TARGET(PRICE_IN_LACS)": "price",
170
+ "SquareFootage": "area",
171
+ "SQUARE_FT": "area",
172
+ "sentiment": "label",
173
+ "target": "label",
174
+ "type": "label",
175
+ "variety": "label",
176
+ "class": "label",
177
+ "HeartDisease": "label",
178
+ "Heart Attack Risk (Binary)": "label",
179
+ "Heart Attack Risk": "label"
180
+ }
181
+
182
+
183
+ def normalize(col):
184
+ return re.sub(r'[^a-z0-9]', '', col.lower())
185
+
186
+ def apply_manual_renaming(df, rename_map):
187
+ renamed = {}
188
+ for col in df.columns:
189
+ if col in rename_map:
190
+ renamed[col] = rename_map[col]
191
+ return df.rename(columns=renamed)
192
+
193
+ def get_fuzzy_common_columns(cols_list, threshold=75):
194
+ base = cols_list[0]
195
+ common = set()
196
+ for col in base:
197
+ match_all = True
198
+ for other in cols_list[1:]:
199
+ match, score, _ = process.extractOne(col, other, scorer=fuzz.token_sort_ratio)
200
+ if score < threshold:
201
+ match_all = False
202
+ break
203
+ if match_all:
204
+ common.add(col)
205
+ return common
206
+
207
+ def sortFiles(dfs):
208
+ unique_dfs = []
209
+ seen = []
210
+ for i, df1 in enumerate(dfs):
211
+ duplicate = False
212
+ for j in seen:
213
+ df2 = dfs[j]
214
+ if df1.shape != df2.shape:
215
+ continue
216
+ if df1.reset_index(drop=True).equals(df2.reset_index(drop=True)):
217
+ duplicate = True
218
+ break
219
+ if not duplicate:
220
+ unique_dfs.append(df1)
221
+ seen.append(i)
222
+ return unique_dfs
223
+
224
+ def load_csv_files(folder_path):
225
+ csv_files = glob.glob(os.path.join(folder_path, "*.csv"))
226
+ dfs = []
227
+ column_sets = []
228
+ paths = []
229
+
230
+ for file in csv_files:
231
+ try:
232
+ df = pd.read_csv(file)
233
+ dfs.append(df)
234
+ column_sets.append(list(df.columns))
235
+ paths.append(file)
236
+ print(f"✅ Loaded: {os.path.basename(file)}")
237
+ except Exception as e:
238
+ print(f"❌ Failed to load {file}: {e}")
239
+ return dfs, column_sets, paths
240
+
241
+ def generate_mapping_prompt(column_sets):
242
+ prompt = (
243
+ "You are a data scientist helping to merge multiple machine learning prediction datasets. "
244
+ "Each CSV file may have different column names, even if they represent similar types of data. "
245
+ "Your task is to identify and map these similar columns across datasets to a common, unified name. "
246
+ "Columns with clearly similar features (e.g., 'Bedrooms' and 'BedroomsAbvGr') should be merged into one column with a relevant name like 'bedrooms'.\n\n"
247
+ "Avoid keeping redundant or unique columns that do not have any logical counterpart in other datasets unless they are essential. "
248
+ "The goal is not to maximize the number of columns or rows, but to create a clean, consistent dataset for training ML models.\n\n"
249
+ "Examples:\n"
250
+ "- Dataset1: 'Locality' -> Mumbai, Delhi\n"
251
+ "- Dataset2: 'Places' -> Goa, Singapore\n"
252
+ "→ Merge both into a common column like 'location'.\n\n"
253
+ "Please also identify likely label or target columns that are typically used for prediction (e.g., price, sentiment, outcome, quality).\n\n"
254
+ )
255
+
256
+ for i, cols in enumerate(column_sets):
257
+ prompt += f"CSV {i+1}: {cols}\n"
258
+ prompt += "\nPlease return:\n```python\ncolumn_mapping = { ... }\nlabel_columns = [ ... ]\n```"
259
+ return prompt
260
+
261
+ def get_column_mapping_from_openai(column_sets):
262
+ load_dotenv()
263
+ client = OpenAI(
264
+ api_key=os.getenv("OPENAI_API_KEY"),
265
+ base_url=os.getenv("OPENAI_API_BASE", "")
266
+ )
267
+
268
+ prompt = generate_mapping_prompt(column_sets)
269
+
270
+ response = client.chat.completions.create(
271
+ model="gpt-4",
272
+ messages=[
273
+ {"role": "system", "content": "You are a helpful data scientist."},
274
+ {"role": "user", "content": prompt}
275
+ ],
276
+ temperature=0.3
277
+ )
278
+
279
+ content = response.choices[0].message.content
280
+
281
+ try:
282
+ column_mapping_match = re.search(r"column_mapping\s*=\s*(\{.*?\})", content, re.DOTALL)
283
+ label_columns_match = re.search(r"label_columns\s*=\s*(\[.*?\])", content, re.DOTALL)
284
+ column_mapping = ast.literal_eval(column_mapping_match.group(1)) if column_mapping_match else {}
285
+ label_columns = ast.literal_eval(label_columns_match.group(1)) if label_columns_match else []
286
+ except Exception as e:
287
+ print("⚠️ Error parsing OpenAI response:")
288
+ print(content)
289
+ raise e
290
+
291
+ return column_mapping, label_columns
292
+
293
+ def clean_and_merge(folder, query=None, use_ai=True):
294
+ os.makedirs("./final", exist_ok=True)
295
+ dfs, column_sets, csv_paths = load_csv_files(folder)
296
+
297
+ if not dfs:
298
+ print("No valid CSVs found.")
299
+ return
300
+
301
+ dfs = sortFiles(dfs)
302
+ dfs = [apply_manual_renaming(df, manual_rename_map) for df in dfs]
303
+
304
+ if use_ai:
305
+ try:
306
+ column_mapping, label_columns = get_column_mapping_from_openai(column_sets)
307
+ dfs = [df.rename(columns={col: column_mapping.get(col, col) for col in df.columns}) for df in dfs]
308
+ except Exception as e:
309
+ print("Falling back to fuzzy matching due to OpenAI error:", e)
310
+ use_ai = False
311
+
312
+ if not use_ai:
313
+ # Normalize columns for fuzzy match fallback
314
+ normalized_cols = []
315
+ for df in dfs:
316
+ normalized_cols.append({normalize(col) for col in df.columns})
317
+
318
+ # Get best combination with fuzzy common columns
319
+ max_common = set()
320
+ best_combo = []
321
+ for i in range(2, len(dfs)+1):
322
+ for combo in combinations(range(len(dfs)), i):
323
+ selected = [normalized_cols[j] for j in combo]
324
+ fuzzy_common = get_fuzzy_common_columns(selected)
325
+ if len(fuzzy_common) >= len(max_common):
326
+ max_common = fuzzy_common
327
+ best_combo = combo
328
+
329
+ # Harmonize and align
330
+ aligned_dfs = []
331
+ for idx in best_combo:
332
+ df = dfs[idx]
333
+ col_map = {}
334
+ for std_col in max_common:
335
+ match, _, _ = process.extractOne(std_col, [normalize(col) for col in df.columns])
336
+ for col in df.columns:
337
+ if normalize(col) == match:
338
+ col_map[col] = std_col
339
+ break
340
+ df_subset = df[list(col_map.keys())].rename(columns=col_map)
341
+ aligned_dfs.append(df_subset)
342
+
343
+ combined_df = pd.concat(aligned_dfs, ignore_index=True)
344
+ else:
345
+ combined_df = pd.concat(dfs, ignore_index=True)
346
+
347
+ # Label assignment fallback
348
+ for i, df in enumerate(dfs):
349
+ if 'label' not in df.columns:
350
+ name = os.path.basename(csv_paths[i]).split(".")[0].lower()
351
+ name_cleaned = name
352
+ if query:
353
+ words = set(re.sub(r'[^a-z]', ' ', query.lower()).split())
354
+ for word in words:
355
+ name_cleaned = name_cleaned.replace(word, "")
356
+ df['label'] = name_cleaned
357
+
358
+ # Decide best final file
359
+ largest_df = max(dfs, key=lambda df: len(df))
360
+ flag = False
361
+
362
+ if len(largest_df) > len(combined_df) and len(largest_df.columns) > 2:
363
+ flag = True
364
+ elif len(combined_df) > len(largest_df) and (len(largest_df.columns) - len(combined_df.columns)) > 3 and len(largest_df.columns) < 7:
365
+ flag = True
366
+
367
+ output_file = f"./final/{query or os.path.basename(folder)}.csv"
368
+ if flag:
369
+ largest_df.to_csv(output_file, index=False)
370
+ print(f"⚠️ Saved fallback single file due to poor merge: {output_file}")
371
+ else:
372
+ combined_df.to_csv(output_file, index=False)
373
+ print(f"✅ Saved merged file: {output_file}")
374
+
375
+ # Example usage:
376
+ clean_and_merge("house", query="house", use_ai=True)
377
+ # merge_csvs(folder_path)
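
As a quick illustration of the fuzzy fallback used by get_fuzzy_common_columns above: column names are normalized, and a name counts as common only if its best match in every other file scores at or above the threshold (75). The column lists below are made up for the example; only normalize, the rapidfuzz scorer, and the threshold come from the script.

import re
from rapidfuzz import fuzz, process

def normalize(col):
    return re.sub(r'[^a-z0-9]', '', col.lower())

# Headers from two hypothetical CSVs (illustrative values, not taken from the repo)
cols_a = [normalize(c) for c in ["price", "area", "bedrooms", "location"]]
cols_b = [normalize(c) for c in ["HousePrice", "SquareFootage", "NumBedrooms", "Locality"]]

for col in cols_a:
    match, score, _ = process.extractOne(col, cols_b, scorer=fuzz.token_sort_ratio)
    # A column is kept as "common" only when its best match clears the threshold of 75
    print(f"{col!r} -> {match!r} (score={score:.1f}, common={score >= 75})")
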