import os
import re
import shutil
import json
import uuid
import joblib
import nltk
import chromadb
import streamlit as st
from fpdf import FPDF
from chromadb import Client
from chromadb.config import Settings
from PIL import Image
from yachalk import chalk
from dotenv import load_dotenv, find_dotenv
from streamlit_chat import message
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_parse import LlamaParse
from langchain_groq import ChatGroq
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.chains import LLMChain, RetrievalQA
from langchain.agents import AgentType, Tool, initialize_agent, AgentExecutor
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import PGVector
from langchain.document_loaders import PyPDFLoader, UnstructuredPDFLoader, PyPDFium2Loader
from langchain_community.utilities import SerpAPIWrapper
from langchain_community.vectorstores import Chroma
from langchain_community.document_loaders import (
    UnstructuredMarkdownLoader,
    DirectoryLoader,
    PyMuPDFLoader,
    UnstructuredXMLLoader,
    CSVLoader,
    PyPDFDirectoryLoader,
)
from langchain_huggingface import HuggingFaceEmbeddings
## Import all the chains.
from chains_v2.create_questions import QuestionCreationChain
from chains_v2.most_pertinent_question import MostPertinentQuestion
from chains_v2.retrieval_qa import retrieval_qa
from chains_v2.research_compiler import research_compiler
from chains_v2.question_atomizer import QuestionAtomizer
from chains_v2.refine_answer import RefineAnswer
## Import all the helpers.
from helpers.response_helpers import result2QuestionsList
from helpers.response_helpers import qStr2Dict
from helpers.questions_helper import getAnsweredQuestions
from helpers.questions_helper import getUnansweredQuestions
from helpers.questions_helper import getSubQuestions
from helpers.questions_helper import getHopQuestions
from helpers.questions_helper import getLastQuestionId
from helpers.questions_helper import markAnswered
from helpers.questions_helper import getQuestionById

import nest_asyncio  # noqa: E402
nest_asyncio.apply()

load_dotenv(find_dotenv())

nltk.download('averaged_perceptron_tagger_eng')

os.environ["TOKENIZERS_PARALLELISM"] = "false"
SERPAPI_API_KEY = os.environ["SERPAPI_API_KEY"]
GOOGLE_CSE_ID = os.environ["GOOGLE_CSE_ID"]
GOOGLE_API_KEY = os.environ["GOOGLE_API_KEY"]
LLAMA_PARSE_API_KEY = os.environ["LLAMA_PARSE_API_KEY"]
HUGGINGFACEHUB_API_TOKEN = os.environ["HUGGINGFACEHUB_API_TOKEN"]
LANGCHAIN_API_KEY = os.environ["LANGCHAIN_API_KEY"]
LANGCHAIN_ENDPOINT = os.environ["LANGCHAIN_ENDPOINT"]
LANGCHAIN_PROJECT = os.environ["LANGCHAIN_PROJECT"]
groq_api_key = os.getenv('GROQ_API_KEY')
#--------------
im = Image.open("Assets/StratXcel_white_small.jpg")

st.set_page_config(page_title="StratXcel",
                   page_icon=im,
                   layout="wide")

st.markdown(
    """
    <style>
    /* Main app background and text color */
    .stApp {
        background-color: black;
        color: #FAFAFA;
        font-family: 'sans serif';
    }
    /* Background color for the sidebar */
    .css-1d391kg {
        background-color: #262730;
    }
    /* Text color for sidebar and other text elements */
    .css-1d391kg, .css-145kmo2 {
        color: #FAFAFA;
    }
    /* Button background color and text color */
    .css-1v0mbdj, .css-1dbjc4n, .css-1ph4q5j, .stButton button {
        background-color: #2C5FCB;
        color: #FAFAFA; /* Text color */
    }
    /* Button hover state */
    .css-1v0mbdj:hover, .css-1dbjc4n:hover, .css-1ph4q5j:hover, .stButton button:hover {
        background-color: #1a4b8e;
    }
    </style>
    """,
    unsafe_allow_html=True
)
#st.sidebar.image('StratXcel.png', width=150)
st.image('Assets/black_waves2.jpeg', width=1240)

def load_credentials(filepath):
    with open(filepath, 'r') as file:
        return json.load(file)

# Load credentials from 'credentials.json'
credentials = load_credentials('Assets/credentials.json')

# Initialize session state if not already done
if 'logged_in' not in st.session_state:
    st.session_state.logged_in = False
    st.session_state.username = ''

# Function to handle login
def login(username, password):
    if username in credentials and credentials[username] == password:
        st.session_state.logged_in = True
        st.session_state.username = username
        st.rerun()  # Rerun to reflect login state
    else:
        st.session_state.logged_in = False
        st.session_state.username = ''
        st.error("Invalid username or password.")

# Function to handle logout
def logout():
    st.session_state.logged_in = False
    st.session_state.username = ''
    st.rerun()  # Rerun to reflect logout state

#--------------
## Define log printers
def print_iteration(current_iteration):
    print(
        chalk.bg_yellow_bright.black.bold(
            f"\n   Iteration - {current_iteration}  β–·β–Ά  \n"
        )
    )

def print_unanswered_questions(unanswered):
    print(
        chalk.cyan_bright("** Unanswered Questions **"),
        chalk.cyan("".join([f"\n'{q['id']}. {q['question']}'" for q in unanswered])),
    )

def print_next_question(current_question_id, current_question):
    print(
        chalk.magenta.bold("** πŸ€” Next Questions I must ask: **\n"),
        chalk.magenta(current_question_id),
        chalk.magenta(current_question["question"]),
    )

def print_answer(current_question):
    print(
        chalk.yellow_bright.bold("** Answer **\n"),
        chalk.yellow_bright(current_question["answer"]),
    )

def print_final_answer(answerpad):
    print(
        chalk.white("** Refined Answer **\n"),
        chalk.white(answerpad[-1]),
    )

def print_max_iterations():
    print(
        chalk.bg_yellow_bright.black.bold(
            "\n βœ”βœ”  Max Iterations Reached. Compiling the results ...\n"
        )
    )

def print_result(result):
    print(chalk.italic.white_bright((result["text"])))

def print_sub_question(q):
    print(chalk.magenta.bold(f"** Sub Question **\n{q['question']}\n{q['answer']}\n"))

## ---- The researcher ----- ##
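## The Agent below runs an iterative research loop:
##   0. atomize the user question into sub-questions and answer each from the store;
##   1. generate follow-up "hop" questions from the accumulated context;
##   2. pick the most pertinent unanswered question;
##   3. answer it with retrieval QA over the vector store;
##   4. refine the running answer; repeat until max_iterations is exceeded.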

class Agent:
    ## Create chains
    def __init__(self, agent_settings, scratchpad, store, verbose):
        self.store = store
        self.scratchpad = scratchpad
        self.agent_settings = agent_settings
        self.verbose = verbose
        self.question_creation_chain = QuestionCreationChain.from_llm(
            language_model(
                temperature=self.agent_settings["question_creation_temperature"]
            ),
            verbose=self.verbose,
        )
        self.question_atomizer = QuestionAtomizer.from_llm(
            llm=language_model(
                temperature=self.agent_settings["question_atomizer_temperature"]
            ),
            verbose=self.verbose,
        )
        self.most_pertinent_question = MostPertinentQuestion.from_llm(
            language_model(
                temperature=self.agent_settings["question_creation_temperature"]
            ),
            verbose=self.verbose,
        )
        self.refine_answer = RefineAnswer.from_llm(
            language_model(
                temperature=self.agent_settings["refine_answer_temperature"]
            ),
            verbose=self.verbose,
        )

    def run(self, question):
        ## Step 0. Prepare the initial set of questions
        atomized_questions_response = self.question_atomizer.run(
            question=question,
            num_questions=self.agent_settings["num_atomistic_questions"],
        )

        self.scratchpad["questions"] += result2QuestionsList(
            question_response=atomized_questions_response,
            type="subquestion",
            status="unanswered",
        )

        for q in self.scratchpad["questions"]:
            q["answer"], q["documents"] = retrieval_qa(
                llm=language_model(
                    temperature=self.agent_settings["qa_temperature"],
                    verbose=self.verbose,
                ),
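                # MMR retrieval: fetch 10 candidate chunks, keep the 5 most diverse.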
                retriever=self.store.as_retriever(
                    search_type="mmr", search_kwargs={"k": 5, "fetch_k": 10}
                ),
                question=q["question"],
                answer_length=self.agent_settings["intermediate_answers_length"],
                verbose=self.verbose,
            )
            q["status"] = "answered"
            print_sub_question(q)

        
        current_context = "".join(
            f"\n{q['id']}. {q['question']}\n{q['answer']}\n"
            for q in self.scratchpad["questions"]
        )
        
        self.scratchpad["answerpad"] += [current_context]

        current_iteration = 0

        while True:
            current_iteration += 1
            print_iteration(current_iteration)

            # STEP 1: create questions
            start_id = getLastQuestionId(self.scratchpad["questions"]) + 1
            questions_response = self.question_creation_chain.run(
                question=question,
                context=current_context,
                previous_questions=[
                    "".join(f"\n{q['question']}") for q in self.scratchpad["questions"]
                ],
                num_questions=self.agent_settings["num_questions_per_iteration"],
                start_id=start_id,
            )
            self.scratchpad["questions"] += result2QuestionsList(
                question_response=questions_response,
                type="hop",
                status="unanswered",
            )

            # STEP 2: Choose question for current iteration
            unanswered = getUnansweredQuestions(self.scratchpad["questions"])
            unanswered_questions_prompt = self.unanswered_questions_prompt(unanswered)
            print_unanswered_questions(unanswered)
            response = self.most_pertinent_question.run(
                original_question=question,
                unanswered_questions=unanswered_questions_prompt,
            )
            current_question_dict = qStr2Dict(question=response)
            current_question_id = current_question_dict["id"]
            current_question = getQuestionById(
                self.scratchpad["questions"], current_question_id
            )
            print_next_question(current_question_id, current_question)

            # STEP 3: Answer the question
            current_question["answer"], current_question["documents"] = retrieval_qa(
                llm=language_model(
                    temperature=self.agent_settings["qa_temperature"],
                    verbose=self.verbose,
                ),
                retriever=self.store.as_retriever(
                    search_type="mmr", search_kwargs={"k": 5, "fetch_k": 10}
                ),
                question=current_question["question"],
                answer_length=self.agent_settings["intermediate_answers_length"],
                verbose=self.verbose,
            )
            markAnswered(self.scratchpad["questions"], current_question_id)
            print_answer(current_question)
            current_context = current_question["answer"]

            ## STEP 4: refine the answer
            refinement_context = current_question["question"] + "\n" + current_context
            refine_answer = self.refine_answer.run(
                question=question,
                context=refinement_context,
                answer=self.get_latest_answer(),
            )
            self.scratchpad["answerpad"] += [refine_answer]
            print_final_answer(self.scratchpad["answerpad"])

            if current_iteration > self.agent_settings["max_iterations"]:
                print_max_iterations()
                break

    def unanswered_questions_prompt(self, unanswered):
        return (
            "[" + "".join([f"\n{q['id']}. {q['question']}" for q in unanswered]) + "]"
        )

    def notes_prompt(self, answered_questions):
        return "".join(
            [
                f"{{ Question: {q['question']}, Answer: {q['answer']} }}"
                for q in answered_questions
            ]
        )

    def get_latest_answer(self):
        answers = self.scratchpad["answerpad"]
        answer = answers[-1] if answers else ""
        return answer
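
## Example (sketch): driving the researcher over one of the document stores.
## `agent_settings` is assumed to be a dict with the keys referenced above
## (the *_temperature values, num_atomistic_questions, num_questions_per_iteration,
## intermediate_answers_length and max_iterations).
# store, _ = create_vector_database_financial()
# scratchpad = {"questions": [], "answerpad": []}
# agent = Agent(agent_settings, scratchpad, store=store, verbose=False)
# agent.run("What are the key events of default under the debt agreements?")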
    
#--------------
# If not logged in, show login form
if not st.session_state.logged_in:
    st.sidebar.write("Login")
    username = st.sidebar.text_input('Username')
    password = st.sidebar.text_input('Password', type='password')
    if st.sidebar.button('Login'):
        login(username, password)
    # Stop the script here if the user is not logged in
    st.stop()


# If logged in, show logout button and main content
#st.sidebar.image('StratXcel.png', width=150)
if st.session_state.logged_in:
    st.sidebar.write(f"Welcome, {st.session_state.username}!")
    if st.sidebar.button('Logout'):
        logout()

#st.write(css, unsafe_allow_html=True)

company_document = st.sidebar.toggle("Shareholder agreement", False)
financial_document = st.sidebar.toggle("Debt agreement", False)
intercreditor_document = st.sidebar.toggle("Intercreditor agreement", False)
LPA_document = st.sidebar.toggle("Limited partnership agreement", False)
ESG_document = st.sidebar.toggle("ESG report", False)

#------------- 
Llama = "llama-3.2-90b-vision-preview"
llm = ChatGroq(groq_api_key=groq_api_key,
               model_name=Llama, temperature=0.0, streaming=True)
#--------------
def language_model(
    model_name: str = Llama, temperature: float = 0, verbose: bool = False
):
    llm = ChatGroq(groq_api_key=groq_api_key, model_name=model_name, temperature=temperature, verbose=verbose)
    return llm
#--------------
doc_retriever_company = None
doc_retriever_financials = None
doc_retriever_intercreditor = None
doc_retriever_LPA = None
doc_retriever_ESG = None
#--------------

#@st.cache_data
def load_or_parse_data_company():
    data_file = "./data/parsed_data_company.pkl"

    # Reuse cached parse results when available instead of re-parsing.
    if os.path.exists(data_file):
        return joblib.load(data_file)

    parsing_instruction = """The provided documents are company law documents of a company.
    They contain detailed information about the rights and obligations of the company, its shareholders, and its management.
    They also contain procedures for dispute resolution, voting, control priority, and exit and sale situations.
    You must never provide false legal or financial information. Use only the information included in the context documents.
    Only refer to other sources if the context documents refer to them or if necessary to provide additional understanding of the company's documents."""

    parser = LlamaParse(api_key=LLAMA_PARSE_API_KEY,
                        result_type="markdown",
                        parsing_instruction=parsing_instruction,
                        max_timeout=5000,
                        gpt4o_mode=True,
                        )

    file_extractor = {".pdf": parser,
                      ".docx": parser,
                      ".doc": parser,
                      }
    reader = SimpleDirectoryReader("./Corporate_Documents", file_extractor=file_extractor)
    documents = reader.load_data()

    print("Saving the parse results in .pkl format ..........")
    joblib.dump(documents, data_file)

    return documents

#@st.cache_data
def load_or_parse_data_financial():
    data_file = "./data/parsed_data_financial.pkl"

    # Reuse cached parse results when available instead of re-parsing.
    if os.path.exists(data_file):
        return joblib.load(data_file)

    parsing_instruction = """The provided documents are financial law documents of a company.
    They contain detailed information about the rights and obligations of the company and its creditors.
    They also contain procedures for acceleration of debt, sale of security, enforcement, use of creditor control, and priority and distribution of assets.
    You must never provide false legal or financial information. Use only the information included in the context documents.
    Only refer to other sources if the context documents refer to them or if necessary to provide additional understanding of the company's documents."""

    parser = LlamaParse(api_key=LLAMA_PARSE_API_KEY,
                        result_type="markdown",
                        parsing_instruction=parsing_instruction,
                        max_timeout=5000,
                        gpt4o_mode=True,
                        )

    file_extractor = {".pdf": parser,
                      ".docx": parser,
                      ".doc": parser,
                      }
    reader = SimpleDirectoryReader("./Financial_Documents", file_extractor=file_extractor)
    documents = reader.load_data()

    print("Saving the parse results in .pkl format ..........")
    joblib.dump(documents, data_file)

    return documents

#--------------
#@st.cache_data
def load_or_parse_data_intercreditor():
    data_file = "./data/parsed_data_intercreditor.pkl"

    # Reuse cached parse results when available instead of re-parsing.
    if os.path.exists(data_file):
        return joblib.load(data_file)

    parsing_instruction = """The provided document is an intercreditor agreement between a company and its creditor groups.
    It contains detailed information about the rights and obligations of the company and its creditors and creditor groups.
    It also contains procedures for acceleration of debt, sale of security, enforcement, use of creditor control, and priority and distribution of assets.
    You must never provide false legal or financial information. Use only the information included in the context documents.
    Only refer to other sources if the context document refers to them or if necessary to provide additional understanding of the company's documents."""

    parser = LlamaParse(api_key=LLAMA_PARSE_API_KEY,
                        result_type="markdown",
                        parsing_instruction=parsing_instruction,
                        max_timeout=5000,
                        gpt4o_mode=True,
                        )

    file_extractor = {".pdf": parser,
                      ".docx": parser,
                      ".doc": parser,
                      }
    reader = SimpleDirectoryReader("./Intercreditor_Documents", file_extractor=file_extractor)
    documents = reader.load_data()

    print("Saving the parse results in .pkl format ..........")
    joblib.dump(documents, data_file)

    return documents

#@st.cache_data
def load_or_parse_data_LPA():
    data_file = "./data/parsed_data_LPA.pkl"

    # Reuse cached parse results when available instead of re-parsing.
    if os.path.exists(data_file):
        return joblib.load(data_file)

    parsing_instruction = """The provided document is a limited partnership agreement between a fund, its general partner and its limited partners.
    It contains detailed information about the rights and obligations of the fund, the general partner and the limited partners.
    It also contains procedures for investments, additional investments, general partner and fund costs, liability and other fund matters.
    You must never provide false legal, statistical or financial information. Use only the information included in the context documents.
    Only refer to other sources if the context document refers to them or if necessary to provide additional understanding of the company's documents."""

    parser = LlamaParse(api_key=LLAMA_PARSE_API_KEY,
                        result_type="markdown",
                        parsing_instruction=parsing_instruction,
                        max_timeout=5000,
                        gpt4o_mode=True,
                        )

    file_extractor = {".pdf": parser,
                      ".docx": parser,
                      ".doc": parser,
                      }
    reader = SimpleDirectoryReader("./LPA", file_extractor=file_extractor)
    documents = reader.load_data()

    print("Saving the parse results in .pkl format ..........")
    joblib.dump(documents, data_file)

    return documents

#--------------
#@st.cache_data
def load_or_parse_data_ESG():
    data_file = "./data/parsed_data_ESG.pkl"

    # Reuse cached parse results when available instead of re-parsing.
    if os.path.exists(data_file):
        return joblib.load(data_file)

    parsing_instruction = """The provided document is an ESG and sustainability document of a company.
    It contains detailed information about the environmental, social and governance aspects of the company.
    You must never provide false legal, statistical or financial information. Use only the information included in the context documents.
    Only refer to other sources if the context document refers to them or if necessary to provide additional understanding of the company's documents."""

    parser = LlamaParse(api_key=LLAMA_PARSE_API_KEY,
                        result_type="markdown",
                        parsing_instruction=parsing_instruction,
                        max_timeout=5000,
                        gpt4o_mode=True,
                        )

    file_extractor = {".pdf": parser,
                      ".docx": parser,
                      ".doc": parser,
                      }
    reader = SimpleDirectoryReader("./ESG", file_extractor=file_extractor)
    documents = reader.load_data()

    print("Saving the parse results in .pkl format ..........")
    joblib.dump(documents, data_file)

    return documents

#--------------
# Create vector database
@st.cache_resource
def create_vector_database_company():
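    # Pipeline: LlamaParse markdown -> dump to data/output_company.md -> chunk with
    # RecursiveCharacterTextSplitter -> HuggingFace embeddings -> persistent Chroma store,
    # plus a LlamaIndex VectorStoreIndex query engine over the raw parsed documents.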

    llama_parse_documents = load_or_parse_data_company()

    with open('data/output_company.md', 'w') as f:  # Overwrite so re-runs do not duplicate content
        for doc in llama_parse_documents:
            f.write(doc.text + '\n')

    markdown_path = "data/output_company.md"
    loader = UnstructuredMarkdownLoader(markdown_path)
    documents = loader.load()

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=30)
    docs = text_splitter.split_documents(documents)

    print(f"length of documents loaded: {len(documents)}")
    print(f"total number of document chunks generated :{len(docs)}")

    persist_directory = "./chroma_db_company"  # Specify directory for Chroma persistence
    embed_model = HuggingFaceEmbeddings()
    print('Vector DB not yet created !')

    vs = Chroma.from_documents(
        documents=docs,
        embedding=embed_model,
        collection_name="rag_company",
        persist_directory=persist_directory  # Ensure persistence
    )

    doc_retriever_company = vs
    
    index = VectorStoreIndex.from_documents(llama_parse_documents)
    query_engine = index.as_query_engine()
    
    print('Vector DB created successfully !')
    return doc_retriever_company, query_engine

@st.cache_resource
def create_vector_database_financial():
    # Call the function to either load or parse the data
    llama_parse_documents = load_or_parse_data_financial()

    with open('data/output_financials.md', 'w') as f:  # Overwrite so re-runs do not duplicate content
        for doc in llama_parse_documents:
            f.write(doc.text + '\n')

    markdown_path = "data/output_financials.md"
    loader = UnstructuredMarkdownLoader(markdown_path)
    documents = loader.load()
    # Split loaded documents into chunks
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=15)
    docs = text_splitter.split_documents(documents)

    print(f"length of documents loaded: {len(documents)}")
    print(f"total number of document chunks generated :{len(docs)}")
    persist_directory = "./chroma_db_financial"  # Specify directory for Chroma persistence

    embed_model = HuggingFaceEmbeddings()

    vs = Chroma.from_documents(
        documents=docs,
        embedding=embed_model,
        collection_name="rag_financial",
        persist_directory=persist_directory  # Ensure persistence
    )
    doc_retriever_financial = vs
    
    index = VectorStoreIndex.from_documents(llama_parse_documents)
    query_engine = index.as_query_engine()

    print('Vector DB created successfully !')
    return doc_retriever_financial, query_engine

#--------------

@st.cache_resource
def create_vector_database_intercreditor():
    # Call the function to either load or parse the data
    llama_parse_documents = load_or_parse_data_intercreditor()

    with open('data/output_intercreditor.md', 'w') as f:  # Overwrite so re-runs do not duplicate content
        for doc in llama_parse_documents:
            f.write(doc.text + '\n')

    markdown_path = "data/output_intercreditor.md"
    loader = UnstructuredMarkdownLoader(markdown_path)
    documents = loader.load()
    # Split loaded documents into chunks
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=15)
    docs = text_splitter.split_documents(documents)

    print(f"length of documents loaded: {len(documents)}")
    print(f"total number of document chunks generated :{len(docs)}")
    persist_directory = "./chroma_db_intercreditor"  # Specify directory for Chroma persistence
    embed_model = HuggingFaceEmbeddings()

    vs = Chroma.from_documents(
        documents=docs,
        embedding=embed_model,
        collection_name="rag_intercreditor",
        persist_directory=persist_directory   # Ensure persistence
    )
    doc_retriever_intercreditor = vs
    
    index = VectorStoreIndex.from_documents(llama_parse_documents)
    query_engine = index.as_query_engine()

    print('Vector DB created successfully !')
    return doc_retriever_intercreditor, query_engine

#--------------

@st.cache_resource
def create_vector_database_LPA():
    # Call the function to either load or parse the data
    llama_parse_documents = load_or_parse_data_LPA()

    with open('data/output_LPA.md', 'w') as f:  # Overwrite so re-runs do not duplicate content
        for doc in llama_parse_documents:
            f.write(doc.text + '\n')

    markdown_path = "data/output_LPA.md"
    loader = UnstructuredMarkdownLoader(markdown_path)
    documents = loader.load()
    # Split loaded documents into chunks
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=15)
    docs = text_splitter.split_documents(documents)

    print(f"length of documents loaded: {len(documents)}")
    print(f"total number of document chunks generated :{len(docs)}")
    persist_directory = "./chroma_db_LPA"  # Specify directory for Chroma persistence
    embed_model = HuggingFaceEmbeddings()

    vs = Chroma.from_documents(
        documents=docs,
        embedding=embed_model,
        collection_name="rag_LPA",
        persist_directory=persist_directory   # Ensure persistence
    )
    doc_retriever_LPA = vs
    
    index = VectorStoreIndex.from_documents(llama_parse_documents)
    query_engine = index.as_query_engine()

    print('Vector DB created successfully !')
    return doc_retriever_LPA, query_engine

#--------------

@st.cache_resource
def create_vector_database_ESG():
    # Call the function to either load or parse the data
    llama_parse_documents = load_or_parse_data_ESG()

    with open('data/output_ESG.md', 'w') as f:  # Overwrite so re-runs do not duplicate content
        for doc in llama_parse_documents:
            f.write(doc.text + '\n')

    markdown_path = "data/output_ESG.md"
    loader = UnstructuredMarkdownLoader(markdown_path)
    documents = loader.load()
    # Split loaded documents into chunks
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=15)
    docs = text_splitter.split_documents(documents)

    print(f"length of documents loaded: {len(documents)}")
    print(f"total number of document chunks generated :{len(docs)}")
    persist_directory = "./chroma_db_ESG"  # Specify directory for Chroma persistence
    embed_model = HuggingFaceEmbeddings()

    vs = Chroma.from_documents(
        documents=docs,
        embedding=embed_model,
        collection_name="rag_ESG",
        persist_directory=persist_directory   # Ensure persistence
    )
    doc_retriever_ESG = vs
    
    index = VectorStoreIndex.from_documents(llama_parse_documents)
    query_engine = index.as_query_engine()

    print('Vector DB created successfully !')
    return doc_retriever_ESG, query_engine

#--------------
legal_analysis_button_key = "legal_strategy_button"
#---------------
def delete_files_and_folders(folder_path):
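    # Walk bottom-up so files are deleted before their (then empty) parent directories.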
    for root, dirs, files in os.walk(folder_path, topdown=False):
        for file in files:
            try:
                os.unlink(os.path.join(root, file))
            except Exception as e:
                st.error(f"Error deleting {os.path.join(root, file)}: {e}")
        for dir in dirs:
            try:
                os.rmdir(os.path.join(root, dir))
            except Exception as e:
                st.error(f"Error deleting directory {os.path.join(root, dir)}: {e}")
#---------------

def save_uploaded_file(uploaded_file, folder):
    # Persist an uploaded file into the given document folder.
    with open(os.path.join(folder, uploaded_file.name), "wb") as f:
        f.write(uploaded_file.getbuffer())
    return st.success("Saved File:{} to {}".format(uploaded_file.name, folder))

if company_document:
    uploaded_files_company = st.sidebar.file_uploader("Choose company law documents", accept_multiple_files=True, key="company_files")
    for uploaded_file in uploaded_files_company:
        st.write("filename:", uploaded_file.name)
        save_uploaded_file(uploaded_file, "Corporate_Documents")

if financial_document:
    uploaded_files_financials = st.sidebar.file_uploader("Choose financial law documents", accept_multiple_files=True, key="financial_files")
    for uploaded_file in uploaded_files_financials:
        st.write("filename:", uploaded_file.name)
        save_uploaded_file(uploaded_file, "Financial_Documents")

if intercreditor_document:
    uploaded_files_intercreditor = st.sidebar.file_uploader("Choose intercreditor documents", accept_multiple_files=True, key="intercreditor_files")
    for uploaded_file in uploaded_files_intercreditor:
        st.write("filename:", uploaded_file.name)
        save_uploaded_file(uploaded_file, "Intercreditor_Documents")

if LPA_document:
    uploaded_files_LPA = st.sidebar.file_uploader("Choose LPA", accept_multiple_files=True, key="LPA_files")
    for uploaded_file in uploaded_files_LPA:
        st.write("filename:", uploaded_file.name)
        save_uploaded_file(uploaded_file, "LPA")

if ESG_document:
    uploaded_files_ESG = st.sidebar.file_uploader("Choose ESG document", accept_multiple_files=True, key="ESG_files")
    for uploaded_file in uploaded_files_ESG:
        st.write("filename:", uploaded_file.name)
        save_uploaded_file(uploaded_file, "ESG")
#---------------
def company_strategy():
    doc_retriever_company, query_engine = create_vector_database_company()
    doc_retriever_company = doc_retriever_company.as_retriever()

    prompt_template = """<|system|>
    You are a seasoned attorney specializing in company law and legal analysis. You write expert analyses for institutional investors. 
    Your answer should not exceed three paragraphs.
    The text should be technical legal text but easy to understand for a professional investor.
    Explain the actual contents of the clauses and sections relevant to the question.
    Include, at the end of the response, as a source the titles of the contract clauses from which the answer was obtained.
    Base your responses to the specific parts of the context document.<|end|>
    <|user|>
    Answer the {question} based on the information you find in context: {context} <|end|>
    <|assistant|>""" 

    prompt = PromptTemplate(template=prompt_template, input_variables=["question", "context"])
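
    # LCEL chain: send the question to the retriever to build the context and pass the
    # question through unchanged, then fill the prompt, call the LLM, parse to a string.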

    qa = (
    {
        "context": doc_retriever_company,
        "question": RunnablePassthrough(),
    }
    | prompt
    | llm
    | StrOutputParser()
)   

    #Corporate_answer_0 = qa.invoke("List the parties to the agreement and the business of the company. What categories of shares and shareholders are there? Are there conditions precedent to investment?")
    Corporate_answer_0 = query_engine.query("List the parties to the agreement and the business of the company. What categories of shares and shareholders are there? Are there conditions precedent to investment?")
    
    Corporate_answer_1 = qa.invoke("Describe the provisions governing nomination and removal of board members of the company?")

    Corporate_answer_2 = qa.invoke("Describe the company's share capital structure, including any provisions for different classes of shares and the rights attached to them. How are voting rights distributed among shareholders?")

    Corporate_answer_3 = qa.invoke("Summarize the procedures for decision-making in shareholder meetings and board meetings. Focus on decisions that require the approval of specific shareholders or a qualified majority.")

    Corporate_answer_4 = qa.invoke("Summarize the provisions governing sale of shares, possible redemption rights, drag along and tag along rights and other exit situations of the shareholders.")
    
    Corporate_answer_5 = qa.invoke("Explain how and in what capacity new shareholders are admitted to the company. Does this require shareholder or board approval?")

    Corporate_answer_6 = qa.invoke("What mechanisms are in place for resolving shareholder disputes? Provide details on any arbitration or mediation clauses found in the company's articles or shareholders' agreements.")


    corporate_output = f"**__The Parties:__** {Corporate_answer_0} \n\n **__Director Appointment and Removal:__** {Corporate_answer_1} \n\n **__Share Capital Structure and Voting Rights:__** {Corporate_answer_2} \n\n **__Corporate Decisions:__** {Corporate_answer_3} \n\n **__Transfer of Shares:__** {Corporate_answer_4} \n\n **__Adherence of new shareholders:__** {Corporate_answer_5} \n\n **__Dispute Resolution:__** {Corporate_answer_6}"
    
    financial_output = corporate_output
    
    with open("company_analysis.txt", 'w') as file:
        file.write(financial_output)
    
    return financial_output

def financial_strategy():
    doc_retriever_financial, query_engine = create_vector_database_financial()
    doc_retriever_financial = doc_retriever_financial.as_retriever()

    prompt_template = """<|system|> You are a seasoned attorney specializing in financial law and legal analysis. You write expert analyses for institutional investors. 
    Give only specific details and contract clauses about the provided documents.
    Your answer should not exceed three paragraphs. The maximum number of sentences is twenty.
    The text should be technical legal text but easy to understand for a professional investor.
    Divide the output into paragraphs.
    Explain the legal contents of the clauses and sections relevant to the question.
    Include the titles of the contract clauses from which the information was obtained as a reference. Do not refer to the document as a whole but to specific clauses.
    Use other knowledge to supplement the contract terms and conditions only if absolutely necessary.<|end|>
    <|user|>
    Answer the {question} based on the information you find in context: {context} <|end|>
    <|assistant|>""" 

    prompt = PromptTemplate(template=prompt_template, input_variables=["question", "context"])

    qa = (
    {
        "context": doc_retriever_financial,
        "question": RunnablePassthrough(),
    }
    | prompt
    | llm
    | StrOutputParser()
)   

    Financial_answer_0 = query_engine.query("Identify the parties involved in the loan, bond, or security agreements. What are the key obligations of the borrower or issuer under these agreements?")
    
    Financial_answer_1 = qa.invoke("Describe any financial covenants or ratios that must be maintained and the most important general covenants.")

    Financial_answer_3 = qa.invoke("What are the provisions governing events of default under the company's loan, bond, or security agreements? Include details on any cross-default or material adverse change clauses.")

    Financial_answer_4 = qa.invoke("Describe the rights of secured creditors under the security agreements. What types of collateral are secured, and what are the enforcement mechanisms in case of default?")

    Financial_answer_5 = qa.invoke("What acceleration clauses exist within the loan, bond, or security agreements? Under what conditions can creditors demand early repayment or terminate financing arrangements?")

    Financial_answer_6 = qa.invoke("Explain the procedures for enforcing security interests under the security agreements. How do the rights of secured creditors differ from those of unsecured creditors in such circumstances?")

    Financial_answer_7 = qa.invoke("How are decisions related to enforcement or restructuring prioritized among different classes of creditors under the loan, bond, or security agreements?")

    Financial_answer_8 = qa.invoke("Outline the company's obligations under any guarantees or indemnities provided to creditors in the loan, bond, or security agreements. Are there any limitations on the enforcement of these guarantees?")

    Financial_answer_9 = qa.invoke("What are the rights of bondholders or lenders under the bond issuance or loan agreements? How are creditor meetings conducted, and how can creditors exercise their rights in the event of default?")

    Financial_answer_10 = qa.invoke("What protections are in place for junior creditors or subordinated debt holders in the loan, bond, or security agreements? How are their rights affected in the event of enforcement or restructuring?")

    Financial_answer_11 = qa.invoke("What are the company's obligations to provide financial information to creditors under its loan, bond, or security agreements? How frequently must the company report, and what information is typically required?")


    financial_output = f"**__The parties and their key obligations:__** {Financial_answer_0} \n\n**__Borrower/Issuer Obligations and Covenants:__** {Financial_answer_1} \n\n **__Events of Default and Cross-Default Provisions:__** {Financial_answer_3} \n\n **__Rights of Secured Creditors and Enforcement of Security:__** {Financial_answer_4} \n\n **__Acceleration Clauses and Early Repayment Triggers:__** {Financial_answer_5} \n\n **__Enforcement of Security Interests:__** {Financial_answer_6} \n\n **__Intercreditor Decision-Making and Prioritization:__** {Financial_answer_7} \n\n **__Guarantees and Indemnities Obligations:__** {Financial_answer_8} \n\n **__Rights of Bondholders and Default Procedures:__** {Financial_answer_9} \n\n **__Protections for Junior Creditors:__** {Financial_answer_10} \n\n **__Financial Reporting Obligations to Creditors:__** {Financial_answer_11}"
        
    with open("financial_analysis.txt", 'w') as file:
        file.write(financial_output)

    return financial_output

def intercreditor_strategy():
    doc_retriever_intercreditor, query_engine = create_vector_database_intercreditor()
    doc_retriever_intercreditor = doc_retriever_intercreditor.as_retriever()
    
    prompt_template = """<|system|>
    "You are a seasoned attorney specializing in financial law and legal analysis.You write expert analyses for institutional investors. 
    Give only specific details and contract clauses about the provided documents.
    Your answer should not exceed three paragraphs. The maximum number of sentences is twenty.
    The text should be technical legal text but easy to understand for a professional investor.
    Divide the output into paragraphs.
    Explain the legal contents of the clauses and sections relevant to the question.
    Include the source of the answer, including the titles of the contract clauses from which the information was obtained as a reference.
    Use other knowledge to supplement the contract terms and conditions only if absolutely necessary.<|end|>
    <|user|>
    Answer the {question} based on the information you find in context: {context} <|end|>
    <|assistant|>"""

    prompt = PromptTemplate(template=prompt_template, input_variables=["question", "context"])

    qa = (
    {
        "context": doc_retriever_intercreditor,
        "question": RunnablePassthrough(),
    }
    | prompt
    | llm
    | StrOutputParser()
)   

    Intercreditor_answer_1 = query_engine.query("Specify the parties to the intercreditor agreement, and what are their key roles, including senior and subordinated creditors, and security trustees or security agents?")

    Intercreditor_answer_2 = qa.invoke("How is the ranking and priority of claims established among creditors under the intercreditor agreement? Describe the key clauses related to subordination and any waterfall or payment distribution.")

    Intercreditor_answer_3 = qa.invoke("How are enforcement actions managed under the intercreditor agreement? What contractual clauses regulate the appointment of a lead enforcement agent? What clauses govern the coordination between senior and junior creditors during enforcement? How do the intercreditor agreement provisions handle enforcement blockages or restrictions on junior creditors?")

    Intercreditor_answer_4 = qa.invoke("What are the standstill and turnover provisions under the agreement? Under what circumstances can subordinated or junior creditors be restricted from enforcing their rights, and when must they turn over proceeds to senior creditors?")

    Intercreditor_answer_5 = qa.invoke("How do the intercreditor agreement provisions handle payment blockages or restrictions on junior creditors? What are the specific terms concerning limitations on junior creditors in relation to payment receipt during the enforcement period?")
    
    Intercreditor_answer_6 = qa.invoke("What contractual dispute resolution mechanisms are established within the intercreditor agreement for resolving conflicts between senior and junior creditors, or other creditor groups?")

    Intercreditor_answer_7 = qa.invoke("How does the intercreditor agreement address the distribution of enforcement proceeds? What are the priority rules for distributing recoveries, and how are they applied among different classes of creditors?")

    Intercreditor_answer_8 = qa.invoke("What provisions govern amendments and waivers under the intercreditor agreement? How are decisions to amend key terms or waive rights made among the creditors, and what voting thresholds are required?")

    Intercreditor_answer_9 = qa.invoke("What limitations or restrictions are imposed on junior creditors in insolvency or restructuring proceedings under the intercreditor agreement? Are there any specific conditions that prevent junior creditors from exercising their rights independently?")

    Intercreditor_answer_10 = qa.invoke("What reporting or information-sharing obligations are outlined in the intercreditor agreement? How frequently must updates be provided, and what types of financial or operational information must be shared among creditor groups?")


    intercreditor_output = f"**__Parties and Roles under the Intercreditor Agreement:__** {Intercreditor_answer_1} \n\n**__Ranking and Priority of Claims:__** {Intercreditor_answer_2} \n\n**__Enforcement Actions and Coordination Procedures:__** {Intercreditor_answer_3} \n\n**__Standstill and Turnover Provisions:__** {Intercreditor_answer_4} \n\n**__Payment Blockages and Restrictions on Junior Creditors:__** {Intercreditor_answer_5} \n\n**__Dispute Resolution and Conflict Management:__** {Intercreditor_answer_6} \n\n**__Distribution of Proceeds and Priority Rules:__** {Intercreditor_answer_7} \n\n**__Amendments and Waivers:__** {Intercreditor_answer_8} \n\n**__Restrictions on Junior Creditors in Insolvency or Restructuring:__** {Intercreditor_answer_9} \n\n **__Information-Sharing and Reporting Obligations:__** {Intercreditor_answer_10}"
        
    with open("intercreditor_analysis.txt", 'w') as file:
        file.write(intercreditor_output)

    return intercreditor_output


def LPA_strategy():
    doc_retriever_LPA, query_engine = create_vector_database_LPA()
    doc_retriever_LPA = doc_retriever_LPA.as_retriever()
    
    prompt_template = """<|system|>
    "You are a seasoned attorney specializing in financial law and legal analysis.You write expert analyses for institutional investors. 
    Give only specific details and contract clauses about the provided documents.
    Your answer should not exceed three paragraphs. The maximum number of sentences is twenty.
    The text should be technical legal text but easy to understand for a professional investor.
    Divide the output into paragraphs. Quote the relevant part of the context text if needed.
    Explain the legal contents of the clauses and sections relevant to the question.
    Include the source of the answer, including the titles of the contract clauses from which the information was obtained as a reference.
    Use other knowledge to supplement the contract terms and conditions only if absolutely necessary.<|end|>
    <|user|>
    Answer the {question} based on the information you find in context: {context} <|end|>
    <|assistant|>"""

    prompt = PromptTemplate(template=prompt_template, input_variables=["question", "context"])

    qa = (
    {
        "context": doc_retriever_LPA,
        "question": RunnablePassthrough(),
    }
    | prompt
    | llm
    | StrOutputParser()
)   

    PE_answer_0 = query_engine.query("Who are the key parties to the Limited Partnership Agreement (LPA), such as the General Partner (GP), Limited Partners (LPs), and other relevant stakeholders?")

    PE_answer_1 = qa.invoke("What are the key obligations and responsibilities of the General Partner under the Limited Partnership Agreement? Include details on fiduciary duties, reporting obligations, and fund management responsibilities.")

    PE_answer_2 = qa.invoke("What are the key rights and restrictions of the Limited Partners under the Limited Partnership Agreement? Include details on withdrawal rights, transferability, and voting rights.")

    PE_answer_3 = qa.invoke("What are the management fees, carried interest arrangements, and other compensation structures outlined in the Limited Partnership Agreement or Fund Memorandum?")

    PE_answer_4 = qa.invoke("What provisions govern the investment restrictions and limitations under the Limited Partnership Agreement? Include details on diversification requirements, prohibited investments, and geographic or sectoral focus.")

    PE_answer_5 = qa.invoke("What are the provisions governing the distribution of profits and return of capital under the Limited Partnership Agreement? Include details on preferred returns, waterfalls, and clawback provisions.")

    PE_answer_6 = qa.invoke("What are the key risk factors and disclosures provided in the Fund Memorandum? Include details on market risk, liquidity risk, and conflicts of interest.")

    PE_answer_7 = qa.invoke("What are the provisions for resolving disputes among the General Partner, Limited Partners, or other stakeholders under the Limited Partnership Agreement?")

    PE_answer_8 = qa.invoke("What are the General Partner's rights and obligations in raising additional funds or successor funds? Include details on any restrictions or requirements under the Limited Partnership Agreement.")

    PE_answer_9 = qa.invoke("What are the reporting and disclosure obligations of the General Partner to the Limited Partners? Include details on financial reporting, capital account statements, and other periodic updates.")

    PE_answer_10 = qa.invoke("What are the terms and conditions for fund dissolution and winding up under the Limited Partnership Agreement? Include details on liquidation procedures and distribution priorities.")

    PE_answer_11 = qa.invoke("What provisions govern Limited Partner advisory committees or governance mechanisms within the fund? Include details on their powers, composition, and decision-making processes.")
    
    PE_answer_12 = qa.invoke("What key-man provisions does the Limited Partnership Agreement contain? Who are the specific persons obligated to manage the fund, and how much time must they devote to its management?")


    pe_financial_output = f"**__Key Parties and Stakeholders:__** {PE_answer_0} \n\n**__General Partner Obligations and Responsibilities:__** {PE_answer_1} \n\n**__Limited Partner Rights and Restrictions:__** {PE_answer_2} \n\n**__Fees, Carried Interest, and Compensation:__** {PE_answer_3} \n\n**__Investment Restrictions and Limitations:__** {PE_answer_4} \n\n**__Profit Distributions and Clawback Provisions:__** {PE_answer_5} \n\n**__Risk Factors and Disclosures:__** {PE_answer_6} \n\n**__Dispute Resolution Mechanisms:__** {PE_answer_7} \n\n**__Fundraising and Successor Fund Obligations:__** {PE_answer_8} \n\n**__General Partner Reporting Obligations:__** {PE_answer_9} \n\n**__Fund Dissolution and Winding Up:__** {PE_answer_10} \n\n**__Limited Partner Advisory Committees and Governance:__** {PE_answer_11} \n\n**__Key-Man Provisions:__** {PE_answer_12}"

        
    with open("LPA_analysis.txt", 'w') as file:
        file.write(pe_financial_output)

    return pe_financial_output

def ESG_strategy():
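    """Analyse the company's ESG documentation against eleven standard questions
    and write the combined result to ESG_analysis.txt."""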
    doc_retriever_ESG, query_engine = create_vector_database_ESG()
    doc_retriever_ESG = doc_retriever_ESG.as_retriever()
    
    prompt_template = """<|system|>
    You are a seasoned specialist in environmental, social and governance matters. 
    Always use figures, numerical and statistical data when possible. 
    Your answer should not exceed three paragraphs. The maximum number of sentences is twenty.
    The text should be technical text but easy to understand for a professional investor.
    Divide the output into paragraphs.
    Include the source of the answer, including the titles of the relevant documents from which the information was obtained, as a reference.
    Quote the relevant part of the context text if needed.
    Use other knowledge to supplement the documents only if absolutely necessary.<|end|>
    <|user|>
    Answer the {question} based on the information you find in context: {context} <|end|>
    <|assistant|>""" 

    prompt = PromptTemplate(template=prompt_template, input_variables=["question", "context"])

    qa = (
        {
            "context": doc_retriever_ESG,
            "question": RunnablePassthrough(),
        }
        | prompt
        | llm
        | StrOutputParser()
    )

    ESG_answer_1 = qa.invoke("Give a summary of the specific ESG measures the company has taken recently and compare these to best practices.")
    ESG_answer_2 = qa.invoke("Does the company's main business fall under the European Union's Taxonomy Regulation? Answer whether the company is taxonomy-compliant under the EU Taxonomy Regulation.")
    ESG_answer_3 = qa.invoke("Describe what specific ESG transparency commitments the company has given. Give details on how the company has followed the Paris Agreement's objective of limiting global warming to 1.5 degrees Celsius.")
    ESG_answer_4 = qa.invoke("Does the company have a carbon emissions reduction plan? Has the company reached its carbon dioxide reduction objectives? Set out in a table the company's carbon footprint by location and its development over time, or equivalent figures. List carbon dioxide emissions relative to turnover.")
    ESG_answer_5 = qa.invoke("Describe and set out in a table the following specific information: (i) Scope 1 CO2 emissions, (ii) Scope 2 CO2 emissions, and (iii) Scope 3 CO2 emissions of the company for 2021, 2022 and 2023. List the material changes relating to these figures.")
    ESG_answer_6 = qa.invoke("List in a table the company's energy and renewable energy usage for each material activity. Explain the main energy efficiency measures taken by the company.")
    ESG_answer_7 = qa.invoke("Does the company follow the UN Guiding Principles on Business and Human Rights, the ILO Declaration on Fundamental Principles and Rights at Work, or the OECD Guidelines for Multinational Enterprises, including with respect to affected communities?")
    ESG_answer_8 = qa.invoke("List the environmental permits and certifications held by the company. Set out and explain any environmental procedures, investigations, and decisions taken against the company. Answer whether the company's locations or operations are connected to areas sensitive in relation to biodiversity.")
    ESG_answer_9 = qa.invoke("Set out the waste produced by the company and its waste management procedures, including any discharges into the soil. Describe whether the company's real estate contains hazardous waste.")
    ESG_answer_10 = qa.invoke("What percentage of women is represented on (i) the board, (ii) the executive directors, and (iii) upper management? Set out the measures taken to achieve gender balance in the company's upper management.")
    ESG_answer_11 = qa.invoke("What policies has the company implemented to counter money laundering and corruption?")

    ESG_output = f"**__Summary of ESG reporting and obligations:__** {ESG_answer_1} \n\n **__Compliance with taxonomy:__** \n\n {ESG_answer_2} \n\n **__Disclosure transparency:__** \n\n {ESG_answer_3} \n\n **__Carbon footprint:__** \n\n {ESG_answer_4} \n\n **__Carbon dioxide emissions:__** \n\n {ESG_answer_5} \n\n **__Renewable energy:__** \n\n {ESG_answer_6} \n\n **__Human rights compliance:__** \n\n {ESG_answer_7} \n\n **__Environmental permits and biodiversity:__** \n\n {ESG_answer_8} \n\n **__Waste and other emissions:__** {ESG_answer_9} \n\n **__Gender equality:__** {ESG_answer_10} \n\n **__Anti-money laundering:__** {ESG_answer_11}"
    
    with open("ESG_analysis.txt", 'w') as file:
        file.write(ESG_output)
    
    return ESG_output

#-------------
@st.cache_data
def generate_strategy() -> str:
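    """Run the analysis that matches the uploaded document type; the checks are
    ordered, so a company document takes precedence over a financial one, and so on."""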
    strategic_output = ""

    # Check which document exists and assign the respective strategy output to strategic_output
    if company_document:
        strategic_output = company_strategy()
    elif financial_document:
        strategic_output = financial_strategy()
    elif intercreditor_document:
        strategic_output = intercreditor_strategy()
    elif LPA_document:
        strategic_output = LPA_strategy()
    elif ESG_document:
        strategic_output = ESG_strategy()

    # Set the combined result in a single session state key
    st.session_state.results["legal_analysis_button_key"] = strategic_output

    return strategic_output
#---------------

# Function to remove blank lines from the analysis text before PDF generation
def remove_paragraphs(input_file, output_file):
    """Strip blank (whitespace-only) lines from input_file and write the rest to output_file."""
    try:
        with open(input_file, 'r', encoding='utf-8') as infile, open(output_file, 'w', encoding='utf-8') as outfile:
            for line in infile:
                # Keep only lines that contain visible content
                if line.strip():
                    outfile.write(line)
    except Exception as e:
        print(f"Error processing {input_file}: {e}")

def create_pdf():
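    """Convert the first available *_analysis.txt file into Document_analysis.pdf,
    rendering **__heading__** and **bold** markers as headings and sub-headings."""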
    from fpdf import FPDF
    import os
    import re

    class PDF(FPDF):
        pass  # Add custom functionality here if needed

    # Define the possible files
    files = [
        "company_analysis.txt",
        "financial_analysis.txt",
        "intercreditor_analysis.txt",
        "LPA_analysis.txt",
        "ESG_analysis.txt"
    ]

    # Check which file exists and set input_file accordingly
    input_file = None  # Default to None, in case no file exists
    output_file = None  # Default to None, in case no file exists

    for file in files:
        if os.path.exists(file):
            input_file = file  # Set the input_file to the first file that exists
            output_file = 'legal_analysis_no_paragraphs.txt'  # Set output_file when input_file is found
            break  # Exit the loop once the first matching file is found

    if input_file and output_file:
        # Remove paragraphs from the selected file
        remove_paragraphs(input_file, output_file)

        # Create the PDF document
        pdf = PDF()
        pdf.add_page()
        pdf.set_margins(14, 14, 14)

        # Use the built-in "Helvetica" font
        pdf.set_font("Helvetica", size=14, style='B')

        # Title of the PDF
        pdf.cell(0, 10, txt="Structured Document Analysis", ln=2, align='C')
        pdf.ln(4)
        pdf.line(14, pdf.get_y(), 190, pdf.get_y())

        # Content
        pdf.set_font("Helvetica", size=11)

        # Define regex to match bold and heading patterns
        heading_pattern = r"\*\*__(.*?)__\*\*"  # Matches **__heading__**
        bold_pattern = r"\*\*(.*?)\*\*"  # Matches **bold text**
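        # Example: re.split keeps capturing-group matches, so a line like
        # "**__Title__** intro **bold** rest" splits into
        # ['', '**__Title__**', ' intro ', '**bold**', ' rest'].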

        try:
            with open(output_file, 'r', encoding='utf-8') as f:
                for line in f:
                    line = line.strip()

                    # Replace problematic Unicode characters
                    replacements = {
                        "₁": "1",  # Subscript 1
                        "₂": "2",  # Subscript 2
                        "₃": "3",  # Subscript 3
                        "✓": "Check",  # Checkmark
                        "€": "EUR",  # Euro symbol
                        "’": "'",  # Curly apostrophe
                        # Add more replacements as needed
                    }
                    for char, replacement in replacements.items():
                        line = line.replace(char, replacement)

                    # Split the line into parts and apply the correct formatting
                    parts = re.split(r'(\*\*__.*?__\*\*|\*\*.*?\*\*)', line)  # Split on headings or bolds
                    for part in parts:
                        if re.match(heading_pattern, part):  # If part is a heading
                            content = re.sub(heading_pattern, r'\1', part)  # Remove the **__ and __**
                            pdf.set_font("Helvetica", size=12, style='B')  # Larger font for heading
                            pdf.ln(1)
                            pdf.multi_cell(0, 5, txt=content, align='L')
                            pdf.ln(1)
                        elif re.match(bold_pattern, part):  # If part is bold text (convert to sub-heading)
                            content = re.sub(bold_pattern, r'\1', part)  # Remove ** for bold
                            # Use multi_cell for bold text as a sub-heading, it will wrap the text
                            pdf.set_font("Helvetica", size=11, style='B')  # Larger font for sub-heading
                            pdf.ln(1)
                            pdf.multi_cell(0, 5, txt=content, align='L')  # Multi-cell prevents overflow
                            pdf.ln(1)  # Add some space after the sub-heading
                        else:
                            # Regular Text
                            pdf.set_font("Helvetica", size=11)
                            pdf.ln(1)
                            pdf.multi_cell(0, 5, txt=part, align='L')
                            pdf.ln(1)

        except UnicodeEncodeError:
            # Note: the try block wraps the whole loop, so a failure aborts the
            # remaining content rather than skipping a single line
            print("UnicodeEncodeError: Some characters could not be encoded.")

        # Save the PDF
        output_pdf_path = "Document_analysis.pdf"
        pdf.output(output_pdf_path)
    else:
        print("No valid input file found.")
        # Handle the case where no valid file exists


#----------------
if 'results' not in st.session_state:
    st.session_state.results = {
        "legal_analysis_button_key": ""  # Holds the generated analysis text
    }

loaders = {'.pdf': PyMuPDFLoader,
           '.xml': UnstructuredXMLLoader,
           '.csv': CSVLoader,
           }

def create_directory_loader(file_type, directory_path):
    return DirectoryLoader(
        path=directory_path,
        glob=f"**/*{file_type}",
        loader_cls=loaders[file_type],
    )
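# Example usage (hypothetical folder): build a loader for every PDF under "data"
# and load the documents for downstream indexing:
#   pdf_loader = create_directory_loader('.pdf', "data")
#   documents = pdf_loader.load()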

#---------------
strategies_container = st.container()
with strategies_container:
    mrow1_col1, mrow1_col2 = st.columns(2)

    st.sidebar.info("To get started, please upload the documents from the company you would like to analyze.")
    button_container = st.sidebar.container()
    if os.path.exists("company_analysis.txt") or os.path.exists("financial_analysis.txt") or os.path.exists("intercreditor_analysis.txt") or os.path.exists("LPA_analysis.txt") or os.path.exists("ESG_analysis.txt"):
        create_pdf()
        with open("Document_analysis.pdf", "rb") as pdf_file:
            PDFbyte = pdf_file.read()

        st.sidebar.download_button(label="Download Analysis",
                    data=PDFbyte,
                    file_name="Document Analysis.pdf",
                    mime='application/pdf',
                    )

    if button_container.button("Clear All"):
        
        st.session_state.button_states = {
            "legal_analysis_button_key": False,
            "financial_analysis_button_key": False,
        }
        st.session_state.results = {}

        st.session_state['history'] = []
        st.session_state['generated'] = ["Let's discuss the company documents 🤗"]
        st.session_state['past'] = ["Hey! 👋"]
        st.cache_data.clear()
        st.cache_resource.clear()

        # List of files to delete
        files_to_delete = [
            "company_analysis.txt",
            "financial_analysis.txt",
            "intercreditor_analysis.txt",
            "LPA_analysis.txt",
            "ESG_analysis.txt"
        ]

        # Loop through each file and try to delete it
        for file_name in files_to_delete:
            if os.path.exists(file_name):
                try:
                    os.unlink(file_name)  # Delete the file
                    st.success(f"Successfully deleted {file_name}")
                except Exception as e:
                    st.error(f"Error deleting {file_name}: {e}")
            else:
                st.warning(f"{file_name} not found, skipping...")

        # Clear any uploaded source documents from their subfolders
        for folder in ["Corporate_Documents", "data", "Financial_Documents",
                       "Intercreditor_Documents", "LPA", "ESG"]:
            if os.path.exists(folder):
                for filename in os.listdir(folder):
                    file_path = os.path.join(folder, filename)
                    try:
                        if os.path.isfile(file_path):
                            os.unlink(file_path)
                    except Exception as e:
                        st.error(f"Error deleting {file_path}: {e}")

    with mrow1_col1:
        st.subheader("Asset Management Document Analysis")
        st.info("This tool is designed to provide a legal analysis of the documentation for  institutional investors.")
        
        button_container2 = st.container()
        if "button_states" not in st.session_state:
            st.session_state.button_states = {
            "legal_analysis_button_key": False,
            }
        
        if "results" not in st.session_state:
            st.session_state.results = {}

        if button_container2.button("Legal Analysis", key=legal_analysis_button_key):
            st.session_state.button_states[legal_analysis_button_key] = True
            result = generate_strategy()  # Run the analysis matching the uploaded documents
            st.session_state.results["legal_analysis_output"] = result
            
        if "legal_analysis_output" in st.session_state.results:           
            st.markdown(st.session_state.results["legal_analysis_output"])
        
        st.divider()
        
    with mrow1_col2:
        if "legal_analysis_button_key" in st.session_state.results and st.session_state.results["legal_analysis_button_key"]:
            
            run_id = str(uuid.uuid4())

            scratchpad = {
                "questions": [],  # list of type Question
                "answerpad": [],
            }

            embed_model = HuggingFaceEmbeddings()
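            # NOTE: HuggingFaceEmbeddings defaults to sentence-transformers/all-mpnet-base-v2;
            # this must match the model used when the Chroma collections below were persisted,
            # otherwise similarity search will silently degrade.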

            vs_company = Chroma(
                persist_directory="./chroma_db_company",  # Directory for persistent storage
                collection_name="rag_company",
                embedding_function=embed_model,
                )
            vs_financial = Chroma(
                persist_directory="./chroma_db_financial",  # Directory for persistent storage
                collection_name="rag_financial",
                embedding_function=embed_model,
                )
            vs_intercreditor = Chroma(
                persist_directory="./chroma_db_intercreditor",  # Directory for persistent storage
                collection_name="rag_intercreditor",
                embedding_function=embed_model,
                )
            vs_LPA = Chroma(
                persist_directory="./chroma_db_LPA",  # Directory for persistent storage
                collection_name="rag_LPA",
                embedding_function=embed_model,
                )

            vs_ESG = Chroma(
                persist_directory="./chroma_db_ESG",  # Directory for persistent storage
                collection_name="rag_ESG",
                embedding_function=embed_model,
                )

            if company_document:
                store = vs_company
            elif financial_document:
                store = vs_financial
            elif intercreditor_document:
                store = vs_intercreditor
            elif LPA_document:
                store = vs_LPA
            elif ESG_document:
                store = vs_ESG
            else:
                store = None
                
            agent_settings = {
                "max_iterations": 3,
                "num_atomistic_questions": 2,
                "num_questions_per_iteration": 4,
                "question_atomizer_temperature": 0,
                "question_creation_temperature": 0.4,
                "question_prioritisation_temperature": 0,
                "refine_answer_temperature": 0,
                "qa_temperature": 0,
                "analyser_temperature": 0,
                "intermediate_answers_length": 200,
                "answer_length": 500,
            }
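            # The settings above bound the custom Agent's research loop: at most 3 refinement
            # iterations, 4 candidate questions per round, temperature 0 for deterministic QA
            # steps, and target answer lengths (presumably tokens; the Agent class defines the units).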

            # Helper to flatten the (query, answer) chat history into a single string
            def format_chat_history(chat_history):
                """Format chat history, stored as (query, answer) tuples, as a single string."""
                formatted_history = "\n".join([f"User: {q}\nAI: {a}" for q, a in chat_history])
                return formatted_history
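            # e.g. [("What is the fund term?", "Ten years.")] ->
            # "User: What is the fund term?\nAI: Ten years."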

            # Initialize the agent memory; ConversationBufferMemory keeps the full history
            # (a sliding window of k turns would require ConversationBufferWindowMemory)
            memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
            agent = Agent(agent_settings, scratchpad, store, True)
            def conversational_chat(query):
                # Get the result from the agent
                agent.run({"input": query, "chat_history": st.session_state['history']})
                
                result = agent.get_latest_answer()

                # Handle different response types
                if isinstance(result, dict):
                    # Extract the main content if the result is a dictionary
                    result = result.get("output", "")  # Adjust the key as needed based on your agent's output
                elif isinstance(result, list):
                    # If the result is a list, join it into a single string
                    result = "\n".join(result)
                elif not isinstance(result, str):
                    # Convert the result to a string if it is not already one
                    result = str(result)
                
                # Add the query and the result to the session state
                st.session_state['history'].append((query, result))
                
                # Update memory with the conversation
                memory.save_context({"input": query}, {"output": result})
                
                # Return the result
                return result

            # Ensure session states are initialized
            if 'history' not in st.session_state:
                st.session_state['history'] = []

            if 'generated' not in st.session_state:
                st.session_state['generated'] = ["Let's discuss the legal and financial matters πŸ€—"]

            if 'past' not in st.session_state:
                st.session_state['past'] = ["Hey ! πŸ‘‹"]

            if 'input' not in st.session_state:
                st.session_state['input'] = ""

            # Streamlit layout
            st.subheader("Discuss the documentation")
            st.info("This document research assistant enables you to discuss about the legal documentation.")
            response_container = st.container()
            container = st.container()

            with container:
                with st.form(key='my_form'):
                    user_input = st.text_input("Query:", placeholder="What would you like to know about the documentation?", key='input')
                    submit_button = st.form_submit_button(label='Send')
                if submit_button and user_input:
                    output = conversational_chat(user_input)
                    st.session_state['past'].append(user_input)
                    st.session_state['generated'].append(output)
                    user_input = "Query:"
                #st.session_state['input'] = ""
            # Display generated responses
            if st.session_state['generated']:
                with response_container:
                    for i in range(len(st.session_state['generated'])):
                        message(st.session_state["past"][i], is_user=True, key=str(i) + '_user', avatar_style="shapes")
                        message(st.session_state["generated"][i], key=str(i), avatar_style="icons")