awacke1 committed on
Commit
73e1438
·
verified ·
1 Parent(s): 5dd4a72

Update README.md

Browse files
Files changed (1)
  1. README.md +1173 -1
README.md CHANGED
@@ -378,4 +378,1176 @@ AIKnowledgeTreeBuilder is designed with the following tenets:
  📢 10. Writing clear documentation across the product lifecycle
  📢 11. Contributing to open-source libraries Transformers Datasets Accelerate
  📢 12. Communicating via GitHub forums or Slack
- 📢 13. Demonstrating creativity to make complex technology accessible
+ 📢 13. Demonstrating creativity to make complex technology accessible
+
+ -----
+
+ Let's create a Gradio demo app that spins up 9 ML agents to help with the aspects of ML development. First, the agent code should follow and demo all of the agent features in Transformers, while keeping the UI witty, emoji-filled, and humorous; use either Gradio or Streamlit, with an app.py plus a requirements.txt. Any documentation, such as a markdown outline of the functions and help or docs, goes in a README.md file, so there are always three files. Second, I have a knowledge tree program which already has a MoE. Can you please add the Transformers agents code to it? The Transformers Agents docs follow.
+
+ Agents
+ We provide two types of agents, based on the main Agent class:
+
+ CodeAgent acts in one shot, generating code to solve the task, then executes it at once.
+ ReactAgent acts step by step, each step consisting of one thought, then one tool call and execution. It has two classes:
+ ReactJsonAgent writes its tool calls in JSON.
+ ReactCodeAgent writes its tool calls in Python code.
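+
+ For orientation, a minimal sketch of instantiating each type (a sketch only; any callable can serve as llm_engine, and the add_base_tools choices below are assumptions):
+
+ from transformers.agents import CodeAgent, ReactJsonAgent, ReactCodeAgent
+
+ # One-shot agent: plans a single code block, then executes it at once.
+ one_shot = CodeAgent(tools=[])
+ # Step-by-step ReAct agents: tool calls written as JSON vs. as Python code.
+ json_agent = ReactJsonAgent(tools=[], add_base_tools=True)
+ code_agent = ReactCodeAgent(tools=[], add_base_tools=True)
+
+ one_shot.run("What is the result of 2 power 3.7384?")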
+ Agent
+ class transformers.Agent
+
+ ( tools: typing.Union[typing.List[transformers.agents.tools.Tool], transformers.agents.agents.Toolbox], llm_engine: typing.Callable = None, system_prompt: typing.Optional[str] = None, tool_description_template: typing.Optional[str] = None, additional_args: typing.Dict = {}, max_iterations: int = 6, tool_parser: typing.Optional[typing.Callable] = None, add_base_tools: bool = False, verbose: int = 0, grammar: typing.Optional[typing.Dict[str, str]] = None, managed_agents: typing.Optional[typing.List] = None, step_callbacks: typing.Optional[typing.List[typing.Callable]] = None, monitor_metrics: bool = True )
+
+ execute_tool_call
+
+ ( tool_name: str, arguments: typing.Dict[str, str] )
+
+ Parameters
+
+ tool_name (str) — Name of the Tool to execute (should be one from self.toolbox).
+ arguments (Dict[str, str]) — Arguments passed to the Tool.
+
+ Executes the tool with the provided input and returns the result. This method replaces arguments with the actual values from the state if they refer to state variables.
+
+ extract_action
+
+ ( llm_output: str, split_token: str )
+
+ Parameters
+
+ llm_output (str) — Output of the LLM.
+ split_token (str) — Separator for the action. Should match the example in the system prompt.
+
+ Parses the action from the LLM output.
+
+ run
+
+ ( **kwargs )
+
+ To be implemented in the child class.
+
+ write_inner_memory_from_logs
+
+ ( summary_mode: typing.Optional[bool] = False )
+
+ Reads past llm_outputs, actions, and observations or errors from the logs into a series of messages that can be used as input to the LLM.
+
+ CodeAgent
+ class transformers.CodeAgent
+
+ ( tools: typing.List[transformers.agents.tools.Tool], llm_engine: typing.Optional[typing.Callable] = None, system_prompt: typing.Optional[str] = None, tool_description_template: typing.Optional[str] = None, grammar: typing.Optional[typing.Dict[str, str]] = None, additional_authorized_imports: typing.Optional[typing.List[str]] = None, **kwargs )
+
+ A class for an agent that solves the given task using a single block of code. It plans all its actions, then executes them all in one shot.
+
+ parse_code_blob
+
+ ( result: str )
+
+ Override this method if you want to change the way the code is cleaned in the run method.
+
+ run
+
+ ( task: str, return_generated_code: bool = False, **kwargs )
+
+ Parameters
+
+ task (str) — The task to perform.
+ return_generated_code (bool, optional, defaults to False) — Whether to return the generated code instead of running it.
+ kwargs (additional keyword arguments, optional) — Any keyword argument to send to the agent when evaluating the code.
+
+ Runs the agent for the given task.
+
+ Example:
+
+ from transformers.agents import CodeAgent
+
+ agent = CodeAgent(tools=[])
+ agent.run("What is the result of 2 power 3.7384?")
+ React agents
+ class transformers.ReactAgent
+
+ ( tools: typing.List[transformers.agents.tools.Tool], llm_engine: typing.Optional[typing.Callable] = None, system_prompt: typing.Optional[str] = None, tool_description_template: typing.Optional[str] = None, grammar: typing.Optional[typing.Dict[str, str]] = None, plan_type: typing.Optional[str] = None, planning_interval: typing.Optional[int] = None, **kwargs )
+
+ This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent will perform a cycle of thinking and acting. The action will be parsed from the LLM output: it consists of calls to tools from the toolbox, with arguments chosen by the LLM engine.
+
+ direct_run
+
+ ( task: str )
+
+ Runs the agent in direct mode, returning outputs only at the end; it should be launched only from the run method.
+
+ planning_step
+
+ ( task, is_first_step: bool = False, iteration: int = None )
+
+ Parameters
+
+ task (str) — The task to perform.
+ is_first_step (bool) — If this step is not the first one, the plan should be an update over a previous plan.
+ iteration (int) — The number of the current step, used as an indication for the LLM.
+
+ Used periodically by the agent to plan the next steps to reach the objective.
+
+ provide_final_answer
+
+ ( task )
+
+ This method provides a final answer to the task, based on the logs of the agent’s interactions.
+
+ run
+
+ ( task: str, stream: bool = False, reset: bool = True, **kwargs )
+
+ Parameters
+
+ task (str) — The task to perform.
+
+ Runs the agent for the given task.
+
+ Example:
+
+ from transformers.agents import ReactCodeAgent
+ agent = ReactCodeAgent(tools=[])
+ agent.run("What is the result of 2 power 3.7384?")
+ stream_run
+
+ ( task: str )
+
+ Runs the agent in streaming mode, yielding steps as they are executed; it should be launched only from the run method.
+
+ class transformers.ReactJsonAgent
+
+ ( tools: typing.List[transformers.agents.tools.Tool], llm_engine: typing.Optional[typing.Callable] = None, system_prompt: typing.Optional[str] = None, tool_description_template: typing.Optional[str] = None, grammar: typing.Optional[typing.Dict[str, str]] = None, planning_interval: typing.Optional[int] = None, **kwargs )
+
+ This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent will perform a cycle of thinking and acting. The tool calls will be formulated by the LLM in JSON format, then parsed and executed.
+
+ step
+
+ ( log_entry: typing.Dict[str, typing.Any] )
+
+ Performs one step in the ReAct framework: the agent thinks, acts, and observes the result. Errors are raised here; they are caught and logged in the run() method.
+
+ class transformers.ReactCodeAgent
+
+ ( tools: typing.List[transformers.agents.tools.Tool], llm_engine: typing.Optional[typing.Callable] = None, system_prompt: typing.Optional[str] = None, tool_description_template: typing.Optional[str] = None, grammar: typing.Optional[typing.Dict[str, str]] = None, additional_authorized_imports: typing.Optional[typing.List[str]] = None, planning_interval: typing.Optional[int] = None, **kwargs )
+
+ This agent solves the given task step by step, using the ReAct framework: while the objective is not reached, the agent will perform a cycle of thinking and acting. The tool calls will be formulated by the LLM in code format, then parsed and executed.
+
+ step
+
+ ( log_entry: typing.Dict[str, typing.Any] )
+
+ Performs one step in the ReAct framework: the agent thinks, acts, and observes the result. Errors are raised here; they are caught and logged in the run() method.
+
+ ManagedAgent
+ class transformers.ManagedAgent
+
+ ( agent, name, description, additional_prompting = None, provide_run_summary = False )
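+
+ A ManagedAgent wraps an agent so that a manager agent can call it like a tool. A minimal sketch (the empty toolboxes here are placeholders):
+
+ from transformers.agents import ReactCodeAgent, ManagedAgent
+
+ web_agent = ReactCodeAgent(tools=[])
+ managed_web_agent = ManagedAgent(
+     agent=web_agent,
+     name="web_search",
+     description="Runs web searches for you. Give it your query as an argument.",
+ )
+ manager_agent = ReactCodeAgent(tools=[], managed_agents=[managed_web_agent])
+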
+ Tools
+ load_tool
+ transformers.load_tool
+
+ ( task_or_repo_id, model_repo_id = None, token = None, **kwargs )
+
+ Parameters
+
+ task_or_repo_id (str) — The task for which to load the tool, or a repo ID of a tool on the Hub. Tasks implemented in Transformers are:
+ "document_question_answering"
+ "image_question_answering"
+ "speech_to_text"
+ "text_to_speech"
+ "translation"
+ model_repo_id (str, optional) — Use this argument to use a different model than the default one for the tool you selected.
+ token (str, optional) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
+ kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir, revision, subfolder) will be used when downloading the files for your tool, and the others will be passed along to its init.
+
+ Main function to quickly load a tool, be it on the Hub or in the Transformers library.
+
+ Loading a tool means that you’ll download the tool and execute it locally. ALWAYS inspect the tool you’re downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.
+
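+ For example (a sketch; "translation" is one of the built-in tasks above, and the argument names are assumptions):
+
+ from transformers import load_tool
+
+ translator = load_tool("translation")
+ print(translator("Hugging Face est une entreprise.", src_lang="French", tgt_lang="English"))
+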
+ tool
+ transformers.tool
+
+ ( tool_function: typing.Callable )
+
+ Parameters
+
+ tool_function — Your function. It should have type hints for each input and a type hint for the output. It should also have a docstring description, including an "Args:" part where each argument is described.
+
+ Converts a function into an instance of a Tool subclass.
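+
+ A minimal sketch of a decorated function (the word_count tool itself is hypothetical):
+
+ from transformers import tool
+
+ @tool
+ def word_count(text: str) -> int:
+     """Counts the words in a text.
+
+     Args:
+         text: The text whose words should be counted.
+     """
+     return len(text.split())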
+
+ Tool
+ class transformers.Tool
+
+ ( *args, **kwargs )
+
+ A base class for the functions used by the agent. Subclass this and implement the __call__ method as well as the following class attributes:
+
+ description (str) — A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance ‘This is a tool that downloads a file from a url. It takes the url as input, and returns the text contained in the file’.
+ name (str) — A performative name that will be used for your tool in the prompt to the agent. For instance "text-classifier" or "image_generator".
+ inputs (Dict[str, Dict[str, Union[str, type]]]) — The dict of modalities expected for the inputs. It has one type key and one description key. This is used by launch_gradio_demo or to make a nice space from your tool, and also can be used in the generated description for your tool.
+ output_type (type) — The type of the tool output. This is used by launch_gradio_demo or to make a nice space from your tool, and also can be used in the generated description for your tool.
+
+ You can also override the method setup() if your tool has an expensive operation to perform before being usable (such as loading a model). setup() will be called the first time you use your tool, but not at instantiation.
+
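+ A minimal subclass sketch (the name, inputs, and behavior are illustrative, not from the docs):
+
+ import requests
+ from transformers import Tool
+
+ class URLDownloaderTool(Tool):
+     name = "url_downloader"
+     description = "This is a tool that downloads a file from a url. It takes the url as input, and returns the text contained in the file."
+     inputs = {"url": {"type": "string", "description": "The url of the file to download."}}
+     output_type = "string"
+
+     def __call__(self, url: str) -> str:
+         # Fetch the page and return its text content.
+         return requests.get(url, timeout=30).text
+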
+ from_gradio
+
+ ( gradio_tool )
+
+ Creates a Tool from a gradio tool.
+
+ from_hub
+
+ ( repo_id: str, token: typing.Optional[str] = None, **kwargs )
+
+ Parameters
+
+ repo_id (str) — The name of the repo on the Hub where your tool is defined.
+ token (str, optional) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
+ kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir, revision, subfolder) will be used when downloading the files for your tool, and the others will be passed along to its init.
+
+ Loads a tool defined on the Hub.
+
+ Loading a tool from the Hub means that you’ll download the tool and execute it locally. ALWAYS inspect the tool you’re downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.
+
+ from_langchain
+
+ ( langchain_tool )
+
+ Creates a Tool from a langchain tool.
+
+ from_space
+
+ ( space_id: str, name: str, description: str, api_name: typing.Optional[str] = None, token: typing.Optional[str] = None ) → Tool
+
+ Parameters
+
+ space_id (str) — The id of the Space on the Hub.
+ name (str) — The name of the tool.
+ description (str) — The description of the tool.
+ api_name (str, optional) — The specific api_name to use, if the space has several tabs. If not specified, will default to the first available api.
+ token (str, optional) — Add your token to access private spaces or increase your GPU quotas.
+
+ Returns
+
+ Tool
+
+ The Space, as a tool.
+
+ Creates a Tool from a Space given its id on the Hub.
+
+ Examples:
+
+ image_generator = Tool.from_space(
+     space_id="black-forest-labs/FLUX.1-schnell",
+     name="image-generator",
+     description="Generate an image from a prompt"
+ )
+ image = image_generator("Generate an image of a cool surfer in Tahiti")
+
+ face_swapper = Tool.from_space(
+     "tuan2308/face-swap",
+     "face_swapper",
+     "Tool that puts the face shown on the first image on the second image. You can give it paths to images.",
+ )
+ image = face_swapper('./aymeric.jpeg', './ruth.jpg')
+ push_to_hub
+
+ ( repo_id: str, commit_message: str = 'Upload tool', private: typing.Optional[bool] = None, token: typing.Union[bool, str, NoneType] = None, create_pr: bool = False )
+
+ Parameters
+
+ repo_id (str) — The name of the repository you want to push your tool to. It should contain your organization name when pushing to a given organization.
+ commit_message (str, optional, defaults to "Upload tool") — Message to commit while pushing.
+ private (bool, optional) — Whether to make the repo private. If None (default), the repo will be public unless the organization’s default is private. This value is ignored if the repo already exists.
+ token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
+ create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
+
+ Uploads the tool to the Hub.
+
+ For this method to work properly, your tool must have been defined in a separate module (not __main__).
+
+ For instance:
+
+ from my_tool_module import MyTool
+ my_tool = MyTool()
+ my_tool.push_to_hub("my-username/my-space")
+ save
+
+ ( output_dir )
+
+ Parameters
+
+ output_dir (str) — The folder in which you want to save your tool.
+
+ Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your tool in output_dir as well as autogenerate:
+
+ a config file named tool_config.json
+ an app.py file so that your tool can be converted to a space
+ a requirements.txt containing the names of the modules used by your tool (as detected when inspecting its code)
+
+ You should only use this method to save tools that are defined in a separate module (not __main__).
+
+ setup
+
+ ( )
+
+ Overwrite this method for any operation that is expensive and needs to be executed before you start using your tool, such as loading a big model.
+
+ Toolbox
+ class transformers.Toolbox
+
+ ( tools: typing.List[transformers.agents.tools.Tool], add_base_tools: bool = False )
+
+ Parameters
+
+ tools (List[Tool]) — The list of tools to instantiate the toolbox with.
+ add_base_tools (bool, optional, defaults to False) — Whether to add the tools available within transformers to the toolbox.
+
+ The toolbox contains all tools that the agent can perform operations with, as well as a few methods to manage them.
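+
+ A short usage sketch (word_count is the hypothetical decorated tool from the example above, and the import path is an assumption):
+
+ from transformers.agents import Toolbox
+
+ toolbox = Toolbox(tools=[word_count], add_base_tools=False)
+ print(toolbox.show_tool_descriptions())
+ toolbox.remove_tool("word_count")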
+
+ add_tool
+
+ ( tool: Tool )
+
+ Parameters
+
+ tool (Tool) — The tool to add to the toolbox.
+
+ Adds a tool to the toolbox.
+
+ clear_toolbox
+
+ ( )
+
+ Clears the toolbox.
+
+ remove_tool
+
+ ( tool_name: str )
+
+ Parameters
+
+ tool_name (str) — The tool to remove from the toolbox.
+
+ Removes a tool from the toolbox.
+
+ show_tool_descriptions
+
+ ( tool_description_template: str = None )
+
+ Parameters
+
+ tool_description_template (str, optional) — The template to use to describe the tools. If not provided, the default template will be used.
+
+ Returns the description of all tools in the toolbox.
+
+ update_tool
+
+ ( tool: Tool )
+
+ Parameters
+
+ tool (Tool) — The tool to update in the toolbox.
+
+ Updates a tool in the toolbox according to its name.
+
+ PipelineTool
+ class transformers.PipelineTool
+
+ ( model = None, pre_processor = None, post_processor = None, device = None, device_map = None, model_kwargs = None, token = None, **hub_kwargs )
+
+ Parameters
+
+ model (str or PreTrainedModel, optional) — The name of the checkpoint to use for the model, or the instantiated model. If unset, will default to the value of the class attribute default_checkpoint.
+ pre_processor (str or Any, optional) — The name of the checkpoint to use for the pre-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the value of model if unset.
+ post_processor (str or Any, optional) — The name of the checkpoint to use for the post-processor, or the instantiated post-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the pre_processor if unset.
+ device (int, str or torch.device, optional) — The device on which to execute the model. Will default to any accelerator available (GPU, MPS etc…), the CPU otherwise.
+ device_map (str or dict, optional) — If passed along, will be used to instantiate the model.
+ model_kwargs (dict, optional) — Any keyword argument to send to the model instantiation.
+ token (str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
+ hub_kwargs (additional keyword arguments, optional) — Any additional keyword argument to send to the methods that will load the data from the Hub.
+
+ A Tool tailored towards Transformer models. On top of the class attributes of the base class Tool, you will need to specify:
+
+ model_class (type) — The class to use to load the model in this tool.
+ default_checkpoint (str) — The default checkpoint that should be used when the user doesn’t specify one.
+ pre_processor_class (type, optional, defaults to AutoProcessor) — The class to use to load the pre-processor.
+ post_processor_class (type, optional, defaults to AutoProcessor) — The class to use to load the post-processor (when different from the pre-processor).
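+
+ A hedged subclass sketch (the checkpoint and tool definition are illustrative, not from the docs):
+
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PipelineTool
+
+ class SummarizerTool(PipelineTool):
+     name = "summarizer"
+     description = "Summarizes an English text. Takes the text as input and returns a shorter summary."
+     inputs = {"text": {"type": "string", "description": "The text to summarize."}}
+     output_type = "string"
+     model_class = AutoModelForSeq2SeqLM
+     pre_processor_class = AutoTokenizer
+     default_checkpoint = "sshleifer/distilbart-cnn-12-6"
+
+     def encode(self, text):
+         # Prepare model inputs with the pre-processor.
+         return self.pre_processor(text, return_tensors="pt", truncation=True)
+
+     def forward(self, inputs):
+         # Send the inputs through the model.
+         return self.model.generate(**inputs)
+
+     def decode(self, outputs):
+         # Decode the generated ids back to text with the post-processor.
+         return self.post_processor.decode(outputs[0], skip_special_tokens=True)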
+
+ decode
+
+ ( outputs )
+
+ Uses the post_processor to decode the model output.
+
+ encode
+
+ ( raw_inputs )
+
+ Uses the pre_processor to prepare the inputs for the model.
+
+ forward
+
+ ( inputs )
+
+ Sends the inputs through the model.
+
+ setup
+
+ ( )
+
+ Instantiates the pre_processor, model and post_processor if necessary.
+
+ launch_gradio_demo
+ transformers.launch_gradio_demo
+
+ ( tool_class: Tool )
+
+ Parameters
+
+ tool_class (type) — The class of the tool for which to launch the demo.
+
+ Launches a gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes inputs and output_type.
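+
+ For instance (reusing the hypothetical URLDownloaderTool subclass sketched earlier):
+
+ from transformers import launch_gradio_demo
+
+ launch_gradio_demo(URLDownloaderTool)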
+
+ stream_to_gradio
+ transformers.stream_to_gradio
+
+ ( agent, task: str, test_mode: bool = False, **kwargs )
+
+ Runs an agent with the given task and streams the messages from the agent as gradio ChatMessages.
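+
+ A hedged sketch of wiring this into a Gradio chat UI (the agent construction and UI layout here are assumptions):
+
+ import gradio as gr
+ from transformers import ReactCodeAgent, stream_to_gradio
+
+ agent = ReactCodeAgent(tools=[], add_base_tools=True)
+
+ def interact_with_agent(prompt, messages):
+     messages.append(gr.ChatMessage(role="user", content=prompt))
+     yield messages
+     # Stream each agent step into the chat as it happens.
+     for msg in stream_to_gradio(agent, task=prompt):
+         messages.append(msg)
+         yield messages
+
+ with gr.Blocks() as demo:
+     chatbot = gr.Chatbot(type="messages")
+     text_input = gr.Textbox(lines=1, label="Chat Message")
+     text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])
+
+ demo.launch()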
+
+ ToolCollection
+ class transformers.ToolCollection
+
+ ( collection_slug: str, token: typing.Optional[str] = None )
+
+ Parameters
+
+ collection_slug (str) — The collection slug referencing the collection.
+ token (str, optional) — The authentication token if the collection is private.
+
+ Tool collections enable loading all Spaces from a collection in order to be added to the agent’s toolbox.
+
+ [!NOTE] Only Spaces will be fetched, so you can feel free to add models and datasets to your collection if you’d like for this collection to showcase them.
+
+ Example:
+
+ from transformers import ToolCollection, ReactCodeAgent
+
+ image_tool_collection = ToolCollection(collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f")
+ agent = ReactCodeAgent(tools=[*image_tool_collection.tools], add_base_tools=True)
+
+ agent.run("Please draw me a picture of rivers and lakes.")
+ Engines
+ You’re free to create and use your own engines with the Agents framework. These engines have the following specification:
+
+ Follow the messages format for their input (List[Dict[str, str]]) and return a string.
+ Stop generating outputs before the sequences passed in the argument stop_sequences.
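+
+ A minimal custom engine sketch satisfying both points (the echo backend is a placeholder; substitute any real LLM call):
+
+ from typing import Dict, List, Optional
+
+ def my_llm_engine(messages: List[Dict[str, str]], stop_sequences: Optional[List[str]] = None) -> str:
+     # Call your backend of choice here; this stub just echoes the last user message.
+     answer = "dummy completion for: " + messages[-1]["content"]
+     # Honor stop_sequences by truncating at the first stop string found.
+     for stop in stop_sequences or []:
+         if stop in answer:
+             answer = answer.split(stop)[0]
+     return answer
+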
+ TransformersEngine
+ For convenience, we have added a TransformersEngine that implements the points above, taking a pre-initialized Pipeline as input.
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TransformersEngine
+
+ model_name = "HuggingFaceTB/SmolLM-135M-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+
+ engine = TransformersEngine(pipe)
+ engine([{"role": "user", "content": "Ok!"}], stop_sequences=["great"])
+
+ "What a "
+
+ class transformers.TransformersEngine
+
+ ( pipeline: Pipeline, model_id: typing.Optional[str] = None )
+
+ This engine uses a pre-initialized local text-generation pipeline.
+
+ HfApiEngine
+ The HfApiEngine is an engine that wraps an HF Inference API client for the execution of the LLM.
+
+ from transformers import HfApiEngine
+
+ messages = [
+     {"role": "user", "content": "Hello, how are you?"},
+     {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
+     {"role": "user", "content": "No need to help, take it easy."},
+ ]
+
+ HfApiEngine()(messages, stop_sequences=["conversation"])
+
+ "That's very kind of you to say! It's always nice to have a relaxed "
+
+ class transformers.HfApiEngine
+
+ ( model: str = 'meta-llama/Meta-Llama-3.1-8B-Instruct', token: typing.Optional[str] = None, max_tokens: typing.Optional[int] = 1500, timeout: typing.Optional[int] = 120 )
+
+ Parameters
+
+ model (str, optional, defaults to "meta-llama/Meta-Llama-3.1-8B-Instruct") — The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub.
+ token (str, optional) — Token used by the Hugging Face API for authentication. If not provided, the class will use the token stored in the Hugging Face CLI configuration.
+ max_tokens (int, optional, defaults to 1500) — The maximum number of tokens allowed in the output.
+ timeout (int, optional, defaults to 120) — Timeout for the API request, in seconds.
+
+ Raises
+
+ ValueError — If the model name is not provided.
+
+ A class to interact with Hugging Face’s Inference API for language model interaction.
+
+ This engine allows you to communicate with Hugging Face’s models using the Inference API. It can be used either in serverless mode or with a dedicated endpoint, supporting features like stop sequences and grammar customization.
+
+ Agent Types
+ Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, …), we implement wrapper classes around these types.
+
+ The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a PIL.Image.
+
+ These types have three specific purposes:
+
+ Calling to_raw on the type should return the underlying object.
+ Calling to_string on the type should return the object as a string: that can be the string itself in the case of an AgentText, but will be the path of the serialized version of the object in other instances.
+ Displaying it in an ipython kernel should display the object correctly.
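+
+ A small sketch of this contract (pil_img stands in for any PIL.Image you have on hand):
+
+ from PIL import Image
+ from transformers.agents.agent_types import AgentImage, AgentText
+
+ text = AgentText("hello")
+ assert text.to_raw() == "hello"     # the underlying object
+
+ pil_img = Image.new("RGB", (8, 8))
+ img = AgentImage(pil_img)
+ path = img.to_string()              # path to the serialized image on disk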
+
+ AgentText
+ class transformers.agents.agent_types.AgentText
+
+ ( value )
+
+ Text type returned by the agent. Behaves as a string.
+
+ AgentImage
+ class transformers.agents.agent_types.AgentImage
+
+ ( value )
+
+ Image type returned by the agent. Behaves as a PIL.Image.
+
+ save
+
+ ( output_bytes, format, **params )
+
+ Parameters
+
+ output_bytes (bytes) — The output bytes to save the image to.
+ format (str) — The format to use for the output image. The format is the same as in PIL.Image.save.
+ **params — Additional parameters to pass to PIL.Image.save.
+
+ Saves the image to a file.
+
+ to_raw
+
+ ( )
+
+ Returns the “raw” version of that object. In the case of an AgentImage, it is a PIL.Image.
+
+ to_string
+
+ ( )
+
+ Returns the stringified version of that object. In the case of an AgentImage, it is a path to the serialized version of the image.
+
+ AgentAudio
+ class transformers.agents.agent_types.AgentAudio
+
+ ( value, samplerate = 16000 )
+
+ Audio type returned by the agent.
+
+ to_raw
+
+ ( )
+
+ Returns the “raw” version of that object. It is a torch.Tensor object.
+
+ to_string
+
+ ( )
+
+ Returns the stringified version of that object. In the case of an AgentAudio, it is a path to the serialized version of the audio.
+
+ -----
+
+ Code for SynapTree, my Knowledge Tree Builder, to demo MoE and Agents:
+
+ import streamlit as st
+ import os
+ import glob
+ import re
+ import base64
+ import pytz
+ import time
+ import streamlit.components.v1 as components
+
+ from urllib.parse import quote
+ from gradio_client import Client
+ from datetime import datetime
+
+ # Page configuration
+ Site_Name = 'AI Knowledge Tree Builder 📈🌿 Grow Smarter with Every Click'
+ title = "🌳✨AI Knowledge Tree Builder🛠️🤓"
+ helpURL = 'https://huggingface.co/spaces/awacke1/AIKnowledgeTreeBuilder/'
+ bugURL = 'https://huggingface.co/spaces/awacke1/AIKnowledgeTreeBuilder/'
+ icons = '🌳✨🛠️🤓'
+
+ SidebarOutline = """🌳🤖 Designed with the following tenets:
+ 1. 📱 **Portability** - Universal access via any device & link sharing
+ 2. ⚡ **Speed of Build** - Rapid deployments < 2min to production
+ 3. 🔗 **Linkiness** - Programmatic access to AI knowledge sources
+ 4. 🎯 **Abstractive** - Core stays lean, isolating high-maintenance components
+ 5. 🧠 **Memory** - Shareable flows, deep-linked research paths
+ 6. 👤 **Personalized** - Rapidly adapts knowledge base to user needs
+ 7. 🐦 **Living Brevity** - Easily cloneable; self-modifying data with publicly shared results
+ """
+
+ st.set_page_config(
+     page_title=title,
+     page_icon=icons,
+     layout="wide",
+     initial_sidebar_state="auto",
+     menu_items={
+         'Get Help': helpURL,
+         'Report a bug': bugURL,
+         'About': title
+     }
+ )
+
+ st.sidebar.markdown(SidebarOutline)
+
+ # Initialize session state variables
+ if 'selected_file' not in st.session_state:
+     st.session_state.selected_file = None
+ if 'view_mode' not in st.session_state:
+     st.session_state.view_mode = 'view'
+ if 'files' not in st.session_state:
+     st.session_state.files = []
+
+ # --- MoE System Prompts Setup ---
+ moe_prompts_data = """1. Create a python streamlit app.py demonstrating the topic and show top 3 arxiv papers discussing this as reference.
+ 2. Create a python gradio app.py demonstrating the topic and show top 3 arxiv papers discussing this as reference.
+ 3. Create a mermaid model of the knowledge tree around concepts and parts of this topic. Use appropriate emojis.
+ 4. Create a top three list of tools and techniques for this topic with markdown and emojis.
+ 5. Create a specification in markdown outline with emojis for this topic.
+ 6. Create an image generation prompt for this with Bosch and Turner oil painting influences.
+ 7. Generate an image which describes this as a concept and area of study.
+ 8. List top ten glossary terms with emojis related to this topic as markdown outline."""
+ # Split the data by lines and remove the numbering/period (assume each line starts with "number. ")
+ moe_prompts_list = [line.split('. ', 1)[1].strip() for line in moe_prompts_data.splitlines() if '. ' in line]
+ moe_options = [""] + moe_prompts_list  # blank is default
+
+ # Place the selectbox at the top of the app; store the selection in session_state key "selected_moe"
+ selected_moe = st.selectbox("Choose a MoE system prompt", options=moe_options, index=0, key="selected_moe")
+
+ # --- Utility Functions ---
+
+ def get_display_name(filename):
+     """Extract text from parentheses or return filename as is."""
+     match = re.search(r'\((.*?)\)', filename)
+     if match:
+         return match.group(1)
+     return filename
+
+ def get_time_display(filename):
+     """Extract just the time portion from the filename."""
+     time_match = re.match(r'(\d{2}\d{2}[AP]M)', filename)
+     if time_match:
+         return time_match.group(1)
+     return filename
+
+ def sanitize_filename(text):
+     """Create a safe filename from text while preserving spaces."""
+     safe_text = re.sub(r'[^\w\s-]', ' ', text)
+     safe_text = re.sub(r'\s+', ' ', safe_text)
+     safe_text = safe_text.strip()
+     return safe_text[:50]
+
+ def generate_timestamp_filename(query):
+     """Generate a filename with the format: 1103AM 11032024 (Query).md"""
+     central = pytz.timezone('US/Central')
+     current_time = datetime.now(central)
+     time_str = current_time.strftime("%I%M%p")
+     date_str = current_time.strftime("%m%d%Y")
+     safe_query = sanitize_filename(query)
+     filename = f"{time_str} {date_str} ({safe_query}).md"
+     return filename
+
+ def delete_file(file_path):
+     """Delete a file and return success status."""
+     try:
+         os.remove(file_path)
+         return True
+     except Exception as e:
+         st.error(f"Error deleting file: {e}")
+         return False
+
+ def save_ai_interaction(query, ai_result, is_rerun=False):
+     """Save an AI interaction to a markdown file with the new filename format."""
+     filename = generate_timestamp_filename(query)
+     if is_rerun:
+         content = f"""# Rerun Query
+ Original file content used for rerun:
+
+ {query}
+
+ # AI Response (Fun Version)
+ {ai_result}
+ """
+     else:
+         content = f"""# Query: {query}
+
+ ## AI Response
+ {ai_result}
+ """
+     try:
+         with open(filename, 'w', encoding='utf-8') as f:
+             f.write(content)
+         return filename
+     except Exception as e:
+         st.error(f"Error saving file: {e}")
+         return None
+
+ def get_file_download_link(file_path):
+     """Generate a base64 download link for a file."""
+     try:
+         with open(file_path, 'r', encoding='utf-8') as f:
+             content = f.read()
+         b64 = base64.b64encode(content.encode()).decode()
+         filename = os.path.basename(file_path)
+         return f'<a href="data:text/markdown;base64,{b64}" download="{filename}">{get_display_name(filename)}</a>'
+     except Exception as e:
+         st.error(f"Error creating download link: {e}")
+         return None
+
+ # --- New Functions for Markdown File Parsing and Link Tree ---
+
+ def clean_item_text(line):
+     """
+     Remove emoji and numbered prefix from a line.
+     E.g., "🔧 1. Low-level system integrations compilers Cplusplus" becomes
+     "Low-level system integrations compilers Cplusplus".
+     Also remove any bold markdown markers.
+     """
+     # Remove leading emoji and number+period
+     cleaned = re.sub(r'^[^\w]*(\d+\.\s*)', '', line)
+     # Remove any remaining emoji (simple unicode range) and ** markers
+     cleaned = re.sub(r'[\U0001F300-\U0001FAFF]', '', cleaned)
+     cleaned = cleaned.replace("**", "")
+     return cleaned.strip()
+
+ def clean_header_text(header_line):
+     """
+     Extract header text from a markdown header line.
+     E.g., "🔧 **Systems, Infrastructure & Low-Level Engineering**" becomes
+     "Systems, Infrastructure & Low-Level Engineering".
+     """
+     match = re.search(r'\*\*(.*?)\*\*', header_line)
+     if match:
+         return match.group(1).strip()
+     return header_line.strip()
+
+ def parse_markdown_sections(md_text):
+     """
+     Parse markdown text into sections.
+     Each section starts with a header line containing bold text.
+     Returns a list of dicts with keys: 'header' and 'items' (list of lines).
+     Skips any content before the first header.
+     """
+     sections = []
+     current_section = None
+     lines = md_text.splitlines()
+     for line in lines:
+         if line.strip() == "":
+             continue
+         # Check if the line is a header (contains bold markdown and an emoji)
+         if '**' in line:
+             header = clean_header_text(line)
+             current_section = {'header': header, 'raw': line, 'items': []}
+             sections.append(current_section)
+         elif current_section is not None:
+             # Only add lines that appear to be list items (start with an emoji and number)
+             if re.match(r'^[^\w]*\d+\.\s+', line):
+                 current_section['items'].append(line)
+             else:
+                 if current_section['items']:
+                     current_section['items'][-1] += " " + line.strip()
+                 else:
+                     current_section['items'].append(line)
+     return sections
+
+ def display_section_items(items):
+     """
+     Display a list of items as links.
+     For each item, clean the text and generate search links using the original link set.
+     If a MoE system prompt is selected (non-blank), prepend it before the cleaned text.
+     """
+     # Retrieve the currently selected MoE prompt (if any)
+     moe_prefix = st.session_state.get("selected_moe", "")
+     search_urls = {
+         "📚📖ArXiv": lambda k: f"/?q={quote(k)}",
+         "🔮<sup>Google</sup>": lambda k: f"https://www.google.com/search?q={quote(k)}",
+         "📺<sup>Youtube</sup>": lambda k: f"https://www.youtube.com/results?search_query={quote(k)}",
+         "🔭<sup>Bing</sup>": lambda k: f"https://www.bing.com/search?q={quote(k)}",
+         "💡<sup>Claude</sup>": lambda k: f"https://claude.ai/new?q={quote(k)}",
+         "📱X": lambda k: f"https://twitter.com/search?q={quote(k)}",
+         "🤖<sup>GPT</sup>": lambda k: f"https://chatgpt.com/?model=o3-mini-high&q={quote(k)}",
+     }
+     for item in items:
+         cleaned_text = clean_item_text(item)
+         # If a MoE prompt is selected (non-blank), prepend it to the cleaned text.
+         final_query = (moe_prefix + " " if moe_prefix else "") + cleaned_text
+         links_md = ' '.join([f"[{emoji}]({url(final_query)})" for emoji, url in search_urls.items()])
+         st.markdown(f"- **{cleaned_text}** {links_md}", unsafe_allow_html=True)
+
+ def display_markdown_tree():
+     """
+     Allow the user to upload a .md file or load README.md.
+     Parse the markdown into sections and display each section in a collapsed expander
+     with the original markdown and a link tree of items.
+     """
+     st.markdown("## Markdown Tree Parser")
+     uploaded_file = st.file_uploader("Upload a Markdown file", type=["md"])
+     if uploaded_file is not None:
+         md_content = uploaded_file.read().decode("utf-8")
+     else:
+         if os.path.exists("README.md"):
+             with open("README.md", "r", encoding="utf-8") as f:
+                 md_content = f.read()
+         else:
+             st.info("No Markdown file uploaded and README.md not found.")
+             return
+
+     sections = parse_markdown_sections(md_content)
+     if not sections:
+         st.info("No sections found in the markdown file.")
+         return
+
+     for sec in sections:
+         with st.expander(sec['header'], expanded=False):
+             st.markdown(f"**Original Markdown:**\n\n{sec['raw']}\n")
+             if sec['items']:
+                 st.markdown("**Link Tree:**")
+                 display_section_items(sec['items'])
+             else:
+                 st.write("No items found in this section.")
+
+ # --- Existing AI and File Management Functions ---
+
+ def search_arxiv(query):
+     st.write("Performing AI Lookup...")
+     client = Client("awacke1/Arxiv-Paper-Search-And-QA-RAG-Pattern")
+     result1 = client.predict(
+         prompt=query,
+         llm_model_picked="mistralai/Mixtral-8x7B-Instruct-v0.1",
+         stream_outputs=True,
+         api_name="/ask_llm"
+     )
+     st.markdown("### Mixtral-8x7B-Instruct-v0.1 Result")
+     st.markdown(result1)
+     result2 = client.predict(
+         prompt=query,
+         llm_model_picked="mistralai/Mistral-7B-Instruct-v0.2",
+         stream_outputs=True,
+         api_name="/ask_llm"
+     )
+     st.markdown("### Mistral-7B-Instruct-v0.2 Result")
+     st.markdown(result2)
+     combined_result = f"{result1}\n\n{result2}"
+     return combined_result
+
+ @st.cache_resource
+ def SpeechSynthesis(result):
+     documentHTML5 = '''
+ <!DOCTYPE html>
+ <html>
+ <head>
+     <title>Read It Aloud</title>
+     <script type="text/javascript">
+         function readAloud() {
+             const text = document.getElementById("textArea").value;
+             const speech = new SpeechSynthesisUtterance(text);
+             window.speechSynthesis.speak(speech);
+         }
+     </script>
+ </head>
+ <body>
+     <h1>🔊 Read It Aloud</h1>
+     <textarea id="textArea" rows="10" cols="80">
+ '''
+     documentHTML5 += result
+     documentHTML5 += '''
+     </textarea>
+     <br>
+     <button onclick="readAloud()">🔊 Read Aloud</button>
+ </body>
+ </html>
+ '''
+     components.html(documentHTML5, width=1280, height=300)
+
+ def display_file_content(file_path):
+     """Display file content with editing capabilities."""
+     try:
+         with open(file_path, 'r', encoding='utf-8') as f:
+             content = f.read()
+         if st.session_state.view_mode == 'view':
+             st.markdown(content)
+         else:
+             edited_content = st.text_area(
+                 "Edit content",
+                 content,
+                 height=400,
+                 key=f"edit_{os.path.basename(file_path)}"
+             )
+             if st.button("Save Changes", key=f"save_{os.path.basename(file_path)}"):
+                 try:
+                     with open(file_path, 'w', encoding='utf-8') as f:
+                         f.write(edited_content)
+                     st.success(f"Successfully saved changes to {file_path}")
+                 except Exception as e:
+                     st.error(f"Error saving changes: {e}")
+     except Exception as e:
+         st.error(f"Error reading file: {e}")
+
+ def file_management_sidebar():
+     """Redesigned sidebar with improved layout and additional functionality."""
+     st.sidebar.title("📁 File Management")
+     md_files = [file for file in glob.glob("*.md") if file.lower() != 'readme.md']
+     md_files.sort()
+     st.session_state.files = md_files
+     if md_files:
+         st.sidebar.markdown("### Saved Files")
+         for idx, file in enumerate(md_files):
+             st.sidebar.markdown("---")
+             st.sidebar.text(get_time_display(file))
+             download_link = get_file_download_link(file)
+             if download_link:
+                 st.sidebar.markdown(download_link, unsafe_allow_html=True)
+             col1, col2, col3, col4 = st.sidebar.columns(4)
+             with col1:
+                 if st.button("📄View", key=f"view_{idx}"):
+                     st.session_state.selected_file = file
+                     st.session_state.view_mode = 'view'
+             with col2:
+                 if st.button("✏️Edit", key=f"edit_{idx}"):
+                     st.session_state.selected_file = file
+                     st.session_state.view_mode = 'edit'
+             with col3:
+                 if st.button("🔄Run", key=f"rerun_{idx}"):
+                     try:
+                         with open(file, 'r', encoding='utf-8') as f:
+                             content = f.read()
+                         rerun_prefix = """For the markdown below reduce the text to a humorous fun outline with emojis and markdown outline levels in outline that convey all the facts and adds wise quotes and funny statements to engage the reader:
+
+ """
+                         full_prompt = rerun_prefix + content
+                         ai_result = perform_ai_lookup(full_prompt)
+                         saved_file = save_ai_interaction(content, ai_result, is_rerun=True)
+                         if saved_file:
+                             st.success(f"Created fun version in {saved_file}")
+                             st.session_state.selected_file = saved_file
+                             st.session_state.view_mode = 'view'
+                     except Exception as e:
+                         st.error(f"Error during rerun: {e}")
+             with col4:
+                 if st.button("🗑️Delete", key=f"delete_{idx}"):
+                     if delete_file(file):
+                         st.success(f"Deleted {file}")
+                         st.rerun()
+                     else:
+                         st.error(f"Failed to delete {file}")
+         st.sidebar.markdown("---")
+         if st.sidebar.button("📝 Create New Note"):
+             filename = generate_timestamp_filename("New Note")
+             with open(filename, 'w', encoding='utf-8') as f:
+                 f.write("# New Markdown File\n")
+             st.sidebar.success(f"Created: {filename}")
+             st.session_state.selected_file = filename
+             st.session_state.view_mode = 'edit'
+     else:
+         st.sidebar.write("No markdown files found.")
+         if st.sidebar.button("📝 Create First Note"):
+             filename = generate_timestamp_filename("New Note")
+             with open(filename, 'w', encoding='utf-8') as f:
+                 f.write("# New Markdown File\n")
+             st.sidebar.success(f"Created: {filename}")
+             st.session_state.selected_file = filename
+             st.session_state.view_mode = 'edit'
+
+ def perform_ai_lookup(query):
+     start_time = time.strftime("%Y-%m-%d %H:%M:%S")
+     client = Client("awacke1/Arxiv-Paper-Search-And-QA-RAG-Pattern")
+     response1 = client.predict(
+         query,
+         20,
+         "Semantic Search",
+         "mistralai/Mixtral-8x7B-Instruct-v0.1",
+         api_name="/update_with_rag_md"
+     )
+     Question = '### 🔎 ' + query + '\r\n'
+     References = response1[0]
+     ReferenceLinks = ""
+     results = ""
+     # Initialize Answer so results can be built even if the second response is short.
+     Answer = ""
+     RunSecondQuery = True
+     if RunSecondQuery:
+         response2 = client.predict(
+             query,
+             "mistralai/Mixtral-8x7B-Instruct-v0.1",
+             True,
+             api_name="/ask_llm"
+         )
+         if len(response2) > 10:
+             Answer = response2
+             SpeechSynthesis(Answer)
+         results = Question + '\r\n' + Answer + '\r\n' + References + '\r\n' + ReferenceLinks
+         st.markdown(results)
+     st.write('🔍Run of Multi-Agent System Paper Summary Spec is Complete')
+     end_time = time.strftime("%Y-%m-%d %H:%M:%S")
+     start_timestamp = time.mktime(time.strptime(start_time, "%Y-%m-%d %H:%M:%S"))
+     end_timestamp = time.mktime(time.strptime(end_time, "%Y-%m-%d %H:%M:%S"))
+     elapsed_seconds = end_timestamp - start_timestamp
+     st.write(f"Start time: {start_time}")
+     st.write(f"Finish time: {end_time}")
+     st.write(f"Elapsed time: {elapsed_seconds:.2f} seconds")
+     filename = generate_filename(query, "md")
+     create_file(filename, query, results)
+     return results
+
+ def generate_filename(prompt, file_type):
+     central = pytz.timezone('US/Central')
+     safe_date_time = datetime.now(central).strftime("%m%d_%H%M")
+     safe_prompt = re.sub(r'\W+', '_', prompt)[:90]
+     return f"{safe_date_time}_{safe_prompt}.{file_type}"
+
+ def create_file(filename, prompt, response):
+     with open(filename, 'w', encoding='utf-8') as file:
+         file.write(prompt + "\n\n" + response)
+
1513
+ # --- Main Application ---
1514
+
1515
+ def main():
1516
+ st.markdown("### AI Knowledge Tree Builder 🧠🌱 Cultivate Your AI Mindscape!")
1517
+ query_params = st.query_params
1518
+ query = query_params.get('q', '')
1519
+ show_initial_content = True
1520
+
1521
+ if query:
1522
+ show_initial_content = False
1523
+ st.write(f"### Search query received: {query}")
1524
+ try:
1525
+ ai_result = perform_ai_lookup(query)
1526
+ saved_file = save_ai_interaction(query, ai_result)
1527
+ if saved_file:
1528
+ st.success(f"Saved interaction to {saved_file}")
1529
+ st.session_state.selected_file = saved_file
1530
+ st.session_state.view_mode = 'view'
1531
+ except Exception as e:
1532
+ st.error(f"Error during AI lookup: {e}")
1533
+
1534
+ file_management_sidebar()
1535
+
1536
+ if st.session_state.selected_file:
1537
+ show_initial_content = False
1538
+ if os.path.exists(st.session_state.selected_file):
1539
+ st.markdown(f"### Current File: {st.session_state.selected_file}")
1540
+ display_file_content(st.session_state.selected_file)
1541
+ else:
1542
+ st.error("Selected file no longer exists.")
1543
+ st.session_state.selected_file = None
1544
+ st.rerun()
1545
+
1546
+ if show_initial_content:
1547
+ display_markdown_tree()
1548
+
1549
+ if __name__ == "__main__":
1550
+ main()
1551
+
1552
+
1553
+
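+
+ A plausible requirements.txt for the app above, inferred from its imports (versions left unpinned as an assumption):
+
+ streamlit
+ pytz
+ gradio_client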