
Clelia Astra Bertelli

as-cle-bert

AI & ML interests

Biology + Artificial Intelligence = ❤️ | AI for sustainable development, sustainable development for AI | Researching Machine Learning enhancement | I love automating everyday things | Blogger | Open Source

Recent Activity

replied to their post about 2 hours ago
replied to their post 2 days ago
posted an update 3 days ago

Organizations

Social Post Explorers · Hugging Face Discord Community · GreenFit AI

as-cle-bert's activity

replied to their post about 2 hours ago

I am working on supporting other embedding models, and we will have that soon; for now, I had to limit compatibility to Sentence Transformers.
As for page numbers, I am also working toward better and more extensive metadata: everything is a big work in progress and will come in future releases!

replied to their post 2 days ago

So, there are two possibilities:

  • If you mean choosing among the embedders available within Sentence Transformers, that's entirely possible: you just have to change the embedding_model parameter when calling the ingest method
  • If you mean that you have your own embedding model (e.g. saved on your PC), that's a tad more difficult. I think Sentence Transformers may allow loading a model from your PC as long as it is compatible with the package; this guide might be useful in that regard

For now the package only supports Sentence Transformers models; in the future it will probably extend its support to other embedding models as well :)

posted an update 3 days ago
Ever dreamt of ingesting into a vector DB that pile of CSVs, Word documents and presentations lying in remote folders on your PC?🗂️
What if I told you that you can do it in three to six lines of code?🤯
Well, with my latest open-source project, ingest-anything (https://github.com/AstraBert/ingest-anything), you can take all your non-PDF files, convert them to PDF, extract their text, chunk, embed and load them into a vector database, all in one go!🚀
How? It's pretty simple!
📁 The input files are converted into PDF by PdfItDown (https://github.com/AstraBert/PdfItDown)
📑 The PDF text is extracted using LlamaIndex readers
🦛 The text is chunked with Chonkie
🧮 The chunks are embedded with Sentence Transformers models
🗄️ The embeddings are loaded into a Qdrant vector database

And you're done!✅
Curious to try it? Install it by running:

pip install ingest-anything

And you can start using it in your Python scripts!🐍
Don't forget to star it on GitHub and let me know if you have any feedback! ➡️ https://github.com/AstraBert/ingest-anything
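The chunk → embed → load flow described above can be sketched in plain Python. In this toy sketch, hash-based vectors stand in for Sentence Transformers embeddings and a list of dicts stands in for the Qdrant collection; none of these names are the package's real API:

```python
import hashlib
import math

def chunk(text: str, size: int = 200) -> list[str]:
    # Naive fixed-size chunking; ingest-anything delegates this step to Chonkie
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(piece: str, dim: int = 8) -> list[float]:
    # Toy deterministic "embedding"; the real pipeline uses Sentence Transformers
    digest = hashlib.sha256(piece.encode()).digest()
    vec = [b / 255 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # normalized so dot product = cosine similarity

def ingest(texts: list[str]) -> list[dict]:
    # Stand-in for upserting points into a Qdrant collection
    return [
        {"doc": doc_id, "text": piece, "vector": embed(piece)}
        for doc_id, text in enumerate(texts)
        for piece in chunk(text)
    ]

store = ingest(["a" * 450, "hello world"])  # 3 chunks + 1 chunk
```

The real package wires PdfItDown, Chonkie, Sentence Transformers and Qdrant into this same shape for you.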
replied to their post 6 days ago

Hey @T-2000 , you're absolutely right! I'm in the process of putting the application online, so for now the repo got a bit messy; tomorrow it will be clean and ready to be spun up locally as well. Sorry for the inconvenience!

posted an update 7 days ago
Finding a job that matches our resume shouldn't be difficult, especially now that we have AI... And still, we're drowning in unclear announcements, jobs whose skill requirements might not really fit us, and tons of material😵‍💫
That's why I decided to build Resume Matcher (https://github.com/AstraBert/resume-matcher), a fully open-source application that scans your resume and searches the web for jobs that match it!🎉
The workflow is very simple:
🦙 A LlamaExtract agent parses the resume and extracts valuable data that represents your profile
🗄️ The structured data is passed on to a Job Matching Agent (built with LlamaIndex😉) that uses it to build a web search query based on your resume
🌐 The web search is handled by Linkup, which finds the top matches and returns them to the agent
🔎 The agent evaluates the match between your profile and the jobs, and then returns a final answer to you

So, are you ready to find a job suitable for you?💼 You can spin up the application completely locally with Docker, starting from the GitHub repo ➡️ https://github.com/AstraBert/resume-matcher
Feel free to leave your feedback and let me know in the comments if you want an online version of Resume Matcher as well!✨
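The "structured data → web search query" step can be made concrete with a small sketch. The ResumeProfile schema and build_job_query helper below are hypothetical illustrations, not Resume Matcher's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class ResumeProfile:
    # Fields a LlamaExtract-style agent could pull out of a resume (illustrative schema)
    title: str
    skills: list[str] = field(default_factory=list)
    location: str = ""

def build_job_query(profile: ResumeProfile, max_skills: int = 3) -> str:
    # Turn the structured profile into a web search query for the Job Matching Agent
    parts = [f'"{profile.title}" jobs'] + profile.skills[:max_skills]
    if profile.location:
        parts.append(profile.location)
    return " ".join(parts)

profile = ResumeProfile("ML Engineer", ["Python", "PyTorch", "Docker", "K8s"], "Berlin")
query = build_job_query(profile)
```

The agent then hands a query like this to Linkup and scores the returned postings against the profile.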
replied to their post 17 days ago
posted an update 21 days ago
Llama-4 is out and I couldn't resist cooking something with it... So I came up with LlamaResearcher (https://llamaresearcher.com), your deep-research AI companion!🔎

The workflow behind LlamaResearcher is simple:
💬 You submit a query
🛡️ Your query is evaluated by a Llama Guard model, which deems it safe or unsafe
🧠 If your query is safe, it is routed to the Researcher Agent
⚙️ The Researcher Agent expands the query into three sub-queries with which to search the web
🌐 The web is searched for each of the sub-queries
📊 The retrieved information is evaluated for relevancy against your original query
✏️ The Researcher Agent produces an essay based on the information it gathered, paying attention to referencing its sources

The agent itself is also built with easy-to-use and intuitive blocks:
🦙 LlamaIndex provides the agentic architecture and the integrations with the language models
⚡ Groq makes Llama-4 available with its lightning-fast inference
🔎 Linkup allows the agent to deep-search the web and provides sourced answers
💪 FastAPI does the heavy lifting, wrapping everything within an elegant API interface
⏱️ Redis is used for API rate limiting
🎨 Gradio creates a simple but powerful user interface

Special mention also to Lovable, which helped me build the first draft of the landing page for LlamaResearcher!💖

If you're curious and want to try LlamaResearcher, you can, completely free and without subscription, for 30 days from now ➡️ https://llamaresearcher.com
And if you're like me, and you like getting your hands into code and building stuff on your own machine, I have good news: this is all open-source, fully reproducible locally and Docker-ready🐋
Just go to the GitHub repo: https://github.com/AstraBert/llama-4-researcher and don't forget to star it if you find it useful!⭐

As always, have fun and feel free to leave your feedback✨
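The guard → expand → search → filter → write pipeline above condenses into a small control-flow skeleton. The callables here are trivial stand-ins for the guard model, the Researcher Agent and the Linkup search, not the actual implementation:

```python
def research(query, *, is_safe, expand, search, is_relevant, write_essay):
    # Safety gate first: unsafe queries never reach the Researcher Agent
    if not is_safe(query):
        return "Query rejected by the guard model."
    sub_queries = expand(query)                     # e.g. three sub-queries
    results = [r for q in sub_queries for r in search(q)]
    evidence = [r for r in results if is_relevant(query, r)]  # relevancy filter
    return write_essay(query, evidence)

# Wiring it up with stubs, just to show the flow:
essay = research(
    "quantum computing",
    is_safe=lambda q: True,
    expand=lambda q: [f"{q} basics", f"{q} news"],
    search=lambda q: [f"result for {q}"],
    is_relevant=lambda q, r: True,
    write_essay=lambda q, ev: "; ".join(ev),
)
```

In the real app each stub is an LLM-backed step orchestrated by LlamaIndex.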
posted an update 25 days ago
I heard someone say voice assistants are the future, and someone else that MCP will rule the AI world... So I decided to combine both!🚀

Meet TySVA (TypeScript Voice Assistant, https://github.com/AstraBert/TySVA), your (speaking) AI companion for everyday TypeScript programming tasks!🎙️

TySVA is a skilled TypeScript expert and, to provide accurate and up-to-date responses, she leverages the following workflow:
🗣️ If you talk to her, she converts the audio into a textual prompt and uses it as a starting point to answer your questions (if you send a text message, she'll use that directly💬)
🧠 She can solve your questions by (deep)searching the web and/or by retrieving relevant information from a vector database containing TypeScript documentation. If the answer is simple, she can also reply directly (no tools needed!)
🛜 To ease her life, TySVA has all the tools she needs available through the Model Context Protocol (MCP)
🔊 Once she's done, she returns her answer to you, along with a voice summary of what she did and what solution she found
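The audio-or-text routing in that first step can be sketched like this; the stub callables replace the real ElevenLabs transcription, the LlamaIndex agent and the voice summary, so this is an illustration rather than TySVA's code:

```python
def handle_turn(payload, *, transcribe, solve, summarize):
    # Voice messages arrive as bytes and are transcribed first;
    # plain text messages are used directly as the prompt
    prompt = transcribe(payload) if isinstance(payload, bytes) else payload
    answer = solve(prompt)              # web search / vector DB / direct reply
    return answer, summarize(answer)    # full answer + short voice-summary script

answer, summary = handle_turn(
    b"\x00fake-audio",
    transcribe=lambda audio: "What is a TypeScript generic?",
    solve=lambda prompt: f"Answer to: {prompt}",
    summarize=lambda text: text[:20],
)
```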

But how does she do that? What are her components?🤨

📖 Qdrant + Hugging Face give her the documentation knowledge, providing the vector database and the embeddings
🌐 Linkup provides her with up-to-date, grounded answers, connecting her to the web
🦙 LlamaIndex makes up her brain, with the whole agentic architecture
🎤 ElevenLabs gives her ears and mouth, transcribing and producing voice inputs and outputs
📜 Groq provides her with speech, being the LLM provider behind TySVA
🎨 Gradio + FastAPI make up her face and fibers, providing a seamless backend-to-frontend integration

If you're now curious to try her, you can easily do that by spinning her up locally (and with Docker!🐋) from the GitHub repo ➡️ https://github.com/AstraBert/TySVA

And feel free to leave any feedback!✨
posted an update about 1 month ago
Drowning in handouts, documents and presentations from your professors and not knowing where to start?🌊😵‍💫
Well, I might have a tool for you: pdf2notes (https://github.com/AstraBert/pdf2notes) is an AI-powered, open-source solution that lets you turn your unstructured and chaotic PDFs into nice and well-ordered notes in a matter of seconds!📝

How does it work?
📄 You first upload a document
⚙️ LlamaParse by LlamaIndex extracts the text from the document, using DeepMind's Gemini 2 Flash to perform multi-modal parsing
🧠 Llama-3.3-70B by Groq turns the extracted text into notes!

Notes not perfect, or want more in-depth insights? No problem:
💬 Send a direct message to the chatbot
⚙️ The chatbot retrieves the chat history from a Postgres database
🧠 Llama-3.3-70B produces the answer you need
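That follow-up chat step — pull the history from Postgres, prepend it to the new message — boils down to prompt assembly. A minimal sketch (build_chat_prompt is a hypothetical helper, not pdf2notes' actual code):

```python
def build_chat_prompt(history: list[tuple[str, str]], message: str) -> str:
    # History rows as (role, content) pairs, as they might come back from the DB
    lines = [f"{role}: {content}" for role, content in history]
    lines.append(f"user: {message}")
    lines.append("assistant:")  # the LLM continues from here
    return "\n".join(lines)

prompt = build_chat_prompt(
    [("user", "Summarize chapter 1"), ("assistant", "Chapter 1 covers...")],
    "Can you expand on the second point?",
)
```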

All of this is nicely wrapped in a seamless backend-to-frontend framework powered by Gradio and FastAPI🎨

And you can even spin it up easily and locally, using Docker🐋

So, what are you waiting for? Go turn your hundreds of pages of chaotic learning material into neat and elegant notes ➡️ https://github.com/AstraBert/pdf2notes

And, if you would like an online demo, feel free to drop a comment - we'll see what we can build🚀
posted an update about 2 months ago
๐‘๐€๐†๐œ๐จ๐จ๐ง๐Ÿฆ - ๐€๐ ๐ž๐ง๐ญ๐ข๐œ ๐‘๐€๐† ๐ญ๐จ ๐ก๐ž๐ฅ๐ฉ ๐ฒ๐จ๐ฎ ๐›๐ฎ๐ข๐ฅ๐ ๐ฒ๐จ๐ฎ๐ซ ๐ฌ๐ญ๐š๐ซ๐ญ๐ฎ๐ฉ

GitHub ๐Ÿ‘‰ https://github.com/AstraBert/ragcoon

Are you building a startup and you're stuck in the process, trying to navigate hundreds of resources, suggestions and LinkedIn posts?๐Ÿ˜ถโ€๐ŸŒซ๏ธ
Well, fear no more, because ๐—ฅ๐—”๐—š๐—ฐ๐—ผ๐—ผ๐—ป๐Ÿฆ is here to do some of the job for you:

๐Ÿ“ƒ It's built on free resources written by successful founders
โš™๏ธ It performs complex retrieval operations, exploiting "vanilla" hybrid search, query expansion with an ๐—ต๐˜†๐—ฝ๐—ผ๐˜๐—ต๐—ฒ๐˜๐—ถ๐—ฐ๐—ฎ๐—น ๐—ฑ๐—ผ๐—ฐ๐˜‚๐—บ๐—ฒ๐—ป๐˜ approach and ๐—บ๐˜‚๐—น๐˜๐—ถ-๐˜€๐˜๐—ฒ๐—ฝ ๐—พ๐˜‚๐—ฒ๐—ฟ๐˜† ๐—ฑ๐—ฒ๐—ฐ๐—ผ๐—บ๐—ฝ๐—ผ๐˜€๐—ถ๐˜๐—ถ๐—ผ๐—ป
๐Ÿ“Š It evaluates the ๐—ฟ๐—ฒ๐—น๐—ถ๐—ฎ๐—ฏ๐—ถ๐—น๐—ถ๐˜๐˜† of the retrieved context, and the ๐—ฟ๐—ฒ๐—น๐—ฒ๐˜ƒ๐—ฎ๐—ป๐—ฐ๐˜† and ๐—ณ๐—ฎ๐—ถ๐˜๐—ต๐—ณ๐˜‚๐—น๐—ป๐—ฒ๐˜€๐˜€ of its own responses, in an auto-correction effort
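One common way to merge the dense and sparse rankings that hybrid search produces is Reciprocal Rank Fusion; RAGcoon's exact fusion strategy may differ, but the idea looks like this:

```python
def rrf_fuse(dense: list[str], sparse: list[str], k: int = 60) -> list[str]:
    # Reciprocal Rank Fusion: each ranking contributes 1/(k + rank) per document,
    # so documents that rank well in BOTH lists float to the top
    scores: dict[str, float] = {}
    for ranking in (dense, sparse):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf_fuse(dense=["a", "b"], sparse=["b", "c"])
```

Document "b" wins here because it appears in both rankings, which is exactly the behavior hybrid search wants.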

RAGcoon🦝 is open-source and relies on easy-to-use components:

🔹 LlamaIndex is at the core of the agent architecture, provides the integrations with language models and vector database services, and performs evaluations
🔹 Qdrant is your go-to, versatile and scalable companion for vector database services
🔹 Groq provides lightning-fast LLM inference to support the agent, giving it the full power of QwQ-32B by Qwen
🔹 Hugging Face provides the embedding models used for dense and sparse retrieval
🔹 FastAPI wraps the whole backend into an API interface
🔹 Mesop by Google is used to serve the application frontend

RAGcoon🦝 can be spun up locally - it's Docker-ready🐋 - and you can find the whole code to reproduce it on GitHub 👉 https://github.com/AstraBert/ragcoon

But there might be room for an online version of RAGcoon🦝: let me know if you would use it - we can connect and build it together!🚀
New activity in as-cle-bert/pdfitdown about 2 months ago

Update requirements.txt

#1 opened about 2 months ago by not-lain
posted an update about 2 months ago
I just released a fully automated evaluation framework for your RAG applications!📈

GitHub 👉 https://github.com/AstraBert/diRAGnosis
PyPI 👉 https://pypi.org/project/diragnosis/

It's called diRAGnosis and is a lightweight framework that helps you diagnose the performance of LLMs and retrieval models in RAG applications.

You can launch it as an application locally (it's Docker-ready!🐋) or, if you want more flexibility, you can integrate it into your code as a Python package📦

The workflow is simple:
🧠 You choose your favorite LLM provider and model (supported, for now: Mistral AI, Groq, Anthropic, OpenAI and Cohere)
🧠 You pick the embedding model provider and the embedding model you prefer (supported, for now: Mistral AI, Hugging Face, Cohere and OpenAI)
📄 You prepare and provide your documents
⚙️ Documents are ingested into a Qdrant vector database and transformed into a synthetic question dataset with the help of LlamaIndex
📊 The LLM is evaluated for the faithfulness and relevancy of its retrieval-augmented answers to the questions
📊 The embedding model is evaluated for hit rate and mean reciprocal rank (MRR) of the retrieved documents
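The two retrieval metrics in that last step are easy to pin down; here is a minimal sketch of how hit rate and MRR are typically computed (illustrative, not diRAGnosis' internal code):

```python
def hit_rate(retrieved: list[list[str]], expected: list[str]) -> float:
    # Fraction of queries for which the expected source appears in the results
    hits = sum(exp in docs for docs, exp in zip(retrieved, expected))
    return hits / len(expected)

def mrr(retrieved: list[list[str]], expected: list[str]) -> float:
    # Mean reciprocal rank: 1/position of the expected source, 0 if missing
    total = 0.0
    for docs, exp in zip(retrieved, expected):
        if exp in docs:
            total += 1.0 / (docs.index(exp) + 1)
    return total / len(expected)

# Two queries: the right doc is ranked 1st for the first, 2nd for the second
retrieved = [["doc_a", "doc_b"], ["doc_c", "doc_a"]]
expected = ["doc_a", "doc_a"]
```

Here hit rate is 1.0 (the right document is always retrieved) while MRR is 0.75, penalizing the second query for ranking it below an irrelevant one.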

And the cool thing is that all of this is intuitive and completely automated: you plug it in, and it works!🔌⚡

Even cooler? This is all built on top of LlamaIndex and its integrations: no need for tons of dependencies or fancy workarounds🦙
And if you're a UI lover, Gradio and FastAPI are there to provide you with a seamless backend-to-frontend experience🕶️

So now it's your turn: you can either get diRAGnosis from GitHub 👉 https://github.com/AstraBert/diRAGnosis
or just run a quick and painless:

uv pip install diragnosis

to get the package installed (lightning-fast) in your environment🏃‍♀️

Have fun and feel free to leave feedback and feature/integration requests on GitHub issues✨
commented on streamlit_supabase_auth_ui about 2 months ago

Hi there, just wanted to reach out here as well, so that people who see our conversation know that this feature has been integrated: you can now find it in v0.1.0 of the package, already installable via pip.
Have fun!

commented on streamlit_supabase_auth_ui 2 months ago

I did not specify any configuration, but I'm pretty sure we could play around with Supabase and set a login/logout status for the user (say the user last logged in at time X and last logged out at time Y: if Y > X, the user can log in again; otherwise they cannot).
If you want, I can put it in the roadmap for the next release of the package: in that case, please open an issue here: https://github.com/AstraBert/streamlit_supabase_auth_ui/issues so that I can add it to the milestone for v0.1.0 :)