AI & ML interests

Hub documentation on datasets

datasets-examples's activity

albertvillanova posted an update 16 days ago
smolagents v1.14.0 is out! 🚀
🔌 MCPClient: A sleek new client for connecting to remote MCP servers, making integrations more flexible and scalable (see the sketch below).
🪨 Amazon Bedrock: Native support for Bedrock-hosted models.
smolagents is now more powerful, flexible, and enterprise-ready. 💼
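A hedged sketch of what wiring a remote MCP server into an agent can look like; the server URL is a placeholder and the exact constructor arguments may differ by version, so treat this as illustrative rather than the definitive API:

```python
# Illustrative sketch, not a verbatim example from the release notes.
# The server URL is a placeholder; check the smolagents docs for the exact
# MCPClient arguments in your installed version.
from smolagents import CodeAgent, InferenceClientModel, MCPClient

with MCPClient({"url": "http://127.0.0.1:8000/sse"}) as tools:
    agent = CodeAgent(tools=tools, model=InferenceClientModel())
    agent.run("Use the server's tools to answer: what data is available?")
```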

Full release 👉 https://github.com/huggingface/smolagents/releases/tag/v1.14.0
#smolagents #LLM #AgenticAI
severo posted an update 29 days ago
albertvillanova posted an update 2 months ago
🚀 New smolagents update: Safer Local Python Execution! 🦾🐍

With the latest release, we've added security checks to the local Python interpreter: every evaluation is now analyzed for dangerous builtins, modules, and functions. 🔒

Here's why this matters & what you need to know! 🧵👇

1️⃣ Why is local execution risky? ⚠️
AI agents that run arbitrary Python code can unintentionally (or maliciously) access system files, run unsafe commands, or exfiltrate data.

2️⃣ New Safety Layer in smolagents 🛡️
We now inspect every return value during execution (an illustrative sketch follows):
✅ Allowed: Safe built-in types (e.g., numbers, strings, lists)
⛔ Blocked: Dangerous functions/modules (e.g., os.system, subprocess, exec, shutil)
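To make the idea concrete, here is a minimal sketch of such a return-value check. Every name in it (validate_result, the blocklists, SAFE_TYPES) is hypothetical; this is not the actual smolagents implementation:

```python
# Hypothetical sketch of a return-value safety check; simplified on purpose.
import types

DANGEROUS_MODULES = {"os", "posix", "nt", "subprocess", "shutil", "socket"}  # assumed blocklist
DANGEROUS_BUILTINS = {"exec", "eval", "compile", "open"}                     # assumed blocklist
SAFE_TYPES = (int, float, complex, bool, str, bytes, list, tuple, dict, set, type(None))

def validate_result(value):
    """Raise if an evaluated value exposes a blocked module or callable."""
    if isinstance(value, types.ModuleType):
        if value.__name__.split(".")[0] in DANGEROUS_MODULES:
            raise ValueError(f"Blocked module: {value.__name__}")
    elif callable(value):
        name = getattr(value, "__name__", "")
        module = (getattr(value, "__module__", "") or "").split(".")[0]
        if name in DANGEROUS_BUILTINS or module in DANGEROUS_MODULES:
            raise ValueError(f"Blocked callable: {module}.{name}")
    elif not isinstance(value, SAFE_TYPES):
        raise ValueError(f"Unvetted return type: {type(value).__name__}")
    return value

validate_result([1, 2, 3])  # safe built-in type: passes
```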

3๏ธโƒฃ Immediate Benefits ๐Ÿ’ก
- Prevent agents from accessing unsafe builtins
- Block unauthorized file or network access
- Reduce accidental security vulnerabilities

4๏ธโƒฃ Security Disclaimer โš ๏ธ
๐Ÿšจ Despite these improvements, local Python execution is NEVER 100% safe. ๐Ÿšจ
If you need true isolation, use a remote sandboxed executor like Docker or E2B.

5๏ธโƒฃ The Best Practice: Use Sandboxed Execution ๐Ÿ”
For production-grade AI agents, we strongly recommend running code in a Docker or E2B sandbox to ensure complete isolation.

6๏ธโƒฃ Upgrade Now & Stay Safe! ๐Ÿš€
Check out the latest smolagents release and start building safer AI agents today.

🔗 https://github.com/huggingface/smolagents

What security measures do you take when running AI-generated code? Let's discuss! 👇

#AI #smolagents #Python #Security
albertvillanova posted an update 2 months ago
🚀 Big news for AI agents! With the latest release of smolagents, you can now securely execute Python code in sandboxed Docker or E2B environments. 🦾🔒

Here's why this is a game-changer for agent-based systems: 🧵👇

1️⃣ Security First 🔐
Running AI agents in unrestricted Python environments is risky! With sandboxing, your agents are isolated, preventing unintended file access, network abuse, or system modifications.

2️⃣ Deterministic & Reproducible Runs 📦
By running agents in containerized environments, you ensure that every execution happens in a controlled and predictable setting: no more environment mismatches or dependency issues!

3️⃣ Resource Control & Limits 🚦
Docker and E2B allow you to enforce CPU, memory, and execution time limits, so rogue or inefficient agents don't spiral out of control.
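For the Docker side, a hedged sketch of what enforcing such limits can look like with the Docker SDK for Python (pip install docker); the image and limit values are arbitrary examples:

```python
# Illustrative sketch: running code in an ephemeral, resource-capped container.
import docker

client = docker.from_env()
output = client.containers.run(
    "python:3.12-slim",                      # example image
    ["python", "-c", "print(2 ** 20)"],
    mem_limit="512m",                        # cap memory
    nano_cpus=1_000_000_000,                 # cap to ~1 CPU
    network_disabled=True,                   # block network access
    remove=True,                             # discard the container afterwards
)
print(output.decode())
```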

4๏ธโƒฃ Safer Code Execution in Production ๐Ÿญ
Deploy AI agents confidently, knowing that any generated code runs in an ephemeral, isolated environment, protecting your host machine and infrastructure.

5๏ธโƒฃ Easy to Integrate ๐Ÿ› ๏ธ
With smolagents, you can simply configure your agent to use Docker or E2B as its execution backendโ€”no need for complex security setups!
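As a rough sketch of that configuration (the executor_type argument reflects recent smolagents releases; older versions exposed different flags, so check the docs for your installed version):

```python
# Hedged sketch: opting into a sandboxed executor instead of local execution.
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),
    executor_type="docker",  # or "e2b" (requires an E2B API key)
)
agent.run("Compute the 20th Fibonacci number.")
```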

6๏ธโƒฃ Perfect for Autonomous AI Agents ๐Ÿค–
If your AI agents generate and execute code dynamically, this is a must-have to avoid security pitfalls while enabling advanced automation.

⚡ Get started now: https://github.com/huggingface/smolagents

What will you build with smolagents? Let us know! 🚀💡
albertvillanova posted an update 3 months ago
🚀 Introducing @huggingface Open Deep-Research 💥

In just 24 hours, we built an open-source agent that:
✅ Autonomously browses the web
✅ Searches, scrolls & extracts info
✅ Downloads & manipulates files
✅ Runs calculations on data

55% on the GAIA validation set! Help us improve it! 💡
https://huggingface.co/blog/open-deep-research
albertvillanova posted an update 4 months ago
lhoestq posted an update 5 months ago
Made an HF Dataset editor à la Google Sheets here: lhoestq/dataset-spreadsheets

With Dataset Spreadsheets:
✍️ Edit datasets in the UI
🔗 Share link with collaborators
🐍 Use locally in DuckDB or Python (see the sketch below)

Available for the 100,000+ parquet datasets on HF :)
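For the DuckDB route, a hedged sketch of what a local query can look like; the repo path is a placeholder, and hf:// paths require a reasonably recent DuckDB version:

```python
# Illustrative sketch: querying a Hub-hosted parquet dataset locally.
import duckdb

rows = duckdb.sql(
    "SELECT * FROM 'hf://datasets/username/my-dataset/**/*.parquet' LIMIT 10"
)
print(rows)
```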
albertvillanova posted an update 6 months ago
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
👉 open-llm-leaderboard/comparator
Now you can compare models not only by performance but also by their environmental footprint!

🌍 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
albertvillanova posted an update 6 months ago
🚀 New feature of the Comparator of the 🤗 Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

🛠️ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? 🏆 Try the 🤗 Open LLM Leaderboard Comparator now and see how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Check it out here: open-llm-leaderboard/comparator 🌐
asoria posted an update 6 months ago
🚀 Exploring Topic Modeling with BERTopic 🤖

When you come across an interesting dataset, you often wonder:
Which topics frequently appear in these documents? 🤔
What is this data really about? 📊

Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.

I've been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity makes BERTopic powerful, allowing you to swap components for your preferred algorithms. It also supports handling large datasets efficiently by merging models with the BERTopic.merge_models approach. 🔗

🔍 How do we make this work?
Here's the stack we're using (a rough sketch of the wiring follows the list):

📂 Data Source ➡️ Hugging Face datasets with DuckDB for retrieval
🧠 Text Embeddings ➡️ Sentence Transformers (all-MiniLM-L6-v2)
⚡ Dimensionality Reduction ➡️ RAPIDS cuML UMAP for GPU-accelerated performance
🔍 Clustering ➡️ RAPIDS cuML HDBSCAN for fast clustering
✂️ Tokenization ➡️ CountVectorizer
🔧 Representation Tuning ➡️ KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct
🌐 Visualization ➡️ Datamapplot library
Check out the space and see how you can quickly generate topics from your dataset: datasets-topics/topics-generator

Powered by @MaartenGr - BERTopic
albertvillanova posted an update 6 months ago
🚀 Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! 📊

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
albertvillanova posted an update 7 months ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids MATH performance loss! Why? Can they follow the format in examples? 📊 Compare models: open-llm-leaderboard/comparator
albertvillanova posted an update 7 months ago
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which #smolLM to run locally? With so many models available, how can you decide which one suits your needs best? 🤔

If the model you're interested in is evaluated on the Hugging Face Open LLM Leaderboard, there's an easy way to compare them: use the model Comparator tool: open-llm-leaderboard/comparator
Let's walk through an example 👇

Let's compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)

For an assistant, you want a model that's great at instruction following. So, how do these two models stack up on the IFEval task?

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller! 📊

This is a great example of how parameter count isn't everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models on certain tasks.

Looking for other comparisons? Drop your model suggestions below! 👇
albertvillanova posted an update 7 months ago
🚨 We've just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇

1/ Load the Models' Results
- Go to the 🤗 Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, color-coded with a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to hone in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions! ✅

4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR), then a Subtask (e.g., Murder Mystery), and press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model's outputs.

5/ With this tool, it's never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you're a researcher or an enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
asoria posted an update 7 months ago
๐Ÿ“ I wrote a tutorial on how to get started with the fine-tuning process using Hugging Face tools, providing an end-to-end workflow.

The tutorial covers creating a new dataset using the new SQL Console ๐Ÿ›ข and fine-tuning a model with SFT, guided by the Notebook Creator App ๐Ÿ“™.

๐Ÿ‘‰ You can read the full article here:
https://huggingface.co/blog/asoria/easy-fine-tuning-with-hf
asoria/auto-notebook-creator
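For a flavor of the SFT step the tutorial walks through, here is a hedged sketch using TRL; the model and dataset names are placeholders, not the ones from the article:

```python
# Illustrative sketch of supervised fine-tuning with TRL; names are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train[:500]")  # example data

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # small model so the demo runs quickly
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft", max_steps=50),
)
trainer.train()
```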
albertvillanova posted an update 8 months ago
asoria posted an update 8 months ago
🚀 Excited to share the latest update to the Notebook Creator Tool!

Now with basic fine-tuning support using Supervised Fine-Tuning! 🎯

How it works:
1️⃣ Choose your Hugging Face dataset and notebook type (SFT)
2️⃣ Automatically generate your training notebook
3️⃣ Start fine-tuning with your data!

Link to the app 👉 https://lnkd.in/e_3nmWrB
💡 Want to contribute with new notebooks? 👉 https://lnkd.in/eWcZ92dS
asoria posted an update 8 months ago
I've been working on a Space to make it super easy to create notebooks and help users quickly understand and manipulate their data!
With just a few clicks, automatically generate notebooks for:

📊 Exploratory Data Analysis
🧠 Text Embeddings
🤖 Retrieval-Augmented Generation (RAG)

✨ Automatic training is coming soon!
Check it out here: asoria/auto-notebook-creator
Appreciate any feedback to improve this tool 🤗