Hugging Face Smol Cluster
HFSmolCluster's activity
Post

Check if there's one in your city here: LeRobot-worldwide-hackathon/worldwide-map
Post
The meta-llama org just crossed 40,000 followers on Hugging Face. Grateful for all their impact on the field, sharing the Llama weights openly and much more!
We need more of this from all the other big tech companies to make AI more open, collaborative, and beneficial to all!

Post
Energy is a massive constraint for AI, but do you even know how much energy your ChatGPT conversations use?
We're trying to change this by releasing ChatUI-energy, the first interface where you can see in real time how much energy your AI conversations consume. Great work from @jdelavande, powered by Spaces & TGI, and available for a dozen open-source models including Llama, Mistral, Qwen, Gemma, and more.
jdelavande/chat-ui-energy
Should all chat interfaces have this? Just like ingredients have to be shown on products you buy, we need more transparency in AI for users!
Post
Just crossed half a million public apps on Hugging Face. A new public app is created every minute these days 🤯🤯🤯
What's your favorite? http://hf.co/spaces
Post
If you've followed the progress of robotics over the past 18 months, you've likely noticed that it is increasingly becoming the next frontier AI will unlock.
At Hugging Face—in robotics and across all AI fields—we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!
You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen community and people here on the Hub at pollen-robotics.
We're so excited to build and share more open-source robots with the world in the coming months!

thomwolf authored a paper · about 1 month ago
lewtun authored a paper · about 1 month ago
hlarcher authored a paper · about 1 month ago
nouamanetazi authored a paper · about 1 month ago
lvwerra authored a paper · about 1 month ago
loubnabnl authored a paper · about 1 month ago
thomwolf authored a paper · about 1 month ago
Post
Llama 4 is in transformers!
Fun example using the instruction-tuned Maverick model responding about two images, using tensor parallel for maximum speed.
From https://huggingface.co/blog/llama4-release
Post
Llama models (arguably the most successful open AI models of all time) represented just 3% of total model downloads on Hugging Face in March.
People and the media like stories of winner-takes-all and one model or company to rule them all, but the reality is much more nuanced!
Kudos to all the small AI builders out there!
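As a sanity check on how a share like this is computed, here is a toy calculation with made-up download counts (NOT real Hub statistics); only the arithmetic is the point:

```python
# Toy download counts, purely illustrative -- NOT real Hub statistics.
downloads = {
    "llama": 30_000_000,
    "qwen": 120_000_000,
    "mistral": 80_000_000,
    "everything-else": 770_000_000,
}

total = sum(downloads.values())           # 1,000,000,000 in this toy example
llama_share = downloads["llama"] / total  # 0.03
print(f"Llama share of downloads: {llama_share:.0%}")  # → 3%
```

Even with the most downloaded single family, the long tail of smaller models dominates the total.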
Post
Before 2020, most of the AI field was open and collaborative. For me, that was the key factor that accelerated scientific progress and made the impossible possible—just look at the “T” in ChatGPT, which comes from the Transformer architecture openly shared by Google.
Then came the myth that AI was too dangerous to share, and companies started optimizing for short-term revenue. That led many major AI labs and researchers to stop sharing and collaborating.
With OpenAI and Sam Altman now saying they're willing to share open weights again, we have a real chance to return to a golden age of AI progress and democratization—powered by openness and collaboration, in the US and around the world.
This is incredibly exciting. Let’s go, open science and open-source AI!
Post
‼️ huggingface_hub's v0.30.0 is out with our biggest update of the past two years!
Full release notes: https://github.com/huggingface/huggingface_hub/releases/tag/v0.30.0
🚀 Ready. Xet. Go!
Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates files, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by [xet-core](https://github.com/huggingface/xet-core), a Rust-based package that handles all the low-level details.
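To build intuition for why chunk-level deduplication matters, here is a toy sketch in Python. It is illustrative only and is not Xet's actual algorithm: it uses fixed-size chunks and a plain dict as the chunk store, whereas real systems like Xet use content-defined chunk boundaries so that edits don't shift every chunk that follows them.

```python
import hashlib

CHUNK_SIZE = 4  # absurdly small, just to make the example visible

def chunks(data: bytes) -> list[bytes]:
    """Split data into fixed-size chunks (real systems use content-defined boundaries)."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

store: dict[str, bytes] = {}  # chunk hash -> chunk bytes, each stored only once

def upload(data: bytes) -> list[str]:
    """'Upload' a file: store only unseen chunks, return the list of chunk hashes."""
    refs = []
    for c in chunks(data):
        h = hashlib.sha256(c).hexdigest()
        store.setdefault(h, c)  # duplicate chunks cost nothing extra
        refs.append(h)
    return refs

v1 = upload(b"AAAABBBBCCCC")  # 3 chunks stored
v2 = upload(b"AAAABBBBDDDD")  # shares 2 chunks with v1, so only 1 new chunk stored
print(len(store))  # → 4, not 6
```

With file-level deduplication (as in Git LFS), changing a few bytes of a file re-uploads and re-stores the whole file; at the chunk level, only the changed chunk is added.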
You can start using Xet today by installing the optional dependency:
pip install -U huggingface_hub[hf_xet]
With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.
Blog post: https://huggingface.co/blog/xet-on-the-hub
Docs: https://huggingface.co/docs/hub/en/storage-backends#xet
⚡ Inference Providers
- We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.
- Novita is now our third provider to support the text-to-video task, after Fal.ai and Replicate.
- Centralized billing: manage your budget and set team-wide spending limits for Inference Providers! Available to all Enterprise Hub organizations.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai", bill_to="my-cool-company")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
- No more timeouts when generating videos, thanks to async calls. Available right now for Fal.ai; we expect more providers to adopt the same structure very soon!