Mert Erbak PRO

merterbak

AI & ML interests

NLP and Image Processing

Organizations

Open-Source AI Meetup · MLX Community · Social Post Explorers · Hugging Face Discord Community · open/ acc · AI Starter Pack

merterbak's activity

reacted to their post with 🚀🔥 6 days ago
posted an update 6 days ago
Microsoft released their new fine-tuned Phi-4 models with reasoning data yesterday. They outperform or rival much larger models. Check them out if you haven't yet. 🚀

Phi-4 mini reasoning (SFT): microsoft/Phi-4-mini-reasoning
Phi-4 reasoning (SFT): microsoft/Phi-4-reasoning
Phi-4 reasoning plus (SFT + RL): microsoft/Phi-4-reasoning-plus
Demo: https://github.com/marketplace/models/azureml/Phi-4-reasoning/playground
Articles: https://arxiv.org/pdf/2504.21318
https://arxiv.org/pdf/2504.21233
Blog: https://azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/
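
If you want to poke at one of them locally, here is a minimal sketch using Transformers with the model ID from above. The prompt format and generation settings are my assumptions, not the official recipe; check the model card for specifics.

```python
# Minimal sketch: trying microsoft/Phi-4-reasoning with Transformers.
# Prompt format and generation settings are assumptions; see the model card for specifics.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many primes are there below 50?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048)  # reasoning models need room to think
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```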

reacted to clem's post with 🤗 6 days ago
The meta-llama org just crossed 40,000 followers on Hugging Face. Grateful for all their impact on the field, from sharing the Llama weights openly to much more!

We need more of this from all the other big tech companies to make AI more open, collaborative and beneficial to all!
reacted to AdinaY's post with 🔥 8 days ago
Xiaomi just entered open source as a new player 🔥 and dropped MiMo, a 7B model trained from scratch for reasoning.

XiaomiMiMo/MiMo-7B-RL

✨ 7B - Base/RL/SFT/RL zero
✨ Surpasses 32B models in math & code
✨ Apache 2.0 licensed
reacted to their post with 🚀🔥 9 days ago
posted an update 9 days ago
Qwen 3 models released 🔥
The release offers 2 MoE and 6 dense models with the following parameter sizes: 0.6B, 1.7B, 4B, 8B, 14B, 30B (MoE), 32B, and 235B (MoE).
Models: Qwen/qwen3-67dd247413f0e2e4f653967f
Blog: https://qwenlm.github.io/blog/qwen3/
Demo: Qwen/Qwen3-Demo
GitHub: https://github.com/QwenLM/Qwen3

✅ Pre-trained on 119 languages and dialects (36 trillion tokens), with strong translation and instruction-following abilities. (Qwen2.5 was pre-trained on 18 trillion tokens.)
✅ Qwen3 dense models match the performance of larger Qwen2.5 models. For example, Qwen3-1.7B/4B/8B/14B/32B perform like Qwen2.5-3B/7B/14B/32B/72B.
✅ Pre-training was done in three stages:
• Stage 1: General language learning and knowledge building.
• Stage 2: Reasoning boost with STEM, coding, and logic skills.
• Stage 3: Long-context training.
✅ MCP support built into the model
✅ Strong agent skills
✅ Supports seamless switching between thinking mode (for hard tasks like math and coding) and non-thinking mode (for fast chatting) inside the chat template; see the sketch after this list.
✅ Better human alignment for creative writing, roleplay, multi-turn conversations, and following detailed instructions.
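
The mode switch happens at the chat-template level. A minimal sketch with Transformers is below; the enable_thinking flag follows the Qwen3 model card, and the specific model size and prompt here are just my example choices.

```python
# Minimal sketch: toggling Qwen3 between thinking and non-thinking mode via the chat template.
# The enable_thinking flag follows the Qwen3 model card; model ID and prompt are example choices.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # one of the dense sizes listed above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Solve: 12 * 17 - 5"}]

# Thinking mode: the template lets the model reason before the final answer.
thinking_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, enable_thinking=True, return_tensors="pt"
).to(model.device)

# Non-thinking mode: faster, direct chat-style replies.
fast_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, enable_thinking=False, return_tensors="pt"
).to(model.device)

outputs = model.generate(thinking_inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```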
reacted to julien-c's post with 🔥 13 days ago
BOOOOM: Today I'm dropping TINY AGENTS

the 50-lines-of-code Agent in JavaScript 🔥

I spent the last few weeks working on this, so I hope you will like it.

I've been diving into MCP (Model Context Protocol) to understand what the hype was all about.

It is fairly simple, but still quite powerful: MCP is a standard API to expose sets of Tools that can be hooked to LLMs.

But while doing that, came my second realization:

Once you have an MCP Client, an Agent is literally just a while loop on top of it. 🤯

➡️ read it exclusively on the official HF blog: https://huggingface.co/blog/tiny-agents
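
To make the "agent is just a while loop" point concrete, here is a rough Python sketch of the idea. The llm and mcp_client objects and their methods are hypothetical placeholders, not the actual tiny-agents API (which is JavaScript; see the blog post).

```python
# Conceptual sketch of "an Agent is a while loop on top of an MCP client".
# `llm` and `mcp_client` are hypothetical placeholders, not a real library API.
def run_agent(llm, mcp_client, user_message, max_turns=10):
    tools = mcp_client.list_tools()              # MCP exposes a standard set of tools
    messages = [{"role": "user", "content": user_message}]

    for _ in range(max_turns):                   # the "while loop" (bounded here for safety)
        reply = llm.chat(messages, tools=tools)  # ask the model, letting it request tool calls
        messages.append(reply)

        if not reply.get("tool_calls"):          # no tool call => the model answered, we're done
            return reply["content"]

        for call in reply["tool_calls"]:         # execute each requested tool through MCP
            result = mcp_client.call_tool(call["name"], call["arguments"])
            messages.append({"role": "tool", "name": call["name"], "content": result})

    return "Stopped after max_turns."
```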
posted an update 13 days ago
FlowReasoner is a new system that builds a custom set of small AI agents for every user question. Unlike search-based methods, it uses reasoning-driven optimization with external execution feedback.

✅ First, it distills reasoning data using DeepSeek-R1-671B to build multi-agent systems. 🤖
✅ Then, that reasoning data is used to fine-tune DeepSeek-R1-Distill-Qwen-7B (supervised fine-tuning) for basic reasoning skills. 💡
✅ Finally, RL with GRPO (which optimizes by comparing groups of responses to the same query/task) improves reasoning further; see the sketch below.

FlowReasoner: Reinforcing Query-Level Meta-Agents (2504.15257)
Code: https://github.com/sail-sg/flowreasoner
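
The group-comparison idea behind GRPO is simple: sample several responses for the same query and score each one against its own group, so no separate value model is needed. A minimal sketch (the reward numbers are made up for illustration):

```python
# Minimal sketch of GRPO's group-relative advantage: each sampled response is scored
# against the mean/std of its own group. Rewards below are made-up illustration values.
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    mu, sigma = mean(rewards), stdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

rewards_for_one_query = [0.2, 0.9, 0.4, 0.7]  # e.g., correctness scores of 4 sampled answers
print(group_relative_advantages(rewards_for_one_query))
# Responses above the group mean get positive advantages and are reinforced.
```
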
reacted to fdaudens's post with 🔥 14 days ago
reacted to meg's post with 🔥 15 days ago
reacted to clem's post with 🔥 15 days ago
Energy is a massive constraint for AI, but do you even know how much energy your ChatGPT convos are using?

We're trying to change this by releasing ChatUI-energy, the first interface where you see in real time what energy your AI conversations consume. Great work from @jdelavande, powered by Spaces & TGI, and available for a dozen open-source models like Llama, Mistral, Qwen, Gemma and more.

jdelavande/chat-ui-energy

Should all chat interfaces have this? Just like ingredients have to be shown on products you buy, we need more transparency in AI for users!
reacted to their post with 🚀👀🔥 19 days ago
posted an update 19 days ago
Here’s a cool paper I found: “Massive Image Embedding Benchmark (MIEB).” It's a new benchmark for testing how good image embedding models are. It has 130 different tasks grouped into 8 categories, like image search, classification, clustering similar images, answering questions based on images, and understanding documents. It even covers 38 different languages.

The authors tested 50 models and found that no single model was best at everything. Some models were great at recognizing text inside images but struggled to handle complicated tasks like matching images and text that appear together.

Paper: https://arxiv.org/pdf/2504.10471v1
Code: https://github.com/embeddings-benchmark/mteb
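
MIEB lives in the mteb package linked above. A rough sketch of the usual mteb evaluation flow is below; the placeholder model and task name are my assumptions, so swap in the actual MIEB image tasks and an image-capable model per the repo's docs.

```python
# Rough sketch of the mteb evaluation flow; MIEB task names and image-model support
# should be checked against the repo's documentation (this is an assumption-based example).
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder model
tasks = mteb.get_tasks(tasks=["STS12"])  # placeholder task; use MIEB image tasks per the docs
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```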
reacted to their post with 🚀 23 days ago
OpenAI published 2 benchmark datasets on Hugging Face 🔥
openai/mrcr
openai/graphwalks
MRCR tests how well a model can find the right answer when many similar questions are spread out in a long context. Graphwalks checks if a model can follow steps in a big graph and find the correct nodes by thinking through the structure.
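
A hedged sketch of pulling these datasets with the datasets library; the dataset IDs come from the post, but split and configuration names are assumptions worth checking on the dataset pages.

```python
# Sketch: loading OpenAI's benchmark datasets from the Hub with the `datasets` library.
# Split and configuration names are assumptions; check the dataset cards.
from datasets import load_dataset

mrcr = load_dataset("openai/mrcr")              # long-context retrieval among near-duplicates
graphwalks = load_dataset("openai/graphwalks")  # multi-step traversal over large graphs

print(mrcr)        # inspect available splits and columns
print(graphwalks)
```
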
reacted to thomwolf's post with 🚀 23 days ago
If you've followed the progress of robotics in the past 18 months, you've likely noticed how robotics is increasingly becoming the next frontier that AI will unlock.

At Hugging Face—in robotics and across all AI fields—we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!

You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen community and people here on the Hub at pollen-robotics.

We're so excited to build and share more open-source robots with the world in the coming months!
reacted to their post with 🔥 23 days ago