
Samuel L Meyers

MrOvkill

AI & ML interests

Dialogue Generation, Text Generation, etc...


Organizations

Digital Clockwork, Social Post Explorers, Hugging Face Discord Community, None yet

MrOvkill's activity

replied to ProCreations's post 4 days ago

Well, personally I'm deeply torn on the subject.
On one hand, you've got Google claiming their liquid-nitrogen baby can finally perform "stable enough" quantum error correction to begin working on actual math problems and take advantage of the immense cross-connectivity and data bandwidth. And they're absolutely correct that if the QEC is good enough, it can obviously enhance the speed of an LLM.

On the other hand... I absolutely adore the fact that these tools are SO open and SO portable that I can quite literally create an entire AI model myself from scratch on my desktop, if I want to and have lots of time. My biggest concern is that very few, if any, private citizens are going to be capable of maintaining a liquid-nitrogen-cooled quantum mainframe in their basement. It's not just the stereotypical "nerd virtual girlfriend" type of use I'm concerned about, either. How many datasets on this very website would have been utterly impossible if everyone had to queue up for supercomputer time every. Single. Training. Loop?

So naturally I'm highly concerned, as we all should be, that relying on quantum computing for anything other than the most onerous, resource-hogging workloads will end up dooming hobbyist AI and causing us quite a few problems down the road, once corporations realize the sheer scope of psychological warfare they can inflict on their customers at will to make them more profitable and helpless.

TL;DR: quantum AI has lots of potential for good and for bad, but we all, as an open-source-first community, must focus on what we can improve, maintain, and sustain with our own equipment first.

reacted to ProCreations's post with 👀 4 days ago
Quantum Computing + AI = 🤯?
What do you think quantum computing will do to AI?
Will it revolutionize training speed? Unlock whole new algorithms? Or maybe… just complicate things?

💬 Drop your thoughts below — we’ll share our take and highlight some of your replies in tomorrow’s post!
posted an update 11 days ago
replied to their post 2 months ago

Good question!
Before, you had to download the .wav file from Colab. I've now added an Audio display from IPython and will be cleaning things up for a future post. Sorry for the rough first release; the next one will be much better. I usually work in a copy so as not to disturb the original.
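
For anyone curious, this is roughly what the IPython bit looks like in a Colab cell (the filename here is just a placeholder):

```python
# Minimal sketch: play a generated .wav inline in a Colab/Jupyter cell
# instead of downloading it. "output.wav" is a placeholder filename.
from IPython.display import Audio, display

display(Audio("output.wav"))  # renders an inline audio player
```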

posted an update 2 months ago
reacted to samihalawa's post with 🔥 3 months ago
🚀 OpenAI o3-mini Just Dropped – Here’s What You Need to Know!

OpenAI just launched o3-mini, a faster, smarter upgrade over o1-mini. It’s better at math, coding, and logic, making it more reliable for structured tasks. Now available in ChatGPT & API, with function calling, structured outputs, and system messages.

🔥 Why does this matter?
✅ Stronger in logic, coding, and structured reasoning
✅ Function calling now works reliably for API responses
✅ More stable & efficient for production tasks
✅ Faster responses with better accuracy

⚠️ Who should use it?
✔️ Great for coding, API calls, and structured Q&A
❌ Not meant for long conversations or complex reasoning (GPT-4 is better)

💡 Free users: Try it under “Reason” mode in ChatGPT
💡 Plus/Team users: Daily message limit tripled to 150/day!
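
For reference, a minimal sketch of calling o3-mini through the OpenAI Python client (openai>=1.x); the prompt and the use of a system message are illustrative, not an official recipe:

```python
# Minimal sketch: calling o3-mini via the OpenAI Python client.
# Assumes OPENAI_API_KEY is set in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python one-liner that squares the numbers 1 through 10."},
    ],
)
print(response.choices[0].message.content)
```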
reacted to MoritzLaurer's post with ❤️ 4 months ago
The TRL v0.13 release is 🔥! My highlights are the new process reward trainer, for training models similar to o1, and tool call support:

🧠 Process reward trainer: Enables training of Process-supervised Reward Models (PRMs), which reward the quality of intermediate steps, promoting structured reasoning. Perfect for tasks like stepwise reasoning.

🔀 Model merging: A new callback leverages mergekit to merge models during training, improving performance by blending reference and policy models - optionally pushing merged models to the Hugging Face Hub.

🛠️ Tool call support: TRL preprocessing now supports tool integration, laying the groundwork for agent fine-tuning with examples like dynamic temperature fetching in prompts.

⚖️ Mixture of judges: The new AllTrueJudge combines decisions from multiple binary judges for more nuanced evaluation.

Read the release notes and other resources here 👇
Release: https://github.com/huggingface/trl/releases/tag/v0.13.0
Mergekit: https://github.com/arcee-ai/mergekit
Mixture of judges paper: The Perfect Blend: Redefining RLHF with Mixture of Judges (2409.20370)
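
As a rough sketch of what the new process reward trainer looks like in practice (the model and dataset names below are placeholders I'm assuming, not part of the release notes; check the TRL docs for the exact API):

```python
# Rough sketch of training a Process-supervised Reward Model with TRL's PRMTrainer.
# "Qwen/Qwen2-0.5B" and the stepwise dataset are assumed placeholders.
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer
from trl import PRMConfig, PRMTrainer

model_name = "Qwen/Qwen2-0.5B"
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Expected to contain a prompt plus labeled intermediate steps per example.
train_dataset = load_dataset("trl-lib/math_shepherd", split="train")

training_args = PRMConfig(output_dir="prm-sketch")
trainer = PRMTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```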
replied to mlabonne's post 10 months ago

I'm personally of the opinion that the larger models, especially technically proficient ones like Claude or 4o, have likely been intentionally 'broken' away from storytelling as they've become much more helpful and critical in their role as co-engineers. I have personally conscripted Claude for some testing, and it's given me about 1/3 of an AI model that I basically only had to design and fix, instead of considering every detail without knowing the interactions. That lack of hallucination and that skill for deterministic writing likely detract from any creative elements present. Picture a highly autistic person with savant-level talent for programming and logic: a genius at code, but likely poor at creative writing unless instructed. The same would be true of a synthetic mind given only factual and grounded data for much of its training, as Anthropic seems to be doing for (obvious) safety reasons.

reacted to mlabonne's post with 🤗 10 months ago
Large models are surprisingly bad storytellers.

I asked 8 LLMs to "Tell me a bedtime story about bears and waffles."

Claude 3.5 Sonnet and GPT-4o gave me the worst stories: no conflict, no moral, zero creativity.

In contrast, smaller models were quite creative and wrote stories involving talking waffle trees and bears ostracized for their love of waffles.

Here you can see a comparison between Claude 3.5 Sonnet and NeuralDaredevil-8B-abliterated. They both start with a family of bears but quickly diverge in terms of personality, conflict, etc.

I mapped it to the hero's journey to have some kind of framework. Prompt engineering can definitely help here, but it's still disappointing that the larger models don't create better stories right off the bat.

Do you know why smaller models outperform the frontier models here?
posted an update 10 months ago
Hello!

I've been in the lab synthesizing captions, with my trusty sidekick BLIP, and along the way I had an interesting idea: designing an incredibly simple model that accepts simple instruction pairs, adjective-noun pairs specifically, and outputs 2D vertices.

The current implementation was written by me and then run over with Claude, not because I'm incompetent, but because I recognize that tools written by experts may have more technique than my newbie self.

As with all projects, this will be updated in proportion to the feedback received; if someone's using it and wants to keep using it, I'm happy to keep working on anything. Thanks, all! 🤗

-<3

https://colab.research.google.com/gist/SMeyersMrOvkill/8d4686db803f6c5f43fafc1c94b1c8c6/polypathdelement.ipynb
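
To give a sense of how small such a model can be, here's an illustrative sketch (not the notebook's actual implementation) of a tiny network that maps an adjective-noun pair to a single 2D vertex:

```python
# Illustrative sketch only, not the notebook's implementation:
# embed an (adjective, noun) pair and regress a single 2D vertex.
import torch
import torch.nn as nn

class PairToVertex(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # (x, y) vertex
        )

    def forward(self, adjective_ids, noun_ids):
        pair = torch.cat([self.embed(adjective_ids), self.embed(noun_ids)], dim=-1)
        return self.mlp(pair)

# Toy usage with a 100-word vocabulary:
model = PairToVertex(vocab_size=100)
vertex = model(torch.tensor([3]), torch.tensor([7]))  # shape: (1, 2)
```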
posted an update 10 months ago
Hello!

I've been in the lab. I think one or two of you saw my furtive attempts to create a dolphinized 2B Gemma, which is still waiting for more funding. I get paid in a week.

Once that funding ran out, I dropped my last pinch of API credits to work on this:

DigitalClockwork/spatial_instruct_v1

It's an instruct dataset for spatial interactions with color tokens, and I'm planning to tune a TBD model on it. I've been experimenting with Gemma, but I'm open to (smaller!) model suggestions. If you think your favorite 0.5/0.75/1/2B can handle numbers, distances, or colors especially well, most especially community-enhanced models... I'm listening to the comments, intently!
Have a great day, and enjoy! This one was fun! 🤗

-<3
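
If you want to poke at the data yourself, it loads like any other Hub dataset (the split name below is an assumption; check the dataset card for the exact fields):

```python
# Sketch: browsing DigitalClockwork/spatial_instruct_v1 with the datasets library.
# The "train" split is an assumption; see the dataset card for details.
from datasets import load_dataset

ds = load_dataset("DigitalClockwork/spatial_instruct_v1", split="train")
print(ds)     # features and row count
print(ds[0])  # first instruct example
```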
reacted to not-lain's post with ❤️ 10 months ago
I have finished writing a blog post about building an image-based retrieval system. This is one of the first-ever approaches to building such a pipeline using only open-source models/libraries 🤗

You can check out the blog post at https://huggingface.co/blog/not-lain/image-retriever and the associated space at not-lain/image-retriever.

✨ If you want to request another blog post, consider letting me know down below, or reach out to me through any of my social media.

📖 Happy reading!
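
For flavor, one common open-source way to build such a pipeline (illustrative only, not necessarily the blog post's exact approach) is to embed images with a CLIP model from sentence-transformers and rank them by cosine similarity against a text query:

```python
# Illustrative image-retrieval sketch, not necessarily the blog post's pipeline.
# Image filenames are placeholders.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

image_paths = ["cat.jpg", "dog.jpg", "waffle.jpg"]         # placeholder files
image_embeddings = model.encode([Image.open(p) for p in image_paths])

query_embedding = model.encode("a photo of a dog")          # text query
scores = util.cos_sim(query_embedding, image_embeddings)    # 1 x N similarities
print("Best match:", image_paths[scores.argmax().item()])
```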
replied to their post 10 months ago

You aren't the one flaming. The others though...

Anyway yes, it's being improved now. Been in the lab since that post. The CO-lab...

replied to their post 10 months ago

As did 'takera' author your thoughts, apparently. You're like snowflakes, each of you.

replied to their post 10 months ago

I was testing some plugins; it didn't occur to me that the default installations of some of the most commonly used plugins would cause issues. I apologize for the horrifying inconvenience you may have suffered at the hands of my blog. It does, after all, have such large and pointy teeth. Oh. Wait...

posted an update 10 months ago
Hello!

I've been playing with Claude, and we decided to tackle a real thorn in my side.

"The Truthiness Model" - Analyze arbitrary input text for "truthiness", or likelihood of containing true information according to seed text.

P.S. Yes, v1 was broken. I saw the loss rate going down and got excited. Anyway, it just needed some data and a rollback; Claude and I got WAY too carried away trying to tack on features.

Anyway, fixed now, and working! :D

http://samuelmeyerscode.serveblog.net/?p=49
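
For anyone wondering what "truthiness against seed text" could look like in its crudest form, here's an illustrative proxy (emphatically not the post's actual model): embed the seed text and the candidate text, then use cosine similarity as the score.

```python
# Crude illustrative proxy, not the post's actual "Truthiness Model":
# score a candidate text by its embedding similarity to a trusted seed text.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

seed_text = "Water boils at 100 degrees Celsius at sea level."
candidate = "At sea level, water reaches its boiling point at 100 C."

seed_emb = model.encode(seed_text, convert_to_tensor=True)
cand_emb = model.encode(candidate, convert_to_tensor=True)

truthiness = util.cos_sim(seed_emb, cand_emb).item()  # closer to 1.0 = more aligned with the seed
print(f"Truthiness score: {truthiness:.3f}")
```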
replied to their post 10 months ago

I'm so glad the data proved helpful! Keep me updated; I'm already a follower and looking forward to seeing more! As always, ask if you need anything.

replied to their post 10 months ago
posted an update 10 months ago
Hello!

https://www.youtube.com/watch?v=6NyDkpfNfUs

I had some feedback recently that perhaps it would be beneficial to expand the fallacy dataset. I took it deeply to heart and exploded it 10x.

MrOvkill/fallacies-fallacy-base

Produced synthetically with *ALL* the Gemini models on Vertex AI.

*phew* This was a rush. I can promise it was over 8 hours, maybe more like 16, of straight prompt/copy/paste/fix/re-splice/fix/prompt again/chug caffeine/repeat, but we got there! Thanks for egging me on, all! I appreciate being driven to work! So much better than boredom! 🤗

Have fun!
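
If you want to sample it before committing to anything, the usual datasets one-liner works (the split name is an assumption; check the dataset card for exact fields):

```python
# Sketch: sampling a few rows from MrOvkill/fallacies-fallacy-base.
# The "train" split is an assumption; see the dataset card for details.
from datasets import load_dataset

fallacies = load_dataset("MrOvkill/fallacies-fallacy-base", split="train")
for row in fallacies.shuffle(seed=42).select(range(3)):
    print(row)
```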

replied to their post 11 months ago

I was seriously considering branch-generating n samples/workarounds for each row of that dataset; once I can automate a step with Gemini, I'm pretty confident it's doable. Want it? If so, what should n be?