We built this library at takara.ai to bring attention mechanisms and transformer layers to Go, in a form that's lightweight, clean, and dependency-free.
We're proud to say that every part of this project reflects what we set out to do.
- Pure Go: no external dependencies, built entirely on the Go standard library
- Core support for DotProductAttention and MultiHeadAttention (see the sketch after this list)
- Full transformer layers with LayerNorm, feed-forward networks, and residual connections
- Designed for edge, embedded, and real-time environments where simplicity and performance matter
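To make the core idea concrete, here is a minimal sketch of scaled dot-product attention in pure Go using only the standard library. The types and function names below are illustrative assumptions for this post, not the library's actual API.

```go
// Minimal sketch of scaled dot-product attention in pure Go.
// Illustrative only: names and shapes here are assumptions, not the library's API.
package main

import (
	"fmt"
	"math"
)

// softmax normalizes a slice of scores into a probability distribution.
func softmax(scores []float64) []float64 {
	maxScore := math.Inf(-1)
	for _, s := range scores {
		if s > maxScore {
			maxScore = s
		}
	}
	sum := 0.0
	out := make([]float64, len(scores))
	for i, s := range scores {
		out[i] = math.Exp(s - maxScore)
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}

// dotProductAttention computes softmax(QK^T / sqrt(d)) V for query, key,
// and value matrices given as [seqLen][dim] slices.
func dotProductAttention(q, k, v [][]float64) [][]float64 {
	d := float64(len(q[0]))
	out := make([][]float64, len(q))
	for i, qi := range q {
		// Scaled dot-product scores between this query and every key.
		scores := make([]float64, len(k))
		for j, kj := range k {
			for t := range qi {
				scores[j] += qi[t] * kj[t]
			}
			scores[j] /= math.Sqrt(d)
		}
		weights := softmax(scores)
		// Weighted sum of the value vectors.
		out[i] = make([]float64, len(v[0]))
		for j, w := range weights {
			for t, val := range v[j] {
				out[i][t] += w * val
			}
		}
	}
	return out
}

func main() {
	q := [][]float64{{1, 0}, {0, 1}}
	k := [][]float64{{1, 0}, {0, 1}}
	v := [][]float64{{1, 2}, {3, 4}}
	fmt.Println(dotProductAttention(q, k, v))
}
```

Sticking to plain slices and the `math` package is what keeps a build like this dependency-free for edge and embedded targets.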
Thank you to everyone who has supported this so far; the stars, forks, and feedback mean a lot.
Takara takes 3rd place in the {tech:munich} AI hackathon with Fudeno!
A little over two weeks ago, @aldigobbler and I set out to create the largest multimodal SVG dataset ever built. We succeeded, and while I was in Munich, Germany, I took it one step further and made an entire app with it!
We fine-tuned Mistral Small, built a Next.js application, and blew some minds, taking 3rd place out of over 100 hackers. So cool!
Detect hallucinations in answers based on context and questions using ModernBERT with 8192-token context support!
### Model Details

- **Model Name**: [lettucedect-large-modernbert-en-v1](KRLabsOrg/lettucedect-large-modernbert-en-v1)
- **Organization**: [KRLabsOrg](KRLabsOrg)
- **GitHub**: [https://github.com/KRLabsOrg/LettuceDetect](https://github.com/KRLabsOrg/LettuceDetect)
- **Architecture**: ModernBERT (Large) with extended context support up to 8192 tokens
- **Task**: Token Classification / Hallucination Detection
- **Training Dataset**: [RAGTruth](wandb/RAGTruth-processed)
- **Language**: English
- **Capabilities**: Detects hallucinated spans in answers, provides confidence scores, and calculates average confidence across detected spans
LettuceDetect excels at processing long documents to determine whether an answer aligns with the provided context, making it a powerful tool for ensuring factual accuracy.
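To illustrate the span-based output described above, here is a small Go sketch of how per-token hallucination probabilities from a token-classification model could be grouped into contiguous spans, each with an average confidence. The threshold, data layout, and function names are assumptions for illustration and are not LettuceDetect's actual API.

```go
// Sketch: grouping per-token hallucination probabilities into spans.
// Hypothetical post-processing step; not LettuceDetect's actual API.
package main

import "fmt"

// Span is a contiguous run of answer tokens flagged as hallucinated,
// with the average model confidence across the run.
type Span struct {
	Start, End int     // token indices, End exclusive
	Confidence float64 // average probability over the span
}

// spansFromTokenProbs groups consecutive tokens whose hallucination
// probability meets the threshold into spans and averages their scores.
func spansFromTokenProbs(probs []float64, threshold float64) []Span {
	var spans []Span
	start, sum := -1, 0.0
	for i, p := range probs {
		if p >= threshold {
			if start < 0 {
				start = i
			}
			sum += p
			continue
		}
		if start >= 0 {
			spans = append(spans, Span{start, i, sum / float64(i-start)})
			start, sum = -1, 0.0
		}
	}
	if start >= 0 {
		spans = append(spans, Span{start, len(probs), sum / float64(len(probs)-start)})
	}
	return spans
}

func main() {
	// Example per-token probabilities for an eight-token answer (made up for illustration).
	probs := []float64{0.02, 0.10, 0.85, 0.91, 0.88, 0.05, 0.77, 0.12}
	for _, s := range spansFromTokenProbs(probs, 0.5) {
		fmt.Printf("tokens %d-%d flagged as hallucinated (confidence %.2f)\n", s.Start, s.End-1, s.Confidence)
	}
}
```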