AI & ML interests

GenAI, Diffusion, LLMs, and state-of-the-art solutions.

Recent Activity

takarajordan published a dataset 9 days ago
takara-ai/rand-1m-multimodal
takarajordan updated a dataset 12 days ago
takara-ai/HashGrad-10M
takarajordan published a dataset 12 days ago
takara-ai/HashGrad-10M

takara-ai's activity

takarajordan posted an update 29 days ago
🎌 Two months in, https://github.com/takara-ai/go-attention has passed 429 stars on GitHub.

We built this library at takara.ai to bring attention mechanisms and transformer layers to Go — in a form that's lightweight, clean, and dependency-free.

We're proud to say that every part of this project reflects what we set out to do.

- Pure Go — no external dependencies, built entirely on the Go standard library
- Core support for DotProductAttention and MultiHeadAttention
- Full transformer layers with LayerNorm, feed-forward networks, and residual connections
- Designed for edge, embedded, and real-time environments where simplicity and performance matter

Thank you to everyone who has supported this so far — the stars, forks, and feedback mean a lot.
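For readers new to the topic, here is a minimal, self-contained sketch of the scaled dot-product attention computation the post describes, written against only the Go standard library. It is an illustration of the idea under my own naming and shape conventions, not go-attention's actual API.

```go
// Sketch of scaled dot-product attention using only the Go standard library.
// Not the go-attention API; function names and shapes are illustrative only.
package main

import (
	"fmt"
	"math"
)

// softmax normalizes a slice of scores into a probability distribution.
func softmax(scores []float64) []float64 {
	maxScore := math.Inf(-1)
	for _, s := range scores {
		if s > maxScore {
			maxScore = s
		}
	}
	sum := 0.0
	out := make([]float64, len(scores))
	for i, s := range scores {
		out[i] = math.Exp(s - maxScore) // subtract max for numerical stability
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}

// dotProductAttention computes softmax(Q·Kᵀ/√d)·V, one query per row of q.
func dotProductAttention(q, k, v [][]float64) [][]float64 {
	d := float64(len(q[0]))
	out := make([][]float64, len(q))
	for i, query := range q {
		// Score this query against every key.
		scores := make([]float64, len(k))
		for j, key := range k {
			for t := range query {
				scores[j] += query[t] * key[t]
			}
			scores[j] /= math.Sqrt(d)
		}
		weights := softmax(scores)
		// Weighted sum of the value vectors.
		out[i] = make([]float64, len(v[0]))
		for j, value := range v {
			for t := range value {
				out[i][t] += weights[j] * value[t]
			}
		}
	}
	return out
}

func main() {
	q := [][]float64{{1, 0}, {0, 1}}
	k := [][]float64{{1, 0}, {0, 1}, {1, 1}}
	v := [][]float64{{1, 2}, {3, 4}, {5, 6}}
	fmt.Println(dotProductAttention(q, k, v))
}
```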
takarajordan posted an update about 1 month ago
AI research over coffee ☕️
No abstracts, just bullet points.
Start your day here: https://tldr.takara.ai
takarajordan posted an update about 1 month ago
Takara takes 3rd place in the {tech:munich} AI hackathon with Fudeno!

A little over two weeks ago, @aldigobbler and I set out to create the largest multimodal SVG dataset ever made. We succeeded, and when I was in Munich, Germany, I took it one step further and built an entire app with it!

We fine-tuned Mistral Small, built a Next.js application, and blew some minds, taking 3rd place out of over 100 hackers. So cool!

If you want to see the dataset, please see below.

takara-ai/fudeno-instruct-4M
not-lain posted an update about 2 months ago
Tonic posted an update 2 months ago
πŸ™‹πŸ»β€β™‚οΈHey there folks,

Did you know that you can use ModernBERT to detect model hallucinations?

Check out the demo: Tonic/hallucination-test

See here for a medical-context demo: MultiTransformer/tonic-discharge-guard

Check out the model from KRLabs: KRLabsOrg/lettucedect-large-modernbert-en-v1

And the library they kindly open-sourced for it: https://github.com/KRLabsOrg/LettuceDetect

πŸ‘†πŸ»if you like this topic please contribute code upstream πŸš€

Tonic posted an update 2 months ago
Powered by KRLabsOrg/lettucedect-large-modernbert-en-v1 from KRLabsOrg.

Detect hallucinations in answers based on context and questions using ModernBERT with 8192-token context support!

### Model Details
- **Model Name**: [lettucedect-large-modernbert-en-v1](KRLabsOrg/lettucedect-large-modernbert-en-v1)
- **Organization**: [KRLabsOrg](KRLabsOrg)
- **GitHub**: [https://github.com/KRLabsOrg/LettuceDetect](https://github.com/KRLabsOrg/LettuceDetect)
- **Architecture**: ModernBERT (Large) with extended context support up to 8192 tokens
- **Task**: Token Classification / Hallucination Detection
- **Training Dataset**: [RAGTruth](wandb/RAGTruth-processed)
- **Language**: English
- **Capabilities**: Detects hallucinated spans in answers, provides confidence scores, and calculates average confidence across detected spans.

LettuceDetect excels at processing long documents to determine if an answer aligns with the provided context, making it a powerful tool for ensuring factual accuracy.
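The span-level behavior listed under capabilities (grouping consecutive hallucinated tokens into spans, attaching a confidence score to each span, and averaging confidence across spans) comes down to a small post-processing step over per-token probabilities. The sketch below illustrates that aggregation in Go; it is not LettuceDetect's actual code, and the token offsets and probabilities are hypothetical placeholders rather than real model output.

```go
// Sketch of aggregating per-token hallucination probabilities into spans.
// The tokens and scores below are hypothetical placeholders, not real
// output from lettucedect-large-modernbert-en-v1.
package main

import "fmt"

type token struct {
	text  string
	start int     // character offset of the token in the answer
	end   int
	prob  float64 // probability that the token is hallucinated
}

type span struct {
	start, end int
	confidence float64 // mean probability over the tokens in the span
}

// groupSpans merges consecutive tokens whose probability exceeds the
// threshold into spans, then averages confidence across detected spans.
func groupSpans(tokens []token, threshold float64) ([]span, float64) {
	var spans []span
	var cur []token
	flush := func() {
		if len(cur) == 0 {
			return
		}
		sum := 0.0
		for _, t := range cur {
			sum += t.prob
		}
		spans = append(spans, span{
			start:      cur[0].start,
			end:        cur[len(cur)-1].end,
			confidence: sum / float64(len(cur)),
		})
		cur = nil
	}
	for _, t := range tokens {
		if t.prob >= threshold {
			cur = append(cur, t)
		} else {
			flush()
		}
	}
	flush()

	// Average confidence across all detected spans.
	avg := 0.0
	for _, s := range spans {
		avg += s.confidence
	}
	if len(spans) > 0 {
		avg /= float64(len(spans))
	}
	return spans, avg
}

func main() {
	answer := []token{
		{"The", 0, 3, 0.02}, {"capital", 4, 11, 0.05},
		{"is", 12, 14, 0.10}, {"Lyon", 15, 19, 0.93}, {".", 19, 20, 0.88},
	}
	spans, avg := groupSpans(answer, 0.5)
	fmt.Println(spans, avg) // one span covering "Lyon.", with its mean confidence
}
```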