John Leimgruber III
ubergarm
AI & ML interests
Open LLMs and Astrophotography image processing.
Recent Activity
new activity about 20 hours ago in ubergarm/Qwen3-235B-A22B-GGUF: Context Length
new activity 3 days ago in ubergarm/Qwen3-235B-A22B-GGUF: Q6 K
new activity 4 days ago in ubergarm/DeepSeek-V3-0324-GGUF: Guide on how to spread the model across 2 GPUs?
Organizations
None yet
ubergarm's activity
Context Length · 1 · #2 opened 1 day ago by vacekj
Q6 K · 33 · #1 opened 7 days ago by Autumnlight
Guide on how to spread the model across 2 GPUs? · 1 · 1 · #3 opened 4 days ago by eagerexecution
What are folks' opinions on 4KM quants? Are they viable? · 1 · 1 · #3 opened 8 days ago by Permahuman
Appreciation. · 2 · 1 · #1 opened 6 days ago by atopwhether
Imitation is the highest form of flattery. · 1 · 2 · #1 opened 6 days ago by ubergarm
dynamic quants · 1 · 16 · #1 opened 6 days ago by lucyknada
AWQ quantized model support timeline? · 7 · 2 · #12 opened 8 days ago by hyunw55
How to use the bnb-4bit model? · 14 · #4 opened 2 months ago by neoragex2002
Thank you but some issues · 1 · 45 · #2 opened 15 days ago by MB7977
Compare to new Dynamic v2.0 Unsloth quants? · 1 · #2 opened 14 days ago by BernardH
Expected speed in some known hardware - data required · 3 · #1 opened about 1 month ago by mechanicmuthu
Comparing to new Dynamic v2.0 Unsloth quants? · 1 · 1 · #2 opened 14 days ago by BernardH
Try to install for ollama but got errors · 1 · #1 opened 16 days ago by shilik
Can we have a Llama-3.1-8B-Lexi-Uncensored-V2_fp8_scaled.safetensors · 1 · 12 · #10 opened 22 days ago by drguolai
Other Imatrix quants (IQ3_XS)? · 3 · 6 · #1 opened 19 days ago by deleted
Can we use an existing llama 3.1 gguf? · 6 · #1 opened 22 days ago by CHNtentes
Some initial results comparing size and perplexity · 1 · #1 opened 20 days ago by ubergarm
Permission to access the Unquantized versions of the QAT weights for Gemma-3 · 2 · 10 · #4 opened about 1 month ago by Joseph717171