Man Cub (mancub)
AI & ML interests: None yet
Recent Activity
new activity 11 days ago · HiDream-ai/HiDream-I1-Full: Within Seconds ?
new activity 13 days ago · HiDream-ai/HiDream-I1-Full: Is it censored output?
Organizations: None yet
mancub's activity
Can we have a Llama-3.1-8B-Lexi-Uncensored-V2_fp8_scaled.safetensors · 1 · 12 · #10 opened 21 days ago by drguolai
Within Seconds ? · 7 · #8 opened 28 days ago by Daemontatox
Is it censored output? · 10 · #2 opened 30 days ago by KurtcPhotoED
Please work with llama.cpp before releasing new models. · 2 · #10 opened 20 days ago by bradhutchings
Lack of 33B models? · 1 · 7 · #1 opened over 1 year ago by mancub
No config.json ? · 3 · #1 opened almost 2 years ago by 0x12d3
is this working properly? · 23 · #1 opened almost 2 years ago by Boffy
Uh oh, the "q's"... · 27 · #2 opened almost 2 years ago by mancub
Are you making k-quant series of this model? · 3 · #1 opened almost 2 years ago by mancub
Issues with Auto · 3 · #15 opened almost 2 years ago by Devonance
Unfortunately I can't run on text-generation-webui · 11 · #1 opened almost 2 years ago by Suoriks
Getting 0 tokens while running using text-generation -webui · 6 · #4 opened almost 2 years ago by avatar8875
Could this model be loaded in 3090 GPU? · 24 · #6 opened almost 2 years ago by Exterminant
Error when loading the model in ooba's UI (colab version) · 14 · #3 opened almost 2 years ago by PopGa
not enough memory · 2 · #15 opened almost 2 years ago by nikocraft
How much vram+ram 30B needs? I have 3060 12gb + 32gb ram. · 21 · #1 opened almost 2 years ago by DaveScream
Won't load... GPTQ.... · 13 · #1 opened almost 2 years ago by vdruts
Unable to load/use this model. · 11 · #5 opened almost 2 years ago by vdruts
error when loading sucessful and prompting simple text · 19 · #11 opened almost 2 years ago by joseph3553
What hardware do I need for reasonable performance? · 4 · #3 opened almost 2 years ago by TS0001