
microsoft/bitnet-b1.58-2B-4T-gguf

Tags: Text Generation · Transformers · GGUF · English · chat · bitnet · large-language-model · conversational
Community discussions (14)

#15 (1) · Running the model in Ollama is not supported · opened 7 days ago by humble92

#14 · llama_cpp_python: gguf_init_from_file_impl: failed to read tensor info · opened 9 days ago by miscw
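As context for #14: "failed to read tensor info" is raised after the fixed GGUF header has been read, when the loader walks the tensor-info section and hits a tensor type it does not recognize (the BitNet ternary packing is not in mainline llama.cpp builds). A minimal sketch of parsing just the fixed header with the standard library — the field layout (magic, uint32 version, uint64 tensor count, uint64 KV count) follows the GGUF format; the function name and example values are illustrative, not from this repo:

```python
import struct

def read_gguf_header(path):
    """Parse the fixed GGUF file header: magic, version, tensor count, KV count.

    A loader that parses this header successfully can still fail one step
    later, in the tensor-info section, if the file uses a quantization
    type the build does not support -- consistent with the error in #14.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        version, = struct.unpack("<I", f.read(4))     # format version, little-endian uint32
        n_tensors, = struct.unpack("<Q", f.read(8))   # number of tensor-info records
        n_kv, = struct.unpack("<Q", f.read(8))        # number of metadata key/value pairs
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}
```

Checking `n_tensors` and `version` this way can help distinguish a truncated download from an unsupported-quantization failure before reaching for a full loader.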

#13 · Strange responses: "GGGGGGGGGGGG..." · opened 12 days ago by avicohen

#12 (1) · Run with 400 MB · opened 14 days ago by Dinuraj

#11 (4) · How to easily run on Windows? · opened 15 days ago by lbarasc

#8 · Chat template issue · opened 17 days ago by tdh111

#7 (3) · TQ1 quant version · opened 17 days ago by TobDeBer

#6 (4 · 8) · Does not work in LM Studio · opened 18 days ago by mailxp

#5 (1) · Running "python setup_env.py -md models/BitNet-b1.58-2B-4T -q tl2" with "tl2" fails with an error, and no GGUF is created · opened 19 days ago by 86egVer03

#3 · "Chinese Ha" is not supported · opened 22 days ago by digmouse100

#2 (3) · GGUF not yet compatible with llama.cpp · opened 22 days ago by lefromage

#1 · Update README.md · opened 23 days ago by bullerwins