Chat with Bitnet-b1.58-2B-4T
Chat with Microsoft's 1.58bit Bitnet model!
Hey Nishith, I had one query: are you using the Mistral inference client to generate the output? If not, then how are you able to generate such coherent output with an open-weight model?
Hey Nishith, I have one doubt: is a GPU mandatory for running text-generation model inference, especially with the Mistral model? I am running on a 16 GB CPU Space, but the code just doesn't execute.
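For what it's worth, here is a minimal CPU-only sketch of that kind of inference, assuming the standard transformers `AutoModelForCausalLM` API and the `microsoft/bitnet-b1.58-2B-4T` checkpoint (both are assumptions; the model card may require a specific transformers build, and generation on CPU will be slow but should not need a GPU):

```python
# Minimal sketch: CPU-only text generation with transformers.
# Assumptions: the checkpoint name and the plain AutoModelForCausalLM
# loading path are illustrative, not confirmed by this thread.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/bitnet-b1.58-2B-4T"  # assumed model repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
# No device_map and no .to("cuda"): the model stays on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Hello, what can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```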
Wow bro thank you so much this is really gold