
Quantization made by Richard Erkhov.

Github

Discord

Request more models

GritLM-8x7B-KTO - GGUF

Original model description:

pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
  - GritLM/tulu2

Model Summary

A KTO version of https://huggingface.co/GritLM/GritLM-8x7B

GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation in a single model, achieving state-of-the-art performance on both types of tasks.

Model Description

| Model       | Description                        |
|-------------|------------------------------------|
| GritLM 7B   | Mistral 7B finetuned using GRIT    |
| GritLM 8x7B | Mixtral 8x7B finetuned using GRIT  |

Use

The model usage is documented here.
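
The linked documentation covers the original (unquantized) GritLM checkpoints. For the GGUF files in this repository, the sketch below shows one way to run generation with llama-cpp-python; the local file name, context size, and the `<|user|>`/`<|assistant|>` prompt format are assumptions (the chat format follows the upstream GritLM card) and should be adjusted to your setup.

```python
# Minimal sketch, assuming llama-cpp-python and a locally downloaded GGUF file.
# The file name below is a hypothetical placeholder; the <|user|>/<|assistant|>
# prompt format follows the upstream GritLM card and may need adjusting.
from llama_cpp import Llama

llm = Llama(
    model_path="GritLM-8x7B-KTO.Q4_K_M.gguf",  # placeholder: any quant file from this repo
    n_ctx=4096,         # context window; increase if memory allows
    n_gpu_layers=-1,    # offload all layers to GPU; set to 0 for CPU-only
)

prompt = "<|user|>\nWrite a short summary of what GGUF quantization does.\n<|assistant|>\n"
out = llm(prompt, max_tokens=200, temperature=0.7, stop=["<|user|>"])
print(out["choices"][0]["text"])
```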

Citation

@misc{muennighoff2024generative,
      title={Generative Representational Instruction Tuning}, 
      author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela},
      year={2024},
      eprint={2402.09906},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

GGUF
Model size: 46.7B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
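
A file for a given bit width can be pulled directly from the Hub, for example with huggingface_hub; the repo_id and filename below are placeholders and must be replaced with the exact entries from this repository's file listing.

```python
# Sketch: fetching one quantized GGUF file with huggingface_hub.
# repo_id and filename are hypothetical placeholders; copy the exact values
# from this repository's "Files and versions" tab before running.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/GritLM_-_GritLM-8x7B-KTO-gguf",  # placeholder repo id
    filename="GritLM-8x7B-KTO.Q4_K_M.gguf",                 # placeholder: a 4-bit quant
)
print("Downloaded to:", path)
```

Lower-bit files are smaller and faster to load but lose more quality; 4-bit and 5-bit variants are a common compromise for a model of this size.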
