Update README.md
README.md
@@ -20,7 +20,7 @@ As such, we are hosting our latest optimized models on Hugging Face, fully open
 We hope that the AI community will find our efforts useful and that our models help fuel their research.
 
 With Red Hat AI you can:
--
+- Leverage quantized variants of the leading open source models such as Llama, Mistral, Gemma, Phi, Granite, and many more.
 - Tune smaller, purpose-built models with your own data.
 - Quantize your models with [LLM Compressor](https://github.com/vllm-project/llm-compressor) or use our pre-optimized models on HuggingFace.
 - Optimize inference with [vLLM](https://github.com/vllm-project/vllm).
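
As a quick illustration of the last bullet in the updated list, here is a minimal sketch of serving a pre-quantized Hugging Face checkpoint with vLLM. The model id below is a placeholder (not something specified in this change); substitute any quantized model you want to run.

```python
# Minimal sketch: load and serve a pre-quantized checkpoint with vLLM.
# The model id is a placeholder; swap in any quantized model published
# on Hugging Face (for example, one of the Red Hat AI optimized models).
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/Llama-3.1-8B-Instruct-quantized.w4a16")  # placeholder id
params = SamplingParams(temperature=0.7, max_tokens=128)

# Generate a completion for a single prompt and print it.
outputs = llm.generate(["What does weight quantization trade off?"], params)
print(outputs[0].outputs[0].text)
```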