mgoin committed (verified)
Commit ef32cf5 · Parent: 5b7bcfc

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -13,17 +13,17 @@ pinned: false
   <span>&nbsp;&nbsp;Build AI for your world</span>
 </h1>
 
-Red Hat AI is powered by open-source with partnerships with IBM Research and Red Hat AI Business Units.
+Red Hat AI is built on open-source innovation, driven through close collaboration with IBM and Red Hat AI research, engineering, and business units.
 
-We strongly believe the future of AI is open and community-driven research will propel AI forward.
+We strongly believe the future of AI is open and community-driven.
 As such, we are hosting our latest optimized models on Hugging Face, fully open for the world to use.
-We hope that the AI community will find our efforts useful and that our models help fuel their research.
+We hope that the AI community will find our efforts useful and that our models help fuel their research and efficient AI deployments.
 
 With Red Hat AI you can,
-- Leverage quantized variants of the leading open source models sush as Llama, Mistral, Gemma, Phi, Granite, and many more.
+- Leverage quantized variants of the leading open source models such as Llama, Mistral, Granite, DeepSeek, Qwen, Gemma, Phi, and many more.
 - Tune smaller, purpose-built models with your own data.
 - Quantize your models with [LLM Compressor](https://github.com/vllm-project/llm-compressor) or use our pre-optimized models on HuggingFace.
-- Optimize inference with [vLLM](https://github.com/vllm-project/vllm).
+- Optimize inference with [vLLM](https://github.com/vllm-project/vllm) across any hardware and deployment scenario.
 
 We provide accurate model checkpoints compressed with SOTA methods ready to run in vLLM such as W4A16, W8A16, W8A8 (int8 and fp8), and many more!
 If you would like help quantizing a model or have a request for us to add a checkpoint, please open an issue in https://github.com/vllm-project/llm-compressor.
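
As a rough illustration of the compression workflow the README points to, the sketch below produces a W4A16 checkpoint with LLM Compressor. The model id and calibration dataset are placeholders, and exact import paths and argument names vary across llm-compressor releases, so treat this as a sketch rather than a verbatim recipe:

```python
# Hypothetical quantization sketch; check the llm-compressor README
# for the exact API of the release you have installed.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# W4A16: 4-bit weights, 16-bit activations, applied to every Linear
# layer except the output head.
recipe = GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder HF causal LM
    dataset="open_platypus",                     # placeholder calibration set
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```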
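Once you have a compressed checkpoint, whether one you produced or one of the pre-optimized models, running it in vLLM looks roughly like this. The model id below is illustrative, not a guaranteed repo name; browse the organization's Hugging Face page for the actual checkpoints:

```python
from vllm import LLM, SamplingParams

# Illustrative model id; substitute a real pre-optimized checkpoint.
llm = LLM(model="RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16")

# Generate a short completion from the quantized model.
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What is weight quantization?"], params)
print(outputs[0].outputs[0].text)
```

vLLM reads the quantization config stored in the checkpoint, so the same two-line load-and-generate pattern works across the W4A16, W8A16, and W8A8 variants.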