Update README.md
sdk: streamlit
pinned: false
---

<h1 style="display: flex; align-items: center;">
  <span>Red Hat AI </span>
  <img width="40" height="40" alt="Red Hat logo" src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Red_Hat_logo.svg/2560px-Red_Hat_logo.svg.png" />
  <span> Build AI for your world</span>
</h1>

Red Hat AI is powered by open source, in partnership with IBM Research and the Red Hat AI Business Units.
We strongly believe the future of AI is open, and that community-driven research will propel AI forward.
As such, we are hosting our latest optimized models on Hugging Face, fully open for the world to use.
We hope that the AI community will find our efforts useful and that our models help fuel their research.

With Red Hat AI you can:
- Access and leverage quantized variants of leading open source models such as Llama 4, Mistral Small 3.1, Phi 4, Granite, and more.
- Quantize your models with [LLM Compressor](https://github.com/vllm-project/llm-compressor) or use our pre-optimized models on Hugging Face.
- Optimize inference with [vLLM](https://github.com/vllm-project/vllm).

We provide accurate model checkpoints compressed with SOTA methods, ready to run in vLLM, such as W4A16, W8A16, W8A8 (int8 and fp8), and many more!
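All of these schemes boil down to storing low-precision integers together with a scale factor. As a rough illustration only — a minimal pure-Python sketch, not LLM Compressor's actual implementation, which operates per-channel or per-group on tensors — symmetric int8 quantization (the building block of a W8A8-style format) works like this:

```python
# Minimal sketch of symmetric per-tensor int8 quantization (illustrative,
# not LLM Compressor's real code path).

def quantize_int8(values):
    """Map floats to int8 range [-127, 127] using one symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.635, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-to-nearest keeps the per-element error within half a scale step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, scale, max_err)
```

Weight-only formats like W4A16 apply the same idea with 4-bit integers for weights while leaving activations in 16-bit floating point.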
If you would like help quantizing a model or have a request for us to add a checkpoint, please open an issue in https://github.com/vllm-project/llm-compressor.

Learn more at https://www.redhat.com/en/products/ai