robgreenberg3 committed on
Commit aa5c268 · verified · 1 Parent(s): 612e6cc

Update README.md

Files changed (1)
  1. README.md +21 -6
README.md CHANGED
@@ -6,10 +6,28 @@ colorTo: red
  sdk: streamlit
  pinned: false
  ---
- This is the (un)official card for Red Hat AI and covers our Instructlab, Granite, Linux AI, RHEL AI and OpenShift AI offerings.
- The Podman AI Lab is also linked to for local desktop experimentation.
-
- Red Hat AI is powered by OpenSource with partnerships with IBM Research and Red Hat AI Business Units.
 
  RHEL AI/Instructlab/Granite Blog
  https://www.redhat.com/en/blog/what-rhel-ai-guide-open-source-way-doing-ai
@@ -20,9 +38,6 @@ https://github.com/instructlab
  RHEL AI Preview
  https://github.com/RedHatOfficial/rhelai-dev-preview
 
- InstructLab Granite 7b
- https://huggingface.co/instructlab/granite-7b-lab
-
  Red Hat OpenShift AI (ML/Ops Platform)
  https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai
 
 
  sdk: streamlit
  pinned: false
  ---
+ # Red Hat AI
+ <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d8/Red_Hat_logo.svg/2560px-Red_Hat_logo.svg.png" alt="Red Hat logo" width="200" height="100">
 
+ ## Build AI for your world
+
+ Red Hat AI is powered by open source, in partnership with IBM Research and the Red Hat AI Business Units.
+
+ We strongly believe the future of AI is open and that community-driven research will propel AI forward. As such, we will be hosting our latest optimized models on Hugging Face, fully open for the world to use. We hope the AI community finds our efforts useful and that our models help fuel further research.
+
+ With Red Hat AI you can:
+ - Access and leverage quantized variants of leading open source models such as Llama 4, Mistral Small 3.1, Phi 4, Granite, and more.
+ - Tune smaller, purpose-built models with your own data.
+ - Quantize your models with [LLM Compressor](https://github.com/vllm-project/llm-compressor), or use our pre-optimized models on Hugging Face (see the sketch after this list).
+ - Optimize inference with [vLLM](https://github.com/vllm-project/vllm).
+
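As a rough illustration of the quantization workflow mentioned in the list above, here is a minimal one-shot sketch with LLM Compressor. The model ID, calibration dataset, and output directory are placeholders, and import paths or argument names may differ between llm-compressor releases, so treat this as a starting point rather than a definitive recipe.

```python
# Minimal W4A16 one-shot quantization sketch with llm-compressor.
# Assumptions: a recent llm-compressor release; model/dataset/output names are placeholders.
from llmcompressor import oneshot  # older releases expose this as llmcompressor.transformers.oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Quantize all Linear layers to 4-bit weights, keeping the LM head in full precision.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",   # placeholder model ID
    dataset="open_platypus",                      # small open calibration dataset
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="TinyLlama-1.1B-Chat-v1.0-W4A16",  # compressed checkpoint lands here
)
```

The resulting directory can then be loaded directly by vLLM, as in the inference sketch further below.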
+ In this profile we provide accurate model checkpoints compressed with SOTA methods and ready to run in vLLM, including W4A16, W8A16, W8A8 (INT8 and FP8), and many more; a brief inference sketch follows below.
+ If you would like help quantizing a model, or have a request for us to add a checkpoint, please open an issue at https://github.com/vllm-project/llm-compressor.
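For completeness, a minimal offline-inference sketch with vLLM for one of these compressed checkpoints. The checkpoint name below is illustrative; substitute any of the quantized models published in this profile.

```python
# Minimal offline inference sketch with vLLM.
# The checkpoint name is illustrative; any compressed checkpoint from this profile should work.
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What does Red Hat AI offer?"], params)
print(outputs[0].outputs[0].text)
```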
+
+ ### Additional Red Hat AI Resources
 
  RHEL AI/Instructlab/Granite Blog
  https://www.redhat.com/en/blog/what-rhel-ai-guide-open-source-way-doing-ai
 
  RHEL AI Preview
  https://github.com/RedHatOfficial/rhelai-dev-preview
 
  Red Hat OpenShift AI (ML/Ops Platform)
  https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai