Update README.md
README.md CHANGED
@@ -25,7 +25,6 @@ We believe the future of AI is open. That's why we're sharing our latest mod

**Explore our open-source tools**:
- [**vLLM**](https://github.com/vllm-project/vllm) – Serve large language models efficiently across GPUs and environments.
- [**LLM Compressor**](https://github.com/vllm-project/llm-compressor) – Compress and optimize your own models with SOTA quantization and sparsity techniques.
- ~~TODO: add speculators shortly once the first release of that goes out and we start pushing models up.~~ *(line removed in this change)*
- [**InstructLab**](https://github.com/instructlab) – Fine-tune open models with your data using scalable, community-backed workflows.
- [**GuideLLM**](https://github.com/neuralmagic/guidellm) – Benchmark, evaluate, and guide your deployments with structured performance and latency insights.
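For readers arriving at this README change from the tool list above, here is a minimal offline-inference sketch for vLLM, the first project in that list. It follows vLLM's public quickstart API (`LLM` and `SamplingParams`); the model name is only a placeholder, and serving details such as GPU parallelism are omitted.

```python
# Minimal vLLM offline-inference sketch (placeholder model name; adjust to your hardware).
from vllm import LLM, SamplingParams

# Example prompts to generate completions for.
prompts = [
    "The future of AI is",
    "Quantization helps deployment because",
]

# Basic sampling settings: nucleus sampling with a short completion budget.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small placeholder model; swap in any Hugging Face model id you actually serve.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batched call and print them.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

For production use, the same models are typically exposed through vLLM's OpenAI-compatible HTTP server (e.g. `vllm serve <model>`), which is also the kind of endpoint GuideLLM is designed to benchmark.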