mgoin committed
Commit 26ba037 · 1 Parent(s): d713079

Update README.md

Files changed (1): README.md +6 -2
README.md CHANGED
@@ -7,10 +7,14 @@ sdk: static
 pinned: false
 ---
 
-# Software-delivered AI Inference
+# Software-Delivered AI Inference
 
 Neural Magic helps developers in accelerating deep learning performance using automated model sparsification technologies and a CPU inference engine.
-Download and run our sparsity-aware inference engine and open source tools for GPU-class performance on CPUs.
+Download our sparsity-aware inference engine and open source tools for GPU-class performance on CPUs.
 * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application
 * [SparseML](https://github.com/neuralmagic/sparseml): Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
 * [SparseZoo](https://sparsezoo.neuralmagic.com/): Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
+
+
+#### ✨NEW✨ DeepSparse LLMs
+We are pleased to announce our paper on Sparse Finetuning of LLMs, starting with MosaicML's MPT-7b. Check out the [paper], [models](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and [usage](https://research.neuralmagic.com/mpt-sparse-finetuning).
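
For context on the DeepSparse API mentioned in the README above, here is a minimal sketch of running a sparse model through DeepSparse's `Pipeline` interface. This is not part of the commit: the task name and SparseZoo stub below are illustrative placeholders; substitute any stub from https://sparsezoo.neuralmagic.com/ or a local ONNX model path.

```python
# Minimal sketch: CPU inference with DeepSparse's Pipeline API.
from deepsparse import Pipeline

# Pipeline.create wires up pre- and post-processing for the chosen task
# and runs the model on the sparsity-aware DeepSparse runtime.
pipeline = Pipeline.create(
    task="sentiment-analysis",
    model_path="zoo:some/sparsezoo-stub",  # hypothetical placeholder stub
)

# Run inference on a batch of input sequences.
prediction = pipeline(sequences=["DeepSparse runs sparse models on CPUs."])
print(prediction)
```

Swapping in a different `task` (and a matching model) reuses the same calling pattern, which is what lets the README claim integration "with a few lines of code."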