---

license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
base_model_relation: quantized
pipeline_tag: text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---


# Elastic model: Qwen2.5-7B-Instruct. Fastest and most flexible models for self-hosting.

Elastic models are produced by TheStage AI ANNA (Automated Neural Networks Accelerator). ANNA lets you control model size, latency, and quality with a simple slider movement. For each model, ANNA produces a series of optimized variants:

* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.

* __L__: Near-lossless model, with less than 1% degradation on the corresponding benchmarks.

* __M__: Faster model, with less than 1.5% accuracy degradation.

* __S__: The fastest model, with less than 2% accuracy degradation.


__Goals of elastic models:__

* Provide flexibility in cost vs. quality selection for inference.
* Provide clear quality and latency benchmarks.
* Provide an interface to HF libraries (transformers and diffusers) with a single line of code.
* Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT.
* Provide the best models and service for self-hosting.

> Note that the actual quality degradation varies from model to model; an S model, for instance, may show as little as 0.5% degradation.

![Performance Graph](images/performance_graph.png)
-----

## Inference

To run inference with our models, simply replace the `transformers` import with `elastic_models.transformers`:

```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM

# Currently we require your HF token, since we use the original
# weights for part of the layers and the model configuration as well.
model_name = "Qwen/Qwen2.5-7B-Instruct"
hf_token = ''
device = torch.device("cuda")

# Create model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
    {
        "role": "system",
        "content": "You are a search bot, answer on user text queries."
    },
    {
        "role": "user",
        "content": prompt
    }
]

chat_prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

inputs = tokenizer(chat_prompt, return_tensors="pt").to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

# Strip the prompt tokens and decode only the generated answer
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]

# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
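
Generation behaves like standard `transformers`. As a hedged variation (assuming the `elastic_models` wrapper preserves the standard `generate` API, as the snippet above suggests), you can stream tokens to stdout as they are produced using `TextStreamer`:

```python
# Optional: stream tokens as they are generated.
# Reuses `model`, `tokenizer` and `inputs` from the snippet above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.inference_mode():
    # Tokens are printed incrementally instead of decoded at the end
    model.generate(**inputs, max_new_tokens=300, streamer=streamer)
```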

__System requirements:__
* GPUs: H100, L40S
* CPU: AMD, Intel
* Python: 3.10-3.12


To work with our models, just run these commands in your terminal:

```shell
pip install thestage
pip install elastic_models[nvidia]\
 --index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
 --extra-index-url https://pypi.nvidia.com\
 --extra-index-url https://pypi.org/simple

pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```

Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token from your profile page. Set up the API token as follows:

```shell
thestage config set --api-token <YOUR_API_TOKEN>
```

Congrats, now you can use accelerated models!

----

## Benchmarks

Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers, using the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
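
For intuition, here is a minimal sketch of what plain W8A8 int8 quantization of a linear layer does. This is an illustrative reference implementation, not TheStage AI's actual kernel: weights and activations are mapped to int8 with per-tensor symmetric scales, the matmul is accumulated, and the result is rescaled back to floating point.

```python
import torch

def quantize_sym_int8(x: torch.Tensor):
    """Symmetric per-tensor int8 quantization: returns int8 tensor and scale."""
    scale = x.abs().max() / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

def w8a8_linear(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Emulated W8A8 linear: quantize weight and activation, matmul, rescale."""
    qw, sw = quantize_sym_int8(weight)
    qx, sx = quantize_sym_int8(x)
    # Real kernels accumulate in int32; float32 is exact here and portable.
    acc = qx.float() @ qw.float().T
    return acc * (sx * sw)

# Quantization error grows on "sensitive" layers with outlier values;
# ANNA's S/M/L tiers differ in how aggressively such layers are quantized.
x = torch.randn(4, 512)
w = torch.randn(256, 512)
err = (w8a8_linear(x, w) - x @ w.T).abs().mean()
print(f"mean abs error: {err:.4f}")
```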

### Quality benchmarks

| Metric/Model  | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| arc_challenge | 49.10 | 50.10 | 53.20 | 52.60 | 52.60 | 41.70 |
| mmlu          | 71.70 | 73.00 | 74.10 | 73.50 | 73.50 | 64.60 |
| piqa          | 77.00 | 78.20 | 78.80 | 79.50 | 79.50 | 67.10 |
| winogrande    | 66.20 | 69.10 | 71.50 | 70.60 | 70.60 | 53.10 |







* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows the model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows the model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school-level multiple-choice questions requiring reasoning. Shows the model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence-completion tasks. Shows the model's capability to understand context and resolve ambiguity.
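
These tasks are all available in EleutherAI's lm-evaluation-harness. As a hypothetical reproduction sketch (the exact harness version, few-shot settings, and prompts behind the table above are not documented here, so treat them as assumptions):

```python
# pip install lm_eval
import lm_eval

# Evaluates the original model; swap in an elastic checkpoint to compare.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Qwen/Qwen2.5-7B-Instruct,dtype=bfloat16",
    tasks=["arc_challenge", "mmlu", "piqa", "winogrande"],
    batch_size=8,
)
print(results["results"])
```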



### Latency benchmarks

__100 input / 300 output tokens; throughput in tok/s:__

| GPU/Model | S   | M   | L   | XL  | Original | W8A8, int8 |
|-----------|-----|-----|-----|-----|----------|------------|
| H100      | 201 | 173 | 162 | 135 | 62       | 201        |
| L40S      | 76  | 67  | 61  | 47  | 43       | 78         |
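
To sanity-check these numbers on your own hardware, here is a minimal measurement sketch (100 prompt tokens, 300 generated tokens; the exact benchmarking harness used above is not documented, so details such as warmup count are assumptions):

```python
import time
import torch

# Reuses `model`, `tokenizer` and `device` from the Inference section.
input_ids = torch.randint(0, tokenizer.vocab_size, (1, 100), device=device)

with torch.inference_mode():
    # Warmup run to exclude one-time allocation overhead
    model.generate(input_ids, max_new_tokens=300, min_new_tokens=300, do_sample=False)
    torch.cuda.synchronize()
    start = time.perf_counter()
    model.generate(input_ids, max_new_tokens=300, min_new_tokens=300, do_sample=False)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{300 / elapsed:.1f} output tok/s")
```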







## Links



* __Platform__: [app.thestage.ai](https://app.thestage.ai)

* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)

<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->

* __Contact email__: [email protected]