dewdev committed (verified) · Commit 79a93f3 · 1 Parent(s): 4886664

Upload 11 files

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ d84e1b4a-ef6a-11ef-be05-a8a159eaf1f4 filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,143 @@
- ---
- license: apache-2.0
- ---
+ ---
+ tags:
+ - question-answering
+ - bert
+ license: apache-2.0
+ datasets:
+ - squad
+ language:
+ - en
+ model-index:
+ - name: dynamic-tinybert
+   results:
+   - task:
+       type: question-answering
+       name: question-answering
+     metrics:
+     - type: f1
+       value: 88.71
+
+ ---
+
+ ## Model Details: Dynamic-TinyBERT: Boost TinyBERT's Inference Efficiency by Dynamic Sequence Length
+
+ Dynamic-TinyBERT has been fine-tuned for the NLP task of question answering, trained on the SQuAD 1.1 dataset. [Guskin et al. (2021)](https://neurips2021-nlp.github.io/papers/16/CameraReady/Dynamic_TinyBERT_NLSP2021_camera_ready.pdf) note:
+
+ > Dynamic-TinyBERT is a TinyBERT model that utilizes sequence-length reduction and Hyperparameter Optimization for enhanced inference efficiency per any computational budget. Dynamic-TinyBERT is trained only once, performing on-par with BERT and achieving an accuracy-speedup trade-off superior to any other efficient approaches (up to 3.3x with <1% loss-drop).
+
+
+ | Model Detail | Description |
+ | ----------- | ----------- |
+ | Model Authors - Company | Intel |
+ | Model Card Authors | Intel in collaboration with Hugging Face |
+ | Date | November 22, 2021 |
+ | Version | 1 |
+ | Type | NLP - Question Answering |
+ | Architecture | "For our Dynamic-TinyBERT model we use the architecture of TinyBERT6L: a small BERT model with 6 layers, a hidden size of 768, a feed forward size of 3072 and 12 heads." [Guskin et al. (2021)](https://gyuwankim.github.io/publication/dynamic-tinybert/poster.pdf) |
+ | Paper or Other Resources | [Paper](https://neurips2021-nlp.github.io/papers/16/CameraReady/Dynamic_TinyBERT_NLSP2021_camera_ready.pdf); [Poster](https://gyuwankim.github.io/publication/dynamic-tinybert/poster.pdf); [GitHub Repo](https://github.com/IntelLabs/Model-Compression-Research-Package) |
+ | License | Apache 2.0 |
+ | Questions or Comments | [Community Tab](https://huggingface.co/Intel/dynamic_tinybert/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ) |
+
+ | Intended Use | Description |
+ | ----------- | ----------- |
+ | Primary intended uses | You can use the model for the NLP task of question answering: given a corpus of text, you can ask it a question about that text, and it will find the answer in the text. |
+ | Primary intended users | Anyone doing question answering |
+ | Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people. |
+
+ ### How to use
+
+ Here is how to load the model and run extractive question answering in Python:
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForQuestionAnswering
+
+ tokenizer = AutoTokenizer.from_pretrained("Intel/dynamic_tinybert")
+ model = AutoModelForQuestionAnswering.from_pretrained("Intel/dynamic_tinybert")
+
+ context = "remember the number 123456, I'll ask you later."
+ question = "What is the number I told you?"
+
+ # Tokenize the question and context together
+ tokens = tokenizer.encode_plus(question, context, return_tensors="pt", truncation=True)
+
+ # Get the input IDs and attention mask
+ input_ids = tokens["input_ids"]
+ attention_mask = tokens["attention_mask"]
+
+ # Run the model to get start/end logits for the answer span
+ outputs = model(input_ids, attention_mask=attention_mask)
+ start_scores = outputs.start_logits
+ end_scores = outputs.end_logits
+
+ # Find the most likely start and end positions of the answer
+ answer_start = torch.argmax(start_scores)
+ answer_end = torch.argmax(end_scores) + 1
+ answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[0][answer_start:answer_end]))
+
+ # Print the answer
+ print("Answer:", answer)
+ ```
+ </details>
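+
+ The same checkpoint can also be used through the high-level `pipeline` API. The snippet below is a minimal sketch of that route (it assumes the same `Intel/dynamic_tinybert` checkpoint as above) and is offered as an illustration rather than the canonical usage:
+
+ ```python
+ from transformers import pipeline
+
+ # Minimal sketch: let the question-answering pipeline handle tokenization
+ # and answer-span decoding.
+ qa = pipeline("question-answering", model="Intel/dynamic_tinybert", tokenizer="Intel/dynamic_tinybert")
+
+ result = qa(
+     question="What is the number I told you?",
+     context="remember the number 123456, I'll ask you later.",
+ )
+ print(result["answer"], result["score"])
+ ```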
+
+
+ | Factors | Description |
+ | ----------- | ----------- |
+ | Groups | Many Wikipedia articles with question and answer labels are contained in the training data |
+ | Instrumentation | - |
+ | Environment | Training was completed on a Titan GPU. |
+ | Card Prompts | Model deployment on alternate hardware and software will change model performance |
+
+ | Metrics | Description |
+ | ----------- | ----------- |
+ | Model performance measures | F1 |
+ | Decision thresholds | - |
+ | Approaches to uncertainty and variability | - |
+
+ | Training and Evaluation Data | Description |
+ | ----------- | ----------- |
+ | Datasets | SQuAD1.1: "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable." (https://huggingface.co/datasets/squad) |
+ | Motivation | To build an efficient and accurate model for the question answering task. |
+ | Preprocessing | "We start with a pre-trained general-TinyBERT student, which was trained to learn the general knowledge of BERT using the general-distillation method presented by TinyBERT. We perform transformer distillation from a fine-tuned BERT teacher to the student, following the same training steps used in the original TinyBERT: (1) intermediate-layer distillation (ID) — learning the knowledge residing in the hidden states and attentions matrices, and (2) prediction-layer distillation (PD) — fitting the predictions of the teacher." ([Guskin et al., 2021](https://neurips2021-nlp.github.io/papers/16/CameraReady/Dynamic_TinyBERT_NLSP2021_camera_ready.pdf)) See the illustrative sketch of these two losses directly below the table. |
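+
+ For illustration only (this is not the training code used for Dynamic-TinyBERT), the two distillation objectives quoted above can be sketched as follows; the tensors, the teacher-student layer mapping, and the projection `proj` are hypothetical placeholders:
+
+ ```python
+ import torch.nn.functional as F
+
+ def intermediate_layer_distillation(student_hidden, teacher_hidden,
+                                     student_attn, teacher_attn, proj):
+     # ID: match the student's hidden states (through a projection) and
+     # attention matrices to the teacher's, for one mapped layer pair.
+     return (F.mse_loss(proj(student_hidden), teacher_hidden)
+             + F.mse_loss(student_attn, teacher_attn))
+
+ def prediction_layer_distillation(student_logits, teacher_logits, temperature=1.0):
+     # PD: fit the student's (start/end) predictions to the teacher's soft targets.
+     t = temperature
+     return F.kl_div(F.log_softmax(student_logits / t, dim=-1),
+                     F.softmax(teacher_logits / t, dim=-1),
+                     reduction="batchmean") * (t * t)
+ ```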
+
+ Model Performance Analysis:
+
+ | Model | Max F1 (full model) | Best Speedup within BERT-1% |
+ |------------------|---------------------|-----------------------------|
+ | Dynamic-TinyBERT | 88.71 | 3.3x |
+
+ | Ethical Considerations | Description |
+ | ----------- | ----------- |
+ | Data | The training data come from Wikipedia articles |
+ | Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
+ | Mitigations | No additional risk mitigation strategies were considered during model development. |
+ | Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved in using the model remains unknown. |
+ | Use cases | - |
+
+
+ | Caveats and Recommendations |
+ | ----------- |
+ | Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. There are no additional caveats or recommendations for this model. |
+
+
+ ### BibTeX entry and citation info
+ ```bibtex
+ @misc{https://doi.org/10.48550/arxiv.2111.09645,
+   doi = {10.48550/ARXIV.2111.09645},
+   url = {https://arxiv.org/abs/2111.09645},
+   author = {Guskin, Shira and Wasserblat, Moshe and Ding, Ke and Kim, Gyuwan},
+   keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
+   title = {Dynamic-TinyBERT: Boost TinyBERT's Inference Efficiency by Dynamic Sequence Length},
+   publisher = {arXiv},
+   year = {2021},
+ }
+ ```
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "/store/nosnap/results/inter6_bert_24.8.13.50/checkpoint-last",
+   "architectures": [
+     "TinyBertForQuestionAnswering"
+   ],
+   "attention_head_size": 26,
+   "attention_probs_dropout_prob": 0.1,
+   "cell": {},
+   "gradient_checkpointing": false,
+   "hidden_act": "relu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "pre_trained": "",
+   "structure": [],
+   "transformers_version": "4.7.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
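The configuration above corresponds to the TinyBERT6L architecture described in the model card (6 layers, hidden size 768, intermediate size 3072, 12 attention heads). As a quick sanity check, the published config can be loaded and inspected; this is a small sketch, assuming the `Intel/dynamic_tinybert` checkpoint referenced in the README:

```python
from transformers import AutoConfig

# Sketch: load the published configuration and print the key architecture fields.
config = AutoConfig.from_pretrained("Intel/dynamic_tinybert")
print(config.num_hidden_layers, config.hidden_size,
      config.intermediate_size, config.num_attention_heads)
# Expected, per the config.json above: 6 768 3072 12
```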
d84e1b4a-ef6a-11ef-be05-a8a159eaf1f4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b49586a9c1b0b622ab68f429e58309cb7c94a253ae952a32c097d24b9a021b48
+ size 265463808
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:559bc6e27704c82f08642c84a235a152983d619e9ba7e63fc2d6325c914f6e43
+ size 267855035
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
sym_shape_infer_temp.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b807c8f575fd06776726d5c485c7907e94f6adf2333fb1c80865ca907604ff33
+ size 170716
to_onnx.py ADDED
@@ -0,0 +1,188 @@
+ import torch
+ from transformers import AutoModelForQuestionAnswering
+ from transformers import AutoTokenizer, BertConfig
+ import onnx
+ from onnxruntime.quantization import quantize_dynamic, QuantType
+ from onnxruntime.quantization import shape_inference
+ import os
+ import logging
+ from typing import Optional, Dict, Any
+ import subprocess  # Import the subprocess module
+
+ class ONNXModelConverter:
+     def __init__(self, model_name: str, output_dir: str):
+         self.model_name = model_name
+         self.output_dir = output_dir
+         self.setup_logging()
+
+         os.makedirs(output_dir, exist_ok=True)
+
+         self.logger.info(f"Loading tokenizer {model_name}...")
+         self.tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+
+         self.logger.info(f"Loading model {model_name}...")
+         self.model = AutoModelForQuestionAnswering.from_pretrained(
+             model_name,
+             trust_remote_code=True,
+             torch_dtype=torch.float32
+         )
+         self.model.eval()
+
+     def setup_logging(self):
+         self.logger = logging.getLogger(__name__)
+         self.logger.setLevel(logging.INFO)
+         handler = logging.StreamHandler()
+         formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
+         handler.setFormatter(formatter)
+         self.logger.addHandler(handler)
+
+     def prepare_dummy_inputs(self):
+         dummy_input = self.tokenizer(
+             "Hello, how are you?",
+             return_tensors="pt",
+             padding=True,
+             truncation=True,
+             max_length=128
+         )
+         return {
+             'input_ids': dummy_input['input_ids'],
+             'attention_mask': dummy_input['attention_mask'],
+             'token_type_ids': dummy_input['token_type_ids']
+         }
+
+     def export_to_onnx(self):
+         output_path = os.path.join(self.output_dir, "model.onnx")
+         inputs = self.prepare_dummy_inputs()
+
+         dynamic_axes = {
+             'input_ids': {0: 'batch_size', 1: 'sequence_length'},
+             'attention_mask': {0: 'batch_size', 1: 'sequence_length'},
+             'token_type_ids': {0: 'batch_size', 1: 'sequence_length'},
+             'start_logits': {0: 'batch_size', 1: 'sequence_length'},
+             'end_logits': {0: 'batch_size', 1: 'sequence_length'},
+         }
+
+         class ModelWrapper(torch.nn.Module):
+             def __init__(self, model):
+                 super().__init__()
+                 self.model = model
+
+             def forward(self, input_ids, attention_mask, token_type_ids):
+                 outputs = self.model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
+                 return outputs.start_logits, outputs.end_logits
+
+         wrapped_model = ModelWrapper(self.model)
+
+         try:
+             torch.onnx.export(
+                 wrapped_model,
+                 (inputs['input_ids'], inputs['attention_mask'], inputs['token_type_ids']),
+                 output_path,
+                 export_params=True,
+                 opset_version=14,  # Or a suitable version
+                 do_constant_folding=True,
+                 input_names=['input_ids', 'attention_mask', 'token_type_ids'],
+                 output_names=['start_logits', 'end_logits'],
+                 dynamic_axes=dynamic_axes,
+                 verbose=False
+             )
+             self.logger.info(f"Model exported to {output_path}")
+             return output_path
+         except Exception as e:
+             self.logger.error(f"ONNX export failed: {str(e)}")
+             raise
+
+     def verify_model(self, model_path: str):
+         try:
+             onnx_model = onnx.load(model_path)
+             onnx.checker.check_model(onnx_model)
+             self.logger.info("ONNX model verification successful")
+             return True
+         except Exception as e:
+             self.logger.error(f"Model verification failed: {str(e)}")
+             return False
+
+     def preprocess_model(self, model_path: str) -> str:
+         preprocessed_path = os.path.join(self.output_dir, "model-infer.onnx")
+         try:
+             command = [
+                 "python", "-m", "onnxruntime.quantization.preprocess",
+                 "--input", model_path,
+                 "--output", preprocessed_path
+             ]
+             result = subprocess.run(command, check=True, capture_output=True, text=True)
+             if result.returncode == 0:
+                 self.logger.info(f"Model preprocessing successful. Output saved to {preprocessed_path}")
+                 return preprocessed_path
+             else:
+                 raise subprocess.CalledProcessError(result.returncode, command, result.stdout, result.stderr)
+         except subprocess.CalledProcessError as e:
+             self.logger.error(f"Preprocessing failed: {e.stderr}")
+             raise
+         except Exception as e:
+             self.logger.error(f"Preprocessing failed: {str(e)}")
+             raise
+
+     def quantize_model(self, model_path: str):
+         weight_types = {'int4': QuantType.QInt4, 'int8': QuantType.QInt8, 'uint4': QuantType.QUInt4, 'uint8': QuantType.QUInt8, 'uint16': QuantType.QUInt16, 'int16': QuantType.QInt16}
+         all_quantized_paths = []
+         for weight_type in weight_types.keys():
+             quantized_path = os.path.join(self.output_dir, "model_" + weight_type + ".onnx")
+
+             try:
+                 quantize_dynamic(
+                     model_path,
+                     quantized_path,
+                     weight_type=weight_types[weight_type]
+                 )
+                 self.logger.info(f"Model quantized ({weight_type}) and saved to {quantized_path}")
+                 all_quantized_paths.append(quantized_path)
+             except Exception as e:
+                 self.logger.error(f"Quantization ({weight_type}) failed: {str(e)}")
+                 raise
+
+         return all_quantized_paths
+
+
+     def convert(self):
+         try:
+             onnx_path = self.export_to_onnx()
+
+             if self.verify_model(onnx_path):
+                 # An optional ONNX Runtime preprocessing step could be run here
+                 # before quantization (currently disabled):
+                 # preprocessed_path = self.preprocess_model(onnx_path)
+
+                 # Quantize the exported model directly
+                 quantized_paths = self.quantize_model(onnx_path)
+
+                 tokenizer_path = os.path.join(self.output_dir, "tokenizer")
+                 self.tokenizer.save_pretrained(tokenizer_path)
+                 self.logger.info(f"Tokenizer saved to {tokenizer_path}")
+
+                 return {
+                     'onnx_model': onnx_path,
+                     'quantized_models': quantized_paths,  # List of quantized model paths
+                     'tokenizer': tokenizer_path
+                 }
+             else:
+                 raise Exception("Model verification failed")
+
+         except Exception as e:
+             self.logger.error(f"Conversion process failed: {str(e)}")
+             raise
+
+ if __name__ == "__main__":
+     MODEL_NAME = "Intel/dynamic_tinybert"  # Or any other suitable model
+     OUTPUT_DIR = "onnx"
+
+     try:
+         converter = ONNXModelConverter(MODEL_NAME, OUTPUT_DIR)
+         results = converter.convert()
+
+         print("\nConversion completed successfully!")
+         print(f"ONNX model path: {results['onnx_model']}")
+         print(f"Quantized model paths: {results['quantized_models']}")  # Print the list
+         print(f"Tokenizer path: {results['tokenizer']}")
+
+     except Exception as e:
+         print(f"Conversion failed: {str(e)}")
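Once to_onnx.py has produced the exported model, inference can be run directly with ONNX Runtime. The snippet below is a minimal sketch, assuming the default output locations used by the script above (onnx/model.onnx and onnx/tokenizer); it is not one of the uploaded files.

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Sketch: run the exported ONNX model (paths assume the defaults in to_onnx.py).
tokenizer = AutoTokenizer.from_pretrained("onnx/tokenizer")
session = ort.InferenceSession("onnx/model.onnx")

question = "What is the number I told you?"
context = "remember the number 123456, I'll ask you later."
enc = tokenizer(question, context, return_tensors="np")

start_logits, end_logits = session.run(
    ["start_logits", "end_logits"],
    {
        "input_ids": enc["input_ids"].astype(np.int64),
        "attention_mask": enc["attention_mask"].astype(np.int64),
        "token_type_ids": enc["token_type_ids"].astype(np.int64),
    },
)

# Decode the highest-scoring answer span
start = int(np.argmax(start_logits[0]))
end = int(np.argmax(end_logits[0])) + 1
print(tokenizer.decode(enc["input_ids"][0][start:end]))
```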
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "special_tokens_map_file": null, "name_or_path": "/store/nosnap/results/inter6_bert_24.8.13.50/checkpoint-last", "do_basic_tokenize": true, "never_split": null}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45211a37428e561ecc29dc69804a75bca37187c651ccb38f8fa237eefa978c1e
+ size 2203
vocab.txt ADDED
The diff for this file is too large to render. See raw diff