Improve model card with pipeline tag, library name, and links
#1
by nielsr (HF Staff), opened
README.md CHANGED
@@ -1,14 +1,16 @@
 ---
-license: mit
-language:
-- en
 base_model:
 - Qwen/Qwen2.5-Coder-7B-Instruct
+language:
+- en
+license: mit
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 ## Introduction
 
-
+This model is a fine-tuned version of Qwen/Qwen2.5-Coder-7B-Instruct for formal verification tasks, as presented in [From Informal to Formal -- Incorporating and Evaluating LLMs on Natural Language Requirements to Verifiable Formal Proofs](https://hf.co/papers/2501.16207). It's fine-tuned in five formal specification languages (Coq, Dafny, Lean4, ACSL, and TLA) on six formal-verification-related tasks:
 
 - **Requirement Analysis**: given requirements and description of the verification or modeling goals, decomposing the goal into detailed verification steps
 
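For reference, the resulting front matter after this change, reassembled from the hunk above:

```yaml
---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
license: mit
pipeline_tag: text-generation
library_name: transformers
---
```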
@@ -66,7 +68,8 @@ model = AutoModelForCausalLM.from_pretrained(
 )
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
-messages = [{"role": "user", "content": f"{instruct}\n{input_text}"}]
+messages = [{"role": "user", "content": f"{instruct}
+{input_text}"}]
 
 text = tokenizer.apply_chat_template(
     messages, tokenize=False, add_generation_prompt=True
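As a sanity check on the edited transformers snippet, here is a minimal end-to-end sketch. The repo id and the instruct/input_text prompts are placeholders (they are not given in this diff), and the two-line f-string from the hunk is written with an explicit \n, which builds the same string.

```python
# Hedged sketch of the transformers usage around the edited lines.
# "fm-universe/your-model-id" and both prompt strings are hypothetical
# placeholders; only the messages/apply_chat_template pattern is from the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "fm-universe/your-model-id"  # hypothetical repo id

model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

instruct = "Decompose the following requirement into verification steps."  # placeholder
input_text = "A method that returns the maximum of two integers."  # placeholder

# Equivalent to the two-line f-string introduced by this PR.
messages = [{"role": "user", "content": f"{instruct}\n{input_text}"}]

text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```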
@@ -119,7 +122,8 @@ llm = LLM(
 )
 
 # Prepare chat messages
-chat_message = [{"role": "user", "content": f"{instruct}\n{input_text}"}]
+chat_message = [{"role": "user", "content": f"{instruct}
+{input_text}"}]
 
 # Inference
 responses = llm.chat(chat_message, greed_sampling, use_tqdm=True)
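Likewise for the vLLM snippet: a minimal sketch assuming greed_sampling is a greedy SamplingParams (its definition falls outside this hunk) and using a placeholder repo id.

```python
# Hedged sketch of the vLLM path around the edited chat_message line.
# The repo id and prompts are placeholders; greed_sampling is assumed to be
# a greedy SamplingParams, since its definition is not shown in this hunk.
from vllm import LLM, SamplingParams

llm = LLM(model="fm-universe/your-model-id")  # hypothetical repo id

greed_sampling = SamplingParams(temperature=0, max_tokens=512)  # assumed definition

instruct = "Decompose the following requirement into verification steps."  # placeholder
input_text = "A method that returns the maximum of two integers."  # placeholder

# Prepare chat messages -- equivalent to the two-line f-string in the hunk
chat_message = [{"role": "user", "content": f"{instruct}\n{input_text}"}]

# Inference
responses = llm.chat(chat_message, greed_sampling, use_tqdm=True)
print(responses[0].outputs[0].text)
```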
@@ -140,3 +144,7 @@ print(responses[0].outputs[0].text)
 }
 
 ```
+
+Code: [https://github.com/fmbench/fmbench](https://github.com/fmbench/fmbench)
+
+Project Page: https://huggingface.co/fm-universe