Llama-Primus-Nemotron-70B-Base

Introduction

The Llama-Primus-Nemotron series builds on nvidia/Llama-3.1-Nemotron-70B-Instruct through continued training. Following the methodology described in the Primus paper, we first performed continued pre-training on large-scale cybersecurity corpora (over 10B tokens) to obtain Llama-Primus-Nemotron-70B-Base. We then performed supervised fine-tuning and applied DELLA merging with the original Nemotron model, resulting in Llama-Primus-Nemotron-70B-Instruct.
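For readers unfamiliar with DELLA merging, the step above can be sketched as a mergekit-style configuration. This is a hypothetical illustration only: the checkpoint path, `density`, and `weight` values are assumptions, not the actual recipe used for this model.

```yaml
# Hypothetical mergekit-style DELLA merge sketch (illustrative values only).
merge_method: della
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct
models:
  - model: path/to/primus-nemotron-sft   # SFT checkpoint (hypothetical path)
    parameters:
      density: 0.5   # fraction of delta parameters retained
      weight: 0.5    # merge weight for the SFT deltas
dtype: bfloat16
```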

Llama-Primus-Nemotron-70B-Base achieves an 11.19% improvement in aggregate score over Llama-3.1-Nemotron-70B-Instruct across seven public cybersecurity benchmarks.

Benchmark Results

Cybersecurity

| Metric (5-shot, w/o chat template) | Llama-3.1-Nemotron-70B-Instruct | Llama-Primus-Nemotron-70B-Base |
|---|---|---|
| CTI-Bench (MCQ) | 0.6900 | 0.7148 |
| CTI-Bench (CVE → CWE) | 0.6590 | 0.7410 |
| CTI-Bench (CVSS, lower is better) | 1.1893 | 1.0281 |
| CTI-Bench (ATE) | 0.3905 | 0.4540 |
| CyberMetric (500) | 0.9380 | 0.9280 |
| SecEval | 0.7177 | 0.7208 |
| CISSP (Exam Questions) | 0.8527 | 0.8703 |
| **Aggregate** | 3.0586 | 3.4008 (↑11.19% 🔥) |

CTI-Bench (CVSS) is scored using mean absolute deviation (lower is better), CTI-Bench (ATE) uses F1 score, and the remaining benchmarks use accuracy. The aggregate score (Agg.) is the sum of all benchmark scores, with CTI-Bench (CVSS) negated.
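The aggregation rule above can be reproduced from the table values with a few lines of Python (the dictionary keys are just labels for this sketch):

```python
# Recompute the aggregate score: sum all benchmark scores,
# negating CTI-Bench (CVSS) since lower is better there.

def aggregate(scores: dict) -> float:
    return sum(-v if k == "CTI-Bench (CVSS)" else v for k, v in scores.items())

primus = {
    "CTI-Bench (MCQ)": 0.7148, "CTI-Bench (CVE->CWE)": 0.7410,
    "CTI-Bench (CVSS)": 1.0281, "CTI-Bench (ATE)": 0.4540,
    "CyberMetric (500)": 0.9280, "SecEval": 0.7208, "CISSP": 0.8703,
}
nemotron = {
    "CTI-Bench (MCQ)": 0.6900, "CTI-Bench (CVE->CWE)": 0.6590,
    "CTI-Bench (CVSS)": 1.1893, "CTI-Bench (ATE)": 0.3905,
    "CyberMetric (500)": 0.9380, "SecEval": 0.7177, "CISSP": 0.8527,
}

print(round(aggregate(primus), 4))    # 3.4008
print(round(aggregate(nemotron), 4))  # 3.0586
improvement = (aggregate(primus) - aggregate(nemotron)) / aggregate(nemotron)
print(f"{improvement:.2%}")           # 11.19%
```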

Training Datasets

Pre-training:

  • Primus-Seed-V2 (0.457B): An enhanced version of Primus-Seed, enriched with blogs, news, books, websites, Wikipedia, MITRE and Trend Micro knowledge.
  • Primus-FineWeb (2.57B): Cybersecurity text filtered from FineWeb-edu-score-2.
  • Primus-Nemotron-CC (7.6B): Cybersecurity text filtered from Nemotron-CC.
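The filtering idea behind Primus-FineWeb and Primus-Nemotron-CC is to score each document for cybersecurity relevance and keep only those above a threshold. The sketch below uses a keyword heuristic as a hypothetical stand-in for the actual relevance classifier; the term list and threshold are illustrative assumptions.

```python
# Toy sketch of cybersecurity corpus filtering. The real pipeline
# scores documents with a trained classifier; this keyword-density
# scorer is only a hypothetical stand-in.

CYBER_TERMS = {"malware", "cve", "exploit", "ransomware", "phishing", "vulnerability"}

def cyber_score(text: str) -> float:
    """Fraction of tokens that are cybersecurity keywords."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,()") in CYBER_TERMS for t in tokens) / len(tokens)

def filter_corpus(docs, threshold=0.05):
    """Keep documents whose relevance score meets the threshold."""
    return [d for d in docs if cyber_score(d) >= threshold]

docs = [
    "A new ransomware strain exploits an unpatched vulnerability.",
    "The recipe calls for two cups of flour and one egg.",
]
kept = filter_corpus(docs)  # only the security-related document survives
```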

Note: The Primus-Seed-V2 and Primus-Nemotron-CC datasets are not yet open-sourced; their release is currently under discussion. Feel free to reach out if you are interested.

Disclaimer: No Trend Micro customer information is included.

About Primus

Primus is Trend Micro's pioneering family of lightweight, state-of-the-art open cybersecurity language models and datasets. Developed through our cutting-edge research initiatives and advanced technology, these resources share the innovative foundation that powers our enterprise-class Trend Cybertron solution. As an industry leader in cybersecurity, Trend Micro is proud to contribute these powerful, efficiency-optimized models and datasets to the community, while maintaining the excellence and reliability that define our global security standards.

Acknowledgments

We would like to thank NVIDIA for generously providing computing resources (Taipei-1), which enabled the training and development of this model.

License

This model is released under the MIT license; in addition, you must comply with the Llama 3.1 Community License Agreement.
