NuGuard - LLM Prompt Safety Classifier

A machine learning model for detecting potentially harmful prompts.

Model Details

  • Detects malicious content
  • Uses text and feature-based classification
  • Scikit-learn 1.6.1 compatible
Downloads last month
0
Inference Providers NEW
This model isn't deployed by any Inference Provider. ๐Ÿ™‹ Ask for provider support