# Fine-tuned Zero-Shot Classification Model

## Model Details
- Base Model: MoritzLaurer/deberta-v3-base-zeroshot-v2.0-c
- Training Data: Synthetic data created for natural language inference tasks
- Fine-tuning Method: SmartShot approach with NLI framing (see the scoring sketch below)
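Under the NLI framing, each candidate label is turned into a hypothesis and scored against the input text as a premise; the entailment score is then used as the label score. The sketch below illustrates that step against the base model. The hypothesis wording and the label-index lookup are illustrative assumptions, not details documented on this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# NLI framing: premise = input text, hypothesis = templated candidate label.
model_name = "MoritzLaurer/deberta-v3-base-zeroshot-v2.0-c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "The new GPU delivers twice the throughput of its predecessor."
hypothesis = "This text is about hardware."  # assumed template: "This text is about {label}."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The entailment probability serves as the score for this candidate label.
# Check model.config.id2label for the actual label order of the checkpoint.
probs = torch.softmax(logits, dim=-1)
print(model.config.id2label, probs)
```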
## Usage
```python
from transformers import pipeline

# Load the fine-tuned model as a zero-shot classification pipeline.
classifier = pipeline("zero-shot-classification", model="gincioks/smartshot-zeroshot-finetuned-v0.2.0")

# Classify a text against a set of candidate labels.
text = "Hello world."
labels = ["Hello", "World"]
results = classifier(text, labels)
print(results)
```
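The standard options of the zero-shot-classification pipeline also apply; for instance, a custom `hypothesis_template` or independent multi-label scoring can be passed per call. A minimal sketch (the example text, labels, and template wording are illustrative, not this model's documented defaults):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="gincioks/smartshot-zeroshot-finetuned-v0.2.0")

# Score each label independently with a custom hypothesis template.
results = classifier(
    "The battery drains within two hours of normal use.",
    ["battery life", "screen quality", "price"],
    hypothesis_template="This review is about {}.",  # illustrative template
    multi_label=True,  # each label gets its own entailment-vs-not score
)
print(results)
```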
## Training Procedure
This model was fine-tuned with the following parameters (see the sketch after this list):
- Learning rate: 2e-05
- Epochs: 2
- Batch size: 16
- Warmup ratio: 0.06
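For reference, these values map onto a Hugging Face TrainingArguments configuration roughly as sketched below; the output directory and anything beyond the listed hyperparameters are assumptions, not taken from this card.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters as a TrainingArguments configuration.
# Dataset preparation, the Trainer itself, and all other settings are assumptions.
training_args = TrainingArguments(
    output_dir="smartshot-zeroshot-finetuned",  # hypothetical output path
    learning_rate=2e-5,
    num_train_epochs=2,
    per_device_train_batch_size=16,
    warmup_ratio=0.06,
)
```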