Model Details

  • Model Type: PersonalityClassifier is fine-tuned from google/flan-t5-xl on human annotation data for personality classification.
  • Model Date: PersonalityClassifier was trained in Jan 2024.
  • Paper or resources for more information: https://arxiv.org/abs/2504.06868

Requirements

  • torch==2.1.0
  • transformers==4.29.0
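
Both pinned versions can be installed with pip, e.g.:

pip install torch==2.1.0 transformers==4.29.0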

How to use the model

import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer

# Set device to CUDA if available, otherwise use CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load model and tokenizer
model_name = "mirlab/PersonalityClassifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
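# Note (assumption): flan-t5-xl has roughly 3B parameters, so in float32 the
# weights alone take on the order of 11 GB; loading assumes a GPU (or CPU RAM)
# with enough memory.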

# Define model inference function
def modelGenerate(input_text, lm, tokenizer):
    # Tokenize the input (a string or a list of strings) and move the tensors to the device
    inputs = tokenizer(input_text, truncation=True, padding=True, return_tensors='pt').to(device)

    # Generate output tokens, passing the attention mask along with the input ids
    model_output = lm.generate(**inputs)

    # Decode the generated tokens into text
    model_answer = tokenizer.batch_decode(model_output, skip_special_tokens=True)

    return model_answer

# Example input text
# Format: "[Valence] Statement: [Your Statement]. Trait: [Target Trait]"
# Target Trait is one of ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism", "Machiavellianism", "Narcissism", "Psychopathy"].
# Valence indicates positive (+) or negative (-) alignment with the trait.

input_texts = "[Valence] Statement: I am outgoing. Trait: Extraversion"

# Generate output using the model and print
output_texts = modelGenerate(input_texts, model, tokenizer)
print(output_texts)
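
Because the tokenizer call pads its inputs, the same helper can also take a list of prompts. Below is a minimal batched sketch that scores one statement against several traits; it keeps the "[Valence]" placeholder from the example above, and the exact valence token and output labels depend on the fine-tuning data:

# Sketch: query several traits for one statement in a single padded batch.
traits = ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism"]
prompts = [f"[Valence] Statement: I am outgoing. Trait: {trait}" for trait in traits]

for trait, answer in zip(traits, modelGenerate(prompts, model, tokenizer)):
    print(f"{trait}: {answer}")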