Model Card for Tomasal/OLMoE-1B-7B-0125-Instruct-enron

This model is part of the master's thesis Assessing Privacy vs. Efficiency Trade-offs in Open-Source Large Language Models (spring 2025), which investigates privacy issues in open-source LLMs.

Model Details

This model is a fine-tuned version of allenai/OLMoE-1B-7B-0125-Instruct, adapted with LoRA (Low-Rank Adaptation). It was trained for three epochs on the Enron email dataset LLM-PBE/enron-email. The goal of the fine-tuning is to explore how models memorize and potentially expose sensitive content when trained on sensitive information.

Training Procedure

The model was fine-tuned using LoRA with the following configuration (a sketch of a corresponding setup follows the list):

  • LoRA rank: 8
  • LoRA alpha: 32
  • LoRA dropout: 0.05
  • LoRA bias: none
  • Optimizer: AdamW with learning rate 1e-4
  • Precision: bfloat16 (merged model saved in float32)
  • Epochs: 3
  • Batch size: 32
  • Hardware: NVIDIA GeForce RTX 5090
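
The card does not include the training script; the snippet below is only a minimal sketch of how the hyperparameters above map onto Hugging Face peft, assuming peft's LoraConfig was used. The target_modules shown are an assumption and are not stated in the card.

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model in bfloat16, matching the precision listed above
base = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMoE-1B-7B-0125-Instruct", torch_dtype=torch.bfloat16
)

# LoRA hyperparameters from the list above; target_modules is an assumption
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable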

How to Use

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("Tomasal/OLMoE-1B-7B-0125-Instruct-enron", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("Tomasal/OLMoE-1B-7B-0125-Instruct-enron")

# Build a chat-formatted prompt and generate a response
messages = [{"role": "user", "content": "Can you write a professional email confirming a meeting with the legal team on Monday at 10am?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
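
The repository ships the merged weights, so no adapter needs to be attached at load time. For reference, a merged checkpoint like this one can be produced from a LoRA adapter roughly as follows; this is a sketch assuming peft, and the adapter path is hypothetical.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMoE-1B-7B-0125-Instruct", torch_dtype=torch.bfloat16
)
# "path/to/lora-adapter" is a hypothetical local adapter directory
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = peft_model.merge_and_unload()  # fold the LoRA weights into the base model
merged = merged.to(torch.float32)       # the card notes the merged model is saved in float32
merged.save_pretrained("OLMoE-1B-7B-0125-Instruct-enron-merged")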