This model was fine-tuned on SmolInstruct's property prediction instruction dataset and HoneyBee's instruction dataset.

[LoRA Config Parameters]

```yaml
train: true
fine_tune_type: lora
seed: 0
num_layers: 16
batch_size: 2
iters: 1000
val_batches: 25
learning_rate: 1e-5
steps_per_report: 10
steps_per_eval: 200
resume_adapter_file: null
adapter_path: "adapters"
save_every: 100
test: false
test_batches: 100
max_seq_length: 2048
grad_checkpoint: false
lora_parameters:
  keys: ["self_attn.q_proj", "self_attn.v_proj"]
  rank: 32
  alpha: 64
  dropout: 0.0
  scale: 20.0
```
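
The parameters above follow the YAML schema of mlx-lm's LoRA trainer. As a minimal sketch (assuming the mlx-lm package, access to the base Mistral-7B-Instruct-v0.3 weights, and adapters saved under the configured `adapters` directory; the prompt is illustrative), the resulting adapters can be loaded for inference like this:

```python
# Minimal sketch, assuming mlx-lm's Python API is available.
from mlx_lm import load, generate

# Load the base model and apply the LoRA adapters produced by the
# training config above (adapter_path matches the configured "adapters").
model, tokenizer = load(
    "mistralai/Mistral-7B-Instruct-v0.3",
    adapter_path="adapters",
)

prompt = "Predict the water solubility of aspirin."
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```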

[Model Details]

- Format: GGUF
- Model size: 7.25B params
- Architecture: llama
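
Because the weights ship as a GGUF file, any llama.cpp-compatible runtime can load them. A minimal sketch using llama-cpp-python (the local GGUF filename below is hypothetical; `n_ctx` mirrors the `max_seq_length` used during finetuning):

```python
# Minimal sketch, assuming llama-cpp-python; the GGUF filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Instruct-v0.3-q-Chemistry-v0.2.gguf",
    n_ctx=2048,  # mirrors the max_seq_length used during finetuning
)
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Estimate the LogP of benzene."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```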

Datasets used to train jarvisloh/Mistral-7B-Instruct-v0.3-q-Chemistry-gguf-v0.2: the SmolInstruct property prediction and HoneyBee instruction datasets noted above.