# SwedishBeagle-dare

SwedishBeagle-dare is a merge of the following models using LazyMergekit:

* [timpal0l/Mistral-7B-v0.1-flashback-v2](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)

## 🧩 Configuration

```yaml
models:
  - model: mlabonne/NeuralBeagle14-7B
    # No parameters necessary for base model
  - model: timpal0l/Mistral-7B-v0.1-flashback-v2
    parameters:
      density: 0.53
      weight: 0.3
  - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
    parameters:
      density: 0.53
      weight: 0.4
  - model: Nexusflow/Starling-LM-7B-beta
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
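
In the `dare_ties` method, each non-base model contributes its parameter delta from the base: `density` is the fraction of delta parameters kept (the rest are randomly dropped and the survivors rescaled), `weight` scales each model's contribution, and TIES-style sign election resolves conflicts between models. To reproduce the merge, one could save the YAML above and run mergekit's CLI; a minimal sketch, assuming mergekit is installed and the config is saved as `config.yaml` (paths are illustrative):

```bash
# Install mergekit, then apply the config above
pip install mergekit
mergekit-yaml config.yaml ./SwedishBeagle-dare --copy-tokenizer
```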

## 💻 Usage

```python
!pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "FredrikBL/SwedishBeagle-dare"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model on the available device(s) and generate
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
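
Because the merge folds in a Swedish-tuned model (the flashback variant), Swedish prompts work the same way. Below is a minimal sketch that loads the model directly with `AutoModelForCausalLM` instead of the pipeline; the Swedish prompt and generation settings are illustrative, mirroring the example above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "FredrikBL/SwedishBeagle-dare"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Swedish prompt ("What is a large language model?"), formatted with the chat template
messages = [{"role": "user", "content": "Vad är en stor språkmodell?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```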