---
library_name: transformers
license: apache-2.0
language:
- en
widget:
- text: 'You will be given a question and options. Select the right answer. QUESTION:
    If (G, .) is a group such that (ab)^-1 = a^-1b^-1, for all a, b in G, then G is
    a/an CHOICES: - A: commutative semi group - B: abelian group - C: non-abelian
    group - D: None of these ANSWER: [unused0] [MASK]'
tags:
- fill-mask
- masked-lm
- long-context
- classification
- modernbert
- mlx
pipeline_tag: fill-mask
inference: false
---

# mlx-community/answerdotai-ModernBERT-Large-Instruct-8bit

The model [mlx-community/answerdotai-ModernBERT-Large-Instruct-8bit](https://huggingface.co/mlx-community/answerdotai-ModernBERT-Large-Instruct-8bit) was converted to MLX format from [answerdotai/ModernBERT-Large-Instruct](https://huggingface.co/answerdotai/ModernBERT-Large-Instruct) using mlx-embeddings version **0.0.3**.

## Use with mlx

```bash
pip install mlx-embeddings
```

```python
from mlx_embeddings import load, generate
import mlx.core as mx

# Load the 8-bit quantized model and its tokenizer
model, tokenizer = load("mlx-community/answerdotai-ModernBERT-Large-Instruct-8bit")

# Generate embeddings for a batch of texts
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # Normalized embeddings

# Dot product of normalized embeddings = cosine similarity
similarity_matrix = mx.matmul(embeddings, embeddings.T)

print("Similarity matrix between texts:")
print(similarity_matrix)
```
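Because `text_embeds` is normalized, the dot-product matrix above is already a cosine-similarity matrix (its diagonal is all ones).

The widget prompt in the front matter shows the model's intended instruction format: the answer is read out of the `[MASK]` position that follows `[unused0]`. As a hedged illustration of that format, the sketch below uses the Hugging Face `transformers` library with the original `answerdotai/ModernBERT-Large-Instruct` checkpoint (the 8-bit MLX weights in this repo are meant for `mlx-embeddings`, not `transformers`), and assumes a transformers release with ModernBERT support.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Original (unquantized) checkpoint; this repo's MLX weights are not loadable by transformers
model_id = "answerdotai/ModernBERT-Large-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Same answer-selection prompt as the widget above
prompt = (
    "You will be given a question and options. Select the right answer. "
    "QUESTION: If (G, .) is a group such that (ab)^-1 = a^-1b^-1, for all a, b in G, "
    "then G is a/an CHOICES: - A: commutative semi group - B: abelian group "
    "- C: non-abelian group - D: None of these ANSWER: [unused0] [MASK]"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The prediction at the [MASK] position is the model's chosen answer token
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```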