---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model:
- mistralai/Mistral-7B-v0.1
---

# Mistral-7B-v0.1-Italian-RANDOM
<div align="center">

<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />

</div>

The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a set of 7B generative models (text in/text out) adapted from the **Mistral-7B-v0.1** base model.

*Mistral-v0.1-Italian-RANDOM* is a Mistral model continually trained after tokenizer substitution.

After adaptation, the tokenizer of this model is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
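
As a quick sanity check (a hedged sketch, not part of the official card or evaluation), the two tokenizers can be loaded and compared directly:

```python
# Hedged sketch: verify that the adapted tokenizer matches Minerva-3B's tokenizer.
from transformers import AutoTokenizer

adapted = AutoTokenizer.from_pretrained("SemanticAlignment/Mistral-v0.1-Italian-RANDOM")
minerva = AutoTokenizer.from_pretrained("sapienzanlp/Minerva-3B-base-v1.0")

sample = "Una bella giornata di sole"
# Identical vocabulary sizes and tokenizations are expected if the substitution worked as described.
print(adapted.vocab_size == minerva.vocab_size)
print(adapted.tokenize(sample) == minerva.tokenize(sample))
```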

**Model developers:** SapienzaNLP, ISTI-CNR, ILC-CNR

**Model Architecture:** The Mistral-7B-v0.1-Adapted models are auto-regressive language models that use an optimized transformer architecture.

## Data used for the adaptation

The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data are skewed toward Italian, with English making up one quarter of the tokens: the first 9B tokens were taken from the Italian portion of CulturaX and the first 3B tokens from the English portion.
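
As an illustration only (a sketch under stated assumptions, not the authors' actual data pipeline), a mix like this could be assembled by streaming CulturaX and counting tokens with the adapted tokenizer; the 9B/3B budgets come from the description above, and access to CulturaX requires accepting its terms on the Hub:

```python
# Illustrative sketch of a 3:1 Italian/English token mix streamed from CulturaX.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SemanticAlignment/Mistral-v0.1-Italian-RANDOM")

def take_tokens(language, token_budget):
    """Stream one CulturaX language split until roughly `token_budget` tokens are collected."""
    stream = load_dataset("uonlp/CulturaX", language, split="train", streaming=True)
    texts, collected = [], 0
    for example in stream:
        texts.append(example["text"])
        collected += len(tokenizer(example["text"])["input_ids"])
        if collected >= token_budget:
            break
    return texts

italian_texts = take_tokens("it", 9_000_000_000)  # ~9B Italian tokens
english_texts = take_tokens("en", 3_000_000_000)  # ~3B English tokens
```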


## Use with Transformers

You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "SemanticAlignment/Mistral-v0.1-Italian-RANDOM"

# Load the model in bfloat16 and let Accelerate place it on the available device(s).
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline("Cosa si può fare in una bella giornata di sole?")
```
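
Alternatively, a minimal sketch of the Auto-classes route with `generate()` mentioned above (generation parameters such as `max_new_tokens` and `temperature` are illustrative defaults, not values prescribed by the model authors):

```python
# Hedged sketch: load the model with the Auto classes and call generate() directly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SemanticAlignment/Mistral-v0.1-Italian-RANDOM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Cosa si può fare in una bella giornata di sole?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```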

Code: https://github.com/SapienzaNLP/sava

## Citation

If you use any part of this work, please consider citing the paper as follows:

```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
      title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation}, 
      author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
      year={2025},
      eprint={2504.17025},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.17025}, 
}
```