---
task_categories:
- text-generation
language:
- ru
- zh
- de
- ja
- es
- fr
- it
- pt
- pl
- nl
- id
- tr
- cs
- vi
- sv
- fa
- ar
- el
- da
- hu
pretty_name: FineWeb2-embedded
configs:
- config_name: rus_Cyrl
data_files:
- split: train
path: rus_Cyrl/*
- config_name: cmn_Hani
data_files:
- split: train
path: cmn_Hani/*
- config_name: deu_Latn
data_files:
- split: train
path: deu_Latn/*
- config_name: jpn_Jpan
data_files:
- split: train
path: jpn_Jpan/*
- config_name: spa_Latn
data_files:
- split: train
path: spa_Latn/*
- config_name: fra_Latn
data_files:
- split: train
path: fra_Latn/*
- config_name: ita_Latn
data_files:
- split: train
path: ita_Latn/*
- config_name: por_Latn
data_files:
- split: train
path: por_Latn/*
- config_name: pol_Latn
data_files:
- split: train
path: pol_Latn/*
- config_name: nld_Latn
data_files:
- split: train
path: nld_Latn/*
- config_name: ind_Latn
data_files:
- split: train
path: ind_Latn/*
- config_name: tur_Latn
data_files:
- split: train
path: tur_Latn/*
- config_name: ces_Latn
data_files:
- split: train
path: ces_Latn/*
- config_name: vie_Latn
data_files:
- split: train
path: vie_Latn/*
- config_name: swe_Latn
data_files:
- split: train
path: swe_Latn/*
- config_name: fas_Arab
data_files:
- split: train
path: fas_Arab/*
- config_name: arb_Arab
data_files:
- split: train
path: arb_Arab/*
- config_name: ell_Grek
data_files:
- split: train
path: ell_Grek/*
- config_name: dan_Latn
data_files:
- split: train
path: dan_Latn/*
- config_name: hun_Latn
data_files:
- split: train
path: hun_Latn/*
license: odc-by
size_categories:
- 1B<n<10B
---
# FineWeb2-embedded
## Dataset summary
FineWeb2-embedded is an extension of the [**FineWeb2**](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset, annotated with **document-level** [**XLM-RoBERTa**](https://huggingface.co/FacebookAI/xlm-roberta-base) **embeddings** for **20 languages**. This makes the dataset **useful for a variety of applications**, such as document clustering, data filtering, and other multilingual research.
Since XLM-RoBERTa has a sequence length limit of 512 tokens, each document's **embeddings are obtained by mean-pooling the XLM-RoBERTa output over 512-token chunks**. Longer texts therefore have more embeddings available (one per 512-token chunk).
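As a rough illustration, the following sketch shows how such chunked, mean-pooled embeddings can be computed with `transformers` (the exact preprocessing used to build this dataset, e.g. special-token handling, attention masking, and batching, may differ):
```python
# Illustrative sketch only: chunked, mean-pooled XLM-RoBERTa document embeddings.
# The actual pipeline used for this dataset (special tokens, attention masks,
# batching, hardware setup) may differ in its details.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModel.from_pretrained("FacebookAI/xlm-roberta-base").eval()

def embed_document(text: str, chunk_size: int = 512) -> list[list[float]]:
    # Tokenize the full document, then process it in chunks of up to 512 tokens.
    input_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    embeddings = []
    for start in range(0, len(input_ids), chunk_size):
        chunk = torch.tensor([input_ids[start:start + chunk_size]])
        with torch.no_grad():
            hidden = model(input_ids=chunk).last_hidden_state  # (1, chunk_len, 768)
        # Mean-pool the token representations of the chunk into one 768-d vector.
        embeddings.append(hidden.mean(dim=1).squeeze(0).tolist())
    return embeddings
```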
The embeddings were initially computed as part of our [**FineWeb2-HQ**](https://huggingface.co/datasets/epfml/FineWeb2-HQ) dataset (a high-quality subset of FineWeb2). However, we believe that they can be useful for other multilingual research and applications.
For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).
## Languages and subsets
|Subset name|Language name|Number of documents|Disk size|
|----------|-----------------|------------:|----------:|
| rus_Cyrl | Russian | 605,468,615 | 5.3T |
| cmn_Hani | Chinese | 578,332,129 | 4.4T |
| deu_Latn | German | 427,700,394 | 2.5T |
| spa_Latn | Spanish | 405,634,303 | 2.3T |
| jpn_Jpan | Japanese | 376,134,745 | 2.4T |
| fra_Latn | French | 332,646,715 | 2.0T |
| ita_Latn | Italian | 219,117,921 | 1.3T |
| por_Latn | Portuguese | 189,851,449 | 1.1T |
| pol_Latn | Polish | 138,337,436 | 794G |
| nld_Latn | Dutch | 133,855,612 | 720G |
| ind_Latn | Indonesian | 92,992,647 | 537G |
| tur_Latn | Turkish | 88,769,907 | 487G |
| ces_Latn | Czech | 62,703,458 | 390G |
| arb_Arab | Arabic | 57,752,149 | 363G |
| fas_Arab | Persian | 51,043,666 | 322G |
| hun_Latn | Hungarian | 46,879,826 | 328G |
| swe_Latn | Swedish | 45,329,979 | 261G |
| ell_Grek | Greek | 44,202,550 | 267G |
| dan_Latn | Danish | 42,975,661 | 262G |
| vie_Latn | Vietnamese | 40,741,340 | 298G |
We may add further languages supported by the XLM-RoBERTa model in a future version of this dataset.
## Dataset structure
### Data fields
Each data entry includes the original [FineWeb2 data fields](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#data-fields) with the addition of:
- `embeddings`: array of float arrays containing one 768-dimensional XLM-RoBERTa embedding for every 512-token chunk of the tokenized text (see the sketch below for a quick way to inspect this field)
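For example, one record's embeddings can be inspected without downloading a full subset by loading in streaming mode (a minimal sketch; the `deu_Latn` config is an arbitrary choice):
```python
import numpy as np
from datasets import load_dataset

# Stream a single record to inspect the embeddings field.
stream = load_dataset("epfml/FineWeb2-embedded", "deu_Latn", split="train", streaming=True)
record = next(iter(stream))
chunk_embeddings = np.array(record["embeddings"])
print(chunk_embeddings.shape)  # (number_of_512-token_chunks, 768)
```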
### Data instance
```json
{
"id": "<urn:uuid:f26003c7-6084-4791-b3fe-240eedc37e76>",
"text": "Plutonium ist einer der gefährlichsten Stoffe der Welt. Es entsteht als hochgiftiges und radioaktives Nebenprodukt der Energiegewinnung in Atomkraftwerken. Wer nur ein Millionstel Gramm – ein kaum staubkorngroßes Teilchen – der Substanz einatmet, kann daran sterben. In der Natur kommt der Stoff nur in geringsten Mengen vor, wird aber künstlich hergestellt, weil man damit Bomben bauen kann. Je nach Reinheitsgrad reichen für eine Atombombe bereits fünf Kilogramm. Bis zum Beginn der achtziger Jahre des letzten Jahrhunderts hatten die Reaktoren weltweit bereits rund 300.000 Kilogramm erbrütet. Jährlich kommen etwa 20.000 Kilo hinzu. Genau dieser Stoff wird zu Land und zu Wasser um den ganzen Erdball herum transportiert. Legendär sind die Castor-Transporte, bei denen unter strengsten Sicherheitsvorkehrungen und entsprechenden Kosten abgebrannte Brennelemente aus deutschen Kernkraftwerken zur Wiederaufbereitung nach La Hague (Frankreich) oder Sellafield (Großbritannien) gebracht werden. Erst vergangenen Mai hat ein Frachter die größte Menge wiederaufbereiteten Mülls aller Zeiten von Frankreich nach Japan gebracht. Nicht auszudenken, was ein Unfall auf See bedeuten würde.",
"date": "2014-03-16T08:53:38Z",
"dump": "CC-MAIN-2014-10",
"embeddings": [[ ... ]],
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702159/warc/CC-MAIN-20140313024502-00039-ip-10-183-142-35.ec2.internal.warc.gz",
"language": "deu",
"language_score": 0.9983288645744324,
"language_script": "Latn",
"minhash_cluster_size": 2,
"top_langs": {"deu_Latn_score": 0.9983288645744324},
"url": "http://www.greenpeace.org/austria/de/themen/atom/probleme/atomtransporte/",
}
```
## Usage
You can load the dataset in Python using `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("epfml/FineWeb2-embedded", "deu_Latn")
```
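Since the subsets range from hundreds of gigabytes to several terabytes (see the table above), streaming can be a practical alternative to a full download. The sketch below (an illustration, not part of an official API) streams a few records and collapses each document's per-chunk embeddings into a single vector by averaging, e.g. as input for clustering or filtering:
```python
from itertools import islice

import numpy as np
from datasets import load_dataset

# Stream the German subset instead of downloading it in full.
stream = load_dataset("epfml/FineWeb2-embedded", "deu_Latn", split="train", streaming=True)

doc_vectors = []
for record in islice(stream, 1000):            # first 1,000 documents, for illustration
    chunks = np.array(record["embeddings"])    # shape: (num_chunks, 768)
    doc_vectors.append(chunks.mean(axis=0))    # one 768-dimensional vector per document

doc_matrix = np.stack(doc_vectors)             # shape: (1000, 768), e.g. for clustering
```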
## Licensing information
Like FineWeb2, this dataset is released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/) and is subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).
## Dataset origin
Being based on FineWeb2, this dataset covers websites crawled over the 2013-2024 time period.
FineWeb2 is sourced from the internet at large, so it is very likely that some personally identifiable information (PII) is present, even though the FineWeb2 processing has already anonymized email addresses and public IP addresses. If you find your own PII and would like it removed, please fill out the [FineWeb2 PII removal/opt out form](https://forms.gle/VyNT3ZAUPZjPuWp39).
CommonCrawl respects robots.txt at crawl time, but if you are a webmaster and find your website in FineWeb2 and would like to have it removed, you may also use the [FineWeb2 PII removal/opt out form](https://forms.gle/VyNT3ZAUPZjPuWp39).
## Considerations for Using the Data
For a discussion of social impact, biases, and known limitations, please also refer to the [FineWeb2 documentation](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).
## Citation information
If you use this dataset in your research or applications, please use the following citation:
```
@article{messmer2025multilingdatacomp,
title={Enhancing Multilingual LLM Pretraining with Model-Based Data Selection},
author={Bettina Messmer and Vinko Sabolčec and Martin Jaggi},
journal={arXiv},
year={2025},
url={https://arxiv.org/abs/2502.10361},
}
```