bert-base-bg-cs-pl-ru-cased

SlavicBERT[1] (Slavic (bg, cs, pl, ru), cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on Russian News and four Wikipedias: Bulgarian, Czech, Polish, and Russian. The subtoken vocabulary was built from this data. Multilingual BERT was used to initialize SlavicBERT.
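
The checkpoint can be loaded with the Hugging Face transformers library. Below is a minimal sketch; the example sentence and the use of AutoModel for feature extraction are illustrative assumptions, not part of the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "DeepPavlov/bert-base-bg-cs-pl-ru-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a Polish sentence and extract contextual embeddings.
inputs = tokenizer("Praga jest stolicą Czech.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last hidden state has shape (batch_size, sequence_length, 768).
print(outputs.last_hidden_state.shape)
```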

08.11.2021: uploaded the model with MLM and NSP heads
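
Since the uploaded checkpoint includes an MLM head, it can be exercised through the fill-mask pipeline. A minimal sketch, assuming transformers is installed (the Polish prompt is an illustrative assumption):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DeepPavlov/bert-base-bg-cs-pl-ru-cased")

# The mask token for this BERT vocabulary is [MASK].
for prediction in fill_mask("Warszawa to stolica [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```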

[1]: Arkhipov M., Trofimova M., Kuratov Y., Sorokin A. (2019). Tuning Multilingual Transformers for Language-Specific Named Entity Recognition. In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing. ACL Anthology W19-3712.
