dataset_info:
- config_name: corpus-en
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 85781507
num_examples: 90000
download_size: 48916377
dataset_size: 85781507
- config_name: corpus-ru
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 150466041
num_examples: 90000
download_size: 71713875
dataset_size: 150466041
- config_name: en
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 479912
num_examples: 15000
download_size: 190544
dataset_size: 479912
- config_name: queries-en
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 3124999
num_examples: 3000
download_size: 1758575
dataset_size: 3124999
- config_name: queries-ru
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 5550462
num_examples: 3000
download_size: 2606302
dataset_size: 5550462
- config_name: ru
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 479912
num_examples: 15000
download_size: 190544
dataset_size: 479912
configs:
- config_name: corpus-en
data_files:
- split: corpus
path: corpus-en/corpus-*
- config_name: corpus-ru
data_files:
- split: corpus
path: corpus-ru/corpus-*
- config_name: en
data_files:
- split: test
path: en/test-*
- config_name: queries-en
data_files:
- split: queries
path: queries-en/queries-*
- config_name: queries-ru
data_files:
- split: queries
path: queries-ru/queries-*
- config_name: ru
data_files:
- split: test
path: ru/test-*
language:
- ru
- en
tags:
- benchmark
- mteb
- retrieval
# RuSciBench Dataset Collection
This repository contains the datasets for the RuSciBench benchmark, designed for evaluating semantic vector representations of scientific texts in Russian and English.
## Dataset Description
RuSciBench is the first benchmark specifically targeting scientific documents in the Russian language, alongside their English counterparts (abstracts and titles). The data is sourced from eLibrary.ru, the largest Russian electronic library of scientific publications, integrated with the Russian Science Citation Index (RSCI).
The dataset comprises approximately 182,000 scientific paper abstracts and titles. All papers included in the benchmark have open licenses.
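The `dataset_info` metadata above implies a BEIR-style layout: a `corpus-*` config (`_id`, `title`, `text`), a `queries-*` config (`_id`, `text`), and a language config (`en`/`ru`) holding relevance pairs (`query-id`, `corpus-id`, `score`). A minimal sketch of how the three pieces join, using toy records in place of the real rows (all values below are illustrative, not actual dataset content):

```python
# Toy rows mirroring the three configs' schemas (illustrative values only).
corpus = [
    {"_id": "d1", "title": "Paper A", "text": "Abstract of paper A."},
    {"_id": "d2", "title": "Paper B", "text": "Abstract of paper B."},
]
queries = [{"_id": "q1", "text": "Query abstract."}]
qrels_rows = [
    {"query-id": "q1", "corpus-id": "d1", "score": 1},
]

# Index the corpus by _id and group relevance judgments per query —
# the shape most retrieval evaluators (e.g. BEIR/MTEB) expect.
doc_by_id = {d["_id"]: d for d in corpus}
qrels = {}
for row in qrels_rows:
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = row["score"]

relevant_texts = [doc_by_id[did]["text"] for did in qrels["q1"]]
print(qrels)           # {'q1': {'d1': 1}}
print(relevant_texts)  # ['Abstract of paper A.']
```

The same join applies per language: pair `corpus-en`/`queries-en` with the `en` relevance config, and `corpus-ru`/`queries-ru` with `ru`.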
## Tasks
The benchmark includes a variety of tasks grouped into Classification, Regression, and Retrieval categories, designed for both Russian and English texts based on paper abstracts.
### Classification Tasks
- Topic Classification (OECD): Classify papers based on the first two levels of the Organization for Economic Co-operation and Development (OECD) rubricator (29 classes).
  - `RuSciBenchOecdRuClassification` (subset `oecd_ru`)
  - `RuSciBenchOecdEnClassification` (subset `oecd_en`)
- Topic Classification (GRNTI/SRSTI): Classify papers based on the first level of the State Rubricator of Scientific and Technical Information (GRNTI/SRSTI) (29 classes).
  - `RuSciBenchGrntiRuClassification` (subset `grnti_ru`)
  - `RuSciBenchGrntiEnClassification` (subset `grnti_en`)
- Core RISC Affiliation: Binary classification task to determine whether a paper belongs to the Core of the Russian Index of Science Citation (RISC).
  - `RuSciBenchCoreRiscRuClassification` (subset `corerisc_ru`)
  - `RuSciBenchCoreRiscEnClassification` (subset `corerisc_en`)
- Publication Type Classification: Classify documents into types such as 'article', 'conference proceedings', 'survey', etc. (7 classes; a balanced subset is used).
  - `RuSciBenchPubTypesRuClassification` (subset `pub_type_ru`)
  - `RuSciBenchPubTypesEnClassification` (subset `pub_type_en`)
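For all of these tasks, the encoder under evaluation stays frozen: a lightweight classifier is fit on the embeddings of the abstracts, so the score reflects the quality of the representations rather than of any fine-tuning. As a simplified stand-in for the classifier MTEB actually fits (typically logistic regression), here is a nearest-centroid sketch over toy 2-D "embeddings":

```python
import numpy as np

# Toy "embeddings" standing in for encoded abstracts (2 classes, 2-D).
X_train = np.array([[0.9, 0.1], [1.0, 0.0], [0.1, 0.9], [0.0, 1.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.8, 0.2], [0.2, 0.8]])

# Nearest-centroid classification: average each class's vectors,
# then assign each test vector to the closest class centroid.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=-1)
pred = dists.argmin(axis=1)
print(pred)  # [0 1]
```

The better the embeddings separate the rubricator classes, the better even this trivial classifier performs — which is exactly what the benchmark measures.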
### Regression Tasks
- Year of Publication Prediction: Predict the publication year of the paper.
  - `RuSciBenchYearPublRuRegression` (subset `yearpubl_ru`)
  - `RuSciBenchYearPublEnRegression` (subset `yearpubl_en`)
- Citation Count Prediction: Predict the number of times a paper has been cited.
  - `RuSciBenchCitedCountRuRegression` (subset `cited_count_ru`)
  - `RuSciBenchCitedCountEnRegression` (subset `cited_count_en`)
### Retrieval Tasks
- Direct Citation Prediction: Given a query paper abstract, retrieve the abstracts of the papers it directly cites from the corpus. Uses a standard retrieval setup (all non-positive documents are treated as negatives).
  - `RuSciBenchCiteRuRetrieval`
  - `RuSciBenchCiteEnRetrieval`
- Co-Citation Prediction: Given a query paper abstract, retrieve abstracts of papers that are co-cited with it (i.e., cited together by at least 5 common papers). Uses the same retrieval setup.
  - `RuSciBenchCociteRuRetrieval`
  - `RuSciBenchCociteEnRetrieval`
- Translation Search: Given an abstract in one language (e.g., Russian), retrieve its corresponding translation (the same paper's abstract in the other language) from the corpus of abstracts in the target language.
  - `RuSciBenchTranslationSearchEnRetrieval` (query: En, corpus: Ru)
  - `RuSciBenchTranslationSearchRuRetrieval` (query: Ru, corpus: En)
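All three retrieval tasks share the same mechanics: embed queries and corpus documents, rank the corpus by similarity, and score the ranking against the (`query-id`, `corpus-id`) relevance pairs. A self-contained sketch with toy vectors, using cosine ranking and recall@k (real evaluations use MTEB's graded metrics such as nDCG@10):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy embeddings standing in for encoded abstracts.
corpus_vecs = {"d1": [1.0, 0.0], "d2": [0.0, 1.0], "d3": [0.7, 0.7]}
query_vecs = {"q1": [0.9, 0.1]}
qrels = {"q1": {"d1": 1}}  # relevance pairs, as in the `en`/`ru` configs

def recall_at_k(k):
    """Fraction of relevant documents that appear in each query's top-k ranking."""
    hits, total = 0, 0
    for qid, qvec in query_vecs.items():
        ranked = sorted(corpus_vecs,
                        key=lambda did: cosine(qvec, corpus_vecs[did]),
                        reverse=True)
        relevant = set(qrels.get(qid, {}))
        hits += len(relevant & set(ranked[:k]))
        total += len(relevant)
    return hits / total

print(recall_at_k(1))  # 1.0 — d1 is the top-ranked document for q1
```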
## Usage
These datasets are designed to be used with the MTEB library. First, install the MTEB fork containing the RuSciBench tasks:

```bash
pip install git+https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb
```
Then you can evaluate sentence-transformer models:

```python
from sentence_transformers import SentenceTransformer
from mteb import MTEB

# Example: evaluate on Russian GRNTI topic classification
model_name = "mlsa-iai-msu-lab/sci-rus-tiny3.1"  # or any other sentence transformer
model = SentenceTransformer(model_name)

evaluation = MTEB(tasks=["RuSciBenchGrntiRuClassification"])  # select tasks
results = evaluation.run(model, output_folder=f"results/{model_name.split('/')[-1]}")
print(results)
```
For more details on the benchmark, tasks, and baseline model evaluations, please refer to the associated paper and code repository.
- Code Repository: https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb
- Paper: https://doi.org/10.1134/S1064562424602191
## Citation
If you use RuSciBench in your research, please cite the following paper:
```bibtex
@article{Vatolin2024,
  author  = {Vatolin, A. and Gerasimenko, N. and Ianina, A. and Vorontsov, K.},
  title   = {RuSciBench: Open Benchmark for Russian and English Scientific Document Representations},
  journal = {Doklady Mathematics},
  year    = {2024},
  volume  = {110},
  number  = {1},
  pages   = {S251--S260},
  month   = dec,
  doi     = {10.1134/S1064562424602191},
  url     = {https://doi.org/10.1134/S1064562424602191},
  issn    = {1531-8362}
}
```