---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:578402
- loss:BinaryCrossEntropyLoss
base_model: answerdotai/ModernBERT-large
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: ModernBERT-large trained on GooAQ
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: gooaq dev
      type: gooaq-dev
    metrics:
    - type: map
      value: 0.7586
      name: Map
    - type: mrr@10
      value: 0.7576
      name: Mrr@10
    - type: ndcg@10
      value: 0.7946
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.5488
      name: Map
    - type: mrr@10
      value: 0.5443
      name: Mrr@10
    - type: ndcg@10
      value: 0.6323
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3682
      name: Map
    - type: mrr@10
      value: 0.5677
      name: Mrr@10
    - type: ndcg@10
      value: 0.4136
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.6103
      name: Map
    - type: mrr@10
      value: 0.6108
      name: Mrr@10
    - type: ndcg@10
      value: 0.657
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.5091
      name: Map
    - type: mrr@10
      value: 0.5743
      name: Mrr@10
    - type: ndcg@10
      value: 0.5676
      name: Ndcg@10
---

# ModernBERT-large trained on GooAQ

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

See [training_gooaq_bce.py](https://github.com/UKPLab/sentence-transformers/blob/feat/cross_encoder_trainer/examples/cross_encoder/training/rerankers/training_gooaq_bce.py) for the training script; the only change relative to that script is that the base model was updated from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) to [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large). The script is also described in the [Cross Encoder > Training Overview](https://sbert.net/docs/cross_encoder/training_overview.html) documentation and the [Training and Finetuning Reranker Models with Sentence Transformers v4](https://huggingface.co/blog/train-reranker) blog post.
![Model size vs NDCG for Rerankers on GooAQ](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/train-reranker/reranker_gooaq_model_size_ndcg.png)

## Model Details

### Model Description

- **Model Type:** Cross Encoder
- **Base model:** [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
- **Maximum Sequence Length:** 8192 tokens
- **Number of Output Labels:** 1 label
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-ModernBERT-large-gooaq-bce")
# Get scores for pairs of texts
pairs = [
    ['what are the characteristics and elements of poetry?', 'The elements of poetry include meter, rhyme, form, sound, and rhythm (timing). Different poets use these elements in many different ways.'],
    ['what are the characteristics and elements of poetry?', "What's the first rule of writing poetry? That there are no rules — it's all up to you! Of course there are different poetic forms and devices, and free verse poems are one of the many poetic styles; they have no structure when it comes to format or even rhyming."],
    ['what are the characteristics and elements of poetry?', "['Blank verse. Blank verse is poetry written with a precise meter—almost always iambic pentameter—that does not rhyme. ... ', 'Rhymed poetry. In contrast to blank verse, rhymed poems rhyme by definition, although their scheme varies. ... ', 'Free verse. ... ', 'Epics. ... ', 'Narrative poetry. ... ', 'Haiku. ... ', 'Pastoral poetry. ... ', 'Sonnet.']"],
    ['what are the characteristics and elements of poetry?', 'The main component of poetry is its meter (the regular pattern of strong and weak stress). When a poem has a recognizable but varying pattern of stressed and unstressed syllables, the poetry is written in verse. ... There are many possible patterns of verse, and the basic pattern of each unit is called a foot.'],
    ['what are the characteristics and elements of poetry?', "Some poetry may not make sense to you. But that's because poets don't write to be understood by others. They write because they must. The feelings and emotions that reside within them need to be expressed."],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'what are the characteristics and elements of poetry?',
    [
        'The elements of poetry include meter, rhyme, form, sound, and rhythm (timing). Different poets use these elements in many different ways.',
        "What's the first rule of writing poetry? That there are no rules — it's all up to you! Of course there are different poetic forms and devices, and free verse poems are one of the many poetic styles; they have no structure when it comes to format or even rhyming.",
        "['Blank verse. Blank verse is poetry written with a precise meter—almost always iambic pentameter—that does not rhyme. ... ', 'Rhymed poetry. In contrast to blank verse, rhymed poems rhyme by definition, although their scheme varies. ... ', 'Free verse. ... ', 'Epics. ... ', 'Narrative poetry. ... ', 'Haiku. ... ', 'Pastoral poetry. ... ', 'Sonnet.']",
        'The main component of poetry is its meter (the regular pattern of strong and weak stress). When a poem has a recognizable but varying pattern of stressed and unstressed syllables, the poetry is written in verse. ... There are many possible patterns of verse, and the basic pattern of each unit is called a foot.',
        "Some poetry may not make sense to you. But that's because poets don't write to be understood by others. They write because they must. The feelings and emotions that reside within them need to be expressed.",
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
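Beyond scoring arbitrary pairs, the model slots into a standard retrieve-and-rerank pipeline. The sketch below is illustrative and not part of this card's training or evaluation setup: the bi-encoder retriever (`sentence-transformers/all-MiniLM-L6-v2`), the toy corpus, and `top_k=3` are arbitrary example choices.

```python
from sentence_transformers import CrossEncoder, SentenceTransformer, util

query = "what are the characteristics and elements of poetry?"
# Illustrative toy corpus; in practice this is your own document collection.
corpus = [
    "The elements of poetry include meter, rhyme, form, sound, and rhythm (timing).",
    "Gouda is a mild, yellow cheese that originates from the Netherlands.",
    "Free verse poems have no set structure when it comes to format or rhyming.",
]

# 1. Retrieve candidates quickly with a bi-encoder (example model choice).
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
hits = util.semantic_search(
    retriever.encode([query]), retriever.encode(corpus), top_k=3
)[0]  # list of {"corpus_id": ..., "score": ...} dicts

# 2. Rerank the retrieved candidates with this cross-encoder.
reranker = CrossEncoder("tomaarsen/reranker-ModernBERT-large-gooaq-bce")
pairs = [(query, corpus[hit["corpus_id"]]) for hit in hits]
scores = reranker.predict(pairs)

for score, hit in sorted(zip(scores, hits), key=lambda pair: pair[0], reverse=True):
    print(f"{score:.3f}\t{corpus[hit['corpus_id']]}")
```

Scoring every (query, document) pair with a cross-encoder is expensive, so the fast bi-encoder narrows the corpus down to a short candidate list and the cross-encoder only reorders that list.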
## Evaluation

### Metrics

#### Cross Encoder Reranking

* Dataset: `gooaq-dev`
* Evaluated with [CrossEncoderRerankingEvaluator](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": false
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.7586 (+0.2275)     |
| mrr@10      | 0.7576 (+0.2336)     |
| **ndcg@10** | **0.7946 (+0.2034)** |

#### Cross Encoder Reranking

* Dataset: `gooaq-dev`
* Evaluated with [CrossEncoderRerankingEvaluator](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.8176 (+0.2865)     |
| mrr@10      | 0.8166 (+0.2926)     |
| **ndcg@10** | **0.8581 (+0.2669)** |

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [CrossEncoderRerankingEvaluator](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.5488 (+0.0592)     | 0.3682 (+0.1072)     | 0.6103 (+0.1907)     |
| mrr@10      | 0.5443 (+0.0668)     | 0.5677 (+0.0678)     | 0.6108 (+0.1841)     |
| **ndcg@10** | **0.6323 (+0.0918)** | **0.4136 (+0.0886)** | **0.6570 (+0.1564)** |

#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [CrossEncoderNanoBEIREvaluator](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.5091 (+0.1190)     |
| mrr@10      | 0.5743 (+0.1063)     |
| **ndcg@10** | **0.5676 (+0.1123)** |
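The NanoBEIR numbers above can be reproduced with the evaluator linked in that section. This is a minimal sketch that assumes the `CrossEncoderNanoBEIREvaluator` constructor accepts the parameters exactly as listed in the JSON block; batch size and output handling are left at their defaults.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("tomaarsen/reranker-ModernBERT-large-gooaq-bce")

# Mirrors the NanoBEIR_R100_mean configuration reported above.
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # per-dataset MAP / MRR@10 / NDCG@10 plus their mean
```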
## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 578,402 training samples
* Columns: question, answer, and label
* Approximate statistics based on the first 1000 samples:

  |         | question | answer | label |
  |:--------|:---------|:-------|:------|
  | type    | string   | string | int   |
  | details |          |        |       |

* Samples:

  | question | answer | label |
  |:---------|:-------|:------|
  | what are the characteristics and elements of poetry? | The elements of poetry include meter, rhyme, form, sound, and rhythm (timing). Different poets use these elements in many different ways. | 1 |
  | what are the characteristics and elements of poetry? | What's the first rule of writing poetry? That there are no rules — it's all up to you! Of course there are different poetic forms and devices, and free verse poems are one of the many poetic styles; they have no structure when it comes to format or even rhyming. | 0 |
  | what are the characteristics and elements of poetry? | ['Blank verse. Blank verse is poetry written with a precise meter—almost always iambic pentameter—that does not rhyme. ... ', 'Rhymed poetry. In contrast to blank verse, rhymed poems rhyme by definition, although their scheme varies. ... ', 'Free verse. ... ', 'Epics. ... ', 'Narrative poetry. ... ', 'Haiku. ... ', 'Pastoral poetry. ... ', 'Sonnet.'] | 0 |

* Loss: [BinaryCrossEntropyLoss](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
  ```json
  {
      "activation_fn": "torch.nn.modules.linear.Identity",
      "pos_weight": 5
  }
  ```
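As a minimal sketch of how a loss with these parameters can be constructed, assuming the `sentence_transformers.cross_encoder` API linked above; the original training script may differ in details:

```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Start from the base model; a single output logit is used as the relevance score.
model = CrossEncoder("answerdotai/ModernBERT-large", num_labels=1)

# pos_weight=5 matches the parameters above and upweights the rarer positive
# (question, answer) pairs relative to the negative pairs in the training data.
# The activation defaults to torch.nn.Identity, also matching the parameters above.
loss = BinaryCrossEntropyLoss(model, pos_weight=torch.tensor(5.0))
```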
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
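For reference, here is a minimal training sketch that mirrors the non-default hyperparameters above, assuming the `CrossEncoderTrainer` / `CrossEncoderTrainingArguments` API from the Cross Encoder training documentation. The inline two-row dataset and the `output_dir` are illustrative stand-ins, and evaluation plus best-checkpoint selection are omitted for brevity (the original run evaluated every 1000 steps and kept the best checkpoint).

```python
import torch
from datasets import Dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("answerdotai/ModernBERT-large", num_labels=1)
loss = BinaryCrossEntropyLoss(model, pos_weight=torch.tensor(5.0))

# Toy stand-in for the 578,402 (question, answer, label) training pairs described above.
train_dataset = Dataset.from_dict({
    "question": ["what are the characteristics and elements of poetry?"] * 2,
    "answer": [
        "The elements of poetry include meter, rhyme, form, sound, and rhythm (timing).",
        "Some poetry may not make sense to you.",
    ],
    "label": [1, 0],
})

args = CrossEncoderTrainingArguments(
    output_dir="reranker-ModernBERT-large-gooaq-bce",  # illustrative
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    dataloader_num_workers=4,
    # The original run also used eval_strategy="steps" with load_best_model_at_end=True,
    # which additionally requires an eval dataset or evaluator; omitted in this sketch.
)

trainer = CrossEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```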
### Training Logs

| Epoch      | Step     | Training Loss | gooaq-dev_ndcg@10    | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10  | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:--------------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1         | -1       | -             | 0.1279 (-0.4633)     | 0.0555 (-0.4849)         | 0.1735 (-0.1516)          | 0.0686 (-0.4320)     | 0.0992 (-0.3562)           |
| 0.0001     | 1        | 1.2592        | -                    | -                        | -                         | -                    | -                          |
| 0.0221     | 200      | 1.1826        | -                    | -                        | -                         | -                    | -                          |
| 0.0443     | 400      | 0.7653        | -                    | -                        | -                         | -                    | -                          |
| 0.0664     | 600      | 0.6423        | -                    | -                        | -                         | -                    | -                          |
| 0.0885     | 800      | 0.6           | -                    | -                        | -                         | -                    | -                          |
| 0.1106     | 1000     | 0.5753        | 0.7444 (+0.1531)     | 0.5365 (-0.0039)         | 0.4249 (+0.0998)          | 0.6111 (+0.1105)     | 0.5242 (+0.0688)           |
| 0.1328     | 1200     | 0.5313        | -                    | -                        | -                         | -                    | -                          |
| 0.1549     | 1400     | 0.5315        | -                    | -                        | -                         | -                    | -                          |
| 0.1770     | 1600     | 0.5195        | -                    | -                        | -                         | -                    | -                          |
| 0.1992     | 1800     | 0.5136        | -                    | -                        | -                         | -                    | -                          |
| 0.2213     | 2000     | 0.4782        | 0.7774 (+0.1862)     | 0.6080 (+0.0676)         | 0.4371 (+0.1120)          | 0.6520 (+0.1513)     | 0.5657 (+0.1103)           |
| 0.2434     | 2200     | 0.5026        | -                    | -                        | -                         | -                    | -                          |
| 0.2655     | 2400     | 0.5011        | -                    | -                        | -                         | -                    | -                          |
| 0.2877     | 2600     | 0.4893        | -                    | -                        | -                         | -                    | -                          |
| 0.3098     | 2800     | 0.4855        | -                    | -                        | -                         | -                    | -                          |
| 0.3319     | 3000     | 0.4687        | 0.7692 (+0.1779)     | 0.6181 (+0.0777)         | 0.4273 (+0.1023)          | 0.6686 (+0.1679)     | 0.5713 (+0.1160)           |
| 0.3541     | 3200     | 0.4619        | -                    | -                        | -                         | -                    | -                          |
| 0.3762     | 3400     | 0.4626        | -                    | -                        | -                         | -                    | -                          |
| 0.3983     | 3600     | 0.4504        | -                    | -                        | -                         | -                    | -                          |
| 0.4204     | 3800     | 0.4435        | -                    | -                        | -                         | -                    | -                          |
| 0.4426     | 4000     | 0.4573        | 0.7776 (+0.1864)     | 0.6589 (+0.1184)         | 0.4262 (+0.1012)          | 0.6634 (+0.1628)     | 0.5828 (+0.1275)           |
| 0.4647     | 4200     | 0.4608        | -                    | -                        | -                         | -                    | -                          |
| 0.4868     | 4400     | 0.4275        | -                    | -                        | -                         | -                    | -                          |
| 0.5090     | 4600     | 0.4317        | -                    | -                        | -                         | -                    | -                          |
| 0.5311     | 4800     | 0.4427        | -                    | -                        | -                         | -                    | -                          |
| 0.5532     | 5000     | 0.4245        | 0.7795 (+0.1883)     | 0.6021 (+0.0617)         | 0.4387 (+0.1137)          | 0.6560 (+0.1553)     | 0.5656 (+0.1102)           |
| 0.5753     | 5200     | 0.4243        | -                    | -                        | -                         | -                    | -                          |
| 0.5975     | 5400     | 0.4295        | -                    | -                        | -                         | -                    | -                          |
| 0.6196     | 5600     | 0.422         | -                    | -                        | -                         | -                    | -                          |
| 0.6417     | 5800     | 0.4165        | -                    | -                        | -                         | -                    | -                          |
| 0.6639     | 6000     | 0.4281        | 0.7859 (+0.1946)     | 0.6404 (+0.1000)         | 0.4449 (+0.1199)          | 0.6458 (+0.1451)     | 0.5770 (+0.1217)           |
| 0.6860     | 6200     | 0.4155        | -                    | -                        | -                         | -                    | -                          |
| 0.7081     | 6400     | 0.4189        | -                    | -                        | -                         | -                    | -                          |
| 0.7303     | 6600     | 0.4066        | -                    | -                        | -                         | -                    | -                          |
| 0.7524     | 6800     | 0.4114        | -                    | -                        | -                         | -                    | -                          |
| 0.7745     | 7000     | 0.4111        | 0.7875 (+0.1963)     | 0.6358 (+0.0954)         | 0.4289 (+0.1038)          | 0.6358 (+0.1351)     | 0.5668 (+0.1114)           |
| 0.7966     | 7200     | 0.3949        | -                    | -                        | -                         | -                    | -                          |
| 0.8188     | 7400     | 0.4019        | -                    | -                        | -                         | -                    | -                          |
| 0.8409     | 7600     | 0.395         | -                    | -                        | -                         | -                    | -                          |
| 0.8630     | 7800     | 0.3885        | -                    | -                        | -                         | -                    | -                          |
| **0.8852** | **8000** | **0.3991**    | **0.7946 (+0.2034)** | **0.6323 (+0.0918)**     | **0.4136 (+0.0886)**      | **0.6570 (+0.1564)** | **0.5676 (+0.1123)**       |
| 0.9073     | 8200     | 0.3894        | -                    | -                        | -                         | -                    | -                          |
| 0.9294     | 8400     | 0.392         | -                    | -                        | -                         | -                    | -                          |
| 0.9515     | 8600     | 0.3853        | -                    | -                        | -                         | -                    | -                          |
| 0.9737     | 8800     | 0.3691        | -                    | -                        | -                         | -                    | -                          |
| 0.9958     | 9000     | 0.3784        | 0.7936 (+0.2024)     | 0.6481 (+0.1077)         | 0.4211 (+0.0961)          | 0.6439 (+0.1433)     | 0.5711 (+0.1157)           |
| -1         | -1       | -             | 0.7946 (+0.2034)     | 0.6323 (+0.0918)         | 0.4136 (+0.0886)          | 0.6570 (+0.1564)     | 0.5676 (+0.1123)           |

* The bold row denotes the saved checkpoint.
### Framework Versions

- Python: 3.11.10
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.5.2
- Datasets: 2.21.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```