SentenceTransformer based on nomic-ai/modernbert-embed-base

This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
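
For reference, the same three-module stack can be assembled by hand with the sentence-transformers models API. A minimal sketch (illustrative only; the published checkpoint already bundles these modules):

from sentence_transformers import SentenceTransformer, models

# ModernBERT backbone with the full 8192-token context window
transformer = models.Transformer("nomic-ai/modernbert-embed-base", max_seq_length=8192)
# Mean pooling over token embeddings yields one 768-dim vector per input
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="mean")
# L2 normalization, so dot product equals cosine similarity
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])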

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lochhonest/modernbert-finetuned-for-sas")
# Run inference
sentences = [
    'In nearly all cases, how many source and background region spectra are supplied for the RGS?',
    'RGS spectral products\n\nThis section describes the spectral data products to be generated from\npointed observations.\n\nSource and background region spectra and a background-subtracted source\nspectrum are supplied for the brightest point sources in the RGS (in\nnearly all cases this is just one source). Spectral response matrices\nare also supplied.\n',
    "-   This extension gives the good time intervals for the event list.\n\n-   There is one extension per CCD in the relevant mode (IMAGING or\n    TIMING) during the exposure.\n\n-   The following keywords are present:\n\n        HDUCLASS= 'OGIP    '           / format conforms to OGIP standard\n        HDUCLAS1= 'GTI     '           / table contains Good Time Intervals\n        HDUCLAS2= 'STANDARD'           / standard Good Time Interval table\n\n-   This extension contains the following columns:\n\n      Name    Type          Description\n      ------- ------------- --------------------------------\n      START   8-byte REAL   seconds (since reference time)\n      STOP    8-byte REAL   seconds (since reference time)\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Training Details

Training Dataset

Unnamed Dataset

  • Size: 3,619 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
                anchor            positive
      type      string            string
      min       2 tokens          2 tokens
      mean      15.7 tokens       411.84 tokens
      max       38 tokens         3755 tokens
  • Samples (all three anchors pair with the same positive passage, shown once below):

    anchor: What is the purpose of the document described in the preface?
    anchor: What version of the document is described in the preface?
    anchor: What is the main change in version 4.3 of the document?

    positive (identical for all three anchors):

    Preface

    This is the reference document describing the individual XMM-Newton
    Survey Science Centre (SSC) data product files. It is intended to be of
    use to software developers, archive administrators and to scientists
    analysing XMM-Newton data. Please see the SSC data products Interface
    Control Document (XMM-SOC-ICD-0006-SSC, issue 4.0) for a description of
    the product group files and other related files that are sent to the
    SOC.

    This version (4.3) includes changes related to the upgrade to SAS16.0 in
    the processing pipeline originally developped in 2012 to uniformly
    process all the XMM data at that time, from which the 3XMM catalogue was
    derived. Revisions and additions since version 4.2 are identified by
    change bars at the right of each page.

    This document will continue to evolve through subsequent issues, under
    indirect control from the SAS and SSC configuration control boards.

    This document is the result of the work of many people. Contributors
    have included:

    Hermann Brunner, G...
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "get_similarity"
    }
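
    The loss can be instantiated directly from sentence_transformers.losses. A minimal sketch; note that cos_sim shown here is the library default, whereas "get_similarity" in the card points to a model-provided callable, so the exact similarity function used in training is an assumption:

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
    from sentence_transformers.util import cos_sim

    model = SentenceTransformer("nomic-ai/modernbert-embed-base")
    # scale=20.0 matches the card; similarity_fct shown is the library default
    loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)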
    

Evaluation Dataset

Unnamed Dataset

  • Size: 30 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 30 samples:
                anchor            positive
      type      string            string
      min       8 tokens          6 tokens
      mean      16.0 tokens       642.47 tokens
      max       24 tokens         6152 tokens
  • Samples:

    anchor: What is the purpose of the PPS cross-correlation products?
    positive:

    General cross-correlation products

    These PPS cross-correlation products list the names of all catalogues
    searched (both around each EPIC position and in the whole EPIC field)
    and describe the format of their output.

    anchor: What are the task parameters of rgssources?
    positive (a garbled LaTeX fragment in the source dataset, reproduced verbatim):

    rgssources
    ## Parameters

    \label{rgssources:description:parameters}

    filemode} {modify (Optional): no
    (Type:
    Controls whether the task opens a previous source list for editing or creates a new one.
    }
    \optparm{changeprime} {no} {boolean} {yes

    anchor: How many stars were used in the U-filter analysis for the G153 pointing to create the distortion map?
    positive:

    OM distortion

    The OM
    (http://www.cosmos.esa.int/web/xmm-newton/technical-details-om) optics,
    filters and (primarily) the detector system result in a certain amount
    of image distortion. This effect can be corrected with a “distortion
    map”, by comparing the expected position with the measured position for
    a large number of stars in the OM
    (http://www.cosmos.esa.int/web/xmm-newton/technical-details-om) field of
    view. A U-filter analysis has been performed on the G153 pointing with
    813 stars. The effect of applying this correction is shown in
    Fig. [fig:uhb:distmap]. A positional r.m.s. accuracy of 0.5 − 1.5 arcsec
    is obtained. The distortion map has been entered into the appropriate
    CCF file and is used in http://www.cosmos.esa.int/web/xmm-newton/sas
    (http://www.cosmos.esa.int/web/xmm-newton/sas).
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "get_similarity"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 4
  • num_train_epochs: 2
  • lr_scheduler_type: constant
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
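
These non-default settings map directly onto SentenceTransformerTrainingArguments. A hedged sketch of how the fine-tuning run might have been wired up (output_dir, the placeholder dataset rows, and the eval split reuse are assumptions; the actual training script is not published):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
# Placeholder pair; the real run used 3,619 anchor/positive training rows
train_dataset = Dataset.from_dict({
    "anchor": ["What is the purpose of the document described in the preface?"],
    "positive": ["Preface\n\nThis is the reference document ..."],
})
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0)
args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-finetuned-for-sas",  # assumption
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=4,
    num_train_epochs=2,
    lr_scheduler_type="constant",
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # stand-in; the real run used a 30-row eval set
    loss=loss,
)
trainer.train()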

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 4
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: constant
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   Validation Loss
0.2203   50     0.2209          -
0.4405   100    0.1635          0.0402
0.6608   150    0.1759          -
0.8811   200    0.1674          0.1307
1.1013   250    0.1134          -
1.3216   300    0.0809          0.0441
1.5419   350    0.0571          -
1.7621   400    0.0770          0.0268
1.9824   450    0.0557          -

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CachedMultipleNegativesRankingLoss

@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}