SentenceTransformer based on FacebookAI/roberta-base

This is a sentence-transformers model finetuned from FacebookAI/roberta-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: FacebookAI/roberta-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
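
Concretely, the Pooling module mean-pools the token embeddings (masking out padding) into one 768-dimensional sentence vector. Below is a minimal sketch of that step using the plain transformers API, assuming this repository's weights load directly as a RobertaModel:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LorMolf/mnrl-toole-overlap-roberta-base")
model = AutoModel.from_pretrained("LorMolf/mnrl-toole-overlap-roberta-base")

encoded = tokenizer(
    ["What are the top news stories on Sky News today?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # [batch, seq_len, 768]

# pooling_mode_mean_tokens: average the token embeddings, ignoring padding
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])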

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("LorMolf/mnrl-toole-overlap-roberta-base")
# Run inference
sentences = [
    'What are the top news stories on Sky News today?',
    'def NewsTool:\n\t"""\n\tDescription:\n\tStay connected to global events with our up-to-date news around the world.\n\t"""',
    'def RepoTool:\n\t"""\n\tDescription:\n\tDiscover GitHub projects tailored to your needs, explore their structures with insightful summaries, and get quick coding solutions with curated snippets. Elevate your coding journey with RepoTool, your go-to companion for GitHub project exploration and code mastery.\n\t"""',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
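
The same embeddings can drive retrieval directly. Here is a minimal semantic-search sketch over a hypothetical tool corpus (the docstring format mirrors the training samples shown below), using sentence_transformers.util:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LorMolf/mnrl-toole-overlap-roberta-base")

# Hypothetical tool corpus in the docstring format used by the training data
corpus = [
    'def NewsTool:\n\t"""\n\tDescription:\n\tStay connected to global events with our up-to-date news around the world.\n\t"""',
    'def RepoTool:\n\t"""\n\tDescription:\n\tDiscover GitHub projects tailored to your needs.\n\t"""',
]
corpus_embeddings = model.encode(corpus)
query_embeddings = model.encode(["What are the top news stories on Sky News today?"])

# Top-1 corpus entry per query by cosine similarity
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=1)
print(hits)  # e.g. [[{'corpus_id': 0, 'score': ...}]]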

Evaluation

Metrics

Device Aware Information Retrieval

  • Dataset: dev
  • Evaluated with src.port.retrieval_evaluator.DeviceAwareInformationRetrievalEvaluator
Metric               Value
cosine_accuracy@1    0.0574
cosine_accuracy@3    0.1588
cosine_accuracy@5    0.2771
cosine_accuracy@10   0.5484
cosine_precision@1   0.0574
cosine_precision@3   0.0529
cosine_precision@5   0.0554
cosine_precision@10  0.0548
cosine_recall@1      0.0574
cosine_recall@3      0.1588
cosine_recall@5      0.2771
cosine_recall@10     0.5484
cosine_ndcg@1        0.0574
cosine_ndcg@3        0.1156
cosine_ndcg@5        0.1636
cosine_ndcg@10       0.2502
cosine_mrr@10        0.162
cosine_map@100       0.1927
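
The evaluator class above is project-specific, but the reported metric names match those produced by the library's stock InformationRetrievalEvaluator. A minimal sketch with hypothetical toy data (the real dev split is not published here):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("LorMolf/mnrl-toole-overlap-roberta-base")

# Hypothetical queries, corpus, and relevance judgments
queries = {"q1": "What are the top news stories on Sky News today?"}
corpus = {
    "d1": 'def NewsTool:\n\t"""\n\tDescription:\n\tStay connected to global events.\n\t"""',
    "d2": 'def RepoTool:\n\t"""\n\tDescription:\n\tDiscover GitHub projects.\n\t"""',
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
print(evaluator(model))  # dict of cosine_accuracy@k, ndcg@k, mrr@10, map@100, ...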

Training Details

Training Dataset

Unnamed Dataset

  • Size: 30,000 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:
                 sentence_0   sentence_1   sentence_2
    type         string       string       string
    min tokens   8            22           22
    mean tokens  25.83        40.05        37.91
    max tokens   121          80           80
  • Samples:
    sentence_0: Can you please convert this ABC notation, which represents musical notation, into both a MIDI file, which is a digital audio file format, and a PostScript file, which is a page description language file format used for printing?
    sentence_1: def abc_to_audio: """Description: Converts ABC music notation to WAV, MIDI, and PostScript files."""
    sentence_2: def heygen: """Description: Meet HeyGen - The best AI video generation platform for your team."""

    sentence_0: I urgently require a detailed and extensive report that thoroughly analyzes every aspect of my website's SEO performance. Additionally, I need comprehensive suggestions and recommendations for enhancing its overall performance and making necessary improvements.
    sentence_1: def seoanalysis: """Description: Use AI to analyze and improve the SEO of a website. Get advice on websites, keywords and competitors."""
    sentence_2: def WebRewind: """Description: Get the picture of a website at a specific date."""

    sentence_0: I am actively seeking to hire a highly skilled freelance engineer who specializes in civil engineering and possesses expertise in all aspects of the field, specifically for a construction project.
    sentence_1: def TalentOrg: """Description: Find and hire freelance engineering talents from around the world."""
    sentence_2: def Agones: """Description: Agones provides soccer (football) results for matches played all over the world in the past 15 years."""
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
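
A minimal training sketch with this loss, assuming hypothetical stand-in triplets in the same (sentence_0, sentence_1, sentence_2) layout as the samples above:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("FacebookAI/roberta-base")

# Hypothetical stand-in for the unnamed (query, positive tool, negative tool) triplets
train_dataset = Dataset.from_dict({
    "sentence_0": ["What are the top news stories on Sky News today?"],
    "sentence_1": ['def NewsTool:\n\t"""\n\tDescription:\n\tStay connected to global events.\n\t"""'],
    "sentence_2": ['def RepoTool:\n\t"""\n\tDescription:\n\tDiscover GitHub projects.\n\t"""'],
})

# scale=20.0 and cosine similarity are the defaults for this loss
loss = MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()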
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 2
  • per_device_eval_batch_size: 2
  • num_train_epochs: 1
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
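
These map onto the trainer's configuration roughly as follows; a sketch with a hypothetical output_dir, passed to the trainer sketched above as its args parameter:

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="mnrl-toole-overlap-roberta-base",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=1,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)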

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 2
  • per_device_eval_batch_size: 2
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch    Step     Training Loss    dev_cosine_ndcg@10
-1       -1       -                0.4447
0.0333   500      0.4387           -
0.0667   1000     0.4728           -
0.1      1500     0.6096           -
0.1333   2000     1.2803           -
0.1667   2500     1.3866           -
0.2      3000     1.3848           0.1957
0.2333   3500     1.3854           -
0.2667   4000     1.3864           -
0.3      4500     1.3859           -
0.3333   5000     1.3855           -
0.3667   5500     1.3856           -
0.4      6000     1.3863           0.1970
0.4333   6500     1.3852           -
0.4667   7000     1.3858           -
0.5      7500     1.3862           -
0.5333   8000     1.3856           -
0.5667   8500     1.3857           -
0.6      9000     1.3855           0.2291
0.6333   9500     1.3858           -
0.6667   10000    1.3856           -
0.7      10500    1.3859           -
0.7333   11000    1.386            -
0.7667   11500    1.386            -
0.8      12000    1.3868           0.2648
0.8333   12500    1.3858           -
0.8667   13000    1.3861           -
0.9      13500    1.3862           -
0.9333   14000    1.3857           -
0.9667   14500    1.3856           -
1.0      15000    1.3859           0.2502

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 4.0.2
  • Transformers: 4.51.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
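
The exact PyTorch build (2.6.0+cu124) depends on your CUDA setup; the remaining packages can be pinned approximately with:

pip install sentence-transformers==4.0.2 transformers==4.51.2 accelerate==1.6.0 datasets==3.5.0 tokenizers==0.21.1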

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}