SentenceTransformer based on FacebookAI/roberta-base
This is a sentence-transformers model finetuned from FacebookAI/roberta-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: FacebookAI/roberta-base
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
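The Pooling module averages the token embeddings (mean pooling) to produce the 768-dimensional sentence vector. As a minimal sketch of the equivalent computation with transformers directly, assuming (as is standard for Sentence Transformers checkpoints) that the Hub repository exposes the underlying RobertaModel weights:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LorMolf/mnrl-apibank-roberta-base")
encoder = AutoModel.from_pretrained("LorMolf/mnrl-apibank-roberta-base")

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Mask out padding tokens, then average the remaining token embeddings.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

batch = tokenizer(["a short example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    out = encoder(**batch)
embedding = mean_pool(out.last_hidden_state, batch["attention_mask"])
print(embedding.shape)  # torch.Size([1, 768])
```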
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LorMolf/mnrl-apibank-roberta-base")
# Run inference
sentences = [
'User: Can you provide me with a list of first aid procedures for a severe burn injury?\nAPI-Request: [First_Aid_Procedures(condition=\'severe burn injury\')]->[{"procedure_name": "Call the emergency number", "description": "Immediately seek medical attention. While waiting for medical help, cool the burn with cool running water for at least 10 minutes or until the pain subsides. Remove any clothing or jewelry that is near the burn and do not touch any blisters that form. Cover the burn with sterile gauze or a clean cloth."}, {"procedure_name": "Seek professional medical attention", "description": "Seek medical attention right away if the burn is deep or covers a large area of the body or if the person experiences a fever, chills, or other signs of infection"}]\nAI: For a severe burn injury, the first aid procedures include calling the emergency number right away and immediately seeking medical attention. While waiting for medical help, you should cool the burn with cool running water for at least 10 minutes, remove any clothing or jewelry that is near the burn, and do not touch any blisters that form. You should also cover the burn with sterile gauze or a clean cloth. If the burn is deep or covers a large area of the body or if the person experiences a fever or chills, it\'s important to seek professional medical attention immediately.\nUser: Can you tell me if there are any natural disasters in my area?\nAI: Sure, what\'s your location?\nUser: I\'m currently at 40.7128° N, 74.0060° W.\nGenerate API Request:',
'def Natural_Disaster_Alerts:\n\t"""\n\t\n\tDescription:\n\tThis API provides information on natural disasters in a particular area. It requires two input parameters, \'latitude\' and \'longitude\', which specify the location coordinates to retrieve the disaster information.\n\t\n\tArguments:\n\t---------\n\t- latitude : string (optional)\n\t Description: The latitude coordinate of a location.\n\t Format: Not specified\n\t- longitude : string (optional)\n\t Description: The longitude coordinate of a location.\n\t Format: Not specified\n\t\n\tOutput:\n\t---------\n\t- data : list (optional)\n\t Description: List of natural disaster information\n\t Format: Not specified\n\t\n\t\n\t"""',
'def check_vaccine_eligibility:\n\t"""\n\t\n\tDescription:\n\tCheck if the patient is eligible for a specific vaccine\n\t\n\tArguments:\n\t---------\n\t- age : integer (optional)\n\t Description: Age of the patient in years\n\t Format: free\n\t- disease : string (optional)\n\t Description: Name of the disease for which the vaccine is required\n\t Format: free\n\t- pregnant : boolean (optional)\n\t Description: Whether the patient is pregnant. true - pregnant, false - not pregnant\n\t Format: enum\n\t\n\tOutput:\n\t---------\n\t- data : string (optional)\n\t Description: Returned vaccine eligibility information\n\t Format: free\n\t\n\t\n\t"""',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
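Since the model appears to be trained to pair API-Bank-style dialogue prompts with API docstrings (see the training samples below), a typical downstream use is semantic search over a corpus of API definitions. A minimal sketch; the corpus and query strings are illustrative placeholders:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LorMolf/mnrl-apibank-roberta-base")

# Illustrative placeholder corpus of API docstrings and a dialogue-style query.
api_docs = [
    "def Natural_Disaster_Alerts: ...",
    "def check_vaccine_eligibility: ...",
]
query = "User: Are there any natural disasters near 40.7128 N, 74.0060 W?\nGenerate API Request:"

doc_embeddings = model.encode(api_docs, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the API docs by cosine similarity to the dialogue prompt.
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=2)[0]
for hit in hits:
    print(api_docs[hit["corpus_id"]], hit["score"])
```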
Evaluation
Metrics
Device Aware Information Retrieval
- Dataset: dev
- Evaluated with src.port.retrieval_evaluator.DeviceAwareInformationRetrievalEvaluator (a hedged analogue using the stock evaluator is sketched after the metrics table)
Metric | Value |
---|---|
cosine_accuracy@1 | 0.0168 |
cosine_accuracy@3 | 0.0556 |
cosine_accuracy@5 | 0.0875 |
cosine_accuracy@10 | 0.165 |
cosine_precision@1 | 0.0168 |
cosine_precision@3 | 0.0185 |
cosine_precision@5 | 0.0175 |
cosine_precision@10 | 0.0165 |
cosine_recall@1 | 0.0168 |
cosine_recall@3 | 0.0556 |
cosine_recall@5 | 0.0875 |
cosine_recall@10 | 0.165 |
cosine_ndcg@1 | 0.0168 |
cosine_ndcg@3 | 0.0382 |
cosine_ndcg@5 | 0.0511 |
cosine_ndcg@10 | 0.0755 |
cosine_mrr@10 | 0.049 |
cosine_map@100 | 0.0776 |
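The evaluator named above is a project-specific subclass, so its exact constructor is not documented in this card. As a hedged analogue, the same cosine retrieval metrics can be produced with the stock InformationRetrievalEvaluator; the queries, corpus, and relevance judgments below are illustrative placeholders:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("LorMolf/mnrl-apibank-roberta-base")

# Placeholder dev split: query id -> dialogue prompt, doc id -> API docstring,
# and query id -> set of relevant doc ids.
queries = {"q1": "User: ...\nGenerate API Request:"}
corpus = {"d1": "def Natural_Disaster_Alerts: ...", "d2": "def check_vaccine_eligibility: ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dev",
)
results = evaluator(model)
print(results)  # includes keys such as dev_cosine_ndcg@10
```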
Training Details
Training Dataset
Unnamed Dataset
- Size: 20,109 training samples
- Columns: sentence_0, sentence_1, and sentence_2
- Approximate statistics based on the first 1000 samples:
 | sentence_0 | sentence_1 | sentence_2 |
---|---|---|---|
type | string | string | string |
details | min: 18 tokens, mean: 151.04 tokens, max: 512 tokens | min: 78 tokens, mean: 216.52 tokens, max: 512 tokens | min: 77 tokens, mean: 199.23 tokens, max: 512 tokens |
- Samples:
- Sample 1, sentence_0:
User: Can you help me add a new alternative medicine remedy to the repository?
AI: Sure! What is the name of the new remedy you want to add?
User: The remedy is called Chamomile Tea.
AI: Great! What are the ingredients of Chamomile Tea, and what are the instructions for preparing it?
User: For Chamomile Tea, we need dried chamomile flowers and hot water, and the instructions are to steep the flowers in hot water for 3-5 minutes.
API-Request: [add_remedy(name='Chamomile Tea', ingredients='['dried chamomile flowers', 'hot water']', instructions='Steep the flowers in hot water for 3-5 minutes.')]->{"status": "Success"}
AI: The new remedy Chamomile Tea has been added to the repository.
User: Can you help me find a list of herbs that can help with inflammation?
AI: Sure! How many results do you want me to show?
User: Can you show me 5 results?
Generate API Request:
- Sample 1, sentence_1:
def search_herbs:
"""
Description:
Search for information about a specific herb in the database.
Arguments:
---------
- herb_name : string (optional)
Description: The name of the herb that is being searched for.
Format: Not specified
Output:
---------
- data : object (optional)
Description: Information about the searched herb.
Format: Not specified
Properties:
- name : string (optional)
Description: The name of the herb.
Format: Not specified
- description : string (optional)
Description: A brief description of the herb and its uses.
Format: Not specified
- scientific_name : string (optional)
Description: The scientific name of the herb.
Format: Not specified
- uses : list (optional)
Description: A list of the different uses for the herb.
Format: Not specified
"""def DoctorAppointmentAvailability:
"""
Description:
API for checking the availability of doctor appointments.
Arguments:
---------
- specialty : str (optional)
Description: The specialty of the doctor.
Format: Not specified
- location : str (optional)
Description: The city location.
Format: Not specified
Output:
---------
- availability : str (optional)
Description: The availability of doctor appointments.
Format: Not specified
"""User: I want to know what meal suggestions you have for a 30-year-old man who wants a meal with 500 calories.
Generate API Request:
- Sample 2, sentence_1:
def get_meal_suggestions:
"""
Description:
Get meal suggestions based on user preferences API. This API has several input parameters, including 'calories', 'diet_preference', 'ingredients', and 'exclude'.
Arguments:
---------
- calories : integer (optional)
Description: The maximum number of calories that the meal can contain.
Format: Not specified
- diet_preference : string (optional)
Description: The preferred diet type, such as vegetarian or ketogenic.
Format: Not specified
- ingredients : string (optional)
Description: Ingredients to be included in the meal.
Format: Not specified
- exclude : string (optional)
Description: Ingredients to be excluded from the meal.
Format: Not specified
Output:
---------
- data : list (optional)
Description: List of meal suggestions
Format: Not specified
"""def PregnancyHealthMonitoringAPI:
"""
Description:
API for monitoring pregnancy-related health parameters.
Arguments:
---------
- week : int (optional)
Description: The pregnancy week for which health parameters are required.
Format: Not specified
Output:
---------
- health_parameters : dict (optional)
Description: Pregnancy-related health parameters for the given week, including weight, blood pressure, and heart rate.
Format: Not specified
"""User: Can you help me schedule a dental appointment for a root canal? I have dental insurance with SmileGuard, and I want to schedule the appointment at Dr. Lee's office on March 24th at 2 pm.
API-Request: [schedule_appointment(user_name='[USER NAME]', service_name='root canal', appointment_date='2039-03-24', appointment_time='14:00:00', provider_name='Dr. Lee's office')]->{"confirmation_code": "[CONFIRMATION CODE]"}
AI: Your appointment for a root canal at Dr. Lee's office has been scheduled on March 24th at 2 pm. Your confirmation code is [CONFIRMATION CODE].
User: I want to know how much a cleaning and a root canal would cost for my dental insurance provider, SmileGuard?
AI: Sure, what is the name of your dental insurance provider?
User: It's SmileGuard.
Generate API Request:
- Sample 3, sentence_1:
def get_dental_services:
"""
Description:
Get a list of dental services and their prices, including basic services like cleaning and more complex procedures like root canals and braces.
Arguments:
---------
- insurance_company : string (optional)
Description: Name of the user's dental insurance company
Format: free
Output:
---------
- data : list (optional)
Description: List of dental services and their prices
Format: Not specified
"""def add_device_compatibility:
"""
Description:
Add a new medical device to the list of compatible devices for a specific medical condition.
Arguments:
---------
- medical_condition : string (optional)
Description: The medical condition for which the new medical device is compatible with.
Format: free
- device_name : string (optional)
Description: The name of the new medical device.
Format: free
- manufacturer : string (optional)
Description: The name of the manufacturer of the new medical device.
Format: free
- company_contact : string (optional)
Description: Contact information for the company that manufactures the new medical device.
Format: free
Output:
---------
- data : object (optional)
Description: Result of adding the new medical device to the list of compatible devices.
Format: Not specified
Properties:
- message : string (optional)
Description: A message indicating the success or failure of the operation.
F...
- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim" }
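MultipleNegativesRankingLoss treats sentence_1 as the positive for each sentence_0 anchor, uses the other in-batch positives as negatives, and additionally takes sentence_2 as an explicit hard negative. A minimal, hedged training sketch with the reported loss parameters; the triplet contents are placeholders:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Start from the base model; a mean-pooling module is added automatically.
model = SentenceTransformer("FacebookAI/roberta-base")

# Placeholder (anchor, positive, hard-negative) triplets mirroring the dataset columns.
train_dataset = Dataset.from_dict({
    "sentence_0": ["User: ...\nGenerate API Request:"],
    "sentence_1": ["def Natural_Disaster_Alerts: ..."],
    "sentence_2": ["def check_vaccine_eligibility: ..."],
})

# scale=20.0 with cosine similarity matches the parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```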
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 2
- per_device_eval_batch_size: 2
- num_train_epochs: 1
- fp16: True
- multi_dataset_batch_sampler: round_robin
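These non-default values map roughly onto SentenceTransformerTrainingArguments as sketched below; the output_dir is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Placeholder output_dir; the remaining values mirror the non-default hyperparameters above.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs/mnrl-apibank-roberta-base",
    eval_strategy="steps",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=1,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)
```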
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 2
- per_device_eval_batch_size: 2
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss | dev_cosine_ndcg@10 |
---|---|---|---|
-1 | -1 | - | 0.1465 |
0.0497 | 500 | 0.2997 | - |
0.0995 | 1000 | 0.3394 | - |
0.1492 | 1500 | 0.5734 | - |
0.1989 | 2000 | 1.0787 | - |
0.2 | 2011 | - | 0.1081 |
0.2486 | 2500 | 1.3863 | - |
0.2984 | 3000 | 1.3863 | - |
0.3481 | 3500 | 1.3861 | - |
0.3978 | 4000 | 1.3855 | - |
0.4 | 4022 | - | 0.0582 |
0.4475 | 4500 | 1.3861 | - |
0.4973 | 5000 | 1.3861 | - |
0.5470 | 5500 | 1.3861 | - |
0.5967 | 6000 | 1.386 | - |
0.6 | 6033 | - | 0.0616 |
0.6464 | 6500 | 1.3859 | - |
0.6962 | 7000 | 1.3864 | - |
0.7459 | 7500 | 1.386 | - |
0.7956 | 8000 | 1.3858 | - |
0.8 | 8044 | - | 0.0726 |
0.8454 | 8500 | 1.3859 | - |
0.8951 | 9000 | 1.386 | - |
0.9448 | 9500 | 1.3861 | - |
0.9945 | 10000 | 1.3856 | - |
1.0 | 10055 | - | 0.0755 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.0.2
- Transformers: 4.51.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}