ServiceNow Table Answering
Introduction
ServiceNow is a platform that helps businesses automate their processes and workflows, offering solutions such as ITSM. Currently, ServiceNow users generally need to apply filters and/or build dashboards to observe data in ServiceNow tables, such as incidents and problems. Building dashboards and reports often requires the help of developers and can be a hassle when all you want is quick information. Dashboards are useful for visual representation, but it would also be useful to simply ask a chatbot questions about the data. I created sample tables with a set of ServiceNow fields and used those as data. The task is to create a custom LLM chat assistant that takes in data from tables ServiceNow uses, such as incident, change, and problem, and responds to user queries in natural language.
Training Data
For this project, the training data was structured around ServiceNow ITSM tables, specifically the Incident, Change, and Problem tables. I used a subset of fields from each table; for example, the Problem table has problem id, priority, status, root cause, and resolved at fields. Since I can't use official data from in-use ServiceNow instances, which contain private information, I generated a synthetic dataset with custom code. I then structured that data in SQA format, which is the best format for the model I was using, TAPAS. For this, I saved each table in a CSV file. Each refined example contains an id, a question, a table_file, answer_coordinates if the answer appears in the table itself, the actual answer, and a float answer if the answer is a numeric value not present in the data, such as a count. There is also an aggregation_label field, which I set right before the training process but after the train/test split. I used train_test_split() to obtain the training, validation, and test data, specifically with a seed of 42:
```python
from sklearn.model_selection import train_test_split

# Hold out 10% of the data as the test set.
train_val_data, test_data = train_test_split(data, test_size=0.1, random_state=42)
# Then split train+validation into train and validation.
train_data, val_data = train_test_split(train_val_data, test_size=0.1, random_state=42)
```
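An example of how one refined training example appears (the values below are illustrative, not actual rows from the dataset):

```python
# Hypothetical refined example in the SQA-style format described above.
example = {
    "id": "q_0042",
    "question": "How many problems are still open?",
    "table_file": "tables/problem_3.csv",    # each table is saved as a CSV file
    "answer_coordinates": [(0, 2), (4, 2)],  # (row, column) cells, when the answer is in the table
    "answer": "2",
    "float_answer": 2.0,                     # set when the answer is numeric, e.g. a count
}
```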
Training Method
I used full fine-tuning. The model did not really need broad generalization abilities: its primary purpose is to take ServiceNow tables and answer queries based on those tables. Keeping some generalization ability would be nice, but it isn't essential. PEFT could also work to prevent catastrophic forgetting, but again, generalization is not hugely important here. The drawback I had expected was some generalization loss, but that wasn't really the case.
These are the arguments/hyperparameters I used. I tried higher epoch counts, but those usually produced worse results:
```python
num_train_epochs=1,                 # Number of training epochs
per_device_train_batch_size=32,     # Batch size per device during training
per_device_eval_batch_size=64,      # Batch size per device during evaluation
learning_rate=0.00001,
warmup_steps=100,                   # Number of warmup steps for learning rate scheduler
weight_decay=0.01,                  # Strength of weight decay
evaluation_strategy="steps",        # Evaluate every 'eval_steps'
eval_steps=50,                      # Evaluation frequency in steps
logging_steps=50,                   # Log every 50 steps
save_steps=150,                     # Save a checkpoint every 150 steps
save_total_limit=2,
load_best_model_at_end=True,        # Load the best model when finished training
metric_for_best_model="eval_loss",  # Metric to use for best model selection
```
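For context, a minimal sketch of how these arguments plug into the Hugging Face Trainer (the output_dir and dataset variable names are placeholders, not the exact names from my training script):

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="tapas_servicenow",  # placeholder output directory
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    learning_rate=0.00001,
    warmup_steps=100,
    weight_decay=0.01,
    evaluation_strategy="steps",
    eval_steps=50,
    logging_steps=50,
    save_steps=150,
    save_total_limit=2,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,                  # the TapasForQuestionAnswering model
    args=training_args,
    train_dataset=train_dataset,  # placeholder: encoded training split
    eval_dataset=val_dataset,     # placeholder: encoded validation split
)
trainer.train()
```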
Evaluation
I used three benchmarks: the WikiTableQuestions (WTQ) dataset, the TabFact dataset, and SQA. Fine-tuning did not harm results on the WTQ validation set or on TabFact, where both the pre-trained and fine-tuned models scored 0.3405 and 0.5005, respectively. Fine-tuning slightly improved results on SQA. There were clear improvements on the test set of the synthetic dataset, however, with a large jump in accuracy from 0.2933 to 0.4667 after fine-tuning.
| Model | Test Set of Synthetic Dataset | Benchmark 1 (WTQ Validation Set) | Benchmark 2 (TabFact) | Benchmark 3 (SQA) |
|---|---|---|---|---|
| google/tapas-base-finetuned-wtq (before fine-tuning) | 0.2933 | 0.3405 | 0.5005 | 0.2512 |
| google/tapas-base-finetuned-wtq (fine-tuned) | 0.4667 | 0.3405 | 0.5005 | 0.2525 |
| mistralai/Mistral-7B-Instruct-v0.3 | 0 | Exact Match: 0.0346 / Fuzzy Match: 0.4744 | 0.4995 | 0.0296 |
| meta-llama/Llama-3.2-1B | 0.0133 | Exact Match: 0.0593 / Fuzzy Match: 0.2769 | 0.4995 | 0.0238 |
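The reported numbers are accuracies; for the LLM baselines I also report fuzzy matches. A minimal sketch of the two metrics, assuming lists of predicted and gold answer strings (the 0.8 similarity threshold here is an assumption, not necessarily the rule that produced the numbers above):

```python
from difflib import SequenceMatcher

def exact_match_accuracy(preds, golds):
    # Fraction of predictions that match the gold answer exactly (case-insensitive).
    hits = sum(str(p).strip().lower() == str(g).strip().lower() for p, g in zip(preds, golds))
    return hits / len(golds)

def fuzzy_match_accuracy(preds, golds, threshold=0.8):
    # Counts a prediction as correct if it is sufficiently similar to the gold answer.
    hits = sum(
        SequenceMatcher(None, str(p).lower(), str(g).lower()).ratio() >= threshold
        for p, g in zip(preds, golds)
    )
    return hits / len(golds)
```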
Usage and Intended Uses
This model is designed for question answering over tabular data, primarily for querying ITSM tables (change, problem, and incident). It can answer questions such as which issues are most common or how many records fall into a given category.
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

# Load the fine-tuned model and tokenizer from the Hub.
saved_path = "am5uc/ServiceNow_Table_Question_Answering"
tokenizer = TapasTokenizer.from_pretrained(saved_path)
model = TapasForQuestionAnswering.from_pretrained(saved_path)

question = "How many Hardware Upgrade changes are still pending?"
table_df = pd.DataFrame({
    "change_id": ["CHG3000", "CHG3001", "CHG3002", "CHG3003"],
    "category": ["Security Patch", "Software Update", "Hardware Upgrade", "Software Update"],
    "status": ["Rejected", "In Progress", "In Progress", "Completed"],
    "approved_by": ["", "Manager2", "", "Admin1"],
    "implementation_date": ["", "", "", "2023-05-30"],
})

# Tokenize both question and table together.
inputs = tokenizer(table=table_df, queries=[question], padding="max_length", return_tensors="pt")

# --- Helper function: turn model logits into a final answer ---
def get_final_answer(model, tokenizer, inputs, table_df):
    outputs = model(**inputs)
    logits = outputs.logits
    logits_agg = outputs.logits_aggregation
    # Convert token-level logits into cell coordinates and an aggregation index.
    predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
        inputs,
        logits.detach(),
        logits_agg=logits_agg.detach(),
    )
    aggregation_operators = ["NONE", "SUM", "AVERAGE", "COUNT"]
    agg_op_idx = predicted_aggregation_indices[0] if predicted_aggregation_indices else 0
    agg_op = aggregation_operators[agg_op_idx]
    # Gather the predicted cell values from the table.
    predicted_cells = [table_df.iat[row, col] for row, col in predicted_answer_coordinates[0]]
    if agg_op == "COUNT":
        answer = len(predicted_cells)
    elif agg_op == "SUM":
        try:
            answer = sum(float(cell) for cell in predicted_cells)
        except ValueError:
            answer = "Could not SUM non-numeric cells"
    elif agg_op == "AVERAGE":
        try:
            answer = sum(float(cell) for cell in predicted_cells) / len(predicted_cells)
        except ValueError:
            answer = "Could not AVERAGE non-numeric cells"
    else:  # NONE
        answer = predicted_cells
    return agg_op, answer, predicted_cells

_, answer, _ = get_final_answer(model, tokenizer, inputs, table_df)
print(answer)  # Intended answer for this table: 1 (CHG3002), though model output may vary
```
Prompt Format
The prompt for the TAPAS model is a natural language question paired with a structured table passed in as a pandas DataFrame. TAPAS does not work from a single free-form prompt; it generally requires both a question and a table DataFrame.
```python
question = "How many Hardware Upgrade changes are still pending?"
table_df = pd.DataFrame({
    "change_id": ["CHG3000", "CHG3001", "CHG3002", "CHG3003"],
    "category": ["Security Patch", "Software Update", "Hardware Upgrade", "Software Update"],
    "status": ["Rejected", "In Progress", "In Progress", "Completed"],
    "approved_by": ["", "Manager2", "", "Admin1"],
    "implementation_date": ["", "", "", "2023-05-30"],
})
inputs = tokenizer(table=table_df, queries=[question], padding="max_length", return_tensors="pt")
```
Alternatively, you can define the table as a dictionary in JSON-like form and convert it with table = pd.DataFrame(table) before passing it to the tokenizer.
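For example (a minimal sketch; note that TAPAS expects all table cells to be strings):

```python
# JSON-like definition of a table, converted to a DataFrame at tokenization time.
table = {
    "change_id": ["CHG3000", "CHG3001"],
    "category": ["Security Patch", "Software Update"],
    "status": ["Rejected", "In Progress"],
}
inputs = tokenizer(table=pd.DataFrame(table), queries=[question], padding="max_length", return_tensors="pt")
```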
Expected Output Format
Tokenize the inputs, then run the get_final_answer helper defined in the Usage section above. It returns the aggregation operation, the answer, and the predicted cells; the middle return value is the predicted answer.
```python
# Tokenize both question and table together.
inputs = tokenizer(table=table_df, queries=[question], padding="max_length", return_tensors="pt")

# get_final_answer is the helper defined in the Usage section above.
_, answer, _ = get_final_answer(model, tokenizer, inputs, table_df)
print(answer)
```
If the question asks for a count, such as how many changes have been completed, the answer is a single number. If it asks about the most common incident status or root cause, the answer is the status or root cause the model predicts.
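For the example question above, the full return values might look like this (illustrative; actual model output can differ):

```python
agg_op, answer, cells = get_final_answer(model, tokenizer, inputs, table_df)
# agg_op -> "COUNT"
# answer -> 1
# cells  -> ["CHG3002"]
```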
Limitations
The model still does not come close to 100% accuracy; a larger model could possibly help. I have also run into issues with larger tables before, so table size may be a limitation, and once again a larger model could possibly help there too. In addition, the model needs its input as a question plus a table in DataFrame format, so extra preprocessing is necessary. Increasing the number of training samples could also improve results.
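As a partial workaround for large tables, the TapasTokenizer can drop rows that do not fit into the model's input window (a minimal sketch; this truncation strategy is a standard tokenizer option, not something used in training here):

```python
# Drop trailing table rows that would exceed the model's maximum input length.
inputs = tokenizer(
    table=table_df,
    queries=[question],
    truncation="drop_rows_to_fit",
    padding="max_length",
    return_tensors="pt",
)
```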
Model Card Authors
Abhinandan Mekap