---
title: Callytics Demo
emoji: π
colorFrom: green
colorTo: purple
sdk: gradio
sdk_version: 5.23.1
app_file: app.py
pinned: false
license: gpl-3.0
short_description: Callytics Demo
---

# Callytics

**Callytics** is an advanced call analytics solution that leverages speech recognition and large language model (LLM)
technologies to analyze phone conversations from customer service and call centers. By processing both the
audio and text of each call, it provides insights such as sentiment analysis, topic detection, conflict detection,
profanity detection, and summarization. These cutting-edge techniques help businesses optimize customer interactions,
identify areas for improvement, and enhance overall service quality.
When an audio file is placed in the `.data/input` directory, the entire pipeline automatically starts running, and the
resulting data is inserted into the database.
**Note:** This is only version `v1.1.0`; many new features will be added, models will be fine-tuned or trained from
scratch, and various optimization efforts will be applied. For more information, check out the Upcoming section.

**Note:** If you would like to contribute to this repository, please read the CONTRIBUTING guide first.
## Table of Contents
- Prerequisites
- Architecture
- Math And Algorithm
- Features
- Demo
- Installation
- File Structure
- Database Structure
- Version Control System
- Upcoming
- Documentations
- License
- Links
- Team
- Contact
- Citation
## Prerequisites

### General

- Python 3.11 (or above)

### Llama

- GPU (min 24GB) (or above)
- Hugging Face Credentials (Account, Token)
- Llama-3.2-11B-Vision-Instruct (or above)

### OpenAI

- GPU (min 12GB) (for other processes such as faster-whisper & NeMo)
- At least one of the following is required:
    - OpenAI Credentials (Account, API Key)
    - Azure OpenAI Credentials (Account, API Key, API Base URL)
## Architecture
## Math and Algorithm

This section describes the mathematical models and algorithms used in the project.

**Note:** Only the mathematical concepts and algorithms specific to this repository, rather than the models used, are
covered here. Please refer to the RESOURCES document under the Documentations section for the repositories and models
utilized or referenced.
### Silence Duration Calculation

The silence durations are derived from the time intervals between speech segments. Let

$$S = \{s_1, s_2, \dots, s_n\}$$

represent the set of silence durations (in seconds) between consecutive speech segments, and let
$\text{factor}$ be a user-defined multiplier.

To determine a threshold that distinguishes significant silence from trivial gaps, two statistical methods can be
applied:

1. **Standard Deviation-Based Threshold**

   - Mean: $$\mu = \frac{1}{n} \sum_{i=1}^{n} s_i$$
   - Standard Deviation: $$\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (s_i - \mu)^2}$$
   - Threshold: $$T_{\text{std}} = \mu + (\sigma \times \text{factor})$$

2. **Median + Interquartile Range (IQR) Threshold**

   - Median: let $s_{(1)} \leq s_{(2)} \leq \dots \leq s_{(n)}$ be the ordered set. Then $M$ is the middle value of
     this ordered set (or the mean of the two middle values when $n$ is even).
   - Quartiles: $Q_1$ and $Q_3$ are the 25th and 75th percentiles of the ordered set.
   - IQR: $$\text{IQR} = Q_3 - Q_1$$
   - Threshold: $$T_{\text{median\_iqr}} = M + (\text{IQR} \times \text{factor})$$

### Total Silence Above Threshold

Once the threshold $T$ (either $T_{\text{std}}$ or $T_{\text{median\_iqr}}$) is defined, we sum only those silence
durations that meet or exceed this threshold:

$$\text{TotalSilence} = \sum_{i=1}^{n} s_i \cdot \mathbf{1}(s_i \geq T)$$

where $\mathbf{1}(s_i \geq T)$ is an indicator function defined as:

$$\mathbf{1}(s_i \geq T) =
\begin{cases}
1, & s_i \geq T \\
0, & s_i < T
\end{cases}$$

**Summary:**

- Identify the silence durations: $S = \{s_1, \dots, s_n\}$.
- Determine the threshold using either the standard deviation-based method ($T_{\text{std}}$) or the
  median+IQR-based method ($T_{\text{median\_iqr}}$).
- Compute the total silence above this threshold.
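The steps above can be sketched in Python. This is a minimal illustration of the two thresholding methods, not the repository's actual implementation; in particular, the exact quantile interpolation the project uses may differ:

```python
import statistics


def silence_above_threshold(durations, factor=1.0, method="std"):
    """Sum the silence durations that meet or exceed a statistical threshold.

    method="std":        T = mean + (std * factor)
    method="median_iqr": T = median + (IQR * factor)
    """
    if method == "std":
        mu = statistics.mean(durations)
        sigma = statistics.pstdev(durations)  # population standard deviation
        threshold = mu + sigma * factor
    elif method == "median_iqr":
        m = statistics.median(durations)
        q1, _, q3 = statistics.quantiles(durations, n=4)
        threshold = m + (q3 - q1) * factor
    else:
        raise ValueError(f"unknown method: {method}")
    # Indicator 1(s_i >= T): keep only the significant silences.
    return sum(s for s in durations if s >= threshold)
```

For example, with `durations = [0.2, 0.3, 0.25, 5.0]` and `factor=1.0`, only the 5.0-second gap exceeds either threshold, so the total significant silence is 5.0 seconds.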
## Features
- Speech Enhancement
- Sentiment Analysis
- Profanity Word Detection
- Summary
- Conflict Detection
- Topic Detection
## Demo

## Installation

### Linux/Ubuntu
```bash
sudo apt update -y && sudo apt upgrade -y
sudo apt install -y ffmpeg build-essential g++
git clone https://github.com/bunyaminergen/Callytics
cd Callytics
conda env create -f environment.yaml
conda activate Callytics
```
### Environment

`.env` file sample:

```bash
# CREDENTIALS
# OPENAI
OPENAI_API_KEY=

# HUGGINGFACE
HUGGINGFACE_TOKEN=

# AZURE OPENAI
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_BASE=
AZURE_OPENAI_API_VERSION=

# DATABASE
DB_NAME=
DB_USER=
DB_PASSWORD=
DB_HOST=
DB_PORT=
DB_URL=
```
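A small helper like the following (illustrative only, not part of the project) can verify that the required variables are set before the pipeline runs; which variables are actually mandatory depends on whether you use OpenAI or Azure OpenAI, so the list below is an assumption:

```python
import os

# Illustrative assumption: the minimum variables this sketch checks for.
REQUIRED_VARS = ["HUGGINGFACE_TOKEN", "DB_URL"]


def missing_env(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```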
### Database

This section provides an example database and tables. The design is simple and well-structured. If you create the
tables and columns with the same structure in your remote database, you will not encounter errors in the code.
However, if you want to change the database structure, you will also need to refactor the code.

**Note:** Refer to the Database Structure section for the database schema and tables.

```bash
sqlite3 .db/Callytics.sqlite < src/db/sql/Schema.sql
```
### Grafana

This section explains how to install Grafana in your local environment. Since Grafana is a third-party open-source
monitoring application, you must handle its installation yourself and connect your database. Of course, you can also
use Grafana Cloud instead of a local installation.

```bash
sudo apt update -y && sudo apt upgrade -y
sudo apt install -y apt-transport-https software-properties-common wget
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt update -y
sudo apt install -y grafana
sudo systemctl start grafana-server
sudo systemctl enable grafana-server
sudo systemctl daemon-reload
```

Once running, Grafana is available at `http://localhost:3000`.
#### SQLite Plugin

```bash
sudo grafana-cli plugins install frser-sqlite-datasource
sudo systemctl restart grafana-server
sudo systemctl daemon-reload
```
## File Structure

```text
.
├── automation
│   └── service
│       └── callytics.service
├── config
│   ├── config.yaml
│   ├── nemo
│   │   └── diar_infer_telephonic.yaml
│   └── prompt.yaml
├── .data
│   ├── example
│   │   └── LogisticsCallCenterConversation.mp3
│   └── input
├── .db
│   └── Callytics.sqlite
├── .docs
│   ├── documentation
│   │   ├── CONTRIBUTING.md
│   │   └── RESOURCES.md
│   └── img
│       ├── Callytics.drawio
│       ├── Callytics.gif
│       ├── CallyticsIcon.png
│       ├── Callytics.png
│       ├── Callytics.svg
│       └── database.png
├── .env
├── environment.yaml
├── .gitattributes
├── .github
│   └── CODEOWNERS
├── .gitignore
├── LICENSE
├── main.py
├── README.md
├── requirements.txt
└── src
    ├── audio
    │   ├── alignment.py
    │   ├── analysis.py
    │   ├── effect.py
    │   ├── error.py
    │   ├── io.py
    │   ├── metrics.py
    │   ├── preprocessing.py
    │   ├── processing.py
    │   └── utils.py
    ├── db
    │   ├── manager.py
    │   └── sql
    │       ├── AudioPropertiesInsert.sql
    │       ├── Schema.sql
    │       ├── TopicFetch.sql
    │       ├── TopicInsert.sql
    │       └── UtteranceInsert.sql
    ├── text
    │   ├── llm.py
    │   ├── model.py
    │   ├── prompt.py
    │   └── utils.py
    └── utils
        └── utils.py

19 directories, 43 files
```
## Database Structure

## Version Control System

### Releases

### Branches
## Upcoming
- Speech Emotion Recognition: Develop a model to automatically detect emotions from speech data.
- New Forced Alignment Model: Train a forced alignment model from scratch.
- New Vocal Separation Model: Train a vocal separation model from scratch.
- Unit Tests: Add a comprehensive unit testing script to validate functionality.
- Logging Logic: Implement a more comprehensive and structured logging mechanism.
- Warnings: Add meaningful and detailed warning messages for better user guidance.
- Real-Time Analysis: Enable real-time analysis capabilities within the system.
- Dockerization: Containerize the repository to ensure seamless deployment and environment consistency.
- New Transcription Models: Integrate and test new transcription models such as AIOLA's Multi-Head Speech Recognition Model.
- Noise Reduction Model: Identify, test, and integrate a deep learning-based noise reduction model. Consider existing models like Facebook Research Denoiser, Noise2Noise, Audio Denoiser CNN. Write test scripts for evaluation, and if necessary, train a new model for optimal performance.
### Considerations
- Detect CSR's identity via Voice Recognition/Identification instead of Diarization and LLM.
- Transform the code structure into a pipeline for better modularity and scalability.
- Publish the repository as a Python package on PyPI for wider distribution.
- Convert the repository into a Linux package to support Linux-based systems.
- Implement a two-step processing workflow: perform *diarization* (speaker segmentation) first, then apply *transcription* for each identified speaker separately. This approach can improve transcription accuracy by leveraging speaker separation.
- Enable parallel processing for tasks such as diarization, transcription, and model inference to improve overall system performance and reduce processing time.
- Explore using Docker Compose for multi-container orchestration if required.
- Upload the models and relevant resources to Hugging Face for easier access, sharing, and community collaboration.
- Consider writing a Command Line Interface (CLI) to simplify user interaction and improve usability.
- Test the ability to use different language models (LLMs) for specific tasks. For instance, using BERT for profanity detection. Evaluate their performance and suitability for different use cases as a feature.
## Documentations

## License

## Links

## Team

## Contact
## Citation

```bibtex
@software{Callytics,
  author  = {Bunyamin Ergen},
  title   = {{Callytics}},
  year    = {2024},
  month   = {12},
  url     = {https://github.com/bunyaminergen/Callytics},
  version = {v1.1.0},
}
```