---
license: mit
datasets:
- mozilla-foundation/common_voice_4_0
- google/fleurs
- llm-lab/SpeechBrown
pipeline_tag: automatic-speech-recognition
---

[![arXiv](https://img.shields.io/badge/arXiv-Paper-b31b1b.svg)](https://arxiv.org/abs/2412.13071) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/language-modeling-lab/CLASP) [![Website](https://img.shields.io/website?url=https%3A%2F%2Fmultimodalrag.github.io%2F)](https://clasp1.github.io/)


[Models](https://huggingface.co/llm-lab/CLASP) | [Springer Link](https://link.springer.com/chapter/10.1007/978-3-031-88717-8_2) | [arXiv Link](https://arxiv.org/abs/2412.13071) | [Proposed Dataset](https://huggingface.co/datasets/llm-lab/SpeechBrown)  | [ACM Digital Library](https://dl.acm.org/doi/10.1007/978-3-031-88717-8_2) | [Website](https://clasp1.github.io/)


**CLASP** (Contrastive Language-Speech Pretraining) is a novel, lightweight, multilingual, multimodal representation designed for audio-text retrieval.  
To learn more about our proposed model, please refer to this [paper](https://arxiv.org/abs/2412.13071), which was published at **ECIR 2025**. All code is available on this [GitHub page](https://github.com/language-modeling-lab/CLASP).  
The newly introduced SpeechBrown dataset, which we created for training this model, can be found on [this page](https://huggingface.co/datasets/llm-lab/SpeechBrown).

CLASP produces semantic embeddings for raw speech in a 768-dimensional multilingual representation space. These embeddings can be used for tasks such as speech retrieval and classification.  

This repository contains several versions of the CLASP and LASP models we trained:  

- `CLASP_Concat_Final_Fusion_Encoder.pt`: The best model we trained based on retrieval and classification metrics. It uses the concatenation fusion encoder strategy and is trained with contrastive loss.  
- `CLASP_Gating.pt`: Trained with contrastive loss and employs a gating algorithm.  
- `LASP_Concat.pt`: Trained with Huber loss and employs the concatenation strategy.  
- `LASP_Gating.pt`: Trained with Huber loss and employs the gating algorithm.  

To use these models or train your own on custom datasets, please refer to our [GitHub page](https://github.com/language-modeling-lab/CLASP). The `clasp-inference.ipynb` notebook provides an example of loading and using the model.  
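As a minimal illustration, the snippet below loads one of the checkpoints above with plain PyTorch. Whether each `.pt` file stores a state dict or a pickled module depends on the training code in the GitHub repo, so the commented line is an assumption to adapt to the repo's model class (see `clasp-inference.ipynb` for the actual loading code).

```python
import torch

# Load the best-performing checkpoint (concatenation fusion encoder).
checkpoint = torch.load("CLASP_Concat_Final_Fusion_Encoder.pt", map_location="cpu")

# The fusion-encoder class is defined in the GitHub repo; if the file holds a
# state dict, it would be restored roughly along these lines:
# fusion_encoder.load_state_dict(checkpoint)
```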

## Steps for Inference with CLASP
1. Load our model with the specified architecture.  
2. Load the [EfficientNet](https://pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b7.html) encoder for spectrogram encoding.  
3. Load the [HuBERT](https://huggingface.co/facebook/hubert-large-ls960-ft) encoder for self-supervised speech encoding.  
4. Use our scripts to generate embeddings for your audio files.  
5. Use these embeddings for tasks like classification or speech retrieval. For speech retrieval, you can load the [LaBSE](https://huggingface.co/sentence-transformers/LaBSE) sentence transformer to compute the cosine similarity between query and speech embeddings.  
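The outline below sketches these steps in PyTorch. The CLASP fusion encoder itself is defined in the GitHub repo (see `clasp-inference.ipynb`), so the lines involving `clasp` and `spectrogram_features` are left as commented placeholders rather than working calls; the HuBERT and LaBSE checkpoints are the public ones linked above, and `example.wav` stands in for your own audio.

```python
import torch
import torchaudio
from transformers import HubertModel, Wav2Vec2FeatureExtractor
from sentence_transformers import SentenceTransformer, util

device = "cuda" if torch.cuda.is_available() else "cpu"

# Steps 2-3: load the audio encoders. The EfficientNet-B7 spectrogram branch
# comes from torchvision; its wiring into the fusion encoder is repo-specific
# and omitted here.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-large-ls960-ft")
hubert = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft").to(device).eval()

# Step 1: load a trained CLASP checkpoint (model class lives in the repo).
# clasp = load_clasp("CLASP_Concat_Final_Fusion_Encoder.pt").to(device).eval()

# Step 4: embed an audio file (CLASP outputs a 768-dimensional vector).
waveform, sr = torchaudio.load("example.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)
inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt").to(device)
with torch.no_grad():
    hubert_features = hubert(**inputs).last_hidden_state  # fed to the fusion encoder
    # speech_emb = clasp(hubert_features, spectrogram_features)  # shape (1, 768)

# Step 5: speech retrieval via cosine similarity against a LaBSE query embedding.
labse = SentenceTransformer("sentence-transformers/LaBSE")
query_emb = labse.encode("a short news report about the weather", convert_to_tensor=True)
# score = util.cos_sim(query_emb, speech_emb)
```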

## Architecture Overview

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/3Eb-6SXQ6c48jJNZedrsZ.png)

## Contributions
1. We introduce CLASP (Contrastive Language-Speech Pretraining), a novel, lightweight, multilingual, multimodal representation designed for audio-text retrieval.  
2. We present SpeechBrown, a diverse paired speech-text dataset in 15 categories, covering a wide range of topics from fiction to religion.  
3. We demonstrate that combining audio spectrograms with a pre-trained self-supervised speech model enhances audio encoding in retrieval applications.  
4. Evaluations in multiple languages show that CLASP achieves new benchmarks in HITS@1, Mean Reciprocal Rank (MRR), and Mean Rank (meanR) metrics.  

## Citations
If you find our paper, code, data, or models useful, please cite the paper:  
```
@inproceedings{10.1007/978-3-031-88717-8_2,
  author = {Abootorabi, Mohammad Mahdi and Asgari, Ehsaneddin},
  title = {CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval},
  year = {2025},
  isbn = {978-3-031-88716-1},
  publisher = {Springer-Verlag},
  address = {Berlin, Heidelberg},
  url = {https://doi.org/10.1007/978-3-031-88717-8_2},
  doi = {10.1007/978-3-031-88717-8_2},
  abstract = {This study introduces CLASP (Contrastive Language-Speech Pretraining), a multilingual, multimodal representation tailored for audio-text information retrieval. CLASP leverages the synergy between spoken content and textual data. During training, we utilize our newly introduced speech-text dataset, which encompasses 15 diverse categories ranging from fiction to religion. CLASP’s audio component integrates audio spectrograms with a pre-trained self-supervised speech model, while its language encoding counterpart employs a sentence encoder pre-trained on over 100 languages. This unified lightweight model bridges the gap between various modalities and languages, enhancing its effectiveness in handling and retrieving multilingual and multimodal data. Our evaluations across multiple languages demonstrate that CLASP establishes new benchmarks in HITS@1, MRR, and meanR metrics, outperforming traditional ASR-based retrieval methods that rely on transcribing speech into text for subsequent text retrieval, especially in specific scenarios.},
  booktitle = {Advances in Information Retrieval: 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6–10, 2025, Proceedings, Part IV},
  pages = {10–20},
  numpages = {11},
  keywords = {Multimodal IR, Speech Retrieval, Contrastive Learning},
  location = {Lucca, Italy}
}
```

## Contact
If you have questions, please email [email protected] or [email protected].