---
license: cc-by-nc-sa-4.0
language:
- zh
- en
tags:
- speech
- audio
pretty_name: voxbox
size_categories:
- 10M<n<100M
task_categories:
- text-to-speech
---
# VoxBox

This dataset is a curated collection of bilingual speech corpora annotated with clean transcriptions and rich metadata, including age, gender, and emotion.
## Dataset Structure
```
.
├── audios/
│   ├── aishell-3/              # Audio files (organised by sub-corpus)
│   └── ...
└── metadata/
    ├── aishell-3.jsonl
    ├── casia.jsonl
    ├── commonvoice_cn.jsonl
    ├── ...
    └── wenetspeech4tts.jsonl   # JSONL metadata files
```
Each JSONL file corresponds to a specific corpus and contains one metadata record per audio sample.
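As a minimal sketch, a metadata file can be read line by line with the Python standard library; this assumes the `metadata/` directory shown above sits in the current working directory:

```python
import json
from pathlib import Path

def load_metadata(jsonl_path):
    """Yield one metadata record (a dict) per line of a VoxBox JSONL file."""
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Example: look at the first record of the AISHELL-3 metadata file.
for record in load_metadata(Path("metadata") / "aishell-3.jsonl"):
    print(record["index"], record["duration"], record["wav_path"])
    break
```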
## Metadata Format
Each line in the *.jsonl files is a JSON object describing one audio sample. Below is a typical example:
```json
{
  "index": "VCTK_0000044280",
  "split": "train",
  "language": "en",
  "age": "Youth-Adult",
  "gender": "female",
  "emotion": "UNKNOWN",
  "pitch": 180.626,
  "pitch_std": 0.158,
  "speed": 4.2,
  "duration": 3.84,
  "speech_duration": 3.843,
  "syllable_num": 16,
  "text": "Clearly, the need for a personal loan is written in the stars.",
  "syllables": "K-L-IH1-R L-IY0 DH-AH0 N-IY1-D F-AO1 R-AH0 P-ER1 S-IH0 N-IH0-L L-OW1 N-IH1 Z-R-IH1 T-AH0 N-IH0-N DH-AH0-S T-AA1-R-Z",
  "wav_path": "vctk/VCTK_0000044280.flac"
}
```
The corresponding audio file is located inside the extracted .tar.gz archive.
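If the archives have not yet been unpacked, a record's `wav_path` can also be read straight from its sub-corpus archive. The sketch below is illustrative only: it assumes each sub-corpus ships as `audios/<corpus>.tar.gz` and that member names match `wav_path`, neither of which is guaranteed by this card.

```python
import io
import tarfile

import soundfile as sf  # third-party: pip install soundfile

def read_audio_from_archive(archive_path, wav_path):
    """Read a single audio member out of a sub-corpus .tar.gz without extracting everything."""
    with tarfile.open(archive_path, "r:gz") as tar:
        fileobj = tar.extractfile(wav_path)  # raises KeyError if the member is missing
        audio, sample_rate = sf.read(io.BytesIO(fileobj.read()))
        return audio, sample_rate

# Assumed archive layout: the VCTK portion lives in audios/vctk.tar.gz;
# adjust both names to the actual contents of the downloaded archives.
audio, sr = read_audio_from_archive("audios/vctk.tar.gz", "vctk/VCTK_0000044280.flac")
print(audio.shape, sr)
```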
**Key Fields:**

- `index`: Unique identifier for the audio sample.
- `split`: Train/test split.
- `language`: Language of the audio sample; currently only English and Chinese are supported.
- `age`, `gender`, `emotion`: Speaker and utterance attributes.
- `pitch`, `pitch_std`, `speed`: Acoustic features.
- `duration`: Duration of the audio sample in seconds.
- `speech_duration`: Duration of the speech in seconds, excluding leading and trailing silence.
- `syllable_num`: Number of syllables in the utterance.
- `text`: Transcription of the utterance.
- `syllables`: Syllable-level transcription.
- `wav_path`: Path to the audio file.
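As a usage illustration, these per-sample fields lend themselves to simple filtering and aggregation. The sketch below uses pandas (an assumption, not a dependency of the dataset) on one of the shipped metadata files; note that the `speed` value in the example above appears consistent with syllables per second of speech (16 / 3.843 ≈ 4.2).

```python
import pandas as pd  # third-party: pip install pandas

# One row per audio sample; lines=True parses the JSONL format directly.
df = pd.read_json("metadata/aishell-3.jsonl", lines=True)

# Simple aggregations over the documented fields.
total_hours = df["duration"].sum() / 3600
hours_by_gender = df.groupby("gender")["duration"].sum() / 3600

# Example filter: clips between 2 s and 10 s with a labelled emotion.
subset = df[df["duration"].between(2.0, 10.0) & (df["emotion"] != "UNKNOWN")]

print(f"total: {total_hours:.1f} h")
print(hours_by_gender)
print(f"{len(subset)} clips match the filter")
```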
## 📚 More Information & Download Instructions
For detailed information about the dataset, including download scripts and usage instructions, please visit the official GitHub repository:
🔗 https://github.com/SparkAudio/VoxBox
## 📌 License & Attribution
Please refer to the original licenses of each sub-corpus. This dataset merely aggregates and annotates the metadata in a unified structure for research purposes.
## 📬 Citation
If you use this file or its associated data in your research, please consider citing:
```bibtex
@article{wang2025spark,
  title={Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens},
  author={Wang, Xinsheng and Jiang, Mingqi and Ma, Ziyang and Zhang, Ziyu and Liu, Songxiang and Li, Linqin and Liang, Zheng and Zheng, Qixi and Wang, Rui and Feng, Xiaoqin and others},
  journal={arXiv preprint arXiv:2503.01710},
  year={2025}
}
```