---
license: apache-2.0
language:
- en
- zh
---

# Model Description
This is the Hugging Face model card for MegaTTS 3 👋
- GitHub: https://github.com/bytedance/MegaTTS3
- [Demo Video](https://github.com/user-attachments/assets/0174c111-f392-4376-a34b-0b5b8164aacc)
- Hugging Face Space: (coming soon)

## Installation
``` sh
# Clone the repository
git clone https://github.com/bytedance/MegaTTS3
cd MegaTTS3
```

**Model Download**
``` sh
huggingface-cli download ByteDance/MegaTTS3 --local-dir ./checkpoints --local-dir-use-symlinks False
```

**Requirements (for Linux)**
``` sh
# Create a Python 3.10 conda env (you could also use virtualenv)
conda create -n megatts3-env python=3.10
conda activate megatts3-env
pip install -r requirements.txt

# Set the root directory
export PYTHONPATH="/path/to/MegaTTS3:$PYTHONPATH"

# [Optional] Set GPU
export CUDA_VISIBLE_DEVICES=0

# If you encounter bugs with pydantic during inference, check whether the installed versions of pydantic and gradio are compatible.
# [Note] If you encounter bugs related to httpx, check whether your environment variable "no_proxy" contains patterns like "::".
```
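
If you do hit the pydantic or gradio issue mentioned above, a quick way to compare the installed versions against the pins in `requirements.txt` (plain pip/shell commands, shown only as a convenience):

``` sh
# Print name and version of both packages for a quick compatibility check
pip show pydantic gradio | grep -E "^(Name|Version):"

# Inspect no_proxy for the "::" pattern mentioned in the note above
echo "$no_proxy"
```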

**Requirements (for Windows)**
``` sh
# [The Windows version is currently under testing]
# Comment out the dependency below in requirements.txt:
# # WeTextProcessing==1.0.4.1

# Create a Python 3.10 conda env (you could also use virtualenv)
conda create -n megatts3-env python=3.10
conda activate megatts3-env
pip install -r requirements.txt
conda install -y -c conda-forge pynini==2.1.5
pip install WeTextProcessing==1.0.3

# [Optional] If you want GPU inference, you may need to install a specific version of PyTorch for your GPU from https://pytorch.org/.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

# [Note] If you encounter bugs related to `ffprobe` or `ffmpeg`, you can install them through `conda install -c conda-forge ffmpeg`

# Set environment variable for root directory
set PYTHONPATH="C:\path\to\MegaTTS3;%PYTHONPATH%"               # Windows (cmd)
$env:PYTHONPATH="C:\path\to\MegaTTS3;$env:PYTHONPATH"           # PowerShell on Windows
conda env config vars set PYTHONPATH="C:\path\to\MegaTTS3;%PYTHONPATH%"  # For conda users

# [Optional] Set GPU
set CUDA_VISIBLE_DEVICES=0        # Windows (cmd)
$env:CUDA_VISIBLE_DEVICES=0       # PowerShell on Windows
```
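
To confirm the path took effect in a fresh shell, here is a generic one-liner (not project-specific) that prints any MegaTTS3 entries Python actually sees:

``` sh
# An empty list means PYTHONPATH was not picked up by this shell
python -c "import sys; print([p for p in sys.path if 'MegaTTS3' in p])"
```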

**Requirements (for Docker)**
``` sh
# [The Docker version is currently under testing]
# ! You should download the pretrained checkpoints before running the following command
docker build . -t megatts3:latest

# For GPU inference
docker run -it -p 7929:7929 --gpus all -e CUDA_VISIBLE_DEVICES=0 megatts3:latest
# For CPU inference
docker run -it -p 7929:7929 megatts3:latest

# Visit http://0.0.0.0:7929/ for the gradio UI.
```
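
Once the container is running, a quick reachability probe from the host (a generic curl check, assuming the default port mapping above):

``` sh
# Expect an HTTP 200 status line once the gradio server is ready
curl -sI http://localhost:7929/ | head -n 1
```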

> [!IMPORTANT]
> For security reasons, we do not upload the parameters of the WaveVAE encoder to the above links. You can only use the pre-extracted latents from [link1](https://drive.google.com/drive/folders/1QhcHWcy20JfqWjgqZX1YM3I6i9u4oNlr?usp=sharing) for inference. If you want to synthesize speech for speaker A, you need "A.wav" and "A.npy" in the same directory. If you have any questions or suggestions for our model, please email us.
>
> This project is primarily intended for academic purposes. For academic datasets requiring evaluation, you may upload them to the voice request queue in [link2](https://drive.google.com/drive/folders/1gCWL1y_2xu9nIFhUX_OW5MbcFuB7J5Cl?usp=sharing) (each clip within 24 seconds). After verifying that your uploaded voices are free from safety issues, we will upload their latent files to [link1](https://drive.google.com/drive/folders/1QhcHWcy20JfqWjgqZX1YM3I6i9u4oNlr?usp=sharing) as soon as possible.
>
> In the coming days, we will also prepare and release the latent representations for some common TTS benchmarks.

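As a convenience, here is a small shell check (not part of the repo; the `./prompts` path is only an example) that verifies every downloaded prompt wav has its matching latent file:

``` sh
# Each speaker prompt needs a .wav/.npy pair with the same basename in the same directory
for wav in ./prompts/*.wav; do
  npy="${wav%.wav}.npy"
  [ -f "$npy" ] || echo "missing latent for $wav"
done
```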
97
+ ## Inference
98
+
99
+ **Command-Line Usage (Standard)**
100
+ ``` bash
101
+ # p_w (intelligibility weight), t_w (similarity weight). Typically, prompt with more noises requires higher p_w and t_w
102
+ python tts/infer_cli.py --input_wav 'assets/Chinese_prompt.wav' --input_text "另一边的桌上,一位读书人嗤之以鼻道,'佛子三藏,神子燕小鱼是什么样的人物,李家的那个李子夜如何与他们相提并论?'" --output_dir ./gen
103
+
104
+ # As long as audio volume and pronunciation are appropriate, increasing --t_w within reasonable ranges (2.0~5.0)
105
+ # will increase the generated speech's expressiveness and similarity (especially for some emotional cases).
106
+ python tts/infer_cli.py --input_wav 'assets/English_prompt.wav' --input_text 'As his long promised tariff threat turned into reality this week, top human advisers began fielding a wave of calls from business leaders, particularly in the automotive sector, along with lawmakers who were sounding the alarm.' --output_dir ./gen --p_w 2.0 --t_w 3.0
107
+ ```
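
To hear the effect of `--t_w` directly, one simple option is to sweep it over the documented range; the input text and output directories below are only illustrative:

``` bash
# Synthesize the same sentence at several similarity weights and compare by ear
for tw in 2.0 3.0 4.0 5.0; do
  python tts/infer_cli.py --input_wav 'assets/English_prompt.wav' \
    --input_text 'A quick check of how the similarity weight changes expressiveness.' \
    --output_dir ./gen_tw_${tw} --p_w 2.0 --t_w ${tw}
done
```
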
**Command-Line Usage (for TTS with Accents)**
``` bash
# When p_w (intelligibility weight) ≈ 1.0, the generated audio closely retains the speaker's original accent. As p_w increases, it shifts toward standard pronunciation.
# t_w (similarity weight) is typically set 0–3 points higher than p_w for optimal results.
# Useful for accented TTS or for addressing accent problems in cross-lingual TTS.

# "这是一条有口音的音频。" = "This is an audio clip with an accent."
python tts/infer_cli.py --input_wav 'assets/English_prompt.wav' --input_text '这是一条有口音的音频。' --output_dir ./gen --p_w 1.0 --t_w 3.0

# "这条音频的发音标准一些了吗?" = "Is the pronunciation of this clip more standard now?"
python tts/infer_cli.py --input_wav 'assets/English_prompt.wav' --input_text '这条音频的发音标准一些了吗?' --output_dir ./gen --p_w 2.5 --t_w 2.5
```

**Web UI Usage**
``` bash
# We also support CPU inference, but it may take about 30 seconds (for 10 inference steps).
python tts/gradio_api.py
```

## Security
If you discover a potential security issue in this project, or think you may have discovered one, please notify ByteDance Security via our [security center](https://security.bytedance.com/src) or [[email protected]]([email protected]).

Please do **not** create a public issue.

## License
This project is licensed under the [Apache-2.0 License](LICENSE).

## BibTeX Entry and Citation Info
This repo contains a forced-alignment version of `Sparse Alignment Enhanced Latent Diffusion Transformer for Zero-Shot Speech Synthesis`, and its WaveVAE is mainly based on `Wavtokenizer: an efficient acoustic discrete codec tokenizer for audio language modeling`. Compared to the model described in the paper, this repository includes additional models. They not only enhance the stability and cloning capability of the algorithm but can also be used independently in a wider range of scenarios.
```
@article{jiang2025sparse,
  title={Sparse Alignment Enhanced Latent Diffusion Transformer for Zero-Shot Speech Synthesis},
  author={Jiang, Ziyue and Ren, Yi and Li, Ruiqi and Ji, Shengpeng and Ye, Zhenhui and Zhang, Chen and Bai, Jionghao and Yang, Xiaoda and Zuo, Jialong and Zhang, Yu and others},
  journal={arXiv preprint arXiv:2502.18924},
  year={2025}
}

@article{ji2024wavtokenizer,
  title={Wavtokenizer: an efficient acoustic discrete codec tokenizer for audio language modeling},
  author={Ji, Shengpeng and Jiang, Ziyue and Wang, Wen and Chen, Yifu and Fang, Minghui and Zuo, Jialong and Yang, Qian and Cheng, Xize and Wang, Zehan and Li, Ruiqi and others},
  journal={arXiv preprint arXiv:2408.16532},
  year={2024}
}
```