add Canary-1B-Flash information (#29)
Commit: 6dd2226426960e95a74b6414b71456b4a005cff8
README.md CHANGED
@@ -272,6 +272,8 @@ img {

NVIDIA [NeMo Canary](https://nvidia.github.io/NeMo/blogs/2024/2024-02-canary/) is a family of multi-lingual, multi-tasking models that achieves state-of-the-art performance on multiple benchmarks. With 1 billion parameters, Canary-1B supports automatic speech-to-text recognition (ASR) in 4 languages (English, German, French, Spanish) and translation from English to German/French/Spanish and from German/French/Spanish to English, with or without punctuation and capitalization (PnC).

+**🚨 Note: Check out our recent [Canary-1B-Flash](https://huggingface.co/nvidia/canary-1b-flash) model, a faster and more accurate variant of Canary-1B!**
+
## Model Architecture

Canary is an encoder-decoder model with a FastConformer [1] encoder and a Transformer Decoder [2].

@@ -281,7 +283,6 @@ SentencePiece [3] tokenizers of each language, which makes it easy to scale up t
The Canary-1B model has 24 encoder layers and 24 decoder layers in total.


-
## NVIDIA NeMo

To train, fine-tune, or transcribe with Canary, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend installing it after you've installed Cython and the latest PyTorch version.
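The `## NVIDIA NeMo` section in the diff above says to install NeMo (after Cython and PyTorch) before transcribing with Canary. As a minimal sketch of what that looks like in practice, not taken from this commit, assuming the `nemo_toolkit[asr]` pip extra and the `EncDecMultiTaskModel` class used for Canary checkpoints; the audio path is a placeholder:

```python
# Minimal sketch (not part of this commit). The install order follows the
# README: Cython and PyTorch first, then NeMo, e.g.
#   pip install Cython
#   pip install torch                  # pick the wheel matching your CUDA setup
#   pip install "nemo_toolkit[asr]"
from nemo.collections.asr.models import EncDecMultiTaskModel

# Load the released Canary-1B checkpoint from the Hugging Face Hub.
canary = EncDecMultiTaskModel.from_pretrained("nvidia/canary-1b")

# Plain English ASR on a local file ("sample_en.wav" is a placeholder path).
# Depending on the NeMo version, results are strings or Hypothesis objects.
results = canary.transcribe(["sample_en.wav"], batch_size=1)
print(results[0])
```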
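The model description in the first hunk also covers German/French/Spanish↔English translation with optional punctuation and capitalization. Below is a hedged sketch of selecting that task through the JSON-manifest input described on the Canary-1B model card; the field names (`taskname`, `source_lang`, `target_lang`, `pnc`, `answer`) and values follow that card but should be treated as assumptions, and the file paths are placeholders.

```python
# Hedged sketch: English -> German speech translation through a JSON manifest,
# roughly as described on the Canary-1B model card. Field names and values are
# assumptions taken from that card; the file paths are placeholders.
import json

from nemo.collections.asr.models import EncDecMultiTaskModel

entry = {
    "audio_filepath": "sample_en.wav",  # placeholder input file
    "duration": None,                   # clip length in seconds; None on recent NeMo
    "taskname": "s2t_translation",      # use "asr" for plain transcription
    "source_lang": "en",                # language spoken in the audio
    "target_lang": "de",                # language of the text output
    "pnc": "yes",                       # punctuation and capitalization
    "answer": "na",
}
# NeMo manifests are JSON Lines: one JSON object per line.
with open("translate_en_de.json", "w") as f:
    f.write(json.dumps(entry) + "\n")

canary = EncDecMultiTaskModel.from_pretrained("nvidia/canary-1b")
translations = canary.transcribe("translate_en_de.json", batch_size=1)
print(translations[0])
```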