{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "7LfkL2r2Q1tr"
},
"source": [
"# Getting Started: Exploring Nemo Fundamentals\n",
"\n",
"NeMo is a toolkit for creating [Conversational AI](https://developer.nvidia.com/conversational-ai#started) applications.\n",
"\n",
"NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components - Neural Modules. Neural Modules are conceptual blocks of neural networks that take typed inputs and produce typed outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.\n",
"\n",
"The toolkit comes with extendable collections of pre-built modules and ready-to-use models for automatic speech recognition (ASR), natural language processing (NLP) and text synthesis (TTS). Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.\n",
"\n",
"For more information, please visit https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/#"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zLSy94NEQi-e"
},
"outputs": [],
"source": [
"\"\"\"\n",
"You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n",
"\n",
"Instructions for setting up Colab are as follows:\n",
"1. Open a new Python 3 notebook.\n",
"2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n",
"3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n",
"4. Run this cell to set up dependencies.\n",
"\"\"\"\n",
"# If you're using Google Colab and not running locally, run this cell.\n",
"\n",
"## Install dependencies\n",
"!pip install wget\n",
"!apt-get install sox libsndfile1 ffmpeg\n",
"!pip install text-unidecode\n",
"\n",
"# ## Install NeMo\n",
"BRANCH = 'r1.17.0'\n",
"!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]\n",
"\n",
"## Install TorchAudio\n",
"!pip install torchaudio>=0.10.0 -f https://download.pytorch.org/whl/torch_stable.html\n",
"\n",
"## Grab the config we'll use in this example\n",
"!mkdir configs"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6G2TZkaxcM0e"
},
"source": [
"## Foundations of NeMo\n",
"---------\n",
"\n",
"NeMo models leverage [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) Module, and are compatible with the entire PyTorch ecosystem. This means that users have the full flexibility of using the higher level APIs provided by PyTorch Lightning (via Trainer), or write their own training and evaluation loops in PyTorch directly (by simply calling the model and the individual components of the model).\n",
"\n",
"For NeMo developers, a \"Model\" is the neural network(s) as well as all the infrastructure supporting those network(s), wrapped into a singular, cohesive unit. As such, all NeMo models are constructed to contain the following out of the box (at the bare minimum, some models support additional functionality too!) - \n",
"\n",
" - Neural Network architecture - all of the modules that are required for the model.\n",
"\n",
" - Dataset + Data Loaders - all of the components that prepare the data for consumption during training or evaluation.\n",
"\n",
" - Preprocessing + Postprocessing - all of the components that process the datasets so they can easily be consumed by the modules.\n",
"\n",
" - Optimizer + Schedulers - basic defaults that work out of the box, and allow further experimentation with ease.\n",
"\n",
" - Any other supporting infrastructure - tokenizers, language model configuration, data augmentation etc.\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XxAwtqWBQrNk"
},
"outputs": [],
"source": [
"import nemo\n",
"nemo.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H01SHfKQh-gV"
},
"source": [
"## NeMo Collections\n",
"\n",
"NeMo is sub-divided into a few fundamental collections based on their domains - `asr`, `nlp`, `tts`. When you performed the `import nemo` statement above, none of the above collections were imported. This is because you might not need all of the collections at once, so NeMo allows partial imports of just one or more collection, as and when you require them.\n",
"\n",
"-------\n",
"Let's import the above three collections - "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "J09NNa8fhth7"
},
"outputs": [],
"source": [
"import nemo.collections.asr as nemo_asr\n",
"import nemo.collections.nlp as nemo_nlp\n",
"import nemo.collections.tts as nemo_tts"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bSvYoeBrjPza"
},
"source": [
"## NeMo Models in Collections\n",
"\n",
"NeMo contains several models for each of its collections, pertaining to certain common tasks involved in conversational AI. At a brief glance, let's look at all the Models that NeMo offers for the above 3 collections."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9LbbC_92i41f"
},
"outputs": [],
"source": [
"asr_models = [model for model in dir(nemo_asr.models) if model.endswith(\"Model\")]\n",
"asr_models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "t5_ax9Z8j9FC"
},
"outputs": [],
"source": [
"nlp_models = [model for model in dir(nemo_nlp.models) if model.endswith(\"Model\")]\n",
"nlp_models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bQdR6RJdkezq"
},
"outputs": [],
"source": [
"tts_models = [model for model in dir(nemo_tts.models) if model.endswith(\"Model\")]\n",
"tts_models"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iWKxKQnSkj9Z"
},
"source": [
"## The NeMo Model\n",
"\n",
"Let's dive deeper into what a NeMo model really is. There are many ways we can create these models - we can use the constructor and pass in a config, we can instantiate the model from a pre-trained checkpoint, or simply pass a pre-trained model name and instantiate a model directly from the cloud !\n",
"\n",
"---------\n",
"For now, let's try to work with an ASR model - [Citrinet](https://arxiv.org/abs/2104.01721)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "n-XOQaW1kh3v"
},
"outputs": [],
"source": [
"citrinet = nemo_asr.models.EncDecCTCModelBPE.from_pretrained('stt_en_citrinet_512')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YP4X7KVPli6g"
},
"outputs": [],
"source": [
"citrinet.summarize()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MB91Swu0pIKr"
},
"source": [
"## Model Configuration using OmegaConf\n",
"--------\n",
"\n",
"So we could download, instantiate and analyse the high level structure of the `Citrinet` model in a few lines! Now let's delve deeper into the configuration file that makes the model work.\n",
"\n",
"First, we import [OmegaConf](https://omegaconf.readthedocs.io/en/latest/). OmegaConf is an excellent library that is used throughout NeMo in order to enable us to perform yaml configuration management more easily. Additionally, it plays well with another library, [Hydra](https://hydra.cc/docs/intro/), that is used by NeMo to perform on the fly config edits from the command line, dramatically boosting ease of use of our config files !"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RkgrDJvumFER"
},
"outputs": [],
"source": [
"from omegaconf import OmegaConf"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CktakfBluA56"
},
"source": [
"All NeMo models come packaged with their model configuration inside the `cfg` attribute. While technically it is meant to be config declaration of the model as it has been currently constructed, `cfg` is an essential tool to modify the behaviour of the Model after it has been constructed. It can be safely used to make it easier to perform many essential tasks inside Models. \n",
"\n",
"To be doubly sure, we generally work on a copy of the config until we are ready to edit it inside the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ISd6z7sXt9Mm"
},
"outputs": [],
"source": [
"import copy"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "N2_SiLHRve8A"
},
"outputs": [],
"source": [
"cfg = copy.deepcopy(citrinet.cfg)\n",
"print(OmegaConf.to_yaml(cfg))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "W_V3e3W7vqOb"
},
"source": [
"## Analysing the contents of the Model config\n",
"----------\n",
"\n",
"Above we see a configuration for the Citrinet model. As discussed in the beginning, NeMo models contain the entire definition of the neural network(s) as well as most of the surrounding infrastructure to support that model within themselves. Here, we see a perfect example of this behaviour.\n",
"\n",
"Citrinet contains within its config - \n",
"\n",
"- `preprocessor` - MelSpectrogram preprocessing layer\n",
"- `encoder` - The acoustic encoder model.\n",
"- `decoder` - The CTC decoder layer.\n",
"- `optim` (and potentially `sched`) - Optimizer configuration. Can optionally include Scheduler information.\n",
"- `spec_augment` - Spectrogram Augmentation support.\n",
"- `train_ds`, `validation_ds` and `test_ds` - Dataset and data loader construction information."
]
},
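{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each of these components can be inspected individually via attribute access on the config. For example, let's look at just the preprocessor sub-config - "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect a single sub-config of the model, e.g. the preprocessor\n",
"print(OmegaConf.to_yaml(cfg.preprocessor))"
]
},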
{
"cell_type": "markdown",
"metadata": {
"id": "sIwhdXkwxn6R"
},
"source": [
"## Modifying the contents of the Model config\n",
"----------\n",
"\n",
"Say we want to experiment with a different preprocessor (we want MelSpectrogram, but with different configuration than was provided in the original configuration). Or say we want to add a scheduler to this model during training. \n",
"\n",
"OmegaConf makes this a very simple task for us!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WlSZ8EA4yGKo"
},
"outputs": [],
"source": [
"# OmegaConf won't allow you to add new config items, so we temporarily disable this safeguard.\n",
"OmegaConf.set_struct(cfg, False)\n",
"\n",
"# Let's see the old optim config\n",
"print(\"Old Config: \")\n",
"print(OmegaConf.to_yaml(cfg.optim))\n",
"\n",
"sched = {'name': 'CosineAnnealing', 'warmup_steps': 1000, 'min_lr': 1e-6}\n",
"sched = OmegaConf.create(sched) # Convert it into a DictConfig\n",
"\n",
"# Assign it to cfg.optim.sched namespace\n",
"cfg.optim.sched = sched\n",
"\n",
"# Let's see the new optim config\n",
"print(\"New Config: \")\n",
"print(OmegaConf.to_yaml(cfg.optim))\n",
"\n",
"# Here, we restore the safeguards so no more additions can be made to the config\n",
"OmegaConf.set_struct(cfg, True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-nMDN66502kn"
},
"source": [
"## Updating the model from config\n",
"----------\n",
"\n",
"NeMo Models can be updated in a few ways, but we follow similar patterns within each collection so as to maintain consistency.\n",
"\n",
"Here, we will show the two most common ways to modify core components of the model - using the `from_config_dict` method, and updating a few special parts of the model.\n",
"\n",
"Remember, all NeMo models are PyTorch Lightning modules, which themselves are PyTorch modules, so we have a lot of flexibility here!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qrKzFYkZ20aa"
},
"source": [
"### Update model using `from_config_dict`\n",
"\n",
"In certain config files, you will notice the following pattern : \n",
"\n",
"```yaml\n",
"preprocessor:\n",
" _target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor\n",
" normalize: per_feature\n",
" window_size: 0.02\n",
" sample_rate: 16000\n",
" window_stride: 0.01\n",
" window: hann\n",
" features: 64\n",
" n_fft: 512\n",
" frame_splicing: 1\n",
" dither: 1.0e-05\n",
" stft_conv: false\n",
"```\n",
"\n",
"You might ask, why are we using `_target_`? Well, it is generally rare for the preprocessor, encoder, decoder and perhaps a few other details to be changed often from the command line when experimenting. In order to stabilize these settings, we enforce that our preprocessor will always be of type `AudioToMelSpectrogramPreprocessor` for this model by setting its `_target_` attribute in the config. In order to provide its parameters in the class constructor, we simply add them after `_target_`.\n",
"\n",
"---------\n",
"Note, we can still change all of the parameters of this `AudioToMelSpectrogramPreprocessor` class from the command line using hydra, so we don't lose any flexibility once we decide what type of preprocessing class we want !\n",
"\n",
"This also gives us a flexible way to instantiate parts of the model from just the config object !"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1Be08R4szkT3"
},
"outputs": [],
"source": [
"new_preprocessor_config = copy.deepcopy(cfg.preprocessor)\n",
"new_preprocessor = citrinet.from_config_dict(new_preprocessor_config)\n",
"print(new_preprocessor)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UzJQ7Y8H4S_U"
},
"source": [
"So how do we actually update our model's internal preprocessor with something new? Well, since NeMo Model's are just pytorch Modules, we can just replace their attribute !"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WdtnPKX84OJ-"
},
"outputs": [],
"source": [
"citrinet.preprocessor = new_preprocessor"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OMz2KR-24xTO"
},
"outputs": [],
"source": [
"citrinet.summarize()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gPb_BdPN40Ro"
},
"source": [
"--------\n",
"This might look like nothing changed - because we didn't actually modify the config for the preprocessor at all ! But as we showed above, we can easily modify the config for the preprocessor, instantiate it from config, and then just set it to the model."
]
},
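{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch, let's change the `dither` strength of the preprocessor (one of the parameters we saw in its config above), rebuild the module from the modified config, and swap it in - "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Tweak a copy of the preprocessor config, rebuild the module, then swap it in\n",
"modified_preproc_cfg = copy.deepcopy(cfg.preprocessor)\n",
"modified_preproc_cfg.dither = 0.0  # e.g. disable dithering for evaluation\n",
"citrinet.preprocessor = citrinet.from_config_dict(modified_preproc_cfg)"
]
},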
{
"cell_type": "markdown",
"metadata": {
"id": "IV8WKJkD5E_Q"
},
"source": [
"-------\n",
"**NOTE**: Preprocessors don't generally have weights, so this was easy, but say we want to replace a part of the model which actually has trained parameters?\n",
"\n",
"Well, the above approach will still work, just remember the fact that the new module you inserted into `citrinet.encoder` or `citrinet.decoder` actually won't have pretrained weights. You can easily rectify that by loading the state dict for the module *before* you set it to the Model though!"
]
},
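{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch (using a hypothetical `new_decoder_config` with the same structure as the original decoder config), the weight transfer might look like - \n",
"\n",
"```python\n",
"new_decoder = citrinet.from_config_dict(new_decoder_config)\n",
"# Copy the trained weights into the freshly built module first\n",
"new_decoder.load_state_dict(citrinet.decoder.state_dict())\n",
"citrinet.decoder = new_decoder\n",
"```"
]
},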
{
"cell_type": "markdown",
"metadata": {
"id": "YplQcgfG6S1U"
},
"source": [
"### Preserving the new config\n",
"\n",
"So we went ahead and updated the preprocessor of the model. We however also need to perform a crucial step - **preserving the updated config**!\n",
"\n",
"Why do we want to do this? NeMo has many ways of saving and restoring its models, which we will discuss a bit later. All of them depend on having an updated config that defines the model in its entirety, so if we modify anything, we should also update the corresponding part of the config to safely save and restore models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dsxQHBV86R4a"
},
"outputs": [],
"source": [
"# Update the config copy\n",
"cfg.preprocessor = new_preprocessor_config\n",
"# Update the model config\n",
"citrinet.cfg = cfg"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eXRRBnJk5tCv"
},
"source": [
"## Update a few special components of the Model\n",
"---------\n",
"\n",
"While the above approach is good for most major components of the model, NeMo has special utilities for a few components.\n",
"\n",
"They are - \n",
"\n",
" - `setup_training_data`\n",
" - `setup_validation_data` and `setup_multi_validation_data`\n",
" - `setup_test_data` and `setup_multi_test_data`\n",
" - `setup_optimization`\n",
"\n",
"These special utilities are meant to help you easily setup training, validation, testing once you restore a model from a checkpoint.\n",
"\n",
"------\n",
"One of the major tasks of all conversational AI models is fine-tuning onto new datasets - new languages, new corpus of text, new voices etc. It is often insufficient to have just a pre-trained model. So these setup methods are provided to enable users to adapt models *after* they have been already trained or provided to you.\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "B7Y7wt2x9goJ"
},
"source": [
"You might remember having seen a few warning messages the moment you tried to instantiate the pre-trained model. Those warnings are in fact reminders to call the appropriate setup methods for the task you want to perform. \n",
"\n",
"Those warnings are simply displaying the old config that was used to train that model, and are a basic template that you can easily modify. You have the ability to modify the `train_ds`, `validation_ds` and `test_ds` sub-configs in their entirety in order to evaluate, fine-tune or train from scratch the model, or any further purpose as you require it.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1hXXdaup-QmG"
},
"source": [
"Let's discuss how to add the scheduler to the model below (which initially had just an optimizer in its config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cveKWvMZ4zBo"
},
"outputs": [],
"source": [
"# Let's print out the current optimizer\n",
"print(OmegaConf.to_yaml(citrinet.cfg.optim))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XVguw3k0-f6b"
},
"outputs": [],
"source": [
"# Now let's update the config\n",
"citrinet.setup_optimization(cfg.optim)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1JZBCQeW-21X"
},
"source": [
"-------\n",
"We see a warning - \n",
"\n",
"```\n",
"Neither `max_steps` nor `iters_per_batch` were provided to `optim.sched`, cannot compute effective `max_steps` !\n",
" Scheduler will not be instantiated !\n",
"```\n",
"\n",
"We don't have a train dataset setup, nor do we have max_steps in the config. Most NeMo schedulers cannot be instantiated without computing how many train steps actually exist!\n",
"\n",
"Here, we can temporarily allow the scheduler construction by explicitly passing a max_steps value to be 100"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mqC89hfE-tqf"
},
"outputs": [],
"source": [
"OmegaConf.set_struct(cfg.optim.sched, False)\n",
"\n",
"cfg.optim.sched.max_steps = 100\n",
"\n",
"OmegaConf.set_struct(cfg.optim.sched, True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "r22IqOBK_q6l"
},
"outputs": [],
"source": [
"# Now let's update the config and try again\n",
"citrinet.setup_optimization(cfg.optim)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "U7Eezf_sAVS0"
},
"source": [
"You might wonder why we didnt explicitly set `citrinet.cfg.optim = cfg.optim`. \n",
"\n",
"This is because the `setup_optimization()` method does it for you! You can still update the config manually."
]
},
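{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can quickly verify that the model's own config was indeed updated for us - "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The scheduler setup should now appear inside the model's config\n",
"print(OmegaConf.to_yaml(citrinet.cfg.optim.sched))"
]
},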
{
"cell_type": "markdown",
"metadata": {
"id": "THqhXy_lQ7i8"
},
"source": [
"### Optimizer & Scheduler Config\n",
"\n",
"Optimizers and schedulers are common components of models, and are essential to train the model from scratch.\n",
"\n",
"They are grouped together under a unified `optim` namespace, as schedulers often operate on a given optimizer.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6HY51nuoSJs5"
},
"source": [
"### Let's breakdown the general `optim` structure\n",
"```yaml\n",
"optim:\n",
" name: novograd\n",
" lr: 0.01\n",
"\n",
" # optimizer arguments\n",
" betas: [0.8, 0.25]\n",
" weight_decay: 0.001\n",
"\n",
" # scheduler setup\n",
" sched:\n",
" name: CosineAnnealing\n",
"\n",
" # Optional arguments\n",
" max_steps: -1 # computed at runtime or explicitly set here\n",
" monitor: val_loss\n",
" reduce_on_plateau: false\n",
"\n",
" # scheduler config override\n",
" warmup_steps: 1000\n",
" warmup_ratio: null\n",
" min_lr: 1e-9\n",
"```\n",
"\n",
"Essential Optimizer components - \n",
"\n",
" - `name`: String name of the optimizer. Generally a lower case of the class name.\n",
" - `lr`: Learning rate is a required argument to all optimizers.\n",
"\n",
"Optional Optimizer components - after the above two arguments are provided, any additional arguments added under `optim` will be passed to the constructor of that optimizer as keyword arguments\n",
"\n",
" - `betas`: List of beta values to pass to the optimizer\n",
" - `weight_decay`: Optional weight decay passed to the optimizer.\n",
"\n",
"Optional Scheduler components - `sched` is an optional setup of the scheduler for the given optimizer.\n",
"\n",
"If `sched` is provided, only one essential argument needs to be provided : \n",
"\n",
" - `name`: The name of the scheduler. Generally, it is the full class name.\n",
"\n",
"Optional Scheduler components - \n",
"\n",
" - `max_steps`: Max steps as an override from the user. If one provides `trainer.max_steps` inside the trainer configuration, that value is used instead. If neither value is set, the scheduler will attempt to compute the `effective max_steps` using the size of the train data loader. If that too fails, then the scheduler will not be created at all.\n",
"\n",
" - `monitor`: Used if you are using an adaptive scheduler such as ReduceLROnPlateau. Otherwise ignored. Defaults to `loss` - indicating train loss as monitor.\n",
"\n",
" - `reduce_on_plateau`: Required to be set to true if using an adaptive scheduler.\n",
"\n",
"Any additional arguments under `sched` will be supplied as keyword arguments to the constructor of the scheduler.\n",
"\n",
"\n"
]
},
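{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same `optim` structure can also be built programmatically with OmegaConf. A small sketch, using the same values as the yaml above - "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Build an optim config (with nested scheduler setup) as a DictConfig\n",
"optim_cfg = OmegaConf.create({\n",
"    'name': 'novograd',\n",
"    'lr': 0.01,\n",
"    # optimizer arguments\n",
"    'betas': [0.8, 0.25],\n",
"    'weight_decay': 0.001,\n",
"    # scheduler setup\n",
"    'sched': {'name': 'CosineAnnealing', 'warmup_steps': 1000, 'min_lr': 1e-9},\n",
"})\n",
"print(OmegaConf.to_yaml(optim_cfg))"
]
},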
{
"cell_type": "markdown",
"metadata": {
"id": "V3pQM2aj_6WX"
},
"source": [
"## Difference between the data loader setup methods\n",
"----------\n",
"\n",
"You might notice, we have multiple setup methods for validation and test data sets. We also don't have an equivalent `setup_multi_train_data`. \n",
"\n",
"In general, the `multi` methods refer to multiple data sets / data loaders. \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g33nMx9WCJdj"
},
"source": [
"### Where's `setup_multi_train_data`?\n",
"With the above in mind, let's tackle why we don't have `setup_multi_train_data`. \n",
"\n",
"NeMo is concerned with multiple domains - `asr`, `nlp` and `tts`. The way datasets are setup and used in these domains is dramatically different. It is often unclear what it means to have multiple train datasets - do we concatenate them? Do we randomly sample (with same or different probability) from each of them? \n",
"\n",
"Therefore we leave such support for multiple datasets up to the model itself. For example, in ASR, you can concatenate multiple train manifest files by using commas when providing the `manifest_filepath` value!"
]
},
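{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a sketch of such an ASR config override (the manifest paths here are hypothetical) - \n",
"\n",
"```yaml\n",
"model:\n",
"  train_ds:\n",
"    manifest_filepath: /data/train_set_1.json,/data/train_set_2.json\n",
"```"
]
},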
{
"cell_type": "markdown",
"metadata": {
"id": "BjI2Q5LECJib"
},
"source": [
"### What are multi methods?\n",
"\n",
"In many cases, especially true for ASR and NLP, we may have multiple validation and test datasets. The most common example for this in ASR is `Librispeech`, which has `dev_clean`, `dev_other`, `test_clean`, `test_other`.\n",
"\n",
"NeMo standardizes how to handle multiple data loaders for validation and testing, so that all of our collections have a similar look and feel, as well as ease development of our models. During evaluation, these datasets are treated independently and prepended with resolved names so that logs are separate!\n",
"\n",
"The `multi` methods are therefore generalizations of the single validation and single test data setup methods, with some additional functionality. If you provide multiple datasets, you still have to write code for just one dataset and NeMo will automatically attach the appropriate names to your logs so you can differentiate between them!\n",
"\n",
"Furthermore, they also automatically preserve the config the user passes to them when updating the validation or test data loaders.\n",
"\n",
"**In general, it is preferred to call the `setup_multi_validation_data` and `setup_multi_test_data` methods, even if you are only using single datasets, simply for the automated management they provide.**"
]
},
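{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch (the manifest paths here are hypothetical), setting up multiple validation datasets could look like - \n",
"\n",
"```python\n",
"val_ds_cfg = copy.deepcopy(citrinet.cfg.validation_ds)\n",
"# One manifest per validation dataset - logs will be named per dataset\n",
"val_ds_cfg.manifest_filepath = ['dev_clean.json', 'dev_other.json']\n",
"citrinet.setup_multi_validation_data(val_ds_cfg)\n",
"```"
]
},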
{
"cell_type": "markdown",
"metadata": {
"id": "ZKURHn0jH_52"
},
"source": [
"## Creating Model from constructor vs restoring a model\n",
"---------\n",
"\n",
"You might notice, we discuss all of the above setup methods in the context of model after it is restored. However, NeMo scripts do not call them inside any of the example train scripts themselves.\n",
"\n",
"This is because these methods are automatically called by the constructor when the Model is created for the first time, but these methods are skipped during restoration (either from a PyTorch Lightning checkpoint using `load_from_checkpoint`, or via `restore_from` method inside NeMo Models).\n",
"\n",
"This is done as most datasets are stored on a user's local directory, and the path to these datasets is set in the config (either set by default, or set by Hydra overrides). On the other hand, the models are meant to be portable. On another user's system, the data might not be placed at exactly the same location, or even on the same drive as specified in the model's config!\n",
"\n",
"Therefore we allow the constructor some brevity and automate such dataset setup, whereas restoration warns that data loaders were not set up and provides the user with ways to set up their own datasets.\n",
"\n",
"------\n",
"\n",
"Why are optimizers not restored automatically? Well, optimizers themselves don't face an issue, but as we saw before, schedulers depend on the number of train steps in order to calculate their schedule.\n",
"\n",
"However, if you don't wish to modify the optimizer and scheduler, and prefer to leave them to their default values, that's perfectly alright. The `setup_optimization()` method is automatically called by PyTorch Lightning for you when you begin training your model!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g91FE8mlMcnh"
},
"source": [
"## Saving and restoring models\n",
"----------\n",
"\n",
"NeMo provides a few ways to save and restore models. If you utilize the Experiment Manager that is part of all NeMo train scripts, PyTorch Lightning will automatically save checkpoints for you in the experiment directory.\n",
"\n",
"We can also use packaged files using the specialized `save_to` and `restore_from` methods."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NzMxga7QNYn8"
},
"source": [
"### Saving and Restoring from PTL Checkpoints\n",
"----------\n",
"\n",
"The PyTorch Lightning Trainer object will periodically save checkpoints when the experiment manager is being used during training.\n",
"\n",
"PyTorch Lightning checkpoints can then be loaded and evaluated / fine-tuned just as always using the class method `load_from_checkpoint`.\n",
"\n",
"For example, to restore a Citrinet model from a checkpoint - \n",
"\n",
"```python\n",
"citrinet = nemo_asr.models.EncDecCTCModelBPE.load_from_checkpoint(<path to checkpoint>)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "W4YzAG-KOBkZ"
},
"source": [
"### Saving and Restoring from .nemo files\n",
"----------\n",
"\n",
"Some models require external dependencies to be packaged with them in order to be restored properly.\n",
"\n",
"One such example is an ASR model with an external BPE tokenizer. Ideally, the model should include all of the components required to restore it, but a binary tokenizer file cannot be serialized into a PyTorch Lightning checkpoint.\n",
"\n",
"In such cases, we can use the `save_to` and `restore_from` methods to package the entire model plus its components (here, the tokenizer file(s)) into a tarfile. This can then be easily shared and used to restore the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "P6_vMSwXNJ74"
},
"outputs": [],
"source": [
"# Save the model\n",
"citrinet.save_to('citrinet_512.nemo')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HrBhgaqyP4rU"
},
"outputs": [],
"source": [
"!ls -d -- *.nemo "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Tyht1E0DQGb_"
},
"outputs": [],
"source": [
"# Restore the model\n",
"temp_cn = nemo_asr.models.EncDecCTCModelBPE.restore_from('citrinet_512.nemo')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dqNpmYYJQS2H"
},
"outputs": [],
"source": [
"temp_cn.summarize()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "A5e42EoiZYjf"
},
"outputs": [],
"source": [
"# Note that the preprocessor + optimizer config have been preserved after the changes we made!\n",
"print(OmegaConf.to_yaml(temp_cn.cfg))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OI3RxwpcV-UF"
},
"source": [
"Note that a .nemo file is simply a .tar.gz archive containing the checkpoint, the model configuration and, potentially, other artifacts used by the model, such as tokenizer configs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jFBAGcaDWLiu"
},
"outputs": [],
"source": [
"!cp citrinet_512.nemo citrinet_512.tar.gz\n",
"!tar -xvf citrinet_512.tar.gz"
]
},
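{
"cell_type": "markdown",
"metadata": {},
"source": [
"Equivalently, because a .nemo file is an ordinary gzipped tar archive, we can inspect its contents directly from Python with the stdlib `tarfile` module, without renaming or extracting anything (a minimal sketch, assuming the `citrinet_512.nemo` file saved above):\n",
"\n",
"```python\n",
"import os\n",
"import tarfile\n",
"\n",
"def list_nemo_artifacts(path):\n",
"    # A .nemo file is a plain gzipped tarball, readable by the stdlib\n",
"    with tarfile.open(path, 'r:gz') as archive:\n",
"        return archive.getnames()\n",
"\n",
"if os.path.exists('citrinet_512.nemo'):\n",
"    print(list_nemo_artifacts('citrinet_512.nemo'))\n",
"```"
]
},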
{
"cell_type": "markdown",
"metadata": {
"id": "mkau4Q9jZo1l"
},
"source": [
"### Extracting PyTorch checkpoints from NeMo tarfiles (Model level)\n",
"-----------\n",
"\n",
"While the .nemo tarfile is an excellent way to have a portable model, sometimes it is necessary for researchers to have access to the basic PyTorch save format. NeMo aims to be entirely compatible with PyTorch, and therefore offers a simple method to extract just the PyTorch checkpoint from the .nemo tarfile."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qccPANeycCoq"
},
"outputs": [],
"source": [
"import torch"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "A4zswOKHar9q"
},
"outputs": [],
"source": [
"state_dict = temp_cn.extract_state_dict_from('citrinet_512.nemo', save_dir='./pt_ckpt/')\n",
"!ls ./pt_ckpt/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ACB-0dfnbFG3"
},
"source": [
"As the directory listing above shows, there is now a single basic PyTorch checkpoint inside the `pt_ckpt` directory, which we can use to load the weights of the entire model as below"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4ZAF_A0uc5bB"
},
"outputs": [],
"source": [
"temp_cn.load_state_dict(torch.load('./pt_ckpt/model_weights.ckpt'))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Hkq6EM99cS6y"
},
"source": [
"### Extracting PyTorch checkpoints from NeMo tarfiles (Module level)\n",
"----------\n",
"\n",
"While the above method is convenient for extracting the checkpoint of the entire model, sometimes we need to load and save the individual modules that comprise the Model.\n",
"\n",
"The same extraction method offers a flag to split the checkpoint into per-module files, so that users have access to module-level checkpoints."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LW6wve2zbT9D"
},
"outputs": [],
"source": [
"state_dict = temp_cn.extract_state_dict_from('citrinet_512.nemo', save_dir='./pt_module_ckpt/', split_by_module=True)\n",
"!ls ./pt_module_ckpt/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DtV5vpb5d1ni"
},
"source": [
"Now, we can load and assign the weights of the individual modules of the above Citrinet model!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rVHylSKFdywn"
},
"outputs": [],
"source": [
"temp_cn.preprocessor.load_state_dict(torch.load('./pt_module_ckpt/preprocessor.ckpt'))\n",
"temp_cn.encoder.load_state_dict(torch.load('./pt_module_ckpt/encoder.ckpt'))\n",
"temp_cn.decoder.load_state_dict(torch.load('./pt_module_ckpt/decoder.ckpt'))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "88vOGV7VYcuu"
},
"source": [
"# NeMo with Hydra\n",
"\n",
"[Hydra](https://hydra.cc/docs/intro/) is used throughout NeMo as a way to enable rapid prototyping using predefined config files. Hydra and OmegaConf offer great compatibility with each other, and below we show a few general helpful tips to improve productivity with Hydra when using NeMo."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DfY6Ha3qYcxG"
},
"source": [
"## Hydra Help\n",
"--------\n",
"\n",
"Since our scripts are written with Hydra in mind, you might notice that using `python <script.py> --help` returns a config rather than the usual argparse help output. \n",
"\n",
"Using `--help`, you can see the default config attached to the script - every NeMo script has at least one default config file attached to it. This serves as a guide to the values you can modify for an experiment.\n",
"\n",
"Hydra also has a special `--hydra-help` flag, which offers more help with respect to Hydra itself as it is set up in the script.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gEsZlnfaYc3X"
},
"source": [
"## Changing config paths and files\n",
"---------\n",
"\n",
"While all NeMo models come with at least one default config file, you may want to switch configs without changing code. This is easily achieved with the following command-line arguments: \n",
"\n",
"- `--config-path`: Path to the directory which contains the config files\n",
"- `--config-name`: Name of the config file we wish to load.\n",
"\n",
"Note that these two arguments need to appear at the very beginning of the command, before any command-line overrides to your config file."
]
},
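{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (the script, directory and file names here are purely illustrative):\n",
"\n",
"```sh\n",
"$ python speech_to_text_ctc.py \\\n",
"    --config-path=\"../conf/citrinet\" \\\n",
"    --config-name=\"citrinet_512\" \\\n",
"    model.optim.lr=0.05\n",
"```"
]
},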
{
"cell_type": "markdown",
"metadata": {
"id": "ZyNHlArpYc9A"
},
"source": [
"## Overriding config from the command line\n",
"----------\n",
"\n",
"Hydra allows users to provide command line overrides to any part of the config. There are three cases to consider - \n",
"\n",
" - Override existing value in config\n",
" - Add new value in config\n",
" - Remove old value in config"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "96CKbvn6Yc7f"
},
"source": [
"### Overriding existing values in config\n",
"\n",
"Suppose we want to change the optimizer from `novograd` to `adam`, and also change the beta values to Adam's defaults.\n",
"\n",
"Hydra overrides are based on the `.` syntax - each `.` representing a level in the config itself.\n",
"\n",
"```sh\n",
"$ python <script>.py \\\n",
" --config-path=\"dir to config\" \\\n",
" --config-name=\"name of config\" \\\n",
" model.optim.name=\"adam\" \\\n",
" model.optim.betas=[0.9,0.999]\n",
"```\n",
"\n",
"Note that when lists are passed, there must be no spaces between the items.\n",
"\n",
"------\n",
"\n",
"Multiple validation datasets are also supported with the above list syntax, though this depends on model-level support. \n",
"\n",
"In the ASR collection, the following syntax is widely supported in ASR, ASR-BPE and speech classification models. Let's take the example of a model being trained on LibriSpeech -\n",
"\n",
"```sh\n",
"$ python <script>.py \\\n",
" --config-path=\"dir to config\" \\\n",
" --config-name=\"name of config\" \\\n",
" model.validation_ds.manifest_filepath=[\"path to dev clean\",\"path to dev other\"] \\\n",
" model.test_ds.manifest_filepath=[\"path to test clean\",\"path to test other\"]\n",
"```"
]
},
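{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can preview how dotted overrides compose into a nested config locally, using OmegaConf's `from_dotlist` (OmegaConf is the config library underlying Hydra). This is just a sketch to build intuition - it does not run any NeMo script:\n",
"\n",
"```python\n",
"from omegaconf import OmegaConf\n",
"\n",
"# Each '.' in the override becomes one level of nesting in the config\n",
"overrides = ['model.optim.name=adam', 'model.optim.betas=[0.9,0.999]']\n",
"cfg = OmegaConf.from_dotlist(overrides)\n",
"print(OmegaConf.to_yaml(cfg))\n",
"```"
]
},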
{
"cell_type": "markdown",
"metadata": {
"id": "Wj7oMkepYc17"
},
"source": [
"### Add new values in config\n",
"----------\n",
"\n",
"Hydra allows us to inject additional parameters inside the config using the `+` syntax.\n",
"\n",
"Let's take the example of adding the `amsgrad` option to the `novograd` optimizer above.\n",
"\n",
"```sh\n",
"$ python <script>.py \\\n",
" --config-path=\"dir to config\" \\\n",
" --config-name=\"name of config\" \\\n",
" +model.optim.amsgrad=true\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "p23327hsYc0Z"
},
"source": [
"### Remove old value in config\n",
"---------\n",
"\n",
"Hydra allows us to remove parameters inside the config using the `~` syntax.\n",
"\n",
"Let's take the example of removing `weight_decay` from the Novograd optimizer - \n",
"\n",
"```sh\n",
"$ python <script>.py \\\n",
" --config-path=\"dir to config\" \\\n",
" --config-name=\"name of config\" \\\n",
" ~model.optim.weight_decay\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8VSWIbzjYzDi"
},
"source": [
"## Setting a value to `None` from the command line\n",
"\n",
"We may sometimes choose to disable a feature by setting the value to `None`.\n",
"\n",
"We can accomplish this by using the keyword `null` inside the command line. \n",
"\n",
"Let's take the example of disabling the test data loader inside an ASR model's config - \n",
"\n",
"\n",
"```sh\n",
"$ python <script>.py \\\n",
" --config-path=\"dir to config\" \\\n",
" --config-name=\"name of config\" \\\n",
" model.test_ds.manifest_filepath=null\n",
"```"
]
},
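{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same behaviour can be checked locally with OmegaConf - the string `null` is parsed into a Python `None` (a small sketch, independent of any NeMo script):\n",
"\n",
"```python\n",
"from omegaconf import OmegaConf\n",
"\n",
"# 'null' in an override becomes None in the resulting config\n",
"cfg = OmegaConf.from_dotlist(['model.test_ds.manifest_filepath=null'])\n",
"print(cfg.model.test_ds.manifest_filepath is None)\n",
"```"
]
},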
{
"cell_type": "markdown",
"metadata": {
"id": "ah8rgrvvsw5R"
},
"source": [
"# NeMo Examples\n",
"\n",
"NeMo supports various pre-built models for ASR, NLP and TTS tasks. One example we saw in this notebook is the ASR model for speech to text, using the Citrinet architecture.\n",
"\n",
"The NeMo repository has a dedicated `examples` directory with scripts to train and evaluate models for various tasks - ranging from speech to text (ASR) and question answering (NLP) to text to speech (TTS) with models such as `FastPitch` and `HiFiGAN`.\n",
"\n",
"NeMo constantly adds new models and tasks to these examples, so that they serve as the basis for training and evaluating models from scratch with the provided config files.\n",
"\n",
"NeMo Examples directory can be found here - https://github.com/NVIDIA/NeMo/tree/main/examples"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "999KAomdtWlu"
},
"source": [
"## Structure of NeMo Examples\n",
"-------\n",
"\n",
"The NeMo Examples directory is structured by domain, as well as sub-task. Similar to how we partition the collections supported by NeMo, the examples themselves are separated initially by domain, and then by sub-tasks of that domain. \n",
"\n",
"All these example scripts are bound to at least one default config file. These config files contain all of the information of the model, as well as the PyTorch Lightning Trainer configuration and Experiment Manager configuration. \n",
"\n",
"In general, once the model is trained and saved to a PyTorch Lightning checkpoint or to a .nemo tarfile, it will no longer contain the training configuration - that is, no configuration information for the Trainer or Experiment Manager.\n",
"\n",
"**These config files have good defaults pre-set to run an experiment with NeMo, so it is advised to base your own training configuration on these configs.**\n",
"\n",
"\n",
"Let's take a deeper look at some of the examples inside each domain.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8Fk2grx0uSBQ"
},
"source": [
"## ASR Examples\n",
"-------\n",
"\n",
"NeMo supports multiple speech recognition models such as Jasper, QuartzNet, Citrinet and Conformer, all of which can be trained on various datasets. We also provide pretrained checkpoints for these models trained on standard datasets so that they can be used immediately. Training of these models is made available via the `speech_to_text_ctc.py` script.\n",
"\n",
"The ASR examples also support sub-tasks such as speech classification - MatchboxNet trained on the Google Speech Commands dataset is available in `speech_to_label.py`. Voice Activity Detection is supported by the same script, simply by changing the config file passed to it!\n",
"\n",
"NeMo also supports training Speech Recognition models with Byte Pair/Word Piece encoding of the corpus, via the `speech_to_text_ctc_bpe.py` example. Since these models are still under development, their configs fall under the `experimental/configs` directory.\n",
"\n",
"Finally, to simply perform inference on some dataset using these models, use the `speech_to_text_eval.py` example, which shows how to compute the WER over a dataset provided by the user."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HhtzYATsuSJV"
},
"source": [
"## NLP Examples\n",
"---------\n",
"\n",
"NeMo supports a wide variety of tasks in NLP - from text classification and language modelling all the way to GLUE benchmarking! \n",
"\n",
"All NLP models require text tokenization as a data preprocessing step. The available tokenizers can be found in `nemo.collections.common.tokenizers`, and include the WordPiece and SentencePiece tokenizers, as well as simple ones such as the word tokenizer.\n",
"\n",
"A non-exhaustive list of tasks that NeMo currently supports in NLP is - \n",
"\n",
" - Language Modelling - Assigns a probability distribution over a sequence of words. Can be either generative, e.g. a vanilla left-to-right transformer, or BERT with a masked language model loss.\n",
" - Text Classification - Classifies an entire text into predefined categories based on its content, e.g. news, finance, science, etc. These models are BERT-based and can be used for applications such as sentiment analysis and relationship extraction.\n",
" - Token Classification - Classifies each input token separately. Models are based on BERT. Applications include named entity recognition, punctuation and capitalization, etc.\n",
" - Intent and Slot Classification - Used for joint recognition of intents and slots (entities) when building conversational assistants.\n",
" - Question Answering - Currently only SQuAD is supported. This takes a question and a passage as input, and predicts a span in the passage from which the answer is extracted.\n",
" - GLUE Benchmark - A benchmark of nine sentence- or sentence-pair language understanding tasks\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "F2m4BT2AuSM_"
},
"source": [
"## TTS Examples\n",
"---------\n",
"\n",
"NeMo supports Text To Speech (TTS, aka Speech Synthesis) via a two step inference procedure. First, a model is used to generate a mel spectrogram from text. Second, a model is used to generate audio from a mel spectrogram.\n",
"\n",
"Supported Models:\n",
"\n",
"Mel Spectrogram Generators:\n",
"* Tacotron2\n",
"* FastPitch\n",
"* Talknet\n",
"* And more...\n",
"\n",
"Audio Generators (Vocoders):\n",
"* WaveGlow\n",
"* HiFiGAN\n",
"* And more..."
]
},
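{
"cell_type": "markdown",
"metadata": {},
"source": [
"The two-step procedure can be sketched as follows. The model classes and pretrained checkpoint names below are assumptions for illustration - check the NeMo TTS documentation for the currently available names:\n",
"\n",
"```python\n",
"import nemo.collections.tts as nemo_tts\n",
"\n",
"# Step 1 - text to mel spectrogram (checkpoint names are assumptions)\n",
"spec_gen = nemo_tts.models.FastPitchModel.from_pretrained('tts_en_fastpitch')\n",
"tokens = spec_gen.parse('Hello world!')\n",
"spectrogram = spec_gen.generate_spectrogram(tokens=tokens)\n",
"\n",
"# Step 2 - mel spectrogram to audio via a vocoder\n",
"vocoder = nemo_tts.models.HifiGanModel.from_pretrained('tts_hifigan')\n",
"audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)\n",
"```"
]
},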
{
"cell_type": "markdown",
"metadata": {
"id": "XKJPRgUns2On"
},
"source": [
"# NeMo Tutorials\n",
"\n",
"Alongside the example scripts described above, NeMo provides in-depth tutorials on using these models, for each of the above domains, inside the `tutorials` directory of the NeMo repository.\n",
"\n",
"Tutorials are meant to be more in-depth explanations of the workflow for the discussed task - usually training a small model on a small amount of data, along with some explanation of the task itself.\n",
"\n",
"While the tutorials are a great example of the simplicity of NeMo, please note that for the best performance when training on real datasets, we advise using the example scripts instead of the tutorial notebooks. \n",
"\n",
"NeMo Tutorials directory can be found here - https://github.com/NVIDIA/NeMo/tree/main/tutorials"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "00_NeMo_Primer.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
|