This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et al. 2017) of temporal ensembling (Laine et al. 2017), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.

The strong performance of deep learning in computer vision tasks comes at the cost of requiring large datasets with corresponding ground truth labels for training. Such datasets are often expensive to produce, owing to the cost of the human labour required to produce the ground truth labels. Semi-supervised learning is an active area of research that aims to reduce the quantity of ground truth labels required for training. It is aimed at common practical scenarios in which only a small subset of a large dataset has corresponding ground truth labels. Unsupervised domain adaptation is a closely related problem in which one attempts to transfer knowledge gained from a labeled source dataset to a distinct unlabeled target dataset, within the constraint that the objective (e.g. digit classification) must remain the same. Domain adaptation offers the potential to train a model using labeled synthetic data, which is often abundantly available, and unlabeled real data. The scale of the problem can be seen in the VisDA-17 domain adaptation challenge images shown in FIG3. We will present our winning solution in Section 4.2. Recent work (BID28) has demonstrated the effectiveness of self-ensembling with random image augmentations to achieve state of the art performance in semi-supervised learning benchmarks. We have developed the approach proposed by BID28 to work in a domain adaptation scenario. We will show that this can achieve excellent results in specific small image domain adaptation benchmarks. More challenging scenarios, notably MNIST → SVHN and the VisDA-17 domain adaptation challenge, required further modifications. To this end, we developed confidence thresholding and class balancing, which allowed us to achieve state of the art results in a variety of benchmarks, with some of our results coming close to those achieved by traditional supervised learning. Our approach is sufficiently flexible to be applicable to a variety of network architectures, both randomly initialized and pre-trained. Our paper is organised as follows: in Section 2 we discuss related work that provides context and forms the basis of our technique; our approach is described in Section 3, with our experiments and results in Section 4; and finally we present our conclusions in Section 5.

In this section we will cover self-ensembling based semi-supervised methods that form the basis of our approach, and domain adaptation techniques to which our work can be compared. Recent work based on methods related to self-ensembling has achieved excellent results in semi-supervised learning scenarios. A neural network is trained to make consistent predictions for unsupervised samples under different augmentation (BID23), dropout and noise conditions, or through the use of adversarial training.
We will focus in particular on the self-ensembling based approaches of BID13 and BID28, as they form the basis of our approach. BID13 present two models: their Π-model and their temporal model. The Π-model passes each unlabeled sample through a classifier twice, each time with different dropout, noise and image translation parameters. Their unsupervised loss is the mean of the squared difference in class probability predictions resulting from the two presentations of each sample. Their temporal model maintains a per-sample moving average of the historical network predictions and encourages subsequent predictions to be consistent with the average. Their approach achieved state of the art results in the SVHN and CIFAR-10 semi-supervised classification benchmarks. BID28 further improved on the temporal model of BID13 by using an exponential moving average of the network weights rather than of the class predictions. Their approach uses two networks, a student network and a teacher network, where the student is trained using gradient descent and the weights of the teacher are the exponential moving average of those of the student. The unsupervised loss used to train the student is the mean squared difference between the predictions of the student and the teacher, under different dropout, noise and image translation parameters.

There is a rich body of literature tackling the problem of domain adaptation. We focus on deep learning based methods as these are most relevant to our work. Auto-encoders are unsupervised neural network models that reconstruct their input samples by first encoding them into a latent space and then decoding and reconstructing them. BID5 describe an auto-encoder model that is trained to reconstruct samples from both the source and target domains, while a classifier is trained to predict labels from domain invariant features present in the latent representation using source domain labels. BID0 recognised that samples from disparate domains have distinct domain specific characteristics that must be represented in the latent representation to support effective reconstruction. They developed a split model that separates the latent representation into shared domain invariant features and private features specific to the source and target domains. Their classifier operates on the domain invariant features only. BID4 propose a bifurcated classifier that splits into label classification and domain classification branches after common feature extraction layers. A gradient reversal layer is placed between the common feature extraction layers and the domain classification branch; while the domain classification layers attempt to determine which domain a sample came from, the gradient reversal operation encourages the feature extraction layers to confuse the domain classifier by extracting domain invariant features. An alternative and simpler implementation described in their appendix minimises the label cross-entropy loss in the feature and label classification layers, minimises the domain cross-entropy in the domain classification layers, but maximises it in the feature layers. The model of runs along similar lines but uses separate feature extraction sub-networks for source and target samples and trains the model in two distinct stages. BID21 use tri-training (Zhou & Li): feature extraction layers are used to drive three classifier sub-networks. The first two are trained on samples from the source domain, while a weight similarity penalty encourages them to learn different weights.
Pseudo-labels generated for target domain samples by these source domain classifiers are used to train the final classifier to operate on the target domain. Generative Adversarial Networks (GANs; BID6) are unsupervised models that consist of a generator network that is trained to generate samples that match the distribution of a dataset by fooling a discriminator network that is simultaneously trained to distinguish real samples from generated samples. Some GAN based models, such as that of BID24, use a GAN to help learn a domain invariant embedding for samples. Many GAN based domain adaptation approaches use a generator that transforms samples from one domain to another. BID1 propose a GAN that adapts synthetic images to better match the characteristics of real images. Their generator takes a synthetic image and noise vector as input and produces an adapted image. They train a classifier to predict annotations for source and adapted samples alongside the GAN, while encouraging the generator to preserve aspects of the image important for annotation. The model of BID25 consists of a refiner network (in the place of a generator) and a discriminator that have a limited receptive field, limiting their model to making local changes while preserving ground truth annotations. The use of refined simulated images with corresponding ground truths resulted in improved performance in gaze and hand pose estimation. BID20 present a bi-directional GAN composed of two generators that transform samples from the source to the target domain and vice versa. They transform labelled source samples to the target domain using one generator and back to the source domain with the other, and encourage the network to learn label class consistency. This work bears similarities to CycleGAN. A number of domain adaptation models maximise domain confusion by minimising the difference between the distributions of features extracted from the source and target domains. minimises the difference between the feature covariance matrices for a mini-batch of samples from the source and target domains. and BID16 minimise the Maximum Mean Discrepancy metric (BID7). described adaptive batch normalization, a variant of batch normalization (BID11) that learns separate batch normalization statistics for the source and target domains in a two-pass process, establishing new state-of-the-art results. In the first pass, standard supervised learning is used to train a classifier for samples from the source domain. In the second pass, normalization statistics for target domain samples are computed for each batch normalization layer in the network, leaving the network weights as they are.

Our model builds upon the mean teacher semi-supervised learning model of BID28, which we will describe first. Subsequently we will present our modifications that enable domain adaptation. The structure of the mean teacher model of BID28 (also discussed in section 2.1) is shown in FIG1. The student network is trained using gradient descent, while the weights of the teacher network are an exponential moving average of those of the student. During training, each input sample x i is passed through both the student and teacher networks, generating predicted class probability vectors z i (student) and z̃ i (teacher). Different dropout, noise and image translation parameters are used for the student and teacher pathways. During each training iteration a mini-batch of samples is drawn from the dataset, consisting of both labeled and unlabeled samples.
The training loss is the sum of a supervised and an unsupervised component. The supervised loss is the cross-entropy loss computed using z i (the student prediction). It is masked to 0 for unlabeled samples for which no ground truth is available. The unsupervised component is the self-ensembling loss. It penalises the difference in class predictions between the student (z i) and teacher (z̃ i) networks for the same input sample. It is computed using the mean squared difference between the class probability predictions z i and z̃ i. Laine & Aila and BID28 found that it was necessary to apply a time-dependent weighting to the unsupervised loss during training in order to prevent the network from getting stuck in a degenerate solution that gives poor classification performance. They used a function that follows a Gaussian curve from 0 to 1 during the first 80 epochs. In the following subsections we will describe our contributions in detail, along with the motivations for introducing them.

We minimise the same loss as in BID28: we apply cross-entropy loss to labeled source samples and the unsupervised self-ensembling loss to target samples. As in BID28, the self-ensembling loss is computed as the mean squared difference between predictions produced by the student (z T i) and teacher (z̃ T i) networks with different augmentation, dropout and noise parameters. The models of BID28 and of BID13 were designed for semi-supervised learning problems in which a subset of the samples in a single dataset have ground truth labels. During training, both models mix labeled and unlabeled samples together in a mini-batch. In contrast, unsupervised domain adaptation problems use two distinct datasets with different underlying distributions: labeled source and unlabeled target. Our variant of the mean teacher model, shown in FIG1, has separate source (X Si) and target (X T i) paths. Inspired by the work of, we process mini-batches from the source and target datasets separately (per iteration) so that batch normalization uses different normalization statistics for each domain during training. (Footnote 1: We do not use the approach of as-is, as they handle the source and target datasets separately in two distinct training phases, whereas our approach must train using both simultaneously. We also do not maintain separate exponential moving averages of the means and variances for each dataset for use at test time.) As seen in the 'MT+TF' row of Table 1, the model described thus far achieves state of the art results in 5 out of 8 small image benchmarks. The MNIST → SVHN, STL → CIFAR-10 and Syn-digits → SVHN benchmarks, however, require additional modifications to achieve good performance.

We found that replacing the Gaussian ramp-up factor that scales the unsupervised loss with confidence thresholding stabilized training in more challenging domain adaptation scenarios. For each unlabeled sample x T i the teacher network produces the predicted class probability vector z̃ T ij, where j is the class index drawn from the set of classes C, from which we compute the confidence f̃ T i = max j∈C (z̃ T ij), the predicted probability of the predicted class of the sample. If f̃ T i is below the confidence threshold (a parameter search found 0.968 to be an effective value for small image benchmarks), the self-ensembling loss for the sample x i is masked to 0. Our working hypothesis is that confidence thresholding acts as a filter, shifting the balance in favour of the student learning correct labels from the teacher. While high network prediction confidence does not guarantee correctness, there is a positive correlation. Given the tolerance to incorrect labels reported by BID13, we believe that the higher signal-to-noise ratio underlies the success of this component of our approach.
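For concreteness, the following PyTorch-style sketch illustrates the confidence-thresholded self-ensembling loss described above. It is a minimal sketch, not the released implementation: the function and variable names are ours, and the only values taken from the text are the use of softmax class probabilities, the mean squared difference, and the 0.968 threshold.

```python
import torch
import torch.nn.functional as F

def self_ensembling_loss(student_logits, teacher_logits, conf_threshold=0.968):
    """Mean-squared consistency loss between student and teacher predictions,
    masked per sample by the teacher's confidence (sketch of the loss described above)."""
    student_prob = F.softmax(student_logits, dim=1)
    teacher_prob = F.softmax(teacher_logits, dim=1).detach()  # no gradient through the teacher
    conf, _ = teacher_prob.max(dim=1)          # predicted probability of the predicted class
    mask = (conf > conf_threshold).float()     # 1 if the sample passes the confidence threshold
    per_sample = ((student_prob - teacher_prob) ** 2).mean(dim=1)
    return (mask * per_sample).mean(), mask
```

The returned mask is reused below to scale the class balance loss by the fraction of samples that pass the threshold.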
The use of confidence thresholding achieves state of the art results in the STL → CIFAR-10 and Syn-digits → SVHN benchmarks, as seen in the 'MT+CT+TF' row of Table 1. While confidence thresholding can result in very slight reductions in performance (see the MNIST ↔ USPS and SVHN → MNIST results), its ability to stabilise training in challenging scenarios leads us to recommend it as a replacement for the time-dependent Gaussian ramp-up used in BID13.

We explored the effect of three data augmentation schemes in our small image benchmarks (section 4.1). Our minimal scheme (which should be applicable in non-visual domains) consists of Gaussian noise (with σ = 0.1) added to the pixel values. The standard scheme (indicated by 'TF' in Table 1) was used by BID13 and adds translations in the interval [−2, 2], and horizontal flips for the CIFAR-10 ↔ STL experiments. The affine scheme (indicated by 'TFA') adds random affine transformations defined by the matrix in DISPLAYFORM0, where N(0, 0.1) denotes a real value drawn from a normal distribution with mean 0 and standard deviation 0.1. The use of translations and horizontal flips has a significant impact in a number of our benchmarks. It is necessary in order to outpace prior art in the MNIST ↔ USPS and SVHN → MNIST benchmarks, and improves performance in the CIFAR-10 ↔ STL benchmarks. The use of affine augmentation can improve performance in experiments involving digit and traffic sign recognition datasets, as seen in the 'MT+CT+TFA' row of Table 1. In contrast, it can impair performance when used with photographic datasets, as seen in the STL → CIFAR-10 experiment. It also impaired performance in the VisDA-17 experiment (section 4.2).

With the adaptations made so far, the challenging MNIST → SVHN benchmark remains undefeated due to training instabilities. During training we noticed that the error rate on the SVHN test set decreases at first, then rises and reaches high values before training completes. We diagnosed the problem by recording the predictions for the SVHN target domain samples after each epoch. The rise in error rate correlated with the predictions evolving toward a condition in which most samples are predicted as belonging to the '1' class, the most populous class in the SVHN dataset. We hypothesize that the class imbalance in the SVHN dataset caused the unsupervised loss to reinforce the '1' class more often than the others, resulting in the network settling in a degenerate local minimum. Rather than distinguishing between digit classes as intended, it separated MNIST from SVHN samples and assigned the latter to the '1' class. We addressed this problem by introducing a class balance loss term that penalises the network for making predictions that exhibit large class imbalance. For each target domain mini-batch we compute the mean of the predicted sample class probabilities over the sample dimension, resulting in the mini-batch mean per-class probability. The loss is computed as the binary cross entropy between the mean class probability vector and a uniform probability vector. We balance the strength of the class balance loss with that of the self-ensembling loss by multiplying the class balance loss by the average of the confidence threshold mask (e.g. if 75% of samples in a mini-batch pass the confidence threshold, then the class balance loss is multiplied by 0.75).
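A minimal sketch of the class balance loss just described, in the same style as the previous snippet; the binary cross-entropy against a uniform vector and the confidence-mask weighting follow the text, while the function name, the epsilon clamp and the reduction over classes are our illustrative choices.

```python
import torch

def class_balance_loss(student_prob, conf_mask, eps=1e-6):
    """Penalise predictions whose mini-batch average deviates from a uniform
    class distribution (sketch of the class balance loss described above)."""
    n_classes = student_prob.shape[1]
    mean_prob = student_prob.mean(dim=0).clamp(eps, 1.0 - eps)   # mini-batch mean per-class probability
    uniform = torch.full_like(mean_prob, 1.0 / n_classes)
    # binary cross-entropy between the mean prediction and the uniform target vector
    bce = -(uniform * mean_prob.log() + (1 - uniform) * (1 - mean_prob).log()).mean()
    return conf_mask.mean() * bce  # scale by the fraction of samples passing the confidence threshold
```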
We would like to note the similarity between our class balance loss and the entropy maximisation loss in the IMSAT clustering model of BID10; IMSAT employs entropy maximisation to encourage uniform cluster sizes and entropy minimisation to encourage unambiguous cluster assignments. Our implementation was developed using PyTorch (Chintala et al.) and is publicly available at http://github.com/Britefury/self-ensemble-visual-domain-adapt.

Our results can be seen in Table 1. The 'train on source' and 'train on target' results report the target domain performance of supervised training on the source and target domains. They represent the expected baseline and the best achievable result. The 'Specific aug.' experiments used data augmentation specific to the MNIST → SVHN adaptation path that is discussed further down. The small image datasets and data preparation procedures are described in Appendix A. Our training procedure is described in Appendix B and our network architectures are described in Appendix D. The same network architectures and augmentation parameters were used for the domain adaptation experiments and the supervised baselines discussed above. It is worth noting that only the training sets of the small image datasets were used during training; the test sets were used for reporting scores only.

MNIST ↔ USPS (see FIG2). MNIST and USPS are both greyscale hand-written digit datasets. In both adaptation directions our approach not only demonstrates a significant improvement over prior art but nearly achieves the performance of supervised learning using the target domain ground truths. The strong performance of the base mean teacher model can be attributed to the similarity of the datasets to one another. It is worth noting that data augmentation allows our 'train on source' baseline to outpace prior domain adaptation methods.

CIFAR-10 ↔ STL (see FIG2). CIFAR-10 and STL are both 10-class image datasets, although we removed one class from each (see Appendix A.2). We obtained strong performance in the STL → CIFAR-10 path, but only by using confidence thresholding. The CIFAR-10 → STL results are more interesting: the 'train on source' baseline performance outperforms that of a network trained on the STL target domain, most likely due to the small size of the STL training set. Our self-ensembling results outpace both the baseline performance and the 'theoretical maximum' of a network trained on the target domain, lending further evidence to the view of BID23 and BID13 that self-ensembling acts as an effective regulariser. (Table 1: Small image benchmark classification accuracy; each result is presented as mean ± standard deviation, computed from 5 independent runs. The abbreviations for components of our models are as follows: MT = mean teacher, CT = confidence thresholding, TF = translation and horizontal flip augmentation, TFA = translation, horizontal flip and affine augmentation; * indicates minimal augmentation.)

Syn-Digits → SVHN (see FIG2). The Syn-Digits dataset is a synthetic dataset designed by BID4 to be used as a source dataset in domain adaptation experiments with SVHN as the target dataset. Other approaches have achieved good scores on this benchmark, beating the baseline by a significant margin. Our result improves on them, reducing the error rate from 6.9% to 2.9%, even slightly outpacing the 3.4% error rate achieved by 'train on target' supervised learning.

Syn-Signs → GTSRB (see FIG2).
Syn-Signs is another synthetic dataset designed by BID4 to target the 43-class GTSRB (German Traffic Signs Recognition Benchmark; BID26) dataset. Our approach halved the best error rate of competing approaches. Once again, our approach slightly outpaces the 'train on target' supervised learning upper bound.

SVHN → MNIST (see FIG2). Google's SVHN (Street View House Numbers) is a colour digits dataset of house number plates. Our approach significantly outpaces other techniques and achieves an accuracy close to that of supervised learning.

MNIST → SVHN (see FIG2). This adaptation path is somewhat more challenging, as MNIST digits are greyscale and uniform in terms of size, aspect ratio and intensity range, in contrast to the variably sized colour digits present in SVHN. As a consequence, adapting from MNIST to SVHN required additional work. The class balancing loss was necessary to ensure training stability, and additional experiment-specific data augmentation was required to achieve good accuracy. The use of translations and affine augmentation (see section 3.3) resulted in an accuracy score of 37%. Significant improvements resulted from additional augmentation in the form of random intensity flips (negative image), and random intensity scales and offsets drawn from the intervals [0.25, 1.5] and [−0.5, 0.5] respectively. These hyper-parameters were selected in order to augment MNIST samples to match the intensity variations present in SVHN, as illustrated in FIG2. With these additional modifications, we achieve a result that significantly outperforms prior art and nearly achieves the accuracy of a supervised classifier trained on the target dataset. We found that applying these additional augmentations to the source MNIST dataset only yielded good results; applying them to the target SVHN dataset as well yielded a small improvement but was not essential. It should also be noted that this augmentation scheme raises the performance of the 'train on source' baseline to just above that of much of the prior art.

The VisDA-2017 image classification challenge is a 12-class domain adaptation problem consisting of three datasets: a training set consisting of 3D renderings of Sketchup models, and validation and test sets consisting of real images (see FIG3) drawn from the and YouTube BoundingBoxes (BID19) datasets respectively. The objective is to learn from labeled computer-generated images and correctly predict the class of real images. Ground truth labels were made available for the training and validation sets only; test set scores were computed by a server operated by the competition organisers. While the algorithm is that presented above, we base our network on the pre-trained ResNet-152 (BID9) network provided by PyTorch (Chintala et al.), rather than using a randomly initialised network as before. The final 1000-class classification layer is removed and replaced with two fully-connected layers; the first has 512 units with a ReLU non-linearity, while the final layer has 12 units with a softmax non-linearity. Results from our original competition submissions and newer results using two data augmentation schemes are presented in Table 2. Our reduced augmentation scheme consists of random crops, random horizontal flips and random uniform scaling. It is very similar to the scheme used for ImageNet image classification in BID9. Our competition configuration includes additional augmentation that was specifically designed for the VisDA dataset, although we subsequently found that it makes little difference.
Our hyper-parameters and competition data augmentation scheme are described in Appendix C.1. It is worth noting that we applied test time augmentation (we averaged predictions from 16 differently augmented images) to achieve our competition results. We present results with and without test time augmentation in Table 2. Our VisDA competition test set score is also the result of ensembling the predictions of 5 different networks.

We have presented an effective domain adaptation algorithm that has achieved state of the art results in a number of benchmarks, and has achieved accuracies that are almost on par with traditional supervised learning on digit recognition benchmarks targeting the MNIST and SVHN datasets. (Table 2: VisDA-17 performance, presented as mean ± std-dev of 5 independent runs. Full results are presented in TAB6 in Appendix C.) The resulting networks exhibit strong performance on samples from both the source and target domains. Our approach is sufficiently flexible to be usable for a variety of network architectures, including those based on randomly initialised and pre-trained networks. It has been stated that the self-ensembling methods on which our algorithm is based operate by label propagation. This view is supported by our results, in particular our MNIST → SVHN experiment. The latter requires additional intensity augmentation in order to sufficiently align the dataset distributions, after which good quality label predictions are propagated throughout the target dataset. In cases where data augmentation is insufficient to align the dataset distributions, a pre-trained network may be used to bridge the gap, as in our solution to the VisDA-17 challenge. This leads us to conclude that effective domain adaptation can be achieved by first aligning the distributions of the source and target datasets (the focus of much prior art in the field) and then refining their correspondence, a task to which self-ensembling is well suited.

The datasets used in this paper are described in TAB3. Some of the experiments that involved datasets described in TAB3 required additional data preparation in order to match the resolution and format of the input samples and match the classification target. These additional steps will now be described. MNIST ↔ USPS: the USPS images were up-scaled using bilinear interpolation from 16 × 16 to 28 × 28 resolution to match that of MNIST. CIFAR-10 ↔ STL: CIFAR-10 and STL are both 10-class image datasets. The STL images were down-scaled to 32 × 32 resolution to match that of CIFAR-10. The 'frog' class in CIFAR-10 and the 'monkey' class in STL were removed as they have no equivalent in the other dataset, resulting in a 9-class problem with 10% fewer samples in each dataset. Syn-Signs → GTSRB: GTSRB is composed of images that vary in size and come with annotations that provide the region of interest (bounding box around the sign) and ground truth classification. We extracted the region of interest from each image and scaled it to a resolution of 40 × 40 to match that of Syn-Signs. MNIST ↔ SVHN: the MNIST images were padded to 32 × 32 resolution and converted to RGB by replicating the greyscale channel into the three RGB channels to match the format of SVHN.

B SMALL IMAGE EXPERIMENT TRAINING. B.1 TRAINING PROCEDURE.
Our networks were trained for 300 epochs. We used the Adam (BID12) gradient descent algorithm with a learning rate of 0.001. We trained using mini-batches composed of 256 samples, except in the Syn-digits → SVHN and Syn-signs → GTSRB experiments, where we used 128 in order to reduce memory usage. The self-ensembling loss was weighted by a factor of 3 and the class balancing loss was weighted by 0.005. Our teacher network weights t i were updated so as to be an exponential moving average of those of the student s i, using the formula t i = α t i−1 + (1 − α) s i, with a value of 0.99 for α.
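The exponential moving average update above takes only a few lines; the following PyTorch-style sketch is illustrative (the function name and looping over parameters are ours), using α = 0.99 as stated in the text.

```python
import torch

def update_teacher(student, teacher, alpha=0.99):
    """Teacher weights become an exponential moving average of the student weights:
    t_i = alpha * t_{i-1} + (1 - alpha) * s_i (sketch of the update described above)."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)
```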
A complete pass over the target dataset was considered to be one epoch in all experiments except the MNIST → USPS and CIFAR-10 → STL experiments, where, due to the small size of the target datasets, one epoch was considered to be a pass over the larger source dataset. We found that the proportion of samples that pass the confidence threshold can be used to drive early stopping (BID18). The final score was the target test set performance at the epoch at which the highest confidence threshold pass rate was obtained.

C VISDA-17. C.1 HYPER-PARAMETERS.
Our training procedure was the same as that used in the small image experiments, except that we used 160 × 160 images, a batch size of 56 (reduced from 64 to fit within the memory of an nVidia 1080-Ti), a self-ensembling weight of 10 (instead of 3), a confidence threshold of 0.9 (instead of 0.968) and a class balancing weight of 0.01. We used the Adam (BID12) gradient descent algorithm with a learning rate of 10^−5 for the final two randomly initialized layers and 10^DISPLAYFORM0 for the pre-trained layers. The first convolutional layer and the first group of convolutional layers (with 64 feature channels) of the pre-trained ResNet were left unmodified during training.
Reduced data augmentation:
• scale the image so that its smallest dimension is 176 pixels, then randomly crop a 160 × 160 section from the scaled image
• no random affine transformations, as they increase confusion between the car and truck classes in the validation set
• random uniform scaling in the range [0.75, 1.333]
• horizontal flipping
Competition data augmentation adds the following in addition to the above:
• random intensity/brightness scaling in the range [0.75, 1.333]
• random rotations, normally distributed with a standard deviation of 0.2π
• random desaturation, in which the colours in an image are randomly desaturated to greyscale by a factor between 0% and 100%
• rotations in colour space, around a randomly chosen axis, with a standard deviation of 0.05π
• random offset in colour space, after standardisation using parameters specified by
Table 8: Syn-signs → GTSRB architecture (final layers):
10 × 10 × 192: Dropout, 50%
10 × 10 × 192: Conv 3 × 3 × 384, pad 1, batch norm
10 × 10 × 384: Conv 3 × 3 × 384, pad 1, batch norm
10 × 10 × 384: Conv 3 × 3 × 384, pad 1, batch norm
10 × 10 × 384: Max-pool, 2 × 2
5 × 5 × 384: Dropout, 50%
5 × 5 × 384: Global pooling layer
1 × 1 × 384: Fully connected, 43 units, softmax
43
TL;DR: Self-ensembling based algorithm for visual domain adaptation, state of the art results, won VisDA-2017 image classification domain adaptation challenge.
It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before. We call the ability to create images of novel semantic concepts visually grounded imagination. In this paper, we show how we can modify variational auto-encoders to perform this task. Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified (abstract) concepts in a principled and efficient way. We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good visual imagination, namely correctness, coverage, and compositionality (the 3 C's). Finally, we perform a detailed comparison of our method with two existing joint image-attribute VAE methods (the JMVAE method and the BiVCCA method) by applying them to two datasets: the MNIST-with-attributes dataset (which we introduce here), and the CelebA dataset.

Consider the following two-party communication game: a speaker thinks of a visual concept C, such as "men with black hair", and then generates a description y of this concept, which she sends to a listener; the listener interprets the description y by creating an internal representation z, which captures its "meaning". We can think of z as representing a set of "mental images" which depict the concept C. To test whether the listener has correctly "understood" the concept, we ask him to draw a set of real images S = {x s : s = 1 : S}, which depict the concept C. He then sends these back to the speaker, who checks to see if the images correctly match the concept C. We call this process visually grounded imagination. In this paper, we represent concept descriptions in terms of a fixed length vector of discrete attributes A. This allows us to specify an exponentially large set of concepts using a compact, combinatorial representation. In particular, by specifying different subsets of attributes, we can generate concepts at different levels of granularity or abstraction. We can arrange these concepts into a compositional abstraction hierarchy, as shown in Figure 1. This is a directed acyclic graph (DAG) in which nodes represent concepts, and an edge from a node to its parent is added whenever we drop one of the attributes from the child's concept definition. Note that we don't make any assumptions about the order in which the attributes are dropped (that is, dropping the attribute "smiling" is just as valid as dropping "female" in Figure 1). Thus, the tree shown in the figure is just a subset extracted from the full DAG of concepts, shown for illustration purposes. We can describe a concept by creating the attribute vector y O, in which we only specify the values of the attributes in the subset O ⊆ A; the remaining attributes are unspecified, and are assumed to take all possible legal values. For example, consider the following concepts, in order of increasing abstraction: C msb = (male, smiling, blackhair), C * sb = (*, smiling, blackhair), and C * * b = (*, *, blackhair), where the attributes are gender, smiling or not, and hair color, and * represents "don't care". A good model should be able to generate images from different levels of the abstraction hierarchy, as shown in Figure 1. (This is in contrast to most prior work on conditional generative models of images, which assumes that all attributes are fully specified, which corresponds to sampling only from leaf nodes in the hierarchy.)
Figure 1: A compositional abstraction hierarchy for faces, derived from 3 attributes: hair color, smiling or not, and gender. We show a set of sample images generated by our model, when trained on CelebA, for different nodes in this hierarchy.

In Section 2, we show how we can extend the variational autoencoder (VAE) framework of BID15 to create models which can perform this task. The first extension is to modify the model to the "multi-modal" setting, where we have both an image, x, and an attribute vector, y. More precisely, we assume a joint generative model of the form p(x, y, z) = p(z) p(x|z) p(y|z), where p(z) is the prior over the latent variable z, p(x|z) is our image decoder, and p(y|z) is our description decoder. We additionally assume that the description decoder factorizes over the specified attributes in the description, so p(y O |z) = ∏ k∈O p(y k |z). We further extend the VAE by devising a novel objective function, which we call the TELBO, for training the model from paired data, D = {(x n, y n)}. However, at test time, we will allow unpaired data (either just a description or just an image). Hence we fit three inference networks: q(z|x, y), q(z|x) and q(z|y). This way we can embed an image or a description into the same shared latent space (using q(z|x) and q(z|y), respectively); this lets us "translate" images into descriptions or vice versa, by computing p(y|x) = ∫ p(y|z) q(z|x) dz and p(x|y) = ∫ p(x|z) q(z|y) dz. To handle abstract concepts (i.e., partially observed attribute vectors), we use a method based on the product of experts (POE) BID8. In particular, our inference network for attributes has the form q(z|y O) ∝ p(z) ∏ k∈O q(z|y k). If no attributes are specified, the posterior is equal to the prior. As we condition on more attributes, the posterior becomes narrower, which corresponds to specifying a more precise concept. This enables us to generate a more diverse set of images to represent abstract concepts, and a less diverse set of images to represent concrete concepts, as we show below. Section 3 discusses how to evaluate the performance of our method in an objective way. Specifically, we first "ground" the description by generating a set of images, S(y O) = {x s ∼ p(x|y O): s = 1: S}. We then check that all the sampled images in S(y O) are consistent with the specified attributes y O (we call this correctness). We also check that the set of images "spans" the extension of the concept, by exhibiting suitable diversity (c.f. BID36). Concretely, we check that the attributes that were not specified (e.g., gender in C * sb above) vary across the different images; we call this coverage. Finally, we want the set of images to have high correctness and coverage even if the concept y O has a combination of attribute values that has not been seen in training. For example, if we train on C msb = (male, smiling, blackhair) and C f nb = (female, notsmiling, blackhair), we should be able to test on C mnb = (male, notsmiling, blackhair) and C f sb = (female, smiling, blackhair). We will call this property compositionality. Being able to generate plausible images in response to truly compositionally novel queries is the essence of imagination. Together, we call these criteria the 3 C's of visual imagination. Section 5 reports experimental results on two different datasets. The first dataset is a modified version of MNIST, which we call MNIST-with-attributes (or MNIST-A), in which we "render" modified versions of a single MNIST digit on a 64x64 canvas, varying its location, orientation and size.
The second dataset is CelebA (BID16), which consists of over 200k face images, annotated with 40 binary attributes. We show that our method outperforms previous methods on these datasets. The contributions of this paper are threefold. First, we present a novel extension to VAEs in the multimodal setting, introducing a principled new training objective (the TELBO), and deriving an interpretation of a previously proposed objective (JMVAE) (BID31) as a valid alternative in Appendix A.1. Second, we present a novel way to handle missing data in inference networks based on a product of experts. Third, we present novel criteria (the 3 C's) for evaluating conditional generative models of images, which extend prior work by considering the notions of visual abstraction and imagination.

We start by describing standard VAEs, to introduce notation. We then discuss our extensions to handle the multimodal and missing input settings. Standard VAEs. A variational autoencoder (BID15) is a latent variable model of the form p θ (x, z) = p θ (z) p θ (x|z), where p θ (z) is the prior (we assume it is Gaussian, p θ (z) = N(z|0, I), although this assumption can be relaxed), and p θ (x|z) is the likelihood (sometimes called the decoder), usually represented by a neural network. To perform approximate posterior inference, we fit an inference network (sometimes called the encoder) of the form q φ (z|x), so as to maximize L(θ, φ) = E p̂(x) [elbo(x; θ, φ, λ, β)], where p̂(x) is the empirical distribution, and elbo is the evidence lower bound: elbo(x; θ, φ, λ, β) = λ E q φ (z|x) [log p θ (x|z)] − β KL(q φ (z|x), p θ (z)). Here KL(p, q) is the Kullback-Leibler divergence between distributions p and q. By default, β = λ = 1, in which case we will just write elbo(x, θ, φ). However, by using β > 1 we can encourage the posterior to be closer to the factorial prior p(z) = N(z|0, I), which encourages the latent factors to be "disentangled", as proved in BID0; this is known as the β-VAE trick (BID6). And allowing λ > 1 will be useful later, when we have multiple modalities.

Joint VAEs and the TELBO. We extend the VAE to model images and attributes by defining the joint distribution p θ (x, y, z) = p θ (z) p θ (x|z) p θ (y|z), where p θ (x|z) is the image decoder (we use the DCGAN architecture from), and p θ (y|z) is an MLP for the attribute vector. The corresponding training objective which we want to maximize becomes L(θ, φ) = E p̂(x,y) [elbo(x, y; θ, φ, λ x, λ y, β)], where p̂(x, y) is the empirical distribution derived from paired data, and the joint ELBO is given by elbo(x, y; θ, φ, λ x, λ y, β) = λ x E q φ (z|x,y) [log p θ (x|z)] + λ y E q φ (z|x,y) [log p θ (y|z)] − β KL(q φ (z|x, y), p θ (z)). We call this the JVAE (joint VAE) model. We usually set β = 1, but set λ y /λ x > 1 to scale up the likelihood from the low dimensional attribute vector, p θ (y|z), to match the likelihood from the high dimensional image, p θ (x|z). Having fit the joint model above, we can proceed to train unpaired inference networks q φx (z|x) and q φy (z|y), so we can embed images and attributes into the same shared latent space. Keeping the p family fixed from the joint model, a natural objective to fit, say, q φx (z|x) is to maximize the following: DISPLAYFORM4, where the last term is constant wrt φ x and the model family p, and hence can be dropped. We can use a similar method to fit q φy (z|y). Combining these gives the following triple ELBO (TELBO) objective: L(θ, φ, φ x, φ y) = E p̂(x,y) [elbo(x, y; θ, φ, λ) + elbo(x; θ, φ x) + elbo(y; θ, φ y, γ)], where λ and γ scale the log likelihood terms log p(y|z); we set these parameters using a validation set. Since we are training the generative model only on aligned data, and simply retrofitting inference networks, we freeze the p θx (x|z) and p θy (y|z) terms when training the last two ELBO terms above, and just optimize the q φx (z|x) and q φy (z|y) terms. This enables us to optimize all terms in Equation FORMULA5 jointly. Alternatively, we can first fit the joint model, and then fit the unimodal inference networks. In Section 4, we compare this to other methods for training joint VAEs that have been proposed in the literature.

Handling missing attributes. In order to handle missing attributes at test time, we use a product of experts model, where each attribute instantiates an expert. We are motivated by prior work (BID34) which shows that for a linear factor analysis model, the posterior distribution p(z|y) is a product of K-dimensional Gaussians, one for each visible dimension. Since our model is just a nonlinear extension of factor analysis, we choose the form of the approximate posterior of our inference network, q(z|y), to be a product of Gaussians, one for each visible feature: q φy (z|y O) ∝ p(z) ∏ k∈O q(z|y k), where q(z|y k) = N(z|µ k, C k) is the kth Gaussian "expert", and p(z) = N(z|µ 0 = 0, C 0 = I) is the prior. A similar model was concurrently proposed in BID4 to perform inference for a set of images. Unlike the product of experts model in BID8, our model multiplies Gaussians, not Bernoullis, so the product has a closed form solution, namely q(z|y O) = N(z|µ, C), where C^−1 = C 0^−1 + Σ k∈O C k^−1 and µ = C (C 0^−1 µ 0 + Σ k∈O C k^−1 µ k), and the sum is over all the observed attributes. Intuitively, y imposes an increasing number of constraints on z as more of it is observed, as explained in BID33. In our setting, if we do not observe any attributes, the posterior reduces to the prior. As we observe more attributes, the posterior becomes narrower, since the (positive definite) precision matrices C k^−1 add up, reflecting the increased specificity of the concept being specified, as illustrated in Figure 2 (middle) (see also BID33). We always include the prior term, p(z), in the product, since without it, the posterior q φy (z|y O) may not be well-conditioned when we are missing attributes, as illustrated in Figure 2 (right).

Figure 2: Illustration of the product of experts inference network. Each expert votes for a part of latent space implied by its observed attribute. The final posterior is the intersection of these regions. When all attributes are observed, the posterior will be a narrowly defined Gaussian, but when some attributes are missing, the posterior will be broader. Right: we illustrate how inclusion of the "universal expert" p(z) in the product ensures that the posterior is always well-conditioned (close to spherical), even when we are missing some attributes.
To evaluate the quality of a set of generated images, S(y O) = {x s ∼ p(x|y O): s = 1: S}, we apply a multi-label classifier to each image, to convert it to a predicted attribute vector, ŷ(x). This attribute classifier is trained on a large dataset of images and attributes, and is held constant across all methods that are being evaluated. It plays the role of a human observer. This is similar in spirit to generative adversarial networks (BID5), which declare a generated image to be good enough if a binary classifier cannot distinguish it from a real image. (Both approaches avoid the problems mentioned in BID28 related to evaluating generative image models in terms of their likelihood.) However, the attribute classifier checks not only that the images look realistic, but also that they have the desired attributes. To quantify this, we define the correctness as the fraction of attributes for each generated image that match those specified in the concept's description: correctness(S, y O) = (1 / (|S| |O|)) Σ x∈S Σ k∈O I(ŷ k (x) = y k). However, we also want to measure the diversity of values for the unspecified or missing attributes, M = A \ O. We do this by comparing q k, the empirical distribution over values for attribute k induced by the generated set S, to p k, the true distribution for this attribute induced by the training set. We measure the difference between these distributions using the Jensen-Shannon divergence, since it is symmetric and satisfies 0 ≤ JS(p, q) ≤ 1. We then define the coverage as follows: coverage(S, y M) = (1 / |M|) Σ k∈M (1 − JS(p k, q k)). If desired, we can combine correctness and coverage into a single number, by computing the JS divergence between p k and q k for all attributes, where, for observed attributes, p k is a delta function and q k is the empirical distribution (we call this JS-overall). This gives us a convenient way to pick hyperparameters. However, for analysis, we find it helpful to report correctness and coverage separately. Note that our metric is different from the inception score proposed in BID24. That is defined as follows: inception = exp(E p(x) [KL(p(y|x), p(y))]), where y is a class label. Expanding the term inside the exponential, we get E p(x) E p(y|x) [log p(y|x) − log p(y)] = H(p(y)) − E p(x) [H(p(y|x))]. A high inception score means that the distribution p(y|x) has low entropy, so the generated images match some class, but that the marginal p(y) has high entropy, so the images are diverse. However, the inception score was created to evaluate unconditional generative models of images, so it does not check if the generated images are consistent with the concept y O, and the degree of diversity does not vary in response to the level of abstraction of the concept. Finally, we can assess how well the model understands compositionality, by checking the correctness of its generated images in response to test concepts y O that differ in at least one attribute from the training concepts. We call this a compositional split of the data. This is much harder than a standard iid split, since we are asking the model to predict the effects of novel combinations of attributes, which it has not seen before (and which might actually be impossible). Note that abstraction is different from compositionality: in abstraction we are asking the model to predict the effects of dropping certain attributes, rather than predicting the effects of novel combinations of attributes.
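The correctness and coverage metrics above reduce to a few lines of NumPy. The sketch below assumes that attribute predictions from the fixed attribute classifier are available as integer arrays; all function and variable names are illustrative, and JS is computed in base 2 so that it is bounded by 1 as stated above.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def correctness(pred_attrs, concept):
    """Fraction of specified attributes matched, averaged over generated images.
    pred_attrs: (S, A) int array of classifier outputs; concept: dict {attr_index: value}."""
    hits = [np.mean([pred[k] == v for k, v in concept.items()]) for pred in pred_attrs]
    return float(np.mean(hits))

def coverage(pred_attrs, concept, train_dists):
    """1 - JS between the empirical and training distributions of each unspecified attribute.
    train_dists: dict {attr_index: array of value frequencies from the training set}."""
    scores = []
    for k, p_k in train_dists.items():
        if k in concept:              # only attributes left unspecified by the concept count
            continue
        counts = np.bincount(pred_attrs[:, k], minlength=len(p_k))
        q_k = counts / counts.sum()
        scores.append(1.0 - js_divergence(p_k, q_k))
    return float(np.mean(scores))
```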
In this section, we briefly mention some of the most closely related prior work. Conditional models. Many conditional generative image models of the form p(x|y) have been proposed recently, where y can be a class label (e.g.,), a vector of attributes (e.g.,), a sentence (e.g., BID21), another image (e.g., BID11), etc. Such models are usually based on VAEs or GANs. However, we are more interested in learning a shared latent space from either descriptions y or images x, which means we need to use a joint, symmetric, model. Joint models. Several papers use the same joint VAE model as us, but they differ in how it is trained. In particular, the BiVCCA objective of BID32 has the form L(θ, φ) = E p̂(x,y) [µ (E q φx (z|x) [log p θ (x, y|z)] − KL(q φx (z|x), p θ (z))) + (1 − µ) (E q φy (z|y) [log p θ (x, y|z)] − KL(q φy (z|y), p θ (z)))]. This method results in the model generating the mean image corresponding to each concept, due to the E q φy (z|y) [log p θ (x, y|z)] term, which requires that z's sampled from q φy (z|y n) be good at generating all the different x n's which co-occur with y n. We show this empirically in Section 5. This problem can be partially compensated for by increasing µ, but that reduces the KL(q φy (z|y), p θ (z)) penalty, which is required to ensure q φy (z|y) is a broad distribution with good coverage of the concept. The JMVAE objective of BID26 has the form elbo(x, y; θ, φ) − α [KL(q φ (z|x, y), q φx (z|x)) + KL(q φ (z|x, y), q φy (z|y))]. At first glance, forcing q φy (z|y) to be close to q φ (z|x, y) seems undesirable, since the latter will typically be close to a delta function, since there is little posterior uncertainty in z once we see the image x. However, in Appendix A.1, we use results from BID9 to show that E p̂(x,y) [KL(q φ (z|x, y), q φy (z|y))] can be written in terms of KL(q avg φ (z|y), q φy (z|y)), where q avg φ (z|y) = E p̂(x|y) [q φ (z|x, y)] is the aggregated posterior over z induced by all images x which are associated with description y. This ensures that q φy (z|y) will cover the embeddings of all the images associated with concept y. However, since there is no KL(q φy (z|y), p θ (z)) term, the diversity of the samples is slightly reduced for novel concepts compared to TELBO, as we show empirically in Section 5. On the flip side, the benefit of using the aggregated posterior to fit the q(z|y) inference network is that one can expect sharper images, as this ensures we will sample z ∼ q(z|y) which have been seen by the image decoder p θ (x|z) during joint training. If the aggregated posterior does not exactly match the prior (which is known to happen in VAE-type models, see BID9), then regularizing with respect to the prior (as TELBO does) can generate samples in parts of space not seen by the image decoder, which can potentially lead to less "correct" samples. Again, our empirical findings in Section 5 confirm this tradeoff between correctness and coverage implicit in the choice of TELBO vs. JMVAE. The SCAN method of BID7 first fits a standard β-VAE model (BID6) on unlabeled images (or rather, features derived from images using a pre-trained denoising autoencoder) by maximizing DISPLAYFORM2. This is very similar to JMVAE, since q φx (z|x) ≈ q φ (z|x, y), when (x, y) is a matching pair of images and labels. An important difference, however, is that SCAN treats the attribute vectors y as atomic symbols; this has the advantage that there is no need to handle missing inputs, but the disadvantage that they cannot infer the meaning of unseen attribute combinations at test time, unless they are "taught" them by having them paired with images. Also, they rely on β x > 1 as a way to get compositionality, assuming that a disentangled latent space will suffice. However, in Appendix A.3, we show that unsupervised learning of the latent space given images alone can yield poor results when some of the attributes in the compositional concept hierarchy are non-visual, such as the parity of an MNIST digit. Our approach always takes the labels into consideration when learning the latent space, permitting well-organized latent spaces even in the presence of non-visual concepts (c.f. the difference between PCA and LDA).
Handling missing inputs. Conditional generative models of images, of the form p(x|y), have problems with missing input attributes, as do inference networks q(z|y) for VAEs. BID10 uses MCMC to fit a latent Gaussian model, which can in principle handle missing data; however, he initializes the Markov chain with the posterior mode computed by an inference network, which cannot easily handle missing inputs. One approach we can use, if we have a joint model, is to estimate or impute the missing values, as follows: ŷ M = arg max y M p(y M |y O), where p(y M, y O) models dependencies between attributes. We can then sample images using p(x|ŷ). This approach was used in to handle the case where some of the pixels being passed into an inference network were not observed. However, conditioning on an imputed value will give different results from not conditioning on the missing inputs; only the latter will increase the posterior uncertainty in order to correctly represent less precise concepts with broader support. Gaussian embeddings. There are many papers that embed images and text into points in a vector space. However, we want to represent concepts of different levels of abstraction, and therefore want to map images and text to regions of latent space. There are some prior works that use Gaussian embeddings for words (BID30; BID2), sometimes in conjunction with images (BID17; BID22). Our method differs from these approaches in several ways. First, we maximize the likelihood of (x, y) pairs, whereas the above methods learn a Gaussian embedding using a contrastive loss. Second, our PoE formulation ensures that the covariance of the posterior q(z|y O) is adaptive to the data that we condition on. In particular, it becomes narrower as we observe more attributes (because the precision matrices sum up), which is a property not shared by other embedding methods. Abstraction and compositionality. BID36 represent the extension of a concept (described by a noun phrase) in terms of a set of images whose captions match the phrase. By contrast, we use a parametric probability distribution in a latent space that can generate new images. BID29 use order embeddings, where they explicitly learn subsumption-like relationships by learning a space that respects a partial order. In contrast, we reason about the generality of concepts via the uncertainty induced by their latent representation. There has been some work on compositionality in the language/vision literature (see e.g., BID13; BID1), but none of these papers use generative models, which is arguably a much more stringent test of whether a model has truly "understood" the meaning of the components which are being composed.

In this section, we fit the JVAE model to two different datasets (MNIST-A and CelebA), using the TELBO objective, as well as BiVCCA and JMVAE. We measure the quality of the resulting models using the 3 C's, and show that our method of handling missing data behaves in a qualitatively reasonable way. In this section, we report on the MNIST-A dataset. This is created by modifying the original MNIST dataset as follows. We first create a compositional concept hierarchy using 4 discrete attributes, corresponding to class label (10 values), location (4 values), orientation (3 values), and size (2 values). Thus there are 10 × 2 × 3 × 4 = 240 unique concepts in total. We then sample ∼ 290 example images of each concept, and create both an iid and a compositional split of the data. See Appendix A.2 for details. We train the JVAE model on this dataset using the TELBO, BiVCCA and JMVAE objectives. We use Adam (BID14) for optimization, with a learning rate of 0.0001, and a minibatch size of 64. We train all models for 250,000 steps (we generally found that the models do not tend to overfit in our experiments). Our models typically take around a day to train on NVIDIA Titan X GPUs. For the image models, p(x|z) and q(z|x), we use the DCGAN architecture from.
Our generated images are of size 64×64, as in. For the attribute models, p(y k |z) and q(z|y k), we use MLPs. For the joint inference network, q(z|x, y), we use a CNN combined with an MLP. We use d = 10 latent dimensions for all models. We choose the hyperparameters for each method so as to maximize JS-overall, which is an overall measure of correctness and coverage (see Section 3), on a validation set of attribute queries. See Appendix A.4 for further details on the model architectures. Evaluation. To measure correctness and coverage, we first train the observation classifier on the full iid dataset, where it reaches an accuracy of 91.18% for class label, 90.56% for scale, 92.23% for orientation, and 100% for location. Consequently, it is a reliable way to assess the quality of samples from various generative models (see Appendix A.5 for details). We then compute correctness and coverage on the iid dataset, and coverage on the comp dataset. Familiar concrete concepts. We start by assessing the quality of the models in the simplest setting, which is where the test concepts are fully specified (i.e., all attributes are known), and the concepts have been seen before in the training set (i.e., we are using the iid split). FIG3 shows the correctness scores for the three methods. (Since the test concepts are fully grounded, coverage is not well defined, since there are no missing attributes.) We see that TELBO has a correctness of 82.08%, which is close to that of JMVAE (85.15%); both methods significantly outperform BiVCCA (67.38%). To gain more insight, FIG1 shows some samples from each of these methods for a leaf concept chosen at random. We see that the images generated by BiVCCA are very blurry, for reasons we discussed in Section 4. Note that these blurry images are correctly detected by the attribute classifier. We also see that the JMVAE samples all look good (in this example). Most of the samples from TELBO are also good, although there is one error (correctly detected by the attribute classifier). (Figure caption: (a) Evaluation of different approaches on the test set. Higher numbers are better. We report the standard deviation across 5 splits of the test set. Qualitative results on MNIST-A for various queries. For refined/fully specified queries, we can see that both TELBO and JMVAE produce good correctness, i.e., the images produced follow the constraints placed by the specified attributes. When the attribute 'orientation' is unspecified, we see that TELBO produces upright and counter-clockwise digits, while JMVAE produces clockwise and upright digits. Finally, when we leave the digit unspecified (top), we see that TELBO appears to generate a more diverse set of digits while JMVAE produces 0 and 3.) (Footnote 3: We chose the value of µ = 0.7 based on maximizing the correctness score on the validation set. Nevertheless, this does not completely eliminate blurriness, as we can see.)
The samples from the TELBO seem to be more diverse, which is consistent with the numbers in FIG3.Compositionally novel concrete concepts. Finally we assess the quality of the models when the test concepts are fully specified, but have not been seen before (i.e., we are using the comp split). FIG3 shows some quantitative . We see that the correctness for TELBO and JMVAE has dropped from about 82% to about 75%, since this task is much harder, and requires "strong generalization". However, as before, we see that both TELBO and JMVAE outperform BiVCCA, which has a correctness of about 69%. See Appendix A.7 qualitative and more details. In this section, we report on the CelebA dataset BID16. In particular, we use the version that was used in BID18, which selects 18 visually distinctive attributes, and generate images of size 64×64; see Appendix A.8 for more details on the CelebA dataset and Appendix A.4 for details of the model architectures. Figure 5 shows some sample qualitative . On the top left, we show some images which were generated by the three methods given the concept shown in the left column. TELBO and JMVAE generate realistic and diverse images. That is, the generated images are generally of males, with mouth slightly open and smiling attributes present in the images. On the other hand, BiVCCA just generates the mean image. On the bottom left, we show what happens when we drop some attributes, thus specifying more abstract concepts. We see that when we drop the gender, we get a mixture of both male and female images for both TELBO and JMVAE. Going further, when we drop the "smiling" attribute, we see that the samples now comprise of people who are smiling as well as not smiling, and we see a mixture of genders in the samples. Further, while we see a greater diversity in the samples, we also notice a slight drop in image quality (presumably because none of the approaches has seen supervision with just 'abstract' concepts). See Appendix A.9 for more qualitative examples on CelebA. On the top right, we show some examples of visual imagination, where we ask the models to generate images from the concept "bald female", which does not occur in the training set. 4 (We omit the from BiVCCA, which are uniformly poor.) We see that both TELBO and JMVAE can sometimes do a fairly reasonable job (although these are admittedly cherry picked ). Finally, the bottom right illustrates an interesting bias in the dataset: if we ask the model to generate images where we do not specify the value of the eyeglasses attribute, nearly all of the samples fail to included glasses, since the prior probability of this attribute is rare (about 6%). In this section, we demonstrate initial which show that our imagination models can be used for concept naming, where the task is to assign a label to a set of images illustrating the concept depicted by the images. A similar problem has been studied in previous work such as BID27 and BID12. BID27 studies a set naming problem with integers (instead of images), and show that construct a likelihood function given a hypothesis set that can capture notions of the minimal/smallest hypothesis that explains the observed samples in the set. BID12 extend this approach to concept-naming on images, incorporating perceptual uncertainty (in recognizing the contents of an image) using a confusion matrix weighted likelihood term. 
While this approach first extracts labels for each image and then performs concept naming, here we test how well our generative model itself is able to generalize to concept naming without ever performing explicit classification on the images. • set "not male'' (female) and "bald" TELBO with eyeglasses=* p(eyeglasses) = 0.065 on train over 10 samples Figure 5: Sample CelebA . Left: we show the attributes specified to be present or absent when generating images. Middle: we show 10 samples each generated from TELBO, JMVAE and BiVCCA. We see that TELBO and JMVAE genreate better samples than BiVCCA which collapses to the mean. Middle, bottom: We show five samples from TELBO and JMVAE in response to queries with unspecified attributes, and see that both approaches generate a mix in the samples, generalizing meaningfully across unspecified attributes. In more detail, the problem setup in concept naming is as follows: we are given as input a set X of images, each of which corresponds to a concept in the compositional abstraction hierarchy Figure 1. The task is to assign a label y ∈ Y to the set of images. One of the key challenges in concept learning is to understand "how far" to generalize in the concept hierarchy given a limited number of positive examples BID27. That is, given a small set of images with 7 in the top-left corner and bottom-right corner, one must infer that the concept is "7" as opposed to "7, top-left". In other words, we wish to find the least common ancestor (in the concept hierarchy) corresponding to all the images in the set, given any number of images in the set, so that we can be consistent with the set. We consider two heuristic solutions to this problem:1. Concept-NB: In this approach we compute arg max y p(y|X), where p(y|X) is computed using the naive bayes assumption: DISPLAYFORM0 where p(y) is chosen to be uniform across all concepts, and the integrals are approximated using Monte Carlo.2. Concept-Latent: In this approach, instead of working in the observed space, we work in the latent space. That is, we pick arg min y KL(q(z|X)|q(z|y)), where q(z|X) is approximated using x∈X q(z|x), which is a mixture of gaussians. The KL divergence can be computed analytically by considering the first two moments of the gaussian mixture 5. We use the MNIST-A dataset for the concept naming studies. We consider the fully specified attribute labels in the MNIST-A hierarchy, and consider differrent patterns of missingness (corresponding to different nodes in the abstraction hirearchy) by dropping attributes. Specifically, we ignore the case where no attribute is specified, and consider a uniform distribution over the rest of the (2 4 − 1 = 15) patterns of missingness. Now, for each fully specified attribute pattern in the iid split of MNIST-A, we sample four missingness patterns and repeat across all fully specified attributes to form a bank of 960 candidate names that a model must choose. We randomly select three subsets of 100 candidate names (and the corresponding images) to form the query set for concept naming, namely tuples of (y, X). Specifically, given all the images in the eval set for a concept y, we form X using a randomly sampled subset of 5 images. We report the accuracy metric, measuring how often the selected concept for a set X matches the ground truth concept, across three different splits of 100 datapoints. 
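A minimal numpy sketch of the Concept-Latent heuristic follows: the per-image posteriors q(z|x) are moment-matched to a single Gaussian (footnote 5 below gives the mixture moments) and the candidate name minimizing KL(q(z|X) || q(z|y)) is returned. Diagonal covariances and all names are illustrative assumptions.

```python
import numpy as np

def moment_match(mus, variances):
    """First two moments of a uniform mixture of diagonal Gaussians q(z|x), x in X
    (the formulas referenced in footnote 5)."""
    mu = mus.mean(axis=0)
    var = (variances + mus ** 2).mean(axis=0) - mu ** 2
    return mu, var

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def concept_latent_name(image_posteriors, candidate_posteriors):
    """image_posteriors: (mus, variances), stacked over the images in the set X.
    candidate_posteriors: {name y: (mu, var) of q(z|y)}.
    Returns argmin_y KL( moment-matched q(z|X) || q(z|y) )."""
    mu_x, var_x = moment_match(*image_posteriors)
    scores = {y: kl_diag_gaussians(mu_x, var_x, mu_y, var_y)
              for y, (mu_y, var_y) in candidate_posteriors.items()}
    return min(scores, key=scores.get)
```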
5 Given a Gaussian mixture of the form g(x) = i πif (x; µi, σi), where f is the pdf for the Gaussian distribution, the first order moment, that is, the mean of g(x) is given by: i πiµi. The variance is given by: Table 1: Accuracy of Imagination models on Concept Naming. Higher is better. Figure 6: A qualitative illustration of some of the examples from concept naming models. Top-left: an example of a sample that is correctly named by a Concept-NB model. However, the Concept-NB model is not that strong and often gets simple concepts such as digits incorrect, making mistakes between 6 and 0, for example (bottom-left). This is likely because the only way in which the Concept-NB approach reasons about the set is not via a "meaningful" low dimensional latent variable but via a sampling distribution on a high dimensional space of images. The Concept-Latent model is able to do better on the same set of images, and classify the set as the concept "6". Finally, we show a failure case of the model where it incorrectly classifies the digits as being large (there is a small digit in the set), and ignores the fact that all of the digits are in the top-left. DISPLAYFORM0 We evaluate the best versions of TELBO, JMVAE, and BiVCCA on the iid split of MNIST-A for concept naming (Table 1). In general, we find that Concept-NB approaches perform significantly worse than Concept-Latent approaches. For example, the best Concept-NB approach (using TELBO/BiVCCA objective) gets to an accuracy of around 18%, while Concept-Latent using JMVAE gets to 54.66 ± 4.92%. In general, these numbers are better than a random chance baseline which would get to 0.28% (picking one of 348 effective options, after collating the 960 candidate names based on missingness patterns), while picking the most frequent (ground truth) fully-specified y depicted across an image set gets to 6.33 ± 1.88%. Figure 6 shows some qualitative examples from Concept-NB as well as Concept-Latent models for concept / set classification. We observe that the Concept-Latent models are much more powerful than using Concept-NB in terms of naming the concept based on few positive examples from the support set. We have shown how to create generative models which can "imagine" compositionally novel concrete and abstract visual concepts. In the future we would like to explore richer forms of description, beyond attribute vectors, such as natural language text, as well as compositional descriptions of scenes, which will require dealing with a variable number of objects. The JMVAE objective of BID26 has the form J(x, y, θ, φ) = elbo(x, y, θ, φ) − α KL(q φ (z|x, y), q φ y (z|y)) + KL(q φ (z|x, y), q φ x (z|x))Let us focus on the KL(q φ (z|x, y)|q φ y (z|y)) term. Let Y be the set of unique labels (attribute vectors) in the training set, X i be the indices of the images associated with label y i, and let N i = |X i | be the size of that set. Then we can write DISPLAYFORM0 As explained in BID9, we can rewrite this by treating the index n ∈ {1, · · ·, N i} as a random variable, with prior q(n|y i) = 1/N i. Also, let us define the likelihood q(z|n, y i) = q φ (z|x n, y i). 
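Written out, the averaged KL referenced above and its decomposition stated in the next paragraph (whose display equations did not survive extraction) take the following standard form; this is a reconstruction consistent with the surrounding text, not a verbatim quote:

```latex
\frac{1}{N_i}\sum_{n \in X_i}
  \mathrm{KL}\!\left(q_\phi(z \mid x_n, y_i)\,\middle\|\,q_{\phi_y}(z \mid y_i)\right)
=
\mathrm{KL}\!\left(q^{\mathrm{avg}}_\phi(z \mid y_i)\,\middle\|\,q_{\phi_y}(z \mid y_i)\right)
+ \mathbb{E}_{q^{\mathrm{avg}}_\phi(z \mid y_i)}
  \!\left[\mathrm{KL}\!\left(q(n \mid z, y_i)\,\middle\|\,q(n \mid y_i)\right)\right],
\qquad
q^{\mathrm{avg}}_\phi(z \mid y_i) = \frac{1}{N_i}\sum_{n \in X_i} q_\phi(z \mid x_n, y_i).
```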
Using this notation, we can show that the above average KL becomes DISPLAYFORM1 where q DISPLAYFORM2 is the average of the posteriors for that concept, and q(n|z, y i) is the posterior over the indices for all the possible examples from the set X i, given that the latent code is z and the description is y i.The KL(q avg φ (z|y i) |q φ y (z|y i)) term in Equation tells us that JMVAE encourages the inference network for descriptions, q φ y (z|y i), to be close to the average of the posteriors induced by each of the images x n associated with y i. Since each q φ (z|x n, y i) is close to a delta function (since there is little posterior uncertainty when conditioning on an image), we are essentially requiring that q φ (z|y i) cover the embeddings of each of these images. We created the MNIST-A dataset as follows. Given an image in the original MNIST dataset, we first sample a discrete scale label (big or small), an orientation label (clockwise, upright, and anticlockwise), and a location label (top-left, top-right, bottom-left, bottom-right).Next, we converted this vector of discrete attributes into a vector of continuous transformation parameters, using the procedure described below:• Scale: For big, we sample scale values from a Gaussian centered at 0.9 with a standard deviation of 0.1, while for small we sample from a Gaussian centered at 0.6 with a standard deviation of 0.1. In all cases, we reject and draw a sample again if we get values outside the range [0.4, 1.0], to avoid artifacts from upsampling or problems with illegible (small) digits.• Orientation: For the clockwise label, we sample the amount of rotation to apply for a digit from a Gaussian centered at +45 degrees, with a standard deviation of 10 degrees. For anti-clockwise, we use a Gaussian at -45 degrees, with a standard deviation of 10 degrees. For upright, we set the rotation to be 0 degrees always.• Location: For location, we place Gaussians at the centers of the four quadrants in the image, and then apply an offset of image_size/16 to shift the centers a bit towards the corresponding corners. We then use a standard deviation of image_size/16 and sample locations for centers of the digits. We reject and draw the sample again if we find that the location for the center would place the extremities of the digit outside of the canvas. Finally, we generate the image as follows. We first take an empty black canvas of size 64x64, rotate the original 28x28 MNIST image, and then scale and translate the image and paste it on the canvas. (We use bicubic interpolation for scaling and resizing the images.) Finally, we use the method of BID23 to binarize the images. See FIG5 for example images generated in this way. We repeat the above process of sampling labels, and applying corresponding transformations, to generate images 10 times for each image in the original MNIST dataset. Each trial samples labels from a uniform categorical distribution over the sample space for the corresponding attribute. Thus, we get a new MNIST-A dataset with 700,000 images from the original MNIST dataset of 70,000 images. We split the images into a train, val and test set of 85%, 5%, and 10% of the data respectively to create the IID split. To create the compositional split, we split the 10x2x3x4=240 possible label combinations by the sample train/val/test split, giving us splits of the dataset with non-overlapping label combinations. A.3 β-VAE vs. 
JOINT VAE (a) (b) Figure 8: Visualization of the benefit of semantic annotations for learning a good latent space. Each small digit is a single sample generated from p(x|z) from the corresponding point z in latent space. (a) β-VAE fit to images without annotations. The color of a point z is inferred from looking at the attributes of the training image that maps to this point of space using q(z|x). Note that the red region (corresponding to the concept of large and even digits) is almost non existent. (b) Joint-VAE fit to images with annotations. The color of a point z is inferred from p(y|z).β- VAE Higgins et al. (2017a) is an approach that aims to learn disentangled latent spaces. It does this by modifying the ELBO objective, so that it scales the KL(q(z|x), p(z)) term by a factor β > 1. This gives rise to disentangled spaces since the prior p(z) = N (z|0, I) is factorized (see BID0 for details). However, to learn latent spaces that correspond to high level concepts, this is not sufficient: we need to use labeled data as well. To illustrate this, we set up an experiment where we learn a 2d latent space for standard MNIST digit images, but where we replace the label with two binary attributes: parity (odd vs.even) and magnitude (value < 5 or >= 5). We call this dataset MNIST-2bit. In Figure 8 (a), we show the of fitting a 2d β-VAE model BID6 to the images in MNIST-2bit, ignoring the attributes. We perform a hyperparameter sweep over β, and pick the one that gives the best looking latent space (this corresponds to a value of β = 10). At each point z in the latent 2d space, we show a single image sampled from p(x|z). To derive the colors for each point in latent space, we proceed as follows: we embed each training image x (with label y(x)) into latent space, by computingẑ(x) = E q(z|x) [z]. We then associate label y(x) with this point in space. To derive the label for an arbitrary point z, we lookup the closest embedded training image (using 2 distance in z space), and use its corresponding label. We see that the latent space is useful for autoencoding (since the generated images look good), but it does not capture the relevant semantic properties of parity and magnitude. In fact, we argue that there is no way of forcing the model to learn a latent space that captures such high level conceptual properties from images alone. In Figure 8 (b), we show the of fitting a joint VAE model to MNIST-2bit, by optimizing elbo(x, y) on images and attributes (i.e., we do not include the uni-modality elbo(x) and elbo(y) terms in this experiment.) Now the color codes are derived from p(y|z) rather than using nearest neighbor retrieval. We see that the latent space autoencodes well, and also captures the 4 relevant types of concepts. In particular, the regions are all convex and linearly seperable, which facilitates the learning of a good imagination function q(z|y), interpolation, retrieval, and other latent-space tasks. A skeptic might complain that we have created an arbitrary partitioning of the data, that is unrelated to the appearance of the objects, and that learning such concepts is therefore "unnatural". But consider an agent interacting with an environment by touching digits on a screen. Suppose the amount of reward they get depends on whether the digit that they touch is small or big, or odd or even. 
In such an environment, it would be very useful for the agent to structure its internal representation to capture the concepts of magnitude and parity, rather than in terms of low level visual similarity. (In fact, BID25 showed that pigeons can learn simple numerical concepts, such as magnitude, by rewarding them for doing exactly this!) Language can be considered as the realization of such concepts, which enables agents to share useful information about their common environments more easily. As explained in the main paper, we fit the joint graphical model p(x, y, z) = p(z)p(x|z)p(y|z) with inference networks q(z|x, y), q(z|x), and q(z|y). Thus, our overall model is made up of three encoders (denoted with q) and two decoders (denoted with p). Across all models we use the exponential linear unit (ELU) which is a leaky non-linearity often used to train VAEs. We explain the architectures in more detail below. • Image decoder, p(x|z): Our architecture for the image decoder exactly follows the standard DCGAN architecture from, where the input to the model is the latent state of the VAE.• Label decoder, p(y|z): Our label decoder assumes a factorized output space p(y|z) = k∈A p(y k |z), where y k is each individual attribute. We parameterize each p(y k |z) with a two-layer MLP with 128 hidden units each. We apply a small amount of 2 regularization to the weight matrices.• Image and Label encoder, q(z|x, y): Our architecture FIG6 ) for the image-label encoder first separately processes the images and the labels, and then concatenates them downstream in the network and then passes the concatenated features through a multi-layered perceptron. More specifically, we have convolutional layers which process image into 32, 64, 128, 16 feature maps with strides 1, 2, 2, 2 in the corresponding layers. We use batch normalization in the convolutional layers before applying the ELU non-linearity. On the label encoder side, we first encode the each attribute label into a 32d continuous vector and then pass each individual attribute vector through a 2-layered MLP with 512 hidden dimensions each. For example, for MNIST-A we have 4 attributes, which gives us 4 vectors of 512d. We then concatenate these vectors and pass it through a two layer MLP. Finally we concatenate this label feature with the image feature after the convolutional layers (after flattening the conv-features) and then pass the through a 2 layer MLP to predict the mean (µ) and standard deviation (σ) for the latent space gaussian. Following standard practice, we predict log σ for the standard deviation in order to get values which are positive. flatten FORMULA1 concat (• Image encoder, q(z|x): The image encoder FIG9 ) uses the same architecture to process the image as the image feature extractor in q(z|x, y) network described above. After the conv-features, we pass the through a 3-layer MLP to get the latent state mean and standard deviation vectors following the procedure described above.• Label encoder, q(z|y): The label encoder FIG9 ) part of the architecture uses the same design choices to process the labels as the label encoder part in the q(z|x, y) network. After obtaining the concatenated label feature vectors, we pass the through a 4-layered MLP with 512 hidden dimensions each and then finally obtain the mean (µ) and log σ values for each dimension in the latent state of the VAE. We next describe the architecuture of the observation classifier we use for evaluating the 3C's on the MNIST-A dataset. 
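Before turning to the observation classifier, the factorized attribute decoder p(y|z) = Π_k p(y_k|z) described above can be sketched as follows, with one small MLP head of 128 hidden units per attribute. The PyTorch phrasing, head depth, and ELU placement are illustrative assumptions rather than the original implementation.

```python
import torch
from torch import nn

class FactorizedLabelDecoder(nn.Module):
    """p(y|z) = prod_k p(y_k|z): one small MLP head per attribute. Hidden widths
    follow the text (128 units); the exact head structure is an assumption."""

    def __init__(self, latent_dim, attribute_sizes):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ELU(),
                nn.Linear(128, 128), nn.ELU(),
                nn.Linear(128, n_values),
            )
            for n_values in attribute_sizes
        ])

    def forward(self, z):
        # One logit vector per attribute; p(y_k|z) is the softmax of head k.
        return [head(z) for head in self.heads]

# MNIST-A: class (10), scale (2), orientation (3), location (4); d = 10 latents.
decoder = FactorizedLabelDecoder(latent_dim=10, attribute_sizes=[10, 2, 3, 4])
logits = decoder(torch.randn(8, 10))
```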
The observation classifier is a convolutional neural network, with the first convolutional layer with filters of size 5×5, and 32 channels, followed by a 2×2 pooling layer applied with a stride of 2. This is followed by another convolutional layer with 5×5 filter size and 64 output channels. This is followed by another 2×2 pooling layer of stride 2. After this, the network has four heads (corresponding to each attribute), each of which is an MLP with a single hidden layer (of size 1024), with dropout applied to the activations. The final layer of the MLP outputs the logits for classifying each attribute into the corresponding categorical labels associated with it. We train this model from scratch on the MNIST-A dataset using stochastic gradient descent, batch size of 64 and a learning rate of 10 −4.CelebA model architecture Our design choices for CelebA closely mirror the models we built for MNIST-A. One primary difference is that we use a latent dimensionality of 18 in our CelebA experiments which matches the number of attributes we model. Meanwhile, the architectures of the image encoder, image decoder (i.e. DCGAN), are exactly identical to what is described above for MNIST-A execept that encoders take as input a 3-channel RGB image, while decoders produce a 3-channel output. We replace the Bernoulli likelihood with Quantized Normal likelihood (which is basically gaussian likelihood with uniform noise).In terms of the label encoder q(z|y), we follow FIG9 quite closely, except that we get as input 18 categorical (embedded) class labels as input, and we process the labels through a single hidden layer before concatenation and two hidden layers post concatenation (as opposed to two and four used in FIG9).Finally, the joint encoder q(z|x, y), is again based heavily on FIG6 where we feed as input 18 labels as opposed to 4, process them through a single layer mlp of 512d, concatenate them, and then pass the through a two hidden layer mlp of 512 d. At this point we concatenate the with the image feature through the image feature head in FIG6. Finally, we process the feature through another 512d single hidden layer mlp to produce the µ, σ values. A.5 OUTPUTS OF OBSERVATION CLASSIFIER ON GENERATED IMAGES Figure 11 shows some images sampled from our TELBO model trained on MNIST-A. It also shows the attributes that are predicted by the attribute classifier. We see that the classifier often produces reasonable that we as humans would also agree with. Thus, it acts as a reasonable proxy for humans classifying the labels for the generated images. A.6 HYPERPARAMTER CHOICES FOR TELBO, JMVAE, BIVCCA ON MNIST-AWe discuss more hyperparameter choices for the different objectives and how they impact performance on the MNIST-A dataset. Across all the objectives we set λ x =1, and vary λ y. In addition, we also discuss how the private hyperparamter choices for each loss, γ for TELBO, α for JMVAE, as in BID31 ) and µ for BiVCCA affect performance. We use the JS-overall metric for picking hyperparameters, as explained in the main paper. Observation classifier classifications on generated images across randomly sampled queries for triple ELBO Figure 11: Randomly sampled images from the TELBO model when fed randomly sampled concepts from the iid training set. We also show the outputs of the observation classifier for the images. Note that we visualize mean images above (since they tend to be more human interpretable) but the classifier is fed samples from the model. Figure best viewed by zooming in. 
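A sketch of the MNIST-A observation classifier described above: a 5×5 convolution with 32 channels, 2×2 stride-2 pooling, a 5×5 convolution with 64 channels, another pooling layer, and four single-hidden-layer MLP heads of 1024 units with dropout. Padding, activations, and the dropout rate are unstated in the text and are assumptions here.

```python
import torch
from torch import nn

class ObservationClassifier(nn.Module):
    """Attribute classifier used to score generated MNIST-A images.
    Layer sizes follow the text; padding, activations, and dropout rate are assumptions."""

    def __init__(self, attribute_sizes=(10, 2, 3, 4), image_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2, stride=2),
        )
        feat_dim = 64 * (image_size // 4) * (image_size // 4)
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Flatten(),
                nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(1024, n_values),
            )
            for n_values in attribute_sizes
        ])

    def forward(self, x):
        h = self.features(x)
        # One logit vector per attribute (class, scale, orientation, location).
        return [head(h) for head in self.heads]

logits = ObservationClassifier()(torch.zeros(2, 1, 64, 64))
```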
Query: 6, small, clockwise, bottom-right TELBO JMVAE BiVCCA Figure 12: Compositional generalization on MNIST-A. Models are given the unseen compositional query shown at the top and each of the three columns shows the mean of the image distribution generated by the models. Images marked with a red box are those that the observation classifier detected as being incorrect. We also show the classification from the observation classifier on top of each image. We see that TELBO and JMVAE both do really well, while BiVCCA is substantially poorer.1. Effect of λ y: We search for λ y values in the set {1, 50, 100} for all objectives. In general, we find the setting of λ y in the elbo terms to be critical for good performance (especially on correctness). For example, at λ y =1, we find that correctness numbers for the best performing TELBO model drop to 60.47 (± 0.34) (from 82.08 (± 0.56) at λ y =50) on the validation set for iid queries. Similar trends can be observed for the JMVAE and BiVCCA objectives as well (with λ y =10 being the best setting for BiVCCA, λ y =50 for JMVAE). We have seen qualitative evidence which shows that the likelihood scaling for λ y affects how disentangled the latent space is along the specified attributes. When the latent space is not grouped or organized as per high-level attributes (see Figure 8 for example), the posterior distribution for a given concept is multimodal, which is hard for a gaussian inference network q(z|y) to capture. This leads to poor correctness values. In addition to the λ y scaling term which is common across all objectives, TELBO has a γ scaling factor which controls how we scale the log p(y|z) term in the elbo γ,1 (y, θ y, φ y) term. We sweep values of {1, 50, 100} for this parameter. In general, we find that the effect of this term is smaller on the performance than the λ y term. Based on the setting of this parameter, we find that, for example, the correctness values for fully specified queries change from 82.08 (±0.56) at γ=50 to 80.27 (±0.38) at γ=1 on validation set for iid queries.3. Effect of α: We generally find that α=1.0 works best for JMVAE across the different choices explored in BID31, namely, {0.01, 0.1, 1.0}. For example, decreasing the value of α to 0.1 or 0.01 reduces correctness for fully sepcified queries from 85.63 (±0.29) to 77.58 (±0.23) at 0.1 and 74.57 (±0.44) at 0.01 respectively on the validation set for iid queries.4. Effect of µ: For BiVCCA, we ran a search for µ over {0.3, 0.5, 0.7}, running each training experiment four times, and picked the best hyperparameter choice across the runs. We found that µ=0.7 was the best value, however the performance difference across different choices was not very large. Intuitively, higher values of µ should lead to improved performance compared to lower values of µ. This is because lower values of µ mean that we put more weight on the elbo term with a q(z|x) inference network than the one with a q(z|y) inference network, which in sharper samples. We next show some examples of compositional generalization on MNIST-A on a validation set of queries. For the compositinal experiments we reused the parameters of the best models on the iid splits for all the models, and trained the models for ∼ 160K iterations. All other design choices were the same. Figure 12 shows some qualitative . FIG1: Set of all 9 images labelled as bald=1 and male=0 in the CelebA dataset. We can see that in all the cases the labels are inaccurate for the image, probably due to annotator error. 
FIG3: TELBO creates more diverse images than JMVAE. At the top we show the set of attributes which are present and absent in the input query. Below, we show the of generation with all the attributes specified, drawing 10 samples each. We see that both TELBO and JMVAE create accurate images satisfying the constraints. Note that the concept "male" is set to "absent" in the query, which in CelebA means that "female" is present. Next, we unspecify whether the image should contain a male or a female. We see that in this setting, TELBO has a better mixing of male and female images (fourth, sixth, eighth and ninth images in the third row are male), than JMVAE which just produces a single male image (the ninth image in the fourth row).A.8 DETAILS ON CELEBA CelebA consists of 202,599 face colored images and 40 attribute binary vectors. We use the version of this dataset that was used in BID18; this uses a subset of 18 visually distinctive attributes, and preprocesses each image so they are aligned, cropped, and scaled down to 64 x 64. We use the official train and test partitions, 182K for training and 20K for testing. Note that this is an iid split, so the attribute vectors in the test set all occur in the training set, even though the images and people are unique. In total, the original dataset with 40 attributes specified a set of 96486 unique visual concepts, while our dataset of 18 attributes spans 3690 different visual concepts. In Section 5.2, we claim that our generations of "Bald" and "Female" images are from a compositionally novel concept. Our claim comes with a minor caveat/clarification: the concept bald=1 and male=0 does occur in 9 training examples, but they are all incorrect labelings, as shown in FIG1! Further, we see that the images generated from our model (shown in Figure 5) are qualitatively very different from any of the images here, showing that the model has not memorized these examples. A.9 MORE ON CELEBA Finally, we show further qualitative examples of performance on the CelebA dataset. We focus on the TELBO and JMVAE objectives here, since BiVCCA generally produces poor samples (see Figure 5). FIG3 (middle) shows some example generations for the concept specified by the attributes (top). We see that both TELBO and JMVAE produce correct images when provided the full attribute queries (first two rows). However, when we stop specifying attribute "male" or "not male" (female), we see that TELBO provides more diverse samples, spanning both male and female (compared to JMVAE). This ties into the explanation in Appendix A.1, where we show how one can interpret JMVAE as optimizing for the KL(q avg φ (z|y i)|q φ y (z|y i)) to fit the unimodal inference network q φ y (z|y i). Since JMVAE only reasons about the "aggregate" posterior as opposed to the prior (which TELBO reasons about), it has the tendency to generate less diverse samples when shown unseen concepts. | A VAE-variant which can create diverse images corresponding to novel concrete or abstract "concepts" described using attribute vectors. | 1,001 | scitldr |
We introduce "Search with Amortized Value Estimates" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS). In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values. The new Q-estimates are then used in combination with real experience to update the prior. This effectively amortizes the value computation performed by MCTS, ing in a cooperative relationship between model-free learning and model-based search. SAVE can be implemented on top of any Q-learning agent with access to a model, which we demonstrate by incorporating it into agents that perform challenging physical reasoning tasks and Atari. SAVE consistently achieves higher rewards with fewer training steps, and---in contrast to typical model-based search approaches---yields strong performance with very small search budgets. By combining real experience with information computed during search, SAVE demonstrates that it is possible to improve on both the performance of model-free learning and the computational cost of planning. Model-based methods have been at the heart of reinforcement learning (RL) since its inception , and have recently seen a resurgence in the era of deep learning, with powerful function approximators inspiring a variety of effective new approaches; ). Despite the success of model-free RL in reaching state-of-the-art performance in challenging domains (e.g. ;), model-based methods hold the promise of allowing agents to more flexibly adapt to new situations and efficiently reason about what will happen to avoid potentially bad outcomes. The two key components of any such system are the model, which captures the dynamics of the world, and the planning algorithm, which chooses what computations to perform with the model in order to produce a decision or action . Much recent work on model-based RL places an emphasis on model learning rather than planning, typically using generic off-the-shelf planners like Monte-Carlo rollouts or search (see ; for recent surveys). Yet, with most generic planners, even a perfect model of the world may require large amounts of computation to be effective in high-dimensional, sparse reward settings. For example, recent methods which use Monte-Carlo Tree Search (MCTS) require 100s or 1000s of model evaluations per action during training, and even upwards of a million simulations per time step at test time (; . These large search budgets are required, in part, because much of the computation performed during planning-such as the estimation of action values-is coarsely summarized in behavioral traces such as visit counts (;, or discarded entirely after an action is selected . However, large search budgets are a luxury that is not always available: many real-world simulators are expensive and may only be feasible to query a handful of times. In this paper, we explore preserving the value estimates that were computed by search by amortizing them via a neural network and then using this network to guide future search, ing in an approach which works well even with very small search budgets. We propose a new method called "Search with Amortized Value Estimates" (SAVE) which uses a combination of real experience as well as the of past searches to improve overall performance and reduce planning cost. During training, SAVE uses MCTS to estimate the Q-values at encountered states. 
These Q-values are used along with real experience to fit a Q-function, thus amortizing the computation required to estimate values during search. The Q-function is then used as a prior for subsequent searches, ing in a symbiotic relationship between model-free learning and MCTS. At test time, SAVE uses MCTS guided by the learned prior to produce effective behavior, even with very small search budgets and in environments with tens of thousands of possible actions per state-settings which are very challenging for traditional planners. Unifying the complementary approaches of learning and search has been of interest to the RL and planning communities for many years (e.g. ; ; ;). SAVE is motivated in particular by two threads in this body of work: one which uses planning in-the-loop to produce experience for Q-learning, and one which learns a policy prior for guiding search. As we will describe next, both of these previous approaches can suffer from issues with training stability which are alleviated by SAVE by simultaneously using MCTS to strengthen an action-value function, and Q-learning to strengthen MCTS. A number of methods have explored learning from planned actions. trained a model-free policy to imitate the actions produced by an MCTS agent. Other methods use planning in-the-loop to recommend actions, which are then executed in the environment to gather experience for model-free learning (; ; ; ; ; ;). However, problems can arise when learning with actions that were produced via planning, even with off-policy algorithms like Q-learning. As noted by both and , planning avoids suboptimal actions, ing in a highly biased action distribution consisting of mostly good actions; information about suboptimal actions therefore does not get propagated back to the Q-function. As an example, consider the case where a Q-function recommends taking action a. During planning, this action is explored and is found to yield lower reward than expected. The planner will end up recommending some other action a, which is executed in the environment and later used to update the Q-function. However, this means that the original action a is never actually experienced and thus is never downweighed in the Q-function, ing in poorly approximated Q-values. One way to deal with this problem is to use a mixture of both on-policy and planned actions . However, this throws away information about poor actions which is acquired during the planning process. In SAVE, we instead make use of this information by using the values estimated during search to help fit the Q-function. If the search finds that a particular action is worse than previously thought, this information will be reflected by the estimated values and will thus ultimately get propagated back to the Q-function. We explicitly test and confirm this hypothesis in Section 4.2. Much research has leveraged prior knowledge in the context of MCTS (; 2011; ; ; b; ; . Some of the most successful methods use a prior policy to guide search, the of which are used to further improve the policy. However, such methods use information about past behavior to learn a policy prior-namely, the visit counts of actions during search-and discard other search information such as inferred Q-values. We might anticipate one potential failure mode of such "count-based policy learning" approaches. Consider an environment with sparse rewards, where most actions are highly suboptimal. 
In the limit of infinite search, actions which have the highest value will be visited most frequently, resulting in a policy that guides search towards regions of high value. However, in the regime of small search budgets, the search may very well end up exploring mostly suboptimal actions. These actions have higher visit counts, and so are reinforced, leading to the agent being more likely to explore poor actions. Rather than implicitly biasing search towards value through the use of visit counts, SAVE relies on a prior that explicitly encodes knowledge about value. If SAVE ends up searching poor actions, it will learn that they have low values and this knowledge will be reflected in future searches. Thus, in contrast to count-based approaches, a SAVE agent will be less likely to visit poor actions in the future despite having frequently visited them in the past. We explicitly test and confirm this hypothesis in Section 4.1. Finding effective ways of combining model-based and model-free experience has been of interest to the RL community for decades. Most famously, the Dyna algorithm proposes using real experience to learn a model and then using the model to train a model-free policy. A number of more recent works have explored how to incorporate this idea into deep architectures, with an emphasis on dealing with the errors that are introduced by approximate models. In these approaches, the policy or value function is typically trained using on-policy rollouts from the model without using additional planning. Another way to combine model-free and model-based approaches is "implicit planning", in which the computation of a planner is built into the architecture of a neural network itself. While SAVE is not an implicit planning method, it shares similarities with such methods in that it also tightly integrates planning and learning.
Figure 1: Illustration of SAVE. When acting, the agent uses a Q-function, Q θ, as a prior for the Q-values estimated during MCTS. Over K steps of search, Q 0 ≡ Q θ is built up to Q K, which is returned as Q MCTS (Equations 1 and 4). From Q MCTS, an action a is selected via epsilon-greedy and the resulting experience (s, a, r, s′, Q MCTS) is added to a replay buffer. When learning, the agent uses real experience to update Q θ via Q-learning (L Q) as well as an amortization loss (L A) which regresses Q θ towards the Q-values estimated during search (Equation 6).
SAVE features two main components (Figure 1). First, we use a search policy that incorporates the Q-function Q θ (s, a) as a prior over Q-values that are estimated during search. Second, to train the Q-function we rely on an objective function that combines the TD-error from Q-learning with an amortization loss that amortizes the value computation performed by the search. The amortization loss, combined with the prior over Q-values, thus enables future searches to build on previous ones, resulting in stronger search performance overall. Before explaining how SAVE leverages search, we briefly describe the standard MCTS algorithm (Kocsis & Szepesvári, 2006). While we focus here on the single-player setting, we note that the formulation of MCTS (and by extension, SAVE) is similar for two-player settings.
MCTS uses a simulator or model of the environment to explore possible future states and actions, with the aim of finding a good action to execute from the current state, s 0. In MCTS, we assume access to a budget of K iterations (or simulations). The k th iteration of MCTS consists of three phases: selection, expan-sion, and backup. In the selection phase, we expand a search tree beginning with the current state and taking actions according to a search policy: where Q k is the currently estimated value of taking action a while in state s, which will be explained further below. U k (s, a) is the UCT exploration term: where N k (s, a) is the number of times we have explored taking action a from state s and c UCT is a constant that encourages exploration. This selection procedure is repeated for T − 1 times, until a new action a T −1 that had not previously been explored is chosen from state s T −1. This begins the expansion phase, during which a T −1 is executed in the simulator, ing in a reward r T −1 and new state s T. The new state s T is added to the search tree, and its value V (s T) is estimated either via a state-value function or (more traditionally) via a Monte-Carlo rollout. At this point the backup phase begins, during which the value of s T is used to update (or "back up") the values of its parent states earlier in the tree. Specifically, for state s t, the i th backed up return is estimated as: where γ is the discount factor and r j was the reward obtained after executing a j in s j when traversing the search tree. These backups are then used to estimate the Q-function in Equation 1 as SAVE makes several changes to the standard MCTS procedure. First, it assumes it has visited every state and action pair once by initializing N (s, a) = 1 for all states and actions. 1 Second, for each of these state-action pairs, it assumes a prior estimate of its value, Q θ (s, a), and uses this as an initial estimate for Q k, similar to Gelly & Silver (2007; 2011): where Q 0 (s, a):= Q θ (s, a). Third, rather than using a separate state-value function or Monte-Carlo rollouts to estimate the value of new states, SAVE uses the same state-action value function, i.e. V (s):= max a Q θ (s, a). These three changes provide a mechanism for incorporating Q-based prior knowledge into MCTS: specifically, SAVE acts as if it has visited every state-action pair once, with the estimated values being given by Q θ. Roughly speaking, this can be interpreted as using MCTS to perform Bayesian inference over Q-values, with the prior specified by Q θ with a weight equivalent to a pseudocount of one. This set of changes contrasts with UCT, which does not incorporate prior knowledge, as well as PUCT (; a;, which incorporates prior knowledge via a policy in the exploration term U k (s, a). After K iterations, we return Q MCTS (s, a):= Q K (s, a) and select an action to execute in the environment via epsilon-greedy over Q MCTS (s 0, a). After the action is executed, we store the ing experience along with a copy of Q MCTS (s 0, ·) ≡ {Q MCTS (s 0, a i)} i in the replay buffer. This process is illustrated in Figure 1 (left). During learning, the of the search are amortized into an updated prior Q θ (Figure 1, right). We impose an amortization loss L A which encourages the distribution of Q-values output by the neural network to be similar to those estimated by MCTS. The amortization loss is defined to be the cross-entropy between the softmax of the Q-values before (Q θ) and after (Q MCTS) MCTS. 
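Putting Equations 1 through 4 together, the following is a compact sketch of one SAVE-style search from a root state: counts start at one, the prior Q_θ initializes the value estimates, actions are selected by Q_k plus the UCT bonus, new leaves are bootstrapped with max_a Q_θ(s', a), and backed-up returns are averaged into Q_K, which is returned as Q_MCTS. The model interface, keying statistics by state rather than by search path, and all names are simplifying assumptions rather than the original code.

```python
import numpy as np
from collections import defaultdict

def save_search(root, q_theta, model, num_actions, budget=10,
                gamma=0.99, c_uct=2.0, max_depth=50):
    """Sketch of one SAVE-style search (Equations 1-4), not the original code.

    q_theta(s) -> array of prior Q-values for state s;
    model.step(s, a) -> (next_state, reward, done). Statistics are keyed by
    state (rather than by search path) for brevity, and states are assumed
    hashable.
    """
    N = defaultdict(lambda: np.ones(num_actions))        # visit counts, init to 1
    W = {root: np.asarray(q_theta(root), dtype=float)}   # summed returns, init to prior

    def uct_bonus(s):
        return c_uct * np.sqrt(np.log(N[s].sum()) / N[s])

    for _ in range(budget):
        s, path, done, depth = root, [], False, 0
        # Selection: walk down states that are already in the tree.
        while not done and s in W and depth < max_depth:
            a = int(np.argmax(W[s] / N[s] + uct_bonus(s)))     # Equation 1
            next_s, r, done = model.step(s, a)
            path.append((s, a, r))
            s, depth = next_s, depth + 1
        # Expansion / evaluation: bootstrap new leaves with V(s) = max_a Q_theta(s, a).
        value = 0.0 if done else float(np.max(q_theta(s)))
        if not done and s not in W:
            W[s] = np.asarray(q_theta(s), dtype=float)
        # Backup: accumulate discounted returns along the path (Equations 3-4).
        for s_t, a_t, r_t in reversed(path):
            value = r_t + gamma * value
            N[s_t][a_t] += 1.0
            W[s_t][a_t] += value
    return W[root] / N[root]                              # Q_MCTS(root, .)
```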
This cross-entropy loss achieves better performance than alternatives like L2, as described in Section 4.2. ), where τ = 1 is the softmax temperature, the loss is defined as: where D is a batch of N experience tuples (s t, a t, r t, s t+1, Q MCTS (s t, ·)) sampled from the replay buffer. This amortization loss is linearly combined with a Q-learning loss, where β Q and β A are coefficients to scale the loss terms. L Q may be any value-based loss function, such as that based on 1-step TD targets, n-step TD targets, or λ-returns . The amortization loss does make SAVE more sensitive to off-policy experience, as the values of Q MCTS stored in the replay buffer will become less useful and potentially misleading as Q θ improves; however, we did not find this to be an issue in practice. We evaluated SAVE in four distinct settings that vary in their branching factor, sparsity of rewards, and episode length. First, we demonstrate through a new Tightrope environment that SAVE performs well in settings where count-based policy approaches struggle, as discussed in Section 2.2. Next, we show that SAVE scales to the challenging Construction domain and that it alleviates the problem with off-policy actions discussed in Section 2.1. We also perform several ablations to tease apart the details of SAVE. Finally, we demonstrate that SAVE dramatically improves over Q-learning in a new and even more difficult construction task called Marble Run, as well as in more standard environments like Atari . In all our experiments we use SAVE with a perfect model of the environment, though we expect our approach would work with learned models as well. In Section 2.2, we hypothesized that approaches which use count-based policy learning rather than value-based learning (e.g. ; may suffer in environments with large branching factors, many suboptimal actions, and small search budgets. To test this hypothesis, we developed a toy environment called Tightrope with these characteristics. Tightrope is a deterministic MDP consisting of 11 labeled states linked together in a chain. At each state, there are 100 actions to take, M % of which are terminal (meaning that when taken they cause the episode to end). The other non-terminal actions will cause the state to transition to the next state in the chain. We considered two settings of the reward function: dense rewards, in which case the agent receives a reward of 0.1 when making it to the next state in the chain and 0 otherwise; and sparse rewards, in which case the agent receives a reward of 1 only when making it to the final state. In the sparse reward setting, we randomly selected one state in the chain to be the "final" state to form a curriculum over the length of the chain. With the exception of the final state in the sparse reward setting, the transition function of the MDP is exactly the same across episodes, with the same actions always having the same behavior. We first examined the behavior of SAVE on Tightrope in a tabular setting to eliminate potential concerns about function approximation (see Section B.2). We compared SAVE to three other agents. UCT is a pure-search agent which runs MCTS using a UCT search policy with no prior. It uses Monte-Carlo rollouts following a random policy to estimate V (s). PUCT is based on AlphaZero and uses a policy prior (which is learned from visit counts during MCTS) and state-value function (which is learned from Monte-Carlo returns). 
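Before the Tightrope experiments below, the losses defined at the start of this passage (the display equation for L_A is missing from the extracted text) can be sketched as follows, with Q_MCTS treated as a fixed target and τ = 1 as in the text. The exact normalization and the choice of squared TD error for L_Q are assumptions.

```python
import numpy as np

def softmax(q, tau=1.0):
    z = q / tau - np.max(q / tau, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def amortization_loss(q_theta, q_mcts, tau=1.0):
    """L_A: cross-entropy between softmax(Q_MCTS / tau), treated as a fixed
    target, and softmax(Q_theta / tau). Shapes: [batch, num_actions]."""
    target = softmax(q_mcts, tau)
    log_pred = np.log(softmax(q_theta, tau) + 1e-12)
    return -np.mean(np.sum(target * log_pred, axis=-1))

def combined_loss(l_q, q_theta, q_mcts, beta_q=1.0, beta_a=1.0):
    """Linear combination of the Q-learning loss L_Q and the amortization loss,
    with beta_Q and beta_A scaling the two terms as in the text."""
    return beta_q * l_q + beta_a * amortization_loss(q_theta, q_mcts)
```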
During search, the policy is used in the PUCT exploration term and the value function is used for bootstrapping. More details on PUCT in general are provided in Section A.3. Q-Learning performs one-step tabular Q-learning during training, and MCTS at test time using the same search procedure as SAVE. Figure 2a-c illustrates the in the tabular setting after 500 episodes. UCT, which does not use any learning, illustrates the difficulty of using brute-force search. Q-learning, which does not use any search during training, is slow to converge to a solution within the 500 episodes, particularly in the sparse reward setting; additionally, adding search at test time does not substantially improve things. Although the incorporation of learning with PUCT does improve the , we can see that with small search budgets and high proportions of terminal actions, PUCT struggles to remember which actions are safe (nonterminal), especially in the sparse reward setting. In contrast, SAVE solves the Tightrope environment in all of the dense reward settings and most of the sparse reward settings. As the search budget increases, we see that both PUCT and SAVE reliably converge to a solution; thus, if a large search budget is available both methods may fare equally well. However, if only a small search budget is available, SAVE in much more reliable performance. We also looked at the ability of SAVE and PUCT to solve the Tightrope environment when using function approximation, along with a model-free Q-learning baseline (see Section B.3). We evaluated all agents on the sparse reward version of Tightrope with 95% terminal actions, and used a search budget of 10 (except for Q-learning, which used a test budget of zero). The , shown in Figure 2d, follow the same pattern as in the tabular setting. We next evaluated SAVE in three of the Construction tasks explored by , in which the goal is to stack blocks to achieve a functional objective while avoiding collisions with obstacles. In Connecting, the goal is to connect a target point in the sky to the floor. In Covering, the goal is to cover obstacles from above without touching them. Covering Hard is the same as Covering, except that only a limited number of blocks may be used. The Construction tasks are challenging for modelfree approaches because there is a combinatorial space of possible scenes and the physical dynamics are challenging to predict. However, they are also difficult for traditional search methods, as they have huge branching factors with up to tens of thousands of possible actions per state. Additionally, the simulator in the Construction tasks is expensive to query, making it infeasible to use with search budgets of more than 10-20. To implement SAVE, we used the same agent architecture as. We compared SAVE to a baseline version of SAVE without amortization loss D) ), similar to the MCTS agent described in. We also compared to a Q-learning baseline which performs pure model-free learning during training (but which may also utilize MCTS at test time using the same search procedure as SAVE), as well as a UCT baseline which did not use any learning (but which did use a pretrained value function for bootstrapping). For SAVE-based agents, we used a training budget of 10 simulations and varied the budget at test time; for UCT, we used a constant budget of 1000 simulations at test time (see Appendix C). Results Figure 3a -c shows the on the three construction tasks. 
The poor performance of UCT (dotted lines) highlights the need for prior knowledge to manage the huge branching factor in these domains. While model-free Q-learning improves performance, simply performing search on top of the learned Q-values only in small gains in performance, if any. The performance of SAVE without amortization loss highlights exactly the issue discussed in Section 2.1. Without the amortization loss, the Q-learning component of SAVE only learns about actions which have been selected via search, and thus rarely sees highly suboptimal actions, ing in a poorly approximated Q-function. Indeed, as we can see in the case where the search budget is zero, the agent's performance falls off dramatically, suggesting that the underlying Q-values are poor. Using search at test time can make up for this problem to some degree, but only when used with a budget very close to that with which it was trained: large search budgets can actually in worse search performance (e.g. in Covering and Covering Hard) because the poor Q-values are also being used for bootstrapping during the search. It is only by leveraging search during training time and incorporating an amortization loss do we see a synergistic : using SAVE in higher rewards across all tasks, strongly outperforming the other agents. Ablation Experiments In the past two sections, we compared SAVE to alternatives which do not include an amortization loss, or which use count-based policy learning rather than value-based learning. However, a number of additional questions remain regarding the architectural choices in SAVE. To address these, we ran a number of ablation experiments on the Covering task, with the shown in Figure 3d. Specifically, we compared SAVE with versions that use an L2 loss (rather than cross entropy), that do not use the Q-learning loss, and that use the Q-values to guide search via PUCT rather than initializing Q 0. Overall, we find that the choices made in SAVE in the highest levels of performance. Of particular note is the ablation that uses the L2 loss, indicating that the softmax cross entropy loss plays an important role in SAVE's performance. We speculate this is true for two reasons. First, because we use small search budgets, the estimated Q MCTS is likely to be noisy, and thus it may be more robust to preserve just the relative magnitudes of action values rather than exact quantities. Second, the cross entropy loss means that Q θ need not represent the values of poor actions exactly, thus freeing up capacity in the neural network to more precisely represent the values of good actions. Details and further discussion is provided in Section C.3. We also compared to a policy-based PUCT agent like that described in Section 4.1, but found this did not achieve positive reward on the harder tasks like Covering. This again highlights the same problem with count-based policy training and small search budgets, as discussed in Section 2.2. SAVE is able to achieve near-ceiling levels of performance on the original Construction tasks. Thus, we developed a new task in the style of the previous Construction tasks called Marble Run which is even more challenging in that it involves sparser rewards and a more complex reward function. Specifically, the goal in Marble Run is to stack blocks to enable a marble to get from its original starting position to a goal location, while avoiding obstacles. 
At each step, the agent may choose from a number of differently shaped rectangular blocks as well as ramp shapes, and may choose to make these blocks "sticky" (for a price) so that they stick to other objects in the scene. The episode ends once the agent has created a structure that would get the marble to the goal. The agent receives a reward of one if it solves the scene, and zero otherwise. We used the same agent architecture and training setup as with the Construction tasks, except for the curriculum. Specifically, we found it was important to train agents on this task using an adaptive curriculum over difficulty levels rather than a fixed linear curriculum. Under the adaptive curriculum, we only allowed an agent to progress to the next level of difficulty after it was able to solve at least 50% of the scenes at the current level of difficulty. Further details of the Marble Run task and the curriculum are given in Appendix D. Results Figure 4 shows the for SAVE and Q-learning for the two different costs of sticky blocks, as as well as some example constructions. SAVE progresses more quickly through the curriculum and reaches higher levels of difficulty (see Figure D .1) and overall achieves much higher levels of reward at every difficulty level. Additionally, we found that the Q-learning agent reliably becomes unstable and collapses at around difficulty 4-5 (see Figure D. 2), while SAVE does not have this problem. Qualitatively (Figure 4c-d), SAVE is able to build structures which allow the marble to reach targets that are raised above the floor while also spanning multiple obstacles. These on Marble Run also allow us to address the trade-off between model-free experience versus planned experience. Specifically, with a search budget of 10, SAVE effectively sees 10 times as many transitions as a model-free agent trained on the same number of environment interactions. Would a model-free agent trained for 10 times as long achieve equivalent performance? As can be seen in Figure D.2, this is not the case: the model-free agent sees more episodes but in worse performance. We find the same in other Construction tasks as well (see Section C.4). This highlights the positive interaction that occurs when learning both from experience generated from planned actions and from the values estimated during search. To demonstrate that SAVE is applicable to more standard environments, we also evaluated it on a subset of Atari games . We implemented SAVE on top of R2D2, a distributed Q-learning agent that achieves state-of-the-art on Atari . To allow for a fair comparison 2 between purely model-free R2D2 and a version with SAVE, we controlled R2D2 to have the same replay ratio as SAVE and then tuned its hyperparameters to have approximately the same level of performance as the baseline version of R2D2 (see Appendix E). We find that SAVE outperforms or equals this controlled version of R2D2 in all games, with particularly high performance on Frostbite, Alien, and Zaxxon (shown in Figure 5). SAVE also outperforms the baseline version of R2D2 (see Table E We introduced SAVE, a method for combining model-free Q-learning with MCTS. During training, SAVE leverages MCTS to infer a set of Q-values, and then uses a combination of real experience plus the estimated Q-values to fit a Qfunction, thus amortizing the value computation of previous searches via a neural network. 
The Q-function is used as a prior to guide future searches, enabling even stronger search performance, which in turn is further amortized via the Q-function. At test time, SAVE can be used to achieve high levels of reward with only very small search budgets, which we demonstrate across four distinct domains: Tightrope, Construction, Marble Run, and Atari. These results suggest that SAVEing the experience generated by search in an explicit Q-function, and initializing future searches with that information, offers important advantages for model-based RL. When combining Q-values estimated both from prior searches and real experience, it may also be useful to account for the quality or confidence of the estimated Q-values. Count-based policy methods do this by leveraging an estimate of confidence based on visit counts: actions with high visit counts should both have high value (or else they would not have been visited so much) and high confidence (because they have been explored extensively). However, as we have shown, relying solely on visit counts can result in poor performance when using small search budgets (Section 4.1). A key future direction will be to amortize both the computation of value and of reliability, achieving the best of both SAVE and count-based methods. Encoding confidence estimates into the Q-values may also be helpful for applying SAVE to settings with learned models, which may have non-trivial approximation errors. In particular, it may be helpful to attenuate the contribution of search-estimated Q-values to the Q-prior both when an action has not been sufficiently explored and when model error is high. Our work demonstrates the value of amortizing the Q-estimates that are generated during MCTS. Indeed, we have shown that by doing so, SAVE reaches higher levels of performance than model-free approaches while using less computation than is required by other model-based methods. More broadly, we suggest that SAVE can be interpreted as a framework for ensuring that the valuable computation performed during search is preserved, rather than being used only for the immediate action or summarized indirectly via frequency statistics of the search policy. By following this philosophy and tightly integrating planning and learning, we expect that even more powerful hybrid approaches can be achieved. In all experiments except Tabular Tightrope (see Section B.2) and Atari (see Appendix E), we use a distributed training setup with 1 GPU learner and 64 CPU actors. Our setup was implemented using TensorFlow and Sonnet, and gradient descent was performed using the Adam optimizer with the TensorFlow default parameter settings (except learning rate). Except for in Atari (see Appendix E), we used a 1-step implementation of Q-learning, with the standard setup with experience replay and a target network. We controlled the rate of experience processed by the learner such that the average number of times each transition was replayed (the "replay ratio") was kept constant. For all experiments, we used a batch size of 16, a learning rate of 0.0002, a replay size of 4000 transitions (with a minimum history of 100 transitions), a replay ratio of 4, and updated the target network every 100 learning steps. We used a variant of epsilon-greedy exploration, described in prior work, in which epsilon is changed adaptively over the course of an episode such that it is lower earlier in the episode and higher later in the episode, with an average value of ε over the whole episode.
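For concreteness, the combined objective described above (Equation 6 of the paper) can be sketched as follows. This is a minimal NumPy illustration rather than the authors' implementation: the exact one-step TD target, the handling of unexplored actions, and any temperature in the softmax cross-entropy amortization term are assumptions on our part.

import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def save_loss(q_theta, q_mcts, q_target_next, r, gamma, a, beta_q=0.5, beta_a=0.5):
    # Q-learning (1-step TD) loss for the action a actually executed.
    td_target = r + gamma * q_target_next.max()
    l_q = (q_theta[a] - td_target) ** 2
    # Amortization loss: softmax cross entropy between the distribution implied
    # by the search-estimated Q-values and the distribution implied by Q_theta.
    p_mcts = softmax(q_mcts)
    log_p_theta = np.log(softmax(q_theta) + 1e-12)
    l_a = -(p_mcts * log_p_theta).sum()
    return beta_q * l_q + beta_a * l_a

Here q_theta and q_mcts are the network's and the search's Q-value vectors at the current state, and q_target_next is the target network's Q-value vector at the next state; beta_q and beta_a correspond to the loss coefficients discussed in the text.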
We annealed the average value of ε from 1 to 0.01 over 1e4 episodes. (Excerpt from Algorithm A.1: select a using epsilon-greedy from Q MCTS (s, ·); execute a in the environment and receive s′, r; compute estimates for Q k+1 (s, a) (Equation 4).) The SAVE agent is implemented as described in Section 3 and Algorithm A.1 provides additional pseudocode explaining the algorithm. In Algorithm A.1, we provide an example of using SAVE in an episodic setting where learning happens after every episode; however, SAVE can be used in any Q-learning setup, including distributed setups where separate processes are concurrently acting and learning. In particular, in our experiments we use the distributed setup described in Section A.1. Note that when performing epsilon-greedy exploration (Line 6 of Algorithm A.1), we either choose an action uniformly at random with probability ε, or otherwise choose the action with the highest value of Q MCTS out of the actions which were explored during search (i.e., we do not consider actions that were not explored, even if they have a higher Q MCTS). In all experiments (except tabular Tightrope), we use a UCT exploration constant of c = 2, though we have found SAVE's performance to be relatively robust to this parameter setting. The PUCT search policy is based on that described by Silver et al. (2017a). Specifically, we choose actions during search according to Equation 1, with an additional prior term in which c is an exploration constant, π(s, a) is the prior policy, and N k (s, a) is the total number of times action a had been taken from state s at iteration k of the search. Like Silver et al. (2017a), we add Dirichlet noise to the prior policy, with noise η ∼ Dir(1/n actions). In our experiments we set the noise weight to 0.25 and c = 2. During training, after search is complete, we sample an action to execute in the environment from the normalized visit counts π MCTS (s, ·). At test time, we select the action which has the maximum visit count (with random tie-breaking). To train the PUCT agent, we used separate policy π θ (s, a) and value V θ (s) heads, which were trained using a combined loss (Equation 6), in which R is the Monte-Carlo return observed from state s. We used fixed values of β Q = 0.5 and β A = 0.5 in all our experiments with PUCT. We used the same replay and training setup as used in the Q-learning and SAVE agents, with two exceptions. First, we additionally include episodic Monte-Carlo returns R and policies π MCTS in the replay buffer so they can be used during learning. Second, we did not use epsilon-greedy exploration (because the Dirichlet noise in the PUCT term already enables sufficient exploration). We tried several different hyperparameter settings and variants of the PUCT agent to attempt to improve the results. For example, we tried using a 1-step TD error for learning the values, which should have lower variance and thus result in more stable learning of values. We also tried reducing the replay ratio to 1 and the replay size to 400 in order to make the experience for training more on-policy. However, we did not find that these changes improved the results. We also tried different settings of the weight for the Dirichlet noise, but found that lower values resulted in too little exploration, while higher values resulted in too much exploration. The Tightrope environment has 11 states which are connected together in a chain. Each state has 100 actions, M% of which will cause the episode to terminate when executed and the rest of which will cause the environment to transition to the next state.
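The selection rule itself (Equation 1 plus the prior term) did not survive extraction. As a hedged reconstruction following the form popularized by Silver et al. (2017a), which the text cites, the PUCT rule is commonly written as

a_k = \arg\max_a \Big[ Q_k(s, a) + c \, \pi(s, a) \, \frac{\sqrt{\sum_b N_k(s, b)}}{1 + N_k(s, a)} \Big],

with the noisy prior \tilde\pi(s, a) = (1 - \epsilon)\,\pi(s, a) + \epsilon\,\eta_a, \ \eta \sim \mathrm{Dir}(1/n_{\mathrm{actions}}). The exact constants and normalization used in the paper's Equation 1 may differ from this standard form.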
Each state is represented using a vector of 50 random values drawn from a standard normal distribution, which are the same across episodes. The indices of terminal actions are selected randomly and are different for each state but are consistent across episodes. Agents always begin in the first state of the chain. In the sparse reward setting, we randomly select one of the states in the chain to be the "final" state (excluding the first state), to enable the agent to sometimes train on easy problems and sometimes train on hard problems. If the agent reaches this final state, it receives a reward of 1 and the episode terminates. If it takes a non-terminal action, it transitions to the next state in the chain and receives a reward of 0. Otherwise, if it takes a terminal action, the episode terminates and the agent receives a reward of 0. In the dense reward setting, the "final" state is always chosen to be the last state in the chain. If the agent reaches the final state in the chain, it receives a reward of 0.1 and the episode terminates. If it takes a non-terminal action, it transitions to the next state in the chain and receives a reward of 0.1. Otherwise, if it takes a terminal action, the episode terminates with a reward of 0. During training, we execute each tabular agent in the environment until the episode terminates. Then, we perform a learning step using the experience generated from the previous episode. This process repeats for some number of episodes (in our experiments, 500). After training, we execute each agent in the environment 100 times and compute the average reward achieved across these 100 episodes. For all cases in which search is used, we use a UCT exploration constant of c = 0.1. Q-Learning Tabular Q-learning begins with a table of state-action values initialized to zero. We perform epsilon-greedy exploration with ε = 0.1, and add the resulting experience to a replay buffer with a maximum size of 1000 transitions. We perform episodic learning, where during each episode the Q-values are fixed and after the episode is complete we update the Q-values by performing a single pass through the experience in the replay buffer in a random order. We use a learning rate of β Q = 0.01. At test time, the Q-learning agent uses MCTS in the same manner as SAVE. SAVE Tabular SAVE begins with a table of state-action values initialized to zero. During search, values are looked up in this table and used to initialize Q 0. The values are also used for bootstrapping. During learning, we perform both Q-learning (as described for the Q-learning agent) as well as an update based on the gradient of the cross-entropy amortization loss (Equation 6). We use β Q = 0.01 and β A = 1. PUCT Tabular PUCT begins with two tables; one with state values (initialized to zero) and one with action probabilities (initialized to the uniform distribution). During search, action probabilities are looked up and used in the PUCT term, while state values are looked up and used for bootstrapping. Search proceeds as described in Section A.3. During learning, π MCTS is copied back into the action probability table (this is equivalent to an L2 update with a learning rate of 1); we also experimented with doing an update based on the cross entropy loss but found this resulted in worse performance. The value at episode t is updated from the observed returns, where R t−1 (s) is the return obtained after visiting state s during episode t − 1 and α is a step size. In our experiments we used α = 0.5.
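The update equation for the tabular value table was lost in extraction. A plausible reconstruction, consistent with the step size α = 0.5 defined above but not guaranteed to match the original display, is the exponential moving average

V_t(s) = (1 - \alpha)\, V_{t-1}(s) + \alpha\, R_{t-1}(s).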
We also experimented with using Q-learning rather than Monte-Carlo returns, but found that these resulted in similar levels of performance. UCT The UCT agent is as described in Section 3.1, with V (s) at unexplored nodes estimated via a Monte-Carlo rollout under a uniform random policy. The only difference from regular UCT is that we did not require all actions to be visited before descending down the search tree; unvisited actions were initialized to a value of zero. For Tightrope, this is the optimal setting of the default Q-values because all possible rewards are greater than or equal to zero. Once an action is found with non-zero reward, the best option is to stick with it, so it would not make sense to set the values optimistically. Actions that cause the episode to terminate have a reward of zero, so it would also not make sense to set the values pessimistically, as this would lead to over-exploring terminal actions. Setting the values to the average of the parent would either have the effect of setting them to zero or setting them optimistically (if the parent had positive reward). To select the final action to execute in the environment, the UCT agent selects a visited action with the maximum estimated value. We could consider alternate approaches here, such as selecting uniformly at random from unexplored actions if none of the visited actions have high enough expected values. We experimented with this approach, using a threshold value of zero (which is the expected value for bad actions in Tightrope), and find that this indeed improves performance (p = 0.02), though the effect size is quite small: on the dense setting with M = 95% we achieve a median reward of 0.08 (using this thresholding action selection policy) versus 0.07 (selecting the max of visited actions). We used the same learning setup for the Q-learning, SAVE, and PUCT agents as described in Appendix A. For the network architecture of our agents, we used a shared multilayer perceptron (MLP) torso with two layers of size 64 and ReLU activations. To predict Q-values, we used an MLP head with two layers of size 64 and ReLU activations, with a final layer of size 100 (the number of actions) with a linear activation. To predict a policy in the PUCT agent, we used the same network architecture as the Q-value head. To predict state values in the PUCT agent, we used a separate MLP head with two layers of size 64 and ReLU activations, and a final layer of size 1 with a linear activation. All network weights were initialized using the default weight initialization scheme in Sonnet. For both the SAVE and PUCT agents we used loss coefficients of β Q = 0.5 and β A = 0.5. We trained each agent 10 times and report results after 1e6 episodes in a version of Tightrope that has 95% terminal actions (Figure 2, right). During training, the SAVE and PUCT agents had access to a search budget of 10 simulations; the Q-learning agent did not use search. We also explored training agents with different numbers of terminal actions and different budgets. Qualitatively, we found the same results as in the tabular setting: the PUCT agent can perform well for larger budgets (50+), but struggles with small budgets, underperforming the model-free Q-learning agent. In contrast, SAVE performed well in all our experiments, even for small budgets like 5 or 10.
C DETAILS ON CONSTRUCTION
C.1 AGENT DETAILS
SAVE For SAVE, we annealed β Q from 1 to 0.1 and β PI from 0 to 4.5 over the course of 5e4 episodes.
We found this allowed the agent to rely more on Q-learning early on in training to build a good Q-value prior, and then more on MCTS later in training once a good prior had already been established. Q-Learning The Q-Learning agent is as described in Section A.1. In particular, we follow the same setup as the GN-DQN agent described in prior work. During training, we use pure Q-learning with no search. At test time, we may allow the Q-learning agent to additionally perform MCTS, using the same search procedure as that used by SAVE (i.e., initializing the Q-values using the trained Q-function and initializing the visit counts to one). The SAVE agent without an amortization loss is the same as the basic SAVE agent, except that it includes no amortization loss (i.e., β A = 0). This is equivalent to the GN-DQN-MCTS agent described in prior work. UCT UCT is as described in Section 3.1, with V (s) at unexplored nodes estimated using a pretrained action-value function (trained using the same setup as the Q-learning agent). Additionally, unlike standard UCT, we did not require all actions to be visited before descending down the search tree. SAVE with L2 SAVE with an L2 loss is identical to SAVE except that it uses a different amortization loss, an L2 penalty between Q θ (s, a) and Q MCTS (s, a). Similar to the SAVE agent, we anneal β Q from 1 to 0.1 and β A from 0 to 0.045 over the course of 5e4 episodes. SAVE without Q-Learning SAVE without the Q-learning loss is identical to SAVE except that we do not use Q-learning and we use the L2 amortization loss described in the previous paragraph, where we set β A = 0.025. The reason we use the L2 loss rather than the cross-entropy loss is that otherwise the Q-values will not actually be real Q-values, in that they will not have grounding in the actual scale of rewards. We did experiment with using only the cross-entropy loss with no Q-learning, and found slightly worse performance than when using the L2 loss and no Q-learning. SAVE with PUCT SAVE with PUCT uses the same learning procedure as SAVE but a different search policy. Specifically, we use the PUCT search policy described in Section A.3 and Equation 7. To do this, we set π(s, a) = σ(Q θ (s, a)), where σ is the softmax over actions with a temperature of 1. We use the same settings for Dirichlet noise to encourage exploration during search. After search is complete, we select an action using the same epsilon-greedy action procedure used by the SAVE agent rather than selecting based on visit counts. We experimented with selecting based on visit counts instead, but found this resulted in the same level of performance. Observations are given as graphs representing the scene, with objects in the scene corresponding to nodes in the graph and edges between every pair of objects. All agents use the same network architecture described in prior work to process these graphs. Briefly, we use a graph network architecture which takes a graph as input and returns a graph with Q-values on the edges of the graph. Each edge corresponds to a relative object-based action like "pick up block B and put it on block D". Each edge additionally has multiple actions associated with it which correspond to particular offset locations where the block should be placed, such as "on the top left". Prior work describes four Construction tasks: Silhouette, Connecting, Covering, and Covering Hard. We reported results on three of these tasks in the main text (Connecting, Covering, and Covering Hard).
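The L2 amortization loss referenced above was lost in extraction. A plausible reconstruction, consistent with the description in Section C.3 (matching Q θ directly to the search-estimated values) but not guaranteed to be the paper's exact display, is

\mathcal{L}_A = \sum_a \big( Q_\theta(s, a) - Q_{\mathrm{MCTS}}(s, a) \big)^2 .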
The agents in prior work already reached ceiling performance on Silhouette and thus we do not report results for that task here, except to report that SAVE also reaches ceiling performance. The agents used 10 MCTS simulations during training and were evaluated on 0 to 50 simulations at test time, with the exception of the UCT agent, which always used 1000 simulations at test time, and the Q-learning agent, which did not perform search during learning. We trained 10 seeds per agent and report results after 1e6 episodes. Figure C.1 shows details of learning progress for each of the agents compared in the ablation experiments on the Covering task (Section 4.2), and Figure C.2 shows detailed final performances evaluated at different test budgets. We evaluated all agents on the hardest level of difficulty of the particular task they were trained on for either 10000 episodes (Figure 3a-c) or 1000 episodes (Figure 3d and Figure C.2). In general, while we find that search at test time can provide small boosts in performance, the main gains are achieved by incorporating search during training. Here we expand on the results presented in the main text and in Figure 3d and Figure C.2. Cross-entropy vs. L2 loss While the L2 loss (Figure C.2, orange) can result in equivalent performance to the cross-entropy loss (Figure C.2, green), this is at the cost of higher variance across seeds and lower performance on average. This is likely because the L2 loss encourages the Q-function to exactly match the Q-values estimated by search. However, with a search budget of 10, those Q-values will be very noisy. In contrast, the cross-entropy loss only encourages the Q-function to match the overall distribution shape of the Q-values estimated by search. This is a less strong constraint that allows the information acquired during search to be exploited while not relying on it too strongly. Indeed, we can observe that the agent with the L2 amortization loss actually performs worse than the agent that has no amortization loss at all (Figure C.2, purple) when using a search budget of 10, suggesting that trying to match the Q-values estimated during search too closely can harm performance. Additionally, we can consider an interesting interaction between Q-learning and the amortization loss. Due to the search locally avoiding poor actions, Q-learning will rarely actually operate on low-valued actions, meaning most of its computation is spent refining the estimates for high-valued actions. The softmax cross entropy loss ensures that low-valued actions have lower values than high-valued actions, but does not force these values to be exact. Thus, in this regime we should have good estimates of value for high-valued actions and worse estimates of value for low-valued actions. In contrast, an L2 loss would require the values to be exact for both low and high valued actions. By using cross entropy instead, we can allow the neural network to spend more of its capacity representing the high-valued actions and less capacity representing the low-valued actions, which we care less about in the first place anyway. With vs. without Q-learning Without Q-learning (Figure C.2, teal), the SAVE agent's performance suffers dramatically. As discussed in the previous section, the Q-values estimated during search are very noisy, meaning it is not necessarily a good idea to try to match them exactly. Additionally, Q MCTS is on-policy experience and can become stale if Q θ changes too much between when Q MCTS was computed and when it is used for learning.
Thus, removing the Q-learning loss makes the learning algorithm much more on-policy and therefore susceptible to the issues that come with on-policy training. Indeed, without the Q-learning loss, we can only rely on the Q-values estimated during search, resulting in much worse performance than when Q-learning is used. UCT vs. PUCT Finally, we compared to a variant which utilizes prior knowledge by transforming the Q-values into a policy via a softmax and then using this policy as a prior with PUCT, rather than using it to initialize the Q-values (Figure C.2, brown). With large amounts of search, the initial setting of the Q-values should not matter much, but in the case of small search budgets (as seen here), the estimated Q-values do not change much from their initial values. Thus, if the initial values are zero, then the final values will also be close to zero, which later results in the Q-function being regressed towards a nearly uniform distribution of value. By initializing the Q-values with the Q-function, the values that are regressed towards may be similar to the original Q-function but will not be uniform. Thus, we can more effectively reuse knowledge across multiple searches by initializing the Q-values with UCT rather than incorporating prior knowledge via PUCT. We performed several other experiments to tease apart the questions regarding exploration strategy and data efficiency. Exploration strategy When selecting the final action to perform in the environment, SAVE uses an epsilon-greedy exploration strategy. However, many other exploration strategies might be considered, such as UCB, categorical sampling from the softmax of estimated Q-values, or categorical sampling from the normalized visit counts. We evaluated how well each of these exploration strategies works, with the results shown in Figure C.3. We find that using epsilon-greedy works the best out of these exploration strategies by a substantial margin. We speculate that this may be because it is important for the Q-function to be well approximated across all actions, so that it is useful during MCTS backups. However, UCB and categorical methods will not uniformly sample the action space, meaning that some actions are very unlikely to ever be learned from. The amortization loss will not help either, as these actions will not be explored during search either. The error in the Q-values for unexplored actions will grow over time (due to catastrophic forgetting), leading to a poorly approximated Q-function that is unreliable. In contrast, epsilon-greedy consistently spends a little bit of time exploring these actions, preventing their values from becoming too inaccurate. We expect this would be less of a problem if we were to use a separate state-value function for bootstrapping (as is done by AlphaZero). Data efficiency With a search budget of 10, SAVE effectively sees 10 times as many transitions as a model-free agent trained on the same number of environment interactions. To more carefully compare the data efficiency of SAVE, we compared its performance to that of the Q-learning agent on the Covering task, controlling for the same number of environment interactions (including those seen during search). The results are shown in Figure C.4, illustrating that SAVE converges to higher rewards given the same amount of data. We find similar results in the Marble Run environment (Figure D). Scenes in Marble Run contain the following types of objects (similar to prior work):
• Floor (in black) that supports the blocks placed by the agent.
• Available blocks (row of blue blocks at the bottom) that the agent picks and places in the scene (with replacement).
• Blocks (blue blocks above the floor) that the agent has already placed. They may take a lighter blue color to indicate that they are sticky. A sticky block gets glued to anything it touches.
• Goal (blue dot) that the agent has to reach with the marble.
• Marble (green circle) that the agent has to route to the goal.
• Obstacles (red blocks, including two vertical walls), which the agent has to avoid by touching them with neither the blocks nor the marble.
All the initial positions for obstacles in the scene are sampled from a tessellation (similar to the Silhouette task in prior work) made of rows with random sequences of blocks with sizes of 1 discretization unit in height and 1 or 2 discretization units in width (a discretization unit corresponds to the side of the first available block). The sampling process goes as follows:
1. Set the vertical position of the goal to the specified discrete height (according to level), corresponding to the center of one of the tessellation rows, and the vertical position of the marble 2 rows above that.
2. Uniformly sample a horizontal distance between the marble and the goal from a predefined range, and uniformly sample the absolute horizontal positions respecting that absolute distance.
3. Sample a number of obstacles (according to level) from the tessellation spanning up to the vertical position of the marble.
Obstacles are sampled from the tessellation sequentially. Before each obstacle is sampled, all objects in the tessellation that are too close (± 2 layers vertically and with less than 2 discretization units of clearance sideways) to the goal, the target, or previously placed obstacles, are removed from the tessellation in order to prevent unsolvable scenes. Then probabilities are assigned to all of the remaining objects in the tessellation according to one of the following criteria (the criterion itself is also picked randomly with different weights), designed to avoid generating trivial scenes:
• (Weight=4) Pick uniformly a tessellation object lying exactly on the floor and between the marble and the goal horizontally, since those objects prevent the marble from rolling freely on the floor (only applicable if the tessellation still has objects of this kind available).
• (Weight=1) Pick a tessellation object that is close (horizontally) to the marble. Probabilities proportional to e^(−d/τ) (where d is the horizontal distance between each object and the marble, scaled by the width of the scene, and τ is a temperature set to 0.1) are assigned to all objects left in the tessellation, and one of them is picked.
• (Weight=1) Pick a tessellation object that is close (horizontally) to the goal. Identical to the previous one, but using the distance to the goal.
• (Weight=1) Pick a tessellation object that is close (horizontally) to the middle point between the ball and the goal. Identical to the previous one, but using the distance to the middle point, and a temperature of 0.2.
• (Weight=1) Pick any object remaining in the tessellation with uniform probability (to increase diversity).
We used a curriculum to sample scenes of increasing difficulty (Fig. D.1). During both training and testing, episodes at a certain curriculum level are sampled not only from that difficulty, but also from all of the previous difficulty levels, using a truncated geometric distribution with a decay of 0.5.
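As an illustration of the distance-weighted sampling criteria above, here is a minimal NumPy sketch. The exponential form exp(−d/τ) is a reconstruction inferred from the stated distance d and temperature τ (the original expression was garbled in extraction), and the function and variable names are ours rather than the authors':

import numpy as np

def pick_obstacle_near(reference_x, candidates_x, scene_width, tau=0.1, rng=np.random):
    # Horizontal distances, scaled by the width of the scene.
    d = np.abs(np.asarray(candidates_x, dtype=float) - reference_x) / scene_width
    # Temperature-weighted probabilities: closer objects are more likely.
    logits = -d / tau
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(candidates_x), p=p)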
This means that at each level, about half of the episodes correspond to that level, half of the remaining episodes correspond to the previous level, half of the remaining to the level before that, and so on. By truncated we mean that, because it is not possible to sample episodes for negative levels, we truncate the probabilities there and re-normalize. Given the complexity and the sparsity of rewards in this task, we trained agents using an adaptive curriculum to avoid presenting unnecessarily hard levels to the agent until the agent is able to solve the simpler levels. Specifically, at each level of the curriculum we keep track of and bin past episodes according to all possible combinations of scene properties consisting of:
• Height of the target (discretized to tessellation rows).
• Horizontal distance d between marble and goal (discretized to d < 1/3, 1/3 < d < 2/3, or d > 2/3, where d is normalized by the width of the scene).
• Number of obstacles.
• Height of the highest obstacle (discretized to tessellation rows).
• Height of the lowest obstacle (discretized to tessellation rows).
We require the agents to have solved at least 50% of the scenes of the last 50 episodes in each bin individually, and simultaneously in all bins, before we allow the agent to progress to the next level of difficulty (a minimal sketch of this gating rule is given at the end of this section). This is a very strict criterion, which effectively means the agent has to find solutions for all representative variations of the task at that level before it is allowed to progress to the next level. Each environment step of an episode proceeds through the following phases:
1. Block placement phase: The agent picks one object from the available objects and places it into the scene. If the block placed by the agent was sticky, the agent will receive a negative reward according to the cost (which may be either 0 or 0.04).
2. Block settlement phase: The physics simulation (keeping the marble frozen) is run until the placed blocks settle (up to a maximum of 20 s). During this phase the new block may affect the position of previously placed blocks.
3. Marble dynamics phase: The physics simulation including the marble is run until the marble collides with 8 objects, with a timeout of 10 s at each collision, that is, a maximum of 80 s. This phase may terminate early if the marble reaches the goal (the task is solved and the episode terminates with a reward of 1), but also if the marble or any of the blocks touch an obstacle.
4. Restore state phase: After the marble dynamics phase, the marble and all of the blocks are moved back to the position where they were at the end of the block settlement phase. This is to prevent the agent from using the marble to indirectly move the blocks with a persistent effect across steps.
The block placement phase and block settlement phase, as well as the action space, are identical to those in prior work. The observation is identical to that of the Construction tasks, with an additional one-hot encoding of the object shape (e.g. rectangle vs triangle vs circle), and includes all block positions and the initial marble position at the end of the block settlement phase. Note that the agent never actually gets to observe the marble's dynamics, and therefore does not get direct feedback about why the marble does or does not make it to the goal (such as it getting stuck in a hole). An interesting direction for future work would be to incorporate this information into the agent's learning as well.
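The gating rule referenced above can be sketched as follows. This is a minimal illustration assuming a per-bin sliding window of recent outcomes; the 50-episode window and 50% threshold come from the text, while the class and method names are ours:

from collections import defaultdict, deque

class CurriculumGate:
    def __init__(self, window=50, threshold=0.5):
        self.window = window
        self.threshold = threshold
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record(self, bin_key, solved):
        # bin_key identifies one combination of scene properties
        # (target height, distance bucket, number of obstacles, ...).
        self.history[bin_key].append(float(solved))

    def may_advance(self):
        # Advance only if every bin has a full window with >= threshold solved.
        return len(self.history) > 0 and all(
            len(h) == self.window and sum(h) / len(h) >= self.threshold
            for h in self.history.values()
        )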
There are several episode termination conditions that may be triggered before the task is solved:
• An agent places a block in a position that overlaps with an existing block or obstacle.
• An agent has placed a block that touches an obstacle during the settlement phase.
• An agent has placed a block that, at the end of the block settlement phase, overlaps with the initial marble position.
• The maximum number of steps is reached.
Note that touching obstacles during the marble dynamics phase does not terminate the episode because we are purely evaluating the reward function and, during the restore state phase, all objects are returned to their previous locations. This makes it possible for the agent to correct for any obstacle collisions that happened during the marble dynamics phase, by placing additional blocks that re-route the marble. We used the same experimental setup as in the other Construction tasks (Appendix C). In particular, during training, for each seed of each agent we checkpoint the weights which achieve the highest reward on the highest curriculum level, and then use these checkpoints to evaluate performance in Figure 4. Figure D.2 additionally shows details of the training performance at each level of difficulty in the curriculum. We can see that at around difficulty level 4-5, the Q-learning agent becomes unstable and crashes, while the SAVE agent stays stable and continues to improve. Indeed, as shown in Figure D.3, the Q-learning agent never makes it to difficulty level 6 (when sticky blocks are free) or even difficulty level 5 (when sticky blocks have a moderate cost). The SAVE agent is able to reach harder levels of difficulty, and does so with fewer learning steps. Table E.1: Results on Atari. Scores are final performance averaged over 3 seeds. "Baseline" is the standard version of R2D2. "Controlled" is our version that is controlled to have the same replay ratio as SAVE. The rightmost column reports the percent change in reward of SAVE over the controlled version of R2D2. Bold scores indicate scores that are within 5% of the best score on a particular game. The last two rows show median and mean scores, respectively. The percentages in the last two rows show the median and mean across percent change, rather than the percent change of the median/mean scores. We evaluated SAVE on a set of 14 Atari games in the Arcade Learning Environment. The games were chosen as a combination of classical action Atari games such as Asteroids and Space Invaders, and games with a stronger strategic component such as Ms. Pacman and Frostbite, which are commonly used as evaluation environments for model-based agents. SAVE was implemented on top of the R2D2 agent as described in Algorithm A.1. Concretely, this means we evaluate the function Q MCTS instead of Q θ to select an action in the actors, and optimize the combined loss function (Equation 6) instead of the TD loss in the learner. For hyperparameters, we used a search budget of 10, and β Q = 1, β A = 10. We did very little tuning to select these hyperparameters, only sweeping over two values of β A ∈ {1, 10}. We found that while both of these settings resulted in similar performance, β A = 10 worked slightly better. It is likely that with further tuning of these parameters, even larger increases in reward could be achieved, as L Q and L A will have very different relative magnitudes depending on the scale of the rewards in each game.
All hyper-parameters of R2D2 remain unchanged from the original paper, with the exception of actor speed compensation. By running MCTS, multiple environment interactions need to be evaluated for each actor step, which means transition tuples are added to the replay buffer at a slower rate, changing the replay ratio. To account for this, we increase the number of actors from 256 to 1024, and change the actor parameter update interval from 400 to 40 steps. The learning curves of our experiment are shown in Figure E.1, and Table E.1 shows the final performance in tabular form. We ran three seeds for each of the Baseline, Controlled and SAVE agents for each game and computed final scores as the average score over the last 2e4 episodes of training. The Baseline agent represents the unchanged R2D2 agent from the original paper. The Controlled agent is an R2D2 agent controlled to have the same replay ratio as SAVE, which we achieve by running MCTS in the actors but then discarding the results. As in SAVE, we use 1024 actors with an update interval of 40 for the controlled agent. We can observe that in the majority of games, SAVE performs not only better than the controlled agent but also better than the original R2D2 baseline. While we see big improvements in the strategic games such as Ms. Pacman, we also notice a gain in many of the action games. This suggests that model-based methods like SAVE can be useful even in domains that do not require as much long-term reasoning. | We propose a model-based method called "Search with Amortized Value Estimates" (SAVE) which leverages both real and planned experience by combining Q-learning with Monte-Carlo Tree Search, achieving strong performance with very small search budgets. | 1,002 | scitldr
Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven to be equivalent when using a softmax relaxation on one part, and an entropic regularization on the other. We relate this to the well-known convex duality of Shannon entropy and the softmax function. Such a result is also known as the Donsker-Varadhan formula. This provides a short proof of the equivalence. We then interpret this duality further, and use ideas of convex analysis to prove a new policy inequality relative to soft Q-learning. • Policy gradients (V.) look to maximize the expected reward by improving policies to favor high-reward actions. In general, the target loss function is regularized by the addition of an entropic functional for the policy. This makes policies more diffuse and less likely to yield degenerate results. A critical step in the theoretical understanding of the field has been a smooth relaxation of the greedy max operation involved in selecting actions, turned into a Boltzmann softmax O. BID8. This new context has led to a breakthrough this year J. BID7 with the proof of the equivalence of both methods of Q-learning and policy gradients. While that result is extremely impressive in its unification, we argue that it is critical to look additionally at the fundamental reasons as to why it occurs. We believe that the convexity of the entropy functional used for policy regularization is at the root of the phenomenon, and that (Lagrangian) duality can be exploited as well, either yielding faster proofs, or further understanding. The contributions of our paper are as follows:
1. We show how convex duality expedites the proof of the equivalence between soft Q-learning and softmax entropic policy gradients: heuristically in the general case, rigorously in the bandit case.
2. We introduce a transportation inequality that relates the expected optimality gap of any policy with its Kullback-Leibler divergence to the optimal policy.
We describe our notations here. Abusing notation heavily by identifying measures with their densities as in dπ(a|s) = π(a|s)da, if we denote by either r(s, a) or r(a, s) the reward obtained by taking action a in state s, the expected reward expands as: DISPLAYFORM0 K r is a linear functional of π. Adding Shannon entropic regularization improves numerical stability of the algorithm, and prevents early convergence to degenerate solutions. Denoting the regularization strength β, the objective becomes a free energy functional, named by analogy with a similar quantity in statistical mechanics: DISPLAYFORM1 Crucially, viewed as a functional of π, J is convex and is the sum of two parts DISPLAYFORM2
2 THE GIBBS VARIATIONAL PRINCIPLE FOR POLICY EVALUATION
Here we are interested in the optimal value of the policy functional J, achieved for an optimal policy π*. We hence look for J* = J(π*) = sup π∈P J(π). In the one-step, one-state bandit setting we are in, this is in fact almost the same as deriving the state-value function. By the principles of convex duality BID3 BID16, writing DISPLAYFORM0 with H the entropy functional defined above, we recover exactly the definition of the Legendre-Fenchel transformation, or convex conjugate, of β · H. The word convex applies to the entropy functional, and doesn't make any assumptions on the rewards r(s, a), other than that they be well-behaved enough to be integrable in a. The Legendre transform inverts derivatives.
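The DISPLAYFORM placeholders above stand for the expected-reward functional and its entropy-regularized free energy. A plausible reconstruction, consistent with the definitions of K_r, H and β in the text but not guaranteed to match the original typesetting, is

K_r(\pi) = \int_A r(s, a)\, \pi(a|s)\, da, \qquad H(\pi) = -\int_A \pi(a|s) \log \pi(a|s)\, da, \qquad J(\pi) = K_r(\pi) + \beta\, H(\pi).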
A simple calculation shows that the formal convex conjugate of f: t → t log t is f*: p → e^(p−1); this is because their respective derivatives, log and exp, are inverses of one another. We can apply this to f(π(a|s)) = π(a|s) log π(a|s), and then this relationship can also be integrated in a. Hence the dual Legendre representation of the entropy functional H is known. The Gibbs variational principle states that, taking β = 1/λ as the inverse temperature parameter, and for each Borelian (measurable) test function Φ ∈ C b (A): DISPLAYFORM1 or in shorter notation, for each real random variable X with exponential moments, DISPLAYFORM2 We can prove a stronger result. If µ is a reference measure (or policy), and we now consider the relative entropy (or Kullback-Leibler divergence) with respect to µ, H µ (·), instead of the entropy H(·), then the Gibbs variational principle still holds (BID15, chapter 22). This result regarding dual representation formulas for entropy is important and is in fact found in several areas of science:
• as above, in thermodynamics, where it is named the Gibbs variational principle;
• in large deviations, it is also known as the Donsker-Varadhan variational formula BID4;
• in statistics, it is the well-known duality between maximum entropy and maximum likelihood estimation BID0;
• finally, the theory of information geometry BID1 groups all three views and posits that there exists a general, dually flat Riemannian information manifold.
The general form of the result is as follows. For each Φ representing a reward function r(s, a) or an estimator of it: DISPLAYFORM3 and the supremum is reached for the measure π* ∈ P defined by its Radon-Nikodym derivative equal to the Gibbs-Boltzmann measure yielding an energy policy: DISPLAYFORM4 In the special case where µ is the Lebesgue measure on a bounded domain (that is, the uniform policy), we recover equation 5 above, up to a constant irrelevant for maximization. In the general case, the mathematically inclined reader will also see this as a rephrasing of the fact that the Bregman divergence associated with Shannon entropy is the Kullback-Leibler divergence. For completeness' sake, we provide here its full proof: Proposition 1. Donsker-Varadhan variational formula. Let G be a bounded measurable function on A and π, π̄ be probability measures on A, with π absolutely continuous w.r.t. π̄. Then DISPLAYFORM5 where π* is a probability measure defined by the Radon-Nikodym derivative: DISPLAYFORM6 Proposition 2. Corollary: DISPLAYFORM7 and the maximum is attained uniquely by π*. Proof. DISPLAYFORM8 The link with reinforcement learning is made by picking Φ = r(s, a), π = π(a|s), λ = 1/β, and by recalling the implicit dependency of the right member on s but not on π at optimality, so that we can write DISPLAYFORM9 which is the definition of the one-step soft Bellman operator at the optimum (R. Fox & Tishby FORMULA0; O. Nachum & Schuurmans 2017b; T. BID12). Note that here V*(s) depends on the reference measure µ which is used to pick action frequencies; we can be off-policy, in which case V* is only a pseudo state-value function. In this simplified one-step setting, this provides a short and direct proof that in expectation, and trained to optimality, soft Q-learning and policy gradient ascent yield the same result J. BID7. Standard Q-learning is the special case β → 0, λ → ∞ where by the Laplace principle we recover V(s) → max A r(s, a); that is, the zero-temperature limit, with no entropy regularization.
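Since the display equations of Proposition 1 did not survive extraction, the standard statement of the Donsker-Varadhan formula, written here in the notation of the text, is given below; this is the textbook form and may differ cosmetically from the paper's original display:

\log \mathbb{E}_{\bar\pi}\big[ e^{G} \big] \;=\; \sup_{\pi} \Big\{ \mathbb{E}_{\pi}[G] - D_{KL}(\pi \,\|\, \bar\pi) \Big\}, \qquad \frac{d\pi^{*}}{d\bar\pi} \;=\; \frac{e^{G}}{\mathbb{E}_{\bar\pi}\big[ e^{G} \big]} .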
For simplicity of exposition, we have so far restricted ourselves to the proof in the bandit setting; now we extend it to the general case. We first insert V*(s) = sup π V π (s) into the representation formulas above, applied to r(s, a) + γV*(s), so that DISPLAYFORM0 The proof in the general case will then be finished if we assume that we can apply the Bellman optimality principle not to the hard-max, but to the soft-max operator. This requires proving that the soft-Bellman operator admits a unique fixed point, which is the result above. By the Brouwer fixed point theorem, it is enough to prove that it is a contraction, or at least non-expansive (we assume that the discount factor γ < 1 to that end). We do so below, noting that this has been shown many times in the literature, for instance in O. BID8. Refining the soft-Bellman operator just like above, but in the multi-step case, by the expression DISPLAYFORM1 we get the following: Proposition 3. Nonexpansiveness of the soft-Bellman operator for the supremum norm ‖ · ‖ ∞. DISPLAYFORM2 Proof. Let us consider two state-value functions V (s) and V′(s) along with the associated action-value functions Q(s, a) and Q′(s, a). Besides, denote the MDP transition probability by p(s′|s, a). Then: DISPLAYFORM3 ∞ by Hölder's inequality DISPLAYFORM4 In summary, the program of the proof was as below:
1. Write down the entropy-regularised policy gradient functional, and apply the Donsker-Varadhan formula to it.
2. Write down the resulting softmax Bellman operator as a solution to the sup maximization; this obviously also proves existence.
3. Show that the softmax operator, just like the hard max, is still a contraction for the max norm, hence prove uniqueness of the solution by the fixed point theorem.
The above also shows formally that, should we discretize the action space A to replace integration over actions by finite sums, any strong estimator r̂(s, a) of r(s, a), applied to the partition function of rewards (1/λ) log Σ a e^(λ r(s,a)), could be used for Q-learning-like iterations. This is because strong convergence would imply weak convergence (especially convergence of the characteristic function, via Lévy's continuity theorem), and hence convergence towards the log-sum-exp cumulant generating function above. Different estimators r̂(s, a) lead to different algorithms. When the MDP and the reward function r are not known, the parameterised critic choice r̂(s, a) ≈ Q w (s, a) recovers Nachum's Path Consistency Learning O. BID8 BID15. O'Donoghue's PGQ method B. O'Donoghue & Mnih FORMULA0 can be seen as a control variate balancing of the two terms in equation 7. In theory, the rewards distribution could also be recovered simply by varying λ (or β), for instance by inverse Laplace transform.
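To make the soft Bellman backup concrete, here is a minimal NumPy sketch of a single backup over a discretized action set; the names are ours, and the last line simply illustrates the zero-temperature (Laplace principle) limit mentioned above:

import numpy as np

def soft_backup(q_values, beta):
    # Soft state value: V(s) = beta * log sum_a exp(Q(s, a) / beta).
    lam = 1.0 / beta
    m = q_values.max()  # stabilize the exponentials
    return m + (1.0 / lam) * np.log(np.exp(lam * (q_values - m)).sum())

q = np.array([1.0, 2.0, 3.0])
print(soft_backup(q, beta=1.0))   # noticeably above max(q)
print(soft_backup(q, beta=1e-3))  # ~3.0: recovers the hard max as beta -> 0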
By taking a Legendre transformation and inverting it, we get that for any pair of measures P and Q that are mutually absolutely continuous, one has DISPLAYFORM2 which, by specializing Q to be the measure associated with the optimal policy P*, P θ the current parameterized policy, and X an advantage return r, gives: DISPLAYFORM3 By the same logic, any upper bound on log E e βX can give us information about E Q X − E P X. This enables us to relate the size of Kullback-Leibler trust regions to the amount by which our policy could be improved. In fact, by combining the entropy duality formula with the Legendre transformation, one easily proves the result below: Proposition 4. Let X be a real-valued integrable random variable, and f a convex and differentiable function such that f(0) = f′(0) = 0. Then with f*: x → f*(x) = sup β (βx − f(β)) the Legendre transformation of f, f*^(−1) its inverse function, and P and Q any two mutually absolutely continuous measures, one has the equivalence: DISPLAYFORM4 Proof. By the Donsker-Varadhan formula, one has that the equivalence is proven if and only if DISPLAYFORM5 but this right term is easily proven to be nothing but DISPLAYFORM6 the inverse of the Legendre transformation of f applied to D KL (Q||P). This also opens up the possibility of using various softmax temperatures β i in practical algorithms in order to estimate f. Finally, note that if P θ is a parameterized softmax policy associated with action-value functions Q θ (a, s) and temperature β, then because P* is proportional to e^(−r(a,s)/β), one readily has DISPLAYFORM7 which can easily be inserted in the inequality above for the special case Q = P*. Entropic reinforcement learning appeared early in the literature with two different motivations. The view of exploration with a self-information intrinsic reward was pioneered by Tishby, and developed in Ziebart's PhD thesis BID16. It was rediscovered recently that within the asynchronous actor-critic framework, entropic regularization is crucial to ensure convergence in practice (V.). Furthermore, the idea of taking steepest KL divergence steps as a practical reinforcement learning method per se was adopted by Schulman (J. Schulman & Abbeel). The key common development in these works has been to make entropic regularization recursively follow the Bellman equation, rather than naively regularizing one-step policies G. BID5. Schulman thereafter proposed a general proof of the equivalence, in the limit, of policy gradient and soft Q-learning methods J. BID7, but the proof does not explicitly make the connection with convex duality and the expeditive justification it yields in the one-step case. Applying the Gibbs/Donsker-Varadhan variational formula to entropy in a machine learning context is, however, not new; see for instance BID0. Some of the convex optimization they invoke, including proximal stepping, can be found in the complete treatment by BID3. In the context of neural networks, convex analysis and partial differential equation methods are covered by BID10. Using dual formulas for the entropy functional in reinforcement learning has vast potential ramifications. One avenue of research will be to interpret our findings in a large deviations framework: the log-sum-exp cumulant generating function is an example of a rate function governing fluctuations of the tail of empirical n-step returns. Smart drift change techniques could lead to significant variance reduction for Monte-Carlo rollout estimators.
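To make the specialization above concrete: under the sub-Gaussian bound from the Hoeffding step, log E_P[e^{βX}] ≤ Kβ² (with X centered under P), optimizing the dual bound over β gives the generic transportation inequality below. The exact constant depends on the variance proxy chosen, so this should be read as the standard form rather than the paper's exact display:

\mathbb{E}_{Q}[X] - \mathbb{E}_{P}[X] \;\le\; \inf_{\beta > 0} \Big( K\beta + \tfrac{1}{\beta} D_{KL}(Q \,\|\, P) \Big) \;=\; 2\sqrt{K\, D_{KL}(Q \,\|\, P)} .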
We also hope to exploit further concentration inequalities in order to provide more bounds for the state value function. Finally, a complete theory of the one-to-one correspondence between convex approximation algorithms and reinforcement learning methods is still lacking to date. We hope to be able to contribute in this direction through further work. | A short proof of the equivalence of soft Q-learning and policy gradients. | 1,003 | scitldr |
Computer simulation provides an automatic and safe way for training robotic control policies to achieve complex tasks such as locomotion. However, a policy trained in simulation usually does not transfer directly to the real hardware due to the differences between the two environments. Transfer learning using domain randomization is a promising approach, but it usually assumes that the target environment is close to the distribution of the training environments, thus relying heavily on accurate system identification. In this paper, we present a different approach that leverages domain randomization for transferring control policies to unknown environments. The key idea is that, instead of learning a single policy in the simulation, we simultaneously learn a family of policies that exhibit different behaviors. When tested in the target environment, we directly search for the best policy in the family based on the task performance, without the need to identify the dynamic parameters. We evaluate our method on five simulated robotic control problems with different discrepancies in the training and testing environment and demonstrate that our method can overcome larger modeling errors compared to training a robust policy or an adaptive policy. Recent developments in Deep Reinforcement Learning (DRL) have shown the potential to learn complex robotic controllers in an automatic way with minimal human intervention. However, due to the high sample complexity of DRL algorithms, directly training control policies on the hardware still remains largely impractical for agile tasks such as locomotion. A promising direction to address this issue is to use the idea of transfer learning, which learns a model in a source environment and transfers it to a target environment of interest. In the context of learning robotic control policies, we can consider the real world the target environment and the computer simulation the source environment. Learning in a simulated environment provides a safe and efficient way to explore a large variety of situations that a real robot might encounter. However, due to the model discrepancy between physics simulation and the real-world environment, also known as the Reality Gap BID2 BID18, the trained policy usually fails in the target environment. Efforts have been made to analyze the cause of the Reality Gap BID20 and to develop more accurate computer simulation to improve the ability of a policy when it is transferred to real hardware. Orthogonal to improving the fidelity of the physics simulation, researchers have also attempted to cross the reality gap by training more capable policies that succeed in a large variety of simulated environments. Our method falls into the second category. To develop a policy capable of performing in various environments with different governing dynamics, one can consider training a robust policy or training an adaptive policy. In both cases, the policy is trained in environments with randomized dynamics. A robust policy is trained under a range of dynamics without identifying the specific dynamic parameters. Such a policy can only perform well if the simulation is a good approximation of the real world dynamics. In addition, for more agile motor skills, robust policies may appear over-conservative due to the uncertainty in the training environments.
On the other hand, when an adaptive policy is used, it learns to first identify, implicitly or explicitly, the dynamics of its environment, and then selects the best action according to the identified dynamics. Being able to act differently according to the dynamics allows the adaptive policy to achieve higher performance on a larger range of dynamic systems. However, when the target dynamics is notably different from the training dynamics, it may still produce sub-optimal results for two reasons. First, when a sequence of novel observations is presented, the learned identification model in an adaptive policy may produce inaccurate estimations. Second, even when the identification model is perfect, the corresponding action may not be optimal for the new situation. In this work, we introduce a new method that enjoys the versatility of an adaptive policy, while avoiding the challenges of system identification. Instead of relating the observations in the target environment to the similar experiences in the training environment, our method searches for the best policy directly based on the task performance in the target environment. Our algorithm can be divided into two stages. The first stage trains a family of policies, each optimized for a particular vector of dynamic parameters. The family of policies can be parameterized by the dynamic parameters in a continuous representation. Each member of the family, referred to as a strategy, is a policy associated with particular dynamic parameters. Using a locomotion controller as an example, a strategy associated with a low friction coefficient may exhibit a cautious walking motion, while a strategy associated with a high friction coefficient may result in a more aggressive running motion. In the second stage we perform a search over the strategies in the target environment to find the one that achieves the highest task performance. We evaluate our method on three examples that demonstrate transfer of a policy learned in one simulator, DART, to another simulator, MuJoCo. Due to the differences in the constraint solvers, these simulators can produce notably different simulation results. A more detailed description of the differences between DART and MuJoCo is provided in Appendix A. We also add latency to the MuJoCo environment to mimic a real world scenario, which further increases the difficulty of the transfer. In addition, we use a quadruped robot simulated in Bullet to demonstrate that our method can overcome actuator modeling errors. Latency and actuator modeling have been found to be important for Sim-to-Real transfer of locomotion policies BID20. Finally, we transfer a policy learned for a robot composed of rigid bodies to a robot whose end-effector is deformable, demonstrating the possibility of using our method to transfer to problems that are challenging to model faithfully.
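A minimal sketch of the two-stage idea described above, assuming a pre-trained policy that takes (observation, μ) as input and an environment following the standard Gym-style step/reset interface; the simple random-search optimizer over μ shown here is one possible instantiation for illustration, not necessarily the paper's exact search procedure:

import numpy as np

def episode_return(env, policy, mu, max_steps=1000):
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        action = policy(obs, mu)  # strategy conditioned on the dynamics vector mu
        obs, reward, done, _ = env.step(action)
        total += reward
        if done:
            break
    return total

def search_strategy(target_env, policy, mu_low, mu_high, n_candidates=50, rng=np.random):
    # Stage 2: evaluate candidate dynamics vectors directly in the target
    # environment and keep the strategy with the highest task performance.
    best_mu, best_ret = None, -np.inf
    for _ in range(n_candidates):
        mu = rng.uniform(mu_low, mu_high)
        ret = episode_return(target_env, policy, mu)
        if ret > best_ret:
            best_mu, best_ret = mu, ret
    return best_mu, best_ret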
Complementary to learning an accurate simulation model, a different line of research in sim-to-real transfer is to learn policies that can work under a large variety of simulated environments. One common approach is domain randomization. Training a robust policy with domain randomization has been shown to improve the ability to transfer a policy BID34 BID26 BID25. BID34 trained an object detector with randomized appearance and applied it in a real-world gripping task. showed that training a robust policy with randomized dynamic parameters is crucial for transferring quadruped locomotion to the real world. Designing the parameters and range of the domain to be randomized requires specific knowledge for different tasks. If the range is set too high, the policy may learn a conservative strategy or fail to learn the task, while a small range may not provide enough variation for the policy to transfer to real-world. A similar idea is to train an adaptive policy with the current and the past observations as input. Such an adaptive policy is able to identify the dynamic parameters online either implicitly BID21 BID24 or explicitly BID38 ) and apply actions appropriate for different system dynamics. Recently, adaptive policies have been used for sim-to-real transfer, such as in-hand manipulation tasks BID21 or non-prehensile manipulation tasks BID24. Instead of training one robust or adaptive policy, trained multiple policies for a set of randomized environments and learned to combine them linearly in a separate set of environments. The main advantage of these methods is that they can be trained entirely in simulation and deployed in real-world without further fine-tuning. However, policies trained in simulation may not generalize well when the discrepancy between the target environment and the simulation is too large. Our method also uses dynamic randomization to train policies that exhibit different strategies for different dynamics, however, instead of relying on the simulation to learn an identification model for selecting the strategy, we propose to directly optimize the strategy in the target environment. A few recent works have also proposed the idea of training policies in a source environment and fine-tune it in the target environment. For example, BID7 proposed MAP-Elite to learn a large set of controllers and applied Bayesian optimization for fast adaptation to hardware damages. Their approach searches for individual controllers for discrete points in a behavior space, instead of a parameterized family of policies as in our case, making it potentially challenging to be applied to higher dimensional behavior spaces. BID27 used progressive network to adapt the policy to new environments by designing a policy architecture that can effectively utilize previously learned representations. BID4 learned an implicit representation of the environment variations by optimizing a latent policy input for each discrete instance of the environment. They showed that fine-tuning on this learned policy achieved improved learning efficiency. In contrast to prior work in which the fine-tuning phase adjusts the neural network weights in the target environment, we optimize only the dynamics parameters input to the policy. This allows our policies to adapt to the target environments with less data and to use sparse reward signal. 
We formulate the motor skill learning problem as a Markov Decision Process (MDP), M = (S, A, r, P, p 0, γ), where S is the state space, A is the action space, r: S × A → R is the reward function, P: S × A → S is the transition function, p 0 is the initial state distribution and γ is the discount factor. The goal of reinforcement learning is to find a control policy π: S → A that maximizes the expected accumulated reward J_M(π) = E_{τ=(s_0, a_0, ..., s_T)} [ Σ_{t=0}^{T} γ^t r(s_t, a_t) ], where s 0 ∼ p 0, a t ∼ π(s t) and s t+1 = P(s t, a t). In practice, we usually only have access to an observation of the robot that contains only partial information about the robot's state. In this case, we have a Partially-Observable Markov Decision Process (POMDP) and the policy becomes π: O → A, where O is the observation space. In the context of transfer learning, we can define a source MDP M s and a target MDP M t, and the goal is to learn a policy π s for M s such that it also works well on M t. In this work, P is regarded as a parameterized space of transition functions, s t+1 = P µ (s t, a t), where µ is a vector of physical parameters defining the dynamic model (e.g. the friction coefficient). Transfer learning in this context learns a policy under P s and transfers it to P t, where P s ≠ P t. We propose a new method for transferring a policy learned in a simulated environment to a target environment with unknown dynamics. Our algorithm consists of two stages: learning a family of policies and optimizing the strategy. The first stage of our method is to learn a family of policies, each for a particular dynamics P s µ (·). One can potentially train each policy individually and interpolate them to cover the space of µ BID30 BID8. However, as the dimension of µ increases, the number of policies required for interpolation grows exponentially. Since many of these policies are trained under similar dynamics, our method merges them into one neural network and trains the entire family of policies simultaneously. We follow the work by BID38, which trains a policy π: (o, µ) → a that takes as input not only the observation of the robot o, but also the physical parameters µ. At the beginning of each rollout during training, we randomly pick a new set of physical parameters for the simulation and fix it throughout the rollout. After training the policy this way, we obtain a family of policies that is parameterized by the dynamics parameters µ. Given a particular µ, we define the corresponding policy as π µ: o → a. We will call such an instantiated policy a strategy. The second stage of our method is to search for the optimal strategy in the space of µ for the target environment. Previous work learns a mapping between the experiences under source dynamics P s µ and the corresponding µ. When new experiences are generated in the target environment, this mapping will identify a µ based on similar experiences previously generated in the source environment. While using experience similarity as a metric to identify µ transfers well to a target environment that has the same dynamic parameter space BID38, it does not generalize well when the dynamic parameter space is different. Since our goal is to find a strategy that works well in the target environment, a more direct approach is to use the performance of the task, i.e. 
the accumulated reward, in the target environment as the metric to search for the strategy: µ* = argmax_µ J_{M_t}(π_µ). (1) Solving Equation 1 can be done efficiently because the search space in Equation 1 is the space of dynamic parameters µ, rather than the space of policies, which are represented as neural networks in our implementation. To further reduce the number of samples from the target environment needed for solving Equation 1, we investigated a number of algorithms, including Bayesian optimization, model-based methods and an evolutionary algorithm (CMA). A detailed description and comparison of these methods are provided in Appendix C. We chose Covariance Matrix Adaptation (CMA) BID14, because it reliably outperforms other methods in terms of sample efficiency. At each iteration of CMA, a set of samples is drawn from a Gaussian distribution over the space of µ. For each sample, we instantiate a strategy π µ and use it to generate rollouts in the target environment. The fitness of the sample is determined by evaluating the rollouts using J M t. Based on the fitness values of the samples in the current iteration, the mean and the covariance matrix of the Gaussian distribution are updated for the next iteration. To evaluate the ability of our method to overcome the reality gap, we train policies for four locomotion control tasks (hopper, walker2d, half cheetah, quadruped robot) and transfer each policy to environments with different dynamics. To mimic the reality gap seen in the real world, we use target environments that are different from the source environments in their contact modeling, latency or actuator modeling. In addition, we also test the ability of our method to generalize to discrepancies in body mass, terrain slope and end-effector materials. FIG0 shows the source and target environments for all the tasks and summarizes the modeled reality gap in each task. During training, we choose different combinations of dynamic parameters to randomize and make sure they do not overlap with the variations in the testing environments. For clarity of exposition, we denote the dimension of the dynamic parameters that are randomized during training as dim(µ). For all examples, we use Proximal Policy Optimization (PPO) to optimize the control policy. A more detailed description of the experiment setup as well as the simulated reality gaps is provided in Appendix B. For each example presented, we run three trials with different random seeds and report the mean and one standard deviation of the total reward. We compare our method, Strategy Optimization with CMA-ES (SO-CMA), to three baseline methods: training a robust policy (Robust), training an adaptive policy (Hist) and training a Universal Policy with Online System Identification (UPOSI) BID38. The robust policy is represented as a feed-forward neural network, which takes as input the most recent observation from the robot, i.e. π robust: o → a. The policy needs to learn actions that work for all the training environments, but the dynamic parameters cannot be identified from its input. In contrast, an adaptive policy is given a history of observations as input, i.e. π adapt: (o t−h, . . ., o t) → a t. This allows the policy to potentially identify the environment being tested and adaptively choose the actions based on the identified environment. There are many possible ways to train an adaptive policy, for example, one can use an LSTM network to represent the policy or use a history of observations as input to a feed-forward network. 
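To make the CMA-based strategy search of the second stage concrete, the following Python sketch shows one way it could be implemented with the pycma package. The environment and policy interfaces (reset, step, act) are placeholders rather than the code used in our experiments, and the sample bookkeeping is simplified.

import cma
import numpy as np

def evaluate_strategy(target_env, policy, mu, n_trials=3, max_steps=1000):
    # Fitness of a candidate mu: average return of the instantiated strategy pi_mu
    # over a few rollouts in the target environment; only total rewards are needed.
    returns = []
    for _ in range(n_trials):
        obs, done, total, steps = target_env.reset(), False, 0.0, 0
        while not done and steps < max_steps:
            act = policy.act(np.concatenate([obs, mu]))   # mu-conditioned policy from stage one
            obs, rew, done, _ = target_env.step(act)
            total += rew
            steps += 1
        returns.append(total)
    return float(np.mean(returns))

def strategy_optimization_cma(target_env, policy, mu_init, sigma_init=0.25, max_iters=20):
    # mu_init: initial guess for the dynamic parameters (e.g. the center of the training range).
    es = cma.CMAEvolutionStrategy(list(mu_init), sigma_init)
    for _ in range(max_iters):
        if es.stop():
            break
        candidates = es.ask()                             # population from the current Gaussian
        fitnesses = [-evaluate_strategy(target_env, policy, np.asarray(mu)) for mu in candidates]
        es.tell(candidates, fitnesses)                    # CMA minimizes, hence the negated return
    return np.asarray(es.result.xbest)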
We find that for the tasks we demonstrate, directly training an LSTM policy using PPO is much less efficient and reaches lower end performance than training a feed-forward network with history input. Therefore, in our experiments we use a feed-forward network with a history of 10 observations to represent the adaptive policy π adapt. We also compare our method to UPOSI, which decouples the learning of an adaptive policy into training a universal policy via reinforcement learning and a system identification model via supervised learning. In theory UPOSI and Hist should achieve similar performance, while in practice we expect UPOSI to learn more efficiently due to the decoupling. We adopt the same training procedure as done by BID38, and use a history of 10 observations as input to the online system identification model. For fair comparison, we continue to train the baseline methods after transferring to the target environment, using the same amount of samples SO-CMA consumes in the target environment. We refer this additional training step as'fine-tuning'. In addition to the baseline methods, we also compare our method to the performance of policies trained directly in the target environments, which serves as an'Oracle' benchmark. The Oracle policies for Hopper, Walke2d, HalfCheetah and Hopper Soft was trained for 1, 000, 000 samples in the target environment as in. For the quadruped example, we run PPO for 5, 000, 000 samples, similar to. We detail the process of'fine-tuning' in Appendix B.4 In the first example, we build a single-legged robot in DART similar to the Hopper environment simulated by MuJoCo in OpenAI Gym BID3. We investigate two questions in this example: 1) does SO-CMA work better than alternative methods in transferring to unknown environments? and 2) how does the choice of dim(µ) affect the performance of policy transfer? To this end, we perform experiments with dim(µ) = 2, 5 and 10. For the experiment with dim(µ) = 2, we randomize the mass of the robot's foot and the restitution coefficient between the foot and the ground. For dim(µ) = 5, we in addition randomize the friction coefficient, the mass of the robot's torso and the joint strength of the robot. We further include the mass of the rest two body parts and the joint damping to construct the randomized dynamic parameters for dim(µ) = 10. The specific ranges of randomization are described in Appendix B.4.We first evaluate how the performance of different methods varies with the number of samples in the target environment. As shown in FIG1, when dim(µ) is low, none of the four methods were able to transfer to the MuJoCo Hopper successfully. This is possibly due to there not being enough variation in the dynamics to learn diverse strategies. When dim(µ) = 5, SO-CMA can successfully transfer the policy to MuJoCo Hopper with good performance, while the baseline methods were not able to adapt to the new environment using the same sample budget. We further increase dim(µ) to 10 as shown in FIG1 (c) and find that SO-CMA achieved similar end performance to dim(µ) = 5, while the baselines do not transfer well to the target environment. We further investigate whether SO-CMA can generalize to differences in joint limits in addition to the discrepancies between DART and MuJoCo. Specifically, we vary the magnitude of the ankle joint limit in [0.5, 1.0] radians (default is 0.785) for the MuJoCo Hopper, and run all the methods with 30, 000 samples. The can be found in FIG2. 
We can see a similar trend that with low dim(µ) the transfer is challenging, and with a higher value of dim(µ) SO-CMA is able to achieve notably better transfer performance than the baseline methods. In this example, we use the lower body of a biped robot constrained to a 2D plane, according to the Walker2d environment in OpenAI Gym. We find that with different initializations of the policy network, training could lead to drastically different gaits, e.g. hopping with both legs, running with one leg dragging the other, normal running, etc. Some of these gaits are more robust to environment changes than others, which makes analyzing the performance of transfer learning algorithms challenging. To make sure the policies are more comparable, we use the symmetry loss from, which leads to all policies learning a symmetric running gait. To mimic modeling error seen on real robots, we add a latency of 8ms to the MuJoCo simulator. We train policies with dim(µ) = 8, for which we randomize the friction coefficient, restitution coefficient and the joint damping of the six joints during training. FIG3 (a) shows the transfer performance of the different methods with respect to the sample numbers in the target environment. We further vary the mass of the robot's right foot (in kg) in the MuJoCo Walker2d environment and compare the transfer performance of SO-CMA to the baselines. The default foot mass is 2.9 kg. We use in total 30,000 samples in the target environment for all methods being compared and the results can be found in FIG3 (b). In both cases, our method achieves notably better performance than Hist and UPOSI, while being comparable to Robust. In the third example, we train policies for the HalfCheetah environment from OpenAI Gym. We again test the performance of transfer from DART to MuJoCo for this example. In addition, we add a latency of 50ms to the target environment. We randomize 11 dynamic parameters in the source environment consisting of the mass of all body parts, the friction coefficient and the restitution coefficient during training, i.e. dim(µ) = 11. The results with respect to sample numbers in the target environment can be found in FIG4 (a). We in addition evaluate transfer to environments where the slope of the ground varies, as shown in FIG4 (b). We can see that SO-CMA outperforms Robust and Hist, while achieving similar performance to UPOSI. As demonstrated by, when a robust policy is used, having an accurate actuator model is important to the successful transfer of a policy from simulation to the real world for a quadruped robot, Minitaur (FIG0). Specifically, they found that when a linear torque-current relation is assumed in the actuator dynamics in the simulation, the policy learned in simulation transfers poorly to the real hardware. When the actuator dynamics is modeled more accurately, in their case using a non-linear torque-current relation, the transfer performance was notably improved. In our experiment, we investigate whether SO-CMA is able to overcome the error in actuator models. We use the same simulation environment from, which is simulated in Bullet. During the training of the policy, we use a linear torque-current relation for the actuator model, and we transfer the learned policy to an environment with the more accurate non-linear torque-current relation. We use the same 25 dynamic parameters and corresponding ranges used by for dynamics randomization during training. 
When applying the robust policy to the accurate actuator model, we observe that the quadruped tends to sink to the ground, similar to what was observed by . SO-CMA, on the other hand, can successfully transfer a policy trained with a crude actuator model to an environment with more realistic actuators (FIG5 (a)). Applying deep reinforcement learning to environments with deformable objects can be computationally inefficient BID5. Being able to transfer a policy trained in a purely rigid-body environment to an environment containing deformable objects can greatly improve the efficiency of learning. In our last example, we transfer a policy trained for the Hopper example with rigid objects only to a Hopper model with a deformable foot (FIG0). The soft foot is modeled using the soft shape in DART, which uses an approximate but relatively efficient way of modeling deformable objects BID17. We train policies in the rigid Hopper environment and randomize the same set of dynamic parameters as in the DART-to-MuJoCo transfer example with dim(µ) = 5. We then transfer the learned policy to the soft Hopper environment where the Hopper's foot is deformable. The results can be found in FIG5 (b). SO-CMA is able to successfully control the robot to move forward without falling, while the baseline methods fail to do so. We have demonstrated that our method, SO-CMA, can successfully transfer policies trained in one environment to a notably different one with a relatively low amount of samples. One advantage of SO-CMA, compared to the baselines, is that it works consistently well across different examples, while none of the baseline methods achieve successful transfer for all the examples. We hypothesize that the large variance in the performance of the baseline methods is due to their sensitivity to the type of task being tested. For example, if there exists a robust controller that works for a large range of different dynamic parameters µ in the task, such as a bipedal running motion in the Walker2d example, training a Robust policy may achieve good performance in transfer. However, when the optimal controller is more sensitive to µ, Robust policies may learn to use overly-conservative strategies, leading to sub-optimal performance (e.g. in HalfCheetah) or failure to perform the task (e.g. in Hopper). On the other hand, if the target environment is not significantly different from the training environments, UPOSI may achieve good performance, as in HalfCheetah. However, as the reality gap becomes larger, the system identification model in UPOSI may fail to produce good estimates and result in non-optimal actions. Furthermore, Hist did not achieve successful transfer in any of the examples, possibly due to two reasons: 1) it shares a similar limitation to UPOSI when the reality gap is large and 2) it is in general more difficult to train Hist due to the larger input space, so that with a limited sample budget it is challenging to fine-tune Hist effectively. We also note that although in some examples certain baseline methods may achieve successful transfer, the fine-tuning process of these methods relies on having a dense reward signal. In practice, one may only have access to a sparse reward signal in the target environment, e.g. distance traveled before falling to the ground. Our method, using an evolutionary algorithm (CMA), naturally handles sparse rewards and thus the performance gap between our method (SO-CMA) and the baseline methods will likely be large if a sparse reward is used. 
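To illustrate the last point, a sparse outcome such as distance traveled before falling can be used directly as the CMA fitness, with no per-step shaping. The sketch below assumes a hypothetical torso_x() accessor on the target environment and the placeholder interfaces from the earlier sketch; it is an illustration, not the evaluation code used in our experiments.

import numpy as np

def sparse_fitness(target_env, policy, mu, max_steps=1000):
    # Score a candidate mu using only the final outcome of the rollout:
    # forward distance traveled before the robot falls or the episode ends.
    obs, done, steps = target_env.reset(), False, 0
    start_x = target_env.torso_x()          # hypothetical accessor, not part of any real API here
    while not done and steps < max_steps:
        act = policy.act(np.concatenate([obs, mu]))
        obs, _, done, _ = target_env.step(act)
        steps += 1
    return target_env.torso_x() - start_x   # negate this value when passing it to CMA, which minimizes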
We have proposed a policy transfer algorithm where we first learn a family of policies simultaneously in a source environment that exhibits different behaviors and then search directly for a policy in the family that performs the best in the target environment. We show that our proposed method can overcome large modeling errors, including those commonly seen on real robotic platforms with relatively low amount of samples in the target environment. These suggest that our method has the potential to transfer policies trained in simulation to real hardware. There are a few interesting directions that merit further investigations. First, it would be interesting to explore other approaches for learning a family of policies that exhibit different behaviors. One such example is the method proposed by BID11, where an agent learns diverse skills without a reward function in an unsupervised manner. Another example is the HCP-I policy proposed by BID4, which learns a latent representation of the environment variations implicitly. Equipping our policy with memories is another interesting direction to investigate. The addition of memory will extend our method to target environments that vary over time. We have investigated in a few options for strategy optimization and found that CMA-ES works well for our examples. However, it would be desired if we can find a way to further reduce the sample required in the target environment. One possible direction is to warm-start the optimization using models learned in simulation, such as the calibration model in or the online system identification model in BID38.A DIFFERENCES BETWEEN DART AND MUJOCO DART BID19 and MuJoCo BID35 are both physically-based simulators that computes how the state of virtual character or robot evolves over time and interacts with other objects in a physical way. Both of them have been demonstrated for transferring controllers learned for a simulated robot to a real hardware, and there has been work trying to transfer policies between DART and MuJoCo BID36. The two simulators are similar in many aspects, for example both of them uses generalized coordinates for representing the state of a robot. Despite the many similarities between DART and MuJoCo, there are a few important differences between them that makes transferring a policy trained in one simulator to the other challenging. For the examples of DART-to-MuJoCo transfer presented in this paper, there are three major differences as described below:1. Contact Handling Contact modeling is important for robotic control applications, especially for locomotion tasks, where robots heavily rely on manipulating contacts between end-effector and the ground to move forward. In DART, contacts are handled by solving a linear complementarity problem (LCP) (Tan et al.), which ensures that in the next timestep, the objects will not penetrate with each other, while satisfying the laws of physics. In MuJoCo, the contact dynamics is modeled using a complementarity-free formulation, which means the objects might penetrate with each other. The ing impulse will increase with the penetration depth and separate the penetrating objects eventually. Similar to the contact solver, DART tries to solve the joint limit constraints exactly so that the joint limit is not violated in the next timestep, while MuJoCo uses a soft constraint formulation, which means the character may violate the joint limit constraint. 
In MuJoCo, a diagonal matrix σI n is added to the joint space inertia matrix that can help stabilize the simulation, where σ ∈ R is a scalar named Armature in MuJoCo and I n is the n × n identity matrix. This is not modeled in DART.To illustrate how much difference these simulator characteristics can lead to, we compare the Hopper example in DART and MuJoCo by simulating both using the same sequence of randomly generated actions from an identical state. We plot the linear position and velocity of the torso and foot of the robot, which is shown in FIG6. We can see that due to the differences in the dynamics, the two simulators would control the robot to reach notably different states even though the initial state and control signals are identical. B EXPERIMENT DETAILS We use Proximal Policy Optimization (PPO) implemented in OpenAI Baselines for training all the policies in our experiments. For simulation in DART, we use DartEnv (Yu BID38, which implements the continuous control benchmarks in OpenAI Gym using PyDart BID12 . For all of our examples, we represent the policy as a feed-forward neural network with three hidden layers, each consists of 64 hidden nodes. The observation space, action space and the reward function used in all of our examples can be found in TAB0 . For the Walker2d environment, we found that with the original environment settings in OpenAI Gym, the robot sometimes learn to hop forward, possibly due to the ankle being too strong. Therefore, we reduce the torque limit of the ankle joint in both DART and MuJoCo environment for the Walker2d problem from [−100, 100] to [−20, 20]. We found that with this modification, we can reliably learn locomotion gaits that are closer to a human running gait. Below we list the dynamic randomization settings used in our experiments. TAB1 and TAB3 shows the range of the randomization for different dynamic parameters in different environments. For the quadruped example, we used the same settings as in. To evaluate the ability of our method to overcome the modeling error, we designed six types of modeling errors. Each example shown in our experiments contains one or more modeling errors listed below. For the Hopper, Walker2d and HalfCheetah example, we trained policies that transfers from DART environment to MuJoCo environment. As discussed in Appendix A, the major differences between DART and MuJoCo are contacts, joint limits and armature. The second type of modeling error we tested is latency in the signals. Specifically, we model the latency between when an observation o is sent out from the robot, and when the action corresponding to this observation a = π(o) is executed on the robot. When a policy is trained without any delay, it is usually very challenging to transfer it to problems with delay added. The value of delay is usually below 50ms and we use 8ms and 50ms in our examples. As noted by, error in actuator modeling is an important factor that contributes to the reality gap. They solved it by identifying a more accurate actuator model by fitting a piece-wise linear function for the torque-current relation. We use their identified actuator model as the ground-truth target environment in our experiments and used the ideal linear torque-current relation in the source environments. In the example of Walker2d, we vary the mass of the right foot on the robot to create a family of target environments for testing. The range of the torso mass varies inkg. 
In the example of HalfCheetah, we vary the slope of the ground to create a family of target environments for testing. This is implemented as rotating the gravity direction by the same angle. The angle varies in the range [−0.18, 0.0] radians. The last type of modeling error we test is that a deformable object in the target environment is modeled as a rigid object in the source environment. The deformable object is modeled using the soft shape object in DART. In our example, we created a deformable box of size 0.5m × 0.19m × 0.13m around the foot of the Hopper. We set the stiffness of the deformable object to be 10, 000 and the damping to be 1.0. We refer readers to BID17 for more details of the softbody simulation. For training policies in the source environment, we run PPO for 500 iterations. In each iteration, we sample 40, 000 steps from the source environment to update the policy. For the rest of the hyperparameters, we use the default value from OpenAI Baselines. We use a large batch size in our experiments as the policy needs to be trained to work on different dynamic parameters µ.For fine-tuning of the Robust and Adaptive policy in the target environment, we sample 2, 000 steps from the target environment at each iteration of PPO, which is the default value used in OpenAI Baselines. Here we use a smaller batch size for two reasons: 1) since the policy is trained to work on only one dynamics, we do not need as many samples to optimize the policy in general and 2) the fine-tuning process has a limited sample budget and thus we want to use a smaller batch size so that the policy can be improved more. In the case where we use a maximum of 50, 000 samples for fine-tuning, this amounts to 50 iterations of PPO updates. Furthermore, we use a maximum rollout length of 1, 000, while the actual length of the rollout collected during training is general shorter due to the early termination, e.g. when the robot falls to the ground. Therefore, with 50, 000 samples in total, the fine-tuning process usually consists of 100 ∼ 300 rollouts, depending on the task. We use the CMA-ES implementation in python by (PyC). At each iteration of CMA-ES, we generate 4 + 3 * log(N) samples from the latest Gaussian distribution, where N is the dimension of the dynamic parameters. During evaluation of each sample µ i, we run the policy π µi in the target environment for three trials and average the returns to obtain the fitness of this sample. In addition to CMA-ES, we have also experimented with a few other options for finding the best µ such that π µ works well in the target environment. Here we show some experiment for Strategy Optimization with Bayesian Optimization (SO-BO) and Model-based Optimization (SO-MB). Bayesian Optimization is a gradient-free optimization method that is known to work well for low dimensional continuous problems where evaluating the quality of each sample can be expensive. The main idea in Bayesian optimization is to incrementally build a Gaussian process (GP) model that estimates the loss of a given search parameter. At each iteration, a new sample is drawn by optimizing an acquisition function on the GP model. The acquisition function takes into account the exploration (search where the GP has low uncertainty) and exploitation (search where the GP predicts low loss). The new sample is then evaluated and added to the training dataset for GP.We test Bayesian Optimization on the Hopper and Quadruped example, as shown in FIG7. 
We can see that Bayesian Optimization can achieve comparable performance as CMA-ES and thus is a viable choice to our problem. However, SO-BA appears in general noisier than CMA-ES and is in general less computationally efficient due to the re-fitting of GP models. Another possible way to perform strategy optimization is to use a model-based method. In a modelbased method, we learn the dynamics of the target environment using generic models such as neural networks, Gaussian process, linear functions, etc. After we have learned a dynamics model, we can use it as an approximation of the target environment to optimize µ.We first tried using feed-forward neural networks to learn the dynamics and optimize µ. However, this method was not able to reliably find µ that lead to good performance. This is possibly due to that any error in the prediction of the states would quickly accumulate over time and lead to inaccurate predictions. In addition, this method would not be able to handle problems where latency is involved. In the experiments presented here, we learn the dynamics of the target environment with a Long Short Term Memory (LSTM) network BID16. Given a target environment, we first sample µ uniformly and collect experience using π µ until we have 5, 000 samples. We use these samples to fit an initial LSTM dynamic model. We then alternate between finding the best dynamic parametersμ such that πμ achieves the best performance under the latest LSTM dynamic model and update the LSTM dynamic model using data generated from πμ. This is repeated until we have reached the sample budget. We found that LSTM notably outperformed feed-forward networks when applied to strategy optimization. One for Hopper DART-to-MuJoCo can be found in FIG8. It can be seen that Model-based method with LSTM is able to achieve similar performance as CMA-ES. Model-based method provides more flexibility over CMA-ES and Bayesian optimization. For example, if the target environment changes over time, it may be desired to have µ also be time-varying. However, this would lead to a high dimensional search space, which might require significantly more samples for CMA-ES or Bayesian Optimization to solve the problem. If we can learn an accurate enough model from the data, we can use it to generate synthetic data for solving the problem. However, there are two major drawbacks for Model-based method. The first is that to learn the dynamics model, we need to have access to the full state of the robot, which can be challenging or troublesome in the real-world. In contrast, CMA-ES and Bayesian optimization only require the final return of a rollout. Second, the Model-based method is significantly slower to run than the other methods due to the frequent training of the LSTM network. | We propose a policy transfer algorithm that can overcome large and challenging discrepancies in the system dynamics such as latency, actuator modeling error, etc. | 1,004 | scitldr |
We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a Gumbel-Softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition. Learning discrete representations of speech has gathered much recent interest . A popular approach to discover discrete units is via autoencoding (; ;) sometimes coupled with an autoregressive model. Another line of research is to learn continuous speech representations in a self-supervised way via predicting context information (; van den ;). In this paper, we combine these two lines of research by learning discrete representations of speech via a context prediction task instead of reconstructing the input. This enables us to directly apply well-performing NLP algorithms to speech data (Figure 1a). Figure 1: (a) The vq-wav2vec encoder maps raw audio (X) to a dense representation (Z) which is quantized (q) to Ẑ and aggregated into context representations (C); training requires future time step prediction. (b) Acoustic models are trained by quantizing the raw audio with vq-wav2vec, then applying BERT to the discretized sequence and feeding the resulting representations into the acoustic model to output transcriptions. Our new discretization algorithm, vq-wav2vec, learns discrete representations of fixed-length segments of the audio signal by utilizing the wav2vec loss and architecture (; §2). To choose the discrete variables, we consider a Gumbel-Softmax approach as well as online k-means clustering, similar to VQ-VAE (; ; §3). We then train a Deep Bidirectional Transformer (BERT; ;) on the discretized unlabeled speech data and input these representations to a standard acoustic model (Figure 1b; §4). Our experiments show that BERT representations perform better than log-mel filterbank inputs as well as dense wav2vec representations on both TIMIT and WSJ benchmarks. Discretization of audio enables the direct application of a whole host of algorithms from the NLP literature to speech data. For example, we show that a standard sequence to sequence model from the NLP literature can be used to perform speech recognition over discrete audio tokens (§5, §6). 2.1 WAV2VEC wav2vec learns representations of audio data by solving a self-supervised context-prediction task with the same loss function as word2vec (; van den). The model is based on two convolutional neural networks where the encoder produces a representation z i for each time step i at a rate of 100 Hz and the aggregator combines multiple encoder time steps into a new representation c i for each time step i. Given an aggregated representation c i, the model is trained to distinguish a sample z i+k that is k steps in the future from distractor samples z̃ drawn from a distribution p n, by minimizing the contrastive loss for steps k = 1, ..., K: L_k = − Σ_{i=1}^{T−k} ( log σ(z_{i+k}ᵀ h_k(c_i)) + λ E_{z̃∼p_n}[ log σ(−z̃ᵀ h_k(c_i)) ] ), where T is the sequence length, σ(x) = 1/(1 + exp(−x)), and where σ(z_{i+k}ᵀ h_k(c_i)) is the probability of z i+k being the true sample. We consider a step-specific affine transformation h_k(c_i) = W_k c_i + b_k (et al., 2018). We optimize the loss L = Σ_{k=1}^{K} L_k, summing over different step sizes. After training, the representations produced by the context network c i are input to the acoustic model instead of log-mel filterbank features. 
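As an illustrative sketch of the contrastive objective above (in PyTorch), one step size k could be computed as follows; the negative-sampling scheme and the weighting λ are simplified here, so this is not the exact implementation.

import torch
import torch.nn.functional as F

def contrastive_loss_step_k(z, c, step_proj, k, n_negatives=10):
    # z: (T, d) dense encoder outputs, c: (T, d) aggregator outputs,
    # step_proj: the affine map h_k, e.g. torch.nn.Linear(d, d).
    # Positive pairs are (c_i, z_{i+k}); negatives are sampled from other time steps of z.
    T = z.size(0)
    preds = step_proj(c[: T - k])                       # h_k(c_i) for each i
    pos = torch.sum(preds * z[k:], dim=-1)              # scores of the true future samples
    pos_loss = -F.logsigmoid(pos).sum()
    neg_idx = torch.randint(0, T, (T - k, n_negatives))
    negs = z[neg_idx]                                   # (T-k, n_negatives, d) distractors
    neg = torch.einsum("td,tnd->tn", preds, negs)
    neg_loss = -F.logsigmoid(-neg).sum()                # distractors should score low
    return pos_loss + neg_loss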
BERT is a pre-training approach for NLP tasks, which uses a transformer encoder model to build a representation of text. Transformers use self-attention to encode the input sequence as well as an optional source sequence . The original BERT model combined two tasks for training: first, masked language modeling randomly removes some of the input tokens and the model has to predict those missing tokens. Second, next sentence prediction splices two different text passages together into a single example and the model needs to predict whether the passages are from the same document. Our approach, vq-wav2vec, learns vector quantized (VQ) representations of audio data using a future time-step prediction task. We follow the same architectural choices as wav2vec (§2.1) with two convolutional networks f: X → Z and g: Ẑ → C for feature extraction and aggregation, as well as a new quantization module q: Z → Ẑ to build discrete representations (Figure 1a). We first map 30ms segments of raw speech to a dense feature representation z at a stride of 10ms using the encoder network f. Next, the quantizer (q) turns these dense representations into discrete indices which are mapped to a reconstruction ẑ of the original representation z. We feed ẑ into the aggregator g and optimize the same context prediction task as wav2vec outlined in §2.1. The quantization module replaces the original representation z by ẑ = e i from a fixed-size codebook e ∈ R V ×d which contains V representations of size d. We consider the Gumbel-Softmax, which is a differentiable approximation of the argmax for computing one-hot representations (§3.1; Figure 2a), as well as online k-means clustering, similar to the vector quantized variational autoencoder (VQ-VAE; ; §3.2; Figure 2b). Finally, we perform multiple vector quantizations over different parts of z to mitigate mode collapse (§3.3). The Gumbel-Softmax (; ;) enables selecting discrete codebook variables in a fully differentiable way and we use the straight-through estimator of. Given the dense representation z, we apply a linear layer, followed by a ReLU and another linear layer which outputs l ∈ R V logits for the Gumbel-Softmax. At inference, we simply pick the largest index in l. At training, the output probabilities for choosing the j-th variable are p_j = exp((l_j + v_j)/τ) / Σ_{k=1}^{V} exp((l_k + v_k)/τ), where v = − log(− log(u)) and u are uniform samples from U(0, 1). During the forward pass, i = argmax_j p_j and in the backward pass, the true gradient of the Gumbel-Softmax outputs is used. The vector quantization approach of van den is an alternative to making the index selection procedure fully differentiable. Different from their setup, we optimize a future time step prediction loss instead of the reconstruction loss of an autoencoder. We choose the codebook variable representation by finding the closest variable to the input features z in terms of the Euclidean distance, yielding i = argmin_j ||z − e_j||_2^2. During the forward pass, we select ẑ = e i by choosing the corresponding variable from the codebook. We obtain gradients for the encoder network by back-propagating dL_wav2vec/dẑ (van den). The final loss has two additional terms: L = Σ_{k=1}^{K} L_k + ||sg(z) − ẑ||^2 + γ ||z − sg(ẑ)||^2, where sg(x) ≡ x, d/dx sg(x) ≡ 0 is the stop-gradient operator and γ is a hyper-parameter. The first term is the future prediction task and gradients do not change the codebook because of the straight-through gradient estimation of mapping z to ẑ. 
The second term ||sg(z) − ẑ||^2 moves the codebook vectors closer to the encoder output, and the third term ||z − sg(ẑ)||^2 makes sure that the encoder outputs are close to a centroid (codeword). So far, we considered replacing the encoder feature vector z by a single entry e i in the codebook. This is prone to mode collapse where only some of the codewords are actually used. Previously, this problem has been mitigated by workarounds such as re-initializing codewords or applying additional regularizers to the loss function . In the following, we describe another strategy where we independently quantize partitions of z, similar to product quantization . This results in larger dictionaries and increased downstream performance (Appendix A). The dense feature vector z ∈ R d is first organized into G groups in the matrix form z ∈ R G×(d/G). We then represent each row by an integer index, and hence can represent the full feature vector by the indices i ∈ [V] G, where V again denotes the possible number of variables for this particular group and each element i j corresponds to a fixed codebook vector. For each of the G groups, we apply either one of the two VQ approaches (§3.1 and §3.2). The codebook itself can be initialized in two possible ways: codebook variables can be shared across groups, i.e., a particular index in one group would reference the same vector as the same index in another group. This yields a codebook e ∈ R V ×(d/G). In contrast, not sharing the codebook variables yields a codebook of size e ∈ R V ×G×(d/G). In practice, we observe that sharing the codebook variables generally yields results competitive with a non-shared representation. Once we have trained a vq-wav2vec model, we can discretize audio data and make it applicable to algorithms that require discrete inputs. One possibility is to use the discretized training data and apply BERT pre-training where the task is to predict masked input tokens based on an encoding of the surrounding context . Once the BERT model is trained, we can use it to build representations and feed them into an acoustic model to improve speech recognition. We follow recent advances in BERT training which only use the masked input token prediction. Since each of the discretized tokens represents around 10 ms of audio it is likely too easy to predict a single masked input token. We therefore change BERT training by masking spans of consecutive discretized speech tokens, similar to. To mask the input sequence, we randomly sample p = 0.05 of all tokens to be a starting index, without replacement, and mask M = 10 consecutive tokens from every sampled index; spans may overlap. This makes the masked token prediction harder and we show later that it improves accuracy over masking individual tokens (§6.5). We generally pre-train vq-wav2vec and BERT on the full 960h of Librispeech and after vq-wav2vec training it is discretized to 345M tokens. Where indicated we perform ablations on a clean 100h subset which is discretized to 36M tokens. We evaluate models on two benchmarks: TIMIT (b) is a 5h dataset with phoneme labels and Wall Street Journal (WSJ; Garofolo et al. 1993a) is an 81h dataset for speech recognition. For TIMIT, we apply the standard evaluation protocol and consider 39 different phonemes. For WSJ, we train acoustic models directly on 31 graphemes, including the English alphabet, the apostrophe, the silence token and tokens for repeating characters. We adapt the fairseq implementation of wav2vec (; and use vq-wav2vec/wav2vec models with 34 × 10^6 parameters. 
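A minimal PyTorch sketch of the Gumbel-Softmax quantizer of §3.1 is given below for a single group; the layer sizes follow the description above, but the codebook initialization and the handling of multiple groups are simplifications rather than the reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelQuantizer(nn.Module):
    def __init__(self, dim, n_vars):
        super().__init__()
        # linear -> ReLU -> linear producing V logits, as described above
        self.to_logits = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_vars))
        self.codebook = nn.Parameter(torch.randn(n_vars, dim))

    def forward(self, z, tau=0.5):
        logits = self.to_logits(z)                                    # (T, V)
        if self.training:
            # hard one-hot in the forward pass, soft Gumbel-Softmax gradients in the backward pass
            one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)
        else:
            one_hot = F.one_hot(logits.argmax(dim=-1), logits.size(-1)).float()
        return one_hot @ self.codebook                                 # quantized representation z_hat

# For G groups, z would be split into G partitions of size d/G and a (possibly shared)
# codebook applied to each partition independently.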
The encoder has 8 layers with 512 channels each, kernel sizes and strides, yielding a total stride of 160. Each layer contains a convolution, followed by dropout, group normalization with a single group and a ReLU non-linearity. The aggregator is composed of 12 layers, with 512 channels, stride 1, and kernel sizes starting at 2 and increasing by 1 for every subsequent layer. The block structure is the same as for the encoder network, except we introduce skip connections between each subsequent block. We train with the wav2vec context prediction loss (Equation 1) for 400k updates, predicting K = 8 steps into the future and sample 10 negatives from the same audio example. Training is warmed up for 500 steps where the learning rate is increased from 1 × 10 −7 to 5 × 10 −3, and then annealed to 1e-06 using a cosine schedule . The batch size is 10, and we crop a random section of 150,000 frames for each example (approximately 9.3 seconds for 16kHz sampling rate). All models are trained on 8 GPUs. For ablations and experiments on the 100h Librispeech subset, we use a smaller model with kernels and strides in the encoder and seven convolutional layers with stride one and kernel size three in the aggregator. This model is trained for 40k updates. Gumbel-Softmax Models. We use G = 2 groups and V = 320 latents per group and the linear layer projects the features produced by the encoder into G · V = 640 logits. The Gumbel-Softmax produces a one-hot vector for each group G. The temperature τ is linearly annealed from 2 to 0.5 over the first 70% of updates and then kept constant at 0.5. This enables the model to learn which latents work best for each input before committing to a single latent. After training this model on 960h of Librispeech and quantizing the training dataset, we are left with 13.5k unique codewords combinations (out of V G = 102k possible codewords). k-means Models. We use G = 2 groups and V = 320 variables per group. vq-wav2vec on full Librispeech yields 23k unique codewords. Following van den , we found γ = 0.25 to be a robust choice for balancing the VQ auxiliary loss. BERT base models have 12 layers, model dimension 768, inner dimension (FFN) 3072 and 12 attention heads . The learning rate is warmed up over the first 10,000 updates to a peak value of 1 × 10 −5, and then linearly decayed over a total of 250k updates. We train on 128 GPUs with a batch size of 3072 tokens per GPU giving a total batch size of 393k tokens . Each token represents 10ms of audio data. BERT small. For ablations we use a smaller setup with model dimension 512, FFN size 2048, 8 attention heads and dropout 0.05. Models are trained for 250k updates with a batch size of 2 examples per GPU. We use wav2letter as accoustic model (; and train for 1k epochs on 8 GPUs for both TIMIT and WSJ using the auto segmentation criterion. For decoding the emissions from the acoustic model on WSJ we use a lexicon as well as a separate language model trained on the WSJ language modeling data only. We consider a 4-gram KenLM language model and a character based convolutional language model and tune the models with the same protocol as. We first evaluate on the WSJ speech recognition benchmark. We train a vq-wav2vec model on the unlabeled version of Librispeech, then discretize the same data with the ing model to estimate a BERT model. Finally, we train a wav2letter acoustic model on WSJ by inputting either the BERT or vq-wav2vec representations instead of log-mel filterbanks. 
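The encoder block structure described above can be sketched as follows; the specific kernel sizes and strides listed below are assumptions chosen so that the product of the strides matches the stated total stride of 160, and the dropout rate is a placeholder.

import torch.nn as nn

def conv_block(in_ch, out_ch, kernel, stride, p_drop=0.1):
    # one encoder block: convolution, dropout, group normalization with a single group, ReLU
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=kernel, stride=stride),
        nn.Dropout(p=p_drop),
        nn.GroupNorm(1, out_ch),
        nn.ReLU(),
    )

# assumed wav2vec-style kernel sizes and strides; 5 * 4 * 2 * 2 * 2 * 1 * 1 * 1 = 160
encoder = nn.Sequential(
    conv_block(1, 512, kernel=10, stride=5),
    conv_block(512, 512, kernel=8, stride=4),
    conv_block(512, 512, kernel=4, stride=2),
    conv_block(512, 512, kernel=4, stride=2),
    conv_block(512, 512, kernel=4, stride=2),
    conv_block(512, 512, kernel=1, stride=1),
    conv_block(512, 512, kernel=1, stride=1),
    conv_block(512, 512, kernel=1, stride=1),
)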
We compare to various results from the literature, including wav2vec, and we consider three setups: performance without any language model (No LM), with an n-gram LM (4-gram LM) and with a character convolutional LM (Char ConvLM). We report the accuracy of wav2letter with log-mel filterbanks as input (Baseline) and wav2vec. For vq-wav2vec we first experiment with the Gumbel-Softmax, with and without a BERT base model (§5.3). Table 1 shows that vq-wav2vec together with BERT training can achieve a new state of the art of 2.34 WER on nov92. Gains are largest when no language model is used, which is the fastest setting. vq-wav2vec with Gumbel-Softmax uses only 13.5k distinct codewords to represent the audio signal and this limited set of codewords is not sufficient to outperform the baseline. However, it does enable training BERT models which require a relatively small vocabulary. Next, we compare Gumbel-Softmax to k-means for vector quantization. For this experiment we use the faster-to-train BERT small configuration (§5.3). We also train a vq-wav2vec k-means model with a very large number of codewords (39m) to test whether a more expressive model can close the gap to wav2vec. Table 2 shows that Gumbel-Softmax and k-means clustering perform relatively comparably: in the no language model setup without BERT, Gumbel-Softmax is more accurate than k-means but these differences disappear with BERT. For the 4-gram LM setup, k-means is better but those differences disappear again after BERT training. Finally, the large codeword model can substantially reduce the gap to the original wav2vec model. Next, we experiment on the much smaller TIMIT phoneme recognition task where we also pre-train vq-wav2vec on the full Librispeech corpus. Table 3 reports TIMIT phoneme recognition in terms of phoneme error rate (PER; dev/test): CNN + TD-filterbanks 15.6/18.0; Li-GRU + fMLLR –/14.9; wav2vec 12.9/14.7; Baseline (log-mel) 16.9/17.6; vq-wav2vec gumbel 15.34/17.78 and + BERT small 9.64/11.64; vq-wav2vec k-means 15.65/18.73 and + BERT small 9.80/11.40; all our models use the CNN-8L-PReLU-do0.7 architecture. Table 3 shows that vq-wav2vec and BERT achieve a new state of the art of 11.67 PER which corresponds to a 21% reduction in error over the previous best result of wav2vec. So far we used vq-wav2vec to train BERT on discretized speech. However, once the audio is discretized we can also train a standard sequence to sequence model to perform speech recognition. In preliminary experiments, we trained an off-the-shelf Big Transformer (; on the vq-wav2vec Gumbel-Softmax discretized Librispeech corpus and evaluated on the Librispeech dev/test sets; we use a 4k BPE output vocabulary . Table 4 (Librispeech results for a standard sequence to sequence model trained on discretized audio without BERT pre-training, together with results from the literature; all results are without a language model) shows that the results are promising, even though they are not as good as the state of the art, which depends on data augmentation that we do not use. Next, we investigate how well vq-wav2vec can compress the audio data. Specifically, we train models with different numbers of groups G and variables V to vary the possible codebook size V^G and measure accuracy on TIMIT phoneme recognition without BERT training. We measure compression with the bitrate r · G · log2 V at sampling rate r = 100Hz and report the tradeoff between bitrate and accuracy on our phoneme recognition task. 
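For reference, the bitrate formula just given can be computed directly; a trivial sketch:

import math

def bitrate_kbps(groups, variables, rate_hz=100):
    # bitrate of the discretized stream: r * G * log2(V) bits per second, reported in kbit/s
    return rate_hz * groups * math.log2(variables) / 1000.0

# e.g. bitrate_kbps(1, 40) ~= 0.53 and bitrate_kbps(32, 1280) ~= 33.03,
# matching the range quoted for the k-means models below.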
We experiment with vq-wav2vec k-means and train models with 1,2,4,8,16 and 32 groups, using 40,80,160,...,1280 variables, spanning a bitrate range from 0.53 kbit/s (G = 1, V = 40) to 33.03 kbit/s (G = 32, V = 1280). We place the quantization module after the aggregator module and train all models in the small vq-wav2vec setup (§5.2) on the 100h clean Librispeech subset. As baselines, we consider various lossy compression algorithms applied to the TIMIT audio data and train wav2vec models on the ing audio: Codec2 3 as a low bitrate codec, Opus as a medium bitrate codec and MP3 and Ogg Vorbis as high bitrate codecs. We use the whole spectrum of both variable and constant bitrate settings of the codecs; we encode and decode with ffmpeg (ffmpeg developers, 2016). Figure 3 shows the trade-off between the bitrate and TIMIT accuracy. Acoustic models on vq-wav2vec achieve the best across most bitrate settings. Table 5a shows that masking entire spans of tokens performs significantly better than individual tokens (M = 1). Furthermore, BERT training on discretized audio data is fairly robust to masking large parts of the input (Table 5b). vq-wav2vec is a self-supervised algorithm that quantizes unlabeled audio data which makes it amenable to algorithms requiring discrete data. This approach improves the state of the art on the WSJ and TIMIT benchmarks by leveraging BERT pre-training. In future work, we plan to apply other algorithms requiring discrete inputs to audio data and to explore self-supervised pre-training algorithms which mask part of the continuous audio input. Another future work avenue is to finetune the pre-trained model to output transcriptions instead of feeding the pre-trained features to a custom ASR model. We investigate the relationship between number of variables V and groups G. Table 6 shows that multiple groups are beneficial compared to a single group with a large number of variables. Table 7 shows that with a single group and many variables, only a small number of codewords survive. Table 6: PER on TIMIT dev set for vq-wav2vec models trained on Libri100. Results are based on three random seeds. | Learn how to quantize speech signal and apply algorithms requiring discrete inputs to audio data such as BERT. | 1,005 | scitldr |
Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision-making problems in complex environments. However, many difficult multi-agent competitive games, especially real-time strategy games, are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this (OpenAI, 2017; Vinyals et al., 2017). Moreover, when the opponents in a competitive game are suboptimal, the current Nash Equilibrium-seeking self-play algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own. This suggests that a learning algorithm beyond conventional self-play is necessary. We develop Hierarchical Agent with Self-play (HASP), a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies we get from Counter Self-Play (CSP). We demonstrate that the ensemble policy generated by HASP can achieve better performance while facing unseen opponents that use sub-optimal policies. On a motivating iterated Rock-Paper-Scissors game and a partially observable real-time strategic game (http://generals.io/), we are led to the conclusion that HASP can perform better than conventional self-play, and it achieves a 77% win rate against FloBot, an open-source agent which has ranked at position number 2 on the online leaderboards. Deep reinforcement learning (RL) has achieved significant success on many complex sequential decision-making problems. Most of the problems are in the robotics domain or video games BID16. However, complex real-time strategic competitive games still pose a strong challenge to current deep reinforcement learning methods due to the required ability to handle long-term scheduling, partial observability and multi-agent collaboration/competition BID28 BID20. Competitive games such as Go, in which each player optimizes their own interests by finding the best response to opponents' strategies, are usually studied mainly with a focus on finding the Nash Equilibrium solutions BID20, namely a combination of players' strategies upon which neither player can obtain higher rewards by modifying their strategy unilaterally BID23. However, in the real world, opponents can have a variety of strengths and play styles and do not always adopt the equilibrium solutions. In fact, human players are often remarkably good at analyzing strategies, tendencies, and flaws in opponents' behavior and then exploiting the opponents even if the resulting exploiting strategies themselves are subject to exploitation. Exploitation is a central component of sports and competitive games. This is also applicable in other real-world competitive domains, including airport and network security, financial and energy trading, traffic control, routing, etc. Therefore, exploring game-playing strategies that intentionally avoid the equilibrium solution and instead "learn to exploit" is a promising research direction toward more capable, adaptable, and ultimately more human-like artificial agents. Hence, we develop a new algorithm, Hierarchical Agent with Self-Play (HASP), that learns to exploit the suboptimality of opponents in order to learn a wider variety of behaviors more in line with what humans might choose to display. 
In this work, we focus on two-player, symmetric, extensive form games of imperfect information, though generalization to more players and asymmetric games is feasible and relatively straightforward. First, we adopt recent Proximal Policy Optimization (PPO) methods in deep reinforcement learning (RL), which have been successful at handling complex games BID16 BID24 BID17 and many other fields BID20. Second, we aim to automatically acquire a strong strategy that generalizes against opponents that we have not seen in training. Here we use self-play to gradually acquire more and more complex behaviors. This technique has proven successful at solving backgammon BID27, the game of Go, imperfect information games such as Poker BID8 BID9, continuous control BID0, and modern video games BID20. In this paper, we investigate a new method for learning strong policies on multi-player games. We introduce Hierarchical Agent with Self-Play, our hierarchical learning algorithm that automatically learns several diverse, exploitable policies and combines them into an ensemble model that draws on the experience of the sub-policies to respond appropriately to different opponents. Then, we show the results of some experiments on two multiplayer games: iterated Rock-Paper-Scissors and a partially observable real-time strategy game based on a popular online game generals.io (http://generals.io/). We show that compared to conventional self-play, our algorithm learns a more diverse set of strategies and obtains higher rewards against test opponents of different skill levels. Remarkably, it can achieve a 77% win rate against FloBot, the strongest open-sourced scripted bot on the generals.io online leaderboard. Many real world multi-agent problems can be described by Markov games BID12. A Markov game of N players at time step t has a full state s_t and assigns each player i (1 ≤ i ≤ N) an observation o_{t,i}. Then player i samples an action a_{t,i} ~ π_i(· | o_{t,i}), where π_i maps observations in O_i to distributions over actions in A_i, and O_i and A_i are the observation and action spaces. Given all players' actions, the environment transits to a new state s_{t+1} and sends a reward r_{t,i} to player i. The goal for each player i is to maximize the expected total discounted reward J_i(π_1, ..., π_N) = E[Σ_{t≥0} γ^t r_{t,i}]. A typical characterization of optimal strategies / policies is a Nash equilibrium (π*_1, ..., π*_N), i.e., J_i(π*_1, ..., π*_i, ..., π*_N) ≥ J_i(π*_1, ..., π_i, ..., π*_N) for any other π_i. Finding equilibrium strategies can be very challenging, even for single-step (a.k.a. normal form) games BID23. Most studies focus on either fully cooperative (r_{t,i} = r_{t,j} for all i, j) or two-player, zero-sum (N = 2, r_{t,1} = −r_{t,2}) competitive cases. Methods for cooperative games include optimistic and hysteretic Q learning BID10 BID14 BID19, and more recently centralized critics with decentralized actors BID6 BID13. BID7 also shows the ability to find exploiting strategies; however, it is evaluated only on relatively simple games and can potentially be combined with our method for better performance. BID21 and BID4 provide comprehensive surveys on this topic. For two-player, zero-sum games, though small problems can be solved by linear programming, more complex (and symmetric) ones usually require self-play and some form of learning. Self-play, namely training the agent against itself, is a powerful technique to bootstrap from an initially random agent. Classic game-theoretic techniques often offer provable convergence to Nash equilibrium, and some also achieve success when combined with deep reinforcement learning.
Such methods include fictitious play BID2 BID8, counterfactual regret minimization BID29 BID18 BID3, replicator dynamics BID26, double oracle BID15 BID9, and so on. The combination of deep learning, planning, and self-play led to the famous Go-playing agents AlphaGo and AlphaZero. Most recently, self-play achieved success on full five versus five Dota 2, beating a team of 99.95th percentile Dota players. Games are often popular environments for reinforcement learning research, and recently there has been heavy interest BID20 BID28 in real-time strategy (RTS) games. These games prove difficult for current methods due to the importance of long-term decisions as well as large action spaces. We propose generals.io (Generals) as an interesting and economical research environment that has many of the same challenges as Dota 2 and Starcraft II (SC2) while being very fast to simulate and having a sizable community of players online to evaluate against. Fig. 2 (a): One action in the Generals game. An agent selects a grid and then moves to one of the 4 adjacent grids. After each action, one army is left in the original grid and all remaining army is moved to the new grid. Fig. 2 (b): The army aggregation scheme in the game. Each turn, captured city and general grids generate one more army. Every 50 turns, all plain grids gain 1 army. Generals games in FIG0 take place on a rectangular grid. There can be anywhere from two to eight players at a time, although in this work we only consider the case of two players. Each player gets their own color, as well as a "General", and they cannot see any tile that is not adjacent to a tile they own. Moves happen every 0.5 seconds, and consist of each player moving some army from one tile to an adjacent tile. Players choose which grid to move from and can move their army freely between tiles that they own; if a player moves its army into a tile that is not their own color, the amount of army they are moving is subtracted from that tile, and if the total becomes negative, the player takes ownership. Fig. 2a demonstrates an example of how to move the army. There are also cities scattered throughout the map, which have a high cost to conquer. The army aggregation mechanism is described in Fig. 2b. Every other turn, each player gets one army on their general as well as on every city they own. Every 50 turns, each player also gets an additional army on each tile they own. The goal of the game is to take ownership of the opponent's general. The fog-of-war in Generals means that agents have to actively learn to seek out information. Agents also need to learn how to prevent themselves from being revealed, as well as learn to defend against incoming attacks, know when to strike, and manage their own resources by expanding their territory and investing in cities. Players online show a variety of strategies, including rushing to their opponent in the early game, staying small and hiding while investing in cities, or expanding and slowly suffocating their opponents by gaining slightly more resources than they have. Using Generals also has several benefits compared to SC2. In StarCraft II, a bot can easily have access to additional information that a human player cannot know, which makes it hard to evaluate the quality of a game agent. Another issue with SC2 is that in many games the bot can cheat by having an extremely short reaction time which is impossible for even professional human players to achieve.
Moreover, strategic games are usually very slow to simulate due to the large amount of information involved. However, Generals has a simple game board that is provided as a raw image (12x12) to both the agent and human players, as well as a small delay between turns that should lessen the effect of reaction time on the games. With our Python version of Generals that interfaces with OpenAI Gym BID1, we can simulate over 1,000 frames per second on a single GPU. This platform is considered more flexible and more transparent for evaluation. There are also resources available on the developer's website (http://dev.generals.io/), including an API to let an agent play online versus other bots or humans. They also have a collection of over 200,000 replays available, as well as a GitHub repository with parsing utilities and open-source bots. In this work, we have a high-level policy and a set of sub-policies that together form a hierarchically structured agent. We learn a series of sub-policies, each of which can exploit a certain style of play in the game, via self-play. The sub-policies must be strong because they are to be used by the high-level policy as components. Moreover, the sub-policies must be diverse enough so that no single sub-policy dominates the rest. We then train the high-level policy to choose from the sub-policies conditioned on observations. For both the sub-policies and the high-level policy in Generals, the observation includes a 12 by 12 map. In Section 3.1, we show the detailed architecture as well as the PPO algorithm we use. In Section 3.2, we show how we achieve diverse and robust strategies that can exploit other strategies by using Counter Self-Play. In Section 3.3, we show how to train the high-level policy. We adopt the Proximal Policy Optimization algorithm in our training framework. PPO uses a surrogate objective which is maximized while penalizing large changes to the policy. The algorithm alternates between sampling trajectories from the policy and performing SGD on the sampled dataset to optimize this surrogate objective. Although the algorithm is not new, we need to come up with a specific network architecture to learn the policy and the value function. The policy network is a fully convolutional network. The rationale behind this choice of architecture is that in real-time strategy games, multiple similar events happen at different spatial locations. The fully convolutional architecture has the capability of maintaining translation invariance and of generalizing similar events to different locations. We also share the initial 3 layers of parameters between the value network and the policy network. In this section, we describe how to achieve diverse and robust sub-policies by applying self-play. Learning a robust sub-policy via vanilla self-play requires occasional success from performing a series of random actions. But due to the large state space and action space, it is too challenging to win a game of Generals by chance on a large map. To alleviate this sparse signal problem in Generals, there are several options, such as engineering a dense reward for each step the agent takes, imitating from demonstration replays as a warm start, or building an exploration curriculum by adjusting the size of the map. We adopt the second method by imitating from demonstrations as a warm start. This method helps the sub-policy overcome the exploration problem, but it is still only rarely able to win against decent agents.
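A minimal sketch of the fully convolutional policy/value architecture described above, assuming a 12x12 multi-channel observation; channel counts, the per-cell action parameterization and head shapes are our own assumptions, not taken from the paper:

```python
# Hypothetical sketch of a fully convolutional policy/value network for a
# 12x12 Generals-style board. Channel counts and head shapes are assumptions.
import torch
import torch.nn as nn

class ConvPolicyValueNet(nn.Module):
    def __init__(self, in_channels=8, hidden=64, num_directions=4):
        super().__init__()
        # The first three conv layers are shared between the policy and value heads,
        # mirroring the parameter sharing described in the text.
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Policy head: one logit per (cell, direction), preserving translation invariance.
        self.policy_head = nn.Conv2d(hidden, num_directions, 1)
        # Value head: global average pool followed by a linear layer.
        self.value_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(hidden, 1)
        )

    def forward(self, obs):                       # obs: (B, C, 12, 12)
        features = self.shared(obs)
        logits = self.policy_head(features)       # (B, 4, 12, 12) per-cell move logits
        return logits.flatten(1), self.value_head(features).squeeze(-1)
```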
A second problem is that if we start training a self-play agent, it will converge quickly to a local optimum and stop generating diverse sub-policies, which makes it unusable for our high-level policy. In order to get a larger number of diverse strategies, we run conventional self-play with an additional "achievement reward" added as a bonus for the initialization. However, we only use this to seed our method; after we get the initial policy, we give all the agents only the true rewards provided by the game, so that the agent only wants to win, and to win fast. During Counter Self-Play, we only train against the most recent self to exploit it, instead of training against a set of previous selves as in BID0. We also reinitialize the agent's policy each time we save our current sub-policy and start a new one. This way, each agent has a fixed number of training iterations and the main difference between the sub-policies will be "style", not "strength". The current agent becomes the counter strategy to the previous agent. After we have enough sub-policies, we can stop training and start learning the high-level policy. In previous work BID5, there is another way to generate diverse policies by maximizing an information theoretic objective. However, the sub-policies generated with our method have meaning in that they are best-responses to different opponents, rather than just being diverse in the sense that they reach different states in state-space. In this section, we describe how we train a high-level policy based on the sub-policies we have. We initialize a parameterized high-level policy with random parameters. Then we train this agent with self-play as well. At each step in the game, the high-level policy takes the observation from the map as input and decides which sub-policy to use. Different from prior work, we fix the sub-policies while training the high-level policy. After choosing, the sub-policy is fed the same observation and executes an action based on it for a single step. On simple games such as iterated Rock-Paper-Scissors, the high-level policy can be computed once by first finding the pairwise expected payoffs of the sub-policies, and then solving a matrix game with Linear Programming, as is the standard method in game theory. We test HASP and baseline methods on two discrete two-player competitive games: iterated Rock-Paper-Scissors (RPS) and Generals.¹ We want to answer the following questions with our experiments: 1. What types of sub-policies does our algorithm find? 2. Are the sub-policies from Counter Self-Play meaningfully diverse to be used by the high-level policy? 3. Can HASP lead to a stronger and more robust agent against opponents that are not seen at training time compared with conventional self-play? For one baseline, we use conventional self-play, which is very similar to how we obtain our sub-policies. To have a fair comparison with our method, we also give the self-play agent the imitation warm start stage described in Section 3.2. We also compare with a baseline that randomly chooses from the time-averaged past sub-policies from Counter Self-Play (CSP), to see how much improvement we get by training a high-level policy instead of just choosing a sub-policy randomly. Iterated RPS is formulated simply as Rock-Paper-Scissors repeated over multiple turns. We consider two turns for illustration purposes. An agent gets a reward of +1 for winning more turns than the opponent, -1 for fewer, and 0 for an equal number of turns.
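A compact sketch of the Counter Self-Play loop described above; the function names (train_ppo, init_policy) and the fixed iteration budget are hypothetical placeholders standing in for the paper's actual training setup:

```python
# Hypothetical Counter Self-Play (CSP) loop: each new agent is reinitialized,
# trained only against the most recent previous agent with the true game reward,
# and then added to the pool of sub-policies.
def counter_self_play(seed_policy, num_sub_policies, iters_per_policy):
    sub_policies = []
    opponent = seed_policy                  # seeded with the achievement-reward agent
    for _ in range(num_sub_policies):
        agent = init_policy()               # reinitialize so each agent has the same budget
        for _ in range(iters_per_policy):
            # Train against the most recent opponent so the new agent becomes a
            # counter-strategy to the previous one ("style", not "strength").
            agent = train_ppo(agent, opponent)
        sub_policies.append(agent)
        opponent = agent                    # the counter becomes the next target
    return sub_policies
```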
Though the game has an obvious equilibrium strategy (uniformly random at any turn), sub-optimal players may be inclined towards specific patterns. For example, some players take the counter action in turn 2 against the opponent's action in turn 1 (the "counter" strategy), under the assumption that the opponent does not change his action ("repeat"). Some may reason further and assume the opponent is using "counter", and so choose the counter to that ("counter2") instead. (Footnote 1: Videos, code and more information will be available at https://sites.google.com/view/hasp/home.) Table 2: Performance on iterated RPS. R = rock_only, P = paper_only, S = scissors_only, c = "counter", ci = "counter i"; min denotes the minimum performance (win rate) along a row. The reasoning continues until it loops back to "counter6" = "repeat". If our algorithm is successful and the initial policy is "repeat", it should autonomously discover all these sub-policies in the strategy space. Running phase 1 of HASP here results in ten different policies. We provide a t-SNE projection of the learned policies onto the 2D plane below. Since the action space of Rock-Paper-Scissors is only three dimensional, we use the Linear Programming (LP) method to find a Nash-equilibrium mixed strategy where the moves consist of the ten learned sub-policies. The resulting policy samples a sub-policy with the probabilities shown in Table 1. In addition, we trained a baseline agent using conventional self-play. We found that using PPO to learn stochastic policies is quite hard, since most best-responses are pure strategies, and when our agent reached low entropy, it would stop exploring, even if that policy was no longer good. In general, policies learned under self-play were close to deterministic and reasonable at exploiting a large number of different policies. However, we found that they were highly exploitable. Table 2 shows the performance of the different methods against some scripted test policies. Notice that the policy learned under HASP is less exploitable and therefore closer to an equilibrium policy than the different self-play runs. Note that randomly choosing a sub-policy also achieves a low exploitability, although not as low as with HASP. For all of our Generals experiments, we initialize our agent with behavioral cloning on an open-sourced agent called FloBot. After behavioral cloning, our agent gets only an average of -1.7 reward against FloBot and is only rarely able to win. While we include our final results against FloBot, note that they are slightly inflated for the self-play baseline, since the initial FloBot imitator is always in the training pool and therefore we learn a style that is effective against it. We find that when learning our sub-policies, we can learn styles that are different from FloBot's. In order to break symmetry between strategies, we added a small negative reward to each time step. This discourages our agents from playing a safe, defensive strategy, and encourages them to use the information they know about their opponents in order to end the game faster. The final reward that we use for training in Generals combines the win/loss game outcome with this small per-step penalty. We initialize the first phase of HASP with an agent trained to prefer owning territory on the top half of the map by adding the achievement reward described in Section 3.2. Running phase 1 of HASP, we obtained 6 different sub-policies that are diversely distributed in the sub-policy space. From observation, we find that the sub-policies are different from each other in terms of playing styles.
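As a concrete illustration of the Linear Programming step mentioned above, the sketch below solves for a maximin mixed strategy over the learned sub-policies given a pairwise payoff matrix; the payoff matrix A would come from evaluating the sub-policies against each other (that evaluation procedure is an assumption here, not the paper's exact protocol):

```python
# Sketch: maximin mixed strategy over sub-policies for a zero-sum matrix game,
# solved with scipy.optimize.linprog. A[i, j] is the expected reward of
# sub-policy i against sub-policy j.
import numpy as np
from scipy.optimize import linprog

def maximin_mixed_strategy(A):
    n = A.shape[0]
    # Variables: mixture weights p (n of them) and the game value v.
    # Maximize v  <=>  minimize -v, subject to (A^T p)_j >= v for every column j
    # and p lying on the probability simplex.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((A.shape[1], 1))])    # v - (A^T p)_j <= 0
    b_ub = np.zeros(A.shape[1])
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum(p) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]                            # mixture weights, game value
```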
We also find that, as we expected, each of them counters the previous self by a large margin, although it is itself exploitable. We performed the same test of training an agent against all our sub-policies to see if our sub-policies are meaningfully diverse. TAB2 shows the performance of our agent trained against several sub-policies generated via CSP, compared to the best response to those policies. Since the best-responses have the highest reward, we conclude that knowing ahead of time which opponent we are facing is valuable and therefore our strategies are meaningfully diverse. For Generals, we trained our ensemble model using Algorithm 1. The final usage frequencies of the sub-policies (averaged over several games) when facing another style of playing agent, "expand_down", are shown in Table 5. Note that our agent learns to play a mixture of all strategies. Table 5: The frequency at which our policy selects each sub-policy when playing against expand_left.
sub-policy: 1 2 3 4 5 6
frequency: 0.14 0.20 0.18 0.22 0.12 0.14
TAB3 shows the performance of our HASP agent as well as the baselines against several held-out agents that we trained using self-play with an additional "achievement" reward: expand-left, expand-right, and expand-more. The agent trained using HASP achieves a higher reward against all of the held-out agents that we tested on. In contrast with our iterated Rock-Paper-Scissors results, randomly selecting a sub-policy at each turn is competitive with our policy, which can change the probability of sampling a sub-policy based on the current state. This suggests that the power of our method in Generals comes from the diverse sub-policies themselves. In this paper, we investigate a novel learning approach, Hierarchical Agent with Self-Play, for learning strategies in competitive games and real-time strategy games by learning several opponent-dependent sub-policies. We evaluate its performance on a popular online game, where we show that our approach generalizes better than conventional self-play approaches to unseen opponents. We also show that our algorithm vastly outperforms conventional self-play when it comes to learning optimal mixed strategies in simpler matrix games. Though our method has achieved good results, there are some areas which could be improved in future research. In the future, we hope to also achieve good performance on larger versions of Generals, where games last longer and therefore learning is harder. We would also like to investigate further the effects that our algorithm has on exploration with sparse reward. | We develop Hierarchical Agent with Self-play (HASP), a learning approach for obtaining hierarchically structured policies that can achieve high performance than conventional self-play on competitive real-time strategic games. | 1,006 | scitldr |
We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of BID10 to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train as the popular character input CNN while having a lower number of parameters. On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity compared to the previously best published result, and on the Billion Word benchmark we achieve 23.02 perplexity. Language modeling is a basic task in natural language processing, with many applications such as speech recognition BID1 and statistical machine translation BID28 BID33 BID2. Recently, much progress has been made by neural methods BID3 BID20 based on LSTMs BID13, gated convolutional networks BID7 and self-attentional networks BID0. There are different choices for the basic unit we wish to model, including full words BID3, characters for the input BID15 or also for the output BID18, as well as sub-words BID4 BID19. Word-based models are particularly challenging since computing probabilities for all 800K words of the BILLION WORD benchmark is still a substantial part of the overall computation BID6. A popular approach to lower the computational burden is to structure the output vocabulary so that not all probabilities need to be computed. The hierarchical softmax does this by introducing latent variables or clusters to simplify normalization BID8 BID22 BID21. This has been further improved by the adaptive softmax which introduces a variable capacity scheme for output word embeddings, assigning more parameters to frequent words and fewer parameters to rare words BID10. In this paper, we introduce adaptive input embeddings which extend the adaptive softmax to input word representations. This factorization assigns more capacity to frequent words and reduces the capacity for less frequent words, with the benefit of reducing overfitting to rare words. For a competitive setup on the BILLION WORD benchmark, adaptive input embeddings reduce the number of parameters in the input and output layers by 23% while achieving higher accuracy over fixed size embeddings. When the adaptive input representations are tied with an adaptive softmax in the output, then the number of parameters is reduced by a total of 61%. Our experiments compare models based on word inputs, character inputs, as well as sub-word units using a self-attention architecture BID34. We show that models with adaptive word representations can outperform very strong character-based models while training more than twice as fast. We also substantially improve adaptive softmax by introducing additional dropout regularization in the tail projection.
On the WIKITEXT-103 benchmark we achieve a perplexity of 18.7, an improvement of 10.5 perplexity over the previously best published result. Adaptive word representations are inspired by the adaptive softmax work BID10, which first described a GPU friendly way to construct a hierarchical softmax and showed that it performs very competitively compared to a full softmax, while offering significantly faster speed and a lower memory footprint. BID18 use a modified version of adaptive softmax which does not reduce the dimensionality of less frequent words in order to be able to share output embeddings with the input. This setup is akin to a hierarchical softmax with tied weights. We show that variable-sized input embeddings can perform better than fixed sized embeddings. Furthermore, this also enables weight sharing with an adaptive softmax output layer. BID18 evaluates both character-based and word-based factorizations but does not directly compare them to each other. We perform a direct comparison of word-based and character-based input vocabularies and also compare to a sub-word factorization for both the input and output. Recently, BID0 demonstrated that self-attentional models can perform very well on language modeling tasks where both the input and output are characters. We also consider word-based benchmarks. The adaptive softmax exploits the fact that the distribution of word types in natural language follows a Zipfian distribution in order to improve the computation of the output probabilities. We apply the same intuition to input word embeddings with the motivation of reducing the number of parameters, which frees up capacity for other parts of the model. We define a number of clusters that partitions the frequency ordered vocabulary V = V_1 ∪ V_2 ∪ ... ∪ V_{n−1} ∪ V_n such that V_i ∩ V_j = ∅ for all i ≠ j, where V_1 contains the most frequent words and V_n the least frequent words. We will refer to V_1 as the head and to any subsequent clusters loosely as the tail. We reduce the capacity for each cluster by a factor of k. That is, if words in V_1 have dimension d, then words in V_n have dimension d / k^{n−1}. We typically set k = 4 following BID10. Next, we add linear projections W_1 ∈ R^{d×d}, ..., W_n ∈ R^{d/k^{n−1} × d} to map the embeddings of each cluster to dimension d so that the concatenated output of the adaptive input embedding layer can be easily used by the subsequent model (FIG11). We also project V_1, which already has dimension d. When presented with a number of input words, the adaptive input embedding layer partitions the words into the various clusters, performs separate lookups in the embedding tables and then projects to dimension d, followed by concatenating the embeddings in the original order. Weight sharing.
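A minimal sketch of the adaptive input layer described above, assuming a frequency-sorted vocabulary so that cluster membership can be read off token index ranges; the band boundaries follow the WIKITEXT-103 setup in this paper, while the implementation details are our own assumptions:

```python
# Sketch of an adaptive input embedding layer: variable-capacity embeddings per
# frequency band, each followed by a projection back to dimension d.
import torch
import torch.nn as nn

class AdaptiveInput(nn.Module):
    def __init__(self, cutoffs=(20000, 60000, 260000), d=1024, k=4):
        super().__init__()
        self.cutoffs = (0,) + tuple(cutoffs)
        self.embeddings = nn.ModuleList()
        self.projections = nn.ModuleList()
        for i in range(len(cutoffs)):
            band_size = self.cutoffs[i + 1] - self.cutoffs[i]
            dim = d // (k ** i)                     # d, d/k, d/k^2, ...
            self.embeddings.append(nn.Embedding(band_size, dim))
            self.projections.append(nn.Linear(dim, d, bias=False))

    def forward(self, tokens):                      # tokens: (B, T), frequency-sorted ids
        out = tokens.new_zeros(*tokens.shape, self.projections[0].out_features,
                               dtype=torch.float)
        for i in range(len(self.embeddings)):
            lo, hi = self.cutoffs[i], self.cutoffs[i + 1]
            mask = (tokens >= lo) & (tokens < hi)
            if mask.any():
                emb = self.embeddings[i](tokens[mask] - lo)
                out[mask] = self.projections[i](emb)
        return out                                  # (B, T, d), original token order preserved
```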
When the output layer is an adaptive softmax with the same partition of V, d, and k as the adaptive input layer, then we can tie the weights BID12 BID26. This further reduces the number of parameters and can simultaneously improve performance (§5). We can share both the parameters for the actual words as well as the projections W_1, ..., W_n. Sharing the word embeddings is straightforward except for the head, where the adaptive softmax has n − 1 additional embeddings for the remaining clusters which are not shared with the input. We share all projections, except for the head projection which is not available in the adaptive softmax since the model output is directly multiplied with the output word embeddings for the head band. Performance decreased when we added a head projection to the adaptive softmax in the output, regardless of whether it was shared or not. Sharing both the word embeddings as well as the projections performed very well on WIKITEXT-103, but on BILLION WORD we only share the word embeddings as we found that this performed better on the validation set. We follow most of the architectural choices described in BID34 but use only a decoder network. We add sinusoidal position embeddings to the input layer and stack N = 16 blocks for both benchmarks. Each block contains two sub-blocks: the first is a multihead self-attention module with H = 16 heads. The second sub-block is a feed-forward module (FFN) of the form ReLU(W_1 X + b_1) W_2 + b_2, where W_1 ∈ R^{e × e_ff}, W_2 ∈ R^{e_ff × e}, and e = 1024, e_ff = 4096 unless otherwise stated. Different to BID34, we apply layer normalization before the self-attention and FFN blocks instead of after, as we find it leads to more effective training. Sub-blocks are surrounded by a residual connection BID11. We use a dropout rate of 0.1 and attention dropout of 0.1 for BILLION WORD models, and increase regularization for WIKITEXT-103 by using dropout 0.3, ReLU dropout 0.1 as well as attention dropout 0.1. We use the same hyperparameters for all models trained on the same dataset in order to enable a like for like comparison. When the dimensionality of the input or output layer differs from e, then we add a simple linear projection with no bias. We experiment on the BILLION WORD benchmark and WIKITEXT-103. BILLION WORD contains 768M word tokens and has a vocabulary of about 800K word types, which corresponds to words with more than 3 occurrences in the training set BID5. The training data of WIKITEXT-103 comprises about 100M tokens and a vocabulary of around 260K, corresponding to types with more than 3 occurrences in the training data BID17. The dataset is composed of shuffled Wikipedia articles where the context carries across sentences. For BILLION WORD we batch individual sentences since the corpus does not contain document structure. For WIKITEXT-103 we partition the training data into blocks of 512 contiguous tokens, ignoring document boundaries. Evaluation is the same except that we require blocks to contain complete sentences totaling up to 512 tokens. We limit the number of tokens per GPU to a maximum threshold B. That is, we add examples of similar length until we reach this threshold. When we train on multiple GPUs, each GPU processes B tokens using the same model parameters. This increases the effective batch size to the product of the number of GPUs and B. For BILLION WORD models we use B = 2048 and typically train on 32 GPUs, giving an effective batch size of 65K tokens. The smaller vocabulary of WIKITEXT-103 enables increasing B to 4096 and we train on 8 GPUs.
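A sketch of one pre-norm decoder block with the dimensions stated above; this is a simplified stand-in (standard PyTorch modules, no causal mask construction), not the authors' implementation:

```python
# One pre-norm decoder block: layer norm is applied before the self-attention and
# FFN sub-blocks, each wrapped in a residual connection, as described in the text.
import torch.nn as nn

class PreNormDecoderBlock(nn.Module):
    def __init__(self, e=1024, e_ff=4096, heads=16, dropout=0.1):
        super().__init__()
        self.ln1 = nn.LayerNorm(e)
        self.attn = nn.MultiheadAttention(e, heads, dropout=dropout)
        self.ln2 = nn.LayerNorm(e)
        self.ffn = nn.Sequential(nn.Linear(e, e_ff), nn.ReLU(), nn.Linear(e_ff, e))
        self.drop = nn.Dropout(dropout)

    def forward(self, x, attn_mask=None):               # x: (T, B, e)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=attn_mask)  # pass a causal mask for LM training
        x = x + self.drop(a)
        x = x + self.drop(self.ffn(self.ln2(x)))
        return x
```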
We found that large batch training is beneficial for WIKITEXT-103 and we therefore accumulate gradient updates over two batches before committing a parameter update BID23. This gives an effective batch size of 65K tokens for WIKITEXT-103. Embedding sizes. For fixed size word input layers and softmax output layers we generally use embeddings of size 512 for WIKITEXT-103. When we use an adaptive softmax in the output and fixed size word embeddings for the input, then we use dimension 256 for the input embeddings for BILLION WORD and 64 for WIKITEXT-103. We tuned this choice on the validation set (Appendix A). BPE inputs and outputs have embeddings of size 1024. Character CNN. We model character inputs by convolving the representations of all characters in a word following BID14, which applies several filters, then max pooling, a number of highway layers and a projection. Character embeddings have size 128 and we apply seven filters of size 1x128, 2x256, 3x384, 4x512, 5x512, 6x512, 7x512, where 3x128 indicates a filter processing three characters that outputs 128 features. We use a single highway layer for WIKITEXT-103, and two for BILLION WORD. We do not add start of word and end of word markers as they did not improve validation accuracy. We train on the same pre-processed data as the other models, with unknown tokens in both the inputs and outputs. Adaptive input representations and adaptive softmax. We use an adaptive softmax output layer to train models with large word-based vocabularies. For adaptive word inputs and adaptive softmax, we use embeddings of size d = 1024 for the head and reduce the size of subsequent clusters by a factor of k = 4. For WIKITEXT-103, we have three bands of size 20K (d=1024), 40K (d=256) and 200K (d=64). For BILLION WORD the bands are 60K (d=1024), 100K (d=256), and 640K (d=64). Sub-word models. We learn a byte-pair encoding (BPE) of 32K codes on the training data of each benchmark BID29. After applying the code to the training data we obtain a vocabulary of 33,337 tokens for WIKITEXT-103 and 32,347 tokens for BILLION WORD. BPE input/output embeddings have size 1024. The final evaluation is in terms of word-level perplexity to be comparable to other models. The probability of a word is the product of its sub-word unit probabilities. Different to BID34, we use Nesterov's accelerated gradient method BID32 with a momentum of 0.99 and we renormalize gradients if their norm exceeds 0.1 BID25. The learning rate is linearly warmed up from 10^{-7} to 1 over 16K steps and then annealed using a cosine learning rate schedule with C cycles BID16. Each cycle runs for twice the number of updates of the previous cycle and we lower the maximum and minimum learning rates by a rate M compared to the previous cycle. The initial minimum learning rate is 10^{-5} and the maximum is 1. BILLION WORD models train for a total of 975K updates over C = 3 cycles, the first cycle takes 137K steps, and we set M = 0.6. The WIKITEXT-103 models train for 286K steps over C = 4 cycles, the first cycle takes 18K steps and we set M = 0.75. We run experiments on DGX-1 machines with 8 NVIDIA V100 GPUs and machines are interconnected by Infiniband. We also use the NCCL2 library and the torch.distributed package for inter-GPU communication. We train models with 16-bit floating point precision, following BID24. For the main results on BILLION WORD, we doubled the batch size by training on 64 GPUs instead of 32 GPUs.
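A sketch of the warmup plus cyclical cosine schedule described above; the warmup length, cycle doubling and shrink factor M follow the text, while the exact cosine form within a cycle is our own assumption:

```python
# Learning rate schedule: linear warmup to lr_max, then cosine annealing over C
# cycles where each cycle is twice as long as the previous one and the max/min
# rates shrink by a factor M (shrink) per cycle.
import math

def learning_rate(step, warmup=16000, first_cycle=18000, num_cycles=4,
                  lr_max=1.0, lr_min=1e-5, shrink=0.75):
    if step < warmup:
        return 1e-7 + (lr_max - 1e-7) * step / warmup
    step -= warmup
    cycle_len, cycle = first_cycle, 0
    while step >= cycle_len and cycle < num_cycles - 1:
        step -= cycle_len
        cycle_len *= 2                 # each cycle runs twice as many updates
        cycle += 1
    hi, lo = lr_max * shrink ** cycle, lr_min * shrink ** cycle
    progress = min(step / cycle_len, 1.0)
    return lo + 0.5 * (hi - lo) * (1 + math.cos(math.pi * progress))
```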
We also consider two larger setups, one where we added four more blocks (N = 20) and increased the FFN dimension to e_ff = 6144 (large), and another where we add another four blocks (N = 24) with e_ff = 8192 and e = 1536 (very large). All other settings follow §4.4 and all models were trained for the same number of steps. TAB1 compares our models to previous work on BILLION WORD. The adaptive input model outperforms the best previously reported result with an order of magnitude fewer parameters. Our large model performs nearly as well as an ensemble of over ten models and achieves a new state of the art of 24.14 perplexity. Our very large model performs as well as an ensemble of over ten models and achieves 23.02 perplexity. The Char-CNN model performs 0.6 PPL worse than the standard adaptive input model even though it trained for over 40% longer. TAB2 shows our results on WIKITEXT-103 where adaptive inputs achieve 18.7 perplexity. For this result only, we partition the training data into blocks of 3072 contiguous tokens instead of 512 tokens as for other experiments. During evaluation we require blocks to contain complete sentences totaling up to 3072 tokens, of which the first 2560 tokens serve as context to score the last 512 tokens; we take care to score all tokens in the test and validation sets. We first compare sub-word units, both in the input and output, with embeddings of size 1024 (BPE) and shared weights (BPE-T). Next, we consider replacing the fixed size output representations by an adaptive softmax (ASM) and characters as input (CNN). Finally, we use both adaptive input word representations as well as an adaptive softmax (ADP) and a tied version (ADP-T). All models use the same self-attention architecture described in §4.1. TAB4 shows results when training all configurations for the same number of updates. Adaptive input representations with tied input and output layers (ADP-T) achieve the highest accuracy at the same speed as the BPE models, which have a very small vocabulary (33K versus 260K). CNN is 1 perplexity worse than ADP-T and requires well over twice the training time. It is the slowest approach, even though it has a fast adaptive softmax in the output. Fixed word embeddings perform least well (SM). Sub-word units are fast to train and perform better than word models with fixed sized embeddings. ASM improves over SM and greatly speeds up training. For ASM, we found that reducing the dimension of the input word embeddings to 64 on WIKITEXT-103 results in better accuracy (Appendix A). TAB5 shows that adaptive input representations perform equally well on BILLION WORD compared to other factorizations. ADP-T is 34% faster than ADP because there are fewer parameters to update. Similar to before, ADP-T trains more than twice as fast as CNN at higher accuracy; however, the accuracy gap is narrower than for WIKITEXT-103. Regularization is more important on WIKITEXT-103 while models for BILLION WORD benefit from additional capacity. Because of this we used input word embeddings of size 256 for ASM. We also trained CNN without replacing input words outside the vocabulary by an unknown symbol; however, this only improved validation perplexity by 0.16 (cf. Figure 2). Next, we turn to the question of how well models perform on rare words compared to frequent words. We compute the average loss for each word in the test set and group words by frequency. Figure 2 shows results on WIKITEXT-103. Tying weights helps all models on rare words, likely because of regularization effects.
Fixed size word embeddings with a word softmax (SM and SM-T) do not perform well on rare words. This is likely due to underfitting on common words, and we use the largest possible embedding size we could fit on 16GB GPU cards given our batch size (more experimentation in Appendix A). BPE and BPE-T perform poorly on rare words because probabilities are a product of several sub-word unit probabilities. ADP-T performs best across all frequency ranges. Figure 3 bins the loss by the frequency of the previous word and shows that CNN does well when it has rare words in the context; however, ADP-T does best across all bins. Figure 4 shows an equivalent analysis for BILLION WORD. The largest differences between models are on rare words. CNN performs best on very rare words but is outperformed by ADP in all other settings. Similar to the WIKITEXT-103 analysis (Appendix 5.3), binning the loss by the frequency of the previous word shows that weight sharing also helps for BILLION WORD and that CNN does very well on rare words for BILLION WORD compared to other models. TAB7 shows the importance of context size for WIKITEXT-103. Training block size is the number of consecutive tokens that are considered at once during training. Inference context is the number of tokens that are provided at evaluation before any tokens are scored. Simply increasing the training block size from 512 to 3072 results in an improvement of nearly 1.2 perplexity with no inference context window. Increasing the context size at inference time results in an improvement of 0.6 perplexity for the largest training block size. We also found that adaptive softmax can benefit from additional regularization of rare words. Adaptive softmax first projects the model output to the dimension of a particular cluster and then computes a dot product with the respective word embeddings. We add dropout to the output of the first projection for all clusters, except for the head. This change enables the adaptive softmax to outperform a standard softmax over fixed size output word embeddings on WIKITEXT-103 (Table 6). However, we found that adding dropout in this way is not helpful for larger datasets such as BILLION WORD. Unfortunately, a standard softmax over 800K words is not tractable and we were unable to make a comparison. It may be possible to achieve better results by tuning dropout for each band of the tail and we leave this for future work. Table 6: Perplexity on WIKITEXT-103 when regularizing rare words in adaptive softmax. Adaptive input embeddings vary the size of input word embeddings which can improve accuracy while drastically reducing the number of model parameters. When sharing parameters with an adaptive softmax, the number of parameters can be further reduced, which improves training speed. We presented a comparison between different input and output layer factorizations including word inputs, character inputs and sub-word units in both the input and output. Our experiments show that models with adaptive input embeddings train faster compared to character input CNNs while achieving higher accuracy. We achieve a new state of the art on WIKITEXT-103 and BILLION WORD. In future work, we will apply variable sized input embeddings to other tasks. This appendix shows various ablation results. TAB10 shows that reducing the capacity of fixed size word input embeddings is beneficial on WIKITEXT-103. The next set of results in TAB10 covers various settings of the SM and SM-T models. We also experimented with sharing the head projection but found this to perform less well than not sharing it.
Finally, TAB11 shows various band sizes for adaptive input word embeddings. We also show the performance of BID18, who use an adaptive softmax with equally sized word representations and share the input and output embeddings (no dim reduction, tied). | Variable capacity input word embeddings and SOTA on WikiText-103, Billion Word benchmarks. | 1,007 | scitldr |
In this paper, we consider the problem of detecting objects under occlusion. Most object detectors formulate bounding box regression as a unimodal task (i.e., regressing a single set of bounding box coordinates independently). However, we observe that the bounding box borders of an occluded object can have multiple plausible configurations. Also, the occluded bounding box borders have correlations with visible ones. Motivated by these two observations, we propose a deep multivariate mixture of Gaussians model for bounding box regression under occlusion. The mixture components potentially learn different configurations of an occluded part, and the covariances between variates help to learn the relationship between the occluded parts and the visible ones. Quantitatively, our model improves the AP of the baselines by 3.9% and 1.2% on CrowdHuman and MS-COCO respectively with almost no computational or memory overhead. Qualitatively, our model enjoys explainability since we can interpret the resulting bounding boxes via the covariance matrices and the mixture components. Figure 1: We observe that an occluded bounding box usually exhibits multiple modes in most detection datasets, no matter whether the ground truth annotation is a visible box or a full box: (a) visible bounding box annotation (b) full object bounding box labeled by different annotators (c) visible bounding box annotated accurately (d) visible bounding box annotated inaccurately. Object detectors based on deep convolutional neural networks (CNNs) are the backbone of many real-world applications like self-driving cars, robotic grasping and video surveillance. Most object detectors learn to detect an object in two parts: categorization of the candidate bounding box, and regression of each coordinate of the candidate box towards the ground truth independently. Currently, there are two styles of bounding box annotation among the large-scale object detection datasets: a visible box that only contains visible parts (e.g., MS-COCO and PASCAL VOC), and a full box that contains both visible and occluded parts (e.g., CrowdHuman and VehicleOcclusion). For full box annotation, regressing a single set of bounding box coordinates works well for fully visible objects, since it is a unimodal problem. However, when an object is occluded, we observe that its occluded parts can have several plausible configurations (e.g., Figure 1 (b)), which is a multimodal problem. Even for visible box annotation, an object sometimes still exhibits multiple modes due to inaccurate labeling (e.g., Figure 1 (c) vs. (d)). We argue that an object detector robust to occlusion should learn a multimodal distribution with the capability of proposing more than one plausible hypothesis for the configuration of an occluded part. Besides, we also observe that the bounding box coordinates have correlations by nature. Take Figure 1 (c) as an example: by knowing the position of the car's roof, we can easily infer the location of the left border even without looking at it. Therefore, an object detector robust to occlusion also needs to be capable of inferring the correlations between the occluded bounding box borders and the visible ones. Motivated by these two observations, we propose a deep multivariate mixture of Gaussians model for object detection under occlusion. Concretely, instead of regressing a single set of bounding box coordinates, our model regresses several sets of coordinates, which are the means of the Gaussians.
Moreover, we learn a covariance matrix for the coordinates of each Gaussian mixture component. These components are summed together as the prediction for the distribution of plausible bounding box configurations. At inference time, we choose the expectation of our model's distribution as the final predicted bounding box. To demonstrate the generalizability of our proposed model, we conduct experiments on four datasets: CrowdHuman, MS-COCO, VehicleOcclusion, and PASCAL VOC 2007. Quantitatively, our model improves the AP (Average Precision) of the baselines by 3.9% and 1.2% on CrowdHuman and MS-COCO respectively (Table 1 and Table 2). Qualitatively, our model enjoys explainability since the resulting bounding boxes can be interpreted using the covariance matrices and the Gaussian mixture components (Figure 5 and Figure 4). More importantly, our model adds almost no computation or memory, since predicting the mixture components only requires a fully-connected layer, and we can discard the covariance matrices at inference time (Table 5). Object Detection: Deep convolutional neural networks were first introduced to object detection in R-CNN and Fast R-CNN. Currently, there are mainly two types of object detectors: one-stage object detectors and two-stage object detectors. One-stage detectors like YOLO, SSD and RetinaNet are fast in general. Two-stage detectors are accurate but sacrifice speed. In this paper, although we conduct experiments based on the Faster R-CNN heads of Faster R-CNN and Mask R-CNN, our method is not limited to two-stage detectors. Object Detection Under Occlusion: Occlusion-aware R-CNN (Zhang et al., 2018b) proposes to divide the pedestrian into five parts and predict visibility scores respectively, which are integrated with the prior structure information of the human body into the network to handle occlusion. Zhang et al. (2018a) proposes an attention network with self or external guidance. These methods are specifically designed for the pedestrian detection task. By contrast, our method is designed for general object detection. DeepVoting (Zhang et al., 2018c) proposes to utilize spatial information between visual cues and semantic parts and also learn visual cues from the context outside an object. However, detecting semantic parts needs manual labels, which our approach does not require. Besides, our approach does not introduce additional computation during inference (Table 5). Amodal instance segmentation considers the task of predicting the region encompassing both visible and occluded parts of an object. The authors propose to add synthetic occlusion to visible objects and retain their original masks, then employ a CNN to learn on the generated composite images, which resembles the VehicleOcclusion dataset in our experiments. Another work proposes bounding box regression with uncertainty, which is a degradation case of our model (Gaussian). Datasets for Detection under Occlusion: Currently, there are three categories of annotation for an occluded object: (1) a visible bounding box that contains the visible parts, (2) a full box that contains both visible and occluded parts of an object annotated by humans, and (3) a full box obtained by synthesizing occluders on a visible object. MS-COCO, PASCAL VOC, ImageNet and Cityscapes fall into the first category. Figure 2: Faster R-CNN head architecture for our approach: we extend the existing Faster R-CNN head to predict the parameters of the multivariate mixture of Gaussians, µ, φ and Σ. CrowdHuman and the Semantic Amodal Segmentation dataset require the annotators to label the invisible parts.
VehicleOcclusion instead synthesizes the occluders for visible objects. We conduct experiments on MS-COCO, PASCAL VOC 2007, CrowdHuman, and VehicleOcclusion, covering all these categories. We observe that when an object is partially occluded, the occluded bounding box border can usually be inferred to some extent from other visible parts of the object (e.g., it is easy to infer the left border of the car given the car roof position in Figure 1 (c)). Besides, the occluded bounding box exhibits multiple modes. For example, the left arm of the teddy bear could have several possible configurations in Figure 1 (b). Motivated by these two observations, we propose to estimate the bounding box coordinates as a probability distribution during bounding box regression instead of a set of deterministic coordinates. Specifically, we propose to estimate a multivariate mixture of Gaussians distribution with a deep network. The multivariate Gaussian helps the case where bounding box borders have correlations, and a mixture of Gaussians helps the case where an occluded bounding box border exhibits multiple modes. Formally, we predict the distribution p_θ(x|I) given the feature maps I of a region of interest (RoI). The distribution is parameterized by θ, which is a neural network (e.g., a Faster R-CNN head, Figure 2). The distribution is a mixture of K components, p_θ(x|I) = Σ_{i=1}^{K} φ_i N(x; µ_i, Σ_i), where µ_i is the most probable set of bounding box coordinates relative to the RoI estimated by the i-th component. Σ_i is the covariance matrix, which is a symmetric semi-positive definite matrix in general. To be able to compute the inverse Σ^{-1}, we constrain the covariance matrix to be a symmetric positive definite matrix. In this case, the precision matrix Σ^{-1} is also a symmetric positive definite matrix. During training, the model estimates the precision matrix Σ^{-1} instead of the covariance matrix Σ, so that we do not need to compute the inverse every time during training, which we also find more stable in our experiments. To ensure the properties of the precision matrix Σ^{-1}, we parameterize it using the Cholesky decomposition Σ^{-1} = U^T U, where U is an upper triangular matrix with strictly positive diagonal entries, such that the Cholesky decomposition is guaranteed to be unique. We parameterize the mixture weights φ_i using a softmax, φ_i = exp(z_i) / Σ_k exp(z_k), so that they range from 0 to 1 and sum to 1. Here z_i, u_ii and µ_i are outputs produced by a fully-connected layer on top of the final fully-connected layer fc7 of the Faster R-CNN head. Taking Faster R-CNN with RPN as an example, Figure 2 shows the architecture of our model. Since we only modify a small part of the architecture, our approach might also be applied to object detectors other than Faster R-CNN, like the one-stage object detectors YOLO and RetinaNet. Learning: Our model parameterizes the distribution over bounding boxes using a neural network which depends on RoI features. During training, we estimate the parameters θ with maximum likelihood estimation on a given dataset {(I_ℓ, µ*_ℓ) | ℓ = 1, 2, ..., N}, where µ*_ℓ represents the ground truth coordinates for RoI feature maps I_ℓ and N is the number of observations, i.e., we minimize the negative log-likelihood L_loc = -(1/N) Σ_ℓ log p_θ(µ*_ℓ | I_ℓ). In practice, N is the number of samples in a mini-batch. We use momentum stochastic gradient descent (SGD) to minimize the localization loss L_loc and the classification loss L_cls. Note that we use different parameters θ for different classes in practice. For simplicity, the formulation above only considers the regression problem for a single class.
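A hedged sketch of the mixture prediction head and its negative log-likelihood, using a diagonal parameterization of U for brevity; the paper's full model uses a dense upper-triangular U per component, and the layer sizes here follow the fc7 description while the remaining details are our own simplifications:

```python
# Sketch: mixture-of-Gaussians box regression head on top of fc7 features, with a
# diagonal precision parameterization for brevity (the full model uses a dense
# upper-triangular U per component).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureBoxHead(nn.Module):
    def __init__(self, in_dim=1024, num_components=8, box_dim=4):
        super().__init__()
        self.K, self.D = num_components, box_dim
        self.mu = nn.Linear(in_dim, num_components * box_dim)       # component means
        self.log_u = nn.Linear(in_dim, num_components * box_dim)    # log of diag(U)
        self.z = nn.Linear(in_dim, num_components)                   # mixture logits

    def forward(self, fc7):
        B = fc7.size(0)
        mu = self.mu(fc7).view(B, self.K, self.D)
        u = self.log_u(fc7).view(B, self.K, self.D).exp()            # strictly positive diagonal
        log_phi = F.log_softmax(self.z(fc7), dim=-1)
        return mu, u, log_phi

    def nll(self, mu, u, log_phi, target):                            # target: (B, D)
        diff = target.unsqueeze(1) - mu                               # (B, K, D)
        # log N(x; mu, Sigma) with Sigma^-1 = diag(u)^2, constant term dropped.
        log_prob = (u.log() - 0.5 * (u * diff) ** 2).sum(-1)          # (B, K)
        return -(torch.logsumexp(log_phi + log_prob, dim=-1)).mean()

    def expected_box(self, mu, log_phi):                              # inference: E[x]
        return (log_phi.exp().unsqueeze(-1) * mu).sum(dim=1)
```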
Inference: During testing, we use the expectation of our mixture model as the prediction, E[x] = Σ_{i=1}^{K} φ_i µ_i. Notice that the covariance matrix Σ_i is not involved in inference. In practice, we discard the neurons that produce the covariance matrix to speed up inference. In our experiments (Table 5), our model has almost the same inference latency and memory consumption as the baseline network. Multivariate Gaussian: When the number of mixture components K = 1, our model degrades into a multivariate Gaussian model, and the localization loss can be rewritten as follows (for simplicity, we only illustrate the loss for a single sample): L_loc = (µ* − µ)^T U^T U (µ* − µ) / 2 − Σ_j ln (U)_jj + 2 ln 2π, where 2 ln 2π is a constant which can be ignored during training and (U)_jj is the jth diagonal element of the matrix U. Multimodality is helpful under occlusion because an occluded object usually has multiple modes. Gaussian: When the number of mixture components K = 1 and the covariance is constrained to be a diagonal matrix, it becomes a simple Gaussian model where different variables are independent: L_loc = Σ_j [ (U)_jj^2 (µ*_j − µ_j)^2 / 2 − ln (U)_jj ]. We argue that this simple model helps detection in most cases. Here (U)_jj behaves like a balancing term. When the bounding box regression is inaccurate (large (µ*_j − µ_j)^2 / 2), the variance 1/(U)_jj^2 tends to be larger. Therefore a smaller gradient is provided to the bounding box regression term (U)_jj^2 (µ*_j − µ_j)^2 / 2 in this case, which might help training the network (Table 1 and Table 2). If bounding box regression were perfect, (U)_jj would tend to infinity (i.e., the variance should be close to 0). However, since regression is not that accurate in practice, U will be punished for being too large. Euclidean Loss: When all the diagonal elements (U)_jj are one (u_jj = 0), our model degenerates to the standard Euclidean loss L_loc = ||µ* − µ||^2 / 2. We initialize the weights of the µ_i, z_i and u_ii layers (Figure 2) using random Gaussian initialization with standard deviation 0.0001 and biases 0, −1 and 0 respectively, so that at the start of training, the bounding box coordinates µ_i are at an unbiased position, U_i is an identity matrix and φ_i treats each mixture component equally. Our model can be trained end-to-end. Unless specified, we follow the settings in Detectron and the original papers. To demonstrate the generalizability of our method, we conduct experiments on four datasets. CrowdHuman is a large, richly annotated and highly diverse dataset for better evaluation of detectors in crowd scenarios. Its training and validation sets contain a total of 470k human instances, with around 22.6 persons per image under various kinds of occlusion. The annotations for occluded bounding boxes are full boxes (Figure 1 (b)) instead of visible boxes (Figure 1 (a)). The experiments are in Table 1. VehicleOcclusion is a synthetic dataset designed for object detection under occlusion. Same as above, the annotations are full boxes. The occlusion annotations are more accurate since the occluders (occluding objects) are randomly placed on the annotated visible object. It contains six types of vehicles and occluded instances at various difficulty levels. Specifically, it consists of four occlusion levels: No occlusion (0%), L1 (20% ∼ 40%), L2 (40% ∼ 60%), L3 (60% ∼ 80%). The percentages are computed by pixels. At levels L1, L2 and L3, there are two, three, and four occluders placed on the object, respectively (Table 4). MS-COCO is a large-scale object detection dataset containing 80 object categories, 330k images (> 200k labeled) and 1.5 million object instances. Compared with the two datasets above, MS-COCO has fewer occlusion cases.
For example, the IoU (intersection over union) between overlapped human bounding boxes in MS-COCO is less than 0.7. We use train2017 for training and val2017 for testing (Table 2). Different from above, the annotations are visible boxes. PASCAL VOC 2007 has 9,963 images and 20 classes in total, containing 24,640 annotated objects (Everingham et al.). Similar to MS-COCO, this dataset has fewer occlusion cases than the first two datasets. We use voc_2007_train and voc_2007_val for training and voc_2007_test for testing (Table 3). The annotations are visible boxes. Number of Mixture Components: Shown in Figure 3, we test our mixture of Gaussians model by varying the number of mixture components. The baseline is ResNet-50 FPN Faster R-CNN on CrowdHuman. As the number of components increases from 1 and 4 to 8, we observe consistent performance improvement. The mixture of eight Gaussians model (Eq. 8) outperforms the Gaussian model (Eq. 9) by 1% AP. However, the performance goes down when there are more than 16 components. This might be because the objects in the dataset might not have as many as 16 modes when occluded. Besides, the more components we have, the higher the chance of over-fitting. Unless specified, we use eight components for the mixture of Gaussians model. Mixture of Gaussians vs. Multivariate Gaussian: Shown in Tables 1 and 2, we compare the degradation cases of our complete model (Eq. 1): Gaussian (Eq. 9), mixture of Gaussians (Eq. 8) and multivariate Gaussian (Eq. 7) on CrowdHuman and MS-COCO. For CrowdHuman, we use ResNet-50 FPN Faster R-CNN as the baseline. For MS-COCO, we use ResNet-50 FPN Mask R-CNN. On CrowdHuman, which has a lot of crowded scenes, our model greatly improves the baseline. Gaussian improves the baseline by 1.2% AP. A mixture of eight Gaussians improves 2.3% AP, and multivariate Gaussian improves 2.9% AP. The complete model improves the performance by 3.9% AP. The improvements indicate all these assumptions are helpful under heavy occlusion. Gaussian helps training the regression network by learning to decrease the gradients for high variance cases. Multivariate Gaussian helps to learn the correlations between an occluded border and the visible borders. Mixture of Gaussians helps to learn a multimodal model for the occluded cases which have multiple modes. Soft-NMS modifies classification scoring, while our approach improves localization. Though it achieves comparable performance (1.7% AP improvement), it can be applied together with our method. With soft-NMS, the AP of the mixture of eight Gaussians, multivariate Gaussian and the complete model further improves by 1.7%, 1.5% and 1.5% respectively. On MS-COCO, the bounding box annotations are visible boxes instead of the full boxes used in CrowdHuman. Gaussian still helps here, improving the baseline by 0.4% AP, since there is variance in the dataset caused by inaccurate annotation (e.g., Figure 1 (d)). Gaussian helps to reduce the gradients for these ambiguous cases. A mixture of eight Gaussians improves 0.6% AP, and multivariate Gaussian improves 0.7% AP. The complete model improves the performance by 1.2% AP. The improvements are noticeable, however less significant than on CrowdHuman. On the one hand, there are fewer occluded instances in MS-COCO, so multimodality and covariances might not be as helpful as in CrowdHuman. On the other hand, predicting full boxes requires guessing the invisible parts, where multimodality and covariances are more useful. We further conduct experiments on PASCAL VOC 2007, shown in Table 3.
VGG-CNN-M-1024 Faster R-CNN is the baseline. Similar to MS-COCO, the bounding box annotations are visible boxes instead of full boxes used in CrowdHuman. We observe that Gaussian improve the mAP (mean Average Precision) by 1.5%. The complete model improves the mAP by 2.0%. Multimodality and multivariate Gaussian do not substantially improve the performance. These observations coincide with the observations on MS-COCO. Comparison with State-of-the-art: Shown in Table 4, we compare multivariate mixture of eight Gaussians model to DeepVoting Zhang et al. (2018c) on VehicleOcclusion. Similar to CrowdHuman, the bounding box annotations are full boxes. The baseline is VGG-16 Faster R-CNN. Our multivariate mixture of eight Gaussians model outperforms DeepVoting by a large margin at different occlusion levels. Without occlusion, our model also helps to learn a better detector, coinciding the experiments above. We argue that our model considers multiple modes of an object and the correlations between each border of a bounding box, which helps detection under occlusion. Model Size and Inference Speed: We measure the inference speed of our models using ResNet-50 FPN Mask R-CNN with a TITAN Xp, CUDA 10.1 and cuDNN 7.5.0 on MS-COCO val2017. Shown in Table 5, Gaussian (Eq. 9) and multivariate Gaussian (Eq. 7) neither slow down the inference nor increase the number of parameters, since we can discard the covariance Σ at inference time (Section 3.1). The complete model, multivariate mixture of eight Gaussians (Eq. 1), only increases 2M parameters and sacrifices 0.9 FPS on GPU. Our models outperform the baselines by large margins (Table 1, 2 and 4), while requires almost no additional computation and memory. Note that we measure the inference latency on MS-COCO where there are 80 classes, such that the number of parameters for µ is 1024 × 80 × K (1024 is the number of output channels of fc7, Figure 2). On CrowdHuman where there is only one class (human), the number of parameters for µ is only 1024 × K, which will consume even fewer computation and memory resources. Figure 4 shows the visualization of our mixture of Gaussian prediction on CrowdHuman. When the object is not occluded, our model usually only exhibits a single mode. In Figure 4 (a), the predictions of the mixture components for the athlete are almost the same. When the object is occluded, the occluded bounding box border usually exhibits multiple modes. For example, the left arm of the man can have several reasonable poses in Figure 4 (b). Figure 5 shows the visualization of our multivariate Gaussian prediction on CrowdHuman. When the object is not occluded, like in Figure 5 (a), most terms in the covariance matrix are usually almost zeros. When a border of the object is occluded, like in Figure 5 (b), the variance term for that border tends to be very high. Sometimes our model learns the covariance between bounding box borders. For example, in Figure 5 (c), x 1 and x 2 has a positive correlation, which suggests if the left border moves right, the right border might also move right. When the object is heavily occluded, most of its variance terms are usually very high, shown in Figure 5 (d). We propose a multivariate mixture of Gaussians model for object detection under occlusion. Quantitatively, it demonstrates consistent improvements over the baselines among MS-COCO, PASCAL VOC 2007, CrowdHuman, and VehicleOcclusion. 
Qualitatively, our model enjoys explainability as the detection can be diagnosed via the covariance matrices and the mixture components. | a deep multivariate mixture of Gaussians model for bounding box regression under occlusion | 1,008 | scitldr |
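To make the probabilistic localization model above concrete, here is a minimal NumPy sketch of the single-Gaussian localization loss and of the mixture-expectation inference rule quoted at the start of this excerpt. The exact parameterization (U obtained as the exponential of a predicted log-scale u, and mixture weights φ obtained by a softmax over the z outputs) is an assumption consistent with the behavior described above, not a verbatim reproduction of the paper's equations.

```python
# A minimal sketch (NumPy). Assumptions: per box coordinate j the network emits a
# mean mu[j] and a log-scale u[j] with U[j] = exp(u[j]); per mixture component k
# it emits a logit z[k]. The paper's exact parameterization may differ.
import numpy as np

def gaussian_loc_loss(mu_pred, u_pred, mu_gt):
    """Single-Gaussian (diagonal) localization loss per coordinate, assumed form:
    0.5 * U^2 * (mu_gt - mu_pred)^2 - log U, with U = exp(u_pred).
    A large squared error keeps U small (reducing the gradient on regression),
    while -log U rewards a large U only when the regression error is small."""
    U = np.exp(u_pred)
    return np.sum(0.5 * (U ** 2) * (mu_gt - mu_pred) ** 2 - np.log(U))

def mixture_inference(mu_pred, z_pred):
    """Prediction = expectation of the mixture: sum_k softmax(z)_k * mu_k.
    mu_pred: (K, 4) component means, z_pred: (K,) component logits.
    The covariance outputs are not needed at test time."""
    phi = np.exp(z_pred - z_pred.max())
    phi /= phi.sum()
    return phi @ mu_pred  # (4,) expected box coordinates

# toy check: with u = 0 (U = 1) the loss reduces to the usual 0.5 * squared error
mu_hat = np.array([0.20, 0.10, 0.90, 0.80])
mu_star = np.array([0.25, 0.10, 1.00, 0.75])
print(gaussian_loc_loss(mu_hat, np.zeros(4), mu_star))
print(mixture_inference(np.random.rand(8, 4), np.random.randn(8)))
```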
Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. Adversarial training, one of the most successful empirical defenses to adversarial examples, refers to training on adversarial examples generated within a geometric constraint set. The most commonly used geometric constraint is an $L_p$-ball of radius $\epsilon$ in some norm. We introduce adversarial training with Voronoi constraints, which replaces the $L_p$-ball constraint with the Voronoi cell for each point in the training set. We show that adversarial training with Voronoi constraints produces robust models which significantly improve over the state-of-the-art on MNIST and are competitive on CIFAR-10. Deep learning at scale has led to breakthroughs on important problems in computer vision , natural language processing , and robotics . Shortly thereafter, the interesting phenomena of adversarial examples was observed. A seemingly ubiquitous property of machine learning models where perturbations of the input that are imperceptible to humans reliably lead to confident incorrect classifications . What has ensued is a standard story from the security literature: a game of cat and mouse where defenses are proposed only to be quickly defeated by stronger attacks . This has led researchers to develop methods which are provably robust under specific attack models;;; ) as well as empirically strong heuristics; ). As machine learning proliferates into society, including security-critical settings like health care or autonomous vehicles , it is crucial to develop methods that allow us to understand the vulnerability of our models and design appropriate counter-measures. Adversarial training has been one of the few heuristic methods which has not been defeated by stronger attacks. In this paper, we propose a modification to the standard paradigm of adversarial training. We replace the L p -ball constraint with the Voronoi cells of the training data, which have several advantages detailed in Section 3. In particular, we need not set the maximum perturbation size as part of the training procedure. The Voronoi cells adapt to the maximum allowable perturbation size locally on the data distribution. We show how to construct adversarial examples within the Voronoi cells and how to incorporate Voronoi constraints into standard adversarial training. In Section 5 we show that adversarial training with Voronoi constraints gives state-of-the-art robustness on MNIST and competitive on CIFAR-10. Adversarial training, the process of training on adversarial examples generated in L p -balls around the training data, is a very natural approach to constructing robust models and was originally proposed by. formalized the adversarial training objective and highlighted the importance of a strong adversary for constructing adversarial examples in the inner training loop. Their approach to adversarial training, which utilized a projected gradient descent adversary, produced some of the first empirically robust models which were not later broken by stronger attacks. There's was the only approach surveyed by which was not either fully circumvented by or in a later paper . More recently, the celebrated algorithm TRADES ) has been proposed, which attempts to provide a principled way to trade off between robustness and natural accuracy. 
The analysis that inspires TRADES decomposes the robust error into two terms: natural error and error near the decision boundary. The yields an objective function with two terms, one which encourages accuracy and another which pushes the decision boundary away from the data distribution. Constructing a decision boundary that is far from the data distribution is explored in other heuristic works such as;;. Our approach falls into this class of defenses and so we will compare exclusively against such defenses. The frequency with which heuristic defenses have been defeated by stronger attacks has led to a line of work on certifiable robustness, which can guarantee that there exists no perturbation within an L pball of radius which causes the classifier to change its classification. One of the first works by proposed to approximate the set of possible activations of every L ∞ -bounded perturbation by propagating upper and lower bounds for each activation through the network. These upper and lower bounds are used to construct a convex outer approximation to the set of possible activations in the final layer, and a linear program is used to certify that this convex approximation does not intersect the decision boundary. This initial work had several notable drawbacks, and several subsequent works have attempted to improve upon these initial (; ; ; ;). However the fundamental problems have remained: these approaches do not scale to larger networks despite considerable effort, they often depend crucially on the specific details of the architecture, and the size of which can be certified is often considerably smaller than what we observe to be empirically robust. A different approach to certified robustness which addresses some of these concerns, called randomized smoothing , has recently been proposed. Randomized smoothing leverages the ability of any classifier f to perform well on Gaussian noise to construct a new classifier g which is certifiably robust under adversarial L 2 perturbations. Unlike prior approaches to certified robustness, randomized smoothing is a simple approach which does not depend on the architecture details of the classifier. Its main drawback is that it is currently, and possibly fundamentally, limited to L 2. We also note that more recent work has combined randomized smoothing with adversarial training to produce even more certifiably robustness classifiers in L 2 . Since the goal and limitations of these method are often different from heuristic approaches we do not compare our method against these approaches. Finally there has been a long line of work on the theory of adversarial examples. explore the sample complexity required to produce robust models. They demonstrate a simple setting, a mixture of two Gaussians, in which a linear classifier with near perfect natural accuracy can be learned from a single sample, but any algorithm that produces any binary classifier requires Ω(√ d) samples to produce a robust classifier. Followup work by suggests that adversarial examples may arise from computational constraints. They exhibit pairs of distributions that differ only in a k-dimensional subspace, and are otherwise standard Gaussians, and show that while it is information-theoretically possible to distinguish these distributions, it requires exponentially many queries in the statistical query model of computation. We note that both of these constructions produce distributions whose support is the entirety of R d. 
Additionally there is a line of work that attempts to explain the pervasiveness of adversarial examples through the lens of high-dimensional geometry. The work of experimentally evaluated the setting of two concentric under-sampled 499-spheres embedded in R 500, and concluded that adversarial examples occur on the data manifold. suggest that adversarial examples may be an unavoidable consequence of the high-dimensional geometry of data. Their result depends upon the use of an isoperimetric inequality. The main drawback of these works, as well as the constructions in the previous paragraph, is that they assume that the support of the data distribution has full or nearly full dimension. We do not believe this to be the case in practice; instead we believe that the data distribution is often supported on a very low-dimensional subset of R d. This case is addressed in , where they consider the problem of adversarial robustness in the case where data is drawn from a low-dimensional manifold embedded in R d. They highlight the role of codimension, the difference between the dimension of the embedding space and the dimension of the data manifold, as a key source of the pervasiveness of adversarial vulnerability. Said differently, it is the low-dimensional structure of features embedded in high-dimensional space that contributes, at least in part, to adversarial examples. This idea is also explored in , but with emphasis on the cross-entropy loss. We build on the work of , specifically their results on adversarial training in high codimensions, which make clear several drawbacks of the L p -ball formulation. In Section 5.1 we show that our approach improves robustness in high-codimension settings. Adversarial training was originally proposed with adversarial examples constructed inside of an L p -ball of radius ε. The use of the L p -ball was meant to represent a simple notion of similarity between two images, delaying the complicated question of what is an adversarial image in favor of a tractable research problem. However it was never meant to be the final say on the threat model of the adversary, and recent work has begun to explore alternative adversaries . describe a number of issues associated with the use of L p -balls. Their results are formalized in the manifold setting, where samples from each class are drawn from one of C class manifolds M 1,..., M C, which together form the data manifold M. They show that the L 2 -balls centered on a dense sample of M cover a negligible fraction of the neighborhood around M. Thus, when constructing adversarial examples in the inner training loop, the adversary is restricted to constructing adversarial examples in a negligible fraction of the neighborhood around the data manifold. This vulnerability increases with the codimension d − k of M. Furthermore they show that, for any p, a nearest neighbor classifier more effectively covers the neighborhood around M than a robust empirical risk minimization oracle, which outputs a classifier that is guaranteed to be correct in the L p -balls centered on the data. Figure 1: The Voronoi diagram for a dense sample drawn from a low-dimensional distribution with two classes, one in red and one in black. The Voronoi cells, shown in green, vary in size depending on how close a sample is to samples in the other class. The Voronoi edges that are adjacent to two samples from two different classes are shown in solid green, and approach a decision boundary which is as far from the data distribution as possible.
To remedy these shortcomings, we replace the L p -ball constraint with a different geometric constraint, namely the Voronoi cell at each sample x, defined as Vor p (x) = {z ∈ R d : ||z − x|| p ≤ ||z − x'|| p for all x' ∈ X \ {x}} (Equation 1). In words, the Voronoi cell Vor p (x) of x is the set of all points in R d that are closer to x than to any other sample in X. The Voronoi diagram is defined as the collection of Voronoi cells, and their lower dimensional faces, for each sample in X. Figure 1 shows the Voronoi diagram for a dense sample from a dataset with two classes of data. The Voronoi cell constraint has many advantages over the L p -ball constraint. First, the Voronoi cells partition the entirety of R d and so the interiors of Voronoi cells generated by samples from different classes do not intersect. This is in contrast to L p -balls which may intersect for sufficiently large ε. In particular the Voronoi cells partition the neighborhood around M and, for dense samples, are elongated in the directions normal to the data manifold . Thus the Voronoi cells are well suited for high codimension settings. Second, the size of the Voronoi cells adapts to the data distribution. A Voronoi cell generated by a sample which is close to samples from a different class manifold is smaller, while those further away are larger. See Figure 1. Thus we do not need to set a value for ε in the optimization procedure. The constraint naturally adapts to the largest value of ε possible locally on the data manifold. Note that the maximum perturbation size possible will often vary as we move along the data manifold, and cannot be captured by a single number which, by necessity, is upper bounded by the smallest distance to a different class. In summary, the Voronoi constraint gives the adversary the freedom to explore the entirety of the neighborhood around M. At each iteration of standard adversarial training, we must solve the inner optimization problem max δ∈B(0,ε) L(x + δ, y; θ) to generate an adversarial example. Prior work solves this problem using the fast gradient sign method (FGSM) or projected gradient descent. To incorporate Voronoi constraints, at each iteration of the outer training loop we must instead solve the inner optimization problem max x̂∈Vor p (x) L(x̂, y; θ) (Problem 2). When p = 2 the Voronoi cells are convex and so we can project a point onto a Voronoi cell by solving a quadratic program. Thus we can solve Problem 2 using projected gradient descent, as in standard adversarial training. When p ≠ 2 the Voronoi cells are not necessarily convex. In this setting there are many approaches, such as barrier and penalty methods, that one might employ to approximately solve Problem 2. However we found that the following heuristic is fast and works well in practice. At each iteration of the outer training loop, for each training sample x in a batch, we generate adversarial examples by taking iterative steps in the direction of the gradient starting from x. Instead of projecting onto a constraint after each iterative step, we instead check if any of the Voronoi constraints of x shown in Equation 1 are violated. If no constraint is violated we perform the iterative update, otherwise we simply stop performing updates for x. Figure 2 illustrates the procedure. Figure 2: To construct an adversarial example within a Voronoi cell, we repeatedly take steps in the direction of the gradient of the loss, shown in blue. After each iteration we check if any of the Voronoi constraints are violated. We take the last iteration before a constraint is violated as our adversarial example. Problem 2 has n − 1 constraints, one for each sample in X \ {x}.
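A minimal sketch of this heuristic may be useful. The loss-gradient callable grad_loss and the array of neighboring samples used as constraints are placeholders (assumptions standing in for the model and for the constraint set of Equation 1), and the step rule is a simple sign or gradient ascent step.

```python
# Sketch of the constrained attack described above (NumPy). `grad_loss(x)` is
# assumed to return the gradient of the training loss w.r.t. the input, and
# `neighbors` holds the training points that define the active Voronoi constraints.
import numpy as np

def violates_voronoi(x_adv, x, neighbors, p):
    """Equation 1 check: x_adv must stay closer to x than to every neighbor."""
    d_to_x = np.linalg.norm(x_adv - x, ord=p)
    d_to_others = np.linalg.norm(neighbors - x_adv, ord=p, axis=1)
    return np.any(d_to_others < d_to_x)

def voronoi_attack(x, grad_loss, neighbors, step=0.1, iters=50, p=np.inf):
    """Iterative ascent on the loss; return the last iterate inside Vor_p(x)."""
    x_adv = x.copy()
    for _ in range(iters):
        g = grad_loss(x_adv)
        candidate = x_adv + step * (np.sign(g) if p == np.inf else g)
        if violates_voronoi(candidate, x, neighbors, p):
            break  # keep the last feasible iterate, as in Figure 2
        x_adv = candidate
    return x_adv
```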
In practice however very few samples contribute to the Voronoi cell of x. Even fewer contribute to the faces of the Voronoi cell that are shared by samples in different classes, as shown in Figure 1. At each iteration, we perform a nearest neighbor search query to find the m nearest samples to x in each other class. That is we search for m(C − 1) samples where C is the number of classes. We do not impose constraints from samples in the same class as x; there is no benefit to restricting the adversary's movement with the neighborhood around the class manifold of x. In our experiments we set m = 10. 4 ADVERSARIAL TRAINING WITH VORONOI CONSTRAINTS formalize adversarial training by introducing the robust objective L(x, y; θ) where D is the data distribution and B is a L p -ball centered at x with radius. Their main contribution was the use of a strong adversary which used projected gradient descent to solve the inner optimization problem. To incorporate Voronoi constraints, we replace the L p -ball constraint in Equation 3 with the Voronoi cell at x. That is, we formalize the adversarial training objective as where we use the optimization procedure described in Section 3 to solve the inner optimization problem. Datasets. introduce a synthetic dataset, PLANES, to investigate how the codimension (low-rank features) of a dataset influences robustness. The PLANES dataset consists of two 2-dimensional planes, the first in the x d = 0 and the second in x d = 2. The first two axis of both planes are bounded as −10 ≤ x 1, x 2 ≤ 10, while x 3 =... = x d−1 = 0. The training set is sampled at the vertices of a regular grid with side length √ 2, and the test set at the centers of the grid cubes. This sampling is chosen so that the L 2 -balls of radius 1 cover the 2-dimensional planes, and so a classifier that does well inside these balls also has perfect natural accuracy. The spacing along the axis x d is chosen so the maximum perturbation size is 1. The codimension of this dataset is d − 2. We also evaluate on MNIST and CIFAR-10. Our controlled experiments on synthetic data consider a fully connected network with 1 hidden layer, 100 hidden units, and ReLU activations. We set the learning rate for Adam as α = 0.1. Our experimental are averaged over 20 retrainings. For a fair comparison to adversarial training, our experiments on MNIST and CIFAR-10 use the same model architectures as in. We train the MNIST model using Adam for 100 epochs and the CIFAR-10 model using SGD for 250 epochs. On MNIST we apply 300-step projected gradient descent (PGD), with step sizes {0.05, 0.07, 0.1, 0.15, 0.17, 0.2}. On CIFAR-10 we apply 20-step PGD with step sizes {2.0, 3.0, 4.0}. For both datasets we also apply the fast gradient sign method (FGSM) to uncover possible gradient masking as recommended in. We evaluate these attacks per sample, meaning that if any attack successfully constructs an adversarial example for a sample x at a specific, it reduces the robust accuracy of the model at that. Accuracy measures. We plot the robust classification accuracy as a function of, for each of our datasets. Since one of the primary advantages of Voronoi constraints is that we do not need to set, we need a measure of robustness that considers the total robustness of the model. Thus we report the normalized area under the curve (NAUC) defined as where acc: [0, max] → measures the classification accuracy and max is the largest perturbation considered. Note that NAUC ∈ with higher values corresponding to more robust models. 
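A small sketch of the NAUC computation, assuming the definition NAUC = (1/ε_max) ∫ acc(ε) dε over [0, ε_max], so that NAUC lies in [0, 1] with higher values corresponding to more robust models:

```python
# Hedged sketch: NAUC as the area under the accuracy-vs-epsilon curve,
# normalized by the largest perturbation considered.
import numpy as np

def nauc(epsilons, accuracies):
    """epsilons: increasing grid from 0 to eps_max; accuracies in [0, 1]."""
    epsilons = np.asarray(epsilons, dtype=float)
    accuracies = np.asarray(accuracies, dtype=float)
    return np.trapz(accuracies, epsilons) / epsilons[-1]

# e.g. a model that keeps 90% accuracy over the whole range has NAUC = 0.9
print(nauc([0.0, 0.1, 0.2, 0.3, 0.4], [0.99, 0.97, 0.90, 0.76, 0.60]))
```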
Implementation Details. Constructing adversarial examples within the Voronoi cells, as described in Section 3, requires a nearest neighbor search query to find the m nearest samples to x in each other class. When the dataset remains constant throughout the course of training, this search can be performed once before training begins and reused at each iteration. However when the dataset is augmented during training, as in the case of data augmentation on CIFAR-10, the nearest neighbor search query must be computed at each iteration. Since this computation is performed on the CPU, we create 16 threads, each with a copy of a k-d tree, which constantly pull mini-batches of samples from a queue and perform nearest neighbor queries. With 16 threads running in parallel, the bottleneck for training became the construction of adversarial examples on the GPU, and so adversarial training with Voronoi constraints ran in time similar to standard adversarial training. showed that as the codimension of the PLANES dataset increases, the adversarial training approach of with training = 1 became less robust. They suggested that this was because the L 2 -balls with radius 1 around the dataset covered an increasingly smaller fraction of the neighborhood around the data manifold. To explore the performances of adversarial training with Voronoi constraints on more realistic datasets, we evaluate on MNIST and CIFAR-10 and compare against the robust pretrained models of. 12. We include the recently proposed Jacobian regularization algorithm of with λ jr = 1.0 as an additional baseline. Figure 4 (Left) shows that our model maintains near identical robustness to the Madry model on MNIST up to = 0.3, after which our model significantly outperforms the Madry model. The Madry model was explicitly trained for = 0.3 perturbations. We emphasize that one advantage of our approach is that we did not need to set a value for the maximum perturbation size. The Voronoi cells adapt to the maximum size allowable locally on the data distribution. Our model maintains 76.3% accuracy at = 0.4 compared to 2.6% accuracy for the Madry model. Furthermore our model achieves NAUC of 0.81, while the Madry model achieves NAUC of 0.67, an improvement of 20.8% and over the baseline. To our knowledge, this is the most robust MNIST model to L ∞ attacks. In particular, our model maintains 76.3% accuracy at = 0.4, compared to 2.6% accuracy for the Madry model. Right: On CIFAR-10, both models achieve NAUC of 0.29, but our model trades natural accuracy for robustness to larger perturbations. A natural approach to improving the robustness of models produced by the adversarial training paradigm of is to simply increase the maximum allowable perturbation size of the norm ball constraint. As shown in Figure 5, increasing the size of to 0.4, from the 0.3 with which originally trained, and training for only 100 epochs produces a model which exhibits significantly worse robustness in the range [0, 0.3] than the pretrained model. If we increase the number of training epochs to 150, the approach of We emphasize that our approach does not require us to set, which is particularly important in practice where the maximum amount of robustness achievable may not be known a-priori. The L p -ball constraint for describing adversarial perturbations has been a productive formalization for designing robust deep networks. However, the use of L p -balls has significant drawbacks in highcodimension settings and leads to sub-optimal in practice. 
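Tying the implementation details above together, the following is a rough sketch of one epoch of adversarial training with per-class nearest-neighbor Voronoi constraints (m = 10 neighbors from each other class). The threaded k-d tree queue is omitted for brevity; grad_loss and update are placeholders for whatever training framework is used, and voronoi_attack refers to the earlier sketch.

```python
# Rough sketch (NumPy + SciPy): build one k-d tree per class, gather the m
# nearest other-class points as Voronoi constraints, then train on the
# constrained adversarial examples.
import numpy as np
from scipy.spatial import cKDTree

def build_trees(X, y):
    """One k-d tree per class, so we can query the nearest other-class points."""
    return {c: (cKDTree(X[y == c]), X[y == c]) for c in np.unique(y)}

def other_class_neighbors(x, label, trees, m=10):
    neigh = []
    for c, (tree, pts) in trees.items():
        if c == label:
            continue
        _, idx = tree.query(x, k=min(m, len(pts)))
        neigh.append(pts[np.atleast_1d(idx)])
    return np.concatenate(neigh, axis=0)

def adversarial_training_epoch(X, y, trees, grad_loss, update, step=0.1):
    for x, label in zip(X, y):
        neighbors = other_class_neighbors(x, label, trees, m=10)
        # voronoi_attack is the sketch from the previous code block
        x_adv = voronoi_attack(x, lambda z: grad_loss(z, label), neighbors, step=step)
        update(x_adv, label)  # one optimizer step on the adversarial point
```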
Adversarial training with Voronoi constraints improves robustness by giving the adversary the freedom to explore the neighborhood around the data distribution. | We replace the Lp ball constraint with the Voronoi cells of the training data to produce more robust models. | 1,009 | scitldr |
Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. In order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives: we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments. Extensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH, establishing the new state of the art for NDH task. Navigation in visual environments by following natural language guidance is a fundamental capability of intelligent robots that simulate human behaviors, because humans can easily reason about the language guidance and navigate efficiently by interacting with the visual environments. Recent efforts (b; ;) empower large-scale learning of natural language grounded navigation that is situated in photorealistic simulation environments. Nevertheless, the generalization problem commonly exists for these tasks, especially indoor navigation: the agent usually performs poorly on unknown environments that have never been seen during training. One of the main causes for such behavior is data scarcity as it is expensive and time-consuming to extend either visual environments or natural language guidance. The number of scanned houses for indoor navigation is limited due to high expense and privacy concerns. Besides, unlike vision-only navigation tasks (; ; Manolis Savva* et al., 2019;) where episodes can be exhaustively sampled in simulation, natural language grounded navigation is supported by human demonstrated interaction and communication in natural language. It is impractical to fully collect and cover all the samples for individual tasks. Therefore, it is essential though challenging to efficiently learn a more generalized policy for natural language grounded navigation tasks from existing data (a; b). In this paper, we study how to resolve the generalization and data scarcity issues from two different angles. First, previous methods are trained for one task at the time, so each new task requires training a brand new agent instance that can only solve the one task it was trained on. In this work, we propose a generalized multitask model for natural language grounded navigation tasks such as Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH), aiming at efficiently transferring knowledge across tasks and effectively solving both tasks with one agent simultaneously. Moreover, although there are thousands of trajectories paired with language guidance, the underlying house scans are restricted. For instance, the popular Matterport3D dataset contains only 61 unique house scans in the training set. 
The current models perform much better in seen environments by taking advantage of the knowledge of specific houses they have acquired over multiple task completions during training, but fail to generalize to houses not seen during training. Hence we propose an environment-agnostic learning method to learn a visual representation that is invariant to specific environments but still able to support navigation. Endowed with the learned environment-agnostic representations, the agent is further prevented from the overfitting issue and generalizes better on unseen environments. To the best of our knowledge, we are the first to introduce natural language grounded multitask and environment-agnostic training regimes and validate their effectiveness on VLN and NDH tasks. Extensive experiments demonstrate that our environment-agnostic multitask navigation model can not only efficiently execute different language guidance in indoor environments but also outperform the single-task baseline models by a large margin on both tasks. Besides, the performance gap between seen and unseen environments is significantly reduced. We also set a new state of the art on NDH with over 120% improvement in terms of goal progress. Vision-and-Language Navigation. Vision-and-Language Navigation (b;) task requires an embodied agent to navigate in photo-realistic environments to carry out natural language instructions. The agent is spawned at an initial pose p 0 = (v 0, φ 0, θ 0), which includes the spatial location, heading and elevation angles. Given a natural language instruction X = {x 1, x 2, ..., x n}, the agent is expected to perform a sequence of actions {a 1, a 2, ..., a T} and arrive at the target position v tar specified by the language instruction X, which describes stepby-step instructions from the starting position to the target position. In this work, we consider VLN task defined for Room-to-Room (R2R) (b) dataset which contains instructiontrajectory pairs across 90 different indoor environments (houses). Previous VLN methods have studied various aspects to improve the navigation performance, such as planning, data augmentation , cross-modal alignment (; b), progress estimation (a), error correction (b;), interactive language assistance Nguyen & Daumé ) etc. This work tackles VLN via multitask learning and environmentagnostic learning, which is orthogonal to all these prior arts. Navigation from Dialog History. Different from Visual Dialog which involves dialog grounded in a single image, the recently introduced Cooperative Vision-and-Dialog Navigation (CVDN) dataset includes interactive language assistance for indoor navigation, which consists of over 2,000 embodied, human-human dialogs situated in photo-realistic home environments. The task of Navigation from Dialog History (NDH) is defined as: given a target object t 0 and a dialog history between humans cooperating to perform the task, the embodied agent must infer navigation actions towards the goal room that contains the target object. The dialog history is denoted as < t 0, Q 1, A 1, Q 2, A 2,..., Q i, A i >, including the target object t 0, the questions Q and answers A till the turn i (0 ≤ i ≤ k, where k is the total number of Q-A turns from the beginning to the goal room). The agent, located in p 0, is trying to move closer to the goal room by inferring from the dialog history that happened before. Multitask Learning. The basis of Multitask (MT) learning is the notion that tasks can serve as mutual sources of inductive bias for each other . 
When multiple tasks are trained jointly, MT learning causes the learner to prefer the hypothesis that explains all the tasks simultaneously, hence leading to more generalized solutions. MT learning has been successful in natural language processing , speech recognition , computer vision , drug discovery , and Atari games . The deep reinforcement learning methods that have become very popular for training models on natural language grounded navigation tasks (; a; b;) are known to be data inefficient. In this work, we introduce multitask reinforcement learning for such tasks to improve data efficiency by positive transfer across related tasks. Environment-agnostic Learning. A few studies on agnostic learning have been proposed recently. For example, Model-Agnostic Meta-Learning (MAML) aims to train a model on a variety of learning tasks and solve a new task using only a few training examples. proposes a unified feature disentangler that learns domain-invariant representation across multiple domains for image translation. Other domain-agnostic techniques are also proposed for supervised and unsupervised domain adaption . In this work, we pair the environment classifier with a gradient reversal layer to learn an environment-agnostic representation that can be better generalized on unseen environments in a zero-shot fashion where no adaptation is involved. Distributed Actor-Learner Navigation Learning Framework. To train models for the various language grounded navigation tasks like VLN and NDH, we develop a distributed actor-learner learning infrastructure 1. The framework design is inspired by IMPALA and uses its off-policy correction method called V-trace to efficiently scale reinforcement learning methods to thousands of machines. The framework additionally supports a variety of supervision strategies important for navigation tasks such as teacher-forcing (b), studentforcing (b) and mixed supervision . The framework is built using TensorFlow and supports ML accelerators (GPU, TPU). 3.1 OVERVIEW Our environment-agnostic multitask navigation model is illustrated in Figure 1. First, we adapt the reinforced cross-modal matching (RCM) model and make it seamlessly transfer across tasks by sharing all the learnable parameters for both NDH and VLN, including joint word embedding layer, language encoder, trajectory encoder, cross-modal attention module (CM-ATT), and action predictor. Furthermore, to learn the environment-agnostic representation z t, we equip the navigation model with an environment classifier whose objective is to predict which house the agent is. But note that between trajectory encoder and environment classifier, a gradient reversal layer is introduced to reverse the gradients backpropagated to the trajectory encoder, making it learn representations that are environment-agnostic and thus more generalizable in unseen environments. During training, the environment classifier is minimizing the environment classification loss L env, while the trajectory encoder is maximizing L env and minimizing the navigation loss L nav. The other modules are optimized with the navigation loss L nav simultaneously. Below we introduce multitask reinforcement learning and environmentagnostic representation learning. A more detailed model architecture is presented in Section 4. Interleaved Multitask Data Sampling. To avoid overfitting to individual tasks, we adopt an interleaved multitask data sampling strategy to train the model. 
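As a rough illustration of what such interleaved sampling can look like (the batch construction, task tags and mixing proportion below are assumptions, not taken from the paper):

```python
# Rough illustration of interleaved multitask sampling: each mini-batch mixes
# VLN and NDH examples at random, tagging every item with its task so the
# appropriate loss and reward shaping can be applied downstream.
import random

def interleaved_batches(vln_data, ndh_data, batch_size, p_vln=0.5):
    """Yield mini-batches whose items are drawn from either task."""
    while True:
        batch = []
        for _ in range(batch_size):
            if random.random() < p_vln:
                batch.append(("VLN", random.choice(vln_data)))
            else:
                batch.append(("NDH", random.choice(ndh_data)))
        yield batch
```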
Particularly, each data sample within a mini-batch can be from either task, so that the VLN instruction-trajectory pairs and NDH dialogtrajectory pairs are interleaved in a mini-batch though they may have different learning objectives. Reward Shaping. Following prior art, we first implement a discounted cumulative reward function R for the VLN and NDH tasks: where γ is the discounted factor, d(s t, v tar) is the distance between state s t and the target location v tar, and d th is the maximum distance from v tar that the agent is allowed to terminate for success. Different from VLN, NDH is essentially room navigation instead of point navigation because the agent is expected to reach a room that contains the target object. Suppose the goal room is occupied by a set of nodes {v i} N 1, we replace the distance function d(s t, v tar) in Equation 1 with the minimum distance to the goal room Navigation Loss. Since human demonstrations are available for both VLN and NDH tasks, we use behavior cloning to constrain the learning algorithm to model state-action spaces that are most relevant to each task. Following previous works , we also use reinforcement learning to aid the agent's ability to recover from erroneous actions in unseen environments. During multitask navigation model training, we adopt a mixed training strategy of reinforcement learning and behavior cloning, so the navigation loss function is: where we use REINFORCE policy gradients and supervised learning gradients to update the policy π. b is the estimated baseline to reduce the variance and a * t is the human demonstrated action. To further improve the generalizability of the navigation policy, we propose to learn a latent environment-agnostic representation that is invariant among seen environments. We would like to get rid of the environment-specific features that are irrelevant to general navigation (e.g. unique house appearances), preventing the model from overfitting to specific seen environments. We can reformulate the navigation policy as where z t is a latent representation. As shown in Figure 1, p(a t |z t, s t) is modeled by the policy module (including CM-ATT and action predictor) and p(z t |s t) is modeled by the trajectory encoder. In order to learn the environmentagnostic representation, we employ an environment classifier and a gradient reversal layer . The environment classifier is parameterized to predict the identity of the house where the agent is, so its loss function L env is defined as where y * is the ground-truth house label. The gradient reversal layer has no parameters. It acts as an identity transform during forward-propagation, but multiplies the gradient by −λ and passes it to the trajectory encoder during back-propagation. Therefore, in addition to minimizing the navigation loss L nav, the trajectory encoder is also maximizing the environment classification loss L env, trying to increase the entropy of the classifier in an adversarial learning manner where the classifier is minimizing the classification loss conditioned on the latent representation z t. Language Encoder. The natural language guidance (instruction or dialog) is tokenized and embedded into n-dimensional space X = {x 1, x 2, ..., x 3} where the word vectors x i are initialized randomly. The vocabulary is restricted to tokens that occur at least five times in the training instructions (The vocabulary used when jointly training VLN and NDH tasks is the union of the two tasks' vocabularies.). 
All out-of-vocabulary tokens are mapped to a single out-of-vocabulary identifier. The token sequence is encoded using a bi-directional LSTM to create H X, where the hidden states of the forward and backward LSTM layers at time step t are combined by the σ function. Similar to benchmark models (; ; b), at each time step t, the agent perceives a 360-degree panoramic view at its current location. The view is discretized into k view angles (k = 36 in our implementation, 3 elevations by 12 headings at 30-degree intervals). The image at view angle i, heading angle φ and elevation angle θ is represented by a concatenation of the pre-trained CNN image features with the 4-dimensional orientation feature [sin φ; cos φ; sin θ; cos θ] to form v t,i. The visual input sequence V = {v 1, v 2, ..., v m} is encoded using an LSTM to create H V, where the visual input at each step is the attention-pooled representation of all view angles using the previous agent state h t−1 as the query. We use dot-product attention hereafter. Policy Module. The policy module comprises a cross-modal attention (CM-ATT) unit as well as an action predictor. The agent learns a policy π θ over parameters θ that maps the natural language instruction X and the initial visual scene v 1 to a sequence of actions [a 1, a 2, ..., a n]. The action space, which is common to VLN and NDH tasks, consists of navigable directions from the current location. The available actions at time t are denoted as u t,1..l, where u t,j is the representation of the navigable direction j from the current location obtained similarly to v t,i. The number of available actions, l, varies per location, since graph node connectivity varies. As in , the model predicts the probability p d of each navigable direction d using a bilinear dot product between u t,d and the cross-modal context produced by CM-ATT. Environment Classifier. The environment classifier is a two-layer perceptron with a SoftMax layer as the last layer. Given the latent representation z t (which is h V t in our setting), the classifier generates a probability distribution over the house labels. Implementation Details. In the experiments, we use a 2-layer bi-directional LSTM for the instruction encoder where the size of LSTM cells is 256 units in each direction. The inputs to the encoder are 300-dimensional embeddings initialized randomly. For the visual encoder, we use a 2-layer LSTM with a cell size of 512 units. The encoder inputs are image features derived as mentioned in Section 4. The cross-modal attention layer size is 128 units. The environment classifier has one hidden layer of size 128 units followed by an output layer of size equal to the number of classes. During training, some episodes in the batch are identical to available human demonstrations in the training dataset, where the objective is to increase the agent's likelihood of choosing human actions (behavioral cloning); the rest of the episodes are constructed by sampling from the agent's own policy. Figure 2: Selected tokens from the vocabulary for VLN (left) and NDH (right) tasks which gained more than 40 additional occurrences in the training dataset due to joint-training. In the experiments, unless otherwise stated, we use the entire dialog history from the NDH task for model training. All the results reported in subsequent studies are averages of at least 3 independent runs. Evaluation Metrics.
The agents are evaluated on two datasets, namely Validation Seen that contains new paths from the training environments and Validation Unseen that contains paths from previously unseen environments. The evaluation metrics for VLN task are as follows: Path Length (PL) measures the total length of the predicted path; Navigation Error (NE) measures the distance between the last nodes in the predicted and the reference paths; Success Rate (SR) measures how often the last node in the predicted path is within some threshold distance of the last node in the reference path; Success weighted by Path Length (SPL) (a) measures Success Rate weighted by the normalized Path Length; and Coverage weighted by Length Score (CLS) measures predicted path's conformity to the reference path weighted by length score. For NDH task, the agent's progress is defined as reduction (in meters) from the distance to the goal region at agent's first position versus at its last position . Table 1 shows the of training the navigation model using environment-agnostic learning (EnvAg) as well as multitask learning (MT-RCM). First, both learning methods independently help the agent learn more generalized navigation policy as is evidenced by significant reduction in agent's performance gap between seen and unseen environments. For instance, performance gap for agent's goal progress on NDH task drops from 3.85m to 0.92m using multitask learning and agent's success rate on VLN task between seen and unseen datasets drops from 9.26% to 8.39% using environmentagnostic learning. Second, the two techniques are complementary-the agent's performance when trained with both the techniques simultaneously improves on unseen environments compared to when trained separately. Finally, we note here that MT-RCM + EnvAg outperforms the state-of-theart goal progress of 2.10m on NDH validation unseen dataset by more than 120%. At the same time, it outperforms the equivalent RCM baseline of 40.6% success rate by more than 16% (relative measure) on VLN validation unseen dataset. Next, we conduct studies to examine cross-task transfer using multitask learning alone. One of the main advantages of multitask learning is that under-represented tokens in each of the individual tasks get a significant boost in the number of training samples. Figure 2 illustrates that tokens with less than 40 occurrences end up with sometimes more than 300 occurrences during joint-training. To examine the impact of dialog history in NDH task, we conduct studies with access to different parts of the dialog-the target object t o, the last oracle answer A i, the prefacing navigator question Q i and the full dialog history. Table 2 shows the of jointly training MT-RCM model on VLN and NDH tasks. MT-RCM model learns a generalized policy that consistently outperforms the competing model with access to similar parts of the dialog on previously unseen environments. As noted before, multitask learning significantly reduces the gap between the agent's performance on previously seen and unseen environments for both tasks. Furthermore, we see a consistent and gradual increase in the success rate of MT-RCM on VLN task as it is trained on paths with richer dialog history from the NDH task. This shows that the agent benefits from more complete information about the path implying the importance given by the agent to the language instructions in the task. We also investigate the impact of parameter sharing of the language encoder for both tasks. 
As shown in Table 3, the model with a shared language encoder for the NDH and VLN tasks outperforms the model that has separate language encoders for the two tasks, hence demonstrating the importance of parameter sharing during multitask learning. A more detailed analysis can be found in the Appendix. From Table 1, it can be seen that both VLN and NDH tasks benefit from environment-agnostic learning independently. To further examine the generalization property due to the environment-agnostic objective, we train a model with the opposite objective of learning to correctly predict the navigation environments, by removing the gradient reversal layer (environment-aware learning). Interesting results are observed in Table 4: environment-aware learning leads to overfitting on the training dataset (performance on environments seen during training consistently increases for both tasks), while environment-agnostic learning leads to a more generalizable policy which performs better on previously unseen environments. Figure 3: t-SNE visualization of trajectory encoder's output (1000 random paths across 11 different color-coded environments) for models trained with environment-aware objective (left) versus environment-agnostic objective (right). Figure 3 further shows that due to the environment-aware objective, the model learns to represent visual inputs from the same environment closer to each other while the representations of different environments are farther from each other, resulting in a clustering effect. On the other hand, the environment-agnostic objective leads to a more general representation across different environments, which results in better performance on unseen environments. As discussed in Section 3.2, we conducted studies to shape the reward for the NDH task. The results in Table 5 indicate that incentivizing the agent to get closer to the goal room is better than to the exact goal location, because it is aligned with the objective of the NDH task, which is to reach the room containing the goal object. Detailed ablation is presented in the Appendix showing that the same holds true consistently as the agent is provided access to different parts of the dialog history. In this work, we show that the model trained using the environment-agnostic multitask learning approach learns a generalized policy for the two natural language grounded navigation tasks. It closes down the gap between seen and unseen environments, learns more generalized environment representations and effectively transfers knowledge across tasks, outperforming baselines on both tasks simultaneously by a significant margin. At the same time, the two approaches independently benefit the agent's learning and are complementary to each other. There are possible future extensions to our work: the MT-RCM can further be adapted to other language-grounded navigation datasets, such as those using Street View (e.g., Touchdown). Table 6 presents a more detailed ablation of Table 5 using different parts of dialog history. The results show that agents rewarded for getting closer to the goal room consistently outperform agents rewarded for getting closer to the exact goal location. Table 7 presents a more detailed analysis from Table 3 with access to different parts of dialog history. The models with shared language encoder consistently outperform those with separate encoders. Figure 4: Visualizing performance gap between seen and unseen environments for VLN and NDH tasks. For VLN, the plotted metric is agent's success rate while for NDH, the metric is agent's progress.
As mentioned in Section 5.2, both multitask learning and environment-agnostic learning reduce the agent's performance gap between seen and unseen environments, as demonstrated in Figure 4. | We propose to learn a more generalized policy for natural language grounded navigation tasks via environment-agnostic multitask learning. | 1,010 | scitldr
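To make the gradient-reversal mechanism of Section 3.2 above concrete, here is a minimal PyTorch-style sketch of the reversal layer and the environment classifier. The 128-unit hidden layer follows the implementation details quoted earlier, the 61 output classes follow the number of training house scans, and the 512-dimensional input assumes z_t is the visual encoder state h_V_t; everything else is an assumption, not the authors' released code.

```python
# Minimal sketch of a gradient reversal layer and environment classifier.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # reversed, scaled gradient

class EnvironmentClassifier(nn.Module):
    def __init__(self, feat_dim=512, num_envs=61, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_envs))

    def forward(self, z_t):
        reversed_z = GradReverse.apply(z_t, self.lambd)
        # logits over house identities; the softmax is folded into the
        # cross-entropy loss used as L_env during training
        return self.net(reversed_z)

# training step (sketch): total_loss = L_nav + cross_entropy(env_logits, house_id);
# because of the reversal, the trajectory encoder maximizes L_env while the
# classifier minimizes it.
```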
In this paper we propose to perform model ensembling in a multiclass or a multilabel learning setting using Wasserstein (W.) barycenters. Optimal transport metrics, such as the Wasserstein distance, allow incorporating semantic side information such as word embeddings. Using W. barycenters to find the consensus between models allows us to balance confidence and semantics in finding the agreement between the models. We show applications of Wasserstein ensembling in attribute-based classification, multilabel learning and image captioning generation. These show that the W. ensembling is a viable alternative to the basic geometric or arithmetic mean ensembling. Model ensembling consists in combining many models into a stronger, more robust and more accurate model. Ensembling is ubiquitous in machine learning and yields improved accuracies across multiple prediction tasks such as multi-class or multi-label classification. For instance in deep learning, output layers of Deep Neural Networks(DNNs), such as softmaxes or sigmoids, are usually combined using a simple arithmetic or geometric mean. The arithmetic mean rewards confidence of the models while the geometric means seeks the consensus across models. What is missing in the current approaches to models ensembling, is the ability to incorporate side information such as class relationships represented by a graph or via an embedding space. For example a semantic class can be represented with a finite dimensional vector in a pretrained word embedding space such as GloVe BID28. The models' predictions can be seen as defining a distribution in this label space defined by word embeddings: if we denote p i to be the confidence of a model on a bin corresponding to a word having an embedding x i, the distribution on the label space is therefore p = i p i δ xi. In order to find the consensus between many models predictions, we propose to achieve this consensus within this representation in the label space. In contrast to arithmetic and geometric averaging, which are limited to the independent bins' confidence, this has the advantage of carrying the semantics to model averaging via the word embeddings. More generally this semantic information can be encoded via cost a matrix C, where C ij encodes the dissimilarity between semantic classes i and j, and C defines a ground metric on the label space. To achieve this goal, we propose to combine model predictions via Wasserstein (W.) barycenters BID0, which enables us to balance the confidence of the models and the semantic side information in finding a consensus between the models. Wasserstein distances are a naturally good fit for such a task, since they are defined with respect to a ground metric in the label space of the models, which carry such semantic information. Moreover they enable the possiblity of ensembling predictions defined on different label sets, since the Wasserstein distance allows to align and compare those different predictions. Since their introduction in BID0 W. barycenter computations were facilitated by entropic regularization BID6 and iterative algorithms that rely on iterative Bregman projections BID2. Many applications have used W. barycenters in Natural Language Processing (NLP), clustering and graphics. We show in this paper that W. 
barycenters are effective in model ensembling and in finding a semantic consensus, and can be applied to a wide range of problems in machine learning (Table 1).The paper is organized as follows: In Section 2 we revisit geometric and arithmetic means from a geometric viewpoint, showing that they are 2 and Kullback Leibler divergence KL (extended KL divergence) barycenters respectively. We give a brief overview of optimal transport metric and W. barycenters in Section 3. We highlight the advantages of W. barycenter ensembling in terms of semantic smoothness and diversity in Section 4. Related work on W. barycenters in Machine learning are presented in Section 5. Finally we show applications of Wasserstein ensembling on attribute based classification, multi-label learning and image captioning in Section 6. Normalized and Unnormalized predictions Ensembling. In deep learning, predictions on a label space of fixed size M are usually in one of two forms: a) normalized probabilities: in a multiclass setting, the neural network outputs a probability vector (normalized through softmax), where each bin corresponds to a semantic class; b) unnormalized positive scores: in a multi-label setting, the outputs of M independent logistic units are unnormalized positive scores, where each unit corresponds to the presence or the absence of a semantic class. Model ensembling in those two scenarios has long history in deep learning and more generally in machine learning BID3 BID13 ) as they lead to more robust and accurate models. As discussed in the introduction, two methods have been prominent in model ensembling due to their simplicity: majority vote using the arithmetic mean of predictions, or consensus based using the geometric mean. Revisiting Arithmetic and Geometric Means from a geometric viewpoint. Given m predictions µ, and weights λ ≥ 0 such that m =1 λ = 1, the weighted arithmetic mean is given byμ a = m =1 λ µ, and the weighted geometric mean byμ g = Π m =1 (µ λ).It is instructive to reinterpret the arithmetic and geometric mean as weighted Frechet means (Definition 1) . 2 2 (the 2 Euclidean distance). A less known fact is that the geometric mean corresponds to a Frechet Mean for d = KL, where KL is the extended KL divergence to unnormalized measures: KL(p, q) = i p i log pi qi − p i + q i. We give proofs and properties of arithmetic and geometric mean in Appendix F.Following this geometric viewpoint, in order to incorporate the semantics of the target space in model ensembling, we need to use a distance d that takes advantage of the underlying geometry of the label space via a cost matrix C when comparing positive measures. Optimal transport (OT) metrics such as Wasserstein-2 have this property since they are built on an explicit cost matrix defining pairwise distance between the semantic classes. In this paper we propose to use the Frechet means with Wasserstein distance (d = W 2 2) for model ensembling, i.e. use Wasserstein barycenters BID0 for model ensembling: DISPLAYFORM0 Intuitively, the barycenter looks for a distribution ρ (a histogram) that is close to all the base distributions µ in the Wasserstein sense. In our context transporting the consensus ρ to each individual model µ should have a minimal cost, where the cost is defined by the distance in the word embedding space. 
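Before turning to the computational details, a small sketch may help fix ideas: the two baseline aggregations revisited above, together with a ground cost matrix built from label word embeddings, which is the side information the Wasserstein approach exploits. The embedding array and the model weights λ below are placeholders (assumptions).

```python
# Baseline aggregations and a ground cost from label embeddings (NumPy).
import numpy as np

def arithmetic_mean(mus, lambdas):
    """Weighted arithmetic mean: the l2 Frechet mean of the predictions."""
    return np.tensordot(lambdas, np.asarray(mus), axes=1)

def geometric_mean(mus, lambdas, eps=1e-12):
    """Weighted geometric mean: the (extended-)KL Frechet mean of the predictions."""
    return np.exp(np.tensordot(lambdas, np.log(np.asarray(mus) + eps), axes=1))

def ground_cost(embeddings):
    """C_ij = squared euclidean distance between label embeddings i and j
    (e.g. GloVe vectors for the label vocabulary)."""
    sq = np.sum(embeddings ** 2, axis=1)
    return np.maximum(sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T, 0.0)

# mus: list of m probability vectors over the same label vocabulary,
# lambdas: nonnegative weights summing to one.
```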
Wasserstein distances were originally defined between normalized probability vectors (Balanced OT) BID36 ), but they have been extended to deal with unnormalized measures and this problem is referred to as unbalanced OT BID5 BID14. Motivated by the multi-class and the multi-label ensembling applications, in the following we present a brief overview of W. barycenters in the balanced and unbalanced cases., p represents histograms on source label space Ω S = {x i ∈ R d, i = 1 . . . N}, for e.g words embeddings. Consider similarly q ∈ ∆ M representing histograms whose bins are defined on a target label space Ω T = {y j ∈ R d, j = 1 . . . M}. Consider a cost function c(x, y), (for example c(x, y) = x − y 2 ). Let C be the matrix in ∈ R N ×M such that C ij = c(x i, y j). 1 N denotes a vector with all ones. Let γ ∈ R N ×M be a coupling matrix whose marginals are p and q such that: DISPLAYFORM0 The optimal transport metric is defined as follows: DISPLAYFORM1 When c(x, y) = x − y 2 2, this distance corresponds to the so called Wasserstein−2 distance W 2 2.Unbalanced OT. When p and q are unnormalized and have different total masses, optimal transport metrics have been extended to deal with this unbalanced case. The main idea is in relaxing the set Π(p, q) using a divergence such as the extended KL divergence: KL. BID5 define for λ > 0 the following generalized Wasserstein distance between unnormalized measures: DISPLAYFORM2 Throughout the paper we consider m discrete prediction vectors µ ∈ R N +, = 1... m defined on a discrete space (word embeddings for instance) DISPLAYFORM0 We refer to Ω S as source spaces. Our goal is to find a consensus predictionμ DISPLAYFORM1 Balanced W. Barycenters: Normalized predictions. The W. barycenter BID0 of normalized predictions is defined as follows:μ w = arg min ρ m =1 λ W (ρ, µ), for the Wasserstein distance W defined in equation. Hence one needs to solve the following problem, for m coupling matrices γ, = 1... m: DISPLAYFORM2 Unbalanced W. Barycenters: Unnormalized predictions. Similarly the W. barycenter of unnormalized predictions is defined as follows:μ w = arg min ρ m =1 λ W unb (ρ, µ), for the Generalized Wasserstein distance W unb defined in equation. Hence the unbalanced W. barycenter problem BID5 amounts to solving, for m coupling matrices γ, = 1... m: DISPLAYFORM3 3.3 COMPUTATION VIA ENTROPIC REGULARIZATION AND PRACTICAL ADVANTAGES Entropic Regularized Wasserstein Barycenters Algorithms. The computation of the Wasserstein distance grows super-cubicly in the number of points. This issue was alleviated by the introduction of the entropic regularization BID6 to the optimization problem making it strongly convex. Its solution can be found using scaling algorithms such as the so called Sinkhorn algorithm. For any positive matrix γ, the entropy is defined as follows: DISPLAYFORM4 The entropic regularized OT distances in the balanced and unbalanced case become, for a hyperparameter ε > 0: DISPLAYFORM5 for ε → 0, W ε and W unb,ε converge to the original OT distance, and for higher value of ε we obtain the so called Sinkhorn divergence that allows for more diffuse transport between p and q. Balanced and unbalanced W. barycenters can be naturally defined with the entropic regularized OT distance as follows: DISPLAYFORM6 respectively. This regularization leads to simple iterative algorithms BID2 BID5 ) (for more details we refer the interested reader to BID5 and references therein) for computing W. 
barycenters that are given in Algorithms 1 and 2.Algorithm 1: Balanced Barycenter for Multiclass Ensembling BID2 Inputs: DISPLAYFORM7 Algorithm 2: Unbalanced Barycenter for Multilabel Ensembling BID5 ) DISPLAYFORM8 We see that the output of Algorithm 1 is the geometric mean of K u, = 1... m, where K is a Gaussian kernel with bandwidth ε the entropic regularization parameter. Note v *, = 1... m the values of v at convergence of Algorithm 1. The entropic regularized W. barycenter can be written as follows: exp DISPLAYFORM9. We see from this that K appears as matrix product multiplying individual models probability µ and the quantities v * related to Lagrange multipliers. This matrix vector product with K ensures probability mass transfer between semantically related classes i.e between items that has entries K,ij with high values. Remark 1 (The case K = K = I). As the kernel K in Algorithm 1 approaches I (identity) (this happens when ε → 0), the alternating Bregman projection of BID2 for balanced W. barycenter converges to the geometric meanμ g = Π m =1 (µ) λ.We prove this in Appendix D. When K = I the fixed point of Algorithm 1 reduces to geometric mean, and hence diverges from the W. barycenter. Note that K approaches identity as ε → 0, and in this case we don't exploit any semantics. Wasserstein Ensembling in Practice. Table 1 gives a summary of machine learning tasks that can benefit from Wasserstein Ensembling, and highlights the source and target domains as well as the corresponding kernel matrix K. In the simplest case Ω S = Ω T and N = M for all, this corresponds to the case we discussed in multi-class and multi-labels ensemble learning, W. barycenters allows to balance semantics and confidence in finding the consensus. The case where source and target spaces are different is also of interest, we give here an application example in attribute based classification: µ corresponds to prediction on a set of attributes and we wish to make predictions through the W. barycenter on a set of labels defined with those attributes. See Section 6.1. as we use beam search on the predictions, diversity and smoothness of the predictions become key to the creativity and the composition of the sequence generator in order to go beyond "baby talk" and vanilla language based on high count words in the training set. Hence we need to increase the entropy of the prediction by finding a semantic consensus whose predictions are diverse and smooth on semantically coherent concepts without compromising accuracy. We will show in the following proposition that the W. barycenter allows such aggregation: Proposition 1 (Properties of Wasserstein Barycenters). Let ν be the target distribution (an oracle) defined on a discrete space Ω = {x 1, . . . x K, x j ∈ R d} (word embedding space) and µ, = 1... m be m estimates of ν. Assume W 2 2 (µ, ν) ≤ ε. The W. barycenterμ w of {µ} satisfies the following: 1) Semantic Accuracy (Distance to an oracle). We have: DISPLAYFORM10 2) Diversity. The diversity of the W. barycenter depends on the diversity of the models with respect to the Wasserstein distance (pairwise Wasserstein distance between models): DISPLAYFORM11 3) Smoothness in the embedding space. Define the smoothness energy E (ρ) = DISPLAYFORM12 The W. barycenter is smoother in the embedding space than the individual models. DISPLAYFORM13 Proof. The proof is given in Appendix F.We see from Proposition 1 that the W. barycenter preserves accuracy, but has a higher entropy than the individual models. 
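To make Algorithm 1 concrete, the sketch below builds K = exp(−C/ε) from word embeddings and runs a few iterations of the underlying iterative Bregman projections. It is a simplified illustration rather than the exact implementation: the embedding source, the value of ε, and the five iterations are assumptions consistent with the text. For K close to the identity it returns the geometric mean, as noted in Remark 1, while a larger ε yields the higher-entropy consensus described by Proposition 1.

```python
import numpy as np

def embedding_kernel(emb, eps):
    """K = exp(-C / eps), with C the pairwise squared Euclidean cost between
    l2-normalized embedding vectors; larger eps pushes K further from identity."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sq = (emb ** 2).sum(axis=1)
    C = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T
    return np.exp(-np.maximum(C, 0.0) / eps)

def balanced_barycenter(mus, lambdas, K, n_iter=5, floor=1e-30):
    """Entropic-regularized balanced barycenter of the rows of `mus` (m x N),
    via iterative Bregman projections (the scheme behind Algorithm 1)."""
    m, N = mus.shape
    v = np.ones((m, N))
    for _ in range(n_iter):
        u = mus / np.maximum((K @ v.T).T, floor)   # scaling to match each mu_l
        Ku = np.maximum((K.T @ u.T).T, floor)      # row l holds K^T u_l
        rho = np.exp(lambdas @ np.log(Ku))         # geometric mean of the K^T u_l
        v = rho[None, :] / Ku                      # scaling to match the barycenter
    return rho / rho.sum()                         # renormalize as a safeguard
```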
This entropy increase is due to an improved smoothness on the embedding space: words that have similar semantics will have similar probability mass assigned in the barycenter. The diversity of the barycenter depends on the Wasserstein pairwise distance between the models: the W. barycenter output will be less diverse if the models have similar semantics as measured by the Wasserstein distance. The proof of proposition 1 relies on the notion of convexity along generalized geodesics of the Wasserstein 2 distance BID0. Propositions 2 and 3 in Appendix F give similar for geometric and arithmetic mean, note that the main difference is that the guarantees are given in terms of KL and 2 respectively, instead of W 2.In order to illustrate the diversity and smoothness of the W. barycenter, we give here a few examples of the W. barycenter on a vocabulary of size 10000 words, where the cost matrix is constructed from word synonyms ratings, defined using Power Thesaurus or using GloVe word embeddings BID28. We compute the W. barycenter (using Algorithm 1) between softmax outputs of 4 image captioners trained with different random seeds and objective functions. Figure 4 shows the W. barycenter as well as the arithmetic and geometric mean. It can be seen that the W. barycenter has higher entropy and is smooth along semantics (synonyms or semantics in the GloVe space) and hence more diverse than individual models. Table 2 shows top 15 words of barycenter, arithmetic and geometric means, from which we see that indeed the W. barycenter outputs clusters according to semantics. In order to map back the words x j that have high probability in the W. barycenter to an individual model, we can use the couplings γ as follows: γ ij is the coupling between word j in the barycenter and word i in model. Examples are given in supplement in Table 2: Sample output (top 15 words) of W. barycenter (Algorithm 1), arithmetic and geometric means based on four captioner models. Each row shows a word and a corresponding probability over the vocabulary (as a percentage). W. Barycenter has higher entropy, spreading the probability mass over the synonyms and words related to the top word "car" and downweights the irrelevant objects (exploiting the side information K). Simple averaging techniques, which use only the confidence information, mimick the original model outputs. Figure 4 in Appendix gives a histogram view. Controllable Entropy via Regularization. As the entropic regularization parameter ε increases the distance of the kernel K from identity I increases and the entropy of the optimal couplings γ, (H(γ)) increases as well. Hence the entropy of entropic regularized W. Barycenter is controllable via the entropic regularization parameter ε. In fact since the barycenter can be written asμ w = γ 1 N, one can show that (Lemma 2 in Appendix): DISPLAYFORM14 As epsilon increases the right-hand side of the inequality increases and so does H(μ w). This is illustrated in Tables 3 and 8, we see that the entropy of the (entropic regularized) W. barycenter increases as the distance of the kernel K to identity increases (K − I F increases as ε increases) and the output of the W. barycenter remains smooth within semantically coherent clusters. Table 3: Controllable Entropy of regularized Wasserstein Barycenter (Algorithm 1). Output (top 15 words) for a synonyms-based similarity matrix K under different regularization ε (which controls the distance of K to identity I, K − I F). 
As ε decreases, K − I F also decreases, i.e., K approaches identity matrix, and the entropy of the output of Algorithm 1 decreases. Note that the last column, corresponding to very small entropic regularization, coincides with the output from geometric mean in FIG3 (for K = I, the Algorithm 1 outputs geometric mean as a barycenter). Wasserstein Barycenters in Machine Learning. Optimal transport is a relatively new comer to the machine learning community. The entropic regularization introduced in BID6 fostered many applications and computational developments. Learning with a Wasserstein loss in a multilabel setting was introduced in BID14, representation learning via the Wasserstein discriminant analysis followed in BID12. More recently a new angle on generative adversarial networks learning with the Wasserstein distance was introduced in (; BID15 BID31 . Applications in NLP were pioneered by the work on Word Mover Distance (WMD) on word embeddings of BID23. Thanks to new algorithmic developments BID7 BID2 W. barycenters have been applied to various problems: in graphics BID35, in clustering , in dictionary learning BID33, in topic modeling , in bayesian averaging BID30, and in learning word and sentences embeddings BID26 BID27 etc. Most of these applications of W. barycenter focus on learning balanced barycenters in the embedding space (like learning the means of the clusters in clustering), in our ensembling application we assume the embeddings given to us (such as GloVe word embedding) and compute the barycenter at the predictions level. Finally incorporating side information such as knowledge graphs or word embeddings in classification is not new and has been exploited in diverse ways at the level of individual model training via graph neural networks BID25 BID9, in the framework of W. barycenter we use this side information at the ensemble level. In this Section we evaluate W. barycenter ensembling in the problems of attribute-based classification, multi-label prediction and in natural language generation in image captioning. As a first simple problem we study object classification based on attribute predictions. We use Animals with Attributes which has 85 attributes and 50 classes. We have in our experiments 2 attributes classifiers to predict the absence/presence of each of the 85 attributes independently, based on resnet18 and resnet34 BID18 input features while training only the linear output layer (following the details in Section 6.2). We split the data randomly in 30322 / 3500 / 3500 images for train / validation / test respectively. We train the attribute classifiers on the train split. Based on those two attributes detectors we would like to predict the 50 categories using unbalanced W. barycenters using Algorithm 2. Note that in this case the source domain is the set of the 85 attributes and the target domain is the set of 50 animal categories. For Algorithm 2 we use a columnnormalized version of the binary animal/attribute matrix as K matrix (85 × 50), such that per animal the attribute indicators sum to 1. We selected the hyperparameters ε = 0.3 and λ = 2 on the validation split and report here the accuracies on the test split. Table 4: Attribute-based classification. The W. barycenter ensembling achieves better accuracy by exploiting the cross-domain similarity matrix K, compared to a simple linear-transform of probability mass from one domain to another as for the original models or their simple averages. 
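As an illustration of this setup, the fragment below builds the column-normalized attribute/class matrix K consumed by Algorithm 2 and shows the simple linear attribute-to-class mapping that such a K induces, which is the kind of baseline mapping used for comparison in the next paragraph. The variable names are ours and the snippet is a sketch, not the exact experimental code.

```python
import numpy as np

def column_normalized_K(attr_of_class):
    """attr_of_class: binary (85 x 50) animal/attribute matrix. Each column is
    normalized so that, per animal, the attribute indicators sum to 1."""
    col_sums = np.maximum(attr_of_class.sum(axis=0, keepdims=True), 1)
    return attr_of_class / col_sums

def class_scores_from_attributes(attr_probs, K):
    """Linear transfer of 85 attribute posteriors to 50 class scores (K^T mu);
    the unbalanced barycenter of Algorithm 2 replaces this simple mapping."""
    return K.T @ attr_probs
```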
As a baseline for comparison, we use arithmetic mean (μ a) and geometric mean (μ g) ensembling of the two attribute classifiers resnet18 and resnet34. Then, using the same matrix K as above, we define the probability of category c (animal) as p(c|µ) = K μ (forμ =μ a andμ g resp.). We see from Table 4 that W. barycenter outperforms arithmetic and geometric mean on this task and shows its potential in attribute based classification. For investigating W. barycenters on a multi-label prediction task, we use MS-COCO BID24 with 80 objects categories. MS-COCO is split into training (≈82K images), test (≈35K), and validation (5K) sets, following the Karpathy splits used in the community BID20. From the training data, we build a set of 8 models using'resnet18' and'resnet50' architectures BID18. To ensure some diversity, we start from pretrained models from either ImageNet BID8 ) or Places365 (. Each model has its last fully-connected ('fc') linear layer replaced by a linear layer allowing for 80 output categories. All these pretrained models are fine-tuned with some variations: The'fc' layer is trained for all models, some also fine-tune the rest of the model, while some fine-tune only the'layer4' of the ResNet architecture. These variations are summarized in Table 5. Training of the'fc' layer uses a 10 −3 learning rate, while all fine-tunings use 10 −6 learning rate. All multi-label trainings use ADAM BID21 with (β 1 = 0.9, β 2 = 0.999) for learning rate management and are stopped at 40 epochs. Only the center crop of 224 * 224 of an input image is used once its largest dimension is resized to 256. Table 5: Description of our 8 models built on MS-COCO Evaluation Metric. We use the mean Average Precision (mAP) which gives the area under the curve of P = f (R) for precision P and recall R, averaged over each class. mAP performs a sweep of the threshold used for detecting a positive class and captures a broad view of a multi-label predictor performance. Performances for our 8 models are reported in TAB5. Precision, Recall and F1 for micro/macro are given in TAB10. Our individual models have reasonable performances overall. Arithmetic and geometric means offer direct mAP improvements over our 8 individual models. For unbalanced W. barycenter, the transport of probability mass is completely defined by its matrix K = K in Algorithm 2. We investigated multiple K matrix candidates by defining K(i, j) as (i) the pairwise GloVe distance between categories, (ii) pairwise visual word2vec embeddings distance, (iii) pairwise co-occurence counts from training data. In our experience, it is challenging to find a generic K that works well overall. Indeed, W. barycenter will move mass exactly as directed by K. A generic K from prior knowledge may assign mass to a category that may not be present in some images at test time, and get harshly penalized by our metrics. A successful approach is to build a diagonal K for each test sample based on the top-N scoring categories from each model and assign the average of model posteriors scores K(i, i) = 1 M m p m (i|x) for image x and category i. If a category is not top scoring, a low K(i, i) = ζ value is assigned to it, diminishing its contribution. It gives W. barycenter the ability to suppress categories not deemed likely to be present, and reinforce the contributions of categories likely to be. 
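A sketch of this per-sample diagonal K construction is given below; the floor value ζ and the top-N width are hyperparameters tuned on the validation set, so the defaults shown here are only placeholders.

```python
import numpy as np

def per_sample_diagonal_K(model_probs, top_n=2, zeta=1e-3):
    """model_probs: (M, 80) per-model posteriors for one test image. Categories
    that reach any model's top-N keep K(i, i) = mean posterior over the models;
    every other category gets the small floor zeta (zeta and top_n are tuned on
    validation data in practice; the defaults here are placeholders)."""
    keep = np.zeros(model_probs.shape[1], dtype=bool)
    for probs in model_probs:
        keep[np.argsort(probs)[-top_n:]] = True
    diag = np.where(keep, model_probs.mean(axis=0), zeta)
    return np.diag(diag)
```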
This simple diagonal K gives our best when using the top-2 scoring categories per model (the median number of active class in our training data is about 2) and outperforms arithmetic and geometric means as seen in TAB5. In all our experiments, W. barycenters parameters {ε, λ} in Algorithm 2 and ζ defined above were tuned on validation set (5K). We report on MS-COCO test set (≈35K). In this task of improving our 8 models, W. barycenter offers a solid alternative to commonly used arithmetic and geometric means. Appendix B.2 shows that non-uniform weighting further improves W. ensembling performance. In this task the objective is to find a semantic consensus by ensembling 5 image captioner models. The base model is an LSTM-based architecture augmented with the attention mechanism over the image. In this evaluation we selected captioners trained with cross entropy objective as well as GAN-trained models BID10. The training was done on COCO dataset BID24 using data splits from BID19: training set of 113k images with 5 captions each, 5k validation set, and 5k test set. The size of the vocabulary size is 10096 after pruning words with counts less than 5. The matrix K = K in Algorithm 1 was constructed using word similarities, defined based on (i) GloVe word embeddings, so that K = exp(−C/ε), where cost matrix C is constructed based on euclidean distance between normalized embedding vectors; and (ii) synonym relationships, where we created K based on the word synonyms graph and user votes from Power Thesaurus. The model prediction µ, for = 1,..., 5 was selected as the softmax output of the captioner's LSTM at the current time step, and each model's input was weighted equally: λ = 1/m. Once the barycenter p was computed, the was fed into a beam search (beam size B = 5), whose output, in turn, was then given to the captioner's LSTM and the process continued until a stop symbol (EOS) was generated. In order to exploit the controllable entropy of W. barycenter via the entropic regualrization parameter ε, we also decode using randomized Beam search of BID34, where instead of maintaining the top k values, we sample D candidates in each beam. The smoothness of the barycenter in semantic clusters and its controllable entropy promotes diversity in the ing captions. We baseline the W. barycenter ensembling with arithmetic and geometric means.. The x-axis shows K − I F, which corresponds to a different regularization parameter ε (varied form 1 to 50). We can see that for topK beam search (left panel) the further K is from the identity matrix, the larger the similarity neighborhood of each word, the more diverse are the generated captions (the barycenter has higher entropy), while still remaining semantically close to the ground truth. On the other hand, for randomized beam search (right panel), it is important to maintain a smaller similarity neighborhood, so that the generated sentences are not too different from the referenced ground truth. Controllable entropy and diversity. FIG0 show the comparison of the ensembling methods on the validation set using topK and randomized beam search. The x-axis shows K −I F, which corresponds to a different regularization ε (varied form 1 to 50). We report two n-gram based metrics: CIDEr and SPICE scores, as well as the WMD (Word Mover Distance) similarity BID23, which computes the earth mover distance (the Wasserstein distance) between the generated and the ground truth captions using the GloVe word embedding vectors. 
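The difference between the two decoding schemes can be illustrated in a few lines: given the ensembled next-word distribution at one decoding step (for instance the barycenter output), topK beam search keeps the most likely candidates, whereas the randomized variant samples them, which is where the higher entropy of the barycenter translates into more diverse captions. The snippet below is a simplified, single-step illustration rather than a full beam search.

```python
import torch

def expand_candidates(consensus_probs, k=5, randomized=False):
    """consensus_probs: ensembled next-word distribution (vocab-sized tensor)
    at one decoding step. topK keeps the k most likely words; the randomized
    variant samples k distinct candidates instead, so a higher-entropy
    consensus yields more varied continuations."""
    if randomized:
        idx = torch.multinomial(consensus_probs, num_samples=k, replacement=False)
    else:
        idx = torch.topk(consensus_probs, k).indices
    return idx, consensus_probs[idx]
```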
In topK beam search, as ε increases, causing the entropy to go up, the exact n-grams matching metrics, i.e., CIDEr and SPICE, deteriorate while WMD remains stable. This indicates that while the barycenter-based generated sentences do not match exactly the ground truth, they still remain semantically close to it (by paraphrasing), as indicated by the stability of WMD similarity. The of the GloVe-based barycenter on the test split of COCO dataset are shown in Table 7. In randomized beam search, the increase in entropy of the barycenter leads to a similar effect of paraphrasing but this works only up to a smaller value of ε, beyond which we observe a significant deterioration of the . At that point all the words become neighbors and in a very diffused barycenter, close to a uniform distribution. This diffusion effect is smaller for the synonyms-based K since there are only a certain number of synonyms for each word, thus the maximum neighborhood is limited. Table 7: Performance of GloVe-based W. barycenter on COCO test split using topK beam search versus Geometric and Arithmetic ensembling. While the generated sentences based on W. barycenter do not match exactly the ground truth (lower CIDEr), they remain semantically close to it, while being more diverse (e.g., paraphrased) as indicated by the higher entropy and stable WMD.Robustness of W. Barycenter to Semantic Perturbations. Finally, the right panel of FIG3, shows the robustness of the W. barycenter to random shuffling of the µ values, within semantically coherent clusters. Note that the size of those clusters increases as K moves away from identity. The show that barycenter is able to recover from those perturbations, employing the side information from K, while both the arithmetic and geometric means (devoid of such information) are confused by this shuffling, displaying a significant drop in the evaluation metrics. Comparison of ensembling methods when the predictions of the input models are shuffled according to the neighborhood structure defined by K. It can be seen that the W. Barycenter ensembling is able to recover from the word shuffling and produce better captions then the simple averaging methods, which are not able to exploit the provided side information. Human Evaluation. We performed human evaluation on Amazon MTurk on a challenging set of images out of context of MS-COCO BID10. We compared three ensembling techniques: arithmetic, geometric and W. barycenter. For W. barycenter we used the similarity matrix K defined by visual word2vec BID22. For the three models we use randomized beam search. We asked MTurkers to give a score for each caption on a scale 1-5 and choose the best captions based on correctness and detailedness. Captions examples are given in Fig. 6 (Appendix). FIG4 shows that W. barycenter has an advantage over the basic competing ensembling techniques. We showed in this paper that W. barycenters are effective in model ensembling in machine learning. In the unbalanced case we showed their effectiveness in attribute based classification, as well as in improving the accuracy of multi-label classification. In the balanced case, we showed that they promote diversity and improve natural language generation by incorporating the knowledge of synonyms or word embeddings. Table 8: Sample output (top 20 words) of barycenter for different similarity matrices K based on GloVe (columns titles denote the distance of K from identity K − I F and corresponding .). 
Each column shows a word and its corresponding probability over the vocabulary. Note that the last column coincides with the output from geometric mean. Table 8 shows the effect of entropic regularization ε on the ing distribution of the words of W. barycenter using GloVe embedding matrix. As K moves closer to the identity matrix, the entropy of barycenter decreases, leading to outputs that are close/identical to the geometric mean. On the other hand, with a large entropic regularization, matrix K moves away from identity, becoming an uninformative matrix of all 1's. This eventually leads to a uniform distribution which spreads the probability mass equally across all the words. This can be also visualized with a histogram in Figure 5, where the histograms on the bottom represent distributions that are close to uniform, which can be considered as failure cases of W. barycenter, since the image captioner in this case can only generate meaningless, gibberish captions. In TAB1 we show a mapping from a few top words in the barycenter output (for similarity matrix K based on synonyms) to the input models. In other words, each column defines the words in the input models which have the greatest influence on each of top 3 words in the barycenter output. In Figure 6 we present a few captioning examples showing qualitative difference between the considered ensembling techniques. Figure 4: Visualization of the word distributions of W. barycenter, arithmetic and geometric means based on four captioning models, whose input image is shown on top (one of the ground-truth human-annotated captions for this image reads: A police car next to a pickup truck at an intersection). The captioner generates a sentence as a sequence of words, where at each step the output is a distribution over the whole vocabulary. The top four histograms show a distribution over the vocabulary from each of the model at time t = 3 during the sentence generation process. The bottom three histograms show the ing distribution over the vocabulary for the ensembles based on W. Barycenter, arithmetic and geometric means. It can be seen that the W. Barycenter produces high entropy distribution, spreading the probability mass over the synonyms of the word "car" (which is the top word in all the four models), based on the synonyms similarity matrix K.Figure 5: Visualization of the word distributions of W. barycenter for different similarity matrices K based on GloVe (rows denote the distance of K from identity K − I F and corresponding). Large entropic regularization generates K close to uninformative matrices of all 1's. This eventually leads to a barycenter which is close to a uniform distribution spreading the probability mass almost equally across all the words. TAB1: Mapping from a few top words in the barycenter output (for similarity matrix K based on synonyms) to the input models. For each word in the left columns, the remaining columns show the contributing words and the percent of contribution. 
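The mapping in TAB1 can be computed directly from the couplings: for model ℓ, a column of γℓ gives the mass that each of the model's words sends to a given barycenter word. A small sketch follows, assuming the coupling matrix is kept from the barycenter computation, with rows indexing model words and columns indexing barycenter words.

```python
import numpy as np

def top_contributors(gamma, j, vocab, top=3):
    """gamma: coupling matrix of one model (rows: model words, cols: barycenter
    words); j: index of a high-probability barycenter word. Returns the model
    words that send the most mass to word j, with their share of that mass."""
    col = gamma[:, j]
    order = np.argsort(col)[::-1][:top]
    total = col.sum() + 1e-12
    return [(vocab[i], float(col[i] / total)) for i in order]
```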
BA: a television is placed on the curb of the road AM: a TV sits on the side of a street GM: a television sitting on the side of a street GT: an empty sidewalk with an abandoned television sitting alone BA: a car that is parked at the station AM: a car that has been shown in a subway GM: a car that is sitting on the side of a road GT: a car at the bottom of the stair well BA: a person is sitting on the sidewalk with a tent AM: a couple of people sitting on benches next to a building GM: a couple of people sitting on the side of a street GT: a woman is sitting with a guitar near a man that is sitting on the ground in front of a tent BA: a sheep sitting in a car looking out the window AM: a white sheep is sitting in a vehicle GM: a close up of a sheep in a car GT: a sheep sitting at the steering wheel of a car with its hooves on the wheels Figure 6: Examples of captions for several images. BA: Wasserstein Barycenter, AM: Arithmetic mean, GM: Geometric mean, GT: Ground truth. We evaluate our models using micro and macro versions of precision, recall, and F1-measure as covered in multi-label prediction metrics study from . For these measures, a threshold of 0.5 is commonly used to predict a label as positive in the community's published . Macro precision is an average of per-class precisions while micro precision is computed by computing the ratio of all true positives across all image samples over the number of all positive classes in a dataset. Therefore a macro (or per-class) precision'P-C' is defined as 1 C i P i while a micro (or overall precision)'P-O' is defined as i T Pi i T Pi+F Pi where T P i and F P i are true and false positives respectively. Per-class and overall versions for R and F1 are defined similarly. We also employ mean Average Precision (mAP) which gives the area under the curve of P = f (R) averaged over each class. Unlike P,R and F1, mAP inherently performs a sweep of the threshold used for detecting a positive class and captures a broader view of a multi-label predictor's performance. Performances for our 8 models and previously published are reported in TAB5 in the paper. Our models have reasonable performances overall. Ensembling given in Tab. 6 are using uniformly weighted models, i.e. λ = 1 m where m is the number of models. However, in practice, arithmetic and geometric mean ensembling usually use weighted ensembles of models The weights are then optimized and established on a small validation set before being used for ensembling on a test set. A well-known embodiment of this type of approach is Adaboost BID13 where weights are dynamically defined at each pass of training wrt to the accuracy of base models. Here, we follow a much simpler but similar approach by defining the performance of each model as the mean average precision (mAP) on the validation set. mAP is used to define λ such that λ = mAP mAP. λ are then applied to the models' scores for arithmetic, geometric mean and W.Barycenter ensemblings. Tab. 11 reports mAP for each ensembling technique over the MS-COCO test set (35150 images). Note that the λ weights definition is based on the final metric evaluation, mAP in this case. For other tasks such as classification, accuracy or any other valid metric can be employed to compute the λ weights. It must be noted that the weights are computed with respect to the ultimate performance metric at hand. Tab. 11 reveals clearly that such approach of weighting models by their performance benefits arithmetic and W.Barycenter ensembling for this task. 
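For completeness, the weighting described above is straightforward to implement. The sketch below computes λℓ = mAPℓ / Σℓ' mAPℓ' from validation mAPs and applies it to an arithmetic-mean ensemble; the same weights are passed to the geometric mean and to Algorithm 2 for the W. barycenter.

```python
import numpy as np

def map_based_weights(val_maps):
    """lambda_l = mAP_l / sum_l' mAP_l', with mAP measured on the validation split."""
    val_maps = np.asarray(val_maps, dtype=float)
    return val_maps / val_maps.sum()

def weighted_arithmetic(scores, lambdas):
    """scores: (M, 80) per-model class scores for one image."""
    return np.einsum("l,lk->k", lambdas, scores)
```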
Both methods leverage the confidence of the underlying models and the mAP weighting of models will reinforce the contributions of better performing models. Geometric means ensembling is not significantly impacted by non-uniform λ since it is mostly relying on consensus of the models, not their confidence. We conclude that weighting indeed helps performance and keeps a performance advantage for W. Barycenter over the alternatives arithmetic and geometric means. Table 11: multi-label models ensembling mAP on MS-COCO test set (35150 images). Performancebased weighting helps both arithmetic and W.Barycenter ensembling, the latter retaining its performance vantage. At iteration 0, the holds since we have: DISPLAYFORM0 Assume the holds at time t. Let us prove it for t + 1, following the updates of Algorithm 1: DISPLAYFORM1 QED.For ε > 0, Feydy et al BID11 showed recently that the Sinkhorn divergence defines an interpolation between the MMD distance (Maximum mean discrepancy BID16) and the Wasserstein Distance. Hence for ε > 0 Algorithm 1 provides still an interesting solution that can be seen as an interpolation between the original (unregularized) Wasserstein Barycenter and the MMD Barycenter (Frechet barycenter for d = MMD). DISPLAYFORM2 As λ goes to infinity this unbalanced cost converges to the Hellinger distance: We make the following theoretical and practical remarks on how to improve this computational complexity to reach an almost linear dependency on N using low rank approximation of the kernel matrix K, and parallelization on m machines: DISPLAYFORM3 1. Dependency on Maxiter: For the number of iterations we found that Maxiter = 5 is enough for convergence, which makes most of the computational complexity dependent on m and N.2. Dependency on N and low rank approximation: The main computational complexity comes from the matrix vector multiply K u that is of O(N 2). Note that this complexity can be further reduced since the kernel matrix K is often low rank. Therefore we can be written K = ΦΦ where Φ ∈ R N ×k, where k N, which allows to compute this product as follows ΦΦ u that has a lower complexity O(N k). Φ can be computed using Nystrom approximation or random Fourier features. Hence potentially on can get an algorithm with complexity O(mN k), where k has a logarithmic dependency on N. This was studied recently in BID1.3. Dependency on m and parallelization: Regarding the dependency on m, as noted in BID5 BID2 the algorithm is fully parallelizable which would lead to a computational complexity of O(N k) by using simply m machines.4. GPU and Batch version: Practically, the algorithm implemented takes advantage of matrix vector products' speed on GPU. The algorithm can be further accelerated by computing Sinkhorn divergences in batches as pointed in BID11 ). We evaluated the time complexity of the GPU implementation of Wasserstein Barycenters in pytorch on our multi-label prediction experiments using MS-COCO test set (35150 samples). Note that we used a vanilla implementation of Algorithm 2, i.e without parallelization, batching, or low rank approximation. Results and comments for these wall clock timings can be found in Tab. 13. As it can be observed, we need to use Maxiter = 5 on a GPU-V100 to reach below 4ms/image for Wasserstein ensembling. This is not a major overhead and can be further improved as discussed previously by using parallelization, batching and low rank approximation. In TAB14, each timing was done over the whole test set; each timing repeated 5 times. 
We report means and standard deviations of total wall clock times for ensembling 8 models. The last column on the right is the average timing per image (in ms) for the W. Barycenter. The number of W. Barycenter iterations (Maxiter) was varied over 1, 5 and 10 to show its impact. We report timing numbers on two GPU architectures, NVIDIA Tesla K80 and V100. W. Barycenters leverage the GPU while arithmetic and geometric means do not. Timings cover only the computation of the means themselves; no data fetching or data preparation is included in these timings. As expected, the wall clock cost of W. Barycenters is several orders of magnitude higher than that of arithmetic and geometric means. The choice of GPU does not impact the arithmetic and geometric means as they do not use it in our implementation. The Barycenter computation sees a speed-up from K80 to V100, as the V100 is much better at reducing wall time for larger numbers of iterations. Proposition 2 (Properties of the Geometric Mean). The following properties hold for the geometric mean: 1. Geometric mean is the Frechet mean of KL: the geometric mean is the Frechet mean with respect to the extended KL divergence. First order optimality condition: DISPLAYFORM0 DISPLAYFORM1 This gives us the result: DISPLAYFORM2 Proof. Let γ ∈ R^{N×M}_+ be a coupling between p ∈ ∆_M and q ∈ ∆_N with q_j > 0, so that γ^T 1_N = p and γ 1_M = q. We have: DISPLAYFORM3 DISPLAYFORM4 Now the entropy of a convex combination is higher than the convex combination of entropies (the entropy is concave): DISPLAYFORM5 γ_ij log(γ_ij) − log(q_i) q_i. Hence: DISPLAYFORM6 γ_ij log(γ_ij) − H(q). Hence: DISPLAYFORM7 γ_ij log(γ_ij).
While deep learning has been incredibly successful in modeling tasks with large, carefully curated labeled datasets, its application to problems with limited labeled data remains a challenge. The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through a combination of multitask learning and self-supervised learning on unlabeled data. We trained an end-to-end audio feature extractor based on WaveNet that feeds into simple, yet versatile task-specific neural networks. We describe several easily implemented self-supervised learning tasks that can operate on any large, unlabeled audio corpus. We demonstrate that, in scenarios with limited labeled training data, one can significantly improve the performance of three different supervised classification tasks individually by up to 6% through simultaneous training with these additional self-supervised tasks. We also show that incorporating data augmentation into our multitask setting leads to even further gains in performance. Deep neural networks (DNNs) are the bedrock of state-of-the-art approaches to modeling and classifying auditory data (BID0; van den BID20). However, these data-hungry neural architectures are not always matched to the available training resources, and the creation of large-scale corpora of audio training data is costly and time-consuming. This problem is exacerbated when training directly on the acoustic waveform, where the input is high-dimensional and noisy. While labeled datasets are quite scarce, we have access to virtually infinite sources of unlabeled data, which makes effective unsupervised learning an enticing research direction. Here we aim to develop a technique that enables models to generalize better by incorporating auxiliary self-supervised auditory tasks during model training (BID4). Our main contributions in this paper are twofold: the successful identification of appropriate self-supervised audio-related tasks, and the demonstration that they can be trained jointly with supervised tasks in order to significantly improve performance. We also show how to use WaveNet as a general feature extractor capable of providing rich audio representations using raw waveform data as input. We hypothesize that by learning multi-scale hierarchical representations from raw audio, WaveNet-based models are capable of adapting to subtle variations within tasks in an efficient and robust manner. We explore this framework on three supervised classification tasks - audio tagging, speaker identification and speech command recognition - and demonstrate that one can leverage unlabeled data to improve performance on each task. We further show that these results pair well with more common data augmentation techniques, and that our proposed self-supervised tasks can also be used as a pre-training stage to provide performance improvements through transfer learning. Prevailing wisdom suggests that a single model can only learn multiple tasks if they are related in some way, with some underlying structure common to them (BID3). Such structure has been described for decades in the literature on sensory environments, with Gabor filters and gammatone filters underlying much of visual and auditory processing, respectively (BID14). Perhaps models trained to accomplish many tasks might be able to synergize to uncover this underlying structure, enabling better single-task performance with smaller amounts of data per-task.
We follow a relatively common approach to multitask learning aimed at learning a single non-trivial general-purpose representation BID2 ). Examples of other intriguing approaches can be found in BID11 BID19 ).Much as shared representations allow models to pool data from different datasets, the problem persists that the cleanly labeled datasets that have permitted numerous breakthroughs in deep learning are painstaking to come by. One promising solution to label scarcity uses self-supervised learning to take advantage of unlabeled data. Self-supervised learning has shown promising in the visual domain, leveraging unlabeled data using tasks like inpainting for image completion BID13; BID18 ), image colorization BID7; BID22 ), and motion segmentation BID17 ). Despite these efforts, little previous work has taken advantage of self-supervision in the audio domain. We implemented an end-to-end audio processing network that finds a common embedding of the acoustic waveform within a "trunk" network modeled after the WaveNet architecture BID20 ). The embedding is then processed by simple, independent, task-specific "head" networks. The trunk and head networks are trained jointly for each experiment described below. Our experiments consist primarily of models in which a single supervised "main" task is trained jointly with 0 to 3 self-supervised "auxiliary" tasks. Briefly (see appendix for details), our WaveNet trunk consists of 3 blocks of 6 dilation stacks each. Each dilation stack is comprised of a gate and filter module, with 64 convolutional units per module. The outputs from the filter and gate modules are (elementwise) multiplied and then summed with the input to the stack. These choices yield a WaveNet trunk with an effective receptive field length of 1 + 3(2 6 − 1) = 190 samples or approximately 12 ms. We tested our setup on three distinct supervised tasks: audio tagging, speaker identification and speech command recognition. Each is trained using a separate labeled dataset along with up to three self-supervised tasks trained with unlabeled data. Our description of the tasks is necessarily brief, with details relegated to the appendix. The audio tagging task is trained on the FSDKaggle2018 dataset collected through Freesound. This dataset contains a total of 11,073 files provided as uncompressed PCM 16 bit, 44.1 kHz, monaural audio which is further subdivided into a training set and a test set. Before being fed to the network, each audio segment is first cropped to 2 seconds and padded with zeros if the source clip is too short. Since the WaveNet trunk produces embeddings with a temporal structure, this task averages the output across time to produce a single output vector for the entire audio sequence, which in turn feeds into a single fully-connected layer with 512 units and ReLU nonlinearity, followed by a softmax output layer. Training is done by minimizing the cross entropy between the softmax outputs and one-hot encoded classification labels. The speaker identification task is trained on the VoxCeleb-1 dataset BID12 ) which has 336 hours of data from 1251 speakers. Individual clips are sourced from interviews with celebrities in a variety of different settings. Data from each individual is sourced from multiple interviews and one interview is held-out to produce a test set with 15 hours of data. Before being fed to the network, each audio segment is first cropped to 2 seconds in duration. 
Given the large variations in the audio quality of the samples in this dataset, we found it necessary to also normalize the clips and apply a pre-emphasis filter. This task's head architecture features a global average pooling layer, followed by 2-layer perceptron with 1024 units per layer, batch normalization and a ReLU nonlinearity. The output is then passed to a softmax layer and evaluated using a cross-entropy loss. The speech command recognition task is trained on the Speech Commands dataset BID21 ).The entire dataset consists of 65,000 utterances of 30 short words, formatted in one-second WAVE format files. There is a total of 12 categories; 10 words (yes, no, up, down, left, right, on, off, stop, go), with the rest classified as either unknown or silence. The speech command recognition head is a stack of three 1D convolutions. Between each convolutional layer we used batch normalization and dropout, followed by a ReLU nonlinearity. The three convolution layers have widths of 100, 50, and 25 and strides of 16, 8, and 4, respectively. The output is passed to a final softmax layer and evaluated using a cross-entropy loss. We selected next-step prediction, noise reduction, and upsampling for our self-supervised, auxiliary tasks. They are easily implemented and can be synergistically paired with our main (supervised) tasks. The self-supervised tasks were trained on both the main task's data and unlabeled data sampled from the 100-hour and 500-hour versions of the Librispeech dataset BID15 ). This dataset was only used to train the auxiliary tasks. All three auxiliary tasks share the same basic head architecture. They begin with two convolutional layers with 128 filters and ReLU nonlinearities and a final linear convolutional layer with 1 output unit feeding into a regression-type loss function (see appendix for details). Our primary goal was to develop a multitask framework which is completely generic for audio, making it prudent to work with waveform inputs as opposed to, say, "high level" feature representations like spectrograms. While convolutional architectures trained on spectral/cepstral representations of audio can indeed give better classification performance than models trained directly on raw waveforms, they significantly restrict the range of audio processing tasks which they can perform. Thus, state-of-the-art baseline models for different tasks may vary wildly in their network architectures, subsequently limiting the amount of information that can be gained from a smaller pool of potential self-supervised tasks. If the goal is to understand the interaction between the learning dynamics of disparate tasks, then the focus should be on models which make the fewest assumptions about the representation of inputs. As such, we emphasize improvements in performance afforded by multitask learning relative to a single task baseline trained on raw audio. Closing the performance gap between models trained using spectral representations (e.g. BID8 ; BID12) and those trained on waveforms is left to future work. Joint training with three self-supervised tasks proved beneficial for each of our three supervised tasks TAB0. For the audio tagging task, multitask training improved MAP@3 score by.019 and top-1 classification rate by 1.62%, simply by including additional unsupervised tasks without increasing training data. 
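A sketch of this shared auxiliary head in PyTorch (the framework used for all experiments) is shown below. The kernel width of 1 and the 64-channel trunk output are assumptions consistent with the architecture description, not reproductions of the released code.

```python
import torch.nn as nn

class AuxiliaryHead(nn.Module):
    """Shared head for the self-supervised tasks: two 128-filter convolutions
    with ReLU, then a linear convolution with a single output unit that feeds a
    regression loss."""
    def __init__(self, trunk_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(trunk_channels, 128, kernel_size=1), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=1), nn.ReLU(),
            nn.Conv1d(128, 1, kernel_size=1),
        )

    def forward(self, x):                # x: (batch, trunk_channels, time)
        return self.net(x).squeeze(1)    # (batch, time) regression output
```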
Since the auxiliary tasks can be trained with unlabeled data, we gradually incorporated larger versions of Librispeech into our training regimen to investigate the effects of self-supervision. With each increase in unlabeled dataset size, we saw a further improvement on both performance metrics, with a MAP@3 increase of up to.056 with an additional 500 hours of unlabeled data. Using the same setup, but swapping the audio tagging task with either the speech command classification or the speaker identification task showed a similar, though more measured, trend with increasing amounts of unlabeled data. Speech command classification went from 93.05% in the baseline model to 93.78% when trained with an additional 500 hours of unlabeled data. Speaker identification on the VoxCeleb dataset was a much more challenging task for the network overall. There, top-5 classification performance peaked at 75.22%, up from the baseline performance of 73.81%. The above show that multitask learning can improve the performance of any of our supervised tasks without any additional labeled data. To get an idea of the significance of the observed effects, we decided to compare the above with another common technique for improving label efficiency: data augmentation. We trained a single task model on audio tagging with two different kinds of data augmentation: pitch shifting and additive noise (with SNRs of 10 to 15 dB). We found that pitch-shift augmentation produced an increase in MAP@3 of.066, comparable to our largest multitask benefits TAB1. Noise augmentation showed a somewhat smaller MAP@3 increase of.024. Interestingly, the performance gains from augmenting with noisy data are similar to those obtained by training the main task jointly with a self-supervised noise-reduction task. Finally, training with both pitch-shift augmentation and additional self-supervised tasks yielded a MAP@3 increase of.089 -our highest performance from any experiment -suggesting that both methods for improving label efficiency are complementary. In computer vision, the scope of transfer learning has been enlarged to include knowledge transfer from self-supervised tasks trained on unlabeled data to supervised tasks BID4 ). This inspired us to reconsider our multitask learning approach from a transfer learning perspective. In this variant of transfer learning, we jointly "pre-train" our three self-supervised tasks on purely unlabeled data to convergence. We follow this up with a fine-tuning stage, using a much smaller quantity of labeled data, to train a supervised task. We carried out transfer learning experiments on the same trio of tasks tackled above in our multitask learning experiments. The (see TAB2) favor transfer learning over simultaneously training all tasks together. The present work developed the following theme: faced with training an audio task on limited quantities of labeled data, one can expect performance gains by jointly training the supervised task together with multiple self-supervised tasks using a WaveNet-based model operating directly on raw audio waveforms. We have shown that the improved performance on the supervised tasks scales with the quantity of unlabeled data and can be used to supplement existing data augmentation schemes. Predicated on the performance gains observed on three fairly distinct audio classification tasks, we expect our approach to generalize to a broad range of supervised audio tasks. Our methodology and suggest many interesting directions for further development. 
Is there a limit on the number of auxiliary tasks that a single model at fixed capacity can benefit from, and can one place bounds on the expected improvement in performance? Intuitively, we expect that when our multitasking model learns to simultaneously forecast frames of audio, remove noise from the audio and perform upsampling, it must have formed a representation of the audio. What is this representation? Can it be extracted or distilled? A proper exploration of these questions should enable us to handle a broader range of auditory tasks. Although audio tag classification does not require the fine temporal resolution found in raw audio waveforms, our chosen auxiliary tasks (or any arbitrary auditory task for which we may desire our model to be sufficient) require higher temporal resolutions. To satisfy this, we chose to build our model following the WaveNet architecture (van den).WaveNet models are autoregressive networks capable of processing high temporal resolution raw audio signals. Models from this class are ideal in cases where the complete sequence of input samples is readily available. WaveNet models employ causal dilated convolutions to process sequential inputs in parallel, making these architectures faster to train compared to RNNs which can only be updated sequentially. ing small, task-specific neural networks built atop a task-agnostic trunk. The trunk architecture principally follows the structure of WaveNet, with several blocks of stacked, dilated, and causal convolutions between every convolution layer. Outputs from the trunk are fed into task-specific heads (details in Section 6.1).As shown Figure 6.1, our WaveNet trunk is composed of N blocks, where each block consists of S dilated causal convolution layers, with dilation factors increasing from 1 to 2 S − 1, residual connections and saturating nonlinearities. We label the blocks using b = 1, · · ·, N. We use indices ∈ [1 + (b − 1)S, bS] to label layers in block b. Each layer,, of the WaveNet trunk consists of a "residual atom" which involves two computations, labeled as "Filter" and "Gate" in the figure. Each residual atom computation produces a hidden state vector h and a layer output x defined via DISPLAYFORM0 where denotes element-wise products, represents the regular convolution operation, denotes dilated convolutions with a dilation factor of 2 mod bS if is a layer in block b+1, σ denotes the sigmoid function and W The first (= 0) layer -represented as the initial stage marked "1×1 Conv" in Figure 6.1 -applies causal convolutions to the raw audio waveforms X = (X 1, X 2, · · ·, X T), sampled at 16 kHz, to produce an output DISPLAYFORM1 Given the structure of the trunk laid out above, any given block b has an effective receptive field of 1 + b(2 S − 1). Thus the total effective receptive field of our trunk is τ = 1+N (2 S −1). Following an extensive hyperpameter search over various configurations, we settled on N = 3 blocks comprised of S = 6 layers each for our experiments. Thus our trunk has a total receptive field of τ = 190, which corresponds to about 12 milliseconds of audio sampled at 16kHz. As indicated above, each task-specific head is a simple neural network whose input data is first constrained to pass through a trunk that it shares with other tasks. Each head is free to process this input to its advantage, independent of the other heads. Each task also specifies its own objective function, as well as a task-specific optimizer, with customized learning rates and annealing schedules, if necessary. 
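To make the trunk description concrete, here is a minimal PyTorch sketch of one gated, dilated, causal convolution layer (a "residual atom") and of one block of S = 6 such layers. The kernel width of 2, the tanh on the filter branch, and the omission of the extra 1x1 residual projection used in the original WaveNet are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAtom(nn.Module):
    """One gated, dilated, causal convolution layer of the trunk."""
    def __init__(self, channels=64, dilation=1, kernel_size=2):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation     # pad only on the left => causal
        self.filter = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (batch, channels, time)
        h = F.pad(x, (self.left_pad, 0))
        out = torch.tanh(self.filter(h)) * torch.sigmoid(self.gate(h))
        return x + out                                   # residual connection

# one block: S = 6 layers with dilations 1, 2, ..., 32; the trunk stacks N = 3 such
# blocks, giving the 1 + 3(2^6 - 1) = 190-sample receptive field quoted above
block = nn.Sequential(*[ResidualAtom(dilation=2 ** i) for i in range(6)])
```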
We arbitrarily designate supervised tasks as the primary tasks and refer to any self-supervised tasks as auxiliary tasks. In the experiments reported below, we used "audio tagging" as the primary supervised classification task and "next-step prediction", "noise reduction" and "upsampling" as auxiliary tasks training on various amounts of unlabeled data. The parameters used for each of the task specific heads can be found in TAB3 of the accompanying supplement to this paper. Figure 2: The head architectures were designed to be simple, using only as few layers as necessary to solve the task. Simpler head architectures force the shared trunk to learn a representation suitable for multiple audio tasks. The next-step prediction task can be succinctly formalized as follows: given a sequence {x t−τ +1, · · ·, x t} of frames of an audio waveform, predict the next value x t+1 in the sequence. This prescription allows one to cheaply obtain arbitrarily large training datasets from an essentially unlimited pool of unlabeled audio data. Our next-step prediction head is a 2-layer stack of 1 × 1 convolutional layers with ReLU nonlinearities in all but the last layer. The first layer contains 128 units, while the second contains a single output unit. The head takes in τ frames of data from the trunk, where τ is the trunk's effective receptive field, and produces an output which represents the model's prediction for the next frame of audio in the sequence. The next-step head treats this as a regression problem, using the mean squared error of the difference between predicted values and actual values as a loss function, i.e. given inputs {x t−τ +1, · · ·, x t}, the head produces an output y t from which we compute a loss L MSE (t) = (y t − x t+1) 2 and then aggregate over the frames to get the total loss. We would like to note that the original WaveNet implementation treated next-step prediction as a classification problem, instead predicting the bin-index of the audio following a µ-law transform. We found that treating the task as a regression problem worked better in multitask situations but make no claims on the universality of this choice. In defining the noise reduction task, we adopt the common approach of treating noise as an additive random process on top of the true signal: if {x t} denotes the clean raw audio waveform, we obtain the noisy version viax t:= x t + ξ t where ξ t an arbitrary noise process. For the denoising task, the model is trained to predict the clean sample, x t, given a window x t− 1 2 (τ −1), · · ·,x t+ 1 2 (τ −1) of noisy samples. Formally speaking, the formulation of the next-step prediction and denoising tasks are nearly identical, so it should not be surprising to find that models with similar structures are well-adapted to solving either task. Thus, our noise reduction head has a structure similar to the next-step head. It is trained to minimize a smoothed L1 loss between the clean and noisy versions of the waveform inputs, i.e. for each frame t, the head produces an outputŷ t, and we compute the loss DISPLAYFORM0 and then aggregate over frames to obtain the total loss. We used the smooth L1 loss because it provided a more stable convergence for the denoising task than mean squared error. In the same spirit as the denoising task, one can easily create an unsupervised upsampling task by simply downsampling the audio source. The downsampled signal serves as input data while the original source serves as the target. 
Upsampling is an analog of the "super-resolution" task in computer vision. For the upsampling task, the original audio was first downsampled to 4 kHz using the resample method in the librosa python package BID10 ). To keep the network operating at the same time scale for all auxiliary tasks, we repeated every time-point of the resampled signal 4 times so as to mimic the original signal's 16 kHz sample rate. The job of the network is then to infer the high frequency information lost during the transform. Again, given the formal similarity of the upsampling task to the next-step prediction and noisereduction tasks, we used an upsampling head with a structure virtually identical to those described above. As with the denoising task, we used a smooth L1 loss function (see eqn. above) to compare the estimated upsampled audio with the original. We trained the model using raw audio waveform inputs taken from the FSDKaggle2018 and Librispeech datasets. All code for the experiments described here was written in the PyTorch framework BID16. All audio samples were first cropped to two seconds in duration and downsampled to 16 kHz. To normalize for the variation in onset times for different utterances, the 2 seconds were randomly selected from the original clip. Samples shorter than 2 seconds were zero padded. We then scaled the inputs to lie in the interval [−1, 1]. The noise-reduction task required noisy inputs which we obtained by adding noise sampled from ChiME3 datasets BID1 at a randomly chosen SNR from 10dB to 15dB. The noise types include booth (BTH), on the bus (BUS), cafe (CAF), pedestrian area (PED), and street junction (STR)). Starting with the main task, we first performed a hyperparameter search over the number of blocks in the trunk, the number of layers per block, the number of layers and units of the main task head, and the learning rate. We tried several values for the number of blocks in the trunk, ranging from 2 to 5. We also varied the number of dilated convolution layers in each block from 3 to 8. We found that the performance and training characteristics of the network were largely unaffected by the exact architecture specifications, though learning rate was often important. We then searched over the depth and width of each auxiliary task head, as well as the learning rate for the head. These searches were done by pairing each task individually with the main task. The final choice of hyper-parameters was made by picking values which gave the best possible performance on both the main task and the auxiliary tasks, heuristically favoring performance on the main task. We jointly trained the model on all tasks simultaneously by performing a forward pass for each task, computing the loss function for each task, and then calculating the gradients based on a weighted sum of the losses, viz. L total = i α i L i, where the sum runs over all the tasks. We used a uniform weighting strategy in our current experiments. More advanced weighting strategies showed no benefit for the tagging task. We used the "Adam" optimizer BID6 with parameters β 0 = 0.9, β 1 = 0.99, ε = 10 −8. The learning rate was decayed by a factor of.95 every 5 epochs, as this was found to improve convergence. We used a batch size of 48 across all experiments, since it was the largest batch size permissible by the computational resources available to us. Adding the noise reduction and upsampling tasks required a separate forward propagation of the noisy and downsampled audio, respectively. 
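The joint update can be summarized in a few lines. The sketch below assumes each task supplies its own (input, target) pair, which is why the denoising and upsampling tasks trigger separate forward passes through the shared trunk, and it uses the uniform weights and single Adam step described above.

```python
import torch

def multitask_step(batch, trunk, heads, losses, alphas, optimizer):
    """One joint update. `batch` maps a task name to its (input, target) pair:
    clean audio for next-step prediction, noisy audio for denoising, and 4 kHz
    audio for upsampling. Task losses are combined with weights alphas and a
    single optimizer step (Adam in the paper) is taken on the weighted sum."""
    optimizer.zero_grad()
    total = 0.0
    for name, (x, target) in batch.items():
        pred = heads[name](trunk(x))
        total = total + alphas[name] * losses[name](pred, target)
    total.backward()
    optimizer.step()
    return float(total)
```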
Exact values for all important parameters of the model can be found in TAB3. | Label-efficient audio classification via multi-task learning and self-supervision | 1,012 | scitldr |
Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories. In this context calculating the softmax normalizing constant is prohibitively expensive. This has spurred a growing literature of efficiently computable but biased estimates of the softmax. In this paper we present the first two unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and does not require extra work at the end of each epoch). We compare our unbiased methods' empirical performance to the state-of-the-art on seven real world datasets, where they comprehensively outperform all competitors. Under the softmax model 1 the probability that a random variable y takes on the label ∈ {1, ..., K}, is given by p(y = |x; W) = e where x ∈ R D is the covariate, w k ∈ R D is the vector of parameters for the k-th class, and W = [w 1, w 2, ..., w K] ∈ R D×K is the parameter matrix. Given a dataset of N label-covariate pairs D = {(y i, x i)} N i=1, the ridge-regularized maximum log-likelihood problem is given by DISPLAYFORM0 where W 2 denotes the Frobenius norm. This paper focusses on how to maximize when N, K, D are all large. Having large N, K, D is increasingly common in modern applications such as natural language processing and recommendation systems, where N, K, D can each be on the order of millions or billions BID15 BID6 BID4.A natural approach to maximizing L(W) with large N, K, D is to use Stochastic Gradient Descent (SGD), sampling a mini-batch of datapoints each iteration. However if K, D are large then the O(KD) cost of calculating the normalizing sum K k=1 e x i w k in the stochastic gradients can still be prohibitively expensive. Several approximations that avoid calculating the normalizing sum have been proposed to address this difficulty. These include tree-structured methods BID2 BID7 BID9, sampling methods BID1 BID14 BID10 and self-normalization BID0. Alternative models such as the spherical family of losses that do not require normalization have been proposed to sidestep the issue entirely BID13. BID11 avoid calculating the sum using a maximization-majorization approach based on lower-bounding the eigenvalues of the Hessian matrix. All 2 of these approximations are computationally tractable for large N, K, D, but are unsatisfactory in that they are biased and do not converge to the optimal W * = argmax L(W).Recently BID16 managed to recast as a double-sum over N and K. This formulation is amenable to SGD that samples both a datapoint and class each iteration, reducing the per iteration cost to O(D). The problem is that vanilla SGD when applied to this formulation is unstable, in that the gradients suffer from high variance and are susceptible to computational overflow. BID16 deal with this instability by occasionally calculating the normalizing sum for all datapoints at a cost of O(N KD). Although this achieves stability, its high cost nullifies the benefit of the cheap O(D) per iteration cost. The goal of this paper is to develop robust SGD algorithms for optimizing double-sum formulations of the softmax likelihood. We develop two such algorithms. The first is a new SGD method called U-max, which is guaranteed to have bounded gradients and converge to the optimal solution of for all sufficiently small learning rates. 
The second is an implementation of Implicit SGD, a stochastic gradient method that is known to be more stable than vanilla SGD and yet has similar convergence properties BID18. We show that the Implicit SGD updates for the doublesum formulation can be efficiently computed and has a bounded step size, guaranteeing its stability. We compare the performance of U-max and Implicit SGD to the (biased) state-of-the-art methods for maximizing the softmax likelihood which cost O(D) per iteration. Both U-max and Implicit SGD outperform all other methods. Implicit SGD has the best performance with an average log-loss 4.29 times lower than the previous state-of-the-art. In summary, our contributions in this paper are that we:1. Provide a simple derivation of the softmax double-sum formulation and identify why vanilla SGD is unstable when applied to this formulation (Section 2). 2. Propose the U-max algorithm to stabilize the SGD updates and prove its convergence (Section 3.1). 3. Derive an efficient Implicit SGD implementation, analyze its runtime and bound its step size (Section 3.2). 4. Conduct experiments showing that both U-max and Implicit SGD outperform the previous state-of-the-art, with Implicit SGD having the best performance (Section 4). In order to have an SGD method that samples both datapoints and classes each iteration, we need to represent as a double-sum over datapoints and classes. We begin by rewriting in a more convenient form, DISPLAYFORM0 The key to converting into its double-sum representation is to express the negative logarithm using its convex conjugate: DISPLAYFORM1 where u = − log(−v) and the optimal value of u is u * (a) = log(a). Applying to each of the logarithmic terms in yields DISPLAYFORM2 is our double-sum representation that we seek to minimize and the optimal solution for u i is DISPLAYFORM3 Clearly f is a jointly convex function in u and W. In Appendix A we prove that the optimal value of u and W is contained in a compact convex set and that f is strongly convex within this set. Thus performing projected-SGD on f is guaranteed to converge to a unique optimum with a convergence rate of O(1/T) where T is the number of iterations BID12. The challenge in optimizing f using SGD is that it can have problematically large magnitude gradients. DISPLAYFORM0 where DISPLAYFORM1 is the inverse of the probability of class j being sampled either through i or k, and n j = |{i : y i = j, i = 1, ..., N}|. The corresponding stochastic gradient is: DISPLAYFORM2 If u i equals its optimal value u * i (W) = log(1 + k =yi e x i (w k −wy i) ) then e x i (w k −wy i)−ui ≤ 1 and the magnitude of the N (K − 1) terms in the stochastic gradient are bounded by DISPLAYFORM3 1 and the magnitude of the gradients can become extremely large. Extremely large gradients lead to two major problems: (a) the gradients may computationally overflow floating-point precision and cause the algorithm to crash, (b) they in the stochastic gradient having high variance, which leads to slow convergence 3. In Section 4 we show that these problems occur in practice and make vanilla SGD both an unreliable and inefficient method 4.The sampled softmax optimizers in the literature BID1 BID14 BID10 do not have the issue of large magnitude gradients. Their gradients are bounded by N (K − 1) x i 2 due to their approximations to u * i (W) always being greater than x i (w k − w yi). For example, in one-vs-each BID17, u * i (W) is approximated by log(1 + e x i (w k −wy i) ) > x i (w k − w yi). 
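The instability is easy to reproduce numerically. The sketch below evaluates the exponent x_i^T(w_k − w_{y_i}) − u_i that drives the sampled gradient (the N(K−1) and sampling-probability factors are omitted), once with u_i at its optimal value u_i*(W) and once with a stale u_i = 0; the second case yields an astronomically large factor. The sampled-softmax methods above avoid this by construction, since their surrogates for u_i*(W) always exceed x_i^T(w_k − w_{y_i}).

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 20, 1000
x = rng.normal(size=D)
W = rng.normal(scale=2.0, size=(D, K))
y = 0                                             # true class of this datapoint

margins = x @ W - x @ W[:, y]                     # x^T (w_k - w_y) for every class k
k = int(np.argmax(np.where(np.arange(K) == y, -np.inf, margins)))
u_opt = np.log1p(np.exp(margins).sum() - 1.0)     # u_i*(W) = log(1 + sum_{k != y} e^{...})

for u in (u_opt, 0.0):                            # optimal vs. stale dual variable
    factor = np.exp(margins[k] - u)               # drives the sampled-gradient magnitude
    print(f"u_i = {u:7.2f}  ->  exp(x^T(w_k - w_y) - u_i) = {factor:.3e}")
```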
However, as they only approximate u * i (W) they cannot converge to the optimal W *.The goal of this paper is to design reliable and efficient SGD algorithms for optimizing the doublesum formulation f (u, W) in. We propose two such methods: U-max (Section 3.1) and an implementation of Implicit SGD (Section 3.2). But before we introduce these methods we should establish that f is a good choice for the double-sum formulation. The double-sum in is different to that of BID16. Their formulation can be derived by applying the convex conjugate substitution to instead of. The ing equations are DISPLAYFORM0 Although both double-sum formulations can be used as a basis for SGD, our formulation tends to have smaller magnitude stochastic gradients and hence faster convergence. To see this, note that typically x i w yi = argmax k {x i w k} and so theū i, x i w yi and e x i wy i −ūi terms in are of the greatest magnitude. Although at optimality these terms should roughly cancel, this will not be the case during the early stages of optimization, leading to stochastic gradients of large magnitude. In contrast the function f ik in only has x i w yi appearing as a negative exponent, and so if x i w yi is large then the magnitude of the stochastic gradients will be small. In Section 4 we present numerical confirming that our double-sum formulation leads to faster convergence. As explained in Section 2.2, vanilla SGD has large gradients when u i x i (w k − w yi). This can only occur when u i is less than its optimum value for the current W, since u * DISPLAYFORM0 and so the gradients are bounded. It also brings u i closer 5 to its optimal value for the current W and thereby decreases the the objective f (u, W). This is exactly the mechanism behind the U-max algorithm -see Algorithm 1 in Appendix C for its pseudocode. U-max is the same as vanilla SGD except for two modifications: (a) u i is set equal to log(1 + e DISPLAYFORM1 DISPLAYFORM2, then U-max with threshold δ converges to the optimum of, and the rate is at least as fast as SGD with same learning rate, in expectation. Proof. The proof is provided in Appendix D.U-max directly resolves the problem of extremely large gradients. Modification (a) ensures that δ ≥ x i (w k − w yi) − u i (otherwise u i would be increased to log(1 + e x i (w k −wy i) )) and so the magnitude of the U-max gradients are bounded above by N (K − 1)e δ x i 2.In U-max there is a trade-off between the gradient magnitude and learning rate that is controlled by δ. For Theorem 1 to apply we require that the learning rate η t ≤ δ 2 /(4B 2 f). A small δ yields small magnitude gradients, which makes convergence fast, but necessitates a small η t, which makes convergence slow. Another method that solves the large gradient problem is Implicit SGD 6 BID3 BID18. Implicit SGD uses the update equation DISPLAYFORM0 where θ (t) is the value of the t th iterate, f is the function we seek to minimize and ξ t is a random variable controlling the stochastic gradient such that ∇f (θ) = E ξt [∇f (θ, ξ t)]. The update differs from vanilla SGD in that θ (t+1) appears on both the left and right side of the equation, DISPLAYFORM1. 6 Also known to as an "incremental proximal algorithm" BID3 whereas in vanilla SGD it appears only on the left side. In our case θ = (u, W) and DISPLAYFORM2 Although Implicit SGD has similar convergence rates to vanilla SGD, it has other properties that can make it preferable over vanilla SGD. 
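To illustrate what it means for θ^(t+1) to appear on both sides of the update, the sketch below solves a one-dimensional implicit step by bisection on h(t) = t − θ + η∇f(t), which is monotone increasing whenever the sampled loss is convex. This is only a generic illustration: the actual update used in this paper reduces the D-dimensional problem to a scalar one along x_i and involves the Lambert W function (Appendix F). The qualitative behaviour it shows — a step that stays bounded even when the gradient at the current iterate is enormous and the learning rate is large — is precisely the property that makes Implicit SGD attractive here.

```python
import numpy as np

def implicit_step(theta, grad, eta, lo=-50.0, hi=50.0, tol=1e-10):
    """Solve t = theta - eta * grad(t) by bisection; h(t) = t - theta + eta*grad(t)
    is strictly increasing for a convex sampled loss, so the root is unique."""
    h = lambda t: t - theta + eta * grad(t)
    assert h(lo) < 0.0 < h(hi), "bracket does not contain the root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Toy convex sampled loss f(t) = 0.5 t^2 + exp(t), so grad(t) = t + exp(t).
grad = lambda t: t + np.exp(t)
theta = 10.0
for eta in (0.1, 1.0, 10.0):
    print(f"eta = {eta:5.1f}: explicit step = {-eta * grad(theta):11.1f}, "
          f"implicit iterate = {implicit_step(theta, grad, eta):6.3f}")
```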
It is known to be more robust to the learning rate BID18, which important since a good value for the learning rate is never known a priori. Another property, which is of particular interest to our problem, is that it has smaller step sizes. Proposition 1. Consider applying Implicit SGD to optimizing DISPLAYFORM3 and so the Implicit SGD step size is smaller than that of vanilla SGD.Proof. The proof is provided in Appendix E.The bound in Proposition 1 can be tightened for our particular problem. Unlike vanilla SGD whose step size magnitude is exponential in x i (w k − w yi) − u i, as shown in FORMULA8, for Implicit SGD the step size is asymptotically linear in x i (w k − w yi) − u i. This effectively guarantees that Implicit SGD cannot suffer from computational overflow. Proposition 2. Consider the Implicit SGD algorithm where in each iteration only one datapoint i and one class k = y i is sampled and there is no ridge regularization. The magnitude of its step size in w is O(DISPLAYFORM4 Proof. The proof is provided in Appendix F.2.The difficulty in applying Implicit SGD is that in each iteration one has to compute a solution to. The tractability of this procedure is problem dependent. We show that computing a solution to is indeed tractable for the problem considered in this paper. The details of these mechanisms are laid out in full in Appendix F.Proposition 3. Consider the Implicit SGD algorithm where in each iteration n datapoints and m classes are sampled. Then the Implicit SGD update θ (t+1) can be computed to within accuracy in runtime O(n(n + m)(D + n log( −1))).Proof. The proof is provided in Appendix F.3.In Proposition 3 the log(−1) factor comes from applying a first order method to solve the strongly convex Implicit SGD update equation. It may be the case that performing this optimization is more expensive than computing the x i w k inner products, and so each iteration of Implicit SGD may be significantly slower than that of vanilla SGD or U-max. However, in the special case of n = m = 1 we can use the bisection method to give an explicit upper bound on the optimization cost. Proposition 4. Consider the Implicit SGD algorithm with learning rate η where in each iteration only one datapoint i and one class k = y i is sampled and there is no ridge regularization. Then the Implicit SGD iterate θ (t+1) can be computed to within accuracy with only two D-dimensional vector inner products and at most log 2 (DISPLAYFORM5 Proof. The proof is provided in Appendix F.1For any reasonably large dimension D, the cost of the two D-dimensional vector inner products will outweigh the cost of the bisection, and Implicit SGD will have roughly the same speed per iteration as vanilla SGD or U-max. In summary, Implicit SGD is robust to the learning rate, does not have overflow issues and its updates can be computed in roughly the same time as vanilla SGD. Two sets of experiments were conducted to assess the performance of the proposed methods. The first compares U-max and Implicit SGD to the state-of-the-art over seven real world datasets. The second investigates the difference in performance between the two double-sum formulations discussed in Section 2.3. We begin by specifying the experimental setup and then move onto the . Data. We used the MNIST, Bibtex, Delicious, Eurlex, AmazonCat-13K, Wiki10, and Wiki-small datasets 7, the properties of which are summarized in TAB1 . 
Most of the datasets are multi-label and, as is standard practice , we took the first label as being the true label and discarded the remaining labels. To make the computation more manageable, we truncated the number of features to be at most 10,000 and the training and test size to be at most 100,000. If, as a of the dimension truncation, a datapoint had no non-zero features then it was discarded. The features of each dataset were normalized to have unit L 2 norm. All of the datasets were pre-separated into training and test sets. We only focus on the performance on the algorithms on the training set, as the goal in this paper is to investigate how best to optimize the softmax likelihood, which is given over the training set. Algorithms. We compared our algorithms to the state-of-the-art methods for optimizing the softmax which have runtime O(D) per iteration 8. The competitors include Noise Contrastive Estimation (NCE) BID14, Importance Sampling (IS) BID1 and One-Vs-Each (OVE) BID17. Note that these methods are all biased and will not converge to the optimal softmax MLE, but something close to it. For these algorithms we set n = 100, m = 5, which are standard settings 9. For Implicit SGD we chose to implement the version in Proposition 4 which has n = 1, m = 1. Likewise for U-max we set n = 1, m = 1 and the threshold parameter δ = 1. The ridge regularization parameter µ was set to zero for all algorithms. Epochs and losses. Each algorithm is run for 50 epochs on each dataset. The learning rate is decreased by a factor of 0.9 each epoch. Both the prediction error and log-loss are recorded at the end of 10 evenly spaced epochs over the 50 epochs. Learning rate. The magnitude of the gradient differs in each algorithm, due to either under-or overestimating the log-sum derivative from. To set a reasonable learning rate for each algorithm on 7 All of the datasets were downloaded from http://manikvarma.org/downloads/XC/ XMLRepository.html, except Wiki-small which was obtained from http://lshtc.iit. demokritos.gr/.8 BID16 have runtime O(N KD) per epoch, which is equivalent to O(KD) per iteration. This is a factor of K slower than the methods we compare against. 9 We also experimented setting n = 1, m = 1 in these methods and there was virtually no difference except the runtime was slower. For example, in Appendix G we plot the performance of NCE with n = 1, m = 1 and n = 100, m = 5 applied to the Eurlex dataset for different learning rates and there is very little difference between the two. Table 2: Tuned initial learning rates for each algorithm on each dataset. The learning rate in 10 0,±1,±2,±3 with the lowest log-loss after 50 epochs using only 10% of the data is displayed. Vanilla SGD applied to AmazonCat, Wiki10 and Wiki-small suffered from overflow with a learning rate of 10 −3, but was stable with smaller learning rates (the largest learning rate for which it was stable is displayed). Wiki-small Figure 1: The x-axis is the number of epochs and the y-axis is the log-loss from calculated at the current value of W. each dataset, we ran them on 10% of the training data with initial learning rates η = 10 0,±1,±2,±3. The learning rate with the best performance after 50 epochs is then used when the algorithm is applied to the full dataset. The tuned learning rates are presented in Table 2. Note that vanilla SGD requires a very small learning rate, otherwise it suffered from overflow. Comparison to state-of-the-art. 
Plots of the performance of the algorithms on each dataset are displayed in Figure 1 with the relative performance compared to Implicit SGD given in TAB2. The Implicit SGD method has the best performance on virtually all datasets. Not only does it converge faster in the first few epochs, it also converges to the optimal MLE (unlike the biased methods that prematurely plateau). On average after 50 epochs, Implicit SGD's log-loss is a factor of 4.29 lower than the previous state-of-the-art. The U-max algorithm also outperforms the previous state-of-theart on most datasets. U-max performs better than Implicit SGD on AmazonCat, although in general Implicit SGD has superior performance. Vanilla SGD's performance is better than the previous state-of-the-art but worse than U-max and Implicit SGD. The difference in performance between vanilla SGD and U-max can largely be explained by vanilla SGD requiring a smaller learning rate to avoid computational overflow. The sensitivity of each method to the initial learning rate can be seen in Appendix G, where the of running each method on the Eurlex dataset with learning rates η = 10 0,±1,±2,±3 is presented. The are consistent with those in Figure 1, with Implicit SGD having the best performance for most learning rate settings. For learning rates η = 10 3,4 the U-max log-loss is extremely large. This can be explained by Theorem 1, which does not guarantee convergence for U-max if the learning rate is too high. Comparison of double-sum formulations. FIG2 illustrates the performance on the Eurlex dataset of U-max using the proposed double-sum in compared to U-max using the double-sum of in. The proposed double-sum clearly outperforms for all 10 learning rates η = 10 0,±1,±2,−3,−4, with its 50 th -epoch log-loss being 3.08 times lower on average. This supports the argument from Section 2.3 that SGD methods applied to the proposed double-sum have smaller magnitude gradients and converge faster. In this paper we have presented the U-max and Implicit SGD algorithms for optimizing the softmax likelihood. These are the first algorithms that require only O(D) computation per iteration (without extra work at the end of each epoch) that converge to the optimal softmax MLE. Implicit SGD can be efficiently implemented and clearly out-performs the previous state-of-the-art on seven real world datasets. The is a new method that enables optimizing the softmax for extremely large number of samples and classes. So far Implicit SGD has only been applied to the simple softmax, but could also be applied to any neural network where the final layer is the softmax. Applying Implicit SGD to word2vec type models, which can be viewed as softmaxes where both x and w are parameters to be fit, might be particularly fruitful. 10 The learning rates η = 10 3,4 are not displayed in the FIG2 for visualization purposes. It had similar behavior as η = 10 2. We first establish that the optimal values of u and W are bounded. Next, we show that within these bounds the objective is strongly convex and its gradients are bounded. Lemma 1 (BID16). The optimal value of W is bounded as W * 2 DISPLAYFORM0 Proof. DISPLAYFORM1 Rearranging gives the desired . Lemma 2. The optimal value of u i is bounded as u * i ≤ B u where B u = log(1 + (K − 1)e 2BxBw ) and B x = max i {x i 2} Proof. DISPLAYFORM2 W and u i ≤ B u then f (u, W) is strongly convex with convexity constant greater than or equal to min{exp(−B u), µ}.Proof. 
Let us rewrite f as DISPLAYFORM3 where θ = (u, w 1, ..., w k) ∈ R N +KD with a i and b ik being appropriately defined. The Hessian of f is DISPLAYFORM4 where e i is the i th canonical basis vector, 0 N is an N -dimensional vector of zeros and 1 KD is a KD-dimensional vector of ones. It follows that DISPLAYFORM5 W and u i ≤ B u then the 2-norm of both the gradient of f and each stochastic gradient f ik are bounded by DISPLAYFORM6 Proof. By Jensen's inequality max DISPLAYFORM7 Using the from Lemmas 1 and 2 and the definition of f ik from, DISPLAYFORM8 and for j indexing either the sampled class k = y i or the true label y i, DISPLAYFORM9 we have DISPLAYFORM10 We can write the equation for L(W) from as (where we have set µ = 0 for notational simplicity), DISPLAYFORM0 Here e i v = v i ∈ R is a variable that is explicitly kept track of with DISPLAYFORM1 k =yi e x i (w k −wy i) (with exact equality in the limit as t → ∞). Clearly v i in stochastic composition optimization has a similar role as u i has in our formulation for f in.If i, k are sampled with k = y i in stochastic composition optimization then the updates are of the form BID20 w yi = w yi + η t N K e DISPLAYFORM2 where z k is a smoothed value of w k. These updates have the same numerical instability issues as vanilla SGD on f in: it is possible that Algorithm 1: U-max DISPLAYFORM0, number of classes K, number of datapoints N, learning rate η t, class sampling probability β k = N n k +(N −n k)(K−1), threshold parameter δ > 0, bound B W on W such that W 2 ≤ B W and bound B u on u such that u i ≤ B u for i = 1,..., N Output: DISPLAYFORM1 In this section we will prove the claim made in Theorem 1, that U-max converges to the softmax optimum. Before proving the theorem, we will need a lemma. Lemma 5. For any δ > 0, if u i ≤ log(1+e x i (w k −wy i) )−δ then setting u i = log(1+e DISPLAYFORM0 Proof. As in Lemma 3, let θ = (u, w 1, ..., w k) ∈ R N +KD. Then setting u i = log(1 + e x i (w k −wy i) ) is equivalent to setting θ = θ + ∆e i where e i is the i th canonical basis vector and ∆ = log(1 + e x i (w k −wy i) ) − u i ≥ δ. By a second order Taylor series expansion DISPLAYFORM1 for some λ ∈. Since the optimal value of u i for a given value of W is u * i (W) = log(1 + k =yi e x i (w k −wy i) ) ≥ log(1+e x i (w k −wy i) ), we must have ∇f (θ+∆e i) e i ≤ 0. From Lemma 3 we also know that DISPLAYFORM2 Putting in bounds for the gradient and Hessian terms in, DISPLAYFORM3 Now we are in a position to prove Theorem 1.Proof of Theorem 1. Let θ (t) = (u (t), W (t) ) ∈ Θ denote the value of the t th iterate. Here Θ = {θ : W 2 2 ≤ B 2 W, u i ≤ B u} is a convex set containing the optimal value of f (θ). DISPLAYFORM4 If indices i, k are sampled for the stochastic gradient and u i ≤ log(1 + e x i (w k −wy i) ) − δ, then the value of f at the t + 1 st iterate is bounded as DISPLAYFORM5. Taking expectations with respect to i, k, DISPLAYFORM6 Finally let P denote the projection of θ onto Θ. Since Θ is a convex set containing the optimum we have f (P (θ)) ≤ f (θ) for any θ, and so DISPLAYFORM7 which shows that the rate of convergence in expectation of U-max is at least as fast as that of standard SGD. Proof of Theorem 2. Let f (θ, ξ) be m-strongly convex for all ξ. The vanilla SGD step size is η t ∇f (θ (t), ξ t ) 2 where η t is the learning rate for the t th iteration. The Implicit SGD step size is η t ∇f (θ (t+1), ξ t ) 2 where DISPLAYFORM0 )/η t and so it must be the case that ∇f (θ DISPLAYFORM1 2 . 
Our desired follows: DISPLAYFORM2 where the first inequality is by Cauchy-Schwarz and the second inequality by strong convexity. In this section we will derive the updates for Implicit SGD. We will first consider the simplest case where only one datapoint (x i, y i) and a single class is sampled in each iteration with no regularizer. Then we will derive the more complicated update for when there are multiple datapoints and sampled classes with a regularizer. F.1 SINGLE DATAPOINT, SINGLE CLASS, NO REGULARIZER Equation for the stochastic gradient for a single datapoint and single class with µ = 0 is DISPLAYFORM0 The Implicit SGD update corresponds to finding the variables optimizing DISPLAYFORM1 where η is the learning rate and the tilde refers to the value of the old iterate (, Eq. 6). Since f ik is only a function of u i, w k, w yi the optimization reduces to DISPLAYFORM2 The optimal value of w k, w yi must deviate from the old valuew k,w yi in the direction of x i. Furthermore we can observe that the deviation of w k must be exactly opposite that of w yi, that is: DISPLAYFORM3 for some a ≥ 0. The optimization problem reduces to min ui,a≥0 DISPLAYFORM4 We'll approach this optimization problem by first solving for a as a function of u i and then optimize over u i. Once the optimal value of u i has been found, we can calculate the corresponding optimal value of a. Finally, substituting a into will give us our updated value of W. We solve for a by setting its derivative equal to zero in DISPLAYFORM0 The solution for a can be written in terms of the principle branch of the Lambert W function P, DISPLAYFORM1 Substituting the solution to a(u i) into FORMULA0, we now only need minimize over u i: DISPLAYFORM2 where we used the fact that e −P (z) = P (z)/z. The derivative with respect to u i in FORMULA0 is DISPLAYFORM3 where to calculate ∂ ui a(u i) we used the fact that ∂ z P (z) = P (z) z(1+P (z)) and so DISPLAYFORM4 Bisection method for u i We can solve for u i using the bisection method. Below we show how to calculate the initial lower and upper bounds of the bisection interval and prove that the size of the interval is bounded (which ensures fast convergence).Start by calculating the derivative in at u i =ũ i. If the derivative is negative then the optimal u i is lower bounded byũ i. An upper bound is provided by DISPLAYFORM5 In the first inequality we set a(u i) = 0, since by the envelop theorem the gradient of u i is monotonically increasing in a. In the second inequality we used the assumption that u i is lower bounded byũ i. Thus if the derivative in is negative at DISPLAYFORM6 then the size of the interval must be less than log, sinceũ i ≥ 0. Otherwise the gap must be at most log(2(K −1)e x i (w k −wy i) ) −ũ i = log(2(K −1))+x i (w k −w yi)−ũ i. Either way, the gap is upper bounded by log(2(K − 1)) + |x i (w k −w yi) −ũ i |. Now let us consider if the derivative in is positive at u i =ũ i. Then u i is upper bounded byũ i. Denoting a as the optimal value of a, we can lower bound u i using DISPLAYFORM7 where the first inequality comes dropping the (u i −ũ i) 2 term due to the assumption that u i <ũ i. Recall FORMULA0, DISPLAYFORM8 The solution for a is strictly monotonically increasing as a function of the right side of the equation. Thus replacing the right side with an upper bound on its value in an upper bound on a. 
Substituting the bound for u i, DISPLAYFORM9 Substituting this bound for a into yields DISPLAYFORM10 Thus if the derivative in is postive at u i =ũ i then log(K − 1) + x i (w k −w yi) − 2ηN x i 2 2 ≤ u i ≤ũ i. The gap between the upper and lower bound isũ i −x i (w k −w yi)+2ηN x i 2 2 −log(K−1). In summary, for both cases of the sign of the derivative in at u i =ũ i we are able to calculate a lower and upper bound on the optimal value of u i such that the gap between the bounds is at most |ũ i − x i (w k −w yi)| + 2ηN x i 2 2 + log(K − 1). This allows us to perform the bisection method where for > 0 level accuracy we require only log 2 (−1)+log 2 (|ũ i −x i (w k −w yi)|+2ηN x i 2 2 + log(K − 1)) function evaluations. Here we will prove that the step size magnitude of Implicit SGD with a single datapoint and sampled class with respect to w is bounded as O(x i (w k −w yi) −ũ i ). We will do so by considering the two cases u i ≥ũ i and u i <ũ i separately, where u i denotes the optimal value of u i in the Implicit SGD update andũ i is its value at the previous iterate. Case: u i ≥ũ i Let a denote the optimal value of a in the Implicit SGD update. From a = a(u i) = P (e DISPLAYFORM0 2) ). Now using the fact that P (z) = O(log(z)), DISPLAYFORM1 Putting together the two cases, DISPLAYFORM0 The actual step size in w is ±a DISPLAYFORM1 The Implicit SGD update when there are multiple datapoints, multiple classes, with a regularizer is similar to the singe datapoint, singe class, no regularizer case described above. However, there are a few significant differences. Firstly, we will require some pre-computation to find a low-dimensional representation of the x values in each mini-batch. Secondly, we will integrate out u i for each datapoint (not w k). And thirdly, since the dimensionality of the simplified optimization problem is large, we'll require first order or quasi-Newton methods to find the optimal solution. The first step is to define our mini-batches of size n. We will do this by partitioning the datapoint indices into sets S 1,..., S J with S j = {j : = 1, ..., n} for j = 1,..., N/n, S J = {J : = 1, ..., N mod n}, S i ∩ S j = ∅ and ∪ J j=1 S j = {1, ..., N}.Next we define the set of classes C j which can be sampled for the j th mini-batch. The set C j is defined to be all sets of m distinct classes that are not equal to any of the labels y for points in the mini-batch, that is, DISPLAYFORM0 Now we can write down our objective from in terms of an expectation of functions corresponding to our mini-batches: DISPLAYFORM1 where j is sampled with probability p j = |S j |/N and C is sampled uniformly from C j and DISPLAYFORM2 The value of the regularizing constant β k is such that E[I[k ∈ C ∪ S j]β k ] = 1, which requires that DISPLAYFORM3 The Implicit SGD update corresponds to solving DISPLAYFORM0 where η is the learning rate and the tilde refers to the value of the old iterate (, Eq. 6). Since f j,C is only a function of u Sj = {u i : i ∈ S j} and W j,C = {w k : k ∈ S j ∪ C} the optimization reduces to DISPLAYFORM1 The next step is to analytically minimize the u Sj terms. The optimization problem in decomposes into a sum of separate optimization problems in u i for i ∈ S j, DISPLAYFORM2 Setting the derivative of u i equal to zero yields the solution DISPLAYFORM3 where P is the principle branch of the Lambert W function. Substituting this solution into our optimization problem and simplifying yields DISPLAYFORM4 where we have used the identity e −P (z) = P (z)/z. 
We can decompose into two parts by splitting W j,C = W j,C + W ⊥ j,C, its components parallel and perpendicular to the span of {x i : i ∈ S j} respectively. Since the leading term in only depends on W j,C, the two ing sub-problems are DISPLAYFORM5 Let us focus on the perpendicular component first. Simple calculus yields the optimal value w DISPLAYFORM6 Moving onto the parallel component, let the span of {x i : i ∈ S j} have an orthonormal basis 11 DISPLAYFORM7 D×n with x i = V j b i for some b i ∈ R n. With this basis we can write DISPLAYFORM8 n which reduces the parallel component optimization problem to DISPLAYFORM9 where A j,C = {a k : k ∈ S j ∪ C} ∈ R (n+m)×n and DISPLAYFORM10 The e b i (a k −ay i) factors come from DISPLAYFORM11 since V j is an orthonormal basis. To optimize we need to be able to take the derivative: DISPLAYFORM0 where we used that ∂ z P (z) = P (z) z(1+P (z)) and e −P (z) = P (z)/z. To complete the calculation of the derivate we need, DISPLAYFORM1.In order to calculate the full derivate with respect to A j,C we need to calculate b i a k for all i ∈ S j and k ∈ S j ∪ C. This is a total of n(n + m) inner products of n-dimensional vectors, costing O(n 2 (n + m)). To find the optimum of we can use any optimization procedure that only uses gradients. Since is strongly convex, standard first order methods can solve to accuracy in O(log( −1)) iterations (, Sec. 9.3). Thus once we can calculate all of the terms in, we can solve it to accuracy in runtime O(n 2 (n + m) log(−1)).Once we have solved for A j,C, we can reconstruct the optimal solution for the parallel component of w k as w k =w k + V j a k. Recall that the solution to the perpendicular component is w DISPLAYFORM2 If the features x i are sparse, then we'd prefer to do a sparse update to w, saving computation time. We can achieve this by letting DISPLAYFORM3 where γ k is a scalar and r k a vector. Updating w k =w k + V j a k + 1 1+µβ k /2w ⊥ k is equivalent to γ k =γ k · 1 1 + µβ k /2 r k =r k + µβ k /2 ·r k +γ DISPLAYFORM4 Since we only update r k along the span of {x i : i ∈ S j}, its update is sparse. F.3.4 RUNTIME There are two major tasks in calculating the terms in. The first is to calculate x iw k for i ∈ S j and k ∈ S j ∪ C. There are a total of n(n + m) inner products of D-dimensional vectors, costing O(n(n + m)D). The other task is to find the orthonormal basis V j of {x i : i ∈ S j}, which can be achieved using the Gram-Schmidt process in O(n 2 D). We'll assume that {V j : j = 1, ..., J} is computed only once as a pre-processing step when defining the mini-batches. It is exactly because calculating {V j : j = 1, ..., J} is expensive that we have fixed mini-batches that do not change during the optimization routine. Adding the cost of calculating the x iw k inner products to the costing of optimizing leads to the claim that solve the Implicit SGD update formula to accuracy in runtime O(n(n + m)D + n 2 (n + m) log(−1)) = O(n(n + m)(D + n log( −1))). As was the case in Section F.1, it is important to initialize the optimization procedure at a point where the gradient is relatively small and can be computed without numerical issues. These numerical issues arise when an exponent x i (w k −w yi) −ũ i + b i (a k − a yi) 0. To ensure that this does not occur for our initial point, we can solve the following linear problem, 13 R = min DISPLAYFORM0 Note that if k = y i then the constraint 0 ≥ x i (w k −w yi)−ũ i +b i (a k −a yi) = −ũ i is automatically fulfilled sinceũ i ≥ 0. 
Also observed that setting a k = −V jw k satisfies all of the constraints, and so Putting the bounds together we have that the optimal value of is upper bounded by its value at the solution to, which in turn is upper bounded by n(1 + P (Kηp This bound is guarantees that our initial iterate will be numerically stable. Here we present the of using different learning rates for each algorithm applied to the Eurlex dataset. In addition to the Implicit SGD, NCE, IS, OVE and U-max algorithms, we also provide for NCE with n = 1, m = 1, denoted as NCE. NCE and NCE have near identical performance. | Propose first methods for exactly optimizing the softmax distribution using stochastic gradient with runtime independent on the number of classes or datapoints. | 1,013 | scitldr |
Crafting adversarial examples on discrete inputs like text sequences is fundamentally different from generating such examples for continuous inputs like images. This paper tries to answer the question: under a black-box setting, can we automatically create adversarial examples that effectively fool deep learning text classifiers while making imperceptible changes? Our answer is a firm yes. Previous efforts mostly relied on gradient evidence, and they are less effective either because automatically finding the nearest neighbor word (with respect to meaning) is difficult or because they rely heavily on hand-crafted linguistic rules. We instead use Monte Carlo tree search (MCTS) to find the few most important words to perturb, and then perform a homoglyph attack by replacing one character in each selected word with a symbol of identical shape. Our novel algorithm, which we call MCTSBug, is black-box and extremely effective at the same time. Our experimental results indicate that MCTSBug can fool deep learning classifiers at a success rate of 95% on seven large-scale benchmark datasets by perturbing only a few characters. Surprisingly, MCTSBug, without relying on gradient information at all, is more effective than the gradient-based white-box baseline. Thanks to the nature of the homoglyph attack, the generated adversarial perturbations are almost imperceptible to human eyes. Figure 1: An example of an MCTSBug-generated black-box adversarial sequence. x shows an original text sample and x shows an adversarial sequence generated from x.
From x to x, only two characters are modified. This fools the deep classifier to return a wrong classification of sentiment (from positive to negative).transformations like swapping to fool deep classifiers. Its central idea, minimizing the edit distance of the perturbation makes sense. However, the perturbations are quite visible and the empirical effectiveness needs improvements. We provide more discussions in Section 5.1.Crafting adversarial examples on discrete text sequences is fundamentally different from creating them on continuous inputs like images or audio signals. Continuous input such as images can be naturally represented as points in a continuous R d space (d denotes the total number of pixels in an image). Using an L p -norm based distance metric to limit the modification ∆x on images appears natural and intuitive. However, for text inputs searching for small text modifications is difficult, because it is hard to define the distance between two discrete sequences. Three possible choices exist:• Because deep learning NLP models usually use an embedding layer to map discrete inputs to a continuous space (gradients on the embedding layer is calculable). Therefore we can measure the distance among text inputs in the continuous space defined by the word2vec embedding BID31. However, state-of-the-art embedding models are still unsatisfactory especially in terms of providing nearest neighbors among words. FIG3 shows a few examples we chose based on the GloVe BID31. Two words close to each other in the GloVe embedding cannot guarantee they are similar, for example, they can be antonyms. For instance, the word "loved" is the nearest neighbor of the word "hated".• We can use the huge body of linguistic knowledge to measure the distance between two text inputs. However, this strategy is hard to generalize and is difficult to extend to other discrete spaces.• Shown in Figure 1, we can also use the edit distance between text x and text x being defined as the minimal edit operations that are required to change x to x. We focus on this distance in the paper. Intuitively we want to find imperceptible perturbations on a text input (with respect to human eyes) to evade deep learning classifiers. The second major concern we consider in this paper is the black-box setup. An adversary may have various degrees of knowledge about a deep-learning classifier it tries to fool, ranging from no information to complete information. FIG4 depicts the Perspective API BID19 from Google, which is a deep learning based text classification system that predicts whether a message is toxic or not. This service can be accessed directly from the API website that makes querying the model uncomplicated and widely accessible. The setting is a black-box scenario as the model is run on cloud servers and its structure and parameters are not available. Many state-of-the-art deep learning applications have the similar system design: the learning model is deployed on the cloud servers, and users can access the model via an app through a terminal machine (frequently a mobile device). In such cases, a user could not examine or retrieve the inner structure of the models. We believe that the black-box attack is generally more realistic than the white-box. Previous efforts about adversarial text sequences mostly replied on using gradient evidence. We instead assume attackers cannot access the structure, parameters or gradient of the target model. 
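Of the three distance notions above, this paper adopts edit distance. For concreteness, a standard character-level Levenshtein sketch is shown below; under this metric the perturbation in Figure 1 has cost two, and each homoglyph substitution made by the attack described later adds exactly one.

```python
def edit_distance(a: str, b: str) -> int:
    """Minimal number of single-character insertions, deletions and substitutions
    needed to turn string a into string b (Levenshtein distance)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = cur
    return prev[-1]

print(edit_distance("question", "quertion"))   # 1: a single substituted character
```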
Considering the vast search space of possible changes (among all words/characters changes) from x to x, we design a search strategy based on Monte Carlo tree search (MCTS) for finding the most important few words to perturb. The search is conducted as a sequential decision process and aims to make small edit operations such that a human would consider the generated x (almost) the same as the original sequence. Inspired by the homoglyph attack BID11 that attacks characters with symbols of identical shapes, we replace a character in those important words found by MCTS with its homoglyph character (of identical shape). This simple strategy can effectively forces a deep classifier to a wrong decision by perturbing only a few characters in a text input. Contributions: This paper presents an effective algorithm, MCTSBug, that can generate adversarial sequences of natural language inputs to evade deep-learning classifiers. The techniques we explore here may shed light on discovering the vulnerability of using deep learning on other discrete inputs. Our novel algorithm has the following properties:• Black-box: Previous methods require knowledge of the model structure and parameters of the word embedding layer, while our method can work in a black-box setting.• Effective: on seven real-world text classification tasks, our MCTSBug can fool two stateof-the-art deep RNN models with the success rate of 95% on average (see FIG2).• Simple: MCTSBug uses simple character-level transformations to generate adversarial sequences, in contrast to previous works that use projected gradient or multiple linguisticdriven steps.• Almost imperceptible perturbations to human observers: MCTSBug can generate adversarial sequences that visually identical to seed sequences. Att: For the rest of the paper, we denote samples in the form of pair (x, y), where x = x 1 x 2 x 3...x n is an input text sequence including n tokens (each token could be either a word or a character in different models) and y set including {1, ..., K} is a label of K classes. A deep learning classifier is represented as F: X → Y, a function mapping from the input space to the label space. Basics of Adversarial Examples: Formally, we can define adversarial sample BID15 via the following equation: DISPLAYFORM0 Here we denote a machine learning classifier as F: X → Y, where X is the sample space, x ∈ X denotes a single sample and Y describes the set of output classes. ∆x describes the perturbation vector and the ing sample x is an adversarial example BID15. The strength of the adversary,, measures the permissible transformations. The choice of condition in Eq. indicates two methods for finding adversarial examples: whether they are untargeted(F (x) = F (x)) or targeted (F (x) = t) BID1.The choice of ∆x is typically an L p -norm distance metric. Recent studies BID33 BID15 BID2 BID29 DISPLAYFORM1 measures the maximum change in any dimension. This means an L ∞ adversary is limited by the maximum change it can make to each feature but can alter all the features by up to that maximum BID15. The L 2 norm corresponds to the Euclidean distance between x and x BID2. This distance can still remain small when small changes are applied to many different features. 
An L 0 adversary is limited by the number of feature variables it can alter BID29 ).There exist a large body of recent studies focusing on adversarial examples for image classification that typically created imperceptible modifications to pixel values through an optimization procedure BID15 BID33 BID29 BID2. We put more details in Section 5.4. A third parameter for categorizing recent methods, in addition to targeted/untargeted and ∆ choices, is whether the assumption of an adversary is blackbox or white box. An adversary may have various degrees of knowledge about the model it tries to fool, ranging from no information to complete information. In the black box setting, an adversary is only allowed to query the target classifier and does not know the details of learned models or the feature representations of inputs. Since the adversary does not know the feature set, it can only manipulate input samples by testing and observing a classification model's outputs. In the white box setting, an adversary has access to the model, model parameters, and the feature set of inputs. Similar to the black-box setting, the adversary is still not allowed to modify the model itself or change the training data. Most studies of adversarial examples use the white-box assumption BID33 BID15 BID2 BID29.Multiple recent studies extended adversarial examples to the black-box. One study proposed by BID28 showed that it is possible to create adversarial samples that successfully reduce the classification accuracy without knowing the model structure or parameters on image classification tasks. The method is to estimate a local model and attack the local model instead. Another paper BID6 generates black-box adversarial sample by estimating the gradient directly via queries. BID16 follows this direction, proposes three different threat models of black-box attack, and uses statistical gradient estimation. BID17 uses prior information about images and bandit to optimize the gradient estimation and adversarial sample search process. We design a method we call MCTSBug to generate adversarial modifications on a text sequence directly, without the guidance of gradients. Considering that the natural form of English text has white-spaces separating words, we treat the search of perturbations as a two-stage task:• Step 1: Determine the important words to change.•Step 2: Modify the important words slightly by creating "imperceivable" modifications. To find the important words, we formalize the problem as a search problem and use Monte Carlo Tree Search to achieve a good approximate to the global maximum. We then use homoglyph characters BID11 to create stable and effective imperceivable modifications. On the other hand, these modifications create big difficulties (i.e. large changes) to a target deep learning model. We first assume that words in a text input contribute differently to the final prediction made by a deep classifier. In Section 2.2, we propose a search process to find those important words through MCTS.After we find an important word, we need to use an efficient strategy to modify it by making imperceptibly changes (to human) while at the same time such changes can force a deep model's output classification of the whole text sequence to change. We therefore define such a small change on a word as one character modification based on homoglyph attack BID11. Homoglyph attack is an attack based on symbols with identical shapes. 
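The two-stage procedure can be summarised in a short driver loop: for growing budgets d, let the search pick the d most important word positions, neutralise them, and stop as soon as the black-box prediction flips. The sketch below is only an outline of that control flow; `mcts_select_words`, `neutralize` and `query_model` stand for the MCTS word search, the homoglyph transformer detailed next, and the black-box classifier interface respectively, and the stopping rule shown is the untargeted one.

```python
def mctsbug_attack(tokens, original_class, d_max,
                   mcts_select_words, neutralize, query_model):
    """Two-stage attack outline: MCTS picks important words (stage 1), each is
    neutralised by a one-character homoglyph edit (stage 2)."""
    for d in range(1, d_max + 1):
        positions = mcts_select_words(tokens, d)             # stage 1: word search
        adversarial = [neutralize(t) if i in positions else t
                       for i, t in enumerate(tokens)]         # stage 2: homoglyph edits
        probs = query_model(adversarial)                      # black-box queries only
        if int(probs.argmax()) != original_class:             # prediction has flipped
            return adversarial
    return None                                               # failed within the budget
```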
Figure 2 shows a table including all the text characters together with its homoglyph pair we use. In complicated modern computer character systems such as Unicode, many symbols have the very similar form to human observers but coming from different origins. For example, Cyrillic letter'a' and the Latin letter'a' are visually identical in every ways. However, computer systems treat them as different characters. Homoglyph attacks, though simple, have created a severe problem in cyber security when attackers spoof the domain name with homoglyphs of influential websites. Words are symbolic, and learning-based classifiers handle words through a dictionary to represent a finite set of possible words. However, the size of the typical NLP dictionary is much smaller than the huge space built by the combinations of all possible characters. This means if we deliberately create (visually) similar but mis-spelled version of important words, we can easily convert those crucial words to "unknown" (i.e., words not in the dictionary). Unknown words are mapped to the "unknown" embedding vector, which is likely to be vastly different from the embedding of the original word. Our empirical (Section 3) strongly indicate that this simple strategy can effectively force deep learning models to make wrong classifications. Based on our empirical , changing one character to its homoglyph pair is enough to make a word to be unrecognizable ("unknown") by a deep learning system. We say a word is "neutralized" to mean a deep learning system loses the information of this word. In the following, when we say "to neutralize a certain word", we mean to modify a random character in that word to its homoglyphs pair in Figure 2. while not reach maximum iteration R do s = root; while n is in T do s = arg max s ∈s.child v s + U CB s; # jump to next node end sT = random rollout(s); vs T = fs(sT, c); DISPLAYFORM0 DISPLAYFORM1 Input: Sample X = x1x2..xn, maximum search depth d, classification of the sample c l, maximum iterations R Output: Adversarial sample xact Initial action list A = set(X); DISPLAYFORM2 Connecting to related studies: Any method to generate adversarial text sequence needs a token transformer; that is, a mechanic to perturb text. On the well studied adversarial image samples, a perturbation is often following the gradient direction. However, such idea is problematic when directly being applied to text samples. One important issue is that gradient on the original text sample is hard to defined. Deep learning classifiers often accept the text sample as a sequence of id numbers to a dictionary, and gradients on those id numbers are not entirely meaningful. BID30 suggested to create perturbation based on gradient of the word embedding layer. However, generating such perturbation requires the assumption of continuity, that is, words near each other shares some similarity. However, two words close to each other in the embedding layers are often neither similar on shape nor on meaning (FIG3). Besides, generating such perturbation requires a projection from a perturbed embedding vector to the discrete space. Hard to control the distance of such perturbations. BID32 used linguistic based rules to transform words. While linguistic rules might be able to successfully generate appropriate perturbations in some cases, their method is complicated and time consuming. More importantly, the generated samples cannot provide any guarantee on the size of the perturbation. 
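A minimal sketch of the homoglyph transformer follows: it swaps one character of a selected word for a visually identical Unicode character, so that a dictionary lookup no longer recognises the word while a human reader sees no difference. The mapping shown is a small illustrative subset (Figure 2 lists a pair for every character), and perturbing the first replaceable character rather than a random one is a simplification made here.

```python
# Latin letter -> visually (near-)identical Unicode homoglyph; illustrative subset only.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic a
    "e": "\u0435",  # Cyrillic e
    "o": "\u043e",  # Cyrillic o
    "p": "\u0440",  # Cyrillic er, looks like Latin p
    "c": "\u0441",  # Cyrillic es, looks like Latin c
    "i": "\u0456",  # Cyrillic/Ukrainian i
}

def neutralize(word: str) -> str:
    """Replace one character with its homoglyph pair, leaving the word visually
    unchanged but outside the classifier's vocabulary (the "unknown" token)."""
    for pos, ch in enumerate(word):
        low = ch.lower()
        if low in HOMOGLYPHS:
            sub = HOMOGLYPHS[low].upper() if ch.isupper() else HOMOGLYPHS[low]
            return word[:pos] + sub + word[pos + 1:]
    return word

tokens = "trust me you wont be disappointed".split()
print([neutralize(t) if i in {3, 5} else t for i, t in enumerate(tokens)])
```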
Homoglyph attack can successfully neutralize a word for a deep-learning classifier. To perform the minimal edit operations for generating an adversarial sequence from a seed x, we still need a powerful way to find out those important words (with respect to their influence on a deep classifier's prediction). Monte Carlo Tree Search is a method that creates a search tree to find optimal in some decision space, which combines Monte Carlo Method and Bandit. The Upper Confidence Tree (UCT) framework BID20 includes Upper Confidence Bound algorithm (UCB) BID0 to balance the exploration and exploitation in the tree search process, and achieve significant improvement in Go AI BID14. Later, many different applications choose MCTS as the solving algorithm, including game AI BID5, single player game BID4, and even feature selection BID13.Monte Carlo Tree Search is known to be able to find near optimal solutions in the vast search space. It has shown great success recently in different scenarios. To make it work, it requires a value function which can be queried at any time, which in our case can be calculated in a black-box manner through querying the target deep learning neural network. In details, we formalize adversarial perturbation generation problem as a reinforcement learning problem as follow. Let X = {x 1, x 2 ..., x n} denote the set of all possible words to change. We define state s to be the current set of words will be sent to the homoglyph transformer, and action a to be the action of select a single word and add it to current set. The whole process terminates when the size of current state s reaches a pre-defined limitation of d (or there's no more items to add). Therefore, the state space S is the powerset of word set X. The action space A is then equivalent to the set X. In this setting, a determined transition function over state and actions is then defined: p: S × A → S, which is simply removing the word action a represents from current s. A policy π(s) is a probability distribution over all possible action a. In the MCTS algorithm creates a search tree, in which every node represents a state. Every node stores a value v and number of times visits n c. Each edge represents an action a, that connects parent s and child s if state s takes action a comes to state s. The goal of the MCTS is to find a node(state) that minimize some loss function, which in our setting should be maximize a score which correctly evaluates the quality of adversarial samples. Formally, the goal of MCTS is to find the optimal terminal state s T over a predefined score function f s, that s T = arg max s∈Ts f s (s), T s stands for the set of terminal states. In our setting, the score function f s have multiple choices, one example can be the cross-entropy loss to its label. However, this loss function doesn't typically reflect the terminal condition of non targeted scenario: A higher score doesn't necessarily indicate the sample is better. A successful adversarial sample may have a lower cross entropy than an unsuccessful adversarial sample, if it's score is more balanced on different terms. Suppose f s the We defined directly by the definition of untargeted attack: DISPLAYFORM0 Here c l is the original predicted class, which the class that adversary trying to deviate the sample from. 
The equation stands for the maximum predicted probability on any class except c l minus the predicted probability of target class c l.We defined f s for the targeted attack as: DISPLAYFORM1 c target is the class that adversary is trying to convert the sample into. Such score is not fully differentiable, so that it can't be used in other optimization process, SGD for example. However, in MCTS the gradient is not required. That means unlike traditional optimization, MCTS is free to choose any function. We use a modified version of the UCT-1 algorithm BID20. Generally the algorithm treats every node as a independent bandit problem. Every iteration of MCTS algorithm includes four steps: Selection, Expansion, Simulation and Backpropagation. In the selection phase, the algorithm start from the root node. And on each node (s, v s, n s) where s is a set of words, the algorithm selects the action following the Upper Confidence Bound (UCB) criterion: DISPLAYFORM2 2 ln ns 2n s, C > 0 is a hyperparameter constant. The search ends at a node s doesn't belong to the tree, and it is then added to the tree in the following expansion phase, with v s and n s initialized as 0. After that, the algorithm starts a simulation phase that that selects random actions from the node s, until a terminal state is reached. It will then calculate the score function to return a score for the terminal state. Finally, in the backpropagation update the value of all the tree nodes, from s go up to all its parent nodes. The visit count n is increase by 1 and the value is updated to the average of all the scores in the trails. The total algorithm is described in Algorithm 1. To summarize, MCTSBug use MCTS to search for the most appropriate set of d words to change. After it found that set of words, it then modifies the words by using homoglyphs. One character in one word can neutralize the set of words. The effect to the prediction is correctly simulated in the MCTS value function. If the adversarial sample generated is not enough, the algorithm increase the value d. The algorithm will stop either the adversarial samples is a success or the maximum search limit is reached. Connecting to related studies: One previously proposed way to select words used the saliency over those words on the word embedding BID32. However, that breaks the black-box assumption. Besides, saliency can't accurately reflect the effect of the perturbation. A more accurate way is to use simulated leave-one-out score to sort all the words, such as in BID12. While the method is shown to effective to text input, it can be sub-optimal as it is essentially a greedy selection based on single round . Instead, we model the important words' selection problem as a combinatorial optimization problem and use Monte Carlo Tree Search to solve it. Comparing to the leave-one-out based greedy strategy, the greedy approach only focuses on local optimum, and MCTS focus on the global optimum. MCTS instead search more space, and can provide a good estimation of the global maximum. We evaluated the effectiveness of our algorithm by conducting experiments on multiple deep learning models across multiple real-world text classification datasets. 
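Below is a hedged sketch of the two components just described: the untargeted score f_s (the maximum predicted probability over classes other than the original label c_l, minus the probability of c_l, so a positive value means the label has flipped) and the UCB-based child selection. The exact constants inside the square root are not recoverable from the extracted text, so the standard UCT form is used, and node statistics are stored as (total value, visit count) pairs. These two pieces, together with random rollouts and the averaging back-propagation step, are all the word search needs; the remainder of the paper evaluates the resulting attack.

```python
import math
import numpy as np

def untargeted_score(probs: np.ndarray, original_class: int) -> float:
    """f_s(s) = max_{c != c_l} p_c - p_{c_l}; positive iff the prediction flipped."""
    others = np.delete(probs, original_class)
    return float(others.max() - probs[original_class])

def ucb_select(children, C=1.0):
    """Pick the child maximising mean value + C * sqrt(2 ln N_parent / n_child)."""
    total = sum(c["n"] for c in children)
    def ucb(c):
        if c["n"] == 0:
            return float("inf")                  # unvisited children are expanded first
        return c["v"] / c["n"] + C * math.sqrt(2.0 * math.log(total) / c["n"])
    return max(children, key=ucb)

# Example: statistics after a few rollouts; the unvisited word is selected next.
children = [{"word": "wont", "v": 0.8, "n": 3},
            {"word": "trust", "v": 0.1, "n": 2},
            {"word": "rest", "v": 0.0, "n": 0}]
print(untargeted_score(np.array([0.30, 0.55, 0.15]), original_class=0))
print(ucb_select(children)["word"])
```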
We evaluated the effectiveness of our algorithm by conducting experiments on multiple deep-learning models across several real-world text classification datasets. In particular, we wanted to answer the following questions empirically: • Do adversarial samples generated by MCTSBug look valid? • Does the accuracy of the deep classifier decrease on the crafted adversarial samples? • Can the adversarial samples generated by MCTSBug transfer between models? • Are MCTSBug strategies robust to configuration parameters? To evaluate the quality of adversarial samples generated by MCTSBug, we implemented three other baselines and compared them to MCTSBug. For fair comparison, all three baselines also use the homoglyph word transformer. In summary, we compare the following four methods: 1. Random (baseline): randomly select words for the perturbation. 2. Gradient (baseline): in contrast to random selection, which uses no knowledge of the model, we also compare against full knowledge of the model, where gradients are used to find the most important tokens. Following equation 3, the gradient method uses the magnitude of the gradient w.r.t. the original classification to decide which tokens should be changed; this method of selecting tokens was proposed in BID32. 3. Greedy (baseline): following the DeepWordBug paper BID12, the greedy baseline first evaluates the importance of each word by calculating a leave-one-out score for every word, and then performs a greedy selection over the words. 4. MCTSBug (our method): use MCTS to find the most important tokens (Section 2). Table 1: Examples of generated adversarial samples; the red part indicates the difference to the original message. — Greedy: "viruses keep on growing most it managers won 39 s quertion the importance of securits but this priority has been sliding between the third and fourth most important focus for companies" / MCTSBug: "viruses keep on growing most it managers won 39 t question the importance of security but this priority has been sliding between the third and fourth most important focus for companies" / Amazon Reviews Full dataset — Original Message: "dave matthews band even in the slightest add this to your collection there is one song angle where his back up singers take over a bit and it is so incredible that one song is worth buying this dvd for but trust me you wont be disappointed in the rest" / Projected Gradient Descent: "dave matthews band even in the slightest add this to your collection there is one song angle where his back up singers take over a bit and it inventions mfc terrific that one song is worth buying this dvd sensitivity hathaway clicks me hone transaction be studio in the olsen" (5 Stars) / Greedy: "dave matthews band even in the slightest add this to your collection there is one song angle where his back up singers take over a bit and it is so incredible that one song is worth buying this dvd for but trust me you eont be disappointed in the rest" / MCTSBug: "dave matthews band even in the slightest add this to your collection there is one song angle where his back up singers take over a bit and it is so incredible that one song is worth buying this dvd for but trust me you wont be disappointed in the rest" (3 Stars) / DBPedia dataset — Original Message: "... 20 and 100 m they can be found worldwide in tropical and subtropical waters they are predators that wait for prey fish to pass by they have which they move to attract the prey they have little economic value other than a minor role in the aquarium trade" (Animal) / Projected Gradient Descent: "...goran and stretched cbd nationale luc jesus oakland stacy in sunflower and spice waters they are predators that wait for prey fish to pass by they have which they move to attract the prey they have little economic value other than a minor role in the aquarium trade" (Building) / Greedy: "... 20 and 100 m they can be fcund worldwide in tropical and subtropical waters they ane aredators that wait for prey fivh to pass by they have which they move to attract the prey they have little economic value other than a minor role in the aquarium trade" (Building) / MCTSBug: "... 20 and 100 m they can be found worldwide in tropical and subtropical waters they are predators that wait for prey fish to pass by they have which they move to attract the prey they have little economic value other than a minor role in the aquarium trade" (Building). Due to space limits, we have put the details of the datasets, the experimental setup, and the target deep models in the appendix. We also put two detailed figures (FIG6 and FIG1), showing how the various methods behave as the number of characters they may modify varies, and two tables (TAB8), showing the effectiveness rates under targeted and untargeted attacks, in the appendix. The empirical results on transferability are shown in Figure 10. FIG10 shows the effectiveness curves for different output classes, and FIG7 shows a sensitivity analysis of how MCTSBug behaves when varying the value of C. Lastly, we tried to conduct adversarial training by retraining the deep model on adversarial sequences generated by MCTSBug on the AG's News dataset; however, FIG9 shows that this retraining strategy does not effectively defend against our attack. To evaluate the quality of adversarial samples, we first compare our generated samples to state-of-the-art baselines. The results are summarized in Table 1. From Table 1, we can see that MCTSBug creates samples with minimal perturbation. Compared to baselines that change many words or characters, adversarial samples from MCTSBug are extremely hard to spot with human eyes. We first present the results of the untargeted attack. Here, we calculate attack success rates on all 7 datasets while gradually relaxing the limit on the maximum edit-distance difference. Comparing the effectiveness of our attack directly to the baseline methods: with an attack that flips at most 20 words, MCTSBug has a 100% or nearly 100% attack success rate on all 7 datasets, which means MCTSBug can successfully flip the prediction of essentially any sample and clearly outperforms all the baselines in this comparison. We also notice that, for most datasets, manipulating no more than 5 words is enough to flip the prediction of 70% to 90% of samples. Among the baselines, the Random baseline unsurprisingly does not perform well, which indicates that even with a powerful transformation function such as the homoglyph attack, generating an adversarial text input is not a trivial task. The Gradient baseline (green line) also does not perform well. The basic reason for the poor performance of the gradient-based white-box method is that it does not fit our task: the gradient, as an estimator of the change in function value under a perturbation, is only accurate when the perturbation is small and identical across input features. In our case, the modification is not small and is not the same across input features (every word vector is different), so the gradient is not a good estimate of the effect of our transformation.
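For reference, the leave-one-out importance score used by the Greedy baseline above can be sketched as follows; this is illustrative code rather than the DeepWordBug implementation, and `model_prob` is a hypothetical black-box query returning the classifier's probability for the originally predicted class.

```python
# Minimal sketch of leave-one-out word importance, assuming a black-box
# function model_prob(tokens) -> probability of the originally predicted class.
def leave_one_out_scores(tokens, model_prob):
    base = model_prob(tokens)
    scores = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]      # drop the i-th word
        scores.append(base - model_prob(reduced))  # large drop => important word
    return scores

def greedy_pick(tokens, model_prob, d):
    scores = leave_one_out_scores(tokens, model_prob)
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    return order[:d]  # indices of the d most important words to perturb
```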
We then present the results of the targeted attack. One might assume that, since targeted attacks are much harder than untargeted attacks, the power of MCTSBug would be severely undermined in the targeted scenario. To examine this, we similarly present the targeted attack success rates on all 7 datasets while relaxing the limit on the maximum edit-distance difference. FIG6 shows a direct comparison of the effectiveness of our attack to the baseline methods: MCTSBug achieves attack success rates of 70%-90% on the Basic-LSTM model. Due to the combinatorial nature of the large, discrete input space, searching for adversarial text sequences is not a straightforward extension of image-based techniques for generating adversarial examples. In this paper, we present a novel framework, MCTSBug, which can generate imperceptible adversarial text sequences in a black-box manner. By combining the homoglyph attack with MCTS, the proposed method can successfully fool two deep RNN-based text classifiers across seven large-scale benchmarks. The key idea underlying MCTSBug is that words with one character changed to its homoglyph pair are usually viewed as "unknown" by deep-learning models. The change in shape is almost invisible to humans, yet it is a huge change to a deep-learning model. More fundamentally, this is caused by the fact that NLP training datasets normally cover only a very small portion of the huge space built by all combinations of possible characters. The question, then, is: should deep-learning classifiers give words that look identical to human eyes similar representations? We think the answer should be yes. Early research on attacking text classifiers starts with BID10 and BID23, where researchers showed that malicious manipulation of the input can cause machine-learning-based spam detectors to generate false positives. BID24 propose the Good Word attack, a practical attack that adds positive, non-spam words to a spam message to evade machine learning classifiers. These attacks are designed for simple classifiers such as Naive Bayes, which operate directly on individual features. However, modern deep learning models work differently from traditional machine learning models. Also, such methods give no guarantee of the quality of the generated sample: a spam message could become non-spam if enough "good words" were added to it, but the resulting message is almost guaranteed to be meaningless. In 2013, BID33 proposed the concept of the adversarial sample: imperceptible perturbations of images can fool deep learning classifiers. This discovery is interesting because small modifications guarantee the validity of the generated samples. Compared to the study of adversarial examples on images, little attention has been paid to generating adversarial sequences of text. Papernot et al. applied gradient-based adversarial modifications directly to NLP inputs targeting RNN-based classifiers in BID30. The resulting samples are called "adversarial sequences," and we also adopt that name in this paper. The study proposed a white-box adversarial attack called the projected Fast Gradient Sign Method and applied it repeatedly to modify an input text until the generated sequence is misclassified. It first randomly picks a word, and then uses the gradient to generate a perturbation of the corresponding word vector. It then maps the perturbed word vector to the nearest word, based on Euclidean distance in the word embedding space. If the sequence is not yet misclassified, the algorithm randomly picks another position in the input.
Recently, BID32 used the embedding gradient to determine important words. The technique used heuristic-driven rules together with hand-crafted synonyms and typos. The method is a white-box attack since it accesses the gradient of the model. Another paper measures the importance of each word to a certain class using the word frequency from that class's training data, and then uses heuristic-driven techniques to generate adversarial samples by adding, modifying, or removing important words. This method needs access to a large set of labeled data. All of the aforementioned methods operate under the typical white-box setting of adversarial generation, in which gradients are used to guide the modification of input tokens from an original sample to an adversarial sample. However, gradients are hard to define on discrete symbolic text inputs, and in black-box settings calculating gradients is not possible since the model parameters are not observable. Recently, BID12 proposed greedy scoring strategies to rank words in a text sequence and then applied simple character-level transformations, such as swapping, to fool deep classifiers. Its central idea, minimizing the edit distance of the perturbation, makes sense; however, the perturbations are quite visible and the empirical effectiveness needs improvement. BID12 propose multiple scoring strategies but lack a comparison between them. Our method does not require knowing the structure, parameters, or gradients of the target model, while previous methods do. Also, most previous approaches rely on heuristic-driven and complicated modification procedures. We summarize the differences between our method and previous methods for generating adversarial text samples in TAB2. In addition, a few recent studies have tried to generate adversarial examples for other NLP tasks, such as seq2seq-based machine translation or machine-learning-based reading comprehension. For example, BID7 attacks seq2seq translation models: using projected-gradient-based inputs, BID7 can make the output of the translation model completely different. BID18 generate samples that attack question answering models; the authors propose algorithms that imitate the pattern of the question and feed additional wrong information into the sample, misleading the model into giving a wrong answer. Machine learning models, when processing text, must first discretize the text into tokens, which can then be fed into the model. Words are often used as the smallest unit of input to a model. A word embedding is a mapping that projects every word in a dictionary to a unique vector in some vector space. Such mappings transform the discrete representations of words into features in a continuous space, which is more convenient for machine learning models. Word embeddings have shown success in many NLP tasks BID9 BID31 BID25. Such models hold a dictionary and build the word embedding based on that dictionary. To be able to handle words not in the dictionary, word-embedding-based models often add a special entry called "out of vocabulary" or "unknown". These models work well if the word embedding is well constructed and covers most words that occur in the data. Homoglyphs have been a notorious security problem for many computer systems, and hackers and malicious attackers have deliberately used homoglyphs in attacks for many years.
According to BID11, as far back as 2000 someone created the website 'bl00mberg.com' (with 0 instead of o) to release fake financial news, fooling many investors. The issue became more severe once domain names could include Unicode characters (punycode). Early studies of homoglyph attacks include BID11 and BID21. One might expect that, since this problem has been known for many years, it would certainly be solved by modern web system designs; however, in 2017 the issue still occurred in the popular web browsers Chrome and Firefox. Similar name-spoofing attacks are also used by computer viruses, which make the name of the virus process look very similar to that of a system process (for example, svchost.exe versus svch0st.exe). There is a large body of recent work on adversarial examples for image classification, typically creating imperceptible modifications to pixel values through an optimization procedure BID15 BID33 BID29 BID2. BID33 first observed that DNN models are vulnerable to adversarial perturbations (limiting the modification using the L2 norm) and used the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm to find adversarial examples. Their study also found that adversarial perturbations generated from one Convolutional Neural Network (CNN) model can also force other CNN models to produce incorrect outputs. Subsequent papers have explored other strategies to generate adversarial manipulations, including using the linear assumption behind a model BID15 (with limits on the L∞ norm), saliency maps BID29 (with limits on the L0 norm), and evolutionary algorithms BID27. Recently, Carlini et al. proposed a family of attack methods that use optimization techniques to generate adversarial images with even smaller perturbations BID2. BID28 shows that it is possible to generate adversarial samples that successfully reduce classification accuracy in a black-box manner, without knowledge of the model structure or the model parameters of the neural network. Multiple recent studies have also extended adversarial examples to domains similar to images, such as speech recognition BID8 or speech-to-text BID3. Our experimental setup is as follows: • Datasets: In our experiments, we use seven large-scale text datasets gathered from BID35, covering a variety of NLP tasks, e.g., text classification, sentiment analysis, and spam detection. Details of the datasets are listed in TAB4. • Target Models: We conduct our experiments on several state-of-the-art RNN models. Basic-LSTM: a uni-directional LSTM. In detail, the network contains a randomly initialized embedding layer of dimension 100 to accept the word inputs. The embedding vectors are fed through LSTM layers with 100 hidden nodes, whose output hidden states are fed to a fully connected layer for classification. Basic-LSTM with GloVe Embedding: similar to Basic-LSTM, but uses the pretrained GloVe embedding BID31 from Stanford as the embedding layer. Bi-LSTM: a bi-directional LSTM, which contains an LSTM in each direction (reading from the first word to the last and from the last word to the first). The network contains a randomly initialized embedding layer to accept the word inputs. The embedding vectors are then fed through two LSTM layers (one for each direction) with 100 hidden nodes each. The hidden states of the two directions are concatenated at the output and fed to a fully connected layer for classification.
Bi-LSTM with GloVe Embedding: similar to Bi-LSTM, but uses the pretrained GloVe embedding BID31. We summarize the performance of the models without adversarial samples in Table 3; in the non-adversarial setting, these models achieve state-of-the-art-level performance on those datasets. • Platform: We train the target deep-learning models and implement the attacking methods using PyTorch 0.3.0. All experiments run on a server machine running CentOS 7 with 4 Titan X GPU cards. • Evaluation: The performance of the attacking methods is measured by the success rate of the attack, which is 1 − the accuracy of the model on the generated adversarial sequences. The lower the accuracy, the more effective the attacking method. The figures show the attack success rate on generated adversarial samples; each figure presents the results on one dataset (we only present those datasets with more than 2 classes). The X axis represents the number of modified characters (d), and the Y axis corresponds to the attack success rate of the adversarial samples generated by the respective attacking methods. We study the effect of different parameters in this section; our goal is to show that our method is robust to changes in the parameters. In the MCTSBug algorithm, C is the UCB parameter that balances exploration and exploitation. We test several different choices of C on the AG's News dataset. The results show that the MCTSBug algorithm is robust to the choice of C. Adversarial samples are known for their transferability, meaning that many adversarial samples generated for one model can also successfully fool other DNN models. We tested the transferability of adversarial samples generated using MCTSBug. We evaluate transferability using four LSTMs; in the following table, LSTM1/BiLSTM1 are trained with randomly initialized embeddings, and LSTM2/BiLSTM2 are trained with GloVe word embeddings. Figure 10 shows the accuracy obtained when feeding adversarial sequences generated for one model to another model on the same task. The results show that the target model accuracy in these circumstances is reduced from around 90% to 30-60%. Most adversarial samples can be successfully transferred to other models, even to models with different word embeddings. This experiment demonstrates that our method can successfully find the words that are important for classification and that the transformation is effective across multiple models. In another experiment, we study the performance of our attack on different classes of samples. We present the results in FIG10. From the figure, we can see that our attack behaves differently on different classes of inputs, which reveals that there may exist some bias in the target deep learning model towards certain classes. We present a histogram analyzing the distribution of the target model's output probability on the fooled class in FIG11. The results show that, even though we increase the word modifications one by one, most adversarial samples we generate are predicted wrongly by the model with high certainty, which indicates that the generated adversarial samples are hard to filter directly using the probability distribution. The MCTS attack algorithm can also be applied to other discrete input types, given a properly defined value function and a properly defined transformation function.
As an example, we show the results generated on the MNIST dataset in FIG2. The idea is to generate adversarial samples under the L0 norm. Because of the nature of this task, we can define a discrete action set over the pixels; currently, for every pixel we define two types of actions: change the value to 255 or change the value to 0. In this way, the problem can once again be viewed as a combinatorial optimization problem. Solving it with MCTS, and performing at most 20 actions, we successfully flip 37 of the 100 samples we tested. We show some samples in FIG2. | Use Monte carlo Tree Search and Homoglyphs to generate indistinguishable adversarial samples on text data | 1,014 | scitldr |
We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of \LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are like a trace of the set of primitive commands issued by a graphics program. We learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings. Taken together, these are a step towards agents that induce useful, human-readable programs from perceptual input. How can an agent convert noisy, high-dimensional perceptual input to a symbolic, abstract object, such as a computer program? Here we consider this problem within a graphics program synthesis domain. We develop an approach for converting hand drawings into executable source code for drawing the original image. The graphics programs in our domain draw simple figures like those found in machine learning papers (see FIG0). The key observation behind our work is that generating a programmatic representation from an image of a diagram involves two distinct steps that require different technical approaches. The first step involves identifying the components, such as rectangles, lines, and arrows, that make up the image. The second step involves identifying the high-level structure in how the components were drawn. In FIG0, this means identifying a pattern in how the circles and rectangles are being drawn that is best described with two nested loops, and which can easily be extrapolated to a bigger diagram. We present a hybrid architecture for inferring graphics programs that is structured around these two steps. For the first step, a deep network infers a set of primitive shape-drawing commands; we refer to this set as a trace set. (FIG8: Both the paper and the system pipeline are structured around the trace hypothesis.) The new contributions of this work are: (1) the trace hypothesis, a framework for going from perception to programs, which connects this work to other trace-based models, like the Neural Program Interpreter BID17 BID26; (2) a model based on the trace hypothesis that converts sketches to high-level programs, in contrast to converting images to vectors or low-level parses BID11 BID14 BID24 BID1 BID2; and (3) a generic algorithm for learning a policy for efficiently searching for programs, building on Levin search BID13 and recent work like DeepCoder BID0. Even with the high-level idea of a trace set, going from hand drawings to programs remains difficult. We address these challenges: (1) inferring trace sets from images requires domain-specific design choices from the deep learning and computer vision toolkits (Sec. 2); (2) generalizing to noisy hand drawings, we will show, requires learning a domain-specific noise model that is invariant to the variations across hand drawings (Sec. 2.1); (3) discovering good programs requires solving a difficult combinatorial search problem, because the programs are often long and complicated (e.g., 9 lines of code, with nested loops and conditionals). We give a domain-general framework for learning a search policy that quickly guides program synthesizers toward the target programs (Sec. 3.1).
We developed a deep network architecture for efficiently inferring a trace set, T, from an image, I. Our model combines ideas from Neurally-Guided Procedural Models BID18 and Attend-Infer-Repeat. The network constructs the trace set one drawing command at a time, conditioned on what it has drawn so far. FIG1 illustrates this architecture (blue: network inputs; black: network operations; red: samples from a multinomial; typewriter font: network outputs; renders snapped to a 16 × 16 grid, illustrated in gray; STN, a spatial transformer network, is a differentiable attention mechanism). We first pass a 256 × 256 target image and a rendering of the trace set so far (encoded as a two-channel image) to a convolutional network. Given the features extracted by the convnet, a multilayer perceptron then predicts a distribution over the next drawing command to add to the trace set (see Tbl. 1). Table 1 lists the primitive drawing commands currently supported by our model: circle(x, y), a circle at (x, y); rectangle(x1, y1, x2, y2), a rectangle with corners at (x1, y1) and (x2, y2); line(x1, y1, x2, y2, arrow ∈ {0, 1}, dashed ∈ {0, 1}), a line from (x1, y1) to (x2, y2), optionally with an arrow and/or dashed; and STOP, which finishes trace set inference. We also use a differentiable attention mechanism (Spatial Transformer Networks) to let the model attend to different regions of the image while predicting drawing commands. We currently constrain coordinates to lie on a discrete 16 × 16 grid, but the grid could be made arbitrarily fine. We train the network by sampling trace sets T and target images I for randomly generated scenes and maximizing the likelihood of T given I with respect to the model parameters, θ, by gradient ascent. We trained the network on 10^5 scenes, which takes a day on an Nvidia TitanX GPU. (FIG10: Parsing LaTeX output after training on diagrams with ≤ 12 objects. The model generalizes to scenes with many more objects. Neither SMC nor the neural network is sufficient on its own. The number of particles varies by model: we compare the models with equal runtime, ≈ 1 sec/object.) Our network can "derender" random synthetic images by doing a beam search to recover trace sets maximizing P_θ[T | I]. But if the network predicts an incorrect drawing command, it has no way of recovering from that error. For added robustness we treat the network outputs as proposals for a Sequential Monte Carlo (SMC) sampling scheme. The SMC sampler is designed to sample from the distribution proportional to L(I | render(T)) × P_θ[T | I], where L(·|·) uses the pixel-wise distance between two images as a proxy for a likelihood. Here, the network is learning a proposal distribution in an amortized way BID15 and using it to invert a generative model (the renderer). Experiment 1 (FIG10): To evaluate which components of the model are necessary to parse complicated scenes, we compared the neural network with SMC against the neural network by itself and SMC by itself. Only the combination of the two passes a critical test of generalization: when trained on images with ≤ 12 objects, it successfully parses scenes with many more objects than the training data. We also compare with a baseline that produces the trace set in one shot by using the CNN to extract features of the input, which are passed to an LSTM that finally predicts the trace set token-by-token (LSTM in FIG10). This architecture is used in several successful neural models of image captioning (e.g.,), but, for this domain, it cannot parse cluttered scenes with many objects.
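A minimal sketch of the neurally guided SMC loop described above is given below. The names are illustrative only: `propose(image, trace)` stands in for sampling a next drawing command from the trained network, `render(trace)` for rasterizing a trace, and the pixel-wise distance is used as the proxy likelihood.

```python
import math
import random

def pixel_log_likelihood(target, rendered, scale=10.0):
    # Pixel-wise distance used as a proxy for L(I | render(T)); both images
    # are assumed here to be flat sequences of pixel intensities.
    dist = sum((a - b) ** 2 for a, b in zip(target, rendered))
    return -scale * dist

def smc_parse(target, propose, render, n_particles=20, max_commands=15):
    particles = [[] for _ in range(n_particles)]   # each particle is a trace (list of commands)
    for _ in range(max_commands):
        log_w = []
        for trace in particles:
            cmd = propose(target, trace)           # proposal from the neural network
            if cmd != "STOP":
                trace.append(cmd)
            log_w.append(pixel_log_likelihood(target, render(trace)))
        # Resample particles in proportion to their likelihood weights.
        m = max(log_w)
        weights = [math.exp(w - m) for w in log_w]
        particles = [random.choices(particles, weights=weights)[0][:]
                     for _ in range(n_particles)]
    # Return the trace whose rendering best matches the target image.
    return max(particles, key=lambda t: pixel_log_likelihood(target, render(t)))
```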
We trained the model to generalize to hand drawings by introducing noise into the renderings of the training target images. We designed this noise process to introduce the kinds of variations found in hand drawings (see supplement for details).Our neurally-guided SMC procedure used pixel-wise distance as a surrogate for a likelihood function (L(·|·) in section 2). But pixel-wise distance fares poorly on hand drawings, which never exactly match the model's renders. So, for hand drawings, we learn a surrogate likelihood function, L learned (·|·).The density L learned (·|·) is predicted by a convolutional network that we train to predict the distance between two trace sets conditioned upon their renderings. We train our likelihood surrogate to approximate the symmetric difference, which is the number of drawing commands by which two trace sets differ: DISPLAYFORM0 Experiment 2: Figures 5-7. We evaluated, but did not train, our system on 100 real hand-drawn figures; see Fig. 5 -6. These were drawn carefully but not perfectly with the aid of graph paper. For each drawing we annotated a ground truth trace set and had the neurally guided SMC sampler produce 10 3 samples. For 63% of the drawings, the Top-1 most likely sample exactly matches the ground truth; with more samples, the model finds trace sets that are closer to the ground truth annotation FIG3. We will show that the program synthesizer corrects some of these small errors (Sec. 4.1). Although the trace set of a graphics program describes the contents of a scene, it does not encode higher-level features of the image, such as repeated motifs or symmetries. A graphics program better describes such structures. We seek to synthesize graphics programs from their trace sets. We constrain the space of programs by writing down a context free grammar over programs -what in the program languages community is called a Domain Specific Language (DSL) BID16. Our DSL (Tbl. 2) encodes prior knowledge of what graphics programs tend to look like. DISPLAYFORM0 Given the DSL and a trace set T, we want a program that both evaluates to T and, at the same time, is the "best" explanation of T. For example, we might prefer more general programs or, in the spirit of Occam's razor, prefer shorter programs. We wrap these intuitions up into a cost function over programs, and seek the minimum cost program consistent with T: DISPLAYFORM1 We define the cost of a program to be the number of Statement's it contains (Tbl. 2). We also penalize using many different numerical constants; see supplement. The constrained optimization problem in Eq. 2 is intractable in general, but there exist efficient-inpractice tools for finding exact solutions to such program synthesis problems. We use the state-ofthe-art Sketch tool BID20. Sketch takes as input a space of programs, along with a specification of the program's behavior and optionally a cost function. It translates the synthesis problem into a constraint satisfaction problem and then uses a SAT solver to find a minimum-cost program satisfying the specification. Sketch requires a finite program space, which here means that the depth of the program syntax tree is bounded (we set the bound to 3), but has the guarantee that it always eventually finds a globally optimal solution. In exchange for this optimality guarantee it comes with no guarantees on runtime. For our domain synthesis times vary from minutes to hours, with 27% of the drawings timing out the synthesizer after 1 hour. Tbl. 3 shows programs recovered by our system. 
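Before turning to those results, the cost function described above can be illustrated with a small sketch. This is a simplification for intuition only: the real system scores candidate programs inside the Sketch synthesizer, and the exact penalty on numerical constants is given in the supplement.

```python
# Toy cost for a program represented as a list of statements, where each
# statement is a (name, constants) pair: one unit per statement plus a small
# penalty for each distinct numerical constant beyond the first.
def program_cost(statements, constant_penalty=1.0 / 3.0):
    constants = set()
    for _, consts in statements:
        constants.update(consts)
    n_extra_constants = max(0, len(constants) - 1)
    return len(statements) + constant_penalty * n_extra_constants

example = [("for", (3,)), ("circle", (1, 2)), ("rectangle", (0, 0, 3, 3))]
print(program_cost(example))  # 3 statements, 4 distinct constants -> 3 + 3/3 = 4.0
```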
A main impediment to our use of these general techniques is the prohibitively high cost of searching for programs. We next describe how to learn to synthesize programs much faster (Sec. 3.1), timing out on 2% of the drawings and solving 58% of problems within a minute. Table 3: each row pairs a trace set of primitive drawing commands with the much shorter program synthesized from it; for example, a 21-command trace is compressed to a 6-line program with nested loops and a conditional (3.5× shorter), a 16-command trace to a 9-line program with nested loops over rectangles (1.8×), and a 9-command trace to a 5-line program using a reflection (1.8×). We want to leverage powerful, domain-general techniques from the program synthesis community, but make them much faster by learning a domain-specific search policy. A search policy poses search problems like those in Eq. 2, but also offers additional constraints on the structure of the program (Tbl. 4). For example, a policy might decide to first try searching over small programs before searching over large programs, or decide to prioritize searching over programs that have loops. A search policy π_θ(σ|T) takes as input a trace set T and predicts a distribution over synthesis problems, each of which is written σ and corresponds to a set of possible programs to search over (so σ ⊆ DSL). Good policies will prefer tractable program spaces, so that the search procedure terminates early, but should also prefer program spaces likely to contain programs that concisely explain the data. These two desiderata are in tension: tractable synthesis problems involve searching over smaller spaces, but smaller spaces are less likely to contain good programs. Our goal now is to find the parameters of the policy, written θ, which best navigate this trade-off. Given a search policy, what is the best way of using it to quickly find minimum cost programs? We use a bias-optimal search algorithm BID19. Definition: Bias-optimality. A search algorithm is n-bias optimal with respect to a distribution P_bias[·] if it is guaranteed to find a solution in σ after searching for at least time n × t(σ) / P_bias[σ], where t(σ) is the time it takes to verify that σ contains a solution to the search problem. An example of a 1-bias optimal search algorithm is a time-sharing system that allocates a fraction P_bias[σ] of its time to trying σ. We construct a 1-bias optimal search algorithm by identifying P_bias[σ] = π_θ(σ|T) and t(σ) = t(σ|T), where t(σ|T) is how long the synthesizer takes to search σ for a program for T.
This means that the search algorithm explores the entire program space, but spends most of its time in the regions of the space that the policy judges to be most promising. In theory any π_θ(·|·) yields a bias-optimal searcher, but the actual runtime of the algorithm depends strongly upon the bias P_bias[·]. Our new approach is to learn P_bias[·] by picking the policy minimizing the expected bias-optimal time to solve a training corpus, D, of graphics program synthesis problems: LOSS(θ; D) = E_{T∼D}[ min_{σ∈BEST(T)} t(σ|T) / π_θ(σ|T) ], where σ ∈ BEST(T) if a minimum cost program for T is in σ. Practically, bias optimality has now bought us the following: (1) a guarantee that the policy will always find the minimum cost program; and (2) a differentiable loss function for the policy parameters that takes into account the cost of searching, in contrast to e.g. DeepCoder BID0. To generate a training corpus for learning a policy which minimizes this loss, we synthesized minimum cost programs for each trace set of our hand drawings and for each σ. We locally minimize this loss using gradient descent. Because we want to learn a policy from only 100 hand-drawn diagrams, we chose a simple low-capacity, bilinear model for the policy: π_θ(σ|T) ∝ exp( φ_params(σ)ᵀ θ φ_trace(T) ), where φ_params(σ) is a one-hot encoding of the parameter settings of σ (Tbl. 4: solve the problem piece-by-piece or all at once, {True, False}; maximum depth bound on the program syntax tree, {1, 2, 3}) and φ_trace(T) extracts a few simple features of the trace set T; see the supplement for details. Experiment 3: Figure 8. We compare synthesis times for our learned search policy with two alternatives: Sketch, which poses the entire problem wholesale to the Sketch program synthesizer; and an Oracle, a policy which always picks the quickest-to-search σ that also contains a minimum cost program. Our approach improves upon Sketch by itself, and comes close to the Oracle's performance. One could never construct this Oracle, because the agent does not know ahead of time which σ's contain minimum cost programs, nor does it know how long each σ will take to search. With this learned policy in hand we can synthesize 58% of programs within a minute. Why synthesize a graphics program, if the trace set already suffices to recover the objects in an image? Within our domain of hand-drawn figures, graphics program synthesis has several uses. The program synthesizer corrects errors made by the neural network by favoring trace sets which lead to more concise or general programs. For example, figures with perfectly aligned objects are preferable, and precise alignment lends itself to short programs. Concretely, we run the program synthesizer on the Top-k most likely trace sets output by the neurally guided sampler. Then, the system reranks the Top-k by the prior probability of their programs. The prior probability of a program is learned by picking the prior maximizing the likelihood of the ground truth trace sets; see the supplement for details. But this procedure can only correct errors when a correct trace set is in the Top-k. Our sampler could only do better on 7/100 drawings by looking at the Top-100 samples (see FIG3), precluding a statistically significant analysis of how much learning a prior over programs could help correct errors. But learning this prior does sometimes help correct mistakes made by the neural network; see Fig. 9 for a representative example of the kinds of corrections that it makes. See the supplement for details.
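The 1-bias-optimal time-sharing idea can be sketched as follows; this is illustrative only, with `policy` returning a probability for each candidate σ and `search_step` standing in for advancing the synthesizer on one σ for a short time slice.

```python
import time

# Sketch of 1-bias-optimal time sharing over candidate synthesis problems
# (sigmas).  `search_step(sigma, seconds)` advances the synthesizer on that
# sigma and returns a program object with a `.cost` attribute if one has been
# found, else None.
def bias_optimal_search(sigmas, policy, search_step, slice_seconds=0.1, budget_seconds=60.0):
    probs = policy(sigmas)                  # P_bias[sigma] for every sigma
    spent = {s: 0.0 for s in sigmas}        # time already devoted to each sigma
    start = time.time()
    best = None
    while time.time() - start < budget_seconds:
        # Give the next slice to the sigma furthest behind its target share:
        # a sigma with probability p should receive a fraction p of total time.
        total = sum(spent.values()) + 1e-9
        s = max(sigmas, key=lambda sig: probs[sig] - spent[sig] / total)
        result = search_step(s, slice_seconds)
        spent[s] += slice_seconds
        if result is not None and (best is None or result.cost < best.cost):
            best = result
    return best
```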
Having access to the source code of a graphics program facilitates coherent, high-level image editing. For example we can extrapolate figures by increasing the number of times that loops are executed. Extrapolating repetitive visuals patterns comes naturally to humans, and is a practical application: imagine hand drawing a repetitive graphical model structure and having our system automatically induce and extend the pattern. FIG0 shows extrapolations produced by our system. Program Induction: Our approach to learning to search for programs draws theoretical underpinnings from Levin search BID13 BID21 ) and Schmidhuber's OOPS model . DeepCoder BID0 ) is a recent model which, like ours, learns to predict likely program components. Our work differs because we treat the problem as metareasoning, identifying and modeling the trade-off between tractability and probability of success. TerpreT systematically compares constraint-based program synthesis techniques against gradient-based search techniques, like those used to train Differentiable Neural Computers BID9. The TerpreT experiments motivate our use of constraint-based techniques. Deep Learning: Our neural network bears resemblance to the Attend-Infer-Repeat (AIR) system, which learns to decompose an image into its constituent objects BID5. AIR learns an iterative inference scheme which infers objects one by one and also decides when to stop inference. Our network differs in its architecture and training regime: AIR learns a recurrent auto-encoding model via variational inference, whereas our parsing stage learns an autoregressive-style model from randomly-generated (trace, image) pairs. IM2LATEX BID2 ) is a recent work that also converts images to L A T E X. Their goal is to derender L A T E X equations, which recovers a markup language representation. Our goal is to go from noisy input to a high-level program, which goes beyond markup languages by supporting programming constructs like loops and conditionals. Recovering a high-level program is more challenging than recovering markup because it is a highly under constrained symbolic reasoning problem. Our image-to-trace parsing architecture builds on prior work on controlling procedural graphics programs BID18. We adapt this method to a different visual domain (figures composed of multiple objects), using a broad prior over possible scenes as the initial program and viewing the trace through the guide program as a symbolic parse of the target image. We then show how to efficiently synthesize higher-level programs from these traces. In the computer graphics literature, there have been other systems which convert sketches into procedural representations. One uses a convolutional network to match a sketch to the output of a parametric 3D modeling system BID11. Another uses convolutional networks to support sketch-based instantiation of procedural primitives within an interactive architectural modeling system BID14. Both systems focus on inferring fixed-dimensional parameter vectors. In contrast, we seek to automatically infer a structured, programmatic representation of a sketch which captures higher-level visual patterns. Hand-drawn sketches: Prior work has also applied sketch-based program synthesis to authoring graphics programs. Sketch-n-Sketch is a bi-directional editing system in which direct manipulations to a program's output automatically propagate to the program source code BID10. 
We see this work as complementary to our own: programs produced by our method could be provided to a Sketch-n-Sketch-like system as a starting point for further editing. The CogSketch system BID6 ) also aims to have a high-level understanding of handdrawn figures. Their primary goal is cognitive modeling (they apply their system to solving IQ-test style visual reasoning problems), whereas we are interested in building an automated AI application (e.g. in our system the user need not annotate which strokes correspond to which shapes; our neural network produces something equivalent to the annotations).The Trace Hypothesis: The idea that an execution trace could assist in program learning goes back to the 1970's BID22 and has been applied in neural models of program induction, like Neural Program Interpreters BID17, or DeepCoder, which predicts what functions occur in the execution trace BID0. Our contribution to this idea is the trace hypothesis: that trace sets can be inferred from perceptual data, and that the trace set is a useful bridge between perception and symbolic representation. Our work is the first to articulate and explore this hypothesis by demonstrating how a trace could be inferred and how it can be used to synthesize a high-level program. We have presented a system for inferring graphics programs which generate L A T E X-style figures from hand-drawn images. The system uses a combination of deep neural networks and stochastic search to parse drawings into symbolic trace sets; it then feeds these traces to a general-purpose program synthesis engine to infer a structured graphics program. We evaluated our model's performance at parsing novel images, and we demonstrated its ability to extrapolate from provided drawings. In the near future, we believe it will be possible to produce professional-looking figures just by drawing them and then letting an artificially-intelligent agent write the code. More generally, we believe the trace hypothesis, as realized in our two-phase system-parsing into trace sets, then searching for a low-cost symbolic program which generates those traces-may be a useful paradigm for other domains in which agents must programmatically reason about noisy perceptual input. Concretely, we implemented the following scheme: for an image I, the neurally guided sampling scheme of section 3 of the main paper samples a set of candidate traces, written F(I). Instead of predicting the most likely trace in F(I) according to the neural network, we can take into account the programs that best explain the traces. WritingT (I) for the trace the model predicts for image I, DISPLAYFORM0 where P β [·] is a prior probability distribution over programs parameterized by β. This is equivalent to doing MAP inference in a generative model where the program is first drawn from P β [·], then the program is executed deterministically, and then we observe a noisy version of the program's output, where L learned (I|render(·)) × P θ [·|I] is our observation model. Given a corpus of graphics program synthesis problems with annotated ground truth traces (i.e. (I, T) pairs), we find a maximum likelihood estimate of β: DISPLAYFORM1 where the expectation is taken both over the model predictions and the (I, T) pairs in the training corpus. We define P β [·] to be a log linear distribution ∝ exp(β · φ(program)), where φ(·) is a feature extractor for programs. 
We extract a few basic features of a program, such as its size and how many loops it has, and use these features to help predict whether a trace is the correct explanation for an image. We synthesized programs for the top 10 traces output by the deep network. Learning this prior over programs can help correct mistakes made by the neural network, and also occasionally introduces mistakes of its own; see FIG0 for a representative example of the kinds of corrections that it makes. On the whole it modestly improves our Top-1 accuracy from 63% to 67%. Recall that from Fig. 6 of the main paper that the best improvement in accuracy we could possibly get is 70% by looking at the top 10 traces. We measure the similarity between two drawings by extracting features of the best programs that describe them. Our features are counts of the number of times that different components in the DSL were used. We project these features down to a 2-dimensional subspace using primary component analysis (PCA); see FIG8. One could use many alternative similarity metrics between drawings which would capture pixel-level similarities while missing high-level geometric similarities. We used our learned distance metric between traces, L learned (·|·), and projected to a 2-dimensional subspace using multidimensional scaling (MDS: FORMULA10). This reveals similarities between the objects in the drawings, while missing similarities at the level of the program. Recall from the main paper that our goal is to estimate the policy minimizing the following loss: DISPLAYFORM0 where σ ∈ BEST(T) if a minimum cost program for T is in σ. We make this optimization problem tractable by annealing our loss function during gradient descent: DISPLAYFORM1 where DISPLAYFORM2 Notice that SOFTMINIMUM β=∞ (·) is just min(·). We set the regularization coefficient λ = 0.1 and minimize equation 4 using Adam for 2000 steps, linearly increasing β from 1 to 2.We parameterize the space of policies as a simple log bilinear model: DISPLAYFORM3 where: DISPLAYFORM4 For the model in FIG10, the distribution over the next drawing command factorizes as: DISPLAYFORM0 where t 1 t 2 · · · t K are the tokens in the drawing command, I is the target image, T is a trace set, θ are the parameters of the neural network, f θ (·, ·) is the image feature extractor (convolutional network), and a θ (·|·) is an attention mechanism. The distribution over traces factorizes as: DISPLAYFORM1 where |T | is the length of trace T, the subscripts on T index drawing commands within the trace (so T n is a sequence of tokens: t 1 t 2 · · · t K), and the STOP token is emitted by the network to signal that the trace explains the image. The convolutional network takes as input 2 256 × 256 images represented as a 2 × 256 × 256 volume. These are passed through two layers of convolutions separated by ReLU nonlinearities and max pooling:• Layer 1: 20 8 × 8 convolutions, 2 16 × 4 convolutions, 2 4 × 16 convolutions. Followed by 8 × 8 pooling with a stride size of 4.• Layer 2: 10 8 × 8 convolutions. Followed by 4 × 4 pooling with a stride size of 4. Given the image features f, we predict the first token (i.e., the name of the drawing command: circle, rectangle, line, or STOP) using logistic regression: DISPLAYFORM0 where W t1 is a learned weight matrix and b t1 is a learned bias vector. Given an attention mechanism a(·|·), subsequent tokens are predicted as: DISPLAYFORM1 Thus each token of each drawing primitive has its own learned MLP. 
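A minimal sketch of such a log-bilinear policy is given below; the feature dimensions and numbers are made up, and the exact parameterization used in the paper may differ.

```python
import numpy as np

# Log-bilinear policy over candidate synthesis problems sigma, given trace
# features: phi_params is a one-hot encoding of sigma's settings, phi_trace a
# small feature vector of the trace set, and theta a parameter matrix.
def policy_probs(theta, phi_params_list, phi_trace):
    scores = np.array([p @ theta @ phi_trace for p in phi_params_list])
    scores -= scores.max()                      # for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

# Example with 3 candidate sigmas, a 4-dimensional parameter encoding, and
# 2 trace features.
theta = np.zeros((4, 2))
phi_params_list = [np.eye(4)[i] for i in range(3)]
phi_trace = np.array([1.0, 0.5])
print(policy_probs(theta, phi_params_list, phi_trace))
```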
For predicting the coordinates of lines we found that using 32 hidden nodes with sigmoid activations worked well; for other tokens the MLP's are just logistic regression (no hidden nodes).We use Spatial Transformer Networks as our attention mechanism. The parameters of the spatial transform are predicted on the basis of previously predicted tokens. For example, in order to decide where to focus our attention when predicting the y coordinate of a circle, we condition upon both the identity of the drawing command (circle) and upon the value of the previously predicted x coordinate: DISPLAYFORM2 So, we learn a different network for predicting special transforms for each drawing command (value of t 1) and also for each token of the drawing command. These networks (MLP t1,n in equation 11) have no hidden layers and output the 6 entries of an affine transformation matrix; see FORMULA4 for more details. Training takes a little bit less than a day on a Nvidia TitanX GPU. The network was trained on 10 5 synthetic examples. We compared our deep network with a baseline that models the problem as a kind of image captioning. Given the target image, this baseline produces the program trace in one shot by using a CNN to extract features of the input which are passed to an LSTM which finally predicts the trace token-by-token. This general architecture is used in several successful neural models of image captioning (e.g., FORMULA9).Concretely, we kept the image feature extractor architecture (a CNN) as in our model, but only passed it one image as input (the target image to explain). Then, instead of using an autoregressive decoder to predict a single drawing command, we used an LSTM to predict a sequence of drawing commands token-by-token. This LSTM had 128 memory cells, and at each time step produced as output the next token in the sequence of drawing commands. It took as input both the image representation and its previously predicted token. Our architecture for L learned (render(T 1)|render(T 2)) has the same series of convolutions as the network that predicts the next drawing command. We train it to predict two scalars: |T 1 − T 2 | and |T 2 − T 1 |. These predictions are made using linear regression from the image features followed by a ReLU nonlinearity; this nonlinearity makes sense because the predictions can never be negative but could be arbitrarily large positive numbers. We train this network by sampling random synthetic scenes for T 1, and then perturbing them in small ways to produce T 2. We minimize the squared loss between the network's prediction and the ground truth symmetric differences. T 1 is rendered in a "simulated hand drawing" style which we describe next. We introduce noise into the L A T E X rendering process by:• Rescaling the image intensity by a factor chosen uniformly at random from [0. 5, 1.5] • Translating the image by ±3 pixels chosen uniformly random• Rendering the L A T E X using the pencildraw style, which adds random perturbations to the paths drawn by L A T E Xin a way designed to resemble a pencil.• Randomly perturbing the positions and sizes of primitive L A T E Xdrawing commands 6 LIKELIHOOD SURROGATE FOR SYNTHETIC DATA For synthetic data (e.g., L A T E X output) it is relatively straightforward to engineer an adequate distance measure between images, because it is possible for the system to discover drawing commands that Figure 5: Example synthetic training data exactly match the pixels in the target image. 
We use: DISPLAYFORM0 where α, β are constants that control the trade-off between preferring to explain the pixels in the image (at the expense of having extraneous pixels) and not predicting pixels where they don't exist (at the expense of leaving some pixels unexplained). Because our sampling procedure incrementally constructs the scene part-by-part, we want α > β. That is, it is preferable to leave some pixels unexplained; for once a particle in SMC adds a drawing primitive to its trace that is not actually in the latent scene, it can never recover from this error. In our experiments on synthetic data we used α = 0.8 and β = 0.04. We generated synthetic training data for the neural network by sampling L A T E X code according to the following generative process: First, the number of objects in the scene are sampled uniformly from 1 to 12. For each object we uniformly sample its identity (circle, rectangle, or line). Then we sample the parameters of the circles, than the parameters of the rectangles, and finally the parameters of the lines; this has the effect of teaching the network to first draw the circles in the scene, then the rectangles, and finally the lines. We furthermore put the circle (respectively, rectangle and line) drawing commands in order by left-to-right, bottom-to-top; thus the training data enforces a canonical order in which to draw any scene. To make the training data look more like naturally occurring figures, we put a Chinese restaurant process prior FORMULA14 over the values of the X and Y coordinates that occur in the execution trace. This encourages reuse of coordinate values, and so produces training data that tends to have parts that are nicely aligned. In the synthetic training data we excluded any sampled scenes that had overlapping drawing commands. As shown in the main paper, the network is then able to generalize to scenes with, for example, intersecting lines or lines that penetrate a rectangle. When sampling the endpoints of a line, we biased the sampling process so that it would be more likely to start an endpoint along one of the sides of a rectangle or at the boundary of a circle. If n is the number of points either along the side of a rectangle or at the boundary of a circle, we would sample an arbitrary endpoint with probability 2 2+n and sample one of the "attaching" endpoints with probability 1 2+n. See FIG3 for examples of the kinds of scenes that the network is trained on. For readers wishing to generate their own synthetic training sets, we refer them to our source code at: redactedForAnonymity.com. We seek the minimum cost program which evaluates to (produces the drawing primitives in) an execution trace T: DISPLAYFORM0 Programs incur a cost of 1 for each command (primitive drawing action, loop, or reflection). They incur a cost of 1 3 for each unique coefficient they use in a linear transformation beyond the first coefficient. This encourages reuse of coefficients, which leads to code that has translational symmetry; rather than provide a translational symmetry operator as we did with reflection, we modify what is effectively a prior over the space of program so that it tends to produce programs that have this symmetry. Programs also incur a cost of 1 for having loops of constant length 2; otherwise there is often no pressure from the cost function to explain a repetition of length 2 as being a reflection rather a loop. Below we show our full data set of drawings. The leftmost column is a hand drawing. 
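A simplified sketch of that generative process is shown below. It is illustrative rather than the released code: the attachment bias for line endpoints, the canonical left-to-right ordering, and the overlap check are omitted, and the coordinate reuse is a toy Chinese-restaurant-process-style rule.

```python
import random

def crp_coordinate(used, alpha=1.0, grid=16):
    # Reuse an existing coordinate with probability proportional to how many
    # have been used, otherwise draw a fresh value on the grid.
    if used and random.random() < len(used) / (len(used) + alpha):
        return random.choice(used)
    value = random.randrange(grid)
    used.append(value)
    return value

def sample_scene(max_objects=12):
    n = random.randint(1, max_objects)
    kinds = [random.choice(["circle", "rectangle", "line"]) for _ in range(n)]
    kinds.sort(key={"circle": 0, "rectangle": 1, "line": 2}.get)  # circles, then rectangles, then lines
    used, scene = [], []
    for kind in kinds:
        if kind == "circle":
            scene.append(("circle", crp_coordinate(used), crp_coordinate(used)))
        elif kind == "rectangle":
            scene.append(("rectangle", *(crp_coordinate(used) for _ in range(4))))
        else:
            scene.append(("line", *(crp_coordinate(used) for _ in range(4)),
                          random.random() < 0.5, random.random() < 0.5))
    return scene

print(sample_scene())
```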
The middle column is a rendering of the most likely trace discovered by the neurally guided SMC sampling scheme. The rightmost column is the program we synthesized from a ground truth execution trace of the drawing. Note that because the inference procedure is stochastic, the single most likely sample can vary from run to run. Below we report a representative sample from a run with 2000 particles.

(Full gallery of hand drawings, rendered traces, and synthesized programs omitted: the listed programs are built from the primitives circle, rectangle, and line, composed with for-loops and reflections; one entry hit a solver timeout.)

| Learn to convert a hand drawn sketch into a high-level program | 1,015 | scitldr
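To make the program cost function described above concrete — one unit for each command (primitive drawing action, loop, or reflection), 1/3 for each unique linear-transformation coefficient beyond the first, and an extra unit for loops of constant length 2 — here is a minimal Python sketch. The dictionary-based program representation and the choice to count unique coefficients over the whole program are illustrative assumptions, not the representation used in the synthesizer.

def program_cost(commands):
    """Cost of a candidate program, following the description above."""
    cost = 0.0
    unique_coeffs = set()
    for cmd in commands:
        cost += 1.0  # each command (primitive drawing action, loop, or reflection) costs 1
        unique_coeffs.update(cmd.get("coeffs", []))
        if cmd["kind"] == "loop" and cmd.get("length") == 2:
            cost += 1.0  # penalize constant-length-2 loops so reflections are preferred
    # 1/3 per unique linear-transformation coefficient beyond the first (encourages reuse)
    cost += max(0, len(unique_coeffs) - 1) / 3.0
    return cost

# Hypothetical program: a loop over lines plus two primitives.
program = [
    {"kind": "loop", "length": 3, "coeffs": [2, -6]},
    {"kind": "line", "coeffs": []},
    {"kind": "rectangle", "coeffs": []},
]
print(program_cost(program))  # 3 commands + (2 - 1)/3 for the extra coefficient = 3.33...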
Adversarial examples remain an issue for contemporary neural networks. This paper draws on Background Check , a technique in model calibration, to assist two-class neural networks in detecting adversarial examples, using the one dimensional difference between logit values as the underlying measure. This method interestingly tends to achieve the highest average recall on image sets that are generated with large perturbation vectors, which is unlike the existing literature on adversarial attacks . The proposed method does not need knowledge of the attack parameters or methods at training time, unlike a great deal of the literature that uses deep learning based methods to detect adversarial examples, such as , imbuing the proposed method with additional flexibility. Adversarial examples are specially crafted input instances generated by adversarial attacks. The term was introduced by BID23 in the context of image classification. These attacks generate, or manipulate data, to achieve poor performance when classified by neural networks, which poses existential questions about their usage in high stakes security critical applications. Since they were introduced, there have been many papers that have introduced novel attack methods and other papers that attempt to combat those attacks. For instance, BID5 introduces the fast gradient sign method (FGSM), and BID20 proposes a method based on modifying the gradient of the softmax function as a defence. Adversarial attacks can be identified into various classes such as white box and black box, where in the former, the attack has full knowledge of all model parameters. Examples created by these attacks can be false positives or false negatives. In the case of images, they can be nonsensical data (e.g. noise classified as a road sign) or clear cut (e.g. a visually clear cat, classified as a road sign). These attacks can be non-targeted or targeted such that the classifier chooses a specific class for the adversarial example. Various adversarial defences exist, some based on deep learning techniques and others on purely distributional techniques. Similar work on adversarial defences has been done by BID6, in which the network is trained on specific attack types and parameters with an additional outlier class for adversarial examples. A multi-dimensional statistical test over the maximum mean discrepancy and the energy distance on input features is then used to classify instances as adversarial. Other work has been done by BID0, where Gaussian Processes are placed on top of conventional convolutional neural network architectures, with radial basis kernels, imbuing the neural network with a way of understanding its own perceptual limits. The authors find that the network becomes more resistant to adversarial attack. The work that follows continues in a similar vein to both of these methods. Some methods such as BID14 use sub-units of deep learning architectures to detect adversarial instances. Calibration is a technique of converting model scores, normally, through application of a post processing function, to probability estimates. Background Check is a method to yield probability estimates, via a set of explicit assumptions, in regions of space where no data has been observed. In this work, Background Check is useful in producing calibrated probabilities for adversarial data that often exists in regions where no training and test data has been seen. Reliable probability estimates can then be measured by calibration and refinement loss. 
Various calibrating procedures exist such as binning, logistic regression, isotonic regression and softmax. BID8 demonstrates the logistic function is optimal when the class-conditional densities are Gaussians with unit variance. Softmax extends this to multi-variate Gaussian densities with unit variance. Calibration of neural network models has been performed by BID7, using a method called Temperature Scaling, that modifies the gradient of the softmax function allowing softmax to calibrate densities with non-unit variance. The authors perform this calibration after noticing that calibration loss for neural networks has increased in recent years. When adversarial attacks against neural networks are brought into perspective, a problem arises for existing calibration techniques, which is the question of mapping adversarial logit scores to reliable probability estimates (which should be zero for a successful adversarial attack). In this work, a method is demonstrated that uses Background Check to identify adversarial attacks. A classifier is said to be well calibrated, if, as the number of predictions approaches infinity, the proportion of outcomes given probability p, occur p fraction of the time. Denoting x 1, x 2,..., x n as data-set instances and y 1, y 2,..., y n as their corresponding ground truth class labels, a scoring classifier s = f (x i) has a calibrating function µ applied to it yielding µ(f (x i)). Perfect calibration is defined as the expectation DISPLAYFORM0 where random variables X, Y denote the features and class label of a uniformly randomly drawn instance from the dataset respectively, such that Y = 1, Y = 0 represent an individual positive and negative instance, respectively. To visualize calibration performance of a classifier, BID17 plots the observed frequency of an event against the predicted frequency yielding calibration curves. Calibration curves plot the observed relative frequency against the predicted probability for all test data. Perfect calibration occurs if the calibration curve exactly fits the identity line. Refinement loss measures the difference between a probability estimate and zero or one. This can be combined with a frequency distribution, giving an indication of spread, which is useful if there are few events associated with a particular probability. The notion of refinement is also useful when considering calibration. By considering the crude constant classifier, which predicts the probability corresponding to the class distribution for all inputs, it is clear that this calibration estimate is perfectly calibrated. However, an intuitively more valuable calibration estimate, is one which predicts a value closer to either zero or one. For this reason, BID3 suggests measuring refinement loss, which measures the distance of the classifiers probability estimates to either zero or one. Together, calibration and refinement loss make up the Brier Score. BID10 defines calibration and refinement loss. DISPLAYFORM1 ] is the loss due to the expected difference between the model score S and the proportion of positives among instances (observed relative frequency) with the same score. ] is the loss due to the presence of instances from multiple classes among instances with the same estimate S. In the worst case, this clearly reduces to the crude constant classifier mentioned above. An instance of recent work related to calibration is Beta calibration. 
Beta calibration BID11 is based on the beta distribution which includes functions such as the logit, sigmoid and identity. This allows it to calibrate scores produced by models such as naive bayes, which biases its scores towards extremities when the assumption of feature independence is not met, using the inverse sigmoid or logit function, a function which is not in softmax's repertoire. In the context of adversarial attacks, the optimization problem that is often formulated to construct adversarial examples is shown. formed example, subject to the constraints above. Intuitively, these constraints coerce the network into mis-classifying each example with a minimal perturbation vector. Example distance metrics that measure the size of the perturbation include L p metrics, as well as the PASS score BID22, a metric designed to better reflect notions of psycho-physical similarity than L p metrics. Adversarial attack methods include L-BFGS from BID23, which uses linear trial and error to find a c for each data instance such that c multiplied by an arbitrary small perturbation vector mis-classifies the instance. FGSM builds on the L-BFGS method, replacing expensive linear search with gradient descent to find the perturbation vector. This uses the following optimization strategy to create the perturbation vector η, such that x is the input to the model, y is the predicted class, θ are the model parameters and J(θ, x, y) is the cost function. x indicates the derivative of the cost function is taken with respect to the input x. The sign function takes the sign of the ing derivative. The adversarial example is then constructed as x = x + η. DISPLAYFORM0 A momentum term can be added to the gradient descent process to yield the work of BID4. The BIM attack by BID12, uses a smaller noise vector produced by FGSM, applied iteratively, before a clip operation is applied to the ing image, keeping it within the maximum image pixel values after each iteration. JSMA is a forward derivative approach, by BID19, which uses a Jacobian matrix, to produce an adversarial saliency map, which indicate the features that when (positively or negatively) perturbed, most efficiently achieve a desired network output. DeepFool BID16 finds an image that is at a minimal distance to the decision boundary from the proposed example, to another target class which isn't the source class, treating multi-class classifiers as combinations of binary affine classifiers. Background Check, introduced by BID21, is the calibrating procedure that our proposed method uses to defend neural networks from adversarial attacks. Background Check takes as input, the scores from logit vectors and maps them to probabilities, replacing softmax after training. Background Check provides a framework to classify regions where no previous data has been seen as regions, that in the context of the proposed method will be classified as regions where adversarial data may lie. Background Check also provides a framework to resolve ambiguity inherent in a single value representing probability. More specifically, Background Check provides two values to represent a single probability value of a data instance. One value represents distance from the data density, and another represents the certainty of a particular class. This approach avoids overloading the meaning of a single number representing probability. More specifically, an uncertainty of 1 /n for all classes, could represent an instance very close to the decision boundary or very far away from training data. 
On the other hand, an output of zero, could represent a point very far away from training data or a classification that the data is definitely not that particular class. Background Check introduces an additional outlier class, b, representing regions of space where data is sparse or non-existent. Then, a foreground class is introduced, f. This represents regions of space where data is plentiful or dense, i.e. data from any class apart from the class, is abundant. b is introduced as an additional class whilst f is kept as a reference class. Every instance x necessarily belongs to either f or b. P (b|x) = 0 and P (f |x) = 1 refers to absolute certainty that the instance belongs to one of the classes with sufficient training data, where P (C|x) is a conditional probability measure. The ratio of the two conditional measures defines the reliability factor r(x). DISPLAYFORM0 If r(x) ≤ 1, the classification that the reliability factor indicates is b, else if r(x) > 1 then the classification is f. P (x, f) and P (x, b) are referred to as the foreground and densities. Furthermore, the relative foreground and densities can be defined, q f (x) and q b (x). DISPLAYFORM1 Intuitively, the relative density outputs the proportion of f or b, at the point in space corresponding to the instance x being evaluated. Simple dividing q f (x) by q b (x) yields r(x). DISPLAYFORM2 To construct q b (x), four inductive biases, in increasing strength, are given. This work only uses the third inductive bias.1. Inductive bias 1: q b (x) is a function of q f (x). This is justified, by the idea that with no other information, there is no reason to assign different densities to points with the same foreground density. The domain knowledge informs the function used µ: DISPLAYFORM3 2. Inductive bias 2: monotonicity of µ, that is, when moving to a region with higher foreground density the increases or decreases.3. Inductive bias 3: an affine bias, i.e. µ(x) = ax + b or by replacing a and b: DISPLAYFORM4 4. Inductive bias 4: constant ie µ, µ = 0.5. The key notion in Background Check is the implementation of the inductive bias that shapes q b. This implementation is provided in two different different ways.1. BCD: referred to as the discriminative approach, involves generating artificial instances around foreground data and then training a binary discriminative classifier to separate them. The instances are generated in a hypercube or a hypersphere, such that the is half as dense as max x P (x, f).2. BCF: referred to as the familiarity approach, this involves fitting a one class model on the foreground data to obtain q f, then using an inductive bias to obtain q b. The data, x, that is being fit must have an underlying measure. The familiarity factor r(x) can be found, allowing, the posterior probabilities P (b|x) and P (f |x) to be computed. The implementation that the proposed method in this work uses, is the BCF method for its speed in high dimensional spaces. The measure underlying the space was the one-dimensional L 1 difference between elements of the logit vector. For instance, given a two-class score vector [−5, 5], then the score difference is 10. This measure represents the distance of a data point to the softmax decision boundary. The one-class model used to fit q f (x) was a gamma function optimized using maximum likelihood estimation. To establish the link from q f (x) to q b (x), the third inductive bias was used with µ = 1 and µ = 0, with domain knowledge informing the use of a power value. 
This link manifests itself in the equation below. DISPLAYFORM0 One neural network for each attack type, parameter combination and dataset, is trained with the Adam optimizer . A batch size of 256 and a learning rate of 0.001 is employed. The biases of the neurons are set to 0 and the weights are sampled uniformly at random Figure 1: Background Check applied to the score difference of two, two class neural network logit vectors (one network each for the top and bottom figures). The training, test and adversarial images, in blue, orange and green, respectively, are organised into ten bins each. The adversarial data for the top figure was generated by the momentum attack, with very large perturbations applied. The bottom figure has adversarial data crafted by the JSMA attack with a moderate perturbation vector applied. The green line represents q f, the foreground relative density and q b represents the relative density. It is clear, that the adversarial images in both cases are distinctly separated from the training and test data, which overlap to the point of in-distinguishability. In the top figure, the adversarial images have logit differences much larger than the training/test data compared to much smaller logit differences for the bottom figure. from the interval. Each neural network is tasked with classifying a variety of paired class combinations from the CIFAR10 dataset. These pairs were: dog versus plane (DVP), fish vs ship (FVS) and airplane vs horse (AVH). Regularization techniques such as L 2 regularization and dropout, with p = 0.5 were applied to the networks. The training vs test ratio is 5:1, such that each class has exactly 1000 test images and 5000 training images. The images were pre-processed such that all image pixel values were floating point values between zero and one, rather than values between 0 and 255. Seven different adversarial attacks, from the cleverhans API BID18, were tested against the networks with a variety of parameter combinations. The adversarial images were generated from test data. The attacks were all white-box attacks and performed on the network which included a final softmax layer in its structure. The final two-class average recall of each network on the validation set of the network was always above 80% after only 300 iterations over the training data. The attacks had different effects on each image, for identical parameter settings and as such, the parameters were subjectively chosen based on whether the image fell into one of four classes. Some attacks, due to a mixture of constraints and difficulty in searching the adversarial image space, only had images in one or two of the available classes.1. Large -Image consists entirely of noise.2. Typical -Moderate noise levels, underlying image recognizable.3. Small -Recognizable noise, but clear image.4. Very Small -No noticeable noise, clear image. For instance, BIM an iterative method had 10 iterations, with: 0.8 and i: 0.05, for a large perturbation vector, yet the method from BID15, a non-iterative method had: 12.0. The full parameter settings are listed in the appendix. The average recall, defined in the equation below, rather than accuracy, is evaluated due to the presence of varying class proportions. The 2-class average recall is on the two classes in CIFAR-10. The 3-class average recall includes the adversarial class. 
DISPLAYFORM0 The table of demonstrates that large perturbation vectors, associate with a mean reduction in average recall of 11.6, whereas for very small perturbation vectors, the mean reduction in average recall is 35.7. Typical perturbation vectors have a mean reduction in average recall of 6.9. In all cases except for two, the average recall decreases. It is clear that the adversarial TPR generally increases as the size of the perturbation vector increases. In particular, three out of the five large perturbation vectors achieve a TPR on the adversarial class of 100%. All models achieve higher average recall than the baseline, which was simply a strategy that with uniform randomness guesses the class. In order to visualize areas where Background Check assigns foreground and densities, it is helpful to construct histograms. When constructing histograms of the test, training and adversarial L 1 logit differences, three categories were established over the space.1. Adversarial examples in a distinct cluster, closer to the decision boundary, than the training and test data.2. Adversarial examples scattered amongst the training/test data.3. Adversarial examples in a distinct cluster, further from the decision boundary, than the training and test data. The JSMA and DeepFool attacks found logit differences smaller than the test and training logits, yet still high enough to yield a significant confidence level when applied to the softmax function. The Madry, Momentum and BIM attacks produced logit differences far higher than the test and training logit differences. However, some attacks found logit differences within the test and training distributions. Figure 2: Madry attack from BID13 in category 3, with a large perturbation vector applied. The differences between the per-class logit values are large, at least 70, albeit less than some of the other attacks which have score differences in the hundreds. The adversarial images with scores represented by the green histogram are very far away from the training and test data, which means that Background Check will separate them well. The central example successful adversarial image is noise, due to the large perturbation applied. The top left confusion matrix is that of the neural network without Background Check. The top right confusion matrix is that of the neural network with Background Check applied to it. The confusion matrices are relatively coloured with the true labels on the vertical axis and the predicted labels on the horizontal axis. The histogram in the bottom of each image uses ten bins, whose size is chosen by scipy, relative to the spread of the data. The y-axis or height of each bar of the blue, orange and green histograms represent the relative frequency of examples in the training, test and adversarial classes, respectively. The x-axis represents the score difference between the logit vectors. The left hand confusion matrix, on the bottom row, shows the attack had equivalent ease generating adversarial images from either class. The right hand confusion matrix shows a reduction in average recall due to the many, 198/1000 false negatives, for the first class, and 136/1000 for the second, which, because the classifier is evaluated on the average recall, will make a large difference to the final average recall. For the histograms of all of the methods, please see the appendix. 
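As an illustration of how the BCF (familiarity) variant of Background Check described above can be applied to the one-dimensional logit differences, here is a minimal sketch. The gamma fit by maximum likelihood and the decision rule r(x) ≤ 1 follow the text; the specific link from q_f to q_b (an affine bias with end-points 1 and 0 combined with a power term) and the value of the power are assumptions made for illustration only.

import numpy as np
from scipy.stats import gamma

def fit_foreground(train_logit_diffs):
    # One-class model over the L1 logit differences, fitted by maximum likelihood.
    shape, loc, scale = gamma.fit(train_logit_diffs)
    peak = gamma.pdf(train_logit_diffs, shape, loc=loc, scale=scale).max()
    return (shape, loc, scale), peak

def is_background(logit_diffs, params, peak, power=2.0):
    shape, loc, scale = params
    # Relative foreground density q_f in [0, 1].
    q_f = gamma.pdf(logit_diffs, shape, loc=loc, scale=scale) / peak
    # Assumed link from q_f to q_b (affine bias with end-points 1 and 0, plus a power term).
    q_b = 1.0 - q_f ** power
    r = q_f / np.maximum(q_b, 1e-12)  # reliability factor r(x)
    return r <= 1.0                   # True -> background, i.e. flagged as adversarial

# Usage sketch with synthetic values standing in for |logit_1 - logit_2|.
rng = np.random.default_rng(0)
train_diffs = rng.gamma(shape=4.0, scale=3.0, size=2000)
params, peak = fit_foreground(train_diffs)
print(is_background(np.array([0.05, 12.0, 300.0]), params, peak))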
Figure b) shows the trend of the pairwise differences as the attack parameters increase for the attack from BID13 7 DISCUSSIONBackground check models the data density over the logit differences. It is clear from the figures in the appendix that the adversarial attacks find examples in regions where the test and training data do not exist. Thus, Background Check improves the discriminative performance of the classifier when dealing with adversarial examples with large perturbation vectors. These ing images are noisy and hard to allocate a non-ambiguous class, though we argue that these images can occur and be just as damaging in the real world and as such need to be defended against. It would be useful to follow on from this method with an analysis of generative probability estimation and a corresponding measure of calibration and refinement loss. Promising future research would scale up the logit difference metric underlying Background Check to higher dimensional spaces to deal with a full ten classes to allow for comparability to mainstream literature on adversarial defences. Possible metrics that can underlie Background Check could use the energy distance. In addition, Background Check could be applied to each layer of a neural network. However, this must be setup such that it does not interfere with the ability of the neural network to generalize. The performance of Background Check can be measured in different ways. For example, BID1 proposes performance measures for classification systems with the rejection option. These measures consist of metrics such as the non-rejected accuracy, which measures the ability of the classifier to accurately classify non-rejected samples. The classification quality, which measures the correct decision making of the classifier with the rejector and finally, the rejection quality, which measures the ability to concentrate all mis-classified samples onto the set of rejected samples. A novel approach to defending neural networks against adversarial attacks has been established. This approach intersects two previously unrelated fields of machine learning, calibration and adversarial defences, using the principles underlying Background Check. This work demonstrates that adversarial attacks, produced as a of large perturbations of various forms, can be detected and assigned to an adversarial class. The larger the perturbation, the easier it was for the attacks to be detected. | This paper uses principles from the field of calibration in machine learning on the logits of a neural network to defend against adversarial attacks | 1,016 | scitldr |
We present a novel multi-task training approach to learning multilingual distributed representations of text. Our system learns word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model. We construct sentence embeddings by processing word embeddings with an LSTM and by taking an average of the outputs. Our architecture can transparently use both monolingual and sentence aligned bilingual corpora to learn multilingual embeddings, thus covering a vocabulary significantly larger than the vocabulary of the bilingual corpora alone. Our model shows competitive performance in a standard cross-lingual document classification task. We also show the effectiveness of our method in a low-resource scenario. Learning distributed representations of text, whether it be at the level of words BID26; BID28, phrases BID32; BID30, sentences BID18 or documents BID22, has been one of the most widely researched subjects in natural language processing in recent years. Word/sentence/document embeddings, as they are now commonly referred to, have quickly become essential ingredients of larger and more complex NLP systems BID4; BID25 BID8; BID1; BID6 looking to leverage the rich semantic and linguistic information present in distributed representations. One of the exciting avenues of research that has been taking place in the context of distributed text representations, which is also the subject of this paper, is learning multilingual text representations shared across languages BID11; BID3; BID24. Multilingual embeddings open up the possibility of transferring knowledge across languages and building complex NLP systems even for languages with limited amount of supervised resources BID0; BID17. By far the most popular approach to learning multilingual embeddings is to train a multilingual word embedding model that is then used to derive representations for sentences and documents by composition BID14. These models are typically trained solely on word or sentence aligned corpora and the composition models are usually simple predefined functions like averages over word embeddings BID14; BID27 or parametric coposition models learned along with the word embeddings. In this work we learn word and sentence embeddings jointly by training a multilingual skip-gram model BID24 together with a cross-lingual sentence similarity model. The multilingual skip-gram model transparently consumes (word, context word) pairs constructed from monolingual as well as sentence aligned bilingual corpora. We use a parametric composition model to construct sentence embeddings from word embeddings. We process word embeddings with a Bi-directional LSTM and then take an average of the LSTM outputs, which can be viewed as context dependent word embeddings. Since our multilingual skip-gram and cross-lingual sentence similarity models are trained jointly, they can inform each other through the shared word embedding layer and promote the compositionality of learned word embeddings at training time. Further, the gradients flowing back from the sentence similarity model can affect the embeddings learned for words outside the vocabulary of the parallel corpora. We hypothesize these two aspects of our model lead to more robust sentence embeddings. 
Our contributions are as follows:• Scalable approach: We show that our approach performs better as more languages are added, since represent the extended lexicon in a suitable manner.• Ability to perform well in low-resource scenario: Our approach produces representations comparable with the state-of-art multilingual sentence embeddings using a limited amount of parallel data. Our sentence embedding model is trained end-to-end on a vocabulary significantly larger than the vocabulary of the parallel corpora used for learning crosslingual sentence similarity.• Amenable to Multi-task modeling: Our model can be trained jointly with proxy tasks, such as sentiment classification, to produce more robust embeddings for downstream tasks. This section gives a brief survey of relevant literature. For a through survey of cross-lingual text embedding models, please refer to BID31. BID33 and 4. joint optimization: using both parallel and monolingual corpora BID19; BID24; BID36; BID9. We adopt the skip-gram architecture of BID24 and train a single multilingual model using monolingual data from each language as well as any sentence aligned bilingual data available for any language pair. Cross-lingual Sentence Embeddings: Some works dealing with cross-lingual word embeddings have considered the problem of constructing sentence embeddings including BID34; BID29 BID14. In general, it is not trivial to construct crosslingual sentence embeddings by composing word embeddings as the semantics of a sentence is a complex language-dependent function of its component words as well as their ordering. BID29 addresses this difficulty by extending the paragraph vector model of BID22 to the bilingual context which models the sentence embedding as a separate context vector used for predicting the n-grams from both sides of the parallel sentence pair. At test time, the sentence vector is randomly initialized and trained as part of an otherwise fixed model to predict the n-grams of the given sentence. Our sentence embedding model is closer to the approach taken in BID14. They construct sentence embeddings by taking average of word or bi-gram embeddings and use a noise-contrastive loss based on euclidean distance between parallel sentence embeddings to learn these embeddings. Multi-task Learning: Multi-task learning has been employed in various NLP applications where the parameters are shared among tasks BID7; BID23; BID12. BID23 show the effectiveness of multi-task learning in multiple sentiment classification tasks by sharing an RNN layer across tasks while learning separate prediction layers for each task. BID37 recently showed benefits of learning a common semantic space for multiple tasks which share a low level feature dictionary. Our multi-task architecture treats training multilingual word embeddings as a separate task with a separate objective as opposed to training them beforehand or training them only as part of a larger model. Our model is trained to optimize two separate objectives: multilingual skip-gram BID24 and cross-lingual sentence similarity. These two tasks are trained jointly with a shared word embedding layer in an end-to-end fashion. Overview of the architecture that we use for computing sentence representations R S and R T for input word sequences S and T. Multilingual skip-gram model BID24 extends the traditional skip-gram model by predicting words from both the monolingual and the cross-lingual context. 
The monolingual context consists of words neighboring a given word as in the case of the traditional skip-gram model. The cross-lingual context, on the other hand, consists of words neighboring the target word aligned with a given source word in a parallel sentence pair. FIG0, shows an example alignment, where an aligned pair of words are attached to both their monolingual and bilingual contexts. For a pair of languages L1 and L2, the word embeddings are learned by optimizing the traditional skip-gram objective with (word, context word) pairs sampled from monolingual neighbors in L1 → L1 and L2 → L2 directions as well as cross-lingual neighbors in L1 → L2 and L2 → L1 directions. In our setup, cross-lingual pairs are sampled from parallel corpora while monolingual pairs are sampled from both parallel and monolingual corpora. We use a parametric composition model to construct sentence embeddings from word embeddings. We process word embeddings with a bi-directional; BID15 and then take an average of the LSTM outputs. There are various implementations of LSTMs available; in this work we use an implementation based on BID40. The LSTM outputs (hidden states) contextualize input word embeddings by encoding the history of each word into its representation. We hypothesize that this is better than averaging word embeddings as sentences generally have complex semantic structure and two sentences with different meanings can have exactly the same words. In FIG1, the word embeddings x i are processed with a bi-directional LSTM layer to produce h i. Bi-directional LSTM outputs are then averaged to get a sentence representation. Learning Method: Let R: S → R d denote our sentence encoder mapping a given sequence of words S to a continuous vector in R d. Given a pair of parallel sentences (S, T), we define the loss L of our cross-lingual sentence encoder model as: DISPLAYFORM0 Therefore, for similar sentences (S ≈ T), we minimize the loss L ST between their embeddings. We also use a noise-constrastive large-margin update to ensure that the representations of non-aligned sentences observe a certain margin from each other. For every parallel sentence pair (S, T) we randomly sample k negative sentences N i, i = 1... k. With high probability N i is not semantically equivalent to S or T.We define our loss for a parallel sentence pair as follows: DISPLAYFORM1 Without the LSTM layer, this sentence encoder is similar to the except that we use also the reversed sample (T, S) to train the model, therefore showing each pair of sentences to the model two times per epoch. Following the literature, we use The Europarl corpus v71 BID20 for initial development and testing of our approach. We use the first 500K parallel sentences for each of the EnglishGerman (en-de), English-Spanish (en-es) and English-French (en-fr) language pairs. We keep the first 90% for training and the remaining 10% for development purposes. We also use additional 500K monolingual sentences from the Europarl corpus for each language. These sentences do not overlap with the sentences in parallel data. Words which have a frequency less than 5 for a language are replaced with the <unk> symbol. In the joint multi-task setting, the word frequencies are counted using the combined monolingual and parallel corpora. When using just the parallel data for the en-de pair, the vocabulary sizes are 39K for German (de) and 21K for English (en). Vocabulary sizes are 120K for German and 68K for English when both the parallel and the monolingual data are used. 
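A minimal sketch of the cross-lingual sentence similarity objective described above, written for pre-computed sentence embeddings R(S) and R(T): the squared Euclidean form of L_ST and drawing the k negative sentences from the same mini-batch are assumptions consistent with the noise-contrastive large-margin description, not an exact reproduction of the implementation.

import numpy as np

def pair_loss(r_s, r_t):
    # L_ST: distance between the embeddings of aligned sentences S and T.
    return np.sum((r_s - r_t) ** 2)

def margin_loss(r_s, r_t, r_negs, margin):
    # Noise-contrastive large-margin loss for one parallel pair (S, T)
    # against k negative sentences N_1..N_k.
    loss = pair_loss(r_s, r_t)
    for r_n in r_negs:
        loss += max(0.0, margin + pair_loss(r_s, r_t) - pair_loss(r_s, r_n))
    return loss

def batch_loss(R_src, R_tgt, margin, k=5, rng=np.random):
    # Each pair is also presented in the reversed direction (T, S), as described above.
    n = len(R_src)
    total = 0.0
    for i in range(n):
        negs_idx = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        total += margin_loss(R_src[i], R_tgt[i], R_tgt[negs_idx], margin)
        total += margin_loss(R_tgt[i], R_src[i], R_src[negs_idx], margin)
    return total / n

# Usage: 128-dimensional sentence embeddings for a batch of 50 parallel pairs;
# the margin is set to the embedding dimension, as stated in the text.
R_src, R_tgt = np.random.randn(50, 128), np.random.randn(50, 128)
print(batch_loss(R_src, R_tgt, margin=128.0))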
We evaluate our model on the RCV1/RCV2 cross-lingual document classification task where for each language we use 1K documents for training and 5K documents for testing. A. Multilingual Skip-gram: We use stochastic gradient descent with a learning rate of 0.01 and exponential decay of 0.98 after 10k steps (1 step = 256 word pairs), negative sampling with 128 samples, skip-gram context window of size 5. Reducing the learning rate of the skip-gram model helps in the multi-task scenario by allowing skip-gram objective to converge in parallel with the sentence similarity objective. We do this modification to make sure that shared word embeddings receive enough supervision from the multilingual sentence similarity objective. At every step, we sample equal number of monolingual and cross-lingual word pairs to make a mini-batch. We keep the batch size to be 50 sentence pairs. LSTM hidden dimension P is one of 100, 128, 512 depending on the model. We use dropout at the embedding layer with drop probability 0.3. Hinge-loss margin m is always kept to be sentence embedding size. We sample 5 negative samples for the noise-contrastive loss. The model is trained using the Adam optimizer with a learning rate of 0.001 and an exponential decay of 0.98 after 10k steps (1 step = 50 sentence pairs = 1 mini-batch).The system is optimized by alternating between mini-batches of these two tasks. All of our models project words from all input languages to a shared vector space. We train four types of models.• Sent-Avg: This model simply averages word embeddings to get a sentence embedding. It is similar to BiCVM-add model from Hermann & Blunsom FORMULA0, but we also add sentence pairs in the opposite direction, so that the model performs well in both directions.• Sent-LSTM: Represents words in context using the bidirectional LSTM layer, which are then averaged to get sentence embeddings.• JMT-Sent-Avg: Multilingual skip-gram jointly trained with Sent-add. In this setting, the model is optimized by alternating between mini-batches for the two models. JMT refers to Joint Multi-task.• JMT-Sent-LSTM: Multilingual skip-gram jointly trained with Sent-LSTM. We report on the Reuters RCV1/RCV2 cross-lingual document classification (CLDC) task BID19 using the same experimental setup. We learn the distributed representations on the Europarl corpus. We construct document embeddings by averaging sentence embeddings. Sentence representations are fixed vectors determined by a sentence encoder trained on parallel and monolingual Europarl corpora. For a language pair L1-L2, a document classifier (single layer average perceptron) is trained using the document representations from L1, and tested on documents from L2. Due to lack of supervision on the test side, CLDC setup relies on documents with similar meaning having similar representations. Table 1, shows the for our systems and compares it to some state-of-the-art approaches. When the sentence embedding dimension is 128, we outperform most of the systems compared. When the sentence embedding dimension is increased to 512, our are close to the best obtained for this task. Our models with an LSTM layer (Sent-LSTM and JMT-Sent-LSTM) are significantly better than those without one. There are also significant gains when the document embeddings are obtained from sentence encoders trained in the multi-task setting. 
The ablation experiments where we use just the parallel corpora suggest that these gains are mostly due to the additional monolingual data that we can exploit in the multi-task setting. Table 2: We compare our JMT-Sent-LSTM model trained on three languages to one trained on two languages. Table 2 compares models trained on data from four languages (en, es, de, fr) to models trained on data from two languages. The results suggest that models trained on multiple languages perform better when English is the source language used to train the CLDC system. The multilingual systems also show promising results for the es-de pair, for which no direct parallel data was available. Validation loss for the JMT-Sent-add model shows more stability and reaches a lower value than that of the Sent-add model in the low-resource scenario. At every training step, the validation set is created by randomly choosing 50 sentences from the development set. The main motivation behind the multi-task architecture is to create high-quality multilingual embeddings for languages which have only a limited amount of parallel data available. Therefore, we compare the effectiveness of our joint multi-task models in the low-resource scenario, where for each language pair we use 100k parallel sentences and 1 million monolingual sentences for training the sentence encoder. We evaluate on the RCV1/RCV2 document classification task. As before, we keep the first 90% (90k parallel sentences) of the parallel data for training and 10% (10k parallel sentences) for development purposes. FIG2 shows the loss curves for the Sent-add and JMT-Sent-add models. On the validation set, the JMT-Sent-add model gives a smoother and lower loss curve. Our results suggest that using a parametric composition model to derive sentence embeddings from word embeddings and jointly learning multilingual word and sentence embeddings in a multi-task setting are promising directions. This paper is a snapshot of our current efforts, and we believe that our sentence embedding models can be improved further with straightforward modifications to the model architecture, for instance by using stacked LSTMs; we plan to explore these directions in future work. In our exploration of architectures for the sentence encoding model, we also tried using a self-attention layer, following the intuition that not all words are equally important for the meaning of a sentence. However, we later realized that the cross-lingual sentence similarity objective is at odds with what we want the attention layer to learn. When we used self-attention instead of simple averaging of word embeddings, the attention layer learned to give its entire weight to a single word in both the source and the target language, since that makes the cross-lingual sentence similarity objective easier to optimize. Even though they are related tasks, the multilingual skip-gram and cross-lingual sentence similarity models are always in conflict over how to modify the shared word embeddings according to their respective objectives. This conflict can, to some extent, be eased by a careful choice of hyper-parameters. This dependency on hyper-parameters suggests that better hyper-parameters can lead to better results in the multi-task learning scenario. We have not yet tried a full sweep of the hyper-parameters of our current models, but we believe there may be easy gains to be had from such a sweep, especially in the multi-task learning scenario.
| We jointly train a multilingual skip-gram model and a cross-lingual sentence similarity model to learn high quality multilingual text embeddings that perform well in the low resource scenario. | 1,017 | scitldr |
Generative Adversarial Networks (GANs) are a very powerful framework for generative modeling. However, they are often hard to train, and learning of GANs often becomes unstable. Wasserstein GAN (WGAN) is a promising framework to deal with the instability problem as it has a good convergence property. One drawback of the WGAN is that it evaluates the Wasserstein distance in the dual domain, which requires some approximation, so that it may fail to optimize the true Wasserstein distance. In this paper, we propose evaluating the exact empirical optimal transport cost efficiently in the primal domain and performing gradient descent with respect to its derivative to train the generator network. Experiments on the MNIST dataset show that our method is significantly stable to converge, and achieves the lowest Wasserstein distance among the WGAN variants at the cost of some sharpness of generated images. Experiments on the 8-Gaussian toy dataset show that better gradients for the generator are obtained in our method. In addition, the proposed method enables more flexible generative modeling than WGAN. Generative Adversarial Networks (GANs) BID2 are a powerful framework of generative modeling which is formulated as a minimax game between two networks: A generator network generates fake-data from some noise source and a discriminator network discriminates between fake-data and real-data. GANs can generate much more realistic images than other generative models like variational autoencoder BID10 or autoregressive models BID14, and have been widely used in high-resolution image generation BID8, image inpainting BID18, image-to-image translation BID7, to mention a few. However, GANs are often hard to train, and various ways to stabilize training have been proposed by many recent works. Nonetheless, consistently stable training of GANs remains an open problem. GANs employ the Jensen-Shannon (JS) divergence to measure the distance between the distributions of real-data and fake-data BID2. provided an analysis of various distances and divergence measures between two probability distributions in view of their use as loss functions of GANs, and proposed Wasserstein GAN (WGAN) which has better theoretical properties than the original GANs. WGAN requires that the discriminator (called the critic in) must lie within the space of 1-Lipschitz functions to evaluate the Wasserstein distance via the Kantorovich-Rubinstein dual formulation. further proposed implementing the critic with a deep neural network and applied weight clipping in order to ensure that the critic satisfies the Lipschitz condition. However, weight clipping limits the critic's function space and can cause gradients in the critic to explode or vanish if the clipping parameters are not carefully chosen BID3. WGAN-GP BID3 and Spectral Normalization (SN) BID12 apply regularization and normalization, respectively, on the critic trying to make the critic 1-Lipschitz, but they fail to optimize the true Wasserstein distance. In the latest work, BID11 proposed a new WGAN variant to evaluate the exact empirical Wasserstein distance. They evaluate the empirical Wasserstein distance between the empirical distributions of real-data and fake-data in the discrete case of the Kantorovich-Rubinstein dual for-mulation, which can be solved efficiently because the dual problem becomes a finite-dimensional linear-programming problem. The generator network is trained using the critic network learnt to approximate the solution of the dual problem. 
However, the problem of approximation error by the critic network remains. In this paper, we propose a new generative model without a critic, which learns by directly evaluating the gradient of the exact empirical optimal transport cost in the primal domain. The proposed method corresponds to stochastic gradient descent on the optimal transport cost. The authors of WGAN argued that the JS divergence is potentially not continuous with respect to the generator's parameters, leading to difficulty in training GANs. They proposed instead using the Wasserstein-1 distance W_1(q, p), which is defined as the minimum cost of transporting mass in order to transform the distribution q into the distribution p. Under mild assumptions, W_1(q, p) is continuous everywhere and differentiable almost everywhere. The WGAN objective function is constructed using the Kantorovich-Rubinstein duality (Villani, Chapter 5) as

W_1(P_r, P_g) = sup_{‖D‖_L ≤ 1} E_{x∼P_r}[D(x)] − E_{y∼P_g}[D(y)],

to obtain

min_G max_{D ∈ D_1} E_{x∼P_r}[D(x)] − E_{z∼p(z)}[D(G(z))],

where D_1 is the set of all 1-Lipschitz functions, P_r is the real-data distribution, and P_g is the generator distribution implicitly defined by y = G(z), z ∼ p(z). Minimizing this objective function with respect to G, with D optimal, is equivalent to minimizing W_1(P_r, P_g). They further proposed implementing the critic D as a deep neural network with weight clipping. Weight clipping keeps the weight parameters of the network in a compact space, thereby ensuring the desired Lipschitz condition. For a fixed network architecture, however, weight clipping may significantly limit the function space to a quite small fraction of all 1-Lipschitz functions representable by networks with the prescribed architecture. BID3 proposed adding a gradient penalty (GP) to the WGAN objective function in place of the 1-Lipschitz condition in the Kantorovich-Rubinstein dual formulation, in order to explicitly encourage the critic to have gradients with magnitude equal to 1. Since enforcing the unit-norm gradient constraint everywhere is intractable, they proposed enforcing it only along straight line segments, each connecting a real-data point and a fake-data point. The resulting learning scheme, called WGAN-GP, was shown to perform well experimentally. It was pointed out, however BID12, that WGAN-GP is susceptible to destabilization due to gradual changes of the support of the generator distribution as learning progresses. Furthermore, the critic can easily violate the Lipschitz condition in practice, so there is no guarantee that WGAN-GP optimizes the true Wasserstein distance. SN, proposed by BID12, is based on the observation that the Lipschitz norm of a critic represented by a multilayer neural network is bounded from above by the product, across all layers, of the Lipschitz norms of the activation functions and the spectral norms of the weight matrices; it normalizes each weight matrix by its spectral norm to ensure that the resulting critic satisfies the desired Lipschitz condition. It is well known that, for any m × n matrix W = (w_{ij}), the max norm ‖W‖_max = max_{i,j} |w_{ij}| and the spectral norm σ(W) satisfy the inequality

‖W‖_max ≤ σ(W) ≤ √(mn) ‖W‖_max.

The proposed method in this paper is based on the fact that the optimal transport cost between two probability distributions can be evaluated efficiently when the distributions are uniform over finite sets of the same cardinality. Our proposal is to evaluate empirical optimal transport costs on the basis of equal-size sample datasets of real- and fake-data points.
The optimal transport cost between the real-data distribution P_r and the generator distribution P_g is defined as

C(P_r, P_g) = inf_{π ∈ Π(P_r, P_g)} E_{(x, y)∼π}[c(x, y)],

where c(x, y) is the cost of transporting one unit of mass from x to y, assumed differentiable with respect to its arguments almost everywhere, and where Π(P_r, P_g) denotes the set of all couplings between P_r and P_g, that is, all joint probability distributions that have marginals P_r and P_g. Let D = {x_j | x_j ∼ P_r(x)} be a dataset consisting of independent and identically distributed (iid) real-data points, and let F = {y_i | y_i ∼ P_g(y)} be a dataset consisting of iid fake-data points sampled from the generator. Let P_D and P_F be the empirical distributions defined by the datasets D and F, respectively. We further assume in the following that |D| = |F| = N holds. The empirical optimal transport cost Ĉ(D, F) = C(P_D, P_F) between the two datasets D and F is formulated as a linear-programming problem:

Ĉ(D, F) = min_M (1/N) Σ_{i,j} M_{i,j} c(x_j, y_i)
subject to Σ_{i=1}^{N} M_{i,j} = 1 for all j,
           Σ_{j=1}^{N} M_{i,j} = 1 for all i,
           M_{i,j} ≥ 0 for all i, j.

It is known BID15 that this linear-programming problem admits solutions which are permutation matrices. One can then replace the constraints M_{i,j} ≥ 0 with M_{i,j} ∈ {0, 1} without affecting the optimality. The resulting optimization problem is what is called the linear sum assignment problem, which can be solved more efficiently than the original linear-programming problem. To the best of the authors' knowledge, the most efficient algorithm to date for solving a linear sum assignment problem has time complexity O(N^{2.5} log(NC)), where C = max_{i,j} c(x_j, y_i) when one scales the costs {c(x_j, y_i) | x_j ∈ D, y_i ∈ F} up to integers (see Chapter 4 of a standard monograph on assignment problems). This is the problem of finding the optimal transport plan, where M_{i,j} = 1 corresponds to transporting fake-data point y_i ∈ F to real-data point x_j ∈ D, and where the objective is to minimize the average transport cost over the matched pairs. FIG0 shows a two-dimensional example of this problem and its solution for N = 8: circles • represent real-data points in D and triangles represent fake-data points in F; arrows between circles • and filled triangles show the optimal transport plan M*, here an identity matrix, obtained with c(x, y) = ‖x − y‖_2; arrows between open △ and filled triangles show small perturbations of F, which do not change M*.

One requires evaluations not only of the optimal transport cost C(P_r, P_g) but also of its derivative in order to train the generator with backpropagation. Let θ denote the parameters of the generator, and let ∂_θ C denote the derivative of the optimal transport cost C with respect to θ. Conditional on z, the generator output G(z) is a function of θ. Hence, in order to estimate ∂_θ C, in our framework one has to evaluate ∂_θ Ĉ. In general, it is difficult to differentiate Ĉ with respect to a generator output y_i, as the optimal transport plan M* can be highly dependent on y_i. Under the assumption |D| = |F| = N, which we adopt here, the feasible set for M is the set of all permutation matrices, which is a finite set. It then follows that, as a generic property, the optimal transport plan M* is unchanged under small enough perturbations of F (see FIG0). We take advantage of this fact and regard M* as independent of y_i.
Now that differentiation of becomes tractable, we use as the loss function of the generator and update the generator with the direct gradient of the empirical optimal transport cost, as DISPLAYFORM5 Although the framework described so far is applicable to any optimal transport cost, several desirable properties can be stated if one specializes in the Wasserstein distance. Assume, for a given p ≥ 1, that the real-data distribution P r and the generator distribution P g have finite moments of order p. The Wasserstein-p distance between P r and P g is defined in terms of the optimal transport cost with c(x, y) = x − y p as DISPLAYFORM6 Due to the law of large numbers, the empirical distributions P D and P F converge weakly to P r and P g, respectively, as N → ∞. It is also known (, Theorem 6.9) that the Wasserstein-p distance W p metrizes the space of probability measures with finite moments of order p. Consequently, the empirical Wasserstein distanceŴ p (D, F) is a consistent estimator of the true Wasserstein distance W p (P r, P g). Furthermore, with the upper bound of the error of the estimator DISPLAYFORM7 which is derived on the basis of the triangle inequality, as well as with the upper bounds available for expectations of W p (P D, P r) and W p (P F, P g) under mild conditions BID17, one can see thatŴ p (D, F) is an asymptotically unbiased estimator of W p (P r, P g).Note that our method can directly evaluate the empirical Wasserstein distance without recourse to the Kantorovich-Rubinstein dual. Hence, our method does not use a critic and is therefore no longer a GAN. It is also applicable to any optimal transport cost. We summarize the proposed method in Algorithm 1. We first show experimental on the MNIST dataset of handwritten digits. In this experiment, we resized the images to resolution 64 × 64 so that we can use the convolutional neural networks Sample {x i} i∈{1,...,N} ∼ X real from real-data. Sample {z j} j∈{1,...,N} ∼ p(z) from random noises. Let y j = G θ (z j), ∀j ∈ {1, . . ., N}. Solve-7 to obtain M *.9: DISPLAYFORM0 θ ← Adam(g θ, θ, α, β 1, β 2) 11: end while described in Appendix A.1 as the critic and the generator. In all methods, the batch size was set to 64 and the prior noise distribution was the 100-dimensional standard normal distribution. The maximum number of iterations in training of the generator was set to 30,000. The Wasserstein-1 distance with c(x, y) = x − y 1 was used. More detailed settings are described in Appendix B.1.Although several performance metrics have been proposed and are commonly used to evaluate variants of WGAN, we have decided to use the empirical Wasserstein distance (EWD) to compare performance of all methods. It is because all the methods adopt objective functions that are based on the Wasserstein distance, and because EWD is a consistent and asymptotically unbiased estimator of the Wasserstein distance and can efficiently be evaluated, as discussed in Section 4. TAB1 shows EWD evaluated with 256 samples and computation time per generator update for each method. For reference, performance comparison with the Fréchet Inception Distance BID4 and the Inception Score BID13, which are commonly used as performance measures to evaluate GANs using feature space embedding with an inception model, is shown in Appendix C. The proposed method achieved a remarkably small EWD and computational cost compared with the variants of WGAN. 
Our method required the lowest computational cost in this experimental setting mainly because it does not use the critic. Although we think that the batch size used in the experiment of the proposed method was appropriate since the proposed method achieved lower EWD, if a larger batch size would be required in training, it will take much longer time to solve the linear sum assignment problem-.We further investigated behaviors of the methods compared in more detail, on the basis of EWD. WGAN-SN failed to learn. The loss function of the critic showed divergent movement toward −∞, and the behaviors of EWD in different trials were different even though the behaviors of the critic loss were the same (Figure 2 (a) and (b)). WGAN training never failed in 5 trials, and EWD improved stably without sudden deterioration. Although training with WGAN-GP proceeded favorably in initial stages, at certain points the gradient penalty term started to increase, causing EWD to deteriorate (Figure 2 (c) ). This happened in all 5 trials. Since gradient penalty is a weaker restriction than weight clipping, the critic may be more likely to cause extreme behaviors. We examined both WGAN-TS with and without weight scaling. Whereas WGAN-TS with weight scaling did not fail in training but achieved higher EWD than WGAN, WGAN-TS without weight scaling achieved lower EWD than WGAN at the cost of the stability of training (Figure 3). The proposed method was trained stably and never failed in 5 trials. As mentioned in Section 3, the critic in WGAN-TS simply regresses the optimizer of the empirical version of the Kantorovich-Rubinstein dual. Thus, there is no guarantee that the critic will satisfy the 1-Lipschitz condition. BID11 pointed out that it is indeed practically problematic with WGAN-TS, and proposed weight scaling to ensure that the critic satisfies the desired condition. We have empirically found, however, that weight scaling exhibited the following trade-off (Figure 3). Without weight scaling, training of WGAN-TS suddenly deteriorated in some trials because the critic came to not satisfy the Lipschitz condition. With weight scaling, on the other hand, the regression error of the critic with respect to the solution increased and the EWD became worse. The proposed method directly solves the empirical version of the optimal transport problem in the primal domain, so that it is free from such trade-off. Figure 4 shows fake-data images generated by the generators trained with WGAN, WGAN-GP, WGAN-TS, and the proposed method. Although one can identify the digits for the generated images with the proposed method most easily, these images are less sharp. Among the generated images with the other methods, one can notice several images which have almost the same appearance as real-data images, whereas in the proposed method, such fake-data images are not seen and images that seem averaged real-data images belonging to the same class often appear. This might imply that merely minimizing the Wasserstein distance between the real-data distribution and the generator distribution in the raw-image space may not necessarily produce realistic images. We next observed how the generator distribution is updated in order to compare the proposed method with variants of WGAN in terms of the gradients provided. Figure 5 shows typical behavior of the generator distribution trained with the proposed method on the 8-Gaussian toy dataset. The 8-Gaussian toy dataset and experimental settings are described in Appendix B.2. 
One can observe that, as training progresses, the generator distribution comes closer to the real-data distribution. FIG4 shows comparison of the behaviors of the proposed method, WGAN-GP, and WGAN-TS. We excluded WGAN and WGAN-SN from this comparison: WGAN tended to yield generator distributions that concentrated around a single Gaussian component, and hence training did not progress well. WGAN-SN could not correctly evaluate the Wasserstein distance as in the experiment on the MNIST dataset. One can observe in FIG4 that directions of sample updates are diverse in the proposed method, especially in later stages of training, and that the sample update directions tend to be aligned with the optimal gradient directions. These behaviors will be helpful for the generator to learn the realdata distribution efficiently. In WGAN-GP and WGAN-TS, on the other hand, the sample update directions exhibit less diversity and less alignment with the optimal gradient directions, which would make the generator distribution difficult to spread and would slow training. One would be able to ascribe such behaviors to poor quality of the critic: Those behaviors would arise when the generator learns on the basis of unreliable gradient information provided by the critic without learning sufficiently to accurately evaluate the Wasserstein distance. If one would increase the number n c of critic iterations per generator iteration in order to train the critic better, the total computational cost of training would increase. In fact, n c = 5 is recommended in practice and has commonly been used in WGAN and its variants because the improvement in learning of the critic is thought to be small relative to increase in computational cost. In reality, however, 5 iterations would not be sufficient for the critic to learn, and this might be a principal reason for the critic to provide poor gradient information to the generator in the variants of WGAN. We have proposed a new generative model that learns by directly minimizing exact empirical Wasserstein distance between the real-data distribution and the generator distribution. Since the proposed method does not suffer from the constraints on the transport cost and the 1-Lipschitz constraint imposed on WGAN by solving the optimal transport problem in the primal domain instead of the dual domain, one can construct more flexible generative modeling. The proposed method provides the generator with better gradient information to minimize the Wasserstein distance (Section 5.2) and achieved smaller empirical Wasserstein distance with lower computational cost (Section 5.1) than any other compared variants of WGAN. In the future work, we would like to investigate the behavior of the proposed method when transport cost is defined in the feature space embedded by an appropriate inception model. A NETWORK ARCHITECTURES A.1 CONVOLUTIONAL NEURAL NETWORKS We show in TAB2 the network architecture used in the experiment on the MNIST dataset in Section 5.1. The generator network receives a 100-dimensional noise vector generated from the standard normal distribution as an input. The noise vector is passed through the fully-connected layer and reshaped to 4 × 4 feature maps. 
Then they are passed through four transposed convolution layers with 5 × 5 kernels, stride 2 and no biases (since performance was empirically almost the same with or without biases, we took the simpler option of not considering biases), where the resolution of feature maps is doubled and the number of them is halved except for the last layer. The critic network is basically the reverse of the generator network. A convolution layer is used instead of a transposed convolution layer in the critic. After the last convolution layer, the feature maps are flattened into a vector and passed through the fully-connected layer. We employed batch normalization BID6 in all intermediate layers in both of the generator and the critic. Rectified linear unit (ReLU) was used as the activation function in all but the last layers. As the activation function in the last layer, the hyperbolic tangent function and the identity function were used for the generator and for the critic, respectively. We show in Table 3 the network architecture used in the experiment on the 8-Gaussian toy dataset in Section 5.2. The generator network architecture receives a 100-dimensional noise vector as in the experiment on the MNIST dataset. The noise vector is passed through the four fully-connected layers with biases and mapped to a two-dimensional space. The critic network is likewise the reverse of the generator network. The MNIST dataset of handwritten digits used in the experiment in Section 5.1 contains 60,000 two-dimensional images of handwritten digits with resolution 28 × 28.We used default parameter settings decided by the proposers of the respective methods. We used RMSProp BID5 with learning rate 5e−5 for the critic and the generator in WGAN. The weight clipping parameter c was set to 0.01. We used Adam BID9 with learning rate 1e−4, β 1 = 0.5, β 2 = 0.999 in the other methods. λ gp in WGAN-GP was set to 10. In the methods with the critic, the number n c of critic iterations per generator iteration was set to 5. The 8-Gaussian toy dataset used in the experiment in Section 5.2 is a two-dimensional synthetic dataset, which contains real-data sampled from the Gaussian mixture distribution with 8 centers equally distant from the origin and unit variance as the real-data distribution. The centers of the 8 Gaussian component distributions are (±10, 0), (0, ±10), and (±10/ √ 2, ±10/ √ 2). 30, 000 samples were generated in advance before training and were used as the real-data samples. In all methods, the batch size was set to 64 and the maximum number of iterations in training the generator was set to 1,000. WGAN and WGAN-SN could not learn well with this dataset, even though we considered several parameter sets. We used Adam with learning rate 1e−3, β 1 = 0.5, β 2 = 0.999 for WGAN-GP, WGAN-TS and the proposed method. λ gp in WGAN-GP was set to 10. In the methods with the critic, the number n c of critic iterations was set to 5. All the numerical experiments in this paper were executed on a computer with an Intel Core i7-6850K CPU (3.60 GHz, 6 cores) and 32 GB RAM, and with four GeForce GTX 1080 graphics cards installed. Linear sum assignment problems were solved using the Hungarian algorithm, which has time complexity of O(N 3). Codes used in the experiments were written in tensorflow 1.10.1 on python 3.6.0, with eager execution enabled. We show the of evaluation of the experimented methods with FID and IS in TAB3. Both FID and IS are commonly used to evaluate GANs. 
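Both metrics can be computed from inception-model outputs with a few lines of NumPy/SciPy; the reference sketch below assumes the feature vectors and class posteriors have already been extracted, the function names are ours, and the precise definitions are recalled next.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """FID between two sets of inception features (one row per sample)."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2).real            # matrix square root of Sigma1 Sigma2
    return np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean)

def inception_score(p_yx, eps=1e-12):
    """IS from the class posteriors p(y|x_i) of the inception model (rows sum to 1)."""
    p_y = p_yx.mean(0, keepdims=True)        # marginal label distribution p(y)
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(1)
    return float(np.exp(kl.mean()))
```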
FID measures the distance between the set of real-data points and the set of fake-data points; the smaller the distance, the better the fake-data points are judged to be. Assuming that the feature vector obtained from a fake or real data point through the inception model follows a multivariate Gaussian distribution, FID is defined as FID = ‖µ1 − µ2‖² + Tr(Σ1 + Σ2 − 2(Σ1 Σ2)^{1/2}), where (µi, Σi) are the mean vector and the covariance matrix for dataset i, evaluated in the feature space of the inception model. It is nothing but the square of the Wasserstein-2 distance between two multivariate Gaussian distributions with parameters (µ1, Σ1) and (µ2, Σ2), respectively. IS is a metric that evaluates only the set of fake-data points. Let xi be a data point, y be the label of xi in the classification task for which the inception model was trained, and p(y|xi) be the probability of label y obtained by feeding xi to the inception model. Letting X be the set of all data points used for calculating the score, the marginal probability of label y is p(y) = (1/|X|) Σ_{xi∈X} p(y|xi). IS is defined as IS = exp( (1/|X|) Σ_{xi∈X} KL( p(y|xi) ‖ p(y) ) ), where KL is the Kullback-Leibler divergence. IS is designed to be high when the data points are easy to identify by the inception model and the variety of labels assigned to them is large. In WGAN-GP, WGAN-SN and WGAN-TS*, we observed that training suddenly deteriorated in some trials. We thus used early stopping on the basis of EWD, and the results of these methods shown in TAB3 were obtained with early stopping. The proposed method obtained the worst FID and the best IS among all the methods compared. The fake data generated by the proposed method are indeed less sharp and do not closely resemble real-data points, but the generated digits are easy to identify and show good diversity. If one wishes to obtain a better (lower) FID with the proposed method, the transport cost should be defined in the feature space corresponding to FID.
| We have proposed a flexible generative model that learns stably by directly minimizing exact empirical Wasserstein distance. | 1,018 | scitldr |
Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method’s relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macrostructure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices, that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. The code used is available at https://github.com/antoyang/NAS-Benchmark. As the deep learning revolution helped us move away from hand crafted features and reach new heights , so does Neural Architecture Search (NAS) hold the promise of freeing us from hand-crafted architectures, which requires tedious and expensive tuning for each new task or dataset. Identifying the optimal architecture is indeed a key pillar of any Automated Machine Learning (AutoML) pipeline. Research in the last two years has proceeded at a rapid pace and many search strategies have been proposed, from Reinforcement Learning , to Evolutionary Algorithms , to Gradient-based methods ). Still, it remains unclear which approach and search algorithm is preferable. Typically, methods have been evaluated on accuracy alone, even though accuracy is influenced by many other factors besides the search algorithm. Comparison between published search algorithms for NAS is therefore either very difficult (complex training protocols with no code available) or simply impossible (different search spaces), as previously pointed out (; ;). NAS methods have been typically decomposed into three components : search space, search strategy and model evaluation strategy. This division is important to keep in mind, as an improvement in any of these elements will lead to a better final performance. But is a method with a more (manually) tuned search space a better AutoML algorithm? If the key idea behind NAS is to find the optimal architecture, without human intervention, why are we devoting so much energy to infuse expert knowledge into the pipeline? 
Furthermore, the lack of ablation studies in most works makes it harder to pinpoint which components are instrumental to the final performance, which can easily lead to Hypothesizing After the Results are Known (HARKing;). Paradoxically, the huge effort invested in finding better search spaces and training protocols, has led to a situation in which any randomly sampled architecture performs almost as well as those obtained by the search strategies. Our findings suggest that most of the gains in accuracy in recent contributions to NAS have come from manual improvements in the training protocol, not in the search algorithms. As a step towards understanding which methods are more effective, we have collected code for 8 reasonably fast (search time of less than 4 days) NAS algorithms, and benchmarked them on 5 well known CV datasets. Using a simple metric-the relative improvement over the average architecture of the search space-we find that most NAS methods perform very similarly and rarely substantially above this baseline. The methods used are DARTS, StacNAS, PDARTS, MANAS, CNAS, NSGANET, ENAS and NAO. The datasets used are CIFAR10, CIFAR100, SPORT8, MIT67 and FLOWERS102. Through a number of additional experiments on the widely used DARTS search space, we will show that: (a) how you train your model has a much bigger impact than the actual architecture chosen; (b) different architectures from the same search space perform very similarly, so much so that (c) hyperparameters, like the number of cells, or the seed itself have a very significant effect on the ranking; and (d) the specific operations themselves have less impact on the final accuracy than the hand-designed macro-structure of the network. Notably, we find that the 200+ architectures sampled from this search space (available from the link in the abstract) are all within a range of one percentage point (top-1 accuracy) after a standard full training on CIFAR10. Finally, we include some observations on how to foster reproducibility and a discussion on how to potentially avoid some of the encountered pitfalls. As mentioned, NAS methods have the potential to truly revolutionize the field, but to do so it is crucial that future research avoids common mistakes. Some of these concerns have been recently raised by the community. For example, highlight that most NAS methods a) fail to compare against an adequate baseline, such as a properly implemented random search strategy, b) are overly complex, with no ablation to properly assign credit to the important components, and c) fail to provide all details needed for successfully reproducing their . In our paper we go one step further and argue that the relative improvement over the average (randomly sampled) architecture is an useful tool to quantify the effectiveness of a proposed solution and compare it with competing methods. To partly answer their second point, and understand how much the final accuracy depends on the specific architecture, we implement an in-depth study of the widely employed DARTS search space and perform an ablation on the commonly used training techniques (e.g. Cutout, DropPath, AutoAugment). In addition, also took the important step of systematically using fair baselines, and suggest random search with early stopping, averaged over multiple seeds, as an extremely competitive baseline. They find that the search spaces of three methods investigated (DARTS, ENAS, NAO) have been expertly engineered to the extent that any randomly selected architecture performs very well. 
In contrast, we show that even random sampling (without search) provides an incredibly competitive baseline. Our relative improvement metric allows us to isolate the contribution of the search strategy from the effects of the search space and training pipeline. Thus, we further confirm the authors' claim, showing that indeed the average architecture performs extremely well and that how you train a model has more impact than any specific architecture. In this section we present a systematic evaluation of 8 methods on 5 datasets using a strategy that is designed to reveal the quality of each method's search strategy, removing the effect of the manuallyengineered training protocol and search space. The goal is to find general trends and highlight common features rather than just pin-pointing the most accurate algorithm. Understanding why methods are effective is not an easy task: most introduce variations to previous search spaces, search strategies, and training protocols-with ablations disentangling the contribution of each component often incomplete or missing. In other words, how can we be sure that a new state-of-the-art method is not so simply due to a better engineered search space or training protocol? To address this issue we compare a set of 8 methods with randomly sampled architectures from their respective search spaces, and trained with the same protocol as the searched architectures. The ultimate goal behind NAS should be to return the optimal model for any dataset given, at least within the limits of a certain task, and we feel that the current practices of searching almost exclusively on CIFAR10 go against this principle. Indeed, to avoid the very concrete risk of overfitting to this set of data, NAS methods should be tested on a variety of tasks. For this reason we run experiments on 5 different datasets. Criteria for dataset selection. We selected datasets to cover a variety of subtasks within image classification. In addition to the standard CIFAR10 we select CIFAR100 for a more challenging object classification problem ; SPORT8 for action classification ; MIT67 for scene classification ; and FLOWERS102 for finegrained object classification . More details are given in the Appendix. Criteria for method selection. We selected methods which (a) have open-source code, or provided it upon request, and (b) have a reasonable running time, specifically a search time under 4 GPU-days on CIFAR10. The selected methods are: DARTS, StacNAS, PDARTS , MANAS , CNAS , NSGANET , ENAS , and NAO . With the exception of NAO and NSGANET, all methods are DARTS variants and use weight sharing. Evaluation protocol. NAS algorithms usually consist of two phases: (i) search, producing the best architecture according to the search algorithm used; (ii) augmentation, consisting in training from scratch the best model found in the search phase. We evaluate methods as follows: 1. Sample 8 architectures from the search space, uniformly at random, and use the method's code to augment these architectures (same augment seed for all); 2. Use the method's code to search for 8 architectures and augment them (different search seed, same augment seed); 3. 
Report the mean and standard deviation of the top-1 test accuracy, obtained at the end of the augmentation, for both the randomly sampled and the searched architectures; Since both learned and randomly sampled architectures share the same search space and training protocol, calculating a relative improvement over this random baseline as RI = 100 × (Acc m − Acc r)/Acc r can offer insights into the quality of the search strategy alone. Acc m and Acc r represent the top-1 accuracy of the search method and random sampling strategies, respectively. A good, general-purpose NAS method is expected to yield RI > 0 consistently over different searches and across different subtasks. We emphasize that the comparison is not against random search, but rather against random sampling, i.e., the average architecture of the search space. For example, in the DARTS search space, for each edge in the graph that defines a cell we select one out of eight possible operations (e.g. pooling or convolutions) with uniform probability 1/8. Hyperparameters are optimized on CIFAR10, according to the values reported by the corresponding authors. Since most methods do not include their optimization as part of the search routine, we assumed them to be robust and generalizable to other tasks. As such, aside from scaling down the architecture depending on dataset size, experiments on other datasets use the same hyperparameters. Other training details and references are given in the Appendix. Figure 1 shows the evaluation on the 5 datasets, from which we draw two main conclusions. First, the improvements over random sampling tend to be small. In some cases the average performance of a method is even below the average randomly sampled architecture, which suggests that the search methods are not converging to desirable architectures. Second, the small range of accuracies obtained hints at narrow search spaces, where even the worst architectures perform reasonably well. See Section 5 for more experiments corroborating this finding. We also observe that, on CIFAR10, the top half of best-performing methods (PDARTS, MANAS, DARTS, StacNAS) all perform similarly and positively in relation to their respective search spaces, but more variance is seen on the other datasets. This could be explained by the fact that most methods' hyperparameters have been optimized on CIFAR10 and might not generalize as well to different datasets. As a matter of fact, we found that all NAS methods neglect to report the time needed to optimize hyperparameters. In addition, Table 1 shows the relative improvement metric RI (see intro to Section 3) for each method and dataset. The computational cost of searching for architectures is a limiting factor in their applicability and, therefore, an important variable in the evaluation of NAS algorithms. Figure 2 shows the performance as well as the computational cost of the search phase on CIFAR10. In this section we attempt to shed some light on the surprising results of the previous section. We noticed that there were much larger differences between the random baselines of different methods than the actual increase in performance of each approach. We hypothesized that how a network is trained (the training protocol) has a larger impact on the final accuracy than which architecture is trained, for each search space. To test this, we performed a sensitivity analysis using the most common performance-boosting training protocols. We decided to evaluate architectures from the commonly used DARTS search space on the CIFAR10 dataset.
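For reference, the random-sampling baseline and the RI metric described earlier in this section can be sketched as follows; the operation list is the standard 8-operation set of the DARTS search space, and the sampling of the cell connectivity is omitted for brevity.

```python
import numpy as np

OPS = ["none", "max_pool_3x3", "avg_pool_3x3", "skip_connect",
       "sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "dil_conv_5x5"]  # 8 choices per edge

def sample_random_cell(num_edges, rng=np.random):
    # Each edge of the cell graph gets one of the 8 operations with probability 1/8.
    return [rng.choice(OPS) for _ in range(num_edges)]

def relative_improvement(acc_method, acc_random):
    # RI = 100 * (Acc_m - Acc_r) / Acc_r, computed on mean top-1 accuracies.
    return 100.0 * (acc_method - acc_random) / acc_random
```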
We use the following process: 1) sample 8 random architectures, 2) train them with different training protocols (details below) and 3) report the mean, standard deviation and maximum of the top-1 test accuracy at the end of the training process. Training protocols. The simplest training protocol, which we will call Base, is similar to the one used in DARTS but with all tricks disabled: the model is simply trained for 600 epochs. On the other extreme, our full protocol uses several tricks that have been employed in recent works (an auxiliary tower, DropPath, Cutout and AutoAugment), together with extended training for 1500 epochs (1500E) and an increased number of channels (50C). In between these two extremes, by selectively enabling and disabling each component, we evaluated a further 8 intermediate training protocols. When active, the DropPath probability is 0.2, the Cutout length is 16, the auxiliary tower weight is 0.4, and AutoAugment combined with Cutout is applied after the standard data pre-processing techniques previously described. As shown in Figure 3, a large difference of over 3 percentage points (p.p.) exists between the simplest and the most advanced training protocols. Indeed, this is much higher than any improvement over random sampling observed in the previous section: for example, on CIFAR10, the best improvement observed was 0.69 p.p. In other words, the training protocol is often far more important than the architecture used. Note that the best accuracy of the 8 random architectures trained with the best protocol is 98.15%, which is only 0.25 p.p. below the state of the art. To summarize, it seems that most recent state-of-the-art results, though impressive, cannot always be attributed to superior search strategies. Rather, they are often the result of expert knowledge applied to the evaluation protocol. In Figure 10 (Appendix A.3.1) we show similar results when training a ResNet-50 with the same protocols.

Figure 5: Correlation (Kendall tau) between accuracies at different epochs and the final accuracy, using raw accuracies and accuracies smoothed over a window w = 20.

To better understand the results from the previous section, we sampled a considerable number of architectures (200+) from the most commonly used search space and fully trained them with the matching training protocol (Cutout+DropPath+Auxiliary Towers). This allows us to get a sense of how much variance exists between the different models (training statistics are made available at the link in the abstract). As we can observe from Figure 4, architectures sampled from this search space all perform similarly, with a mean of 97.03 ± 0.23. The worst architecture we found had an accuracy of 96.18, while the best achieved 97.56. To put this into perspective, many methods using the same training protocol fall within (or very close to) the standard deviation of the average architecture. Furthermore, as we can observe in Figure 7, the number of cells (a human-picked hyperparameter) has a much larger impact on the final accuracy. In Figure 5 we used the training statistics of the 214 models to plot the correlation between test accuracies at different epochs: it grows slowly in an almost linear fashion. We note that using the moving average of the accuracies yields a stronger correlation, which could be useful for methods using early stopping.
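The correlation analysis of Figure 5 can be reproduced from the released training statistics with a short function such as the one below (names are ours); kendalltau comes from SciPy.

```python
import numpy as np
from scipy.stats import kendalltau

def rank_correlation_at_epoch(acc_curves, epoch, window=20):
    """Kendall tau between (smoothed) accuracy at a given epoch and the final accuracy.

    acc_curves: array of shape (num_models, num_epochs) of test accuracies.
    """
    final = acc_curves[:, -1]
    # Moving average over the last `window` epochs up to `epoch` (w = 20 in Figure 5).
    lo = max(0, epoch - window + 1)
    smoothed = acc_curves[:, lo:epoch + 1].mean(axis=1)
    tau, _ = kendalltau(smoothed, final)
    return tau
```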
To test whether the results from the previous section were due to the choice of available operations, we developed an intentionally sub-optimal search space containing 4 plain convolutions (1×1, 3×3, 7×7, 11×11) and 2 max pooling operators (3×3, 5×5), plus the none and skip connect operations. This proposed search space is clearly less parameter-efficient than the commonly used DARTS one (which uses both dilated and separable convolutions), and we expect it to perform worse. We sampled 56 architectures from this new search space and trained them with the DARTS training protocol (Cutout+DropPath+Auxiliary Towers), for a fair comparison with the results from the previous section. Figure 6 shows the resulting histogram, together with the one obtained from the classical DARTS space of operations. The two distributions are only shifted by 0.18 accuracy points. Given the minor difference in performance, the specific operations are not a key ingredient in the success of this search space. Very likely, it is the well engineered cell structure that allows the model to perform as well as it does. A necessary practice for many weight-sharing methods is to restart the training from scratch after the search phase, with a different number of cells. Recent works have warned that this procedure might negatively affect ranking; similarly, the role of the seed has previously been recognized as a fundamental element in reproducibility. To test the impact of the seed, we randomly sampled 32 architectures and trained them with two different seeds (Figure 8). Ranking is heavily influenced, as the Kendall tau correlation between the two sets of trainings is 0.48. On average, the test accuracy changes by 0.13% ± 0.08 (max change is 0.39%), which is substantial considering the small gap between random architectures and NAS methods. To test the depth-gap we trained another 32 architectures with a different number of cells (Figure 9). The correlation between the two depths is not very strong as measured by Kendall tau (0.54), with architectures shifting up and down the rankings by up to 18 positions (out of 32). Methods employing weight sharing (WS) would see an even more pronounced effect, as the architectures normally chosen at 8 cells would have been trained sub-optimally due to the WS itself. These findings point towards two issues. The first is that, since the seed has such a large effect on ranking, the final accuracy reported should be averaged over multiple seeds. The second is that, if the lottery ticket hypothesis holds (so that specific sub-networks are better mainly due to their lucky initialization; Frankle & Carbin, 2018), then together with our findings this could be an additional reason why methods searching on a different number of cells than the final model struggle to significantly improve on the average randomly sampled architecture.
This is not the same thing as performing a random search which is a search strategy in itself; random sampling is simply used to establish how good the average model is. A simple approach to measure the variability of any new given search space could be to randomly sample k architectures and report mean and standard deviation. We hope that future works will attempt to develop more expressive search spaces, capable of producing both good and bad network designs. Restricted search spaces, while guaranteeing good performance and quick , will inevitably be constrained by the bounds of expert knowledge (local optima) and will be incapable of reaching more truly innovative solutions (closer to the global optima). As our findings in section 5.2 suggest, the overall wiring (the macro-structure) is an extremely influential component in the final performance. As such, future research could investigate the optimal wiring at a global level: an interesting work in this direction is Xie et al. (2019a). Multiple datasets: as the true goal of AutoML is to minimize the need for human experts, focusing the research efforts on a single dataset will inevitably lead to algorithmic overfitting and/or methods heavily dependent on hyperparameter tuning. The best solution for this is likely to test NAS algorithms on a battery of datasets, with different characteristics: image sizes, number of samples, class granularity and learning task. Investigating hidden components: as our experiments in Sections 4 and 5.2 show, the DARTS search space is not only effective due to specific operations that are being chosen, but in greater part due to the overall macro-structure and the training protocol used. We suggest that proper ablation studies can lead to better understanding of the contributions of each element of the pipeline. The importance of reproducibility: reproducibility is of extreme relevance in all sciences. To this end, it is very important that authors release not only their best found architecture but also the corresponding seed (if they did not average over multiple ones), as well as the code and the detailed training protocol (including hyperparameters). To this end, NAS-Bench-101 , a dataset mapping architectures to their accuracy, can be extremely useful, as it allows the quality of search strategies to be assessed in isolation from other NAS components (e.g. search space, training protocol) in a quick and reproducible fashion. The code for this paper is open-source (link in the abstract). We also open-source the 200+ trained architectures used in Section 5. Hyperparameter tuning cost: tuning hyperparameters in NAS is an extremely costly component. Therefore, we argue that either (i) hyperparameters are general enough so that they do not require tuning for further tasks, or the cost is included in the search budget. AutoML, and NAS in particular, have the potential to truly democratize the use of machine learning for all, and could bring forth very notable improvements on a variety of tasks. To truly step forward, a principled approach, with a focus on fairness and reproducibility is needed. In this paper we have shown that, for many NAS methods, the search space has been engineered such that all architectures perform similarly well and that their relative ranking can easily shift. We have furthermore showed that the training protocol itself has a higher impact on the final accuracy than the actual network. 
Finally, we have provided some suggestions on how to make future research more robust to these issues. We hope that our findings will help the community focus their efforts towards a more general approach to automated neural architecture design. Only then can we expect to learn from NASgenerated architectures as opposed to the current paradigm where search spaces are heavily influenced by our current (human) expert knowledge. A APPENDIX This section details the datasets and the hyperparameters used for each method on each dataset. Search spaces were naturally left unchanged. Hyperparameters were chosen as close as possible to the original paper and occasionally updated to more recent implementations. The network size was tuned similarly for all methods for SPORT8, MIT67 and FLOWERS102. All experiments were run on NVIDIA Tesla V100 GPUs. During search, models are trained/validated on the training/validation subsets, respectively During the final evaluation, the model is trained on the training+validation subsets and tested on the test subset. Common hyperparameters. All 8 methods share a common number of hyperparameters precised here. When SGD optimizer is used, momentum is.9 while when Adam is used, momentum is β = (0.5, 0.999). Gradient clipping is set at 5. DARTS, StacNAS, PDARTS, MANAS, CNAS common hyperparameters. These methods are inspired by DARTS code-wise and consequently share a common number of hyperparameters, which we precise in table 1. We used an unofficial implementation provided by the authors. The search process consists of 2 stages, of which the details are given in table 1. Additional enhancements are the same as DARTS. PDARTS. We used the official implementation: https://github.com/chenxin061/pdarts. The search process consists of 3 stages, of which general details are given in table 1. At stage 1, 2 and 3 respectively, the number of operations decreases from 8 to 5 to 3, and the dropout probability on skip-connect increases from 0.0 to 0.4 to 0.7 for CIFAR10, SPORT8, MIT67 and FLOWERS102 We present here the datasets used and how they are pre-processed. CIFAR10. The CIFAR10 dataset is a dataset of 10 classes and consists of 50, 000 training images and 10, 000 test images of size 32×32. CIFAR100. The CIFAR100 dataset is a dataset of 100 classes and consists of 50, 000 training images and 10, 000 test images of size 32×32. Each of these datasets is split into a training, validation and testing subsets of size 25, 000, 25, 000 and 10, 000 respectively. For both these datasets, we use standard data pre-processing and augmentation techniques, i.e. subtracting the channel mean and dividing by the channel standard deviation; centrally padding the training images to 40×40 and randomly cropping them back to 32×32; and randomly clipping them horizontally. SPORT8. This is an action recognition dataset containing 8 sport event categories and a total of 1579 images . The tiny size of this dataset stresses the generalization capabilities of any NAS method applied to it. MIT67. This is a dataset of 67 classes representing different indoor scenes and consists of 15, 620 images of different sizes . FLOWERS102. This is a dataset of 102 classes representing different species of flowers and consists of 8, 189 images of different sizes . Each of these datasets is split into a training, validation and testing subsets with proportions 40/40/20 (%). For each one, we use use standard data pre-processing and augmentation techniques, i.e. 
subtracting the channel mean and dividing by the channel standard deviation, cropping the training images to a random size and aspect ratio, resizing them to 224×224, and randomly changing their brightness, contrast, and saturation, while test images are resized to 256×256 and cropped at the center.

Figure 10: ResNet-50 trained with the training protocols of Figure 3. Results are for 8 runs of each training protocol. For ResNet-50, the auxiliary tower was added after layer 2. As DropPath would not have been straightforward to apply, we instead implemented Stochastic Depth, to a similar effect.
| A study of how different components in the NAS pipeline contribute to the final accuracy. Also, a benchmark of 8 methods on 5 datasets. | 1,019 | scitldr |
Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision. Humans are remarkable in their ability to enter an unseen domain and make analogies to the previously seen domain without prior supervision ("This dinosaur looks just like my dog Fluffy"). This ability is important for using previous knowledge in order to obtain strong priors on the new situation, which makes identifying analogies between multiple domains an important problem for Artificial Intelligence. Much of the recent success of AI has been in supervised problems, i.e., when explicit correspondences between the input and output were specified on a training set. Analogy identification is different in that no explicit example analogies are given in advance, as the new domain is unseen. Recently several approaches were proposed for unsupervised mapping between domains. The approaches take as input sets of images from two different domains A and B without explicit correspondences between the images in each set, e.g. Domain A: a set of aerial photos and Domain B: a set of Google-Maps images. The methods learn a mapping function T AB that takes an image in one domain and maps it to its likely appearance in the other domain, e.g. map an aerial photo to a Google-Maps image. This is achieved by utilizing two constraints: (i) Distributional constraint: the distributions of mapped A domain images (T AB (x)) and images of the target domain B must be indistinguishable, and (ii) Cycle constraint: an image mapped to the other domain and back must be unchanged, i.e., T BA (T AB (x)) = x. In this paper the task of analogy identification refers to finding pairs of examples in the two domains that are related by a fixed non-linear transformation. Although the two above constraints have been found effective for training a mapping function that is able to translate between the domains, the translated images are often not of high enough visual fidelity to be able to perform exact matching. We hypothesize that it is caused due to not having exemplar-based constraints but rather constraints on the distributions and the inversion property. In this work we tackle the problem of analogy identification. We find that although current methods are not designed for this task, it is possible to add exemplar-based constraints in order to recover high performance in visual analogy identification. We show that our method is effective also when only some of the sample images in A and B have exact analogies whereas the rest do not have exact analogies in the sample sets. We also show that it is able to find correspondences between sets when no exact correspondences are available at all. 
In the latter case, since the method retrieves rather than maps examples, it naturally yields far better visual quality than the mapping function. Using the domain alignment described above, it is now possible to perform a two step approach for training a domain mapping function, which is more accurate than the provided by previous unsupervised mapping approaches:1. Find the analogies between the A and B domain, using our method.2. Once the domains are aligned, fit a translation function T AB between the domains y mi = T AB (x i) using a fully supervised method. For the supervised network, larger architectures and non-adversarial loss functions can be used. This paper aims to identify analogies between datasets without supervision. Analogy identification as formulated in this paper is highly related to image matching methods. As we perform matching by synthesis across domains, our method is related to unsupervised style-transfer and image-to-image mapping. In this section we give a brief overview of the most closely related works. Image Matching Image matching is a long-standing computer vision task. Many approaches have been proposed for image matching, most notably pixel-and feature-point based matching (e.g. SIFT BID11). Recently supervised deep neural networks have been used for matching between datasets BID18, and generic visual features for matching when no supervision is available (e.g. BID5). As our scenario is unsupervised, generic visual feature matching is of particular relevance. We show in our experiments however that as the domains are very different, standard visual features (multi-layer VGG-16 BID15) are not able to achieve good analogies between the domains. Generative Adversarial Networks GAN technology presents a major breakthrough in image synthesis (and other domains). The success of previous attempts to generate random images in a class of a given set of images, was limited to very specific domains such as texture synthesis. Therefore, it is not surprising that most of the image to image translation work reported below employ GANs in order to produce realistically looking images. GAN methods train a generator network G that synthesizes samples from a target distribution, given noise vectors, by jointly training a second network D. The specific generative architecture we and others employ is based on the architecture of BID12. In image mapping, the created image is based on an input image and not on random noise BID9 BID21 BID10 BID16.Unsupervised Mapping Unsupervised mapping does not employ supervision apart from sets of sample images from the two domains. This was done very recently BID16 BID9 BID21 for image to image translation and slightly earlier for translating between natural languages BID19. The above mapping methods however are focused on generating a mapped version of the sample in the other domain rather than retrieving the best matching sample in the new domain. Supervised Mapping When provided with matching pairs of (input image, output image) the mapping can be trained directly. An example of such method that also uses GANs is, where the discriminator D receives a pair of images where one image is the source image and the other is either the matching target image ("real" pair) or a generated image ("fake" pair); The link between the source and the target image is further strengthened by employing the U-net architecture of BID14. 
We do not use supervision in this work, however by the successful completion of our algorithm, correspondences are generated between the domains, and supervised mapping methods can be used on the inferred matches. Recently, BID2 demonstrated improved mapping , in the supervised settings, when employing the perceptual loss and without the use of GANs. In this section we detail our method for analogy identification. We are given two sets of images in domains A and B respectively. The set of images in domain A are denoted x i where i ∈ I and the set image in domain B are denoted y j where j ∈ J. Let m i denote the index of the B domain image y mi that is analogous to x i. Our goal is to find the matching indexes m i for i ∈ I in order to be able to match every A domain image x i with a B domain image y mi, if such a match exists. We present an iterative approach for finding matches between two domains. Our approach maps images from the source domain to the target domain, and searches for matches in the target domain. A GAN-based distribution approach has recently emerged for mapping images across domains. Let x be an image in domain A and y be an image in domain B. A mapping function T AB is trained to map x to T AB (x) so that it appears as if it came from domain B. More generally, the distribution of T AB (x) is optimized to appear identical to that of y. The distributional alignment is enforced by training a discriminator D to discriminate between samples from p(T AB (x)) and samples from p(y), where we use p(x) to denote the distribution of x and p(T AB (x)) to denote the distribution of T AB (x) when x ∼ p(x). At the same time T AB is optimized so that the discriminator will have a difficult task of discriminating between the distributions. The loss function for training T and D are therefore: DISPLAYFORM0 is a binary cross-entropy loss. The networks L D and L T are trained iteratively (as they act in opposite directions).In many datasets, the distribution-constraint alone was found to be insufficient. Additional constraints have been effectively added such as circularity (cycle) BID9 and distance invariance BID0. The popular cycle approach trains one-sided GANs in both the A → B and B → A directions, and then ensures that an A image domain translated to B (T AB (x)) and back to A (T BA (T BA (x))) recovers the original x. Let L 1 denote the L 1 loss. The complete two-sided cycle loss function is given by: DISPLAYFORM1 DISPLAYFORM2 The above two-sided approach yields mapping function from A to B and back. This method provides matching between every sample and a synthetic image in the target domain (which generally does not correspond to an actual target domain sample), it therefore does not provide exact correspondences between the A and B domain images. In the previous section, we described a distributional approach for mapping A domain image x to an image T AB (x) that appears to come from the B domain. In this section we provide a method for providing exact matches between domains. Let us assume that for every A domain image x i there exists an analogous B domain image y mi. Our task is find the set of indices m i. Once the exact matching is recovered, we can also train a fully supervised mapping function T AB, and thus obtain a mapping function of the quality provided by supervised method. 
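A compact PyTorch sketch of the distributional and cycle constraints described above is given below; the discriminators are assumed to output probabilities (sigmoid outputs), the relative weighting of the terms is omitted, and the function names are ours.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D_B, T_AB, x, y):
    # D_B outputs the probability that its input is a real domain-B image.
    p_real = D_B(y)
    p_fake = D_B(T_AB(x).detach())           # do not backpropagate into the mapper here
    return (F.binary_cross_entropy(p_real, torch.ones_like(p_real)) +
            F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))

def mapping_loss(T_AB, T_BA, D_A, D_B, x, y):
    # Distributional term: mapped samples should be classified as real by the discriminators.
    p_ab, p_ba = D_B(T_AB(x)), D_A(T_BA(y))
    dist = (F.binary_cross_entropy(p_ab, torch.ones_like(p_ab)) +
            F.binary_cross_entropy(p_ba, torch.ones_like(p_ba)))
    # Cycle term: an image mapped to the other domain and back should be unchanged.
    cycle = (T_BA(T_AB(x)) - x).abs().mean() + (T_AB(T_BA(y)) - y).abs().mean()
    return dist + cycle
```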
Let α i,j be the proposed match matrix between B domain image y j and A domain image x i, i.e., every x i matches a mixture of all samples in B, using weights α i,:, and similarly for y j for a weighing using α:,j of the training samples from A. Ideally, we should like a binary matrix with α i,j = 1 for the proposed match and 0 for the rest. This task is formally written as: DISPLAYFORM0 where L p is a "perceptual loss", which is based on some norm, a predefined image representation, a Laplacian pyramid, or otherwise. See Sec. 3.4.The optimization is continuous over T AB and binary programming over α i,j. Since this is computationally hard, we replace the binary constraint on α by the following relaxed version: DISPLAYFORM1 In order to enforce sparsity, we add an entropy constraint encouraging sparse solutions. DISPLAYFORM2 The final optimization objective becomes: DISPLAYFORM3 The positivity α ≥ 0 and i α i,j = 1 constraints are enforced by using an auxiliary variable β and passing it through a Sof tmax function. DISPLAYFORM4 The relaxed formulation can be optimized using SGD. By increasing the significance of the entropy term (increasing k entropy), the solutions can converge to the original correspondence problem and exact correspondences are recovered at the limit. Since α is multiplied with all mapped examples T AB (x), it might appear that mapping must be performed on all x samples at every batch update. We have however found that iteratively updating T AB for N epochs, and then updating β for N epochs (N = 10) achieves excellent . Denote the β (and α) updates-α iterations and the updates of T AB -T iterations. The above training scheme requires the full mapping to be performed only once at the beginning of the α iteration (so once in 2N epochs). Although the examplar-based method in Sec. 3.2 is in principle able to achieve good matching, the optimization problem is quite hard. We have found that a good initialization of T AB is essential for obtaining good performance. We therefore present AN-GAN -a cross domain matching method that uses both exemplar and distribution based constraints. The AN-GAN loss function consists of three separate constraint types:1. Distributional loss L T dist: The distributions of T AB (x) matches y and T BA (y) matches x (Eq. 3). 2. Cycle loss L T cycle: An image when mapped to the other domain and back should be unchanged (Eq. 4). 3. Exemplar loss L T exemplar: Each image should have a corresponding image in the other domain to which it is mapped (Eq. 11).The AN-GAN optimization problem is given by: min DISPLAYFORM0 The optimization also adversarially trains the discriminators D A and D B as in equation Eq. 6. Initially β are all set to 0 giving all matches equal likelihood. We use an initial burn-in period of 200 epochs, during which δ = 0 to ensure that T AB and T BA align the distribution before aligning individual images. We then optimize the examplar-loss for one α-iteration of 22 epochs, one T -iteration of 10 epochs and another α-iteration of 10 epochs (joint training of all losses did not yield improvements). The initial learning rate for the exemplar loss is 1e − 3 and it is decayed after 20 epochs by a factor of 2. We use the same architecture and hyper-parameters as CycleGAN unless noted otherwise. In all experiments the β parameters are shared between the two mapping directions, to let the two directions inform each other as to likelihood of matches. All hyper-parameters were fixed across all experiments. 
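To make the exemplar term concrete, the sketch below implements one plausible reading of the α-weighted matching described above: each target image is compared with the α-weighted mixture of mapped source images, α is obtained from trainable logits β through a softmax, and an entropy term encourages sparse (one-hot) rows. This is only a sketch; the exact form of Eq. 11, the perceptual loss L_p of Sec. 3.4 (replaced here by a plain L1 for brevity) and the value of k_entropy are not reproduced faithfully.

```python
import torch
import torch.nn.functional as F

def exemplar_loss(mapped_A, y_B, beta, k_entropy=0.05):
    """Sketch of the exemplar matching term.

    mapped_A: tensor (|I|, D) of flattened mapped source images T_AB(x_i).
    y_B:      tensor (|J|, D) of flattened target images y_j.
    beta:     trainable logits (|J|, |I|); alpha = softmax(beta) enforces
              positivity and rows summing to one.
    """
    alpha = F.softmax(beta, dim=1)                 # (|J|, |I|)
    # Each target y_j is compared with the alpha-weighted mixture of mapped sources.
    mixtures = alpha @ mapped_A                    # (|J|, D)
    match = (mixtures - y_B).abs().mean()          # L1 stands in for the perceptual L_p
    # Entropy term pushes each row of alpha towards a one-hot, i.e. an exact match.
    entropy = -(alpha * torch.log(alpha + 1e-12)).sum(dim=1).mean()
    return match + k_entropy * entropy
```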
In the previous sections we assumed a "good" loss function for determining similarity between actual and synthesized examples. In our experiments we found that Euclidean or L 1 loss functions were typically not perceptual enough to provide good supervision. Using the Laplacian pyramid loss as in GLO BID1 does provide some improvement. The best performance was however achieved by using a perceptual loss function. This was also found in several prior works BID4, BID8, BID2.For a pair of images I 1 and I 2, our loss function first extracts VGG features for each image, with the number of feature maps used depending on the image resolution. We use the features extracted by the the second convolutional layer in each block, 4 layers in total for 64X64 resolution images and five layers for 256X256 resolution images. We additionally also use the L 1 loss on the pixels to ensure that the colors are taken into account. Let us define the feature maps for images I 1 and I 2 as φ m 1 and φ m 2 (m is an index running over the feature maps). Our perceptual loss function is: DISPLAYFORM0 Where N P is the number of pixels and N m is the number of features in layer m. We argue that using this loss, our method is still considered to be unsupervised matching, since the features are available off-the-shelf and are not tailored to our specific domains. Similar features have been extracted using completely unsupervised methods (see e.g. BID3 To evaluate our approach we conducted matching experiments on multiple public datasets. We have evaluated several scenarios: (i) Exact matches: Datasets on which all A and B domain images have We compare our method against a set of other methods exploring the state of existing solutions to cross-domain matching:U nmapped − P ixel: Finding the nearest neighbor of the source image in the target domain using L 1 loss on the raw pixels. U nmapped − V GG: Finding the nearest neighbor of the source image in the target domain using VGG feature loss (as described in Sec. 3.4. Note that this method is computationally quite heavy due to the size of each feature. We therefore randomly subsampled every feature map to 32000 values, we believe this is a good estimate of the performance of the method. CycleGAN − P ixel: Train Eqs. 5, 6 using the authors' CycleGAN code. Then use L 1 to compute the nearest neighbor in the target set. CycleGAN − V GG: Train Eqs. 5, 6 using the authors' CycleGAN code. Then use VGG loss to compute the nearest neighbor in the target set. The VGG features were subsampled as before due to the heavy computational cost.α iterations only: Train AN − GAN as described in Sec. 3.3 but with α iterations only, without iterating over T XY .AN − GAN : Train AN − GAN as described in Sec. 3.3 with both α and T XY iterations. We evaluate our method on 4 public exact match datasets:Facades: 400 images of building facades aligned with segmentation maps of the buildings (Radim Tyleček, 2013).Maps: The Maps dataset was scraped from Google Maps by. It consists of aligned Maps and corresponding satellite images. We use the 1096 images in the training set. The original dataset contains around 50K images of shoes from the Zappos50K dataset BID23, (Yu & Grauman). The edge images were automatically detected by using HED (BID20). The original dataset contains around 137k images of Amazon handbags (BID24). 
The edge images were automatically detected using HED by.For both E2S and E2H the datasets were randomly down-sampled to 2k images each to accommodate the memory complexity of our method. This shows that our method works also for moderately sized dataset. In this set of experiments, we compared our method with the five methods described above on the task of exact correspondence identification. For each evaluation, both A and B images are shuffled prior to training. The objective is recovering the full match function m i so that x i is matched to y mi. The performance metric is the percentage of images for which we have found the exact match in the other domain. This is calculated separately for A → B and B → A.The are presented in Table. 1. Several observations are apparent from the : matching between the domains using pixels or deep features cannot solve this task. The domains used in our experiments are different enough such that generic features are not easily able to match between them. Simple mapping using CycleGAN and matching using pixel-losses does improve matching performance in most cases. CycleGAN performance with simple matching however leaves much space for improvement. The next baseline method matched perceptual features between the mapped source images and the target images. Perceptual features have generally been found to improve performance for image retrieval tasks. In this case we use VGG features as perceptual features as described in Sec. 3.4. We found exhaustive search too computationally expensive (either in memory or runtime) for our datasets, and this required subsampling the features. Perceptual features performed better than pixel matching. We also run the α iterations step on mapped source domain images and target images. This method matched linear combinations of mapped images rather than a single image (the largest α component was selected as the match). This method is less sensitive to outliers and uses the same β parameters for both sides of the match (A → B and B → A) to improve identification. The performance of this method presented significant improvements. The exemplar loss alone should in principle recover a plausible solution for the matches between the domains and the mapping function. However, the optimization problem is in practice hard and did not converge. We therefore use a distributional auxiliary loss to aid optimization. When optimized with the auxiliary losses, the exemplar loss was able to converge through α − T iterations. This shows that the distribution and cycle auxiliary losses are essential for successful analogy finding. Our full-method AN-GAN uses the full exemplar-based loss and can therefore optimize the mapping function so that each source sample matches the nearest target sample. It therefore obtained significantly better performance for all datasets and for both matching directions. In this set of experiments we used the same datasets as above but with M % of the matches being unavailable This was done by randomly removing images from the A and B domain datasets. In this scenario M % of the domain A samples do not have a match in the sample set in the B domain and similarly M % of the B images do not have a match in the A domain.(1 − M)% of A and B images contain exact matches in the opposite domain. The task is identification of the correct matches for all the samples that possess matches in the other domain. 
The evaluation metric is the percentage of images for which we found exact matches out of the total numbers of images that have an exact match. Apart from the removal of the samples ing in M % of non-matching pairs, the protocol is identical to Sec. 4.1.1.The for partial exact matching are shown in Table. 2. It can be clearly observed that our method is able to deal with scenarios in which not all examples have matches. When 10% of samples do not have matches, are comparable to the clean case. The are not significantly lower for most datasets containing 25% of samples without exact matches. Although in the general case a low exact match ratio lowers the quality of mapping function and decreases the quality of matching, we have observed that for several datasets (notably Facades), AN-GAN has achieved around 90% match rate with as much as 75% of samples not having matches. Although the main objective of this paper is identifying exact analogies, it is interesting to test our approach on scenarios in which no exact analogies are available. In this experiment, we qualitatively evaluate our method on finding similar matches in the case where an exact match is not available. We evaluate on the Shoes2Handbags scenario from BID9. As the CycleGAN architecture is not effective at non-localized mapping we used the DiscoGAN architecture BID9 for the mapping function (and all of the relevant hyper-parameters from that paper).In FIG2 we can observe several analogies made for the Shoes2Handbags dataset. The top example shows that when DiscoGAN is able to map correctly, matching works well for all methods. However in the bottom two rows, we can see examples that the quality of the DiscoGAN mapping is lower. In this case both the DiscoGAN map and DiscoGAN + α iterations present poor matches. On the other hand AN − GAN forced the match to be more relevant and therefore the analogies found by AN − GAN are better. We have shown that our method is able to align datasets with high accuracy. We therefore suggested a two-step approach for training a mapping function between two datasets that contain exact matches but are unaligned: (i) Find analogies using AN − GAN, and (ii) Train a standard mapping function using the self-supervision from stage (i).For the Facades dataset, we were able to obtain 97% alignment accuracy. We used the alignment to train a fully self-supervised mapping function using Pix2Pix. We evaluate on the facade photos to segmentations task as it allows for quantitative evaluation. In Fig. 3 we show two facade photos from the test set mapped by: CycleGAN, Pix2Pix trained on AN-GAN matches DISPLAYFORM0 Figure 3: Supervised vs unsupervised image mapping: The supervised mapping is far more accurate than unsupervised mapping, which is often unable to match the correct colors (segmentation labels). Our method is able to find correspondences between the domains and therefore makes the unsupervised problem, effectively supervised. and a fully-supervised Pix2Pix approach. We can see that the images mapped by our method are of higher quality than CycleGAN and are about the fully-supervised quality. In Table. 3 we present a quantitative comparison on the task. As can be seen, our self-supervised method performs similarly to the fully supervised method, and much better than CycleGAN.We also show for the edges2shoes and edges2handbags datasets. The supervised stage uses a Pix2Pix architecture, but only L 1 loss (rather than the combination with cGAN as in the paper -L 1 only works better for this task). 
The test set L 1 error is shown in Tab. 4. It is evident that the use of an appropriate loss and larger architecture enabled by the ANGAN-supervision yields improved performance over CycleGAN and is competitive with full-supervision. We have also evaluated our method on point cloud matching in order to test our method in low dimensional settings as well as when there are close but not exact correspondences between the samples in the two domains. Point cloud matching consists of finding the rigid 3D transformation between a set of points sampled from the reference and target 3D objects. The target 3D object is a We ran the experiments using the Bunny benchmark, using the same setting as in BID17. In this benchmark, the object is rotated by a random 3D rotation, and we tested the success rate of our model in achieving alignment for various ranges of rotation angles. For both CycleGAN and our method, the following architecture was used. D is a fully connected network with 2 hidden layers, each of 2048 hidden units, followed by BatchNorm and with Leaky ReLU activations. The mapping function is a linear affine matrix of size 3X3 with a bias term. Since in this problem, the transformation is restricted to be a rotation matrix, in both methods we added a loss term that encourages orthonormality of the weights of the mapper. Namely, W W T − I, where W are the weights of our mapping function. Tab. 5 depicts the success rate for the two methods, for each rotation angle bin, where success is defined in this benchmark as achieving an RMSE alignment accuracy of 0.05.Our significantly outperform the baseline reported in BID17 at large angles. Their are given in graph form, therefore the exact numbers could not be presented in Tab. 5. Inspection of the middle column of Fig.3 in BID17 will verify that our method performs the best for large transformations. We therefore conclude that our method is effective also for low dimensional transformations and well as settings in which exact matches do not exist. We presented an algorithm for performing cross domain matching in an unsupervised way. Previous work focused on mapping between images across domains, often ing in mapped images that were too inaccurate to find their exact matches. In this work we introduced the exemplar constraint, specifically designed to improve match performance. Our method was evaluated on several public datasets for full and partial exact matching and has significantly outperformed baseline methods. It has been shown to work well even in cases where exact matches are not available. This paper presents an alternative view of domain translation. Instead of performing the full operation end-toend it is possible to (i) align the domains, and (ii) train a fully supervised mapping function between the aligned domains. Future work is needed to explore matching between different modalities such as images, speech and text. As current distribution matching algorithms are insufficient for this challenging scenario, new ones would need to be developed in order to achieve this goal. | Finding correspondences between domains by performing matching/mapping iterations | 1,020 | scitldr |
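As a supplementary illustration of the point-cloud experiments above, the sketch below reconstructs only the mapping function used there, a 3x3 affine transform with an orthonormality penalty on its weights (PyTorch assumed; class and method names are our own, and the discriminator and matching losses are omitted).

```python
import torch
import torch.nn as nn

class RigidMapper(nn.Module):
    """Linear 3x3 mapping with bias, encouraged to stay close to a rotation matrix."""
    def __init__(self):
        super().__init__()
        self.W = nn.Parameter(torch.eye(3))
        self.b = nn.Parameter(torch.zeros(3))

    def forward(self, points):                       # points: (N, 3)
        return points @ self.W.t() + self.b

    def orthonormality_penalty(self):
        # || W W^T - I ||, pushing W towards an orthonormal (rotation) matrix.
        eye = torch.eye(3, device=self.W.device)
        return torch.norm(self.W @ self.W.t() - eye)
```

During training, the penalty term would simply be added, with some weight, to the distributional and exemplar losses, nudging W towards a rotation.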
Effectively inferring discriminative and coherent latent topics of short texts is a critical task for many real world applications. Nevertheless, the task has been proven to be a great challenge for traditional topic models due to the data sparsity problem induced by the characteristics of short texts. Moreover, the complex inference algorithm also become a bottleneck for these traditional models to rapidly explore variations. In this paper, we propose a novel model called Neural Variational Sparse Topic Model (NVSTM) based on a sparsity-enhanced topic model named Sparse Topical Coding (STC). In the model, the auxiliary word embeddings are utilized to improve the generation of representations. The Variational Autoencoder (VAE) approach is applied to inference the model efficiently, which makes the model easy to explore extensions for its black-box inference process. Experimental onWeb Snippets, 20Newsgroups, BBC and Biomedical datasets show the effectiveness and efficiency of the model. With the great popularity of social networks and Q&A networks, short texts have been the prevalent information format on the Internet. Uncovering latent topics from huge volume of short texts is fundamental to many real world applications such as emergencies detection BID18, user interest modeling BID19, and automatic query-reply BID16. However, short texts are characteristic of short document length, a very large vocabulary, a broad range of topics, and snarled noise, leading to much sparse word co-occurrence information. Thus, the task has been proven to be a great challenge to traditional topic models. Moreover, the complex inference algorithm also become a bottleneck for these traditional models to rapidly explore variations. To address the aforementioned issue, there are many previous works introducing new techniques such as word embeddings and neural variational inference to topic models. Word embeddings are the low-dimensional real-valued vectors for words. It have proven to be effective at capturing syntactic and semantic information of words. Recently, many works have tried to incorporate word embeddings into topic models to enrich topic modeling BID5 BID7 BID22. Yet these models general rely on computationally expensive inference procedures like Markov Chain Monte Carlo, which makes them hard to rapidly explore extensions. Even minor changes to model assumptions requires a re-deduction of the inference algorithms, which is mathematic challenging and time consuming. With the advent of deep neural networks, the neural variational inference has emerged as a powerful approach to unsupervised learning of complicated distributions BID8 BID17 BID14. It approximates the posterior of a generative model with a variational distribution parameterized by a neural network, which allows back-propagation based function approximations in generative models. The variational autoencoder (VAE) BID8, one of the most popular deep generative models, has shown great promise in modeling complicated data. Motivated by the promising potential of VAE in building generative models with black-box inference process, there are many works devoting to inference topic models with VAE BID20 BID13 BID4. However, these methods yield the same poor performance in short texts as LDA.Based on the analysis above, we propose a Neural Variational Sparse Topic Model (NVSTM) based on a sparsity-enhanced topic model STC for short texts. The model is parameterized with neural networks and trained with VAE. 
It still follows the probabilistic characteristics of STC. Thus, the model inherit s the advantages of both sparse topic models and deep neural networks. Additionally, we exploit the auxiliary word embeddings to improve the generation of short text representations.1. We propose a novel Neural Variational Sparse Topic Model (NVSTM) to learn sparse representations of short texts. The VAE is utilized to inference the model effectively. 2. The general word semantic information is introduced to improve the sparse representations of short texts via word embeddings. 3. We conduct experiments on four datasets. Experimental demonstrate our model's superiority in topic coherence and text classification accuracy. The rest of this paper is organized as follows. First, we reviews related work. Then, we present the details of the proposed NVSTM, followed by the experimental . Finally, we draw our . Topic models. Traditional topic models and their extensions BID0 BID2 BID12 have been widely applied to many tasks such as information retrieval, document classification and so on. These models work well on long texts which have abundant word co-occurrence information for learning, but get stuck in short texts. There have been many efforts to address the data sparsity problem of short texts. To achieve sparse representations in the documenttopic and topic-term distributions, BID21 introduced a Spike and Slab prior to model the sparsity in finite and infinite latent topic structures of text. Similarly, BID10 proposed a dual-sparse topic model that addresses the sparsity in both the topic mixtures and the word usage. These models are inspired by the effect of the variation of the Dirichlet prior on the probabilistic topic models. There are also some non-probabilistic sparse topic models aiming at extracting focused topics and words by imposing various sparsity constraints. BID6 formalized topic modeling as a problem of minimizing loss function regularized by lasso. Subsequently, Zhu & Xing presented sparse topical coding (STC) by utilizing the Laplacian prior to directly control the sparsity of inferred representations. However, over complicated inference procedure of these sparse topic models has limited their applications and extensions. Topic Models with Word Embeddings. Since word embeddings can capture the semantic meanings of words via low-dimensional real-valued vectors, there have been a large number of works on topic models that incorporate word embeddings to improve topic modeling. BID5 proposed a new technique for topic modeling by treating the document as a collection of word embeddings and topics itself as multivariate Gaussian distributions in the embedding space. However, the assumption that topics are unimodal in the embedding space is not appropriate, since topically related words can occur distantly from each other in the embedding space. Therefore, BID7 proposed latent concept topic model (LCTM), which modeled a topic as a distribution of concepts, where each concept defined another distribution of word vectors. BID15 proposed Latent Feature Topic Modeling (LFTM), which extended LDA to incorporate word embeddings as latent features. Lately, BID22 proposed a novel correlated topic model using word embeddings, which is enable to exploit the additional word-level correlation information in word embeddings and directly model topic correlation in the continuous word embedding space. However, these models also have trouble to rapidly explore extensions. 
Neural Variational Inference for topic models. Neural variational inference is capable of approximating the posterior of a generative model with a variational distribution parameterized by a neural network BID8 BID17 BID14. The variational autoencoder (VAE), as one of the most popular neural variational inference approach, has shown great promise in building generative models with black-box inference process BID8. To break the bottleneck of over complicated inference procedure in topic models, there are many efforts devoting to inference topic models with VAE. BID20 presents auto-encoding variational Bayes (AEVB) based inference method for latent Dirichlet allocation (LDA), tackling the problems caused by the Dirichlet prior and component collapsing in AEVB. BID13 presents alternative neural approaches in topic modeling by providing parameterized distributions over topics. It allows training the topic model via back-propagation under the framework of neural variational inference. BID4 combines certain motivating ideas behind variations on topic models with modern techniques for variational inference to produce a flexible framework for topic modeling that allows for rapid exploration of different models. Nevertheless, aforementioned works are based on traditional LDA, thus bypass the sparsity problem of short texts. Drawing inspiration from the above analysis, we propose a novel neural variational sparse topic model NVSTM based on VAE for short texts, which combines the merits of neural networks and sparsity-enhanced topic models. In this section, we start from describing Sparse Topical Coding (STC). Based on it, we further propose Neural Variational Sparse Topic Model (NVSTM). Later, we focus on the discussion of the inference process for NVSTM. Firstly, we define that D = {1, ..., M} is a document set with size M, T = {1, ..., K} is a topic collection with K topics, V = {1, .., N} is a vocabulary with N words, and w d = {w d,1, .., w d,|I|} is a vector of terms representing a document d, where I is the index of words in document d, and w d,n (n ∈ I) is the frequency of word n in document d. Moreover, we denote β ∈ R N ×K as a topic dictionary for the whole document set with k bases, DISPLAYFORM0 K is the word code of word n in document d. To yield interpretable patterns, (θ, s, β) are constrained to be to be non-negative. In standard STC, each document and each word is represented as a low-dimensional code in topic space. Based on the topic dictionary β with K topic bases sampled from a uniform distribution, the generative process is described as follows: DISPLAYFORM0 STC reconstructs each observed word count from a linear combination of a set of topic bases, where the word code is utilized as the coefficient vector. To achieve sparse word codes, STC defines DISPLAYFORM0 The composite distribution is super-Gaussian: DISPLAYFORM1 With the Laplace term, the composite distribution tends to yield sparse word codes. For the same purpose, the prior distribution p(θ d) of sparse document codes is a Laplace prior Laplace(0, λ −1). Additionally, According to the above generative process, we have the joint distribution: DISPLAYFORM2 To simplify the calculation, the document code can be collapsed and later obtained via an aggregation of the individual word codes of all its terms. Although STC has closed form coordinate descent equations for parameters (θ, s, β), it is inflexible for its complex inference process. To address the aforementioned issue, we introduce black box inference methods into STC. 
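To make the STC observation model above concrete, the following sketch (our own, in NumPy/SciPy) evaluates the Poisson log-likelihood of a single observed word count, whose rate is the linear combination of topic bases weighted by the word code; variable names follow the notation of this section.

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(w_dn, s_dn, beta_n):
    """Log-likelihood of the observed count w_dn under the STC reconstruction.

    w_dn:   observed frequency of word n in document d (non-negative integer)
    s_dn:   (K,) non-negative word code of word n in document d
    beta_n: (K,) n-th row of the topic dictionary beta in R^{N x K}
    """
    rate = float(np.dot(s_dn, beta_n))          # linear combination of topic bases
    # Poisson log-pmf: w * log(rate) - rate - log(w!)
    return w_dn * np.log(rate + 1e-12) - rate - gammaln(w_dn + 1)
```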
We present NVSTM based on VAE and introduces word embeddings. As in BID1, we remove the document code and generate it via a simple aggregation of all sampled word codes among all topics: DISPLAYFORM0. Analogous to the generative process in STC, our model follows the generative story below for each document d: DISPLAYFORM1 The graphical representation of NVSTM is depicted in Figure 1. Different from STC, we replace the super-Gaussian with a uniform distribution. In the inference process, we adopt the variational posterior Laplace(s d,nk ; 0, σ d,nk (w d,n)) to approximate the intractable posterior p(s d,nk |w d,n) for the sparse word codes, where σ d,nk is the scale parameter of Laplace distribution. Therefore, in the above generative process, each word code vector is generated from the uniform prior distribution. The observed word count is sampled from Poisson distribution. Different from traditional STC, we replace the uniform distribution of the topic dictionary with a topic dictionary neural network. In the topic dictionary neural network, we introduce the word semantic information via word embeddings to enrich the feature space for short texts. The topic dictionary neural network is comprised of following:Word embedding layer (E ∈ R N ×300): Supposing the word number of the vocabulary is N, this layer devotes to transform each word to a distributed embedding representation. Here, we adopt the pre-trained embeddings by GloVe based on a large Wikipedia dataset 1. Given a word embedding matrix E, we map each word to a 300-dimensional embedding vector, which can capture subtle semantic relationships between words. Topic dictionary layers (β ∈ R N ×K): This layers aim at converting E to a topic dictionary similar to the one in STC. DISPLAYFORM2 where f is a multilayer perceptron. To conform to the framework of STC, we make a simplex projection among the output of topic dictionary neural network. We normalize each column of the dictionary via the simplex projection as follow: DISPLAYFORM3 The simplex projection is the same as the sparsemax activation function in BID11, which declares how the Jacobian of the projection can be efficiently computed, providing the theoretical base of its employment in a neural network trained with backpropagation. After the simplex projection, each column of the topic dictionary is promised to be sparse, non-negative and united. Based the above generative process, the traditional variational inference for the model is to minimize the follow optimization problem, which is a lower bound to the marginal log likelihood: DISPLAYFORM4 where q(s|γ) is approximate variational posterior, and γ is the variational parameter. In this paper, we employ the VAE to carry out neural variational inference for our model. Variational Autoencoder (VAE) is one of the most popular deep generative network. It is a black-box variational method which bridges the conceptual and language gap of neural networks and probability generative models. From neural network perspective, a variational autoencoder consists of an encoder network, a decoder network, and a loss function. In our model, the encoder network is to parametrize the approximate posterior q θ (s|w), which takes input as the observed word count to output the latent variable s with the variational parameters θ: DISPLAYFORM0 where f e (w d,n) is a multilayer perceptron acting on the word counts w d,n in document d, and logσ d,nk is the scale parameter of the approximate posterior, from which the word codes s d,nk are sampled. 
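The simplex projection applied to each column of the topic dictionary above is the standard sparsemax projection onto the probability simplex; a generic NumPy implementation (not taken from the NVSTM code) is sketched below.

```python
import numpy as np

def project_to_simplex(z):
    """Euclidean projection of a vector onto the probability simplex (sparsemax)."""
    z_sorted = np.sort(z)[::-1]                  # sort in decreasing order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = z_sorted - (cumsum - 1.0) / k > 0  # indices kept above the threshold
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_max    # threshold shifting the kept entries
    return np.maximum(z - tau, 0.0)
```

For instance, the projected dictionary could be obtained as np.apply_along_axis(project_to_simplex, 0, beta_raw), yielding columns that are non-negative, sum to one, and are typically sparse.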
The decoder network outputs the observed data w with given s and the generative parameters φ, which is denoted as p φ (w|s, β). According to STC, we define the decoder network DISPLAYFORM1 DISPLAYFORM2 where f d is a multilayer perceptron. Based on VAE, we rewrite the ELBO as: DISPLAYFORM3 The first term is a regularizer that constraints the Kullback-Leibler divergence between the encoder's distribution distribution and the prior of the latent variables. The second term is the reconstruction loss, which encourages the decoder to reconstruct the data in minimum cost. We devote to differentiate and optimize the lower bound above with stochastic gradient decent (SGD). However, the gradient of the lower bound is tricky since the error is unable to back propagate through a random drawn variable s, which is a non-continuous and has no gradient. Similar to the standard VAE, we make a differentiable transformation, called reparameterization trick. We approximate s with an auxiliary noise variable ε ∼ U (−0.5, 0.5): DISPLAYFORM0 Through reparametrization, we can take s as a function with the parameter b deriving from the encoder network. It allows the reconstruction error to flow through the whole network. FIG0 presents the complete VAE inference process for NVSTM. Moreover, in order to achieve interpretable word codes as in STC, we constrain s to be non-negative, activation function on the output s of encoder. After apply the reparameterization trick to the variational lower bound, we can yield DISPLAYFORM1 where Θ represents the set of all the model. As explained above, the decoding term logp(w d,n |s d,nk, β nk) is the Poisson distribution, and β is generated by a topic dictionary neural network. After the differentiable transformation, the variation objective function can be computed in closed form and efficiently solved with SGD. The detailed algorithm is shown in Algorithm 1. To evaluate the performance of our model, we present a series of experiments below. The objectives of the experiments include: the qualitative evaluations: classification accuracy of documents and sparse ratio of latent representations; the qualitative inspection: the quality of extracted topics and document representations. Our evaluation is based on the four datasets:• 20Newsgroups: The classic 20 newsgroups dataset, which is comprised of 18775 newsgroup articles with 20 categories, and contains 60698 unique words 2.• Web Snippet: The web snippet dataset, which includes 12340 Web search snippets in 8 categories. We remove the words with fewer than 3 characters or whose document frequency less than 3 in the dataset. After the preprocessing, it contains 5581 unique words. 3.• BBC: It consists of 2225 BBC news articles from 2004-2005 with 5 classes. We only use the title and headline of each article. We remove the stop words and the words whose document frequency less than 3 in the dataset 4.• Biomedical: It consists of 20000 paper titles from 20 different MeSH in BioASQ's official website. We convert letters into lower case and remove the words whose document frequency less than 3 in the dataset. After preprocessing, there are 19989 documents with 20 classes 5.Statistics on the four datasets after preprocessing is reported in TAB0.We compare our model with five topic models: et al., 2003). A classical probabilistic topic model. We use the open source LDA implemented by collapsed Gibbs sampling 6. 
We use the default settings with iteration number n = 2000, the Dirichlet parameter for distribution over topics α = 0.1 and the Dirichlet parameter for distribution over words η = 0.01. DISPLAYFORM0 • STC (Zhu & Xing). A sparsity-enhanced topic model which has been proven to perform better than many existing models. We adopt the implementation of STC released by its authors 7. We set the regularization constants as λ = 0.2, ρ = 0.001 and the maximum number of iterations of hierarchical sparse coding, dictionary learning as 100.• NTM BID3. A recently proposed neural network based topic model, which has been reported to outperform the Replicated Softmax model 8. In NTM, the learning rate is 0.01 and the regularization factor is 0.001. During the pre-training procedure for all weight matrices, they are initialized with a uniform distribution in interval [-4*sqrt(6./(n visible+n hidden)),4*sqrt(6./(n visible+n hidden))], where n visible=784 and n hidden=500.• DocNADE BID9 ). An unsupervised neural network topic model of documents and have shown that it is a competitive model both as a generative model and as a document representation learning algorithm 9. In DocNADE, we choose the sigmoid activate function, the hidden size is 50, the learning rate is 0.01, the bath size is 64 and the max training number is 1000.• GaussianLDA BID5. A new technique for topic modeling by treating the document as a collection of word embeddings and topics itself as multivariate Gaussian distributions in the embedding space 10. We use default values for the parameters. Our model is implemented in Python via TensorFlow. For four datasets, we utilize the pre-trained 300-dimensional word embeddings from Wikipedia by GloVe, which is fixed during training. For each out-of-vocabulary word, we sample a random vector from a normal distribution in interval. We adopted ADAM optimizer for weight updating with an initial learning rate of 4e − 4 for four dataset. All weight matrices are initialized with a uniform distribution in interval [0, 1e − 5]. In practice, we found that our model is stable with the size of hidden layer, and set it to 500. To evaluate the effectiveness of the representation of documents learned by NVSTM, we perform text classification tasks on web snippet, 20NG, BBC and Biomedical using the document codes learned by topic models as the feature representation in a multi-class SVM. For each method, after obtaining the document representations of the training and test sets, we trained an classifier on the training set using the scikit-learn library. We then evaluated its predictive performance on the test set. On web snippet, we utilize 80% documents for training and 20% for testing. On the 20NG dataset, we keep 60% documents for training and 40% for testing, which is the same configuration as in BID10. For BBC and Biomedical dataset, we also keep 60% documents for training and 40% for testing. TAB2 report the classification accuracy under different methods with different settings on the number of topics among the four datasets. It clearly denotes that 1) In the four datasets, the NVSTM yields the highest accuracy. 2) In general, the neural network based NVSTM, NTM,DocNADE and GLDA generate better document representations than STC and LDA, demonstrating the representative advantage of neural networks in distributed word representations. 3) Sparse models NVSTM are superior to non-sparse models (DocNADE, NTM, GLDA and LDA) separately. 
It indicates that sparse topic models are more capable of extracting topics from short documents. In this part, we quantitatively investigate the word codes and document codes learned by our model. We compute the average word code as s̄_n = 1/|D_n| Σ_{d∈D_n} s_{d,n}, where D_n is the set of all documents that word n appears in. Table 4 shows the average word codes of some representative words learned by NVSTM and LDA in 8 categories of web snippet. For each category, we also present the topics learned by NVSTM in TAB3. We list the top-9 words according to their probabilities under each topic. In Table 4, the results illustrate that the codes discovered by NVSTM are apparently much sparser than those discovered by LDA. NVSTM tends to focus on a narrow spectrum of topics and obtains discriminative and sparse representations of words. In contrast, LDA generates word codes with many non-zeros due to the data sparsity, leading to a confused topic distribution. Besides, in NVSTM, it is clear that each non-zero element in a word code represents the topical meaning of the word in the corresponding position. The weights of these elements express their relationship with the topics. Notice that some words (e.g. candidates) have only a small range of topical meanings, indicating a narrow usage of those terms, while other words (e.g. hockey and marketing) tend to have a broad spectrum of topical meanings, denoting a general usage of those terms. Here, each document code is calculated by aggregating the word codes of its terms, as described above and in BID1. To demonstrate the quality of the learned representations, we produce a t-SNE projection of the document codes of the four datasets learned by our model in FIG3. For Web Snippet, we sample 10% of the whole document codes. For 20newsgroups and Biomedical, we sample 30% of the whole document codes. As for BBC, we present the whole document codes. It is obvious to see that all documents are clustered into distinct categories, whose number is equal to the ground truth number of categories in the four datasets. This demonstrates the semantic effectiveness of the document codes learned by our model. We propose a neural sparsity-enhanced topic model, NVSTM, which, to the best of our knowledge, is the first effort in introducing an effective VAE inference algorithm to STC. We take advantage of VAE to simplify the inference process, which requires no model-specific algorithm derivations. By employing word embeddings and a neural network framework, NVSTM is able to generate clearer and semantically enriched representations for short texts. The evaluation results demonstrate the effectiveness and efficiency of our model. Future work can include extending our model with other deep generative models, such as generative adversarial networks (GAN). | a neural sparsity-enhanced topic model based on VAE | 1,021 | scitldr |
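As a small companion to the qualitative analysis above, the per-word average codes s̄_n can be computed as in the following sketch (our own, in NumPy), once the word codes of each document have been inferred by the trained encoder.

```python
import numpy as np
from collections import defaultdict

def average_word_codes(occurrences):
    """occurrences: iterable of (word_id, code) pairs, one per document the word occurs in.

    Returns a dict word_id -> mean code vector, i.e. s_bar_n = 1/|D_n| * sum_{d in D_n} s_{d,n}.
    """
    acc = defaultdict(list)
    for word_id, code in occurrences:
        acc[word_id].append(np.asarray(code))
    return {w: np.mean(codes, axis=0) for w, codes in acc.items()}
```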
Neural Tangents is a library for working with infinite-width neural networks. It provides a high-level API for specifying complex and hierarchical neural network architectures. These networks can then be trained and evaluated either at finite-width as usual, or in their infinite-width limit. For the infinite-width networks, Neural Tangents performs exact inference either via Bayes' rule or gradient descent, and generates the corresponding Neural Network Gaussian Process and Neural Tangent kernels. Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks. The entire library runs out-of-the-box on CPU, GPU, or TPU. All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. In addition to the repository below, we provide an accompanying interactive Colab notebook at https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb | Keras for infinite neural networks. | 1,022 | scitldr |
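A minimal usage sketch of the high-level API mentioned above is given below; it is adapted from memory of the library's documentation, so exact function names and signatures should be verified against the repository, and the toy data is only a placeholder.

```python
import numpy as np
import neural_tangents as nt
from neural_tangents import stax

# Placeholder data; in practice these would be real training/test arrays.
x_train, y_train = np.random.randn(20, 10), np.random.randn(20, 1)
x_test = np.random.randn(5, 10)

# Infinite-width fully-connected architecture specified with the stax-style API.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

# Closed-form predictions of the infinitely wide network on test points, either as
# the NNGP posterior (exact Bayesian inference) or after gradient descent (NTK).
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_test_nngp = predict_fn(x_test=x_test, get='nngp')
y_test_ntk = predict_fn(x_test=x_test, get='ntk')
```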
Symbolic logic allows practitioners to build systems that perform rule-based reasoning which is interpretable and which can easily be augmented with prior knowledge. However, such systems are traditionally difficult to apply to problems involving natural language due to the large linguistic variability of language. Currently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability. We propose to reap the benefits of both approaches by applying a combination of neural networks and logic programming to natural language question answering. We propose to employ an external, non-differentiable Prolog prover which utilizes a similarity function over pretrained sentence encoders. We fine-tune these representations via Evolution Strategies with the goal of multi-hop reasoning on natural language. This allows us to create a system that can apply rule-based reasoning to natural language and induce domain-specific natural language rules from training data. We evaluate the proposed system on two different question answering tasks, showing that it complements two very strong baselines – BIDAF (a) and FASTQA – and outperforms both when used in an ensemble. We consider the problem of multi-hop reasoning on natural language input. For instance, consider the statements Socrates was born in Athens and Athens belongs to Greece, together with the question Where was Socrates born? There are two obvious answers following from the given statements: Athens and Greece. While Athens follows directly from the single statement Socrates was born in Athens, deducing Greece requires a reader to combine both provided statements using the knowledge that a person that was born in a city, which is part of a country, was also born in the respective country. Most recent work that addresses such challenges leverages deep learning based methods BID41 BID29 BID38 BID30 BID18 BID21 BID17 BID8, capable of dealing with the linguistic variability and ambiguity of natural language text. However, the black-box nature of neural networks makes it hard to interpret the exact reasoning steps leading to a prediction (local interpretation), as well as the induced model (global interpretation).Logic programming languages like Prolog BID46, on the other hand, are built on the idea of using symbolic rules to reason about entities, which makes them highly interpretable both locally and globally. The capability to use user-defined logic rules allows users to incorporate external knowledge in a straightforward manner. Unfortunately, because of their reliance on symbolic logic, systems built on logic programming need extensive preprocessing to account for the linguistic variability that comes with natural language BID23.We introduce NLPROLOG, a system which combines a symbolic reasoner and a rule-learning method with pretrained sentence representations to perform rule-based multi-hop reasoning on natural language input.1 Like inductive logic programming methods, it facilitates both global as well as local interpretation, and allows for straightforward integration of prior knowledge. Similarly to deep learning based approaches, it can be applied to natural language text without the need to transforming it to formal logic. At the core of the proposed method is an external non-differentiable theorem prover which can take similarities between symbols into account. 
Specifically, we modify a Prolog interpreter to support weak-unification as proposed by BID39. To obtain similarities between symbols, we utilize sentence encoders initialized with pretrained sentence embeddings BID28 and then fine-tune these for a downstream question answering task via gradient-based optimization methods. Since the ing system contains non-differentiable components, we propose using Evolution Strategies (ES) BID9 as a gradient estimator BID47 for training the systemenabling us to fine-tune the sentence encoders and to learn domain-specific logic rules (e.g. that the relation is in is transitive) from natural language training data. This in a system where training can be trivially parallelized, and which allows to change the logic formalism by simply exchanging the external prover without the need for an intricate re-implementation as an end-to-end differentiable function. In summary, our main contributions are: a) we show how Prolog-like reasoning can be applied to natural language input by employing a combination of pretrained sentence embeddings, an external logic prover, and fine-tuning using Evolution Strategies, b) we extend a Prolog interpreter with weak unification based on distributed representations, c) we present Gradual Rule Learning (GRL), a training algorithm that allows the proposed system to learn First-Order Logic (FOL) rules from entailment, and d) we evaluate the proposed system on two different Question Answering (QA) datasets and demonstrate that its performance is on par with state-of-the-art neural QA models in many cases, while having different failure modes. This allows to build an ensemble of NLPROLOG and a neural QA model that outperforms all individual models. Our work touches in general on weak-unification based fuzzy logic BID39 and focuses on multi-hop reasoning for QA, the combination of logic and distributed representations, and theorem proving for question answering. Multi-hop Reasoning for QA. One prominent approach for enabling multi-hop reasoning in neural QA models is to iteratively update a query embedding by integrating information from embeddings of context sentences, usually using an attention mechanism and some form of recurrency BID41 BID29 BID38 BID30. These models have achieved state-of-the-art in a number of reasoning-focused QA tasks. BID18 employ a differentiable memory structure that is updated each time a new piece of information (usually a sentence) is processed. The memory slots can be used to track the state of various entities, which can be considered as a form of temporal reasoning. Similarly, the Neural Turing Machine BID17 and the Dynamic Memory Network BID21, which are built on differentiable memory structures, have been used to solve synthetic QA problems requiring multi-hop reasoning. BID8 modify an existing neural QA model to additionally incorporate coreference information provided by a coreference resolution model in a preprocessing step, which improves performance on QA problems requiring multi-hop reasoning. All of the methods above perform reasoning more or less implicitly by updating latent vector representations, which makes an unambiguous interpretation of the exact reasoning steps difficult. Additionally, it is not obvious how a strong prior, like user-defined inference rules, could be imposed on the respective reasoning procedures. 
Besides, many of them have been evaluated only on artificially generated data sets and thus it is unclear how they perform when on data that involves natural linguistic variability. Combination of FOL and Distributed Representations. Investigating the combination of formal logic and distributed representations has a long tradition, which is reviewed by BID4. Strongly related to our approach is the combination of Markov Logic Networks BID32, Probabilistic Soft Logic BID1, and word embeddings, which has been applied to Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS) BID14 BID2, and improves upon baselines utilizing either only logic or only distributed representations. An area in which neural multi-hop reasoning models have been thoroughly investigated is Knowledge Base Completion BID6 BID5 BID26 BID7. While QA could be in principle modeled as a KB-completion task, the construction of a densely connected KB from text is far from trivial, due to the inherent ambiguity of natural language. Without any preprocessing, even the moderately sized QA tasks considered in this work would produce a very large and sparsely connected KB.Closest to our approach is the Natural Theorem Prover (NTP) BID33, which obtains the final proof score for a statement by constructing a neural network that represents all possible proofs. The model is trained end-to-end using backward chaining and a differentiable unification operator. Since the number of possible proofs grows exponentially with the number of facts and rules, NTPs cannot scale even to moderately sized knowledge bases, and are thus not applicable to natural language problems in its current form. We circumvent this issue by using a non-differentiable prover and fine-tune the model using Evolution Strategies. Theorem Proving for Question Answering. Our work is not the first to apply theorem proving to QA problems. BID0 employ a system based on Natural Logic to search a large KB for a single statement that entails the candidate answer. This is somewhat orthogonal to our approach, as we aim to learn rules that combine multiple statements to answer a question. More traditional systems like Watson BID12 or COGEX BID23 utilize an integrated theorem prover, but require a transformation of the natural language input to logical form. In the case of COGEX, this improves the accuracy of the underlying system by 30%, and increases its interpretability. While this work is similar in spirit, we greatly simplify the preprocessing step by replacing the transformation of natural language to logic with the simpler approach of taking Open Information Extraction (Open IE) BID10 textual patterns. BID11 propose the OPENQA system that utilizes a mixture of handwritten and automatically obtained operators that are able to parse, paraphrase and rewrite queries, which allows them to perform large-scale QA on KBs that include Open IE triples. While this work shares the same goal -answering questions using facts extracted by Open IE -we choose a completely different approach to address the problem of linguistic variablity and focus on the combination of multiple facts by learning logical rules. In the following, we review the relevant for the custom Prolog engine employed by our method. Specifically, we briefly introduce Prolog's backward chaining algorithm and unification procedure BID35. We assume basic knowledge of formal logic and logic programming. 
In a nutshell, a Prolog program consists of a set of rules in the form of Horn clauses h ⇐ b_1 ∧ ... ∧ b_B, where the atom h is called the head and b_1, ..., b_B the body of the rule. We call B the body size of the rule, and rules with a body size of zero are named atoms (short for atomic formula). If an atom does not contain any variable symbols it is termed a fact. As in related work (BID39; BID19), we disregard negation and disjunction. The central procedure of Prolog is unification. It receives two atoms and attempts to find variable substitutions that make both atoms syntactically equal. For example, the input atoms country(Greece, Socrates) and country(X, Y) result in the following variable substitution after unification: {X/Greece, Y/Socrates}. The proof algorithm of Prolog is called backward-chaining. It starts from a goal atom g and attempts to prove it by applying suitable rules, thereby generating subgoals that are proved next. To find applicable rules, it attempts to unify g with the heads of all available rules. If this unification succeeds, the resulting variable substitutions are applied to the atoms in the rule body and each of those atoms becomes a new subgoal. For instance, the application of the rule country(X, Y) ⇐ born_in(Y, X) to the goal country(Greece, Socrates) would yield the subgoal born_in(Socrates, Greece). Then the process is repeated for all subgoals until no subgoal is left, i.e., until all subgoals have been unified with a fact. The result of this procedure is a set of rule applications and variable substitutions called a proof. Note that the number of possible proofs is exponential in the number of predicate and entity symbols, as every rule might be used in the proof of each subgoal. Pseudo code for weak unification can be found in Appendix A.1 and we refer the reader to prior work for more details. To apply standard Prolog to an NLP problem like QA, one would have to account for semantic similarities and ambiguities with extensive and error-prone preprocessing, e.g. when transforming the natural language input to logical form. Our aim is to apply Prolog to natural language question answering where the same entity or relation can have different natural language surface forms. Thus, we replace the equality-based unification operator with similarity-based weak unification BID39, which allows two symbols x, y to be unified if they are sufficiently similar, as judged by a similarity function ∼ θ parameterized by θ. The result of such a unification then also contains a proof success score S' that is computed from the symbols' similarity and the previous success score S: S' = f(x ∼ θ y, S), where f ∈ {min, ·} is an aggregation function. The result of backward-chaining with weak unification is a set of (possibly multiple) proofs, each with an associated proof success score. NLPROLOG combines inference based on weak unification and distributed representations to allow reasoning on natural language statements. The natural language statements are first transformed to triples using Open IE BID10. The symbols occurring in these triples and in the rules are then embedded into a vector space, which in turn is used to estimate similarities between symbols. The resulting similarity function is subsequently used to perform a proof and consequently obtain a proof success score S. The proof success score is then utilized as a training signal for ES. An illustration of the process can be found in FIG1, where we visualize the interplay of the different components for our running example.
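A much-simplified sketch of weak unification for flat triples is given below. It is our own illustration rather than the pseudo code of Appendix A.1: atoms are assumed to be tuples of symbol strings, variables are assumed to start with an upper-case letter, a similarity callable (such as the embedding-based one of the next section) is supplied, and the pruning threshold tau as well as the omission of nested terms are simplifications of ours.

```python
def is_variable(symbol):
    return isinstance(symbol, str) and symbol[:1].isupper()

def weak_unify(atom1, atom2, subst, score, similarity, tau=0.0, agg=min):
    """Weakly unify two atoms, e.g. ('born_in', 'Socrates', 'X') and ('born_in', 'Y', 'Athens').

    Returns (substitution, new_score) or None on failure. Substitutions are resolved
    only one level deep to keep the sketch short.
    """
    if len(atom1) != len(atom2):
        return None
    subst = dict(subst)                 # do not mutate the caller's substitution
    for a, b in zip(atom1, atom2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if is_variable(a):
            subst[a] = b
        elif is_variable(b):
            subst[b] = a
        else:
            sim = similarity(a, b)      # similarity of two (possibly different) symbols
            if sim <= tau:              # prune hopeless unifications
                return None
            score = agg(score, sim)     # aggregate with min or product, as in the text
    return subst, score
```

Backward chaining then proceeds as in standard Prolog, except that each unification threads its score through the aggregation function and a goal may unify with several facts or rule heads, yielding multiple scored proofs.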
We embed symbols using an encoder E θ : F ∪ P → R^d parametrized by θ for entity and predicate symbols, where d denotes the embedding size. The resulting embeddings are used to induce the similarity function ∼ θ : (F ∪ P)² → R, defined as x ∼ θ y = cos(E θ (x), E θ (y)), where cos denotes the cosine similarity between two vectors. There are alternative similarity functions such as Euclidean distance or an RBF kernel, but in preliminary experiments we found cosine similarity to work more robustly. We use an encoder function that uses an embedding lookup table for predicate symbols and a different one for entities. All embeddings representing natural language phrases are populated with sentence vectors that were pretrained with SENT2VEC BID28. Additionally, we introduce a third lookup table for the predicate symbols of rules and goals, because the semantics of goal or rule predicates might differ from the semantics of fact predicates even if they share the same surface form. For instance, the query (X, parent, Y) could be interpreted either as (X, is the parent of, Y) or as (X, has the parent, Y), which are fundamentally different things. We propose to fine-tune the similarity function on a downstream task by updating the symbol embeddings. As NLPROLOG involves the non-differentiable proof search step, we cannot apply backpropagation for optimization. Instead, we propose to employ Evolution Strategies in conjunction with Adam to estimate the weight updates. ES recently showed good results for Reinforcement Learning problems in BID36; BID22. More formally, the parameter update is computed as θ_{t+1} = θ_t + α_t · 1/(n σ_J^t) Σ_{i=1}^{n} J(θ_t + ε_i) ε_i, with perturbations ε_i sampled from N(0, σ²I), where J(θ) is the reward obtained by θ, σ_J^t is the standard deviation of all rewards obtained at time t as proposed by BID22, and α_t are adaptive learning rates selected by ADAM BID20. The standard deviation σ of the distribution generating the perturbations is treated as a hyperparameter. We train NLPROLOG with ES using a learning from entailment setting BID25, in which the model is trained to decide whether a Prolog program R entails the truth of a candidate triple c. The objective of the model is to assign high probabilities p(c|R; θ) to true candidate triples and low probabilities to false triples. To achieve this, we model the reward as J(θ) = y · p(c|R; θ), where y ∈ {−1, 1} is the gold label. To estimate p(c|R; θ), we exhaustively search for all proofs for the triple c, up to a given depth D which we treat as a hyperparameter. This search yields a number of proofs, each with a success score S_i. We set p(c|R; θ) to be the maximum of these scores, S_max = max_i S_i. The reasoning process of NLPROLOG crucially depends on rules that describe the relations between predicates. While it is possible to write down rules in natural language, this approach is hardly scalable. Thus, we follow BID33 and use rule templates to perform Inductive Logic Programming (ILP) BID24, which allows NLPROLOG to learn rules from training data. For this, a user has to define a set of rules with a given structure as input. Then, NLPROLOG randomly initializes the predicates of these rules. For instance, to induce a rule that can model transitivity, one would add a rule template of the form p1(X, Z) ⇐ p2(X, Y) ∧ p3(Y, Z), and NLPROLOG would instantiate multiple rules with randomly initialized embeddings for p1, p2, and p3. The exact number and structure of the rule templates is treated as a hyper-parameter.
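Returning to the ES update described earlier in this section, a single gradient estimate can be sketched as follows (our own illustration in NumPy; names are illustrative, constant factors in the scaling are effectively absorbed by the Adam step size, and each reward evaluation, which runs the external prover, can be executed in parallel).

```python
import numpy as np

def es_gradient_estimate(theta, reward_fn, n_perturbations=50, sigma=0.1, rng=None):
    """One Evolution Strategies ascent-direction estimate (sketch).

    reward_fn(theta) runs the non-differentiable prover and returns the reward J(theta).
    """
    rng = rng or np.random.default_rng()
    epsilons = rng.normal(scale=sigma, size=(n_perturbations,) + theta.shape)
    rewards = np.array([reward_fn(theta + eps) for eps in epsilons])
    reward_std = rewards.std() + 1e-8            # divide by the std of rewards, as above
    grad = (rewards[:, None] * epsilons.reshape(n_perturbations, -1)).sum(axis=0)
    return grad.reshape(theta.shape) / (n_perturbations * reward_std)
```

The returned direction would then be passed to an Adam optimizer to produce the actual update of the embedding parameters.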
If not explicitly stated otherwise, the experiments were performed with the same set of rule templates containing two rules for each of the forms DISPLAYFORM0 In preliminary experiments, we observed that unification with such randomly initialized embeddings always leads to a stark drop of the proof success score. This implies in turn that proofs involving rule templates rarely yield the highest score S max and thus, might have no impact on the value of the reward function. Due to that, the expected gradient estimate for the rule embeddings is zero, and they will remain close to their initialization. The main reason for this behavior is that the monotonicity of the aggregation functions implies that each sub-goal created by an application of a rule will only decrease the success score of the proof. Thus, all other things being equal, rules with a small number of body atoms (and facts in particular) will be preferred over more complex rules with a higher number of body atoms. Note that this problem is particularly severe in our setting where rules are initialized randomly while the remaining predicate symbols are instantiated using pretrained sentence embeddings. We propose a Gradual Rule Learning (GRL) algorithm which counteracts this effect during training. GRL segments the training process into B max + 1 phases, where B max is the maximum body size of all available rules. In the k-th phase of GRL, only proofs which employ at least one rule with a body size of B max + 1 − k are considered to estimate p(t|F ; θ). Thus, it is guaranteed that in each training step in phase k at least one rule with a body size of B max + 1 − k receives a training signal. We evaluate our method on two different QA data sets: BABI-1K-STATIC and different subsets of WIKIHOP BID44. The used hyperparameter configurations can be found in Section B. We evaluate on different subsets of WIKIHOP BID44, each containing a single query predicate. We consider the predicates publisher, developer, and country, because their semantics ensure that the annotated answer is unique and they contain a relatively high amount of questions that are annotated as requiring true multi-hop reasoning. For publisher, this yields 509 training and 54 validation questions, for developer 267 and 29, and for country 742 and 194. As the test set of WIKIHOP is not publicly available, and splitting the small train set would lead to a far too small validation set, we report scores for the validation set and refrain from hyperparameter optimization and early stopping. Each data point consists of a query p(q, X) where q is some query entity, X is the entity that has to be predicted, C is a list of candidates entities, a ∈ C is an answer entity and p ∈ {publisher, developer, country} is the query predicate. In addition, every query is accompanied by a set of support documents which can be used to decide which candidate is the correct answer. To transform the support documents to natural language triples, we use the Open IE system MINIE BID16. We use the publicly available version of MINIE 3 in the dictionary mode, and use a list of all WIKIHOP candidate entities as our dictionary of multi-token entities. Following BID44, we use the two neural QA models BIDAF BID37 and FASTQA BID42 as baselines for the selected predicates of WIKIHOP. We employ the implementation provided by the QA framework JACK 4 BID43 with the same hyperparameters as used by BID44 and train a separate model for each predicate. 
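Before continuing with the experimental setup, the phase-wise proof filtering of GRL described above can be illustrated with a small sketch (our own; treating a query without admissible proofs as scoring zero is an assumption of ours).

```python
def grl_score(proofs, phase, max_body_size):
    """Proof aggregation under Gradual Rule Learning (sketch).

    proofs: list of (success_score, body_sizes_of_rules_used) pairs for one query.
    In phase k, only proofs employing at least one rule of body size
    max_body_size + 1 - k are considered; p(c|R) is the maximum admissible score.
    """
    required = max_body_size + 1 - phase
    admissible = [score for score, body_sizes in proofs if required in body_sizes]
    return max(admissible) if admissible else 0.0
```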
In order to compensate for the fact that both models are extractive QA models which cannot make use of the candidate entities, we additionally evaluate modified versions which transform both the predicted answer and all candidates to vectors using the wiki-unigrams model 5 of SENT2VEC BID28. Consequently, we return the candidate entity which has the highest cosine similarity to the predicted entity. We utilize a subset of to study the behavior of NLPROLOG in a very controlled environment. Note however, that the experiments on WIKIHOP are much more significant, as they involve natural linguistic variability. BABI-1K-STATIC consists of the tasks QA4,QA15,QA17, and QA18 from the BABI suite, each containing 1,000 training and 1,000 testing questions. These tasks were selected because, unlike the other tasks of BABI, they do not require reasoning about a dynamically changing world state, which is not supported by the current implementation of NLPROLOG. We automatically transform all statements and queries of the respective tasks to triples and use the ing KB as input to NLPROLOG. We train and evaluate on each problem individually. The tasks QA4 and QA15 require entities as an output, thus we consider every entity that occurs at least once in any problem of the task as a candidate for all problems. Tasks QA17 and QA18 are binary classification tasks, and thus we determine the optimal threshold on the training set, after the training of NLPROLOG has finished. We refrain from systematically comparing on the individual BABI tasks to competing methods like BID38; BID29; BID8; BID18; BID41, since our non-negligible preprocessing and evaluation on only four out of 20 tasks does not allow us to match the relevant evaluation protocols. We therefore utilize BABI-1K-STATIC only for ablation experiments, but note that NLPROLOG achieves similar or better accuracy values than the mentioned methods in all instances we studied, except on QA4. All questions of WIKIHOP and some of BABI-1K-STATIC include a set of answer candidates C. For those cases, we modify our reward function to leverage this information, taking inspiration from Bayesian Personalized Ranking BID31: DISPLAYFORM0 where a ∈ C is the true answer. We observed that this reward function does not work well with the minimum aggregation function. Therefore, we employ this modified reward only when using the product aggregation and utilize the reward described in Section 4.2 with the minimum aggregation. The for the selected predicates of WIKIHOP can be found in Table 1. While there is no single best performing model, NLPROLOG is outperformed by at least one neural QA model on every predicate. For country, this only holds true when considering the versions of the neural models that have been augmented to consider candidate entities. For all three predicates, only a single transitive rule is utilized across all validation questions. Since we observe a more diverse set of induced rules on BABI-1K, we partly attributed this lack of diverse rules to the multi-hop characteristic of the WIKIHOP data. It seems that NLPROLOG struggles to find meaningful rules for the predicates developer and publisher, which leads to very few proofs involving rules on the development set: 1 out of 54 for publisher and 2 out of 29 for developer, compared with 159 out of 194 for country. We partially attribute this to the fact, that the semantics of country suggest a straightforward rule (transitivity of location), which is not true for developer or publisher. 
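As an aside on the candidate-aware variants of the baselines described at the beginning of this section, the re-ranking step can be sketched as follows (our own illustration; embed stands for a SENT2VEC-style sentence encoder returning NumPy vectors).

```python
import numpy as np

def rerank_with_candidates(predicted_answer, candidates, embed):
    """Map an extractive model's predicted string onto the closest answer candidate."""
    pred_vec = embed(predicted_answer)

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    return max(candidates, key=lambda c: cosine(pred_vec, embed(c)))
```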
Additionally, the annotations regarding the neccessity of multi-hop reasoning provided for the validation set BID44 suggest that publisher and developer contain significantly fewer training examples involving multi-hop reasoning. Exemplary proofs generated by NLPROLOG on the predicates developer and country can be found in Fig. 3. As we are especially interested in assessing the capability for multi-hop reasoning, we additionally evaluate on a subset of problems which have been unanimously labelled as requiring multi-hop reasoning. On this subset of the development data, which is denoted as Countryhop (mh) in Table 1, NLPROLOG outperforms all other single models. If the proof of NLPROLOG producing the prediction does not employ any rules, the prediction is essentially the of performing a simple nearest neighbor search among the embeddings of all fact triples. We hypothesize that the neural QA models FASTQA and BIDAF are better suited for finding the most predictive single support statement, which motivates an ensemble of a neural QA model and NLPROLOG. We built a system which predicts the output of NLPROLOG when it used at Table 1: Accuracy scores in percent for different predicates on the development set of WIKIHOP. country (mh) only considers the problems that have been unanimously judged to require multi-hop reasoning. The upper part lists single models and the bottom part ensembles.least one rule and the output of the neural QA model otherwise. This allows to employ the multihop reasoning of NLPROLOG when possible and to utilize the pattern matching capabilities of the neural QA models in the remaining cases. The for these ensembles are given in the bottom part of Table 1. In all instances, ensembling a neural QA model with NLPROLOG improved upon all single models, indicating that they complement each other well. We analyze reasons for the success of this ensembling strategy in Section 5.6. We conducted an extensive error analysis, in which we manually classified all errors of NLPROLOG on the selected WIKIHOP predicates into predefined categories. The are depicted in FIG2.For the developer predicate, in the majority of cases, errors were caused by OPENIE during the fact extraction step: in all but one case, OPENIE did not produce the necessary facts, or a necessary facts was not stated in the support text, or there were multiple correct candidates and NLPROLOG selected the wrong one. As a consequence, for both the publisher and developer predicates, the majority of queries would not answerable, even when the necessary rules were correctly induced. The predictive accuracy was significantly higher for the country predicate, where errors were mostly due to entitities not having SENT2VEC embeddings and a few missing rules. FIG2 indicates that FASTQA can produce the correct answer, even if crucial information is missing from the supporting documents. To analyze this further, we evaluated FASTQA and NLPROLOG on a test set of the country predicate in which all documents mentioning the query entity were removed. Remarkably, the accuracy of FASTQA increased by approximately 1%, while the accuracy of NLPROLOG decreased by approx. 
11%.Furthermore, we evaluated both FASTQA and NLPROLOG on the hard subset of country as defined by BID40: on these 62 problems which cannot be solved with a simple heuristic, NLPROLOG achieved an accuracy of 51.61%, as opposed to 46.77% by FASTQA.We conjecture that -besides NLPROLOG's multi-hop reasoning capability -this is one reason why the neural models and NLPROLOG complement each other nicely: neural models can compensate for missing information, while NLPROLOG is less susceptible for spurious correlations between the query and supporting texts. The complementary nature of both approaches is further supported by the error analysis, described in Fig. 4. We perform experiments on BABI-1K-STATIC to investigate the effect of the GRL training procedure. Specifically, we perform experiments on BABI-1K-STATIC with only the last phase of GRL (i.e. training without GRL), with the last and the penultimate phase, and with all three phases, corresponding to full GRL as we limit the rule complexity to two body atoms in rule templates. To maintain comparability between runs, we keep the number of SGD steps constant across all experiments. refers to a crucial information missing due to an error of the OpenIE tool, A(mbiguous) means that multiple candidates were correct and NLProlog chose one which was not labeled as the correct one, while I(nfo) refers to problems that are not answerable with the provided support texts. R(ule) means that a required rule was not induced, and E(mbedding) implies that the answer was correctly deduced but received a lower score than an erroneous deduction of another candidate. Additionally, we experiment with manually defined rules, which we deem sufficient for solving each of the four tasks. For these, we report before and after training, as well as for a run without any rule templates. The accuracy scores for all experiments on BABI are provided in TAB2.To assess the impact of the choice of rule templates, we evaluate NLPROLOG on bAbI-1k-static with a different set of rule templates containing two rules of the form p 1 (X, Y) ⇐ p 2 (X, Y), four with the form p 1 (X, Y) ⇐ p 2 (Y, X) and another four for DISPLAYFORM0 Clearly, the full GRL is necessary to obtain acceptable in any of the problems. Interestingly, phase 1 of GRL does not contribute anything for QA4, which is perfectly solvable using only rules of body size 1. On the other hand, QA15 and QA18 both require a rule of body size 2, which makes phase 1 strongly improve the . Only for QA17 the are inconclusive. Nevertheless, this indicates that GRL works as intended, with the earlier phases encouraging the induction of rules with a higher number of conjuncts in the body. The using manually defined rules suggest that even when sufficient rules are provided, training with ES is helpful nevertheless. Interestingly, the model using no rules is able to solve over 90% of the problems in QA18, indicating that this problem is not well suited for evaluating reasoning capabilities of QA models. Using ten instead of six templates leads to worse performance on all BABI problems but QA15, which is solved perfectly with a much faster convergence rate. This indicates that the choice of rule templates might be an important hyperparameter which should be optimized for a given problem. We have developed NLPROLOG, a system that is able to perform rule-based reasoning on natural language input, and can learn domain-specific natural language rules from training data. To this end, Figure 3: Example proof trees generated by NLPROLOG. 
Each of the two trees shows an application of a transitive rule, the first for the predicate developer and the second for the predicate country. The rule templates are displayed with the most similar predicate. Note the noise introduced by the Open IE process, e.g. QUANT_0_1 and that entities and predicates do not need to match exactly.we have proposed to combine a symbolic prover with pretrained sentence embeddings and to train the ing system with Evolution Strategies. We have evaluated NLPROLOG on two different QA tasks, showing that it can learn domain-specific rules and produce predictions which complement those of the two strong baselines BIDAF and FASTQA. This allows to build an ensemble of a baseline and NLPROLOG which outperforms all single models. While we have focused on a subset of First Order Logic in this work, the expressiveness of NL-PROLOG could be extended by incorporating a different symbolic prover. For instance, a prover for temporal logic BID27 ) would allow to model temporal dynamics in natural language and enable us to evaluate NLPROLOG on the full set of BABI tasks. We are also interested in incorporating future improvements of symbolic provers, Open IE systems and pretrained sentence representations to further enhance the performance of NLPROLOG. To study the performance of the proposed method without the noise introduced by the Open IE step, it would be useful to evaluate it on tasks like knowledge graph reasoning. Additionally, it would be interesting to study the behavior of NLPROLOG in the presence of multiple WIKIHOP query predicates. else if x is f (x 1, . . ., x n), y is f (y 1, . . ., y n), and f ∼ f ≥ λ then S:= S ∧ f ∼ f return unify(x 1 :: . . . :: x n, y 1 :: . . . :: y n, θ, S) end else if x is p(x 1, . . ., x n), y is p (y 1, . . ., y n), and p ∼ p ≥ λ then S:= S ∧ f ∼ f return unify(x 1 :: . . . :: x n, y 1 :: . . . :: y n, θ, S) end else if x is x 1::...:: x n and y is y 1::...:: y n then (θ, S):= unify(x 1, y 1, θ, S) return unify(x 2 :: . . . :: x n, y 2 :: . . . :: y n, θ, S) end else if x is empty list and y is empty list then return (θ, S) else return (failure, 0) fun unify_var (v, o, θ, S) if {v/val} ∈ θ then return unify(val, o, θ, S) else if {o/val} ∈ θ then return unify(var, val, θ, S) else return ({v/o} + θ, S) Algorithm 1: The weak unification algorithm in Spyrolog without occurs check A.2 RUNTIME OF PROOF SEARCHThe worst case complexity vanilla logic programming is exponential in the depth of the proof BID34. However, in our case this is a particular problem because weak unification requires the prover to attempt unification between all entity/predicate symbols. To keep things tractable, NLPROLOG only attempts to unify symbols with a similarity greater than some user-defined threshold λ. Furthermore, in the search step for one statement q, for the rest of the search, λ is set to λ:= max(λ, S) whenever a proof for q with success score S is found. Due to the monotonicity of the employed aggregation functions, this allows to prune the search tree without losing the guarantee to find the proof yielding the maximum success score. We found this optimization to be crucial to make the proof search scale for the studied wikihop predicates. On BABI-1K we optimize the embeddings of predicate symbols of rules and query triples, as well as of entities. WIKIHOP has a large number of unique entity symbols and thus, optimizing their embeddings is prohibitive. Thus, we only train the predicate symbols of rules and query triples on this data set. 
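A heavily simplified sketch of weak unification for flat triples is given below; the actual Spyrolog prover additionally handles nested terms, substitution lookups, backtracking, and the dynamic threshold update λ := max(λ, S). The symbol embeddings here are random and purely illustrative.

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def weak_unify_triples(goal, fact, emb, lam=0.3, agg=min):
    """Weakly unify a goal triple (pred, arg1, arg2) with a fact triple.
    Constant symbols unify when their embedding similarity is >= lam and the
    proof score aggregates the similarities; variables ('?X') simply bind."""
    theta, scores = {}, []
    for g, f in zip(goal, fact):
        if g.startswith("?"):
            theta[g] = f          # bind the variable (no occurs check)
            continue
        s = cos(emb[g], emb[f])
        if s < lam:
            return None, 0.0      # unification fails below the threshold
        scores.append(s)
    return theta, agg(scores) if scores else 1.0

# toy usage with random symbol embeddings; the threshold is disabled here
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8)
       for w in ("is located in", "lies in", "Paris", "France")}
print(weak_unify_triples(("is located in", "Paris", "?X"),
                         ("lies in", "Paris", "France"), emb, lam=-1.0))
```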
The embeddings for entities and predicate symbols of fact and query triples are initialized using the WIKI-UNIGRAMS model of SENT2VEC, while the embeddings of rule predicates are intialized by uniformly sampling from the interval [− 1 √ 600 DISPLAYFORM0]. All experiments were performed with the same set of rule templates containing two rules for each of the forms p(X, Y) ⇐ q(X, Y), p(X, Y) ⇐ q(Y, X) and p(X, Z) ⇐ q(X, Y) ∧ r(Y, Z) and set the similarity threshold λ to 0.3. At each optimization step, we evaluate 100 perturbations sampled from N (0, 0.04) on a mini-batch of 16 training problems and use all of the directions in the generation of the next weight vector. If not stated otherwise, we use GRL with three phases training for 500 mini-batches in each phase. For the predicates publisher and developer, we used 1,000 mini-batches in the final phase of GRL. To further encourage rule usage, we use the minimum aggregation function in all but the last phases of GRL, in which we switch to product. Figure 4: Venn diagram of the error sets per predicate. | We introduce NLProlog, a system that performs rule-based reasoning on natural language by leveraging pretrained sentence embeddings and fine-tuning with Evolution Strategies, and apply it to two multi-hop Question Answering tasks. | 1,023 | scitldr |
Training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account, we could get the best of both worlds. To this end, we introduce “fidelity-weighted learning” (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network, trained on the task we care about on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher, who has access to limited samples with high-quality labels. "All samples are equal, but some samples are more equal than others." -Inspired by George Orwell quote, The success of deep neural networks to date depends strongly on the availability of labeled data and usually it is much easier and cheaper to obtain small quantities of high-quality labeled data and large quantities of unlabeled data. For a large class of tasks, it is also easy to define one or more so-called "weak annotators" BID10, additional (albeit noisy) sources of weak supervision based on heuristics or weaker, biased classifiers trained on e.g. non-expert crowd-sourced data or data from different domains that are related. While easy and cheap to generate, it is not immediately clear if and how these additional weakly-labeled data can be used to train a stronger classifier for the task we care about. Assuming we can obtain a large set of weakly-labeled data in addition to a much smaller training set of "strong" labels, the simplest approach is to expand the training set simply by including the weakly-supervised samples (all samples are equal). Alternatively, one may pretrain on the weak data and then fine-tune on strong data, which is one of the common practices in semi-supervised learning. We argue that treating weakly-labeled samples uniformly (i.e. each weak sample contributes equally to the final classifier) ignores potentially valuable information of the label quality. Instead, we introduce Fidelity-Weighted Learning (FWL), a Bayesian semi-supervised approach that leverages a small amount of data with true labels to generate a larger training set with confidence-weighted weakly-labeled samples, which can then be used to modulate the fine-tuning process based on the fidelity (or quality) of each weak sample. By directly modeling the inaccuracies introduced by the weak annotator in this way, we can control the extent to which we make use of this additional source of weak supervision: more for confidently-labeled weak samples close to the true observed data, and less for uncertain samples further away from the observed data. In this section, we describe FWL. We assume we are given a large set of unlabeled data samples, a heuristic labeling function called the weak annotator, and a small set of high-quality samples labeled by experts, called the strong dataset, consisting of tuples of training samples x i and their true labels y i, i.e. D s = {(x i,y i)}. We consider the latter to be observations from the true target function that we are trying to learn. We use the weak annotator to generate labels for the unlabeled samples. 
Generated labels are noisy due to the limited accuracy of the weak annotator. This gives us the weak dataset consisting of tuples of training samples x i and their weak labelsỹ i, i.e. D w = {(x i,ỹ i)}. Note that we can generate a large amount of weak training data D w at almost no cost using the weak annotator. In contrast, we have only a limited amount of observations from the true function, i.e. |D s | |D w |. Step 1: Pre-train student on weak data, Step 2: Fit teacher to observations from the true function, and Step 3: Fine-tune student on labels generated by teacher, taking the confidence into account. Red dotted borders and blue solid borders depict components with trainable and non-trainable parameters, respectively. Our proposed setup comprises a neural network called the student and a Bayesian function approximator called the teacher. The training process consists of three phases which we summarize in FIG0.Step 1 Pre-train the student on D w using weak labels generated by the weak annotator. The main goal of this step is to learn a task dependent representation of the data as well as pretraining the student. The student function is a neural network consisting of two parts. The first part ψ learns the data representation and the second part φ performs the prediction task (e.g. classification). Therefore the overall function isŷ = φ(ψ(x i)). The student is trained on all samples of the weak dataset D w = {(x i,ỹ i)}. For brevity, in the following, we will refer to both data sample x i and its representation ψ(x i) by x i when it is obvious from the context. Step 2 Train the teacher on the strong data (ψ(x j), y j ) ∈ D s represented in terms of the student representation ψ and then use the teacher to generate a soft dataset D sw consisting of sample,predicted label, confidence for all data samples. We use a Gaussian process as the teacher to capture the label uncertainty in terms of the student representation, estimated w.r.t the strong data. A prior mean and covariance function is chosen for GP. The learned embedding function ψ(·) in Step 1 is then used to map the data samples to dense vectors as input to the GP. We use the learned representation by the student in the previous step to compensate lack of data in D s and the teacher can enjoy the learned knowledge from the large quantity of the weakly annotated data. This way, we also let the teacher see the data through the lens of the student. The GP is trained on the samples from D s to learn the posterior mean m post (used to generate soft labels) and posterior co-variance K post (., .) (which represents label uncertainty) BID17. We then create the soft dataset D sw = {(x t,ȳ t)} using the posterior GP, input samples x t from D w ∪ D s, and predicted labelsȳ t with their associated uncertainties as computed T (x t) = g(m post (x t)) and Σ(x t) = h(K post (x t,x t)). The generated labels are called soft labels. Therefore, we refer to D sw as a soft dataset. g transforms the output of GP to the suitable output space. For example in classification tasks, g would be the softmax function to produce probabilities that sum up to one. For multidimensional-output tasks where a vector of variances is provided by the GP, the vector K post (x t,x t) is passed through an aggregating function h to generate a scalar value for the uncertainty of each sample. 
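The teacher of Step 2 can be sketched as follows, with an exact GP standing in for the sparse/clustered variant used in practice; `psi` denotes the representation learned by the student in Step 1, and the toy representation below is only illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_teacher(psi, x_strong, y_strong):
    """Fit a GP on the strong data, represented through the student's
    embedding psi, and return a function producing soft labels and
    per-sample uncertainties Sigma(x) for arbitrary inputs."""
    Z = np.stack([psi(x) for x in x_strong])
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(Z, y_strong)

    def soft_label(xs):
        mean, std = gp.predict(np.stack([psi(x) for x in xs]), return_std=True)
        return mean, std ** 2
    return soft_label

# toy usage with a stand-in two-dimensional "student representation"
psi = lambda x: np.array([x, np.sin(3 * x)])
xs = np.linspace(-1, 1, 10)
teacher = fit_teacher(psi, xs, np.sin(xs))
mean, var = teacher(np.array([0.0, 0.5]))
print(mean.round(2), var.round(4))
```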
Note that we train GP only on the strong dataset D s but then use it to generate soft labelsȳ t = T (x t) and uncertainty Σ(x t) for samples belonging to D sw = D w ∪D s.Step 3 Fine-tune the weights of the student network on the soft dataset, while modulating the magnitude of each parameter update by the corresponding teacher-confidence in its label. The student network of Step 1 is fine-tuned using samples from the soft dataset D sw = {(x t,ȳ t)} wherē y t = T (x t). The corresponding uncertainty Σ(x t) of each sample is mapped to a confidence value, and this is then used to determine the step size for each iteration of the stochastic gradient descent (SGD). So, intuitively, for data points where we have true labels, the uncertainty of the teacher is almost zero, which means we have high confidence and a large step-size for updating the parameters. However, for data points where the teacher is not confident, we down-weight the training steps of the student. This means that at these points, we keep the student function as it was trained on the weak data in Step 1. The weak annotator, i.e. the unsupervised method used for annotating the unlabeled data. Full Supervision Only, i.e. the student trained only on strong labeled data (Ds). Weak Supervision Only, i.e. the or the student trained only on weakly labeled data (Dw).NN W/S + Weak Supervision + Oversampled Strong Supervision, i.e. thestudent trained on samples that are alternately drawn from Dw without replacement, and Ds with replacement. Since |Ds| |Dw|, it oversamples the strong data. Weak Supervision + Fine Tuning, i.e. the student trained on weak dataset Dw and fine-tuned on strong dataset Ds. NN W ω →NN S The student trained on the weak data, but the step-size of each weak sample is weighted by a fixed value 0 ≤ ω ≤ 1, and fine-tuned on strong data. As an approximation for the optimal value for ω, we have used the mean of η2 of our model (below). The student trained on the weakly labeled data and fine-tuned on examples labeled by the teacher without taking the confidence into account. This baseline is similar to BID16.More specifically, we update the parameters of the student by training on D sw using SGD: DISPLAYFORM0 where l(·) is the per-example loss, η t is the total learning rate, N is the size of the soft dataset D sw, w w w is the parameters of the student network, and R is the regularization term. We define the total learning rate as η t = η 1 (t)η 2 (x t), where η 1 (t) is the usual learning rate of our chosen optimization algorithm that anneals over training iterations, and η 2 (x t) is a function of the label uncertainty Σ(x t) that is computed by the teacher for each data point. Multiplying these two terms gives us the total learning rate. In other words, η 2 represents the fidelity (quality) of the current sample, and is used to multiplicatively modulate η 1. Note that the first term does not necessarily depend on each data point, whereas the second term does. We propose η 2 (x t) = exp[−βΣ(x t)] to exponentially decrease the learning rate for data point x t if its corresponding soft labelȳ t is unreliable (far from a true sample). In practice, when using mini-batches, we implement this by multiplying the loss of each example in the batch by its fidelity score and average over these fidelity-weighted losses in the batch when calculating the batch gradient based on that loss. β is a positive scalar hyper-parameter that controls the contribution of weak and strong data to the training procedure. 
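A minimal sketch of the resulting fidelity-weighted mini-batch step is given below, with a linear student standing in for the neural network; the values of η1 and β are illustrative.

```python
import numpy as np

def fidelity_weighted_step(w, X, y_soft, sigma, eta1=0.1, beta=1.0):
    """One fidelity-weighted step for a linear student on a mini-batch: each
    example's squared loss is scaled by eta2 = exp(-beta * Sigma(x)), so
    uncertain teacher labels barely move the parameters."""
    eta2 = np.exp(-beta * sigma)            # per-sample fidelity scores
    resid = X @ w - y_soft
    grad = (X * (eta2 * resid)[:, None]).mean(axis=0)
    return w - eta1 * grad

# toy usage: half of the batch has confident labels, half has uncertain ones
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 5))
y_soft = X @ np.ones(5)
sigma = np.concatenate([np.zeros(8), 5.0 * np.ones(8)])
print(fidelity_weighted_step(np.zeros(5), X, y_soft, sigma).round(3))
```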
A small β in a student which listens more carefully to the teacher and copies its knowledge, while a large β makes the student pay less attention to the teacher, staying with its initial weak knowledge. Hence, β gives a handle to control the bias-variance trade-off. In Appendix A, we apply FWL to a one-dimensional toy problem to illustrate its various steps. In this section, we apply FWL to document ranking task and evaluate its performance compared to the baselines presented in TAB1. Document Ranking is the core information retrieval problem and is challenging as the ranking model needs to learn a representation for long documents and capture the complex notion of relevance between queries and documents. Furthermore, the size of publicly available datasets with query-document relevance judgments is unfortunately quite small (∼ 250 queries). We employ a state-of-the-art pairwise neural ranker architecture as the student BID3 in which the ranking is cast as a regression task. Given each training sample x as a triple of query q, and two documents d + and d −, the goal is to learn a function F: {< q,d +,d − >} → R, which maps each data sample x to a scalar output value y indicating the probability of d + being ranked higher than d − with respect to q. The student follows the architecture proposed in BID3. The first layer of the network, i.e. representation learning layer ψ: {< q,d DISPLAYFORM0 − >} → R m maps each input sample to an mdimensional real-valued vector. In general, besides learning embeddings for words, function ψ learns to compose word embedding based on their global importance in order to generate query/document embeddings. The representation layer is followed by a simple fully-connected feed-forward network with a sigmoidal output unit to predict the probability of ranking d + higher than d −. The general schema of the student is illustrated in FIG1. More details are provided in Appendix C.1.The teacher is implemented by clustered GP algorithm. See Appendix C.2 for more details. The weak annotator is BM25 BID11 ), a well-known unsupervised method for scoring query-document pairs based on statistics of the matched terms. More details are provided in Appendix C.3. Description of the data with weak labels and data with true labels as well as the setup of the document-ranking experiments is presented in Appendix C.4 in more details. Results and Discussions. We conducted k-fold cross-validation on D s (the strong data) and report two standard evaluation metrics for ranking: mean average precision (MAP) of the top-ranked 1,000 documents and normalized discounted cumulative gain calculated for the top 20 retrieved documents (nDCG@20). TAB2 shows the performance on both datasets. As can be seen, FWL provides a significant boost on the performance over all datasets. In the ranking task, the student is designed in particular to be trained on weak annotations BID3, hence training the network only on weak supervision, i.e. NN W performs better than NN S. This can be due to the fact that ranking is a complex task requiring many training samples, while relatively few data with true labels are available. Alternating between strong and weak data during training, i.e. NN S + /W seems to bring little (but statistically significant) improvement. However, we can gain better by the typical fine-tuning strategy, NN W→S. We can gain improvement by fine-tuning the NN W using labels generated by the teacher without considering their confidence score, i.e. FWL \Σ. 
This means we just augmented the fine-tuning process by generating a fine-tuning set using teacher which is better than D s in terms of quantity and D w in terms of quality. This baseline is equivalent to setting β = 0. However, we see a big jump in performance when we use FWL to include the estimated label quality from the teacher, leading to the best overall . Sensitivity of the FWL to the Quality of the Weak Annotator. Our proposed setup in FWL requires defining a socalled "weak annotator" to provide a source of weak supervision for unlabelled data. In this section, we study how the quality of the weak annotator may affect the performance of the FWL on the Robust04 dataset. To do so, besides BM25 BID11 ), we use three other weak annotators: vector space model BID12 with binary term occurrence (BTO) weighting schema and vector space model with TF-IDF weighting schema, which are both weaker than BM25, and BM25+RM3 that uses RM3 as the pseudo-relevance feedback method on top of BM25, leading to better labels. FIG2 illustrates the performance of these four weak annotators in terms of their mean average precision (MAP) on the test data, versus the performance of FWL given the corresponding weak annotator. As it is expected, the performance of FWL depends on the quality of the employed weak annotator. The percentage of improvement of FWL over its corresponding weak annotator on the test data is also presented in FIG2. As can be seen, the better the performance of the weak annotator is, the less the improvement of the FWL would be. Training neural networks using large amounts of weakly annotated data is an attractive approach in scenarios where an adequate amount of data with true labels is not available, a situation which often arises in practice. In this paper, we introduced fidelity-weighted learning (FWL), a new student-teacher framework for semi-supervised learning in the presence of weakly labeled data. We applied FWL to document ranking and empirically verified that FWL speeds up the training process and improves over state-of-the-art semi-supervised alternatives. To better understand FWL, we apply FWL to a one-dimensional toy problem to illustrate the various steps. Let ft(x) = sin(x) be the true function (red dotted line in FIG4) from which a small set of observations Ds = {xj,yj} is provided (red points in FIG4). These observation might be noisy, in the same way that labels obtained from a human labeler could be noisy. A weak annotator function fw(x) = 2sinc(x) (magenta line in FIG4) is provided, as an approximation to ft.The task is to obtain a good estimate of ft given the set Ds of strong observations and the weak annotator function fw. We can easily obtain a large set of observations Dw = {xi,ỹi} from fw with almost no cost (magenta points in FIG4).As the teacher, we use standard Gaussian process regression 2 with this kernel: DISPLAYFORM0 where, kRBF(xi,xj) = exp xi −xj 2 2 2 k White (xi,xj) = constant value, ∀x1 = x2 and 0 otherwise We fit only one GP on all the data points (i.e. no clustering). Also during fine-tuning, we set β = 1. The student is a simple feed-forward network with the depth of 3 layers and width of 128 neurons per layer. We have used tanh as the nonlinearity for the intermediate layers and a linear output layer. As the optimizer, we used Adam and the initial learning rate has been set to 0.001. We randomly sample 100 data points from the weak annotator and 10 data points from the true function. 
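A minimal sketch of this toy setup is given below; the sampling range, the strong-label noise level, and the use of numpy's normalized sinc are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
f_true = np.sin
f_weak = lambda x: 2 * np.sinc(x)            # weak annotator (normalized sinc)

x_weak = rng.uniform(-4, 4, size=100)        # 100 cheap weak samples
y_weak = f_weak(x_weak)                      # used to pretrain the student
x_strong = rng.uniform(-4, 4, size=10)       # 10 noisy "human" observations
y_strong = f_true(x_strong) + 0.05 * rng.normal(size=10)

# GP teacher fit on the strong observations only
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(
    x_strong[:, None], y_strong)
x_all = np.concatenate([x_weak, x_strong])
soft_y, std = gp.predict(x_all[:, None], return_std=True)
fidelity = np.exp(-1.0 * std ** 2)           # beta = 1, as in the text
print(round(float(fidelity.min()), 3), round(float(fidelity.max()), 3))
# the 3-layer tanh student would now be fine-tuned on (x_all, soft_y), with
# each sample's gradient scaled by its fidelity score
```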
We introduce a small amount of noise to the observation of the true function to model the noise in the human labeled data. We consider two experiments:1. A neural network trained on weak data and then fine-tuned on strong data from the true function, which is the most common semi-supervised approach FIG4 ). 2. A teacher-student framework working by the proposed FWL approach. As can be seen in FIG4, FWL by taking into account label confidence, gives a better approximation of the true hidden function. We repeated the above experiment 10 times. The average RMSE with respect to the true function on a set of test points over those 10 experiments for the student, were as follows: Algorithm 1 Clustered Gaussian processes.1: Let N be the sample size, n the sample size of each cluster, K the number of clusters, and ci the center of cluster i. 2: Run K-means with K clusters over all samples with true labels Ds = {xi,yi}.K-means(xi) → c1,c2,...,cK where ci represents the center of cluster Ci containing samples D c i s = {xi,1,xi,2,...xi,n}. 3: Assign each of K clusters a Gaussian process and train them in parallel to approximate the label of each sample. DISPLAYFORM1 where GP c i is trained on D c i s containing samples belonging to the cluster ci. Other elements are defined in Section 2 4: Use trained teacher Tc i to evaluate the soft label and uncertainty for samples from Dsw to compute η2(xt) required for step 3 of FWL. We use T as a wrapper for all teachers {Tc i}. We suggest using several GP = {GPc i} to explore the entire data space more effectively. Even though inducing points and stochastic methods make GPs more scalable we still observed poor performance when the entire dataset was modeled by a single GP. Therefore, the reason for using multiple GPs is mainly empirically inspired by BID13 which is explained in the following: We used the Sparse Gaussian Process implemented in GPflow. The algorithm is scalable in the sense that it is not O(N 3) as original GP is. It introduces inducing points in the data space and defines a variational lower bound for the marginal likelihood. The variational bound can now be optimized by stochastic methods which make the algorithm applicable in large datasets. However, the tightness of the bound depends on the location of inducing points which are found through the optimization process. The pseudo-code of the clustered GP is presented in Algorithm 1. When the main issue is computational resources (when the number of inducing points for each GP is large), we can first choose the number n which is the maximum size of the dataset on which our resources allow to train a GP, then find the number of clusters K = N/n accordingly. The rest of the algorithm remains unchanged. The employed student is proposed in BID3. The first layer of the network models function ψ that learns the representation of the input data samples, i.e. (q,d+,d −), and consists of three components: an embedding function ε: V → R m (where V denotes the vocabulary set and m is the number of embedding dimensions), a weighting function ω: V → R, and a compositionality function: (R m,R) n → R m. More formally, the function ψ is defined as: DISPLAYFORM0 where t q i and t d i denote the i th term in query q respectively document d. The embedding function ε maps each term to a dense m-dimensional real value vector, which is learned during the training phase. The weighting function ω assigns a weight to each term in the vocabulary. 
It has been shown that ω simulates the effect of inverse document frequency (IDF), which is an important feature in information retrieval BID3.The compositionality function projects a set of n embedding-weighting pairs to an m-dimensional representation, independent from the value of n: DISPLAYFORM1 which is in fact the normalized weighted element-wise summation of the terms' embedding vectors. Again, it has been shown that having global term weighting function along with embedding function improves the performance of ranking as it simulates the effect of inverse document frequency (IDF). In our experiments, we initialize the embedding function ε with word2vec embeddings BID6 pre-trained on Google News and the weighting function ω with IDF.The representation layer is followed by a simple fully connected feed-forward network with l hidden layers followed by a softmax which receives the vector representation of the inputs processed by the representation learning layer and outputs a predictionỹ. Each hidden layer z k in this network computes DISPLAYFORM2 where W k and b k denote the weight matrix and the bias term corresponding to the k th hidden layer and α is the non-linearity. These layers follow a sigmoid output. We employ the cross entropy loss: DISPLAYFORM3 where B is a batch of data samples. We use Gaussian Process as the teacher and pass the mean of GP through the same function g that is applied on the output of the student network. h is an aggregation function that takes variance over several dimensions and outputs a single measure of variance. As a reasonable choice, the aggregating function h in our sentiment classification task (three classes) is mean of variances over dimensions. In the teacher, linear combinations of different kernels are used in our experiments. We use sparse variational GP regression 3 with this kernel: DISPLAYFORM0 where, DISPLAYFORM1 +xi.xj k White (xi,xj) = constant value, ∀x1 = x2 and 0 otherwise We empirically found l = 1 satisfying value for the length scale of Matern3/2 kernels. We also set σ0 = 0 to obtain a homogeneous linear kernel. The constant value of K W hite (.,.) determines the level of noise in the labels. This is different from the noise in weak labels. This term explains the fact that even in true labels there might be a trace of noise due to the inaccuracy of human labelers. We set the number of clusters in the clustered GP algorithm for the ranking task to 50. The weak annotator is BM25 BID11 ), a well-known unsupervised retrieval method. This method heuristically scores a given pair of query-document based on the statistics of their matched terms. In the pairwise document ranking setup,ỹi for a given sample xj = (q,d DISPLAYFORM0 is the score obtained from the weak annotator. Collections We use two standard TREC collections for the task of ad-hoc retrieval: The first collection (Robust04) consists of 500k news articles from different news agencies as a homogeneous collection. The second collection (ClueWeb) is ClueWeb09 Category B, a large-scale web collection with over 50 million English documents, which is considered as a heterogeneous collection. Spam documents were filtered out using the Waterloo spam scorer 4 BID1 with the default threshold 70%.Data with true labels We take query sets that contain human-labeled judgments: a set of 250 queries (TREC topics 301-450 and 601-700) for the Robust04 collection and a set of 200 queries (topics for the experiments on the ClueWeb collection. 
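For completeness, a minimal sketch of the clustered GP teacher of Algorithm 1 is given below, with exact GPs standing in for the sparse variational GPs and scalar targets for simplicity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_clustered_teacher(Z_strong, y_strong, n_clusters=3):
    """Algorithm 1 in miniature: split the strong data into clusters with
    K-means and fit one GP teacher per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(Z_strong)
    gps = [GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
           .fit(Z_strong[km.labels_ == c], y_strong[km.labels_ == c])
           for c in range(n_clusters)]

    def teacher(Z):
        cluster = km.predict(Z)
        mean, var = np.empty(len(Z)), np.empty(len(Z))
        for c in range(n_clusters):
            sel = cluster == c
            if sel.any():
                m, s = gps[c].predict(Z[sel], return_std=True)
                mean[sel], var[sel] = m, s ** 2
        return mean, var
    return teacher

# toy usage on random two-dimensional representations
rng = np.random.default_rng(0)
Z = rng.normal(size=(60, 2))
teacher = fit_clustered_teacher(Z, Z[:, 0] + Z[:, 1])
print(teacher(Z[:5])[0].round(2))
```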
For each query, we take all documents judged as relevant plus the same number of documents judged as non-relevant and form pairwise combinations among them. Data with weak labels We create a query set Q using the unique queries appearing in the AOL query logs BID9 . This query set contains web queries initiated by real users in the AOL search engine that were sampled from a three-month period from March 2006 to May 2006. We applied standard pre-processing BID3 a) on the queries: We filtered out a large volume of navigational queries containing URL substrings ("http", "www.", ".com", ".net", ".org", ".edu"). We also removed all non-alphanumeric characters from the queries. For each dataset, we took queries that have at least ten hits in the target corpus using our weak annotator method. Applying all these steps, We collect 6.15 million queries to train on in Robust04 and 6.87 million queries for ClueWeb. To prepare the weakly labeled training set Dw, we take the top 1,000 retrieved documents using BM25 for each query from training query set Q, which in total leads to ∼ |Q|×10 6 training samples. Setup For the evaluation of the whole model, we conducted 3-fold cross-validation. However, for each dataset, we first tuned all the hyper-parameters of the student in the first step on the set with true labels using batched GP bandits with an expected improvement acquisition function BID4 and kept the optimal parameters of the student fixed for all the other experiments. The size and number of hidden layers for the student is selected from {64,128,256,512}. The initial learning rate and the dropout parameter were selected from {10 −3,10 −5} and {0.0,0.2,0.5}, respectively. We considered embedding sizes of {300,500}. The batch size in our experiments was set to 128. We use ReLU BID7 as a non-linear activation function α in student. We use the Adam optimizer for training, and dropout BID14 as a regularization technique. At inference time, for each query, we take the top 2,000 retrieved documents using BM25 as candidate documents and re-rank them using the trained models. We use the Indri 5 implementation of BM25 with default parameters (i.e., k1 = 1.2, b = 0.75, and k3 = 1,000). | We propose Fidelity-weighted Learning, a semi-supervised teacher-student approach for training neural networks using weakly-labeled data. | 1,024 | scitldr |
This paper is focused on investigating and demystifying an intriguing robustness phenomena in over-parameterized neural network training. In particular we provide empirical and theoretical evidence that first order methods such as gradient descent are provably robust to noise/corruption on a constant fraction of the labels despite over-parameterization under a rich dataset model. In particular: i) First, we show that in the first few iterations where the updates are still in the vicinity of the initialization these algorithms only fit to the correct labels essentially ignoring the noisy labels. ii) Secondly, we prove that to start to overfit to the noisy labels these algorithms must stray rather far from from the initial model which can only occur after many more iterations. Together, these show that gradient descent with early stopping is provably robust to label noise and shed light on empirical robustness of deep networks as well as commonly adopted early-stopping heuristics. 1. Introduction This paper focuses on an intriguing phenomena: overparameterized neural networks are surprisingly robust to label noise when first order methods with early stopping is used to train them. To observe this phenomena consider FIG1 where we perform experiments on the MNIST data set. Here, we corrupt a fraction of the labels of the training data by assigning their label uniformly at random. We then fit a four layer model via stochastic gradient descent and plot various performance metrics in Figures 1a and 1b. FIG1 (blue curve) shows that indeed with a sufficiently large number of iterations the neural network does in fact perfectly fit the corrupted training data. However, FIG1 also shows that such a model does not generalize to the test data (yellow curve) and the accuracy with respect to the ground truth labels degrades (orange curve). These plots clearly demonstrate that the model overfits with many iterations. In FIG1 we repeat the same experiment but this time stop the updates after a few iterations (i.e. use early stopping). In this case the train accuracy degrades linearly (blue curve). However, perhaps unexpected, the test accuracy (yellow curve) remains high even with a significant amount of corruption. This suggests that with early stopping the model does not overfit and generalizes to new test data. Even more surprising, the train accuracy (orange curve) with respect to the ground truth labels continues to stay around %100 even when %50 of the labels are corrupted. That is, with early stopping overparameterized neural networks even correct the corrupted labels! These plots collectively demonstrate that overparameterized neural networks when combined with early stopping have unique generalization and robustness capabilities. As we detail further in Section D this phenomena holds (albeit less pronounced) for richer data models and architectures. This paper aims to demonstrate and begin to demystify the surprising robustness of overparameterized neural networks when early stopping is used. We show that gradient descent is indeed provably robust to noise/corruption on a constant fraction of the labels in such overparametrized learning scenarios. In particular, under a fairly expressive dataset model and focusing on one-hidden layer networks, we show that after a few iterations (a.k.a. early stopping), gradient descent finds a model (i) that is within a small neighborhood of the point of initialization and (ii) only fits to the correct labels essentially ignoring the noisy labels. 
We complement these findings by proving that if the network is trained to overfit to the noisy labels, then the solution found by gradient descent must stray rather far from the initial model. Together, these highlight the key features of a solution that generalizes well vs a solution that fits well. We now describe the dataset model used in our theoretical . In this model we assume that the input samples x 1, x 2,..., x n ∈ R d come from K clusters which are located on the unit Euclidian ball in R d. We also assume our data set consists ofK ≤ K classes where each class can be composed of multiple clusters. We consider a deterministic data set with n samples with roughly balanced clusters each In these experiments we use a 4 layer neural network consisting of two convolution layers followed by two fully-connected layers to train a data set of 50,000 samples from MNIST with various amounts of random corruption on the labels. In this architecture the convolutional layers have width 64 and 128 kernels, and the fully-connected layers have 256 and 10 outputs, respectively. Overall, there are 4.8 million trainable parameters. We depict the training accuracy both w.r.t. the corrupted and uncorrupted training labels as well as the (uncorrupted) test accuracy. (a) Shows the performance after 200 epochs of Adadelta where near perfect fitting to the corrupted data is achieved. (b) Shows the performance with early stopping. We observe that with early stopping the trained neural network is robust to label corruption.consisting on the order of n K samples.1 Finally, while we allow for multiple classes, in our theoretical model we assume the labels are scalars and take values in [−1, 1] interval. We formally define our dataset model below and provide an illustration in Figure 2. Definition 1.1 (Clusterable dataset) Consider a data set of size n consisting of input/label pairs 1 This is for ease of exposition rather than a particular challenge arising in the analysis. DISPLAYFORM0 We assume the input data have unit Euclidean norm and originate from K clusters with the th cluster containing n data points. We assume the number of points originating from each cluster is well-balanced in the sense that c low n K ≤ n ≤ c up n K with c low and c up two numerical constants obeying 0 < c low < c up < 1. We use {c} K =1 ⊂ R d to denote the cluster centers which are distinct unit Euclidian norm vectors. We assume the input data points x that belong to the -th cluster obey x − c 2 ≤ ε 0, with ε 0 > 0 denoting the input noise level. We assume the labels y i belong to one ofK ≤ K classes. Specifically, we assume y i ∈ {α 1, α 2, . . ., αK} with {α}K =1 ∈ [−1, 1] denoting the labels associated with each class. We assume all the elements of the same cluster belong to the same class and hence have the same label. However, a class can contain multiple clusters. Finally, we assume the labels are separated in the sense that α r − α s ≥ δ for r ≠ s,(1.1) with δ > 0 denoting the class separation. In the data model above {c} K =1 are the K cluster centers that govern the input distribution. We note that in this model different clusters can be assigned to the same label. Hence, this setup is rich enough to model data which is not linearly separable: e.g. over R 2, we can assign cluster centers and (0, −1) to label 1 and cluster centers and (−1, 0) to label −1. Note that the maximum number of classes are dictated by the separation δ. In particular, we can have at mostK ≤ 2 δ +1 classes. 
We remark that this model is related to the setup of which focuses on providing polynomial guarantees for learning shallow networks. Finally, note that, we need some sort of separation between the cluster centers to distinguish them. While Definition 1.1 doesn't specifies such separation explicitly, Definition 2.1 establishes a notion of separation in terms of how well a neural net can distinguish the cluster centers. Next, we introduce our noisy/corrupted dataset model. DISPLAYFORM1 be an (ε 0, δ) clusterable dataset with α 1, α 2,..., αK denoting theK possible class labels. DISPLAYFORM2 as follows. For each cluster 1 ≤ ≤ K, at most ρn of the labels associated with that cluster (which contains n points) is assigned to another label value chosen from {α}K =1. We shall refer to the initial labels {ỹ i} n i=1 as the ground truth labels. We note that this definition allows for a fraction ρ of corruptions in each cluster. 1 Figure 2. Visualization of the input/label samples and classes according to the clusterable dataset model in Definition 1.1. In the depicted example there are K = 6 clusters,K = 3 classes. In this example the number of data points is n = 30 with each cluster containing 5 data points. The labels associated to classes 1, 2, and 3 are α1 = −1, α2 = 0.1, and α3 = 1, respectively so that δ = 0.9. We note that the placement of points are exaggerated for clarity. In particular, per definition the cluster center and data points all have unit Euclidean norm. Also, there is no explicit requirements that the cluster centers be separated. The depicted separation is for exposition purposes only. Network model: We will study the ability of neural networks to learn this corrupted dataset model. To proceed, let us introduce our neural network model. We consider a network with one hidden layer that maps R d to R. Denoting the number of hidden nodes by k, this network is characterized by an activation function φ, input weight matrix W ∈ R k×d and output weight vector v ∈ R k. In this work, we will fix output v to be a unit vector where half the entries are 1 √ k and other half are −1 √ k to simplify exposition.2 We will only optimize over the weight matrix W which contains most of the network parameters and will be shown to be sufficient for robust learning. We will also assume φ has bounded first and second order derivatives, i.e. φ ′ (z), φ ′′ (z) ≤ Γ for all z. The network's prediction at an input sample x is given by DISPLAYFORM3 where the activation function φ applies entrywise. Given DISPLAYFORM4, we shall train the network via minimizing the empirical risk over the training data via a quadratic loss DISPLAYFORM5 In particular, we will run gradient descent with a constant learning rate η, starting from a random initialization W 0 via the following updates DISPLAYFORM6 2 If the number of hidden units is odd we set one entry of v to zero. Throughout, ⋅ denotes the largest singular value of a given matrix. The notation O(⋅) denotes that a certain identity holds up to a fixed numerical constant. Also, c, c 0, C, C 0 etc. represent numerical constants. Our main shows that overparameterized neural networks, when trained via gradient descent using early stopping are fairly robust to label noise. The ability of neural networks to learn from the training data, even without label corruption, naturally depends on the diversity of the input training data. Indeed, if two input data are nearly the same but have different uncorrupted labels reliable learning is difficult. 
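Before quantifying diversity, a minimal numpy sketch of this student network and the gradient descent updates is given below; tanh stands in for a generic smooth activation, and the width, step size, and toy data are illustrative.

```python
import numpy as np

def train_one_hidden_layer(X, y, k=200, eta=0.5, steps=200, seed=0):
    """Gradient descent on the (mean) quadratic loss for f(x, W) = v^T tanh(Wx),
    with the output weights v fixed at +-1/sqrt(k) and only W trained."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(k, d))
    v = np.concatenate([np.ones(k // 2), -np.ones(k - k // 2)]) / np.sqrt(k)
    for _ in range(steps):
        Z = X @ W.T                         # (n, k) pre-activations
        resid = np.tanh(Z) @ v - y          # residuals f(x_i, W) - y_i
        # dL/dW has entries sum_i resid_i * v_a * tanh'(z_ia) * x_ib / n
        W -= eta * (((resid[:, None] * (1 - np.tanh(Z) ** 2)) * v).T @ X) / n
    return W, v

# toy usage on unit-norm inputs with labels in [-1, 1]
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X[:, 0])
W, v = train_one_hidden_layer(X, y)
print(round(float(np.mean((np.tanh(X @ W.T) @ v - y) ** 2)), 4))
```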
We will quantify this notion of diversity via a notion of condition number related to a covariance matrix involving the activation φ and the cluster centers {c} K =1.Definition 2.1 Define the matrix of cluster centers DISPLAYFORM0 Define the neural net covariance matrix Σ(C) as DISPLAYFORM1 Here ⊙ denotes the elementwise product. Also denote the minimum eigenvalue of Σ(C) by λ(C) and define the condition number associated with the cluster centers C as DISPLAYFORM2 One can view Σ(C) as an empirical kernel matrix associated with the network where the kernel is given by DISPLAYFORM3 Note that Σ(C) is trivially rank deficient if there are two cluster centers that are identical. In this sense, the minimum eigenvalue of Σ(C) will quantify the ability of the neural network to distinguish between distinct cluster centers. Therefore, one can think of κ(C) as a condition number associated with the neural network which characterizes the distinctness/diversity of the cluster centers. The more distinct the cluster centers, the larger λ(C) and smaller the condition number κ(C) is. Indeed, based on in when the cluster centers are maximally diverse e.g. uniformly at random from the unit sphere κ(C) scales like a constant. Throughout we shall assume that λ(C) is strictly positive (and hence κ(C) < ∞). This property is empirically verified to hold in earlier works when φ is a standard activation (e.g. ReLU, softplus). As a concrete example, for ReLU activation, using from one can show if the cluster centers are separated by a distance ν > 0, then λ(C) ≥ ν 100K 2. We note that variations of the λ(C) > 0 assumption based on the data points (i.e. λ(X) > 0 not cluster centers) (5; 7; 8) are utilized to provide convergence guarantees for DNNs. Also see (9; 10) for other publications using related definitions. With a quantitative characterization of distinctiveness/diversity in place we are now ready to state our main . Throughout we use c Γ, C Γ, etc. to denote constants only depending on Γ. We note that this Theorem is slightly simplified by ignoring logarithmic terms and precise dependencies on Γ. See Theorem E.13 for precise statements. DISPLAYFORM4 with κ(C) the neural net cluster condition number pre Definition 2.1. Then as long as 0 ≤c Γ K 2 and ρ ≤ δ 8 with probability at least 1 − 3 K 100, after DISPLAYFORM5 ) iterations, the neural network 3 If k is odd we set one entry to zero ⌊ DISPLAYFORM6 f (⋅, W τ0) found by gradient descent assigns all the input samples x i to the correct ground truth labelsỹ i. That is, arg min DISPLAYFORM7 holds for all 1 ≤ i ≤ n. Furthermore, for all 0 ≤ τ ≤ τ 0, the distance to the initial point obeys DISPLAYFORM8 Theorem 2.2 shows that gradient descent with early stopping has a few intriguing properties: Robustness. The solution found by gradient descent with early stopping degrades gracefully as the label corruption level ρ grows. In particular, as long as ρ ≤ δ 8, the final model is able to correctly classify all samples including the corrupted ones. In our setup, intuitively label gap obeys δ ∼ 1 K, hence, we prove robustness to Total Number of corrupted labels ≲ n K.This is independent of number of clusters and only depends on number of classes. An interesting future direction is to improve this to allow on the order of n corrupted labels. Such a maybe possible by using a multi-output classification neural network. Early stopping time. We show that gradient descent finds a model that is robust to outliers after a few iterations. 
In particular using the maximum allowed step size, the number of iterations is of the order of DISPLAYFORM9 ) which scales with K d up to condition numbers. Modest overparameterization. Our requires modest overparemetrization and apply as soon as the number of parameters exceed the number of classes to the power four (kd ≳ K 4). Interestingly, the amount of overparameterization is essentially independent of the size of the training data n (ignoring logarithmic terms) and conditioning of the data points, only depending on the number of clusters and conditioning of the cluster centers. This can be interpreted as ensuring that the network has enough capacity to fit the cluster centers {c} K =1 and the associated true labels. Distance from initialization. Another feature of Theorem 2.2 is that the network weights do not stray far from the initialization as the distance between the initial model and the final model (at most) grows with the square root of the number of clusters (√ K). This √ K dependence implies that the more clusters there are, the updates travel further away but continue to stay within a certain radius. This dependence is intuitive as the Rademacher complexity of the function space is dictated by the distance to initialization and should grow with the square-root of the number of input clusters to ensure the model is expressive enough to learn the dataset. We would like to note that in the limit of 0 → 0 where the input data set is perfectly clustered one can improve the amount of overparamterization. Indeed, the above is obtained via a perturbation argument from this more refined stated below. Theorem A.1 (Training with perfectly clustered data) Consier the setting and assumptions of Theorem E.14 with 0 = 0. Starting from an initial weight matrix W 0 selected at random with i.i.d. N entries we run gradient descent updates of the form DISPLAYFORM0 Furthermore, assume the number of parameters obey DISPLAYFORM1 with κ(C) the neural net cluster condition number per Definition 2.1. Then, with probability at least 1 − 2 K 100 over randomly initialized W 0 DISPLAYFORM2 ∼ N, the iterates W τ obey the following properties.• The distance to initial point W 0 is upper bounded by DISPLAYFORM3 • After τ ≥ τ 0 ∶= c DISPLAYFORM4 the entrywise predictions of the learned network with respect to the ground truth labels DISPLAYFORM5 for all 1 ≤ i ≤ n. Furthermore, if the noise level ρ obeys ρ ≤ δ 8 the network predicts the correct label for all samples i.e.arg min DISPLAYFORM6 This shows that in the limit 0 → 0 where the data points are perfectly clustered, the required amount of overparameterization can be reduced from kd ≳ K 4 to kd ≳ K 2. In this sense this can be thought of a nontrivial analogue of where the number of data points are replaced with the number of clusters and the condition number of the data points is replaced with a cluster condition number. This can be interpreted as ensuring that the network has enough capacity to fit the cluster centers {c} K =1 and the associated true labels. Interestingly, the robustness benefits continue to hold in this case. However, in this perfectly clustered scenario there is no need for early stopping and a robust network is trained as soon as the number of iterations are sufficiently large. Infact, in this case given the clustered nature of the input data the network never overfits to the corrupted data even after many iterations. B. 
To (over)fit to corrupted labels requires straying far from initializationIn this section we wish to provide further insight into why early stopping enables robustness and generalizable solutions. Our main insight is that while a neural network maybe expressive enough to fit a corrupted dataset, the model has to travel a longer distance from the point of initialization as a function of the distance from the cluster centers ε 0 and the amount of corruption. We formalize this idea as follows. Suppose 1. two input points are close to each other (e.g. they are from the same cluster), 2. but their labels are different, hence the network has to map them to distant outputs. Then, the network has to be large enough so that it can amplify the small input difference to create a large output difference. Our first formalizes this for a randomly initialized network. Our random initialization picks W with i.i.d. standard normal entries which ensures that the network is isometric i.e. given input DISPLAYFORM7 ).Theorem B.1 Let x 1, x 2 ∈ R d be two vectors with unit Euclidean norm obeying DISPLAYFORM8 where v is fixed, W ∈ R k×d, and k ≥ cd with c > 0 a fixed constant. Assume φ ′, φ ′′ ≤ Γ. Let y 1 and y 2 be two scalars satisfying DISPLAYFORM9 ∼ N. Then, with probability at least 1−2e DISPLAYFORM10 holds, we have DISPLAYFORM11 In words, this shows that in order to fit to a data set with a single corrupted label, a randomly initialized network has to traverse a distance of at least δ ε 0. The next lemma clarifies the role of the corruption amount s and shows that more label corruption within a fixed class requires a model with a larger norm in order to fit the labels. For this we consider a randomized model with ε 2 0 input noise variance. DISPLAYFORM12 with labels y i = y and {x i} s i=1 with labelsỹ i =ỹ and assume these two labels are δ separated i.e. y −ỹ ≥ δ. Also suppose s ≤ d and φ DISPLAYFORM13 with probability at least 1 − e −d 2.Unlike Theorem E.15 this lower bounds the network norm in lieu of the distance to the initialization W 0. However, using the triangular inequality we can in turn get a guarantee on the distance from initialization W 0 via triangle inequality as long as DISPLAYFORM14 The above Theorem implies that the model has to traverse a distance of at least DISPLAYFORM15 to perfectly fit corrupted labels. In contrast, we note that the of the upper bound in Theorem 2.2 show that to be able to fit to the uncorrupted true labels the distance to initialization grows at most by τ ε 0 after τ iterates. This demonstrates that there is a gap in the required distance to initialization for fitting enough to generalize and overfitting. To sum up, our highlight that, one can find a network with good generalization capabilities and robustness to label corruption within a small neighborhood of the initialization and that the size of this neighborhood is independent of the corruption. However, to fit to the corrupted labels, one has to travel much more, increasing the search space and likely decreasing generalization ability. Thus, early stopping can enable robustness without overfitting by restricting the distance to the initialization. In this section, we outline our approach to proving robustness of overparameterized neural networks. Towards this goal, we consider a general formulation where we aim to fit a general nonlinear model of the form x ↦ f (θ, x) with θ ∈ R p denoting the parameters of the model. For instance in the case of neural networks θ represents its weights. 
Given a data set of n input/label pairs {( DISPLAYFORM0 fit to this data by minimizing a nonlinear least-squares loss of the form DISPLAYFORM1 which can also be written in the more compact form DISPLAYFORM2 To solve this problem we run gradient descent iterations with a constant learning rate η starting from an initial point θ 0 . These iterations take the form DISPLAYFORM3 Here, J (θ) is the n × p Jacobian matrix associated with the nonlinear mapping f defined via DISPLAYFORM4 Our approach is based on the hypothesis that the nonlinear model has a Jacobian matrix with bimodal spectrum where few singular values are large and remaining singular values are small. This assumption is inspired by the fact that realistic datasets are clusterable in a proper, possibly nonlinear, representation space. Indeed, one may argue that one reason for using neural networks is to automate the learning of such a representation (essentially the input to the softmax layer). We formalize the notion of bimodal spectrum below. Assumption 1 (Bimodal Jacobian) Let β ≥ α ≥ > 0 be scalars. Let f ∶ R p → R n be a nonlinear mapping and consider a set D ⊂ R p containing the initial point θ 0 (i.e. θ 0 ∈ D). Let S + ⊂ R n be a subspace and S − be its complement. We say the mapping f has a Bimodal Jacobian with respect to the complementary subpspaces S + and S − as long as the following two assumptions hold for all θ ∈ D.• Spectrum over S +: For all v ∈ S + with unit Euclidian norm we have DISPLAYFORM5 • Spectrum over S −: For all v ∈ S − with unit Euclidian norm we have J DISPLAYFORM6 We will refer to S + as the signal subspace and S − as the noise subspace. When << α the Jacobian is approximately low-rank. An extreme special case of this assumption is where = 0 so that the Jacobian matrix is exactly low-rank. We formalize this assumption below for later reference. 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 Assumption 2 (Low-rank Jacobian) Let β ≥ α > 0 be scalars. Consider a set D ⊂ R p containing the initial point θ 0 (i.e. θ 0 ∈ D). Let S + ⊂ R n be a subspace and S − be its complement. For all θ ∈ D, v ∈ S + and w ∈ S − with unit Euclidian norm, we have that DISPLAYFORM7 Our dataset model in Definition 1.2 naturally has a lowrank Jacobian when 0 = 0 and each input example is equal to one of the K cluster centers {c} K =1. In this case, the Jacobian will be at most rank K since each row will be in the span of DISPLAYFORM8. The subspace S + is dictated by the membership of each cluster as follows: Let Λ ⊂ {1, . . ., n} be the set of coordinates i such that x i = c. Then, subspace is characterized by DISPLAYFORM9 When 0 > 0 and the data points of each cluster are not the same as the cluster center we have the bimodal Jacobian structure of Assumption 1 where over S − the spectral norm is small but nonzero. In Section D, we verify that the Jacobian matrix of real datasets indeed have a bimodal structure i.e. there are few large singular values and the remaining singular values are small which further motivate Assumption 2. This is inline with earlier papers which observed that Hessian matrices of deep networks have bimodal spectrum (approximately lowrank) and is related to various demonstrating that there are flat directions in the loss landscape. 
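The bimodal-Jacobian hypothesis is easy to probe numerically. The sketch below (an illustration, not the experiment of Section D) forms the Jacobian of f(W, x) = v^T φ(Wx) with respect to W — row i is vec((v ⊙ φ′(Wx_i)) x_i^T) — for inputs concentrated around K cluster centers, and reports how many singular values exceed 0.1 times the largest one. The activation and constants are assumed for the sketch.

```python
# Numerical probe of the bimodal-Jacobian hypothesis (a sketch, not Section D's
# experiment). Row i of the Jacobian of f(W, x) = v^T phi(W x) with respect to W
# is vec((v * phi'(W x_i)) x_i^T); with inputs clustered around K centers, only
# on the order of K singular values should be large.
import numpy as np

rng = np.random.default_rng(1)
d, k, n, K, eps0 = 20, 50, 400, 2, 0.05

centers = rng.normal(size=(K, d))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
assign = rng.integers(0, K, size=n)
X = centers[assign] + eps0 * rng.normal(size=(n, d)) / np.sqrt(d)

W = rng.normal(size=(k, d))
v = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
dphi = lambda z: 1.0 - np.tanh(z) ** 2

A = v * dphi(X @ W.T)                                   # (n, k): v ⊙ phi'(W x_i)
J = (A[:, :, None] * X[:, None, :]).reshape(n, k * d)   # per-sample Jacobian rows

s = np.linalg.svd(J, compute_uv=False)
print("top singular values:", np.round(s[:6], 3))
print("number above 0.1 * s_max:", int(np.sum(s > 0.1 * s[0])), "out of", s.size)
```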
Define the n-dimensional residual vector r where DISPLAYFORM0 A key idea in our approach is that we argue that in the absence of any corruption r(θ) approximately lies on the subspace S + and if the labels are corrupted by a vector e, then e approximately lies on the complement space. Before we state our general we need to discuss another assumption and definition. The Jacobian mapping J (θ) associated to a nonlinear mapping DISPLAYFORM0 Additionally, to connect our to the number of corrupted labels, we introduce the notion of subspace diffusedness defined below. 4 Note that, if DISPLAYFORM1 is continuous, the smoothness condition holds over any compact domain (albeit for a possibly large L). DISPLAYFORM2 The following theorem is our meta on the robustness of gradient descent to sparse corruptions on the labels when the Jacobian mapping is exactly low-rank. Theorem E.14 for the perfectly clustered data (0 = 0) is obtained by combining this with specific estimates developed for neural networks. around an initial point θ 0 and y = [y 1 . . . y n] ∈ R n denoting the corrupted labels. Also letỹ = [ỹ 1 . . .ỹ n] ∈ R n denote the uncorrupted labels and e = y −ỹ the corruption. Furthermore, suppose the initial residual f (θ 0) −ỹ with respect to the uncorrupted labels obey f (θ 0) −ỹ ∈ S +. Then, running gradient descent updates of the from (C.1) with a learning rate DISPLAYFORM3 Furthermore, assume ν > 0 is a precision level obeying ν ≥ Π S+ (e) ∞. Then, after τ ≥ 5 ηα 2 log r0 2 ν iterations, θ τ achieves the following error bound with respect to the true labels DISPLAYFORM4 Furthermore, if e has at most s nonzeros and S + is γ diffused per Definition C.1, then using DISPLAYFORM5 This shows that when the Jacobian of the nonlinear mapping is low-rank, gradient descent enjoys two intriguing properties. First, gradient descent iterations remain rather close to the initial point. Second, the estimated labels of the algorithm enjoy sample-wise robustness guarantees in the sense that the noise in the estimated labels are gracefully distributed over the dataset and the effects on individual label estimates are negligible. This theorem is the key that allows us to prove Theorem E.14 when the data points are perfectly clustered (0 = 0). Furthermore, this theorem when combined with a perturbation analysis allows us to deal with data that is not perfectly clustered (0 > 0) and to conclude that with early stopping neural networks are rather robust to label corruption (Theorem 2.2). 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 Finally, we note that a few recent publication (7; 9; 13) require the Jacobian to be well-conditioned to fit labels perfectly. In contrast, our low-rank model cannot perfectly fit the corrupted labels. Furthermore, when the Jacobian is bimodal (as seems to be the case for many practical data sets and neural network models) it would take a very long time to perfectly fit the labels and as demonstrated earlier such a model does not generalize and is not robust to corruptions. Instead we focus on proving robustness with early stopping. C.3. To (over)fit to corrupted labels requires straying far from initializationIn this section we state a that provides further justification as to why early stopping of gradient descent leads to more robust models without overfitting to corrupted labels. 
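The decomposition into a signal subspace and its complement can be made concrete with a few lines of code. Following the construction of S+ from cluster-membership indicators (Definition E.5 below), the sketch checks two facts used repeatedly in the analysis: a residual that is constant within each cluster lies entirely in S+, while a sparse corruption vector has only a small projection onto S+. This is an illustrative verification, not part of the proofs.

```python
# Illustrative check (construction assumed per Definition E.5): S+ is spanned by
# normalized cluster-membership indicators. A clean residual that is constant
# within each cluster lies in S+, while a sparse corruption vector projects onto
# S+ only weakly.
import numpy as np

rng = np.random.default_rng(2)
n, K, s = 400, 4, 40                        # n samples, K clusters, s corrupted labels

assign = rng.integers(0, K, size=n)
U = np.stack([(assign == c).astype(float) for c in range(K)], axis=1)
U /= np.linalg.norm(U, axis=0, keepdims=True)
P = U @ U.T                                 # orthogonal projector onto S+

e = np.zeros(n)                             # sparse label corruption
e[rng.choice(n, size=s, replace=False)] = rng.choice([-1.0, 1.0], size=s)
r_clean = rng.normal(size=K)[assign]        # residual constant within each cluster

print("||P e|| / ||e||            =", np.linalg.norm(P @ e) / np.linalg.norm(e))
print("||P r - r||  (clean case)  =", np.linalg.norm(P @ r_clean - r_clean))
```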
This is based on the observation that while finding an estimate that fits the uncorrupted labels one does not have to move far from the initial estimate in the presence of corruption one has to stray rather far from the initialization with the distance from initialization increasing further in the presence of more corruption. We make this observation rigorous below by showing that it is more difficult to fit to the portion of the residual that lies on the noise space compared to the portion on the signal space (assuming α ≫).Theorem C.3 Denote the residual at initialization θ 0 by r 0 = f (θ 0) − y. Define the residual projection over the signal and noise space as DISPLAYFORM6 Suppose Assumption 1 holds over an Euclidian ball D of radius R < max DISPLAYFORM7 around the initial point θ 0 with α ≥. Then, over D there exists no θ that achieves zero training loss. In particular, if D = R p, any parameter θ achieving zero training loss (f (θ) = y) satisfies the distance bound DISPLAYFORM8 This theorem shows that the higher the corruption (and hence E −) the further the iterates need to stray from the initial model to fit the corrupted data. We conduct several experiments to investigate the robustness capabilities of deep networks to label corruption. In our first set of experiments, we explore the relationship between loss, accuracy, and amount of label corruption on the MNIST dataset to corroborate our theory. Our next experiments study the distribution of the loss and the Jacobian on the CIFAR-10 dataset. Finally, we simulate our theoretical model by generating data according to the corrupted data In FIG6, we train the same model used in FIG1 with n = 3, 000 MNIST samples for different amounts of corruption. Our theory predicts that more label corruption leads to a larger distance to initialization. To probe this hypothesis, FIG6 and 3b visualizes training accuracy and training loss as a function of the distance from the initialization. These demonstrate that the distance from initialization gracefully increase with more corruption. Next, we study the distribution of the individual sample losses on the CIFAR-10 dataset. We conducted two experiments using Resnet-20 with cross entropy loss 5. In FIG8 we assess the noise robustness of gradient descent where we used all 50,000 samples with either 30% random corruption 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 or 50% random corruption. Theorem E.14 predicts that when the corruption level is small, the loss distribution of corrupted vs clean samples should be separable. FIG8 shows that when 30% of the data is corrupted the distributions are approximately separable. When we increase the shuffling amount to 50% the training loss on the clean data increases as predicted by our theory and the distributions start to gracefully overlap. As described in Section C, our technical framework utilizes a bimodal prior on the Jacobian matrix (C.2) of the model. We now further investigate this hypothesis. For a multiclass task, the Jacobian matrix is essentially a 3-way tensor where dimensions are sample size (n), total number of parameters in the model (p), and the number of classes (K). The neural network model we used for CIFAR 10 has around 270,000 parameters in total. 
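For completeness, a minimal form of the per-sample loss-histogram bookkeeping is sketched below. Synthetic clusterable data stands in for MNIST/CIFAR-10 and a small MLP for ResNet-20, so the numbers are not comparable to the figures; the point is only how the losses of clean and corrupted samples are recorded separately at an early-stopping point and after many more iterations.

```python
# Minimal sketch of the loss-histogram bookkeeping (synthetic clusterable data in
# place of MNIST/CIFAR-10, a small MLP in place of ResNet-20, arbitrary
# hyperparameters): per-sample cross-entropy is recorded separately for clean and
# corrupted samples early and late in training.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, num_classes, corrupt = 2000, 64, 10, 0.30

centers = 3.0 * torch.randn(num_classes, d)
y_clean = torch.randint(num_classes, (n,))
X = centers[y_clean] + torch.randn(n, d)                # clusterable inputs
is_corrupt = torch.rand(n) < corrupt
y = torch.where(is_corrupt, torch.randint(num_classes, (n,)), y_clean)

model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, num_classes))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss(reduction="none")

for step in range(1, 4001):
    loss = loss_fn(model(X), y)
    opt.zero_grad()
    loss.mean().backward()
    opt.step()
    if step in (100, 4000):                              # "early stopping" vs. overfitting
        with torch.no_grad():
            l = loss_fn(model(X), y)
        print(f"step {step:4d}: mean loss  clean {l[~is_corrupt].mean():.3f}   "
              f"corrupted {l[is_corrupt].mean():.3f}")
```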
In FIG9 we illustrate the singular value spectrum of the two multiclass Jacobian models where # >0.1× top singular At initialization After training All classes 4 14 Correct class 15 16 Table 1. Jacobian of the network has few singular values that are significantly large i.e. larger than 0.1× the spectral norm. This is true whether we consider the initial network or final network.we form the Jacobian from all layers except the five largest (in total we usep ≈ 90, 000 parameters). 6 We train the model with all samples and focus on the spectrum before and after the training. In FIG9, we picked n = 1000 samples and unfolded this tensor along parameters to obtain a 10, 000 × 90, 000 matrix which verifies our intuition on bimodality. In particular, only 10 to 20 singular values are larger than 0.1× the top one. This is consistent with earlier works that studied the Hessian spectrum. However, focusing on the Jacobian has the added advantage of requiring only first order information (11; 14). A disadvantage is that the size of Jacobian grows with number of classes. Intuitively, cross entropy loss focuses on the class associated with the label hence in FIG9, we only picked the partial derivative associated with the correct class so that each sample is responsible for a single (sizep) vector. This allowed us to scale to n = 10000 samples and the corresponding spectrum is strikingly similar. Another intriguing finding is that the spectrums of before and after training are fairly close to each other highlighting that even at random initialization, spectrum is bimodal. In FIG10, we turn our attention to verifying our findings for the corrupted dataset model of Definition 1.2. We generated K = 2 classes where the associated clusters centers are generated uniformly at random on the unit sphere of R d=20. We also generate the input samples at random around these two clusters uniformly at random on a sphere of radius ε 0 = 0.5 around the corresponding cluster center. Hence, the clusters are guaranteed to be at least 1 distance from each other to prevent overlap. Overall we generate n = 400 samples (200 per class/cluster). Here,K = K = 2 and the class labels are 0 and 1. We picked a network with k = 1000 hidden units and trained on a data set with 400 samples where 30% of the labels were corrupted. FIG10 plots the trajectory of training error and highlights the model achieves good classification in the first few iterations and ends up overfitting later on. In FIG10, we focus on the loss distribution of 6a at iterations 80 and 4500. In this figure, we visualize the loss distribution of clean and corrupted data. FIG10 highlights the loss distribution with early stopping and implies that the gap between corrupted and clean loss distributions is surprisingly resilient despite a large amount of corruption and the high-550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 capacity of the model. In FIG10, we repeat plot after many more iterations at which point the model overfits. This plot shows that the distribution of the two classes overlap demonstrating that the model has overfit the corruption and lacks generalization/robustness. We begin by defining the average Jacobian which will be used throughout our analysis. Definition E.1 (Average Jacobian) We define the average Jacobian along the path connecting two points x, y ∈ R p as J (y, x) ∶=. 
We experiment with the corrupted dataset model of Definition 1.2. We picked K = 2 classes and set n = 400 and ε0 = 0.5. Trained 30% corrupted data with k = 1000 hidden units. Each corruption has 50% chance to remain in the correct class hence around 15% of the labels are actually flipped which corresponds to the dashed green line. DISPLAYFORM0 The residualsr = f (θ) − y, r = f (θ) − y obey the following equationr = (I − ηC(θ))r. Proof Following Definition E.1, denoting f (θ) − y =r and f (θ) − y = r, we find that DISPLAYFORM1 Here (a) uses the fact that Jacobian is the derivative of f and (b) uses the fact that ∇L(θ) = J (θ) T r. Using Assumption C.1, one can show that sparse vectors have small projection on S +.Lemma E.3 Suppose Assumption C.1 holds. If r ∈ R n is a vector with s nonzero entries, we have that DISPLAYFORM2 Proof First, we bound the 2 projection of r on S + as follows DISPLAYFORM3 where we used the fact that v i ≤ √ γ v 2 √ n. Next, we conclude with DISPLAYFORM4 Proof The proof will be done inductively over the properties of gradient descent iterates and is inspired from the recent work. In particular, requires a well-conditioned Jacobian to fit labels perfectly. In contrast, we have a lowrank Jacobian model which cannot fit the noisy labels (or it would have trouble fitting if the Jacobian was approximately low-rank). Despite this, we wish to prove that gradient descent satisfies desirable properties such as robustness and closeness to initialization. Let us introduce the notation related to the residual. Set r τ = f (θ τ) − y and let r 0 = f (θ 0)−y be the initial residual. We keep track of the growth of the residual by partitioning the residual as r τ =r τ +ē τ whereē τ = Π S− (r τ),r τ = Π S+ (r τ). We claim that for all iterations τ ≥ 0, the following conditions hold.ē DISPLAYFORM5 Assuming these conditions hold till some τ > 0, inductively, we focus on iteration τ + 1. First, note that these conditions imply that for all τ ≥ i ≥ 0, θ i ∈ D where D is the Euclidian ball around θ 0 of radius DISPLAYFORM6. This directly follows from (E.6) induction hypothesis. Next, we claim that θ τ +1 is still within the set D. This can be seen as follows: DISPLAYFORM7 Proof Since range space of Jacobian is in S + and η ≤ 1 β 2, we begin by noting that DISPLAYFORM8 In the above, (a) follows from the fact that row range space of Jacobian is subset of S + via Assumption 2. (b) follows from the definition ofr τ. (c) follows from the upper bound on the spectral norm of the Jacobian over D per Assumption 2, (d) from the fact that η ≤ 1 β 2, (e) from α ≤ β. The latter combined with the triangular inequality and induction hypothesis (E.6) yields (after scaling (E.6) by 4 α) DISPLAYFORM9 concluding the proof of θ τ +1 ∈ D.To proceed, we shall verify that (E.6) holds for τ + 1 as well. Note that, following Lemma E.2, gradient descent iterate can be written as DISPLAYFORM10 Since both column and row space of C(θ τ) is subset of S +, we have thatē DISPLAYFORM11 This shows the first statement of the induction. Next, over S +, we havē DISPLAYFORM12 where the second line uses the fact thatē τ ∈ S − and last line uses the fact thatr τ ∈ S +. To proceed, we need to prove that C(θ τ) has desirable properties over S +, in particular, it contracts this space. 
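The contraction property can be visualized directly from the residual recursion of Lemma E.2. In the sketch below, C is a synthetic PSD matrix whose range is exactly S+ (an assumed stand-in for J(θ_{τ+1}, θ_τ)J(θ_τ)^T under the low-rank Jacobian assumption); iterating r ← (I − ηC)r shrinks the S+ component geometrically while leaving the component on the complement untouched.

```python
# Sketch of the residual recursion r <- (I - eta C) r under the low-rank Jacobian
# assumption. C is a synthetic PSD matrix with range exactly S+ (an assumed
# stand-in for J(theta_{t+1}, theta_t) J(theta_t)^T): the S+ component of the
# residual contracts while the component on the complement never changes.
import numpy as np

rng = np.random.default_rng(3)
n, K = 200, 5

U, _ = np.linalg.qr(rng.normal(size=(n, K)))            # orthonormal basis of S+
P = U @ U.T
A = rng.normal(size=(K, K))
M = A @ A.T + np.eye(K)                                  # well-conditioned core
C = U @ M @ U.T                                          # PSD, range(C) = S+

r = rng.normal(size=n)
eta = 0.9 / np.linalg.eigvalsh(M).max()
for t in range(201):
    if t % 50 == 0:
        print(f"t = {t:3d}   ||P r|| = {np.linalg.norm(P @ r):.5f}   "
              f"||(I - P) r|| = {np.linalg.norm(r - P @ r):.5f}")
    r = r - eta * (C @ r)
```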
662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 Claim 2 let P S+ ∈ R n×n be the projection matrix to S + i.e. it is a positive semi-definite matrix whose eigenvectors over S + is 1 and its complement is 0. Under the induction hypothesis and setup of the theorem, we have that DISPLAYFORM13 Proof The proof utilizes the upper bound on the learning rate. The argument is similar to the proof of Lemma 9.7 of. Suppose Assumption 3 holds. Then, for any θ 1, θ 2 ∈ D we have DISPLAYFORM14 where for (a) we utilized the induction hypothesis (E.6) and (b) follows from the upper bound on η. Now that (E.22) is established, using following lemma, we find DISPLAYFORM15 The β 2 upper bound directly follows from Assumption 2 by again noticing range space of Jacobian is subset of S +.Lemma E.4 (Asymmetric PSD perturbation) Consider the matrices A, C ∈ R n×p obeying A − C ≤ α 2. Also suppose CC T ⪰ α 2 P S+. Furthermore, assume range spaces of A, C lies in S +. Then, DISPLAYFORM16 7 We say A ⪰ B if A − B is a positive semi-definite matrix in the sense that for any real vector v, v DISPLAYFORM17 Proof For r ∈ S + with unit Euclidian norm, we have DISPLAYFORM18 Also, for any r, by range space assumption r T AC T r = Π S+ (r)T AC T Π S+ (r) (same for CC T). Combined with above, this concludes the claim. What remains is proving the final two statements of the induction (E.6). Note that, using the claim above and recalling (E.19) and using the fact that J (θ τ +1, θ τ) ≤ β, the residual satisfies DISPLAYFORM19 where we used the fact that η ≤ 1 2β 2. Now, using the fact DISPLAYFORM20 which establishes the second statement of the induction (E.6). What remains is obtaining the last statement of (E.6).To address this, completing squares, observe that DISPLAYFORM21 On the other hand, the distance to initial point satisfies DISPLAYFORM22 1 4 α) and using induction hypothesis (E.6), we find that DISPLAYFORM23 This establishes the final line of the induction and concludes the proof of the upper bound on θ τ − θ 0 2. To proceed, we shall bound the infinity norm of the residual. Using DISPLAYFORM24 (E.27) What remains is controlling r τ ∞. For this term, we shall use the naive upper bound r τ 2. Using the rate of convergence of the algorithm (E.6), we have that DISPLAYFORM25 We wish the right hand side to be at most ν > 0 where ν ≥ Π S+ (e) ∞. This implies that we need DISPLAYFORM26 To conclude, note that since DISPLAYFORM27 ), we find that r τ ∞ ≤ r τ 2 ≤ ν, which guarantees DISPLAYFORM28 which is the advertised . If e is s sparse and S + is diffused, applying Lemma C.1 we have DISPLAYFORM29 Since Jacobian is derivative of f, we have that DISPLAYFORM30 Now, define the matrices J + = Π S+ (J) and J − = Π S− (J). Using Assumption 1, we bound the spectral norms via DISPLAYFORM31 To proceed, projecting the residual on S +, we find for any θ with f (θ) = y DISPLAYFORM32 The identical argument for S − yields θ − θ 0 2 ≥ E−. Together this implies DISPLAYFORM33 If R is strictly smaller than right hand side, we reach a contradiction as θ ∈ D. If D = R p, we still find (E.30).This shows that if is small and E − is nonzero, gradient descent has to traverse a long distance to find a good model. Intuitively, if the projection over the noise space indeed contains the label noise, we actually don't want to fit that. 
Algorithmically, our idea fits the residual over the signal space and not worries about fitting over the noise space. Approximately speaking, this intuition corresponds to the 2 regularized problem min DISPLAYFORM34 If we set R = E+ β, we can hope that solution will learn only the signal and does not overfit to the noise. The next section builds on this intuition and formalizes our algorithmic guarantees. Throughout, σ min (⋅) denotes the smallest singular value of a given matrix. We first introduce helpful definitions that will be used in our proofs. DISPLAYFORM0 be an input dataset generated according to Definition 1.1. Also let {x i} n i=1 be the associated cluster centers, that is,x i = c iff x i is from the th cluster. We define the support subspace S + as a subspace of dimension K, dictated by the cluster membership as follows. Let Λ ⊂ {1, . . ., n} be the set of coordinates i such thatx i = c. Then, S + is characterized by DISPLAYFORM1 The Jacobian of the learning problem (1.3), at a matrix W is denoted by J (W, X) ∈ R n×kd and is given by DISPLAYFORM2 Here * denotes the Khatri-Rao product. The following theorem is borrowed from and characterizes three key properties of the neural network Jacobian. These are smoothness, spectral norm, and minimum singular value at initialization which correspond to Lemmas 6.6, 6.7, and 6.8 in that paper. Theorem E.7 (Jacobian Properties at Cluster Center) DISPLAYFORM3 The Jacobian mapping with respect to the input-to-hidden weights obey the following properties.• Smoothness is bounded by DISPLAYFORM4 • Top singular value is bounded by J (W, X) ≤ Γ X.• Let C > 0 be an absolute constant. As long as DISPLAYFORM5 At random Gaussian initialization W 0 ∼ N k×d, with probability at least 1 − 1 K 100, we have DISPLAYFORM6 In our case, the Jacobian is not well-conditioned. However, it is pretty well-structured as described previously. To proceed, given a matrix X ∈ R n×d and a subspace S ⊂ R n, we define the minimum singular value of the matrix over this subspace by σ min (X, S) which is defined as DISPLAYFORM7 Here, P S ∈ R n×n is the projection operator to the subspace. Hence, this definition essentially projects the matrix on S and then takes the minimum singular value over that projected subspace. The following theorem states the properties of the Jacobian at a clusterable dataset. Theorem E.8 (Jacobian Properties at Clusterable Dataset) Let input samples (x i) n i=1 be generated according to (ε 0, δ) clusterable dataset model of Definition 1.1 and define DISPLAYFORM8 T. Let S + be the support space and DISPLAYFORM9 be the associated clean dataset as described by Definition E.5. DISPLAYFORM10 The Jacobian mapping at X with respect to the input-to-hidden weights obey the following properties.• Smoothness is bounded by DISPLAYFORM11 • Top singular value is bounded by DISPLAYFORM12 • As long as DISPLAYFORM13 At random Gaussian initialization W 0 ∼ N k×d, with probability at least 1 − 1 K 100, we have DISPLAYFORM14 • The range space obeys range(J (W 0,X)) ⊂ S + where S + is given by Definition E.5.Proof Let J (W, C) be the Jacobian at the cluster center matrix. Applying Theorem E.7, this matrix already obeys the properties described in the of this theorem with desired probability (for the last ). We prove our theorem by relating the cluster center Jacobian to the clean dataset Jacobian matrix J (W,X).Note thatX is obtained by duplicating the rows of the cluster center matrix C. 
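The rank and range structure used in Theorem E.8 can be checked numerically. The sketch below builds the Jacobian of f(W, x) = v^T φ(Wx) for the clean inputs obtained by duplicating the K cluster centers — each row is the row-wise combination (v ⊙ φ′(Wx_i)) ⊗ x_i, reflecting the Khatri-Rao structure — and verifies that its numerical rank is at most K and that any vector in its range is constant within each cluster, hence lies in S+. The activation and constants are illustrative choices.

```python
# Sketch verifying the structure used in Theorem E.8: for clean inputs obtained by
# duplicating the K cluster centers, the Jacobian of f(W, x) = v^T phi(W x) has
# numerical rank at most K and its range lies in the membership subspace S+
# (any vector in the range is constant within each cluster). Illustrative only.
import numpy as np

rng = np.random.default_rng(4)
d, k, K, n_per = 16, 40, 3, 50
n = K * n_per

C = rng.normal(size=(K, d))
C /= np.linalg.norm(C, axis=1, keepdims=True)
assign = np.repeat(np.arange(K), n_per)
X_clean = C[assign]                                      # duplicated cluster centers

W = rng.normal(size=(k, d))
v = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
dphi = lambda z: 1.0 - np.tanh(z) ** 2

A = v * dphi(X_clean @ W.T)                              # (n, k)
J = (A[:, :, None] * X_clean[:, None, :]).reshape(n, k * d)

print("numerical rank of J(W, X~):", np.linalg.matrix_rank(J))
p = J @ rng.normal(size=k * d)                           # arbitrary vector in range(J)
print("max within-cluster spread of J u:", max(np.ptp(p[assign == c]) for c in range(K)))
```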
This implies that J (W,X) is obtained by duplicating the rows of the cluster center Jacobian. The critical observation is that, by construction in Definition 1.1, each row is duplicated somewhere between c low n K and c up n K.To proceed, fix a vector v and letp = J (W,X)v ∈ R n and p = J (W, C)v ∈ R K. Recall the definition of the support sets Λ from Definition E.5. We have the identitỹ DISPLAYFORM15 This impliesp ∈ S + hence range(J (W,X)) ⊂ S +. Furthermore, the entries ofp repeats the entries of p somewhere between c low n K and c up n K. This implies that, DISPLAYFORM16 and establishes the upper and lower bounds on the singular values of J (W,X) over S + in terms of the singular values of J (W, C). Finally, the smoothness can be established similarly. Given matrices W,W, the rows of the difference DISPLAYFORM17 Hence the spectral norm is scaled by at most c up n K. k so that v 2 = 1. Also assume we have n data points x 1, x 2,..., x n ∈ R d with unit euclidean norm (x i 2 = 1) aggregated as rows of a matrix X ∈ R n×d and the corresponding labels given by y ∈ R n generated accoring to (ρ, ε 0 = 0, δ) noisy dataset (Definition 1.2). Then for W 0 ∈ R k×d with i.i.d. N entries DISPLAYFORM18 holds with probability at least 1 − K −100.Proof This lemma is based on a fairly straightforward union bound. First, by construction y 2 ≤ √ n. What remains is bounding v T φ W 0 X T 2. Since ε 0 = 0 there are K unique rows. We will show that each of the unique rows is bounded with probability 1 − K −101 and union bounding will give the final . Let w be a row of W 0 and x be a row of X. Since φ is Γ Lipschitz and DISPLAYFORM19 for some constant c > 0, concluding the proof. E.2.1. PROOF OF THEOREM E.14 We first prove a lemma regarding the projection of label noise on the cluster induced subspace. DISPLAYFORM20 be an (ρ, ε 0 = 0, δ) clusterable noisy dataset as described in Definition 1.2. Let {ỹ i} n i=1be the corresponding noiseless labels. Let J (W, C) be the Jacobian at the cluster center matrix which is rank K and S + be its column space. Then, the difference between noiseless and noisy labels satisfy the bound DISPLAYFORM21 Proof Let e = y−ỹ. Observe that by assumption, th cluster has at most s = ρn errors. Let I denote the membership associated with cluster i.e. I ⊂ {1, . . ., n} and i ∈ I if and only if x i belongs to th cluster. Let 1 ∈ R n be the indicator function of the th class where ith entry is 1 if i ∈ I and 0 else for 1 ≤ i ≤ n. Then, denoting the size of the th cluster by n, the projection to subspace S + can be written as the P matrix where DISPLAYFORM22 Let e be the error pattern associated with th cluster i.e. e is equal to e over I and zero outside. Since cluster membership is non-overlapping, we have that DISPLAYFORM23 Similarly since supports of 1 are non-overlapping, we have that DISPLAYFORM24 Now, using e ∞ ≤ 2 (max distance between two labels), observe that DISPLAYFORM25 Since number of errors within cluster is at most n ρ, we find that DISPLAYFORM26 C where L is the Lipschitz constant of Jacobian spectrum. Denote r τ = f (W τ) − y. Using Lemma E.9 with probability 1 − K −100, we have that r 0 2 = y − f (W 0) 2 ≤ Γ c 0 n log K 128 for some c 0 > 0. Corollary E.8 guarantees a uniform bound for β, hence in Assumption 2, we pick DISPLAYFORM27 We shall also pick the minimum singular value over S + to be DISPLAYFORM28 We wish to verify Assumption 2 over the radius of DISPLAYFORM29 neighborhood of W 0. What remains is ensuring that Jacobian over S + is lower bounded by α. 
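The mechanism behind Lemma E.10 is simply that projecting the corruption e = y − ỹ onto S+ averages it within each cluster, so its entries are controlled by the per-cluster fraction of corrupted labels. The sketch below illustrates this numerically; the exact constant in the lemma sits in the elided display, so the comparison against twice the worst per-cluster corruption fraction is only indicative.

```python
# Numerical illustration of the mechanism behind Lemma E.10 (the exact constant
# is in the elided display, so the comparison is indicative only): projecting the
# corruption e = y - y~ onto S+ averages it within each cluster, so its entries
# are bounded by the per-cluster fraction of corrupted labels times the maximum
# label error (here 2).
import numpy as np

rng = np.random.default_rng(5)
n, K, rho = 600, 6, 0.05

assign = rng.integers(0, K, size=n)
e = np.zeros(n)
bad = rng.choice(n, size=int(rho * n), replace=False)
e[bad] = rng.choice([-2.0, 2.0], size=bad.size)          # |label error| <= 2

proj = np.zeros(n)
for c in range(K):                                       # P_{S+} e = within-cluster mean
    idx = assign == c
    proj[idx] = e[idx].mean()

worst_frac = max(np.mean(e[assign == c] != 0) for c in range(K))
print("||P_{S+} e||_inf           =", np.abs(proj).max())
print("2 * worst per-cluster frac =", 2 * worst_frac)
```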
Our choice of k guarantees that at the initialization, with probability 1 − K −100, we have DISPLAYFORM30 Suppose LR ≤ α = α 0 2. Using triangle inequality on Jacobian spectrum, for any W ∈ D, using W − W 0 F ≤ R, we would have DISPLAYFORM31 Finally, since LR = 4L r 0 2 α ≤ α, the learning rate is DISPLAYFORM32 Overall, the assumptions of Theorem C.2 holds with stated α, β, L with probability 1 − 2K −100 (union bounding initial residual and minimum singular value events). This implies for all τ > 0 the distance of current iterate to initial obeys DISPLAYFORM33 The final step is the properties of the label corruption. Using Lemma E.10, we find that DISPLAYFORM34 Substituting the values corresponding to α, β, L yields that, for all gradient iterations with DISPLAYFORM35 denoting the clean labels byỹ and applying Theorem C.2, we have that, the infinity norm of the residual obeys (using DISPLAYFORM36 This implies that if ρ ≤ δ 8, the network will miss the correct label by at most δ 2, hence all labels (including noisy ones) will be correctly classified. Consider DISPLAYFORM0 has mean zero. Hence, using the fact that weighted sum of subGaussian random variables are subgaussian combined with (G.2) we conclude that DISPLAYFORM1 with probability at least 1 − e − t 2 2. Now combining (G.1) and (G.3) we have DISPLAYFORM2 with high probability. Denote average neural net Jacobian at data X via DISPLAYFORM0 T be the input matrix obtained from Definition 1.1. LetX be the noiseless inputs wherex i is the cluster center corresponding to x i. Given weight matrices W 1, W 2,W 1,W 2, we have that DISPLAYFORM1 We first bound DISPLAYFORM2 To proceed, we use the on the spectrum of Hadamard product of matrices due to Schur. Given A ∈ R k×d, B ∈ R n×d matrices where B has unit length rows, we have DISPLAYFORM3 Secondly, DISPLAYFORM4 where reusing Schur's and boundedness of φ DISPLAYFORM5 Combining both estimates yields DISPLAYFORM6 To get the on DISPLAYFORM7 according to (ρ, ε 0, δ) noisy dataset model and form the concatenated input/labels X ∈ R d×n, y ∈ R n. LetX be the clean input sample matrix obtained by mapping x i to its associated cluster center. Set learning rate η ≤ K 2cupnΓ 2 C 2 and maximum iterations τ 0 satisfying DISPLAYFORM8 where C 1 ≥ 1 is a constant of our choice. Suppose input noise level ε 0 and number of hidden nodes obey DISPLAYFORM9 ∼ N. Starting from W 0 =W 0 consider the gradient descent iterations over the losses DISPLAYFORM10 Then, for all gradient descent iterations satisfying τ ≤ τ 0, we have that 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 The proof is by induction. Suppose it holds until t ≤ τ 0 − 1. At t + 1, via (E.37) we have that DISPLAYFORM11 DISPLAYFORM12 Right hand side holds since L ≤ 1 2ητ0Θ. This establishes the induction for d t+1.Next, we show the induction on p t. Observe that 3d t +d t+1 ≤ 10τ 0 ηΓ √ nε 0 Θ(1 + 8ητ 0 β 2). Following (E.39) and using DISPLAYFORM13 Concluding the induction since L satisfies the final line. Consequently, for all 0 ≤ t ≤ τ 0, we have that DISPLAYFORM14 which is implied by k ≥ O(Γ 10 K 2 C 4 λ(C) 4 log(DISPLAYFORM15).Finally, following (E.40), distance satisfies DISPLAYFORM16 ).E.3.1. COMPLETING THE PROOF OF THEOREM 2.2 Theorem 2.2 is obtained by the theorem below when we ignore the log terms, and treating Γ, λ(C) as constants. 
We also plug in η = K 2cupnΓ 2 C 2.Theorem E.13 (Training neural nets with corrupted labels) Let {(x i, y i)} n i=1 be an (s, ε 0, δ) clusterable noisy dataset as described in Definition 1.2. Let {ỹ i} n i=1 be the corresponding noiseless labels. Suppose φ, φ ′, φ ′′ ≤ Γ for some Γ ≥ 1, input noise and the number of hidden nodes satisfy ε 0 ≤O(λ(C) DISPLAYFORM17 ).where C ∈ R K×d is the matrix of cluster centers. Set learning rate η ≤ ∼ N. With probability 1 − 3 K 100, after DISPLAYFORM18 ) log(Γ n log K ρ) iterations, for all 1 ≤ i ≤ n,we have that• The per sample normalized 2 norm bound satisfies DISPLAYFORM19 • Suppose ρ ≤ δ 8. Denote the total number of prediction errors with respect to true labels (i.e. not satisfying (E.46)) by err(W). With same probability, err(W τ) obeys DISPLAYFORM20 log(Γ √ n log K ρ).• Suppose ρ ≤ δ 8 and ε 0 ≤ c DISPLAYFORM21, then, W τ assigns all input samples x i to correct ground truth labelsỹ i i.e. (E.46) holds for all 1 ≤ i ≤ n.• Finally, for any iteration count 0 ≤ t ≤ τ the total distance to initialization is bounded as DISPLAYFORM22. Proof Note that proposed number of iterations τ is set so that it is large enough for Theorem E.14 to achieve small error in the clean input model (ε 0 = 0) and it is small enough so that Theorem E.12 is applicable. In light of Theorems E.12 and E.14 consider two gradient descent iterations starting from W 0 where one uses clean dataset (as if input vectors are perfectly cluster centers)X and other uses the 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 original dataset X. Denote the prediction residual vectors of the noiseless and original problems at time τ with respect true ground truth labelsỹ byr τ = f (W τ,X) −ỹ and r τ = f (W τ, X) −ỹ respectively. Applying Theorems E.12 and E.14, under the stated conditions, we have that r τ ∞ ≤ 4ρ and (E.42) DISPLAYFORM0 First statement: The latter two imply the 2 error bounds on r τ = f (W τ, X) −ỹ. Second statement: To assess the classification rate we count the number of entries of r τ = f (W τ, X) −ỹ that is larger than the class margin δ 2 in absolute value. Suppose ρ ≤ δ 8. Let I be the set of entries obeying this. For i ∈ I using r τ ∞ ≤ 4ρ ≤ δ 4, we have r τ,i ≥ δ 2 ⇒ r τ,i + r τ,i −r τ,i ≥ δ 2 ⇒ r τ,i −r τ,i ≥ δ 4.Consequently, we find that r τ −r τ 1 ≥ I δ 4.Converting 2 upper bound on the left hand side to 1, we obtain c √ n ε 0 Γ 3 K √ n log K λ(C) log(Γ √ n log K ρ) ≥ I δ 4.Hence, the total number of errors is at most DISPLAYFORM1 Third statement -Showing zero error: Pick an input sample x from dataset and its clean versionx. We will argue that f (W τ, x) − f (W τ,x) is smaller than δ 4 when ε 0 is small enough. We again write DISPLAYFORM2 The first term can be bounded via DISPLAYFORM3 Next, we need to bound DISPLAYFORM4 where DISPLAYFORM5 ), x −x 2 ≤ ε 0 and W 0 i.i.d.∼ N (0, I). Consequently, using by assumption we have DISPLAYFORM6 and applying an argument similar to Theorem E.15 (detailed in Appendix G), with probability at 1 − 1 n 100, we find that DISPLAYFORM7 Combining the two bounds above we get DISPLAYFORM8 Hence, if ε 0 ≤ c DISPLAYFORM9, we obtain that, for all DISPLAYFORM10 If ρ ≤ δ 8, we obtain f (W τ, x i) −ỹ i < δ 2 hence, W τ outputs the correct decision for all samples. 
Fourth statement -Distance: This follows from the triangle inequality DISPLAYFORM11 We have that the right-hand-side terms are at most O(Γ K log K λ(C)) and O(tηε 0 Γ 4 Kn λ(C) log(DISPLAYFORM12 by Theorems E.12 and E.14 respectively. This implies (E.41). Before we end this section we would like to note that in the limit of ε 0 → 0, where the input data set is perfectly clustered, one can improve the amount of overparameterization. Indeed, the above is obtained via a perturbation argument from the more refined result stated below. | We prove that gradient descent is robust to label corruption despite over-parameterization under a rich dataset model. | 1,025 | scitldr |
Weight pruning has been introduced as an efficient model compression technique. Even though pruning removes significant amount of weights in a network, memory requirement reduction was limited since conventional sparse matrix formats require significant amount of memory to store index-related information. Moreover, computations associated with such sparse matrix formats are slow because sequential sparse matrix decoding process does not utilize highly parallel computing systems efficiently. As an attempt to compress index information while keeping the decoding process parallelizable, Viterbi-based pruning was suggested. Decoding non-zero weights, however, is still sequential in Viterbi-based pruning. In this paper, we propose a new sparse matrix format in order to enable a highly parallel decoding process of the entire sparse matrix. The proposed sparse matrix is constructed by combining pruning and weight quantization. For the latest RNN models on PTB and WikiText-2 corpus, LSTM parameter storage requirement is compressed 19x using the proposed sparse matrix format compared to the baseline model. Compressed weight and indices can be reconstructed into a dense matrix fast using Viterbi encoders. Simulation show that the proposed scheme can feed parameters to processing elements 20 % to 106 % faster than the case where the dense matrix values directly come from DRAM. Deep neural networks (DNNs) require significant amounts of memory and computation as the number of training data and the complexity of task increases BID0. To reduce the memory burden, pruning and quantization have been actively studied. Pruning removes redundant connections of DNNs without accuracy degradation BID6. The pruned are usually stored in a sparse matrix format such as compressed sparse row (CSR) format or compressed sparse column (CSC) format, which consists of non-zero values and indices that represent the location of non-zeros. In the sparse matrix formats, the memory requirement for the indices is not negligible. Viterbi-based pruning BID14 significantly reduces the memory footprint of sparse matrix format by compressing the indices of sparse matrices using the Viterbi algorithm BID3. Although Viterbi-based pruning compresses the index component considerably, weight compression can be further improved in two directions. First, the non-zero values in the sparse matrix can be compressed with quantization. Second, sparse-to-dense matrix conversion in Viterbi-based pruning is relatively slow because assigning non-zero values to the corresponding indices requires sequential processes while indices can be reconstructed in parallel using a Viterbi Decompressor (VD).Various quantization techniques can be applied to compress the non-zero values, but they still cannot reconstruct the dense weight matrix quickly because it takes time to locate non-zero values to the corresponding locations in the dense matrix. These open questions motivate us to find a non-zero value compression method, which also allows parallel sparse-to-dense matrix construction. The contribution of this paper is as follows.(a) To reduce the memory footprint of neural networks further, we propose to combine the Viterbibased pruning BID14 ) with a novel weight-encoding scheme, which also uses the Viterbi-based approach to encode the quantized non-zero values. (b) We suggest two main properties of the weight matrix that increase the probability of finding "good" Viterbi encoded weights. 
First, the weight matrix with equal composition ratio of'0' and'1' for each bit is desired. Second, using the pruned parameters as "Don't Care" terms increases the probability of finding desired Viterbi weight encoding. (c) We demonstrate that the proposed method can be applied to Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) with various sizes and depths. (d) We show that using the same Viterbi-based approach to compress both indices and non-zero values allows us to build a highly parallel sparse-to-dense reconstruction architecture. Using a custom cycle-simulator, we demonstrate that the reconstruction can be done fast. DNNs have been growing bigger and deeper to solve complex nonlinear tasks. However, BID2 showed that most of the parameters in neural networks are redundant. To reduce the redundancy and minimize memory and computation overhead, several weight reduction methods have been suggested. Recently, magnitude-based pruning methods became popular due to its computational efficiency BID6. Magnitude-based pruning methods remove weights according to weight magnitude only and retrain the pruned network to recover from accuracy loss. The method is scalable to large and deep neural networks because of its low computation overhead. BID6 showed 9×-13× pruning rate on AlexNet and VGG-16 networks without accuracy loss on ImageNet dataset. Although the compression rate was high, reduction of actual memory requirement was not as high as the compression rate because conventional sparse matrix formats, such as CSR and CSC, must use large portion of memory to store the indices of surviving weights. BID14 succeeded in reducing the amount of index-related information using a Viterbi-algorithm based pruning method and corresponding custom sparse matrix format. BID14 demonstrated 38.1% memory reduction compared to BID6 with no accuracy loss. The memory reduction was limited, however, due to uncompressed non-zero values. Several weight quantization methods were also suggested to compress the parameters of neural networks. BID1 BID15; BID20 demonstrated that reducing the weights to binary or ternary was possible, but the accuracy loss of the binary neural networks was significant. BID25 reduced the bit resolution of weights to binary, activations to 2 bits and gradients to 6 bits with 9.8 % top-1 accuracy loss on AlexNet for ImageNet task. BID5 demonstrated a binary-weight AlexNet with 2.0% top-1 accuracy loss, achieving ∼10× compression rate. BID22 showed that RNNs can also be quantized to reduce the memory footprint. By quantizing the weight values to 3 bits with proposed method, the memory footprint of RNN models were reduced ∼10.5× with negligible performance degradation. BID8 suggested to combine pruning with weight quantization to achieve higher compression rate. The showed 35× increase in compression rate on AlexNet. However, the reduction was limited since the memory requirement of index-related information was only slightly improved with Huffman coding. Although several magnitude-based pruning methods showed high compression rate, computation time did not improve much, because it takes time to decode the sparse matrix formats that describe irregular weight indices of pruned networks. BID7 suggested to use dedicated hardware, custom sparse matrix formats, and dedicated pruning methods to accelerate the computation even after pruning. BID10; tried to accelerate the computation by limiting the irregularity of weight indices. 
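The index overhead that motivates this line of work is easy to quantify. The sketch below (not taken from any of the cited papers) prunes a 1024×1024 layer to 90 % sparsity by magnitude and stores it in CSR format with SciPy; the index arrays end up comparable in size to the surviving values themselves.

```python
# Sketch (not from the cited papers): after 90% magnitude pruning, the CSR index
# arrays of a 1024 x 1024 fp32 layer are comparable in size to the surviving
# values, which is the overhead that index compression targets.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(6)
W = rng.normal(size=(1024, 1024)).astype(np.float32)

threshold = np.quantile(np.abs(W), 0.90)                 # keep the largest 10% by magnitude
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)
S = csr_matrix(W_pruned)

values_bytes = S.data.nbytes
index_bytes = S.indices.nbytes + S.indptr.nbytes
print(f"dense: {W.nbytes / 2**20:.1f} MiB   CSR values: {values_bytes / 2**20:.2f} MiB   "
      f"CSR indices: {index_bytes / 2**20:.2f} MiB")
```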
By pruning neurons or feature maps, pruned weight matrices could maintain the dense format. These approaches successfully reduced the number of computation of neural networks, but the compression rate was limited due to additional pruning conditions. Although BID14 could use the Viterbi encoder to construct the index matrix fast, the process of pairing the non-zero weight values with the corresponding indices is still sequential, and thus relatively slow.3 WEIGHT PRUNING AND QUANTIZATION USING DOUBLE-VITERBI APPROACH Figure 1 illustrates the flowchart of the proposed compression method. Viterbi-based pruning BID14 ) is applied first, and the pruned matrix is quantized using alternating multi-bit quantization method BID22. Quantized binary code matrices are then encoded using the Viterbi-based approach, which is similar to the one used in pruning. Figure 1: Flowchart of Double Viterbi compression. W is the weight of an original network and W P, W P Q, andŴ P Q represent the compressed weights after each process. M ∈ {0, 1} is an index matrix which indicates whether each weight is pruned or not, and means element-wise multiplication. DISPLAYFORM0 ∈ {−1, +1} are constants and binary weights generated by quantization. DISPLAYFORM1 ∈ {−1, +1} is binary weights encoded by the Viterbi algorithm. As the first step of the proposed weight encoding scheme, we compress the indices of the non-zero values in sparse weight matrix using the Viterbi-based pruning (Figure 1) BID14. In this scheme, Viterbi algorithm is used to select a pruned index matrix which minimizes the accuracy degradation among many candidates which a Viterbi decompressor can generate. While the memory footprint of the index portion is significantly reduced by the Viterbi-based pruning, the remaining non-zero values after pruning still require non-negligible memory when high-precision bits are used. Hence, quantization of the non-zero values is required for further reduction of the memory requirement. Appendix A.1 explains the Viterbi-based pruning in detail. After Viterbi-based pruning is finished, the alternating multi-bit quantization BID22 ) is applied to the sparse matrix (Figure 1). As suggested in BID22, real-valued non-zero weights are quantized into multiple binary codes DISPLAYFORM0 In addition to the high compression capabilities, another important reason we chose the alternating quantization is that the output distribution of the method is well suited to the Viterbi algorithm, which is used to encode the quantized non-zero values. Detailed explanation is given in Section 3.3. A sparse matrix that is generated by Viterbi-based pruning and quantization can be represented using the Viterbi Compression Matrix (VCM) format BID14. A sparse matrix stored in VCM format requires much smaller amount of memory than the original dense weight matrix does. However, it is difficult to parallelize the process of reconstructing sparse matrix from the representation in VCM format, because assigning each non-zero value to its corresponding index requires a sequential process of counting ones in indices generated by the Viterbi encoder. To address this issue, we encode binary weight codes DISPLAYFORM0 in addition to the indices, based on the same Viterbi algorithm BID3. 
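For reference, one common form of the alternating multi-bit quantization step is sketched below: a greedy initialization of the binary codes b_i ∈ {−1, +1}, followed by alternating refinement of the scales α by least squares and of the codes given the scales. This is an illustrative re-implementation rather than the exact procedure of BID22; the final print also checks the roughly balanced composition of +1 and −1 in each code, the property relied upon in Section 3.3.

```python
# Sketch of one common form of alternating multi-bit quantization (illustrative,
# not the exact procedure of BID22): greedy init of binary codes, then alternate
# between refitting the scales alpha by least squares and re-deriving the codes.
import numpy as np

def alternating_quantize(w, k=3, iters=5):
    """Approximate w (1-D array of surviving weights) by sum_i alpha_i * b_i, b_i in {-1,+1}."""
    B = np.empty((w.size, k))
    r = w.copy()
    for i in range(k):                       # greedy init: b_i = sign of the residual
        B[:, i] = np.where(r >= 0, 1.0, -1.0)
        alpha_i = np.abs(r).mean()
        r -= alpha_i * B[:, i]
    for _ in range(iters):                   # alternate: refit alphas, then refit codes
        alpha, *_ = np.linalg.lstsq(B, w, rcond=None)
        order = np.argsort(-np.abs(alpha))   # re-derive codes greedily, largest scale first
        r = w.copy()
        for i in order:
            B[:, i] = np.where(r >= 0, 1.0, -1.0)
            r -= alpha[i] * B[:, i]
    alpha, *_ = np.linalg.lstsq(B, w, rcond=None)
    return alpha, B

rng = np.random.default_rng(7)
w = rng.normal(size=5000)
alpha, B = alternating_quantize(w, k=3)
print("relative error:", np.linalg.norm(B @ alpha - w) / np.linalg.norm(w))
print("fraction of +1 per code:", B.mean(axis=0) * 0.5 + 0.5)
```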
By using similar VD structures (While using VD structures to generate binary weight codes allows parallel sparse-to-dense matrix conversion, it requires the quantization method to satisfy a specific condition to minimize accuracy DISPLAYFORM1 Proposed process of sparse-to-dense matrix conversion for the Viterbi-based compressed matrix. This figure shows an example such that weight values and weight index values are generated by independent Viterbi decompressors simultaneously.loss after Viterbi-based encoding. It is known that the VD structure acts as a random number generator BID13, which produces '0' and '1' with 50 % probability each. Thus, generated binary weight codes will be closer to the target binary weight codes if the target binary weight code matrix also consists of equal number of '0' and '1'. Interestingly, the composition ratio of '-1' and '+1' in each b i, which was generated by the alternating quantization method, is 50 % each. It is because the weights in DNNs are generally initialized symmetrically with respect to '0' BID4 BID11 and the distribution is maintained even after training BID16 . The preferable output distribution of the alternating quantization implies that the probability of finding an output matrixb i close to b i with the Viterbi algorithm is high. For comparison, we measured the accuracy differences before and after Viterbi encoding for several quantization methods such as linear quantization BID16, logarithmic quantization BID19, and alternating quantization BID22 . When the Viterbi encoding is applied to the weight quantized by alternating quantization BID22, the validation accuracy degrades by only 2 %. However, accuracy degrades by 71 % when the Viterbi encoding is applied to the weight quantized using other methods BID16 BID19 . The accuracy difference mainly comes from the uneven weight distribution. Because weights of neural networks usually have normal distribution, the composition ratio of '0' and '1' is not equal when the linear or logarithmic quantization is applied to the weights unlike alternating quantization. Another important idea to increase the probability of finding "good" Viterbi encoded weight is to consider the pruned parameters in b i as "Don't Care" terms ( FIG2 . The "Don't Care" elements can have any values when findingb i, because they will be masked by the zero values in the index matrix generated by the Viterbi pruning. Next, let us describe how we use the Viterbi algorithm for weight encoding. We select theb i that best matches with b i among all possibleb i cases that the VD can generate, as follows. We first construct a trellis diagram as shown in FIG3 . The trellis diagram is a state diagram represented A cost function for each transition using path and branch metrics is set and computed in the next step. The branch metric λ i,j t is the cost of traveling along a transition from a state i to the successor state j at the time index t. The path metric is expressed as DISPLAYFORM2 DISPLAYFORM3 where i1 and i2 are two predecessor states of j. Equation 1 denotes that one of the two possible transitions is selected to maximize the accumulated value of branch metrics. 2 The branch metric is defined as DISPLAYFORM4 To maintain the accuracy, the number of incorrect bits in the encoded binary codeb i compared to the original binary code b i needs to be minimized. 
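The encoding step itself is a dynamic program over the trellis. The sketch below implements it for an assumed toy decompressor — an m-bit shift register whose N_o outputs are XOR taps over the register and the incoming bit; the real hardware generator may differ — and selects the input bit stream whose outputs agree with the target code b_i on as many surviving positions as possible, treating pruned positions as don't-cares.

```python
# Sketch of the Viterbi-based binary-code encoding step. The decompressor is an
# assumed toy structure (m-bit shift register, XOR-tap outputs); the taps below
# are arbitrary. Dynamic programming picks the input bit stream whose emitted
# code matches the target on as many unmasked (surviving) positions as possible.
import numpy as np

def viterbi_encode(target, mask, taps, m):
    """target: (T, N_o) in {-1,+1}; mask: (T, N_o) in {0,1}, 1 = surviving weight;
    taps: list of N_o integer bitmasks over the (m+1)-bit word (input bit, register)."""
    T, No = target.shape
    S = 1 << m
    emit = np.empty((S, 2, No))                           # outputs for every (state, input)
    for s in range(S):
        for u in (0, 1):
            reg = (u << m) | s
            emit[s, u] = [1.0 if bin(reg & tp).count("1") % 2 else -1.0 for tp in taps]
    score = np.full(S, -np.inf)
    score[0] = 0.0                                        # start from the all-zero state
    back = np.zeros((T, S, 2), dtype=np.int64)            # (previous state, input bit)
    for t in range(T):
        new = np.full(S, -np.inf)
        for s in range(S):
            if score[s] == -np.inf:
                continue
            for u in (0, 1):
                ns = ((s << 1) | u) & (S - 1)
                gain = np.sum(mask[t] * (emit[s, u] == target[t]))
                if score[s] + gain > new[ns]:
                    new[ns] = score[s] + gain
                    back[t, ns] = (s, u)
        score = new
    s = int(np.argmax(score))                             # trace back the best path
    out = np.empty_like(target)
    for t in range(T - 1, -1, -1):
        ps, u = back[t, s]
        out[t] = emit[ps, u]
        s = ps
    return out, score.max()

rng = np.random.default_rng(8)
T, No, m = 64, 4, 6
taps = [0b1010011, 0b0110101, 0b1101001, 0b0011110]       # arbitrary XOR taps
b = rng.choice([-1.0, 1.0], size=(T, No))                 # target binary code
mask = (rng.random((T, No)) < 0.25).astype(float)         # ~75% pruned -> don't care
b_hat, matched = viterbi_encode(b, mask, taps, m)
print("matched / surviving positions:", int(matched), "/", int(mask.sum()))
```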
Thus, we retrain the network withŴ P Q = k i=1 α ibi M (M is the index matrix of non-zeros in W), apply the alternating quantization, and then perform the Viterbi encoding repeatedly (Figure 1). By repeating the retraining, quantization, and Viterbi encoding, the number of incorrect bits betweenb i and b i can be reduced because the parameters in the network are fine-tuned close to parameters inŴ P Q. During the retraining period, we apply the straight-through estimate BID20, i.e. ∂C ∂Ŵ P Q = ∂C ∂W as adopted in BID22. After the last Viterbi encoding is finished, small amount of components inb i can be still different from the corresponding values in b i. To maintain the accuracy, location data for the incorrect components are stored separately and are used to flip the corresponding VD encoded bits during on-chip weight reconstruction period. In our experiments, the memory requirement for the correction data was negligible. After the retraining is finished, we can obtain a compressed parameter in Viterbi Weight Matrix (VWM) format, which includes DISPLAYFORM0, compressed index in VCM format, and indices where DISPLAYFORM1. Note that entire training process used the training dataset and the validation dataset only to decide the best compressed weight data. The accuracy measurement for the test dataset was done only after training is finished so that any hyperparameter was not tuned on the test dataset. All the experiments in this paper followed the above training principle. We first conduct experiments on Penn Tree Bank (PTB) corpus BID17. We use the standard split version of PTB corpus with 10K vocabulary BID18, and evaluate the performance using perplexity per word (PPW). We pretrain the RNN model 3 which contains 1 layer of LSTM with 600 memory units, then prune the parameters of LSTMs with 80 % pruning rate using the Viterbi-based pruning technique 4 and retrain the model. Then, we quantize the parameters of LSTMs using alternating quantization technique, encode the binary weight codes by using the Viterbi algorithm, and retrain the model. We repeat the quantization, binary code encoding, and retraining process 5 times. We quantize the LSTM model with different numbers of quantization bits k with the fixed N o = 5. As k increases, PPW is improved, but the memory requirement for parameters is also increased (TAB0). Note that k = 3 is the minimum number of bits that minimizes the model size without PPW degradation. Compared to BID14, further quantization and Viterbi-based compression reduce the parameter size by 78 % to 90 % TAB0. We compress the binary weight codes with different number of VD outputs N o in case of k = 3. As N o increases, PPW degrades while the memory requirement for parameters is increased TAB0. Large N o implies that the binary weight codes are compressed with high compression ratio 1/N o, but the similarity betweenb i and b i decreases. The optimal N o is 100/(100-pruning rate (%)), where the average number of survived parameters per N o serial parameters is 1 statistically, which in no model performance degradation. Effectiveness of "Don't Care": To verify the effectiveness of using the "Don't Care" elements, we apply our proposed method on the original network and pruned one. While the pruned network maintains the original PPW after applying our proposed compression method, applying our method to the dense network degrades PPW to 102.6. This is because the ratio of incorrect bits betweenb i and b i decreases from 28.3 % to 1.7 % when we use the sparse b i. 
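The straight-through estimate used during retraining amounts to a one-line trick in an autograd framework: the forward pass sees the quantized and encoded weight, while the backward pass treats the encoder as the identity. A minimal PyTorch sketch follows; quantize_fn is a placeholder for the alternating-quantization-plus-Viterbi step, here replaced by 1-bit quantization purely for illustration.

```python
# Minimal PyTorch sketch of the straight-through estimate: forward uses the
# quantized weight, backward treats quantization as the identity. quantize_fn is
# a placeholder for the alternating-quantization + Viterbi encoding step.
import torch

def straight_through(w, quantize_fn):
    w_hat = quantize_fn(w)
    # Forward value is w_hat; gradient of the expression w.r.t. w is the identity.
    return w + (w_hat - w).detach()

# Toy usage with 1-bit quantization standing in for the full encoder.
w = torch.randn(8, requires_grad=True)
out = straight_through(w, lambda t: t.sign() * t.abs().mean()).sum()
out.backward()
print(w.grad)          # all ones: the gradient passed straight through the quantizer
```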
Therefore, combination of the Viterbi pruning and alternating quantization increases the probability of findingb i close to b i using the VD for weight encoding, which in no PPW degradation. The latest RNN for language modeling: We further test our proposed method on the latest RNN model BID23, which shows the best perplexity on both PTB and WikiText-2 (WT2) corpus. We prune 75 % of the parameters in three LSTMs with the same condition as we prune the above 1-layer LSTM model, and quantize them to 3 bits (k = 3). Note that we do not apply fine-tuning and dynamic evaluation BID12 in this experiment. The compression in TAB1 shows that the memory requirements for the models are reduced by 94.7 % with our VWM format on both PTB and WT2 corpus without PPW degradation. This implies that our proposed compression method can be applied regardless of the depth and size of the network. Detailed experiment settings and compression are described in Appendix A.3. In addition, we extend our proposed method to the RNN models for machine translation, and its experimental are presented in Appendix A.4. We also apply our proposed method to a CNN, VGG-9 (2×128C3 -2×256C3 -2×512C3 -2×1024FC -10SM 5) on CIFAR-10 dataset to verify the proposed technique is valid for other types of DNNs. We randomly select 5 K validation images among 50 K training images in order to observe validation error during retraining process and measure the test error after retraining. We use k = 3 for all layers. Optimal N o for each layer is chosen based on the pruning rate of the parameters; N o = 4 for convolutional layers, N o = 25 for the first two fully-connected layers, and N o = 5 for the last fully-connected layer. We also compute the memory requirement for other compression methods. Experimental on VGG-9 is found in Table 3. Compared to BID8, the VWM format generated by the proposed scheme has 39 % smaller memory footprint due to the compressed indices, smaller number of bits for quantization, and encoded binary weight codes. This experiment on CIFAR-10 shows that our proposed method can be applied to DNNs with various types and sizes. Meanwhile, it can be seen that the combination of the Viterbi-pruning BID14 ) with the alternating quantization BID22 ) requires 10% smaller memory requirement than the VWM format because the VWM format requires additional memory for indices where DISPLAYFORM0 However, additional "Viterbi-based binary code encoding" process for the VWM format allows parallel sparse-to-dense matrix conversion, which increases the parameter feeding rate up to 40.5 % compared to BID14. In Section 4.3, we analyze the speed of sparse-to-dense matrix conversion in detail. a) Non-zero values are represented as 32-bit floating point numbers. b) Convolution filters are quantized to 8-bit, and weights of fully-connected layers and indices of sparse matrices are quantized to 5-bit, which is the same quantization condition as the condition used in BID8. c) For the Conv1 layer, pruning is not applied and only the alternating quantization is applied. We built a cycle-level simulator for the weight matrix reconstruction process of the proposed format to show that the sparse matrix-matrix multiplications with the proposed method can be done fast with parallel reconstruction of dense matrix. 
In the simulator, baseline structure feeds two dense input matrices to processing elements (PEs) using raw data fed by DRAM FIG5 ), while the proposed structure reconstructs both index masks and binary codes using the highly compressed data fed by DRAM and sends the reconstructed values to PEs FIG5 ). Both index masks and binary codes are reconstructed by several Viterbi encoders in parallel, and bit errors in binary codes are corrected in a serial manner using the small number of flip-bit related data, which are received from DRAM. Simulation show that the feeding rate of the proposed scheme is 20.0-106.4 % higher than the baseline case and 10.3-40.5 % higher than BID14, depending on the pruning rate (FIG5). The gain mainly comes from the high compression rate and parallel reconstruction process of the proposed method. As shown in FIG5, higher sparsity leads to higher feeding rate. Higher sparsity allows using many VD outputs for the index N ind, and increasing N ind leads to faster reconstruction. Also, the reconstruction rate of binary codes becomes higher with reduced number of non-zero values and corresponding bit corrections. (c) Rate of parameter feeding into PEs for the proposed scheme compared to those for the baseline structure, which receives the dense matrix data directly from DRAM, and BID14. We assumed the number of VD outputs for the index N ind = 3, 4, 5, 6, 10, 10 respectively as the reciprocal of each sparsity value. We used N ind = 10 for 95 % sparsity since we compressed matrices with over 90 % sparsity with N ind = 10. We also assumed k = 3, and 1 % bit-wise difference betweenb i and b i during simulation. We also assumed that 16 non-zero parameters can be fed into the PE array in parallel and DRAM requires 10 cycles to handle a 256 bit READ operation. We proposed a DNN model compression technique with high compression rate and fast dense matrix reconstruction process. We adopted the Viterbi-based pruning and alternating multi-bit quantization technique to reduce the memory requirement for both non-zeros and indices of sparse matrices. Then, we encoded the quantized binary weight codes using Viterbi algorithm once more. As the non-zero values and the corresponding indices are generated in parallel by multiple Viterbi encoders, the sparse-to-dense matrix conversion can be done very fast. We also demonstrated that the proposed scheme significantly reduces the memory requirements of the parameters for both RNN and CNN. A APPENDIX A.1 PRUNING USING THE VITERBI ALGORITHM In Viterbi-based pruning scheme, the binary outputs generated by a Viterbi Decompressor (VD) are used as the index matrix that indicates whether a weight element is pruned ('0') or not ('1'). Suppose the number of elements in a target weight matrix is q, and the number of outputs generated by a VD at each time step is N ind, then only 2 q/N ind binary matrices can be generated by the VD among all 2 q binary matrices. The index matrix which minimizes the accuracy loss should be selected among binary matrix candidates which VD can generate in this pruning scheme, and the Viterbi algorithm is used for this purpose. The overall pruning process is similar to the binary weight encoding process using the Viterbi algorithm in Section 3.3. First, Trellis diagram (FIG3) of the VD which is used for pruning is constructed, and then the cost function is computed by using the path metric and the branch metric. 
The same path metric shown in Equation 1 in Section 3.3 is used to select the branch which maximizes the path metric between two connected branches from the previous states. On the other hand, a different branch metric λ i,j t is used for pruning, which is expressed as: DISPLAYFORM0 where W i,j,m t is the magnitude of a parameter at the m th VD output and time index t, normalized by the maximum absolute value of all elements in target weight matrix, and TH p is the pruning threshold value determined heuristically. As β i,j,m t gives additional points (penalties) to the parameters with large magnitude to survive (be pruned), the possibility to prune small-magnitude parameters is maximized. S 1 and S 2 are the scaling factors which is empirically determined. BID14 uses 5.0 and 10 4 each). After computing the cost function through the whole time steps, the state with the maximum path metric is chosen, and we trace the previous state by selecting the surviving branch and corresponding indices backward until the first state is reached. The ideal pruning rate of the Viterbi-based pruning is 50 %, because the VD structures act like random number generator and the probability to generate'0' or'1' is 50 % each. For various pruning rates, comparators and comparator threshold value, TH c, are used. A N C -bit comparator receives N c VD outputs and generates 1-bit whether the value made by the combination of the received VD outputs (e.g. {out 1, out 2, · · ·, out N ind} where out i indicates the i th VD output) is greater than TH c or not. For example, suppose a 4-bit comparator is used to the VD in Figure 1 and TH c = 3, then the probability for the comparator to generate'1' is 25%(= (3 + 1)/2 4 ) and this percentage is the target pruning rate. Comparators and TH c control the value of pruning rates and the index compression ratio decreases by 1/N c times. It is reported that a low N ind is desired to prune weights of convolutional layers while high N ind can be used to prune the weights of fully-connected layers because of the trade-off between the index compression ratio and the accuracy BID14. Thus, in our paper, we use N ind = 50 and N c = 5 to prune weights of LSTMs and fully-connected layers in VGG-6 on CIFAR-10. On the other hand, we use N ind = 10 and N c = 5 to prune weights of convolutional layers in VGG-6 on CIFAR-10. The RNN model in BID23 is composed of three LSTM layers, and use various learning techniques such as mixture-of-softmaxes (MoS) to achieve better perplexity. As shown in TAB0, the parameters in the first layer have high sparsity, so we use N o = 6. In the remaining layers, however, we use N o = 3 because the parameters are pruned with only about 70 % pruning rate. We repeat the process of quantization, binary code encoding, and retraining only once. We also extend our experiments on the RNN models for machine translation 6. We use the model which consists of an encoder, a decoder and an attention layer. 4-layer LSTMs with 1024 units compose each encoder and decoder. A bidirectional LSTM (BiLSTM) is used for the first layer of the encoder. The weights of LSTM models are pruned with 75 % pruning rate by the Viterbi-based pruning techinque, then k = 4 is used for quantization. Optimal N o values are used according to the sparsity of each LSTM layer (i.e. 3 ≤ N o ≤ 6 is enough to encode binary weight codes with 70 -83% of sparsity). The process of quantization, binary code encoding, and retraining is repeated only once in this case, too. 
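The comparator arithmetic above ((TH_c + 1)/2^{N_c} = 25 % for a 4-bit comparator with TH_c = 3) can be checked in a few lines; the "value ≤ TH_c gives 1" polarity below is chosen so that the worked example in the text comes out, and may be inverted in the actual hardware description.

```python
import numpy as np

def comparator_one_rate(n_c, th_c):
    # Probability that the integer formed by n_c uniform VD output bits is <= th_c,
    # i.e. the fraction of '1' outputs the text identifies with the target pruning rate.
    return (th_c + 1) / 2 ** n_c

print(comparator_one_rate(4, 3))                        # 0.25

bits = np.random.randint(0, 2, size=(100000, 4))        # simulated VD output bits
values = bits @ (2 ** np.arange(4))
print((values <= 3).mean())                             # ~0.25 empirically
```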
As shown in TAB5, we reduce the memory requirement of each baseline model by 93.5 % using our proposed technique. This experiment shows that our proposed scheme can be extended to RNNs for other complex tasks. | We present a new weight encoding scheme which enables high compression ratio and fast sparse-to-dense matrix conversion. | 1,026 | scitldr
The use of deep learning models as priors for compressive sensing tasks presents new potential for inexpensive seismic data acquisition. An appropriately designed Wasserstein generative adversarial network is designed based on a generative adversarial network architecture trained on several historical surveys, capable of learning the statistical properties of the seismic wavelets. The usage of validating and performance testing of compressive sensing are three steps. First, the existence of a sparse representation with different compression rates for seismic surveys is studied. Then, non-uniform samplings are studied, using the proposed methodology. Finally, recommendations for non-uniform seismic survey grid, based on the evaluation of reconstructed seismic images and metrics, is proposed. The primary goal of the proposed deep learning model is to provide the foundations of an optimal design for seismic acquisition, with less loss in imaging quality. Along these lines, a compressive sensing design of a non-uniform grid over an asset in Gulf of Mexico, versus a traditional seismic survey grid which collects data uniformly at every few feet, is suggested, leveraging the proposed method. Conventional computational recovery is suffered from undesired artifacts such as over-smoothing, image size limitations and high computational cost. The use of deep generative network (GAN) models offers a very promising alternative approach for inexpensive seismic data acquisition, which improved quality and revealing finer details when compared to conventional approaches or pixel-wise deep learning models. As one of the pioneers to apply a pixel inpainting GAN on large, real seismic compressed image recovery, we contributes the following points: 1) Introduction of a GAN based inpainting model for compressed image recovery, under uniform or nonuniform sampling, capable to recover the heavily sampled data efficiently and reliably. 2) Superior model for compressive sensing on uniform sampling, that performs better than the originial network and the state-of-the-art interpolation method for uniform sampling. 3) Introduction of an effective, non-uniform, sampling survey recommendation, leveraging the GIN uniform sampling reconstructions and a hierarchical selection scheme. Compressed image recovery can be stated as a missing pixel inpainting problem: given an incomplete image, filling the missing trace values. Using historical images of the uncompressed dataset, we train a data-driven deep learning model, utilizing the raw image as ground truth and the binary mask to indicate the locations of the missing pixels. Once trained, one can test the model's performance on any incomplete image from a different dataset. We used Compression Rate (CR) to define the proportion between uncompressed data size and compressed data size. The main challenges in using an inpainting model to solve the seismic image sampling problems are: 1) seismic images have significantly different statistical characteristics, such as texture-based patterns and wide range of frequencies, compared to natural images; 2) the largest number of unknown pixels is only ¼ of the full image in the original network, whereas in our task covers at least ½ of the image (i.e. CR=2); 3) the known regions in the compressed image are sparsely distributed, contrary to the compact ones in the general inpainting problems. To address these problems, we will modify the original network and employ it on different experiments. 
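For concreteness, the sketch below builds the kind of binary trace-sampling mask used as the missing-pixel indicator, here for uniform sampling, and checks the resulting compression rate; the image size and CR value are examples only.

```python
import numpy as np

def uniform_trace_mask(height=256, width=256, cr=4):
    # 1 = acquired trace (column), 0 = missing trace to be inpainted.
    mask = np.zeros((height, width), dtype=np.float32)
    mask[:, ::cr] = 1.0
    return mask

mask = uniform_trace_mask(cr=4)
image = np.random.randn(256, 256).astype(np.float32)    # stand-in for a seismic crop
masked_input = image * mask                              # what the network sees
print("CR =", mask.size / mask.sum())                    # 4.0
```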
The Generative Inpainting Network with Contextual Attention (GIN) is a feed-forward GAN, combining and outperforming several state-of-the-art approaches including context encoders, dilated convolutions of inpainting Error! Reference source not found., Wasserstein GAN and its improvement WGAN-GP. The architecture of GIN is composed by a coarse network recovering the general features for the missing traces, and a refinement network which further reconstructs the detail. Especially, the Contextual Attention in the latter, does not only learn from the known pixels surrounding the masked image area, but it also looks for useful patches from other known image locations. In our Generative Inpainting Network for Compressive Sensing (named as GIN-CS), we replace the single bounding boxes, used in the incomplete image generation, by predefined binary masks. On the one hand, a binary mask could be regarded as the combination of non-adjacent bounding boxes, so that the multiple edges lose the continuity of original spatial relations. On the other hand, the maximum width of connected missing traces is generally small, so the edge effect would be ignored over the global image size. We used a small portion of an internal offshore dataset to train the network, where 5000 of the processed offshore seismic images are cropped into 256×256 and mix the in-line and cross-line cases. There are two ways to arrange the training masks: random single bounding boxes, as in the original GIN or predefined binary sampling masks, as in our modified GIN-CS. For testing, both methods use binary masks. The testing seismic dataset was collected from the Gulf of Mexico (GoM), on the courtesy of TGS. By comparing the performance of our modified GIN-CS with the original GIN and the conventional biharmonic method in the same CR, our model demonstrates the overall superior performance in terms of Mea Square Error (MSE) and Structural Similarity (SSIM) index in all CR cases (Table 1). Also, the GIN related methods are much faster than the traditional method and hardly influenced by the value of CR. Focusing on one trace from the testing image, our model's prediction aligns better with the ground truth relatively to the GIN (Figure 1.1). Although for CR=8,16, our method does not get better performance in terms of PSNR (Table 1. 3), it still generates closer-to-real seismic images without adding additional artificial noise(GIN) or creating blurry fillings (biharmonic), as seen in Figure 1. In order to construct a non-uniform optimal sampling survey set-up, we propose a sampling recommendation approach that leverages the fast implementation of image reconstruction with GIN. This is an efficient non-uniform sampling recommendation method based on hierarchical uniform sampling, which requires only a small number of sampling test cases. Noted that our recommended sampling method does not consider the connected sampling crossing section width, and the effectiveness highly relies on the performance of GIN. 1) Mask Generation. For a given uncompressed seismic image, we designed a set of binary masks that complementary to the whole masks with equal bin width b∈{1,2,4,8} of each groups 2) Difference map generation. The image is tested with all the designed compression cases. Then, we create the corresponding error matrix by calculating the pixel-wise square error of the reconstructions compared with ground truth. Then, summing them up to form a complete image difference map and calculate its trace-wise mean vector. 
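Steps 1-2 of the recommendation procedure reduce to a few array operations; the sketch below (with stand-in data and our own function name) accumulates the pixel-wise squared errors of several reconstructions and collapses them to one mean value per trace, before the per-trace splitting described next.

```python
import numpy as np

def trace_wise_difference(ground_truth, reconstructions):
    # Sum of squared-error maps over all tested compression cases,
    # reduced to one mean value per trace (column).
    diff_map = np.zeros_like(ground_truth)
    for rec in reconstructions:
        diff_map += (rec - ground_truth) ** 2
    return diff_map.mean(axis=0)

gt = np.random.randn(256, 256)
recs = [gt + 0.1 * np.random.randn(256, 256) for _ in range(4)]  # stand-in reconstructions
trace_error = trace_wise_difference(gt, recs)
print(trace_error.shape)                                          # (256,)
```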
We further split the vector into individual difference values for each trace. Smaller value indicates better reconstruction at trace. 3) Initial candidate traces generation. In order to compare for each trace without breaking the unknown connected traces, we introduce the observed interval ∈ {2,4,8} to distinguish from bin width. The step mean difference is then defined by simply replacing the actual difference value as its mean value over every interval For an observed interval, the initial recommend candidate traces ′ are the first trace indexes that reach the smallest step mean difference, when =, i.e. All candidate traces over the interval form = {′ } as a subset of all traces. One trace might be a candidate for selection from several intervals, or might never become a candidate for selection. 4) Top-to-bottom hierarchical sorting. To avoid repetitive trace selection, a two-order sorting on all the candidate traces is implemented with the descending order of interval followed by ascending order of step mean. The traces in the higher orders are selected firstly and all the adjacent traces within all the intervals has been removed corresponding until the total missing trace reaching the limitation of CR or empty traces remains. The elements in the final set of are the prospective missing traces, which are easy to be recovered by GIN. We have compared the reconstruction performance of our recommended sampling survey with an average of 100 random samplings, and report the improvement in Table 2. Moreover, we compared the uniform sampling and sampling recommendation separately on the same GoM dataset as the 2D depth view in Figure 1Error! Reference source not found. shown. The of cross-line and in-line is combined by taking the average of the overlapped regions, in order to mimic the actual sampling in both dimensions at the same time. All the cross-line and in-line sampled images are stacked together to form a 3D reconstruction of the whole block, showing its 2D depth view in the figure. The recommend sampling points are densely distributed in regions with lithologic features and sparsely distributed in channelized regions. This successfully captures the heterogenetic of the seismic image. We designed and implemented a modification of the GIN model, the GIN-CS, and successfully tested its performance on uniform samplings with compression rates ×2, ×4, ×8, ×16. GIN-CS demonstrates superior reconstruction performance relatively to both the original GIN and the conventional biharmonic method. More precisely, we show that seismic imaging can be successfully recovered by filling the missing traces, revealing finer details, even in high compression rate cases. In addition, the proposed method runs approximately 300 times faster than the conventional method. Finally, a strategy for constructing a recommendation of non-uniform survey is proposed for a field dataset from Gulf of Mexico, based on our from a combination of limited amount of uniform sampling experiments. | Improved a GAN based pixel inpainting network for compressed seismic image recovery andproposed a non-uniform sampling survey recommendatio, which can be easily applied to medical and other domains for compressive sensing technique. | 1,027 | scitldr |
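Referring back to steps 3-4 of the sampling-recommendation procedure above, the sketch below gives one possible reading of the candidate generation and top-to-bottom sorting: block-mean scoring per observed interval, ordering by descending interval then ascending step mean, and greedy selection up to the CR budget. The exact tie-breaking and interval handling in the paper may differ.

```python
import numpy as np

def recommend_missing_traces(trace_error, intervals=(2, 4, 8), budget=128):
    n = trace_error.size
    candidates = []
    for itv in intervals:                       # observed intervals
        for start in range(0, n - itv + 1, itv):
            step_mean = trace_error[start:start + itv].mean()
            candidates.append((itv, step_mean, start))
    # Descending interval first, then ascending step-mean difference.
    candidates.sort(key=lambda c: (-c[0], c[1]))
    missing = set()
    for itv, _, start in candidates:
        block = set(range(start, start + itv))
        if len(missing | block) <= budget:
            missing |= block
        if len(missing) >= budget:
            break
    return sorted(missing)                       # traces recommended for omission

trace_error = np.random.rand(256)
print(len(recommend_missing_traces(trace_error)))   # <= 128 (CR = 2 here)
```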
In recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast to these settings, success in the real world commonly requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative. In the last year, the card game Hanabi has been established as a new benchmark environment for AI to fill this gap. In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e. the ability to effectively reason over the intentions, beliefs and point of view of other agents when observing their actions. Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning (RL): Fundamentally, RL requires agents to explore in order to discover good policies. However, when done naively, this randomness will inherently make their actions less informative to others during training. We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction exploiting the centralized training phase. During training SAD allows other agents to not only observe the (exploratory) action chosen, but agents instead also observe the greedy action of their team mates. By combining this simple intuition with an auxiliary task for state prediction and best practices for multi-agent learning, SAD establishes a new state of the art for 2-5 players on the self-play part of the Hanabi challenge. Humans are highly social creatures and spend vast amounts of time coordinating, collaborating and communicating with others. In contrast to these, at least partially, most progress on AI in games has been in zero-sum games where agents compete against each other, typically rendering communication futile. This includes examples such as Go 2018), poker (; Moravčík et al., 2017;) and chess . This narrow focus is unfortunate, since communication and coordination require unique abilities. In order to enable smooth and efficient social interactions of groups of people, it is commonly required to reason over the intents, points of views and beliefs of other agents from observing their actions. For example, a driver can reasonably infer that if a truck in front of them is slowing down when approaching an intersection, then there is likely an obstacle ahead. Furthermore, humans are both able to interpret the actions of others and can act in a way that is informative when their actions are being observed by others, capabilities that are commonly called theory of Mind (ToM),. Importantly, in order to carry out this kind of reasoning, an agent needs to consider why a given action is taken and what this decision indicates about the state of the world. Simply observing what other agents are doing is not sufficient. While these abilities are particularly relevant in partially observable fully cooperative multi-agent settings, ToM reasoning clearly matters in a variety of real world scenarios. For example, autonomous cars will likely need to understand the point of view, intents and beliefs of other traffic participants in order to deal with highly interactive settings such as 4-way crossing or dense traffic in cities. 
Hanabi is a fully cooperative partially-observable card game that has recently been proposed as a new benchmark challenge problem for AI research to fill the gap around ToM. In Hanabi, players need to find conventions that allow them to effectively exchange information from their local observations through their actions, taking advantage of the fact that actions are observed by all team mates. Most prior state-of-the-art agents for Hanabi were developed using handcrafted algorithms, which beat off-the-shelf deep multi-agent RL methods by a large margin. This makes intuitive sense: Beyond the "standard" multi-agent challenges of credit assignment, non-stationarity and joint exploration, learning an informative policy presents an additional fundamentally new conflict. On the one hand, an RL agent needs to explore in order to discover good policies through trial and error. On the other hand, when carried out naively, this exploration will add noise to the policy of the agent during the training process, making their actions strictly less informative to their team mates. One possible solution to this is to explore in the space of deterministic partial policies, rather than actions, and sample these policies from a distribution that conditions on a common knowledge Bayesian belief. This is successfully carried out in the Bayesian Action Decoder (BAD), the only previous Deep RL method to achieve a state of the art in Hanabi. While this is a notable accomplishment, it comes at the cost of simplicity and generality. For a start, BAD requires an explicit common knowledge Bayesian belief to be tracked, which not only adds computational burden due to the required sampling steps, but also uses expert knowledge regarding the game dynamics. Furthermore, BAD, as presented, is trained using actor-critic methods which are sample inefficient and suffer from local optima. In order to get around this, BAD uses population based training, further increasing the number of samples required. Lastly, BAD's explicit reliance on common knowledge limits the generality of the method. In this paper we propose the Simplified Action Decoder (SAD), a method that achieves a similar goal to BAD, but addresses all of the issues mentioned above. At the core of SAD is a different approach towards resolving the conflict between exploration and being interpretable, which, like BAD, relies on the centralized training with decentralized control (CT/DC) regime. Under CT/DC information can be exchanged freely amongst all agents during centralized training, as long as the final policies are compatible with decentralized execution. The key insight is that, during training we do not have to chose between being informative, by taking greedy actions, and exploring, by taking random actions. To be informative, the greedy actions do not need to be executed by the environment, but only need to be observed by the team mates. Thus in SAD each agent takes two different actions at each time step: One greedy action, which is not presented to the environment but observed by the team mates at the next time step as an additional input, and the "standard" (exploratory) action that gets executed by the environment and is observed by the team mates as part of the environment dynamics. Importantly, during greedy execution the observed environment action can be used instead of centralized information for the additional input, since now the agent has stopped exploring. 
Furthermore, to ensure that these greedy actions and observations get decoded into a meaningful representation, we can optionally train an auxiliary task that predicts key hidden game properties from the action-observation trajectories. While we note that this idea is in principle compatible with any kind of model-free deep RL method with minimal modifications to the core algorithm, we use a distributed version of recurrent DQN in order to improve sample efficiency, account for partial observability and reduce the risk of local optima. We also train a joint-action Q-function that consists of the sum of per-agent Q-values to allow for off-policy learning in this multi-agent setting using Value Decomposition Networks (VDN) . Using SAD we establish a new SOTA for 2-5 players in Hanabi, with a method that not only requires less expert knowledge and compute, but is also more general than previous approaches. In order to ensure that our can be easily verified and extended, we also evaluate our method on a proof-of-principle matrix game and plan to open-source our training code and agents. Beyond enabling more research into the self-play aspect of Hanabi, we believe these resources will provide a much needed starting point for the ad-hoc teamwork part of the Hanabi challenge. Our work relates closely to research on emergent communication protocols using deep multi-agent RL, as first undertaken by and. There has been a large number of follow-up papers in this area, so listing all relevant work is beyond the scope and we refer the reader to , a recent survey on deep multi-agent RL. One major difference to our work is that the environments considered typically contain a cheap-talk channel, which can be modeled as a continuous variable during the course of training. This allows agents to, for example, use differentiation across the communication channel in order to learn protocols. In contrast, in our setting agents have to communicate through the observable environment actions themselves, requiring fundamentally different methods. Furthermore, our work is an example of cooperative multi-agent learning in partially observable settings under centralized training and decentralized control. There have been a large number of papers in this space, with seminal work including MADDPG and COMA (a), both of which are actor-critic methods that employ a centralized critic with decentralized actors. Again, we refer the reader to for a more comprehensive survey. Until 2018, work on Hanabi had been focused on hand-coded methods and heuristics. Some relevant examples include SmartBot and the so-called "hat-coding" strategies, as implemented by WTFWThat . These strategies use the information theoretic ideas that allow each hint to reveal information to all other agents at the same time. While they do not perform well for 2-player Hanabi due to the smaller action space, they get near perfect scores for 3-5 players. In contrast, so far learning methods have seen limited success on Hanabi. undertake a systematic evaluation of current Deep RL methods for 2-5 players in two different regimes and open-source the Hanabi-Learning-Environment (HLE) to foster research on the game. They evaluate a feed-forward version of DQN trained on 100 million samples and a recurrent actor-critic agent with population based training using 20 billion samples. Notably both agents achieve near 0% win rate for 3-5 players in Hanabi. At a high level their DQN agent is a good starting point for our work. 
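Referring back to the joint-action Q-function mentioned above: a minimal sketch of the VDN-style loss, where per-agent Q-values are summed and trained with a double-Q style bootstrap. Tensor shapes, the Huber loss choice and the omission of terminal-state masking are our simplifications.

```python
import torch
import torch.nn.functional as F

def vdn_td_loss(q, q_target, actions, greedy_next_actions, rewards, gamma=0.999):
    # q, q_target: [batch, n_agents, n_actions]; actions, greedy_next_actions: [batch, n_agents]
    q_joint = q.gather(2, actions.unsqueeze(-1)).squeeze(-1).sum(dim=1)
    with torch.no_grad():
        q_next = q_target.gather(2, greedy_next_actions.unsqueeze(-1)).squeeze(-1).sum(dim=1)
        target = rewards + gamma * q_next        # terminal masking omitted for brevity
    return F.smooth_l1_loss(q_joint, target)

B, N, A = 32, 2, 20
q = torch.randn(B, N, A, requires_grad=True)
loss = vdn_td_loss(q, torch.randn(B, N, A),
                   torch.randint(A, (B, N)), torch.randint(A, (B, N)), torch.randn(B))
loss.backward()
```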
However, since the authors did not propose any specific method of accounting for the issues introduced by -greedy exploration in a ToM task, they resorted to setting to zero after a short burn-in phase. The only state-of-the-art in Hanabi established by an RL agent is from which we refer to in more detail in Section 1 and Section 4. Recently there have also been attempts to train agents that are robust to different team-mates and even to extend to human-AI collaboration . Poker is another partially observable multi-agent setting, although it is fundamentally different due to the game being zero-sum. Recent success in Poker has extensively benefited from search (Brown et al.). Examples of using search in Hanabi include. For a more comprehensive review on previous on Hanabi we refer the reader to. In this paper we assume a Dec-POMDP , in which N agents interact in a partially observable environment. At each time step agent a ∈ 1..N obtains an observation, o a t = O(s t, a), where s t ∈ S is the Markov state of the system and O(s t, a) is the deterministic observation function. Since we are interested in ToM, in our setting the observation function includes the last action of the acting agent, which is observed by all other agents at the next time step. We note that actions are commonly observable not only in board games but also in some real world multi-agent settings, such as autonomous driving. For simplicity, we restrict ourselves to turn based settings, in which at each time step only the acting agents takes an action, u a t, which is sampled from their policy, u a ∼ π a θ (u a |τ a), while all other agents take a no-op action. Here τ a is the action-observation history of agent a, τ a = {o T is the length of the episode and θ are the weights of a function approximator that represents the policy, in our case recurrent neural networks, such as LSTMs . We further use τ t to describe the state-action sequence, τ = {s 0, u 0, r 1, ..r T, s T}, where u t is the joint action of all agents. As is typical in cooperative multi-agent RL, the goal of the agents is to maximize the total expected return, J θ = E τ ∼P (τ |θ) R 0 (τ), where R 0 (τ) is the return of the trajectory (in general R t (τ) = t ≥t γ t −t r t ) and γ is an optional discount factor. We have also assumed that agents are sharing parameters, θ, as is common in cooperative MARL. In Q-learning the agent approximates the expected return for a given state action-pair, s, u, assuming that the agent acts greedily with respect to the Q-function for all future time steps, Q(s, u) = E τ ∼P (τ |s,u) R t (τ), where τ = (s t, u t, r t+1, . . ., s T), u t = u and u t = arg max u Q(s t, u), ∀t > t. A common exploration scheme is -greedy, in which the agent takes a random action with probability and acts greedily otherwise. Importantly, the Q-function can be trained efficiently using the Bellman equation:, where for simplicity we have assumed a deterministic reward. In Deep Q-Learning (DQN) the Q-function is parameterized by a deep neural network and trained with transitions sampled from experience replay. In our work we also incorporate other best practice components of the last few years, including double-DQN (van), dueling network architecture and prioritized replay . We also employ a distributed training architecture similar to the one proposed by where a number of different actors with their own exploration rates collect experiences in parallel and feed them into a central replay buffer. 
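The Bellman update referred to above appears to have been lost in extraction; written out in the standard form (with the double-DQN variant used later, where the target network is denoted by the barred parameters), it reads as follows. This is our reconstruction, not a quote from the source.

```latex
Q(s_t, u_t) \;=\; r_{t+1} + \gamma \max_{u'} Q(s_{t+1}, u'),
\qquad
y_t \;=\; r_{t+1} + \gamma\, Q_{\bar{\theta}}\!\Big(s_{t+1}, \arg\max_{u'} Q_{\theta}(s_{t+1}, u')\Big).
```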
Since our setting is partially observable the natural choice for the function approximator is a recurrent neural network. A combination of these techniques was first explored by in single agent environments such as Atari and DMLab-30. Another common best-practice in RL are auxiliary tasks; , in which the agent produces extra output-heads that are trained on supervised task and optimized alongside the RL loss. The most straight forward application of Q-learning to multi-agent settings is Independent Q-Learning (IQL) in which each agent keeps an independent estimate of the expected return, treating all other agents as part of the environment. One challenge with IQL is that the exploratory behavior of other agents is not corrected for via the max operator in the bootstrap. Notably, IQL does typically not take any advantage of centralized training with decentralised control (CT/DC), a paradigm under which information can be exchanged freely amongst agents during the training phase as long as the policies rely only on local observations during execution. There are various approaches for learning joint-Q-functions in the CT/DC regime. For example, Value-Decomposition-Networks (VDN) represent the joint-Q-function as a sum of per-agent contributions and QMIX learns a non-linear but monotonic combination of these contributions. At the very core of interpreting the actions of another agent, and ToM in general, is Bayesian reasoning. Fundamentally, asking what a given action by another agent implies about the state of the world requires understanding of why this action was taken. To illustrate this, we start out with an agent that has a given belief about the state-action history of the world, τ t, given her own action-observation history τ a t: B(τ t) = P (τ t |τ a t). Next the agent observes the action u a t of her team mate, a, and carries out a Bayesian update: where, with a slight abuse of notation, we have used (and will keep using) O(a, τ t) for the actionobservation history, τ a t, that from applying the observation function for agent a to τ t at each time step. Note that for non-deterministic observation functions we would have to marginalize over P (τ a t |τ t). Clearly, since agents have access to the policy of their teammate during centralised training, we could in principle evaluate this explicit Bayesian belief. However, beyond the practical difficulty of computing this explicit belief, when it is used as an input to the policy it will lead to prohibitively costly higher order beliefs. The typical workout for this is a public belief over private features which only conditions on common knowledge and can therefore be calculated by all agents individually, we refer to Moravčík et al.;; Foerster et al. (2018b) for more details. Instead, in this work we rely on RNNs to learn implicit representations of the sufficient statistics over the distribution of the Markov state given the action-observation histories, noting that they are unlikely to recover exact beliefs due to the issues mentioned above. Next we illustrate the impact of exploration on the beliefs, which we will do in the explicit (exact) case, since it serves as an upper bound on the accuracy of the implicit beliefs. Since we are looking at fully-cooperative settings we assume that the optimal policy of the agent is deterministic and any randomness is due to exploration. 
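The Bayesian update mentioned earlier in this section seems to have dropped out of the extracted text; in the notation used here it would take the standard form below (our reconstruction, not a quote from the paper).

```latex
B'(\tau_t) \;=\; P(\tau_t \mid \tau^a_t, u^a_t)
\;=\; \frac{\pi^a\!\left(u^a_t \mid O(a,\tau_t)\right)\, B(\tau_t)}
           {\sum_{\tau'_t} \pi^a\!\left(u^a_t \mid O(a,\tau'_t)\right)\, B(\tau'_t)}.
```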
Given that we are focused on value based methods we furthermore assume an -greedy exploration scheme, noting that the same analysis can be extended to other methods. Under this exploration scheme π a (u a t |O(a, τ t)) becomes: where we have used u * (τ t) to indicate the greedy action of the agent a, u * (τ t) = arg max u Q a (u, O(a, τ t)) and I is the indicator function. While the first part corresponds to a filtering operator, in which the indicator function only attributes finite probability to those histories that are consistent with the action taken under greedy execution, the exploration term adds a fixed (history independent) probability, which effectively'blurs' the posterior: We find that the posterior includes an additional term of the form B(τ t) which carries over an unfiltered density over the trajectories from the prior. We further confirm that in the limit of = 1, the posterior collapses to the prior, P (τ t |τ a t, u a t) = B(τ t). This can be particularly worrisome in the context of our training setup, whereby different agents run different, and potentially high, throughout the course of training. It fundamentally makes the beliefs obtained less informative. While not making the above argument explicitly, the Bayesian Action Decoder (BAD), resolves this issue by shifting exploration to the level of deterministic partial policies, rather than action-level, and tracking an approximate Bayesian belief. As outlined in Section 1 this comes at a huge cost in the complexity of the method, the computation requirements and in the loss of generality of the method. In this paper we take a drastically simpler and different approach towards the issue. We note that the'blurring', which makes decoding of an action challenging, is entirely due to the -greedy exploration term. Furthermore, in order for another agent to do an implicit Bayesian update over an action taken, it is not required that this action is executed by the environment. Indeed, if we assume that other agents can observe the greedy action, u *, at every time step and condition their belief updated on this, the terms depending on disappear from the Bayesian update: Therefore, to have our cake and eat it, in the Simplified Action Decoder (SAD) the acting agent is allowed to'take' two actions at any given time step. The first action, u a, is the standard environment action, which gets executed as usual and is observed by all agents through the observation function at the next time step, as mentioned in Section 3. The second action, u *, is the greedy action of the active agent. This action does not get executed by the environment but instead is presented as an additional input to the other agents at the next time step during training, taking advantage of the centralized training regime during which information can be exchanged freely. Clearly we are not allowed to pass around extra information during decentralized control, but luckily this is not needed. Since we set to 0 at test time we can simply use the, now greedy, environment action obtained from the observation function as our greedy-action input. While this is most straight forward in settings where the last action is observed by other agents directly, in principle SAD can also be extended to settings where it is indirectly observed by all agents through the environment dynamics. In these cases we can replace the greedy-action side-channel with a learned inverse model that recovers the action from the observation history during execution. 
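The two-action mechanism just described fits in a few lines; the sketch below shows one acting-agent step during training (function name and epsilon-greedy bookkeeping are ours). The executed action follows the usual mixture (1 − ε)·greedy + ε·uniform, while the greedy action is what team mates receive as the extra input.

```python
import numpy as np

def sad_step(q_values, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    greedy_action = int(np.argmax(q_values))          # shown to team mates (extra input)
    env_action = greedy_action                        # executed in the environment
    if rng.random() < epsilon:
        env_action = int(rng.integers(len(q_values)))
    return env_action, greedy_action

q = np.random.randn(20)
env_a, greedy_a = sad_step(q, epsilon=0.1)
# env_a goes to the environment; greedy_a is appended (e.g. one-hot) to the other
# agents' observations at the next time step.  At test time epsilon = 0, so the
# observed environment action can be used in place of this side-channel input.
```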
Furthermore, to encourage the agent to meaningfully decode the information contained in the greedy action, we can also add an auxiliary task to the training process. While this idea is compatible with any deep RL algorithm with minimal modifications, we use a recurrent version of DQN with distributed training, dueling networks and prioritized replay. We also learn a joint Q-function using VDN in order to address the challenges of multi-agent off-policy learning, please see Section 3 for details on all of these standard methods. • Each player knows their card but can't see the other player's card. • Player 1 acts first. • Player 2 observes Player 1's action and acts second. We first verify the effectiveness of SAD in the two step, two player matrix game from, which replicates the communication through action challenge of Hanabi in a highly simplified setting. In this fully cooperative game each player obtains a privately observed'card', which is drawn iid from two options. After observing her card, the first player takes one of three possible discrete actions. Crucially, the second player observes both her own private card and the team mate's action before acting herself, which establishes the opportunity to communicate. The payout is a function of both the two private cards and the two actions taken by both agents, as shown in Figure 1. Importantly, there are some obvious strategies that do not require any communication. For example, if both player learn to play the 2nd action, the payout is always 8 points, independent of the cards dealt. However, if the players do learn to communicate it is possible to achieve 10 points for every pair of cards dealt. Hanabi is a fully cooperative card game in which all players work together to complete piles of cards referred to as fireworks. Each card has a rank, 1 to 5, and a color, G / B / W / Y / R. Each firework (one per color) starts with a 1 and is finished once the 5 has been added. There are three 1s, one 5 and two of all other ranks for each of the colors, adding up to a total of 50 cards in the deck. The twist in Hanabi is that while players can observe the cards held by their team mates, they cannot observe their own cards and thus need to exchange information with each other in order to understand what cards can be played. There are two main means for doing so: First of all, players can take grounded hint actions, in which they reveal the subset of a team mate's hand that matches a specific rank or color. An example hint is "Your third and fifth card are 1s". These hint actions cost scarce information tokens, which can be replenished by discarding a card, an action that both removes the card from the game and makes it visible to all players. Finally players can also choose to play a card. If this card is the next card for the firework of the corresponding color, it is added to the firework and the team scores one point. Otherwise the card is removed from the game, the identity is made public, and the team loses one of the 3 life tokens. If the team runs out of life tokens before the end of the game, all points collected so far are lost and the game finishes immediately. These rules in a maximum score of 5 × 5 = 25 points in any game, which corresponds to all five fireworks being completed with five cards per firework. To ensure reproducibility and comparability of our we use the Hanabi Learning Environment (HLE) for all experimentation. For further details regarding Hanabi and the self-play part of the Hanabi challenge please see. 
We borrow some ideas and insights from prior distributed Q-learning methods while bring extensions to MARL as well as innovations to improve throughput and efficiency. and , we use a distributed prioritized replay buffer shared by N asynchronous actors and a centralized trainer that samples mini-batches from the replay buffer to update the model. In each actor thread, we run K environment sequentially and batch their observations together. The observation batch is then fed into an actor that utilize a GPU to compute a batch of actions. All asynchronous actors share one GPU and the trainer uses another GPU for gradient computation and model updates. This is different from prior works which run single actor and single environment in each thread on a CPU. Our method enables us to run a very large number of simulations with moderate computation resources. In all Hanabi experiments, we run N = 80 actor threads with K = 80 environments in each thread on single machine with 40 CPU cores and 2 GPUs. Without this architectural improvement, it may require at least a few hundred CPU cores to run 6400 Hanabi environments with neural network agents and simulations have to be distributed across multiple machines, which will greatly hinder the reproducibility and accessibility of such research. Please refer to Appendix A for implementation details and hyper-parameters. 6.1 MATRIX GAME 0 2000 4000 6000 8000 10000 Epoch As we can see in Figure 2, even in our simple matrix game the greedy action input makes a drastic difference. With an average reward of around 9.5 points, tabular IQL does well in this task, matching the BAD from. However, just by adding the greedy action as an additional input, we obtain an average performance of 9.97 ± 0.02. Results are averaged over 100 seeds, and shading is s.e.m. The code is available here: www.bit.ly/2mBJLyk. As shown in Table 1, our findings from the matrix game are for the most part confirmed on the challenging Hanabi benchmark. To illustrate the contributions of the different components, we compare average scores and win rates across 13 independent training runs of SAD and three different options: IQL is simply the recurrent DQN agent with parameter sharing, VDN is the same agent but also learns a joint Q-function and finally SAD & AuxTask is the SAD agent with the auxiliary task. While we find that SAD significantly outperforms our baselines (IQL and VDN) for 2, 4 and 5 players in terms of average score and/or win rate, there is no significant difference for 3 players, where VDN matches the performance of SAD. Interestingly, the auxiliary task only significantly helps the 2-player performance, where it substantially boosts the average score and win rate. In contrast, it drastically hurts performance for 3-5 players, which opens an interesting avenue for future work. For completeness we have included training curves showing average scores and s.e.m. across all training runs for all numbers of players for our methods and ablations in Appendix B. We find that for 5 players the auxiliary task drastically reduces the variance of SAD and intermittently leads to higher performance during training but ultimately in lower final performance. We can also clearly see that despite 72 hours of training and billions of samples consumed, the performance has not plateaued for 3-5 players, pointing to an obvious avenue for further improvements. 
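Referring back to the batched actor threads described at the start of this section, here is a rough sketch of one such thread, assuming a gym-like reset/step interface and a feed-forward stand-in for the (in practice recurrent) policy network; replay writing, recurrent state handling and episode resets are left out.

```python
import torch

def actor_thread_loop(envs, policy_net, epsilon, device="cuda"):
    obs = [env.reset() for env in envs]                        # K environments per thread
    while True:
        batch = torch.as_tensor(obs, dtype=torch.float32, device=device)
        with torch.no_grad():
            q = policy_net(batch)                              # one forward pass for all K
        greedy = q.argmax(dim=1)
        explore = torch.rand(len(envs), device=device) < epsilon
        random_a = torch.randint(q.shape[1], (len(envs),), device=device)
        actions = torch.where(explore, random_a, greedy).tolist()
        obs = [env.step(a)[0] for env, a in zip(envs, actions)]
```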
The original numbers in the Hanabi challenge and BAD used population based training , effectively reporting maximum performance across a large number of different runs. Therefore, for reproducibility purposes, we report evaluations of the best model from our various training runs for each method in Table 2. As shown, under this reporting we establish a new SOTA for learning methods on the self-play part of the Hanabi challenge for 2-5 players, with the most drastic improvements being achieved for 3-5 players. In particular, we beat both the ACHA agent from and the BAD agent on average score, even though both of them used population based training and require more compute. We note that while we follow the counting convention proposed by the challenge paper, BAD was optimized for a different counting scheme, in which agents keep their scores when they run out of lives. This may explain the higher win rate (58.6%) of BAD combined with a relatively low mean score, which is exceeded even by our baseline methods. Once again, only the performance for 2-player is significantly improved by the auxiliary task and the 3-player setting is an outlier in the sense that SAD does not improve the best performance compared to VDN. In this paper we presented the Simplified Action Decoder (SAD), a novel deep multi-agent RL algorithm that allows agents to learn communication protocols in settings where no cheap-talk channel is available. On the challenging benchmark Hanabi our work substantially improves the SOTA for an RL method for all numbers of players. For two players SAD establishes a new high-score across any method. Furthermore we accomplish all of this with a method that is both simpler and requires less compute than previous advances. While these are encouraging steps, there is clearly more work to do. In particular, there remains a large performance gap between the numbers achieved by SAD and the known performance of hat-coding strategies for 3-5 players. One possible reason is that SAD does not undertake any explicit exploration in the space of possible conventions. Another promising route for future work is to integrate search with RL, since this has produced SOTA in a number of different domains including Poker, Go and backgammon. A NETWORK ARCHITECTURE AND HYPER-PAMAMETERS FOR HANABI Our Hanabi agent uses dueling network architecture . The main body of the network consists of 1 fully connected layer of 512 units and 2 LSTM layers of 512 units, followed by two output heads for value and advantages respectively. The same network configuration is used across all Hanabi experiments. We take the default featurization of HLE and replace the card knowledge section with the V0-Belief proposed by. The maximum length of an episode is capped at 80 steps and the entire episode is stored in the replay buffer as one training sample. This avoids the "slate hidden states" problem as described in so that we can simply initialize the hidden states of LSTM as zero during training. For exploration and experience prioritization, we follow the simple strategy as in and. Each actor executes an i -greedy policy where.., N − 1} but with a smaller = 0.1 and α = 7. For simplicity, all players of a game use the same epsilon. The per time-step priority δ t is the TD error and per episode priority is computed following δ e = η max t δ i + (1 − η)δ where η = 0.9. Priority exponent is set to 0.9 and importance sampling exponent is set to 0.6. 
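The per-actor exploration schedule and the episode priority above are easiest to read as code. The exponent form below is our reading of the partially garbled formula, following the Ape-X / recurrent-replay convention the text cites, and the bar in the priority formula is taken to be the mean absolute TD error over the episode.

```python
import numpy as np

def actor_epsilons(n_actors=80, eps=0.1, alpha=7):
    i = np.arange(n_actors)
    return eps ** (1 + i / (n_actors - 1) * alpha)   # eps_i for actor thread i

def episode_priority(td_errors, eta=0.9):
    td = np.abs(np.asarray(td_errors))
    return eta * td.max() + (1 - eta) * td.mean()

print(actor_epsilons()[:3])                          # the most exploratory actors
print(episode_priority(np.random.randn(80)))
```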
We use n-step return and double Q-learning (van) for target computation during training. The discount factor γ is set to 0.999. The network is updated using Adam optimizer with learning rate = 6.25 × 10 −5 and = 1.5 × 10 −5. Trainer sends its network weights to all actors every 10 updates and target network is synchronized with online network every 2500 updates. These hyper-parameters are fixed across all experiments. In the baseline, we use Independent Q-Learning where each player estimates the Q value and selects action independently at each time-step. Note that all players need to operate on the observations in order to update their recurrent hidden states while only the current player has non-trivial legal moves and other players can only select'pass'. Each player then writes its own version of the episode into the prioritized replay buffer and they are sampled independently during training. The prioritized replay buffer contains 2 17 episodes. We warm up the replay buffer with 10,000 episodes before training starts. Batch size during training is 128 for games of different numbers of players. As mentioned in Section 4, the SAD agent is built on top of joint Q-function where the Q value is the sum of the individual Q value of all players given their own actions. One episode produces only one training sample with an extra dimension for the number of players. The replay buffer size is reduced to 2 16 for 2-player and 3-player games and 2 15 for 4-player and 5-player games. The batch sizes for 2-, 3-, 4-, 5-players are 64, 43, 32, 26 respectively to account for the fact that each sample contains more data. Auxiliary task can be added to the agent to help it decode the greedy action more effectively. In Hanabi, the natural choice is the predict the card of player's own hand. In our experiments, the auxiliary task is to predict the status of a card, which can be playable, discardable, or unknown. The loss is the average cross entropy loss per card and is simply added to the TD-error of reinforcement learning during training. Figure 3 shows learning curves of different algorithms averaged over 13 seeds per algorithm per player setting. Shading is error of the mean. | We develop Simplified Action Decoder, a simple MARL algorithm that beats previous SOTA on Hanabi by a big margin across 2- to 5-player games. | 1,028 | scitldr |
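For completeness, a sketch of how the auxiliary hand-prediction loss described above can be combined with the RL objective; the tensor shapes, the squared-TD-error stand-in for the RL term and the equal weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def total_loss(td_error, card_logits, card_targets):
    # card_logits: [batch, hand_size, 3] over {playable, discardable, unknown}
    rl_loss = (td_error ** 2).mean()
    aux_loss = F.cross_entropy(card_logits.flatten(0, 1), card_targets.flatten())
    return rl_loss + aux_loss

loss = total_loss(torch.randn(32, requires_grad=True),
                  torch.randn(32, 5, 3, requires_grad=True),
                  torch.randint(3, (32, 5)))
loss.backward()
```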
We present an end-to-end trainable approach for optical character recognition (OCR) on printed documents. It is based on predicting a two-dimensional character grid ('chargrid') representation of a document image as a semantic segmentation task. To identify individual character instances from the chargrid, we regard characters as objects and use object detection techniques from computer vision. We demonstrate experimentally that our method outperforms previous state-of-the-art approaches in accuracy while being easily parallelizable on GPU (thereby being significantly faster), as well as easier to train. Optical Character Recognition (OCR) on documents is an age-old problem for which numerous open-source (e.g. ) as well as proprietary solutions exist. Especially in the sub-domain of printed documents, it is often regarded as being solved. However, current state-of-the-art document-level OCR solutions (as far as the published research goes) consist of a complicated pipeline of steps, each one either a hand-optimized heuristic or requiring intermediate data and annotations to train. Deep neural networks have been proven very successful in object detection tasks. In this work, we build on top of these developments and treat OCR as a semantic segmentation and object detection task for detecting and recognizing character instances on a page. 2 We introduce a new end-toend trainable OCR pipeline for (but not limited to) printed documents that is based on deep fully convolutional neural networks. Our main contribution is to frame the OCR problem as an ultra-dense instance-segmentation task for characters over the full input document image. We do not rely on any pre-processing stages like binarization, deskewing, layout analysis. Instead, our model learns directly from the raw document pixel data. At the core of our method, we predict a chargrid representation of the input document -a 1-hot encoded grid of characters. Thus, we call our method Chargrid-OCR. Additionally, we introduce two novel post-processing steps, both of which are crucial to performing fast and accurate dense OCR. We show that our method can outperform line-based pipelines like e.g. Tesseract 4 that rely on a combination of deep convolutional and recurrent networks with CTC loss while being significantly simpler to train. * Equal contribution 2 A related task of recognizing text in natural images, referred to as Scene Text Recognition (STR), has been faster in adopting techniques from object detection in computer vision. However, compared to STR, document OCR deals with much denser text and very high accuracy requirements. 2 Chargrid-OCR: OCR as an ultra-dense object detection task Chargrid-OCR method is a lexicon-free (only character-based), end-to-end trainable approach for OCR. Given a document image, chargrid-OCR predicts character segmentation mask together with object bounding boxes for characters in one single step (see Fig 1). Both, semantic segmentation and object detection are common tasks in computer vision, e.g.. The character segmentation mask classifies each pixel into a character class and the character bounding box detects a bounding box around each character. Both, our semantic segmentation and box detection (sub-)networks are fully convolutional and consist of only a single stage (like and unlike ). Being single-stage is especially important as there may be thousands of characters (i.e. objects) on a single page which yields an ultra-dense object detection task. 
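The "1-hot encoded grid of characters" mentioned above can be built from character-level ground truth as an index grid (to be one-hot encoded downstream); the character set, box format and sizes in the sketch below are illustrative only.

```python
import numpy as np

def build_chargrid(char_boxes, charset, height, width):
    # char_boxes: list of (char, x0, y0, x1, y1) in pixel coordinates; 0 = background.
    char_to_idx = {c: i + 1 for i, c in enumerate(charset)}
    grid = np.zeros((height, width), dtype=np.int64)
    for char, x0, y0, x1, y1 in char_boxes:
        grid[y0:y1, x0:x1] = char_to_idx[char]
    return grid

charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
grid = build_chargrid([("H", 10, 10, 18, 26), ("i", 20, 10, 24, 26)], charset, 64, 64)
print(np.unique(grid))     # background plus the indices of 'H' and 'i'
```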
The chargrid representation of a document image maps each pixel that is occupied by a single character on the input document to a unique index that corresponds to that character. Given an input document, our model predicts the chargrid representation of the complete document. This is accomplished by using the chargrid as target for a semantic segmentation network. Since the chargrid does not allow one to delineate character instances, we further use class agnostic object detection to predict individual character boxes. We are thus solving an instance segmentation task. Concretely, the input to our model is an image with text, e.g. a scanned document. The output is a segmentation mask (chargrid) and a set of bounding boxes. The segmentation mask, S, classifies each pixel in the input image into characters (Fig. 1). The bounding boxes are predicted in a similar way as standard object detection methods with (i) a box mask (B c) whose confidence denotes the presence of a box at that pixel, (ii) box centers (X c, Y c), which denote the offset from the location of the predicting pixel to the center of the box and (iii) the box widths and the heights (W c, H c). Finally, for grouping characters into words, we also predict offsets to word centers (X w, Y w). The architecture of the model is based on a fully-convolutional encoder-decoder structure, with two decoders (one for semantic segmentation, one for bounding box detection) branching out of the common encoder. Fig. 1 illustrates the architecture with an example input and its corresponding outputs. The model is trained using categorical cross-entropy for the segmentation outputs (S, B c) and using Huber loss for the regression output (X c, Y c, W c, H c, X w, Y w). The character candidate boxes are those that have confidence surpassing a certain threshold (e.g. 50%) in the box mask, B c. This gives multiple boxes around the same character. In order to delete redundantly predicted box proposals of the same character instance, non-maximum suppression (NMS) is applied. However, in our scenario, the number of proposals can be of the order of 10 5. As NMS runs in quadratic time, this can become a computational bottleneck. To speed up the process, we introduce a preliminary step before NMS, which we call GraphCore. Recall that each candidate pixel predicts the offset to the center of its box. We construct a directed graph where each vertex is a candidate pixel and we add a directed edge going from pixel A to pixel B if pixel A predicts pixel B as the center of its bounding box. By taking the k-core, with k=1, of the ing graph (i.e. only the loops in the graph, which can be done efficiently in linear time) only pixels towards the center of a bounding box (typically, one or two candidate boxes per character) are selected as candidates for NMS. Another necessary post-processing step is to construct word boxes from character boxes. To do so, we cluster characters together based on their predicted word centers, which even allows us to predict rotated words. For each document input image, we require ground-truth data in the form of character bounding boxes. (WIKI dataset) We generated a dataset by synthetically rendering pages in A4 format using English Wikipedia content and applied common data augmentation. We generated 66,486 pages; the page layout and font specifications (type, height and color) were sampled randomly. This enabled to synthesize input images and perfect ground truth labels. 
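Referring back to the GraphCore step described above: since every candidate pixel points to exactly one predicted center, iteratively peeling vertices with no incoming edge leaves exactly the vertices that lie on loops, which is one linear-time way to obtain the surviving candidates. The pixel coordinates in the sketch are made up.

```python
def graph_core(candidate_pixels, predicted_center):
    candidates = set(candidate_pixels)
    indegree = {p: 0 for p in candidates}
    for p in candidates:
        tgt = predicted_center[p]
        if tgt in indegree:
            indegree[tgt] += 1
    queue = [p for p in candidates if indegree[p] == 0]
    while queue:                                  # peel pixels nobody points to
        p = queue.pop()
        candidates.discard(p)
        tgt = predicted_center[p]
        if tgt in candidates:
            indegree[tgt] -= 1
            if indegree[tgt] == 0:
                queue.append(tgt)
    return candidates                             # survivors are handed to NMS

# Two boundary pixels point at the central pixel, which points (roughly) at itself.
centers = {(5, 5): (6, 6), (7, 7): (6, 6), (6, 6): (6, 6)}
print(graph_core(centers.keys(), centers))        # {(6, 6)}
```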
(EDGAR dataset) We converted a vast set of publicly available scanned financial reports into images and sampled 42,918 pages with a non-repetitive layouts. We processed the images with Tesseract4 and thereby obtained noisy (i.e. including OCR errors) ground truth. We evaluated the model on both, a held-out dataset from our training data (EDGAR 77 : 77 pages and 22,521 words ; Wiki 200 : 200 pages; 76,738 words) as well as benchmark OCR dataset, i.e. Business letters (179 pages; 48,639 words; ) and UNLV. (383 pages; 133,245 words; ). We use Word Recognition Rate to measure the accuracy of OCR which can be computed by the Nm Nu+Nm, with N m and N u being the number of matched and unmatched words respectively. A predicted word matches a ground truth if and only if contents agree (i.e. identical string) and they intersect (IoU > 1%). If multiple predictions match the same ground truth, only one prediction is considered a match. The remaining unmatched predictions and unmatched ground truth words are added to obtain N u. We train various versions of our new model on the datasets described in Sec. 3.1 and report in Table 1. As baseline, we compare against Tesseract, v3 and v4, with v4 (released Oct 2018) being the publicly available state-of-the-art. 3 Tesseract v4 comes with an LSTM-based line recognition engine and achieves much higher accuracy than v3. Unfortunately, re-training Tesseract on our datasets is not possible due to needing intermediate annotations to train Tesseract. We, therefore, use off-the-shelf Tesseract without any retraining or fine tuning. However, we use test data that are domain-independent from training and/or validation data and serve as an indicator for model generalizability. Our baseline is denoted by "ChargridOCR-32", consists of 32 convolutional base channels (C = 32 in Fig. 1), and was trained on the Wiki dataset. This model outperforms Tesseract v3, but not Tesseract v4. The same model trained on Wiki+EDGAR is competitive with, and typically superior to, Tesseract4 on validation and test data. Finally, a model with twice as many convolutional channels "ChargridOCR-64" and thus with higher capacity trained on Wiki+EDGAR outperforms Tesseract on all datasets, however with significant computational overhead (rightmost column in Tab. 1). In Fig. 2 We presented a new end-to-end trainable optical character recognition pipeline that is based on state-of-the-art computer vision approaches using object detection and semantic segmentation. Our pipeline is significantly simpler compared to other sequential and line-based approaches, especially those used for document-level optical character recognition such as Tesseract 4. We empirically show that our model outperforms Tesseract 4 on a number of diverse evaluation datasets by a large margin both in terms of accuracy and run-time. | End-to-end trainable Optical Character Recognition on printed documents; we achieve state-of-the-art results, beating Tesseract4 on benchmark datasets both in terms of accuracy and runtime, using a purely computer vision based approach. | 1,029 | scitldr |
We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay. In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par or better than well tuned SGD with momentum and Adam/AdamW. Additionally, NovoGrad is robust to the choice of learning rate and weight initialization, works well in a large batch setting, and has two times smaller memory footprint than Adam. The most popular algorithms for training Neural Networks (NNs) are Stochastic Gradient Descent (SGD) with momentum and Adam . SGD with momentum is the preferred algorithm for computer vision, while Adam is the most commonly used for natural language processing (NLP) and speech problems. Compared to SGD, Adam is perceived as safer and more robust to weight initialization and learning rate. However, Adam has certain drawbacks. First, as noted in the original paper , the second moment can vanish or explode, especially during the initial phase of training. To alleviate this problem, a learning rate (LR) warmup is typically used. Adam often leads to solutions that generalize worse than SGD , and to improve Adam regularization, proposed AdamW with decoupled weight decay. Our motivation for this work was to find an algorithm which: performs equally well for image classification, speech recognition, machine translation, and language modeling, is robust to learning rate and weight initialization, has strong regularization properties. We start with Adam, and then replace the element-wise second moment with the layer-wise moment, compute the first moment using gradients normalized by layer-wise second moment, and decouple weight decay (similar to AdamW) from normalized gradients. The ing algorithm, NovoGrad, combines SGD's and Adam's strengths. We applied NovoGrad to a variety of large scale problems -image classification, neural machine translation, language modeling, and speech recognition -and found that in all cases, it performs as well or better than Adam/AdamW and SGD with momentum. NovoGrad belongs to the family of Stochastic Normalized Gradient Descent (SNGD) optimizers . SNGD uses only the direction of the stochastic gradient to update the weights, and the step size does not depend on the magnitude of that gradient. proved that the direction of the gradient was sufficient for convergence. Ignoring the gradient magnitude makes SNGD robust to vanishing and exploding gradients. SNGD with layer-wise gradient normalization was introduced by. The method scales up small gradients, while keeping large gradients unchanged: Similar to Adam, the weights are updated with the 1 st moment re-scaled by the 2 nd moment: Adaptive methods like Adam generalize worse than SGD with momentum as was shown in. For example, proposed to use Adam during the initial stage only and then switch to SGD. suggested to improve Adam regularization by limiting the factor 1 √ vt to a certain range: limiting from above helps to decrease the training loss while limiting from below helps to generalize better. showed that Adam's weak regularization is due to the fact that the 2 nd moment normalization effectively turns off L2-regularization. They proposed AdamW, which decouples the weight decay d · w t from the gradient and uses it directly in the weight update: Adam needs to store the 2 nd moment, and this doubles the optimizer memory compared to SGD with momentum. 
This affects large models like GPT-2 with 1.5 billion parameters. proposed the AdaFactor algorithm, which replaced the full 2nd moment with moving averages of the row and column sums of the squared gradients. For a layer defined by an n × m matrix, this would reduce memory from O(n × m) to O(n + m). NovoGrad consumes the same amount of memory as SGD with momentum. NovoGrad is based on 3 ideas: layer-wise 2nd moments instead of a 2nd moment per parameter, gradient normalization with layer-wise 2nd moments, decoupled weight decay. Let g_t^l be the stochastic gradient for layer l at step t. NovoGrad first computes the layer-wise 2nd moment v_t^l using the norm ||g_t^l||: v_t^l = β2 · v_{t−1}^l + (1 − β2) · ||g_t^l||^2, where 0 ≤ β2 ≤ 1. We use a much smaller β2 than in Adam, usually in the range [0.2, 0.5]. The moment v_t^l is used to normalize the gradient g_t^l before calculating the first moment m_t^l. Similarly to AdamW, we decouple the weight decay d · w_t from the stochastic gradient, but we add it to the normalized gradient before computing the moment m_t^l: m_t^l = β1 · m_{t−1}^l + (g_t^l / sqrt(v_t^l) + d · w_t^l). (Algorithm 1, NovoGrad. Parameters: initial learning rate λ0, moments β1, β2, weight decay d, number of steps T. Weight initialization: at t = 0, initialize w_0. Moment initialization: at t = 1, for each layer l set v_1^l from the first gradient norm; then repeat the update above for each step.) Here 0 < β1 < 1 is the momentum, typically in the same range as in SGD or Adam [0.9 − 0.95]. The first moment can also be computed via an exponential moving average in Adam-like style: m_t^l = β1 · m_{t−1}^l + (1 − β1) · (g_t^l / sqrt(v_t^l) + d · w_t^l). Finally, weights are updated the same way as in SGD with momentum: w_{t+1} = w_t − λ_t · m_t. Similar to Adam, one can construct a counter-example for NovoGrad in the stochastic convex optimization setting. However, the "AMS-Grad" fix for Adam can also be applied in this case to guarantee NovoGrad convergence, by keeping the running maximum of the layer-wise second moment. Following (Andrew M. ; Ian J.), we will use NovoGrad to train a linear model composed of two linear layers w_1, w_2 without any non-linearity. The model y = (w_1 · w_2) · x should output 1 when x = 1. This model is linear with respect to the inputs, but it is non-linear with respect to the weights, since they are factorized into the product of the layers' weights. Training the model is equivalent to the minimization of the loss L(w_1, w_2) = (w_1 · w_2 − 1)^2. The loss is not convex, and its minima are located on the hyperbola w_1 · w_2 = 1 (see Figure 1). Minima close to the points (−1, −1) and (1, 1) are good "flat" minima which generalize well. Minima close to the axes are "sharp" minima. We trained the model with SGD with momentum, Adam, AdamW, and NovoGrad, using the same fixed learning rate, weight decay, and weight initialization. The model was trained for 500 steps. Figure 2 shows the training trajectory and the zoomed-out area near the final point. All algorithms behave in a similar way: first the trajectory goes to the curve w_2 = 1/w_1, and then follows the hyperbola towards (1, 1) or (−1, −1). During the first phase, training loss decreases, and during the second phase, generalization improves. SGD converges nicely toward (1, 1), but its trajectory is still slightly off of the optimal solution. Adam oscillates wildly around the hyperbola w_2 = 1/w_1, while AdamW behaves much better since weight decay decoupling significantly reduces oscillations. NovoGrad is the most stable out of the four algorithms. It exhibits better generalization and closely follows the minima curve because normalized gradients prevent the trajectory from going far from it. We also found that NovoGrad is more robust than the other algorithms to the choice of learning rate, weight decay, and weight initialization (see Appendix A for details).
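A minimal NumPy sketch of one NovoGrad update, following the description above: an exponential moving average of the per-layer squared gradient norm, normalization of the gradient by its square root, decoupled weight decay added to the normalized gradient, and an SGD-with-momentum style weight update. The hyperparameter defaults and the small epsilon are illustrative choices, not the authors' implementation; m should be initialized to zeros and v to None per layer.

```python
import numpy as np

def novograd_step(weights, grads, m, v, lr=0.01, beta1=0.95, beta2=0.5,
                  weight_decay=0.001, eps=1e-8):
    """One NovoGrad update. weights, grads, m: dicts of per-layer arrays;
    v: dict of per-layer scalars (layer-wise second moments). Updates in place."""
    for layer in weights:
        g = grads[layer]
        g_norm_sq = float(np.sum(g * g))

        # Layer-wise second moment: EMA of the squared gradient norm.
        if v[layer] is None:                      # first step: initialize from the first gradient
            v[layer] = g_norm_sq
        else:
            v[layer] = beta2 * v[layer] + (1.0 - beta2) * g_norm_sq

        # Normalize the gradient by the layer-wise moment, then add decoupled weight decay.
        update = g / (np.sqrt(v[layer]) + eps) + weight_decay * weights[layer]

        # First moment, SGD-with-momentum style.
        m[layer] = beta1 * m[layer] + update

        # Weight update.
        weights[layer] -= lr * m[layer]
    return weights, m, v
```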
4 Each model was trained on a single DGX-1 machine with 8 NVIDIA V100 GPUs with gradient accumulation used for large batch training. In all the experiments, NovoGrad performed on par or better than other algorithms. We used ResNet-50 v2 for ImageNet classification task . We trained this model with three optimizers: SGD with momentum (SGD), AdamW, and NovoGrad. All models have been trained with the batch size of 1024 for 100 epochs. We used quadratic LR decay for SGD with momentum and cosine decay for AdamW and NovoGrad. We could not find any training recipe for ResNet-50 with AdamW, so we report the best accuracy we achieved after extensive hyper-parameter search. We used only standard data augmentation methods: re-size, flip, random crop, and did not employ any additional training tricks . The single-crop validation accuracy for each algorithm is reported in Table 1. showed that large batch size is beneficial for SNGD convergence, which motivated us to explore NovoGrad for large batch training. We trained ResNet-50 v2 with batch sizes of 8K and 32K. To compare with the previous methods, we train the model for 90 epochs using cosine LR decay. To emulate a large batch, we used a mini-batch of 128 per GPU and accumulated gradients from several mini-batches before each weight update. To establish the baseline for NovoGrad training with batch 32K we first used the method similar to proposed in: scaling the learning rate linearly with the batch size and using LR warmup. This method gives top-1=75.09% and top-5=92.27%. We found that we get much better when we increase both the learning rate λ and the weight decay d to improve the regularization (see Table 2). For comparison, we took 3 methods, which use fixed batch size during training and do not modify the original model. All 3 methods employ SGD with momentum. The first method scales LR linearly with batch size and uses the LR warmup to stabilize the initial training phase. The second method combines warmup with Layer-wise Adaptive Rate Scaling (LARS) . The last method uses warmup and dynamic weight decay (WD). NovoGrad outperformed all other methods without using any additional techniques like LR warmup , dynamic weight decay, special batch normalization initialization, etc. Using warm-up (500 steps) we slightly improved top1 accuracy to 75.99% and top5 to 92.72%. We conducted experiments with Jasper-10x5 , a very deep convolutional neural acoustic model, on the LibriSpeech speech recognition task . Jasper was trained with SGD with momentum (SGD), Adam and NovoGrad for 400 epochs with a batch of 256, polynomial LR decay, and Layerwise Adaptive Rate Clipping (LARC). 5 We found that NovoGrad yields lower Word Error Rates (WER) comparing to SGD, especially for the long runs. The model and training parameters are described in. We trained Jasper10x5 with batch sizes of 512, 4K, 8K, 16K and 32K on LibriSpeech. In all cases, we trained the model for 400 epochs. For batch size up to 8K we scaled LR linearly with the batch size and used LR warmup. To scale batch to 16K and 32K we also increased weight decay (see Table 5). The batch 16K leads to WER comparable to the baseline. Batch 32K has higher WER due to the smaller number of training steps (9 weights updates per epoch). Figure 3 shows WER on dev-clean during training for different batch sizes. We trained Transformer-XL , the state-of-the-art LM architecture on the wordlevel WikiText-103 benchmark. 
For all the experiments we used a 16-layer Transformer-XL model. All other hyperparameters were taken from the original Transformer-XL paper, and the source code was based on a publicly available implementation. Each configuration was trained for 12 billion tokens, which is approximately 117 epochs and 366K training iterations. Figure 4 shows that NovoGrad exhibits a much smaller gap between training and validation perplexity compared to Adam, which results in better performance on the test set. Longer training for 20B tokens does not lead to overfitting, as the resulting validation and test perplexities improve even further. We trained a Transformer on the WMT 2014 English-to-German benchmark. For all the experiments, we used a 12-layer Transformer-big model with 185M parameters (d_model = 1024, d_ff = 4096, h = 16) with a vocabulary of 8192 tokens based on joint source-target byte-pair encodings. For Adam and AdamW we used a dropout of P_drop = 0.3 and for NovoGrad we used P_drop = 0.2. We trained all algorithms with mixed precision for 100K steps (approximately 150 epochs) with a 4K-step warmup on batches of up to 490K source and target tokens obtained via gradient accumulation, with cosine learning rate annealing. We did not use checkpoint averaging; all results are reported for the last checkpoint in the corresponding run. Consider again the toy linear model composed of two layers w_1, w_2: the model y = (w_1 · w_2) · x should output 1 when x = 1. The model is a linear function of the inputs, but it is a non-linear function of the weights, since they are factorized into the product of the layers' weights. Training the model is equivalent to the minimization of the loss L(w_1, w_2) = (w_1 · w_2 − 1)^2. The loss is not convex, and its minima are located on the curve w_1 · w_2 = 1. Minima close to the points (−1, −1) and (1, 1) are good "flat" minima which generalize well. Minima close to the axes are "sharp" minima with bad generalization (see ). The 2D contour plot of the loss function of the linear model with two layers is shown in Figure 5. The loss function has many global minima located on the hyperbola w_2 = 1/w_1. Solutions near (−1, −1) and (1, 1) are good "flat" minima, and solutions near the axes are "sharp" minima. We will study how the behavior of each algorithm depends on the learning rate, weight decay, and initialization. We will train the model with each optimizer for 500 steps using the same learning rate, weight decay, and weight initialization. To use the same learning rate for all optimizers, we will use "gradient averaging" for NovoGrad. We will also use a version of SGD with "gradient averaging" (similar to Adam): m_t = β · m_{t−1} + (1 − β) · g_t. For a fixed learning rate, this SGD version is equivalent to regular SGD with momentum. Training trajectories for the baseline (fixed learning rate 0.2, weight decay 0.1, and β1 = 0.95, β2 = 0.5) are shown in Figure 6. All algorithms first go to the curve w_2 = 1/w_1, and then slide along the hyperbola towards (1, 1) or (−1, −1). SGD is slightly off with respect to the optimal solution. Adam oscillates wildly around the line w_2 = 1/w_1. AdamW behaves better since weight decay decoupling significantly reduces oscillations. NovoGrad is the most stable out of the four algorithms; it also shows much better generalization than the other algorithms and converges to (1, 1) while closely following the minima curve. Next, we increased the learning rate from 0.2 to 1.0 while keeping the weight decay equal to 0. Similarly, when we increased the weight decay from 0.1 to 0.5 while keeping the learning rate at 0.2, all algorithms except NovoGrad diverge, while NovoGrad demonstrates high robustness to the weight decay choice (see Figure 8).
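To make the toy setup above concrete, here is a small self-contained sketch that minimizes L(w1, w2) = (w1·w2 − 1)^2 with plain gradient descent plus weight decay, and optionally with a normalized-gradient update in the spirit of NovoGrad (no momentum or moment averaging, so it is only an approximation of the full optimizer). The learning rate and weight decay mirror the baseline values quoted above; the starting point is an illustrative assumption.

```python
def toy_two_layer(steps=500, lr=0.2, wd=0.1, w1=0.1, w2=2.0, normalized=False):
    """Minimize L(w1, w2) = (w1*w2 - 1)^2, the two-layer linear toy problem.
    If normalized=True, each 'layer' gradient (here a single scalar per layer)
    is divided by its magnitude before adding weight decay."""
    eps = 1e-8
    for _ in range(steps):
        err = w1 * w2 - 1.0
        g1, g2 = 2.0 * err * w2, 2.0 * err * w1   # dL/dw1, dL/dw2
        if normalized:
            g1, g2 = g1 / (abs(g1) + eps), g2 / (abs(g2) + eps)
        w1 -= lr * (g1 + wd * w1)
        w2 -= lr * (g2 + wd * w2)
    return w1, w2

print(toy_two_layer(normalized=False))   # plain gradient descent with weight decay
print(toy_two_layer(normalized=True))    # normalized-gradient variant
```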
Finally, we started training from a different initial point. SGD and NovoGrad are the most robust with respect to the initialization, while AdamW diverges (see Figure 9). To summarize our experiments with the linear neural network: NovoGrad is more robust than the other algorithms to the choice of learning rate, weight decay, and weight initialization. | NovoGrad - an adaptive SGD method with layer-wise gradient normalization and decoupled weight decay. | 1,030 | scitldr
Most generative models of audio directly generate samples in one of two domains: time or frequency. While sufficient to express any signal, these representations are inefficient, as they do not utilize existing knowledge of how sound is generated and perceived. A third approach (vocoders/synthesizers) successfully incorporates strong domain knowledge of signal processing and perception, but has been less actively researched due to limited expressivity and difficulty integrating with modern auto-differentiation-based machine learning methods. In this paper, we introduce the Differentiable Digital Signal Processing (DDSP) library, which enables direct integration of classic signal processing elements with deep learning methods. Focusing on audio synthesis, we achieve high-fidelity generation without the need for large autoregressive models or adversarial losses, demonstrating that DDSP enables utilizing strong inductive biases without losing the expressive power of neural networks. Further, we show that combining interpretable modules permits manipulation of each separate model component, with applications such as independent control of pitch and loudness, realistic extrapolation to pitches not seen during training, blind dereverberation of room acoustics, transfer of extracted room acoustics to new environments, and transformation of timbre between disparate sources. In short, DDSP enables an interpretable and modular approach to generative modeling, without sacrificing the benefits of deep learning. The library will is available at https://github.com/magenta/ddsp and we encourage further contributions from the community and domain experts. Neural networks are universal function approximators in the asymptotic limit , but their practical success is largely due to the use of strong structural priors such as convolution , recurrence (; ;), and self-attention . These architectural constraints promote generalization and data efficiency to the extent that they align with the data domain. From this perspective, end-to-end learning relies on structural priors to scale, but the practitioner's toolbox is limited to functions that can be expressed differentiably. Here, we increase the size of that toolbox by introducing the Differentiable Digital Signal Processing (DDSP) library, which integrates interpretable signal processing elements into modern automatic differentiation software (TensorFlow). While this approach has broad applicability, we highlight its potential in this paper through exploring the example of audio synthesis. Objects have a natural tendency to periodically vibrate. Small shape displacements are usually restored with elastic forces that conserve energy (similar to a canonical mass on a spring), leading to harmonic oscillation between kinetic and potential energy . Accordingly, human hearing has evolved to be highly sensitive to phase-coherent oscillation, decomposing audio into spectrotemporal responses through the resonant properties of the basilar membrane and tonotopic mappings into the auditory cortex (; ;). However, neural synthesis models often do not exploit this periodic structure for generation and perception. As shown in Figure 1, most neural synthesis models generate waveforms directly in the time domain, or from their corresponding Fourier coefficients in the frequency domain. While these representations are general and can represent any waveform, they are not free from bias. 
This is because they often apply a prior over generating audio with aligned wave packets rather than oscillations. For example, strided convolution models-such as SING , MCNN , and WaveGAN -generate waveforms directly with overlapping frames. Since audio oscillates at many frequencies, all with different periods from the fixed frame hop size, the model must precisely align waveforms between different frames and learn filters to cover all possible phase variations. This challenge is visualized on the left of Figure 1. Fourier-based models-such as Tacotron and GANSynth also suffer from the phase-alignment problem, as the Short-time Fourier Transform (STFT) is a representation over windowed wave packets. Additionally, they must contend with spectral leakage, where sinusoids at multiple neighboring frequencies and phases must be combined to represent a single sinusoid when Fourier basis frequencies do not perfectly match the audio. This effect can be seen in the middle diagram of Figure 1. Autoregressive waveform models-such as WaveNet , SampleRNN , and WaveRNN -avoid these issues by generating the waveform a single sample at a time. They are not constrained by the bias over generating wave packets and can express arbitrary waveforms. However, they require larger and more data-hungry networks, as they do not take advantage of a bias over oscillation (size comparisons can be found in Table B .6). Furthermore, the use of teacher-forcing during training leads to exposure bias during generation, where errors with feedback can compound. It also makes them incompatible with perceptual losses such as spectral features , pretrained models , and discriminators. This adds further inefficiency to these models, as a waveform's shape does not perfectly correspond to perception. For example, the three waveforms on the right of Figure 1 sound identical (a relative phase offset of the harmonics) but would present different losses to an autoregressive model. Rather than predicting waveforms or Fourier coefficients, a third model class directly generates audio with oscillators. Known as vocoders or synthesizers, these models are physically and perceptually motivated and have a long history of research and applications . These "analysis/synthesis" models use expert knowledge and hand-tuned heuristics to extract synthesis parameters (analysis) that are interpretable (loudness and frequencies) and can be used by the generative algorithm (synthesis). Neural networks have been used previously to some success in modeling pre-extracted synthesis parameters , but these models fall short of endto-end learning. The analysis parameters must still be tuned by hand and gradients cannot flow through the synthesis procedure. As a , small errors in parameters can lead to large errors in the audio that cannot propagate back to the network. Crucially, the realism of vocoders is limited by the expressivity of a given analysis/synthesis pair. In this paper, we overcome the limitations outlined above by using the DDSP library to implement fully differentiable synthesizers and audio effects. DDSP models combine the strengths of the above approaches, benefiting from the inductive bias of using oscillators, while retaining the expressive power of neural networks and end-to-end training. We demonstrate that models employing DDSP components are capable of generating high-fidelity audio without autoregressive or adversarial losses. 
Further, we show the interpretability and modularity of these models enable: • Independent control over pitch and loudness during synthesis. • Realistic extrapolation to pitches not seen during training. • Blind dereverberation of audio through seperate modelling of room acoustics. • Transfer of extracted room acoustics to new environments. • Timbre transfer between disparate sources, converting a singing voice into a violin. • Smaller network sizes than comparable neural synthesizers. Audio samples for all examples and figures are provided in the online supplement 2. We highly encourage readers to listen to the samples as part of reading the paper. Vocoders. Vocoders come in several varieties. Source-filter/subtractive models are inspired by the human vocal tract and dynamically filter a harmonically rich source signal , while sinusoidal/additive models generate sound as the combination of a set of time-varying sine waves . Additive models are strictly more expressive than subtractive models but have more parameters as each sinusoid has its own time-varying loudness and frequency. This work builds a differentiable synthesizer off the Harmonic plus Noise model : an additive synthesizer combines sinusoids in harmonic (integer) ratios of a fundamental frequency alongside a time-varying filtered noise signal. Synthesizers. A separate thread of research has tried to estimate parameters for commercial synthesizers using gradient-free methods . Synthesizer outputs modeled with a variational autoencoder were recently used as a "world model" to pass approximate gradients to a controller during learning . DDSP differs from black-box approaches to modeling existing synthesizers; it is a toolkit of differentiable DSP components for end-to-end learning. Neural Source Filter (NSF). Perhaps closest to this work, promising speech synthesis were recently achieved using a differentiable waveshaping synthesizer . The NSF can be seen as a specific DDSP model, that uses convolutional waveshaping of a sinusoidal oscillator to create harmonic content, rather than additive synthesis explored in this work. Both works also generate audio in the time domain and impose multi-scale spectrograms losses in the frequency domain. A key contribution of this work is to highlight how these models are part of a common family of techniques and to release a modular library that makes them accessible by leveraging automatic differentiation to easily mix and match components at a high level. Many DSP operations can be expressed as functions in modern automatic differentiation software. We express core components as feedforward functions, allowing efficient implementation on parallel hardware such as GPUs and TPUs, and generation of samples during training. These components include oscillators, envelopes, and filters (linear-time-varying finite-impulse-response, LTV-FIR). Here, as an example DDSP model, we implement a differentiable version of Spectral Modeling Synthesis (SMS). This model generates sound by combining an additive synthesizer (adding together many sinusoids) with a subtractive synthesizer (filtering white noise). We choose SMS because, despite being parametric, it is a highly expressive model of sound, and has found widespread adoption in tasks as diverse as spectral morphing, time stretching, pitch shifting, source separation, transcription, and even as a general purpose audio codec in MPEG-4 (; ;). 
As we only consider monophonic sources in these experiments, we use the Harmonic plus Noise model, that further constrains sinusoids to be integer multiples of a fundamental frequency . One of the reasons that SMS is more expressive than many other parametric models because it has so many more parameters. For example, in the 4 seconds of 16kHz audio in the datasets considered here, the synthesizer coefficients actually have ∼2.5 times more dimensions than the audio waveform itself ((1 amplitude + 100 harmonics + 65 noise band magnitudes) * 1000 timesteps = 165,000 dimensions, vs. 64,000 audio samples). This makes them amenable to control by a neural network, as it would be difficult to realistically specify all these parameters by hand. At the heart of the synthesis techniques explored in this paper is the sinusoidal oscillator. A bank of oscillators that outputs a signal x(n) over discrete time steps, n, can be expressed as: where A k (n) is the time-varying amplitude of the k-th sinusoidal component and φ k (n) is its instantaneous phase. The phase φ k (n) is obtained by integrating the instantaneous frequency f k (n): where φ 0,k is the initial phase that can be randomized, fixed, or learned. For a harmonic oscillator, all the sinusoidal frequencies are harmonic (integer) multiples of a fundamental frequency, f 0 (n), i.e., f k (n) = kf 0 (n), Thus the output of the harmonic oscillator is entirely parameterized by the time-varying fundamental frequency f 0 (n) and harmonic amplitudes A k (n). To aid interpretablity we further factorize the harmonic amplitudes: into a global amplitude A(n) that controls the loudness and a normalized distribution over harmonics c(n) that determines spectral variations, where We also constrain both amplitudes and harmonic distribution components to be positive through the use of a modified sigmoid nonlinearity as described in the appendix. Figure 6 provides a graphical example of the additive synthesizer. Audio is provided in our online supplement 2. The oscillator formulation above requires time-varying amplitudes and frequencies at the audio sample rate, but our neural networks operate at a slower frame rate. For instantaneous frequency upsampling, we found bilinear interpolation to be adequate. However, the amplitudes and harmonic distributions of the additive synthesizer required smoothing to prevent artifacts. We are able to achieve this with a smoothed amplitude envelope by adding overlapping Hamming windows at the center of each frame and scaled by the amplitude. For these experiments we found a 4ms (64 timesteps) hop size and 8 ms frame size (50% overlap) to be responsive to changes while removing artifacts. Linear filter design is a cornerstone of many DSP techniques. Standard convolutional layers are equivalent to linear time invariant finite impulse response (LTI-FIR) filters. However, to ensure interpretability and prevent phase distortion, we employ the frequency sampling method to convert network outputs into impulse responses of linear-phase filters. Here, we design a neural network to predict the frequency-domain transfer functions of a FIR filter for every output frame. In particular, the neural network outputs a vector H l (and accordingly h l = IDFT(H l)) for the l-th frame of the output. We interpret H l as the frequency-domain transfer function of the corresponding FIR filter. We therefore implement a time-varying FIR filter. 
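Before describing how the time-varying filter is applied, a NumPy sketch of the harmonic oscillator defined by the equations above may be useful: the instantaneous frequency of harmonic k is k·f0(n), phases are obtained by cumulative summation (a discrete integral) of instantaneous frequency, and the output is the amplitude-weighted sum of sinusoids. The sample rate and zero initial phases follow the setup described in the text; the array shapes, variable names, and Nyquist masking are illustrative assumptions.

```python
import numpy as np

def harmonic_synth(f0, amplitude, harmonic_dist, sample_rate=16000):
    """Additive/harmonic synthesis at audio rate.

    f0            : (N,) fundamental frequency in Hz per sample
    amplitude     : (N,) global amplitude A(n) per sample
    harmonic_dist : (N, K) distribution over K harmonics, rows sum to 1
    Returns a length-N waveform.
    """
    N, K = harmonic_dist.shape
    k = np.arange(1, K + 1)                      # harmonic numbers 1..K
    freqs = f0[:, None] * k[None, :]             # (N, K) instantaneous frequencies

    # Zero out harmonics above the Nyquist frequency to avoid aliasing.
    harmonic_dist = np.where(freqs < sample_rate / 2, harmonic_dist, 0.0)

    # Integrate instantaneous frequency to get phase (initial phases fixed to zero).
    phases = 2.0 * np.pi * np.cumsum(freqs / sample_rate, axis=0)

    # Per-harmonic amplitudes A_k(n) = A(n) * c_k(n), then sum the sinusoids.
    amps = amplitude[:, None] * harmonic_dist    # (N, K)
    return np.sum(amps * np.sin(phases), axis=1)

# Example: a 1-second 440 Hz tone with 10 equally weighted harmonics.
N, K = 16000, 10
audio = harmonic_synth(np.full(N, 440.0), np.full(N, 0.5), np.full((N, K), 1.0 / K))
```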
To apply the time-varying FIR filter to the input, we divide the audio into non-overlapping frames x l to match the impulse responses h l. We then perform frame-wise convolution via multiplication of frames in the Fourier domain: We recover the frame-wise filtered audio, y l = IDFT(Y l), and then overlap-add the ing frames with the same hop size and rectangular window used to originally divide the input audio. The hop size is given by dividing the audio into equally spaced frames for each frame of conditioning. For 64000 samples and 250 frames, this corresponds to a hop size of 256. In practice, we do not use the neural network output directly as H l. Instead, we apply a window function W on the network output to compute H l. The shape and size of the window can be decided independently to control the time-frequency resolution trade-off of the filter. In our experiments, we default to a Hann window of size 257. Without a window, the resolution implicitly defaults to a rectangular window which is not ideal for many cases. We take care to shift the IR to zero-phase (symmetric) form before applying the window and revert to causal form before applying the filter. 3.5 FILTERED NOISE / SUBTRACTIVE SYNTHESIZER Natural sounds contain both harmonic and stochastic components. The Harmonic plus Noise model captures this by combining the output of an additive synthesizer with a stream of filtered noise . We are able to realize a differentiable filtered noise synthesizer by simply applying the LTV-FIR filter from above to a stream of uniform noise Y l = H l N l where N l is the IDFT of uniform noise in domain [-1, 1]. Room reverbation (reverb) is an essential characteristic of realistic audio, which is usually implicitly modeled by neural synthesis algorithms. In contrast, we gain interpretability by explicitly factorizing the room acoustics into a post-synthesis convolution step. A realistic room impulse response (IR) can be as long as several seconds, corresponding to extremely long convolutional kernel sizes (∼10-100k timesteps). Convolution via matrix multiplication scales as O(n 3), which is intractable for such large kernel sizes. Instead, we implement reverb by explicitly performing convolution as multiplication in the frequency domain, which scales as O(n log n) and does not bottleneck training. For empirical verification of this approach, we test two DDSP autoencoder variants-supervised and unsupervised-on two different musical datasets: NSynth and a collection of solo violin performances. The supervised DDSP autoencoder is conditioned on fundamental frequency (F0) and loudness features extracted from audio, while the unsupervised DDSP autoencoder learns F0 jointly with the rest of the network. Red components are part of the neural network architecture, green components are the latent representation, and yellow components are deterministic synthesizers and effects. Components with dashed borders are not used in all of our experiments. Namely, z is not used in the model trained on solo violin, and reverb is not used in the models trained on NSynth. See the appendix for more detailed diagrams of the neural network components. DDSP components do not put constraints on the choice of generative model (GAN, VAE, Flow, etc.), but we focus here on a deterministic autoencoder to investigate the strength of DDSP components independent of any particular approach to adversarial training, variational inference, or Jacobian design. 
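The reverb module above applies an impulse response that can be several seconds long; the sketch below shows the frequency-domain convolution trick it relies on: zero-pad both signals to the full output length, multiply their FFTs, and invert, which scales as O(n log n) instead of the cost of explicit time-domain convolution. This is a generic illustration rather than the library's code; the synthetic decaying-noise impulse response in the usage example is purely an assumption.

```python
import numpy as np

def fft_convolve(audio, impulse_response):
    """Linear convolution of a dry signal with a (possibly multi-second) room
    impulse response via multiplication in the frequency domain."""
    n_out = len(audio) + len(impulse_response) - 1
    n_fft = 1 << (n_out - 1).bit_length()        # next power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(audio, n_fft) * np.fft.rfft(impulse_response, n_fft), n_fft)
    return wet[:n_out]

# Example: an exponentially decaying noise "room" applied to 4 seconds of audio.
sr = 16000
dry = np.random.randn(4 * sr) * 0.1
ir = np.random.randn(2 * sr) * np.exp(-np.linspace(0, 8, 2 * sr))
reverberated = fft_convolve(dry, ir)
```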
Just as autoencoders utilizing convolutional layers outperform fully-connected autoencoders on images, we find DDSP components are able to dramatically improve autoencoder performance in the audio domain. Introducing stochastic latents (such as in GAN, VAE, and Flow models) will likely further improve performance, but we leave that to future work as it is orthogonal to the core question of DDSP component performance that we investigate in this paper. In a standard autoencoder, an encoder network f enc (·) maps the input x to a latent representation z = f enc (x) and a decoder network f dec (·) attempts to directly reconstruct the inputx = f dec (z). Our architecture (Figure 2) contrasts with this approach through the use of DDSP components and a decomposed latent representation. Encoders: Detailed descriptions of the encoders are given in Section B.1. For the supervised autoencoder, the loudness l(t) is extracted directly from the audio, a pretrained CREPE model with fixed weights is used as an f (t) encoder to extact the fundamental frequency, and optional encoder extracts a time-varying latent encoding z(t) of the residual information. For the z(t) encoder, MFCC coefficients (30 per a frame) are first extracted from the audio, which correspond to the smoothed spectral envelope of harmonics , and transformed by a single GRU layer into 16 latent variables per a frame. For the unsupervised autoencoder, the pretrained CREPE model is replaced with a Resnet architecture that extracts f (t) from a mel-scaled log spectrogram of the audio, and is jointly trained with the rest of the network. Decoder: A detailed description of the decoder network is given in Section B.2. The decoder network maps the tuple (f (t), l(t), z(t)) to control parameters for the additive and filtered noise synthesizers described in Section 3. The synthesizers generate audio based on these parameters, and a reconstruction loss between the synthesized and original audio is minimized. The network architecture is chosen to be fairly generic (fully connected, with a single recurrent layer) to demonstrate that it is the DDSP components, and not other modeling decisions, that enables the quality of the work. Also unique to our approach, the latent f (t) is fed directly to the additive synthesizer as it has structural meaning for the synthesizer outside the context of any given dataset. As shown later in Section 5.2, this disentangled representation enables the model to both interpolate within and ex- trapolate outside the data distribution. Indeed, recent work support incorporation of strong inductive biases as a prerequisite for learning disentangled representations . Model Size: Table B.6, compares parameter counts for the DDSP models and comparable models including GANSynth, WaveRNN , and a WaveNet Autoencoder . The DDSP models have the fewest parameters (up to 10 times less), despite no effort to minimize the model size in these experiments. Initial experiments with very small models (240k parameters, 300x smaller than a WaveNet Autoencoder) have less realistic outputs than the full models, but still have fairly high quality and are promising for low-latency applications, even on CPU or embedded devices. Audio samples are available in the online supplement 2. NSynth: We focus on a smaller subset of the NSynth dataset consistent with other work ). It totals 70,379 examples comprised mostly of strings, brass, woodwinds and mallets with pitch labels within MIDI pitch range 24-84. 
We employ a 80/20 train/test split shuffling across instrument families. For the NSynth experiments, we use the autoencoder as described above (with the z(t) encoder). We experiment with both the supervised and unsupervised variants. Solo Violin: The NSynth dataset does not capture aspects of a real musical performance. Using the MusOpen royalty free music library, we collected 13 minutes of expressive, solo violin performances 4. We purposefully selected pieces from a single performer (John Garner), that were monophonic and shared a consistent room environment to encourage the model to focus on performance. Like NSynth, audio is converted to mono 16kHz and divided into 4 second training examples (64000 samples total). Code to process the audio files into a dataset is available online. For the solo violin experiments, we use the supervised variant of the autoencoder (without the z(t) encoder), and add a reverb module to the signal processor chain to account for room reverberation. While the room impulse response could be produced as an output of the decoder, given that the solo violin dataset has a single acoustic environment, we use a single fixed variable (4 second reverb corresponding to 64000 dimensions) for the impulse response. The primary objective of the autoencoder is to minimize reconstruction loss. However, for audio waveforms, point-wise loss on the raw waveform is not ideal, as two perceptually identical audio samples may have distinct waveforms, and point-wise similar waveforms may sound very different. Instead, we use a multi-scale spectral loss-similar to the multi-resolution spectral amplitude distance in -defined as follows. Given the original and synthesized audio, we compute their (magnitude) spectrogram S i andŜ i, respectively, with a given FFT size i, and define the loss as the sum of the L1 difference between S i andŜ i as well as the L1 difference between log S i and logŜ i. where α is a weighting term set to 1.0 in our experiments. The total reconstruction loss is then the sum of all the spectral losses, L reconstruction = i L i. In our experiments, we used FFT sizes, and the neighboring frames in the Short-Time Fourier Transform (STFT) overlap by 75%. Therefore, the L i's cover differences between the original and synthesized audios at different spatial-temporal resolutions. As shown in Figure 5, the DDSP autoencoder learns to very accurately resynthesize the solo violin dataset. Again, we highly encourage readers to listen to the samples provided in the online supplement 2. A full decomposition of the components is provided Figure 5. High-quality neural audio synthesis has previously required very large autoregressive models or adversarial loss functions. While amenable to an adversarial loss, the DDSP autoencoder achieves these with a straightforward L1 spectrogram loss, a small amount of data, and a relatively simple model. This demonstrates that the model is able to efficiently exploit the bias of the DSP components, while not losing the expressive power of neural networks. For the NSynth dataset, we quantitatively compare the quality of DDSP resynthesis with that of a state-of-the-art baseline using WaveRNN . The models are comparable as they are trained on the same data, provided the same conditioning, and both targeted towards realtime synthesis applications. In Table 1, we compute loudness and fundamental frequency (F0) L 1 metrics described in Section C of the appendix. 
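A sketch of the multi-scale spectral reconstruction loss defined above, using a simple NumPy STFT. The FFT sizes below are an assumed set (the specific sizes are omitted in the text above), the hop size gives the 75% overlap mentioned, and alpha weights the log-magnitude term as in the equation; averaging rather than summing over bins is an illustrative simplification.

```python
import numpy as np

def stft_magnitude(audio, n_fft, overlap=0.75):
    hop = int(n_fft * (1.0 - overlap))
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=-1))

def multi_scale_spectral_loss(target, pred,
                              fft_sizes=(2048, 1024, 512, 256, 128, 64),
                              alpha=1.0, eps=1e-7):
    """Sum over FFT sizes of the L1 distance between magnitude spectrograms
    plus alpha times the L1 distance between log-magnitude spectrograms."""
    loss = 0.0
    for n_fft in fft_sizes:
        S_t = stft_magnitude(target, n_fft)
        S_p = stft_magnitude(pred, n_fft)
        loss += np.mean(np.abs(S_t - S_p))
        loss += alpha * np.mean(np.abs(np.log(S_t + eps) - np.log(S_p + eps)))
    return loss
```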
Despite the strong performance of the baseline, the supervised DDSP autoencoder still outperforms it, especially in F0 L 1. This is not unexpected, as the additive synthesizer directly uses the conditioning frequency to synthesize audio. The unsupervised DDSP autoencoder must learn to infer its own F0 conditioning signal directly from the audio. As described in Section B.4, we improve optimization by also adding a perceptual loss in the form of a pretrained CREPE network . While not as accurate as the supervised DDSP version, the model does a fair job at learning to generate sounds with the correct frequencies without supervision, outperforming the supervised WaveRNN model. Interpolation: Interpretable structure allows for independent control over generative factors. Each component of the factorized latent variables (f (t), l(t), z(t)) independently alters samples along a matching perceptual axis. For example, Figure 3 shows an interpolation between two sound in the loudness conditioning l(t). With other variables held constant, loudness of the synthesized audio closely matches the interpolated input. Similarly, the model reliably matches intermediate pitches between a high pitched f (t) and low pitched f (t). In Table C.2 of the appendix, we quantitatively demonstrate how across interpolations, conditioning independently controls the corresponding characteristics of the audio. With loudness and pitch explicitly controlled by (f (t), l(t)), the model should use the residual z(t) to encode timbre. Although architecture and training do not strictly enforce this encoding, we qualitatively demonstrate how varying z leads to a smooth change in timbre. In Figure 3, we use the smooth shift in spectral centroid, or "center of mass" of a spectrum, to illustrate this behavior. Extrapolation: As described in Section 4.1, f (t) directly controls the additive synthesizer and has structural meaning outside the context of any given dataset. Beyond interpolating between datapoints, the model can extrapolate to new conditions not seen during training. The rightmost plot of Figure 7 demonstrates this by resynthesizing a clip of solo violin after shifting f (t) down an octave and outside the range of the training data. The audio remains coherent and resembles a related instrument such as a cello. f (t) is only modified for the synthesizer, as the decoder is still bounded by the nearby distribution of the training data and produces unrealistic harmonic content if conditioned far outside that distribution. Removing reverb in a "blind" setting, where only reverberated audio is available, is a standing problem in acoustics . However, a benefit of our modular approach to generative modeling is that it becomes possible to completely separate the source audio from the effect of the room. For the solo violin dataset, the DDSP autoencoder is trained with an additional reverb module as shown in Figure 2 and described in Section 3.6. Figure 7 (left) demonstrates that bypassing the reverb module during resynthesis in completely dereverberated audio, similar to recording in an anechoic chamber. The quality of the approach is limited by the underlying generative model, which is quite high for our autoencoder. Similarly, Figure 7 (center) demonstrates that we can also apply the learned reverb model to new audio, in this case singing, and effectively transfer the acoustic environment of the solo violin recordings. Figure 4 demonstrates timbre transfer, converting the singing voice of an author into a violin. 
F0 and loudness features are extracted from the singing voice and the DDSP autoencoder trained on solo violin used for resynthesis. To better match the conditioning features, we first shift the fundamental frequency of the singing up by two octaves to fit a violin's typical register. Next, we transfer the room acoustics of the violin recording (as described in Section 5.3) to the voice before extracting loudness, to better match the loudness contours of the violin recordings. The ing audio captures many subtleties of the singing with the timbre and room acoustics of the violin dataset. Note the interesting "breathing" artifacts in the silence corresponding to unvoiced syllables from the singing. The DDSP library fuses classical DSP with deep learning, providing the ability to take advantage of strong inductive biases without losing the expressive power of neural networks and end-to-end learning. We encourage contributions from domain experts and look forward to expanding the scope of the DDSP library to a wide range of future applications. A APPENDIX Figure 5: Decomposition of a clip of solo violin. Audio is visualized with log magnitude spectrograms. Loudness and fundamental frequency signals are extracted from the original audio. The loudness curve does not exhibit clear note segmentations because of the effects of the room acoustics. The DDSP autoencoder takes those conditioning signals and predicts amplitudes, harmonic distributions, and noise magnitudes. Note that the amplitudes are clearly segmented along note boundaries without supervision and that the harmonic and noise distributions are complex and dynamic despite the simple conditioning signals. Finally, the extracted impulse response is applied to the combined audio from the synthesizers to give the full resynthesis audio. Figure 6: Diagram of the Additive Synthesizer component. The synthesizer generates audio as a sum of sinusoids at harmonic (integer) multiples of the fundamental frequency. The neural network is then tasked with emitting time-varying synthesizer parameters (fundamental frequency, amplitude, harmonic distribution). In this example linear-frequency log-magnitude spectrograms show how the harmonics initially follow the frequency contours of the fundamental. We then factorize the harmonic amplitudes into an overall amplitude envelope that controls the loudness, and a normalized distribution among the different harmonics that determines spectral variations. The model has three encoders: f -encoder that outputs fundamental frequency f (t), l-encoder that outputs loudness l(t), and a z-encoder that outputs residual vector z(t). f -encoder: We use a pretrained CREPE pitch detector as the f -encoder to extract ground truth fundamental frequencies (F0) from the audio. We used the "large" variant of CREPE, which has SOTA accuracy for monophonic audio samples of musical instruments. For our supervised autoencoder experiments, we fixed the weights of the f -encoder like , and for our unsupervised autoencoder experiemnts we jointly learn the weights of a resnet model fed log mel spectrograms of the audio. Full details of the resnet architecture are show in Table 2. l-encoder: We use identical computational steps to extract loudness as . Namely, an A-weighting of the power spectrum, which puts greater emphasis on higher frequencies, followed by log scaling. The vector is then centered according to the mean and standard deviation of the dataset. 
z-encoder: As shown in Figure 8, the encoder first calculates MFCC's (Mel Frequency Cepstrum Coefficients) from the audio. MFCC is computed from the log-mel-spectrogram of the audio with a FFT size of 1024, 128 bins of frequency range between 20Hz to 8000Hz, overlap of 75%. We use only the first 30 MFCCs that correspond to a smoothed spectral envelope. The MFCCs are then passed through a normalization layer (which has learnable shift and scale parameters) and a 512-unit GRU. The GRU outputs (over time) fed to a 512-unit linear layer to obtain z(t). The z embedding reported in this model has 16 dimensions across 250 time-steps. Output Size Table 2: Model architecture for the f(t) encoder using a Resnet on log mel spectrograms. Spectrograms have a frame size of 2048 and a hop size of 512, and are upsampled at the end to have the same time resoultion as other latents (4ms per a frame). All convolutions use "same" padding and a temporal stride of 1. Each residual block uses a bottleneck structure . The final output is a normalized probablity distribution over 128 frequency values (logarithmically scaled between 8.2Hz and 13.3kHz (https://www.inspiredacoustics.com/en/MIDI_note_ numbers_and_center_frequencies)). The finally frequency value is the weighted sum of each frequency by its probability. The decoder's input is the latent tuple (f (t), l(t), z(t)) (250 timesteps). Its outputs are the parameters required by the synthesizers. For example, in the case of the harmonic synthesizer and filtered noise synthesizer setup, the decoder outputs a(t) (amplitudes of the harmonics) for the harmonic synthesizer (note that f (t) is fed directly from the latent), and H (transfer function of the FIR filter) for the filtered noise synthesizer. As shown in Figure 9, we use a "shared-bottom" architecture, which computes a shared embedding from the latent tuple, and then have one head for each of the (a(t), H) outputs. In particular, we apply separate MLPs to each of the (f (t), l(t), z(t)) input. The outputs of the MLPs are concatenated and passed to a 512-unit GRU. We concatenate the GRU outputs with the outputs of the f (t) and l(t) MLPs (in the channel dimenssion) and pass it through a final MLP and Linear layer to get the decoder outputs. The MLP architecture, shown in Figure 10, is a standard MLP with a layer normalization (tf.contrib.layers.layer_norm) before the RELU nonlinearity. In Figure 9, all the MLPs have 3 layers and each layer has 512 units. Because all DDSP components are differentiable, the model is differentiable end-to-end. Therefore, we can apply any SGD optimizer to train the model. We used ADAM optimizer with learning rate 0.001 and exponential learning rate decay 0.98 every 10,000 steps. To help guide the DDSP autoencoder that must predict f (t) on the NSynth dataset, we also added an additional perceptual loss using pretrained models, such as the CREPE pitch estimator and the encoder of the WaveNet autoencoder . Compared to the L1 loss on the spectrogram, the activations of different layers in these models correlate better with the perceptual quality of the audio. After a large-scale hyperparameter search, we obtained our best by using the L1 distance between the activations of the small CREPE model's fifth max pool layer with a weighting of 5 × 10 −5 relative to the spectral loss. Harmonic synthesizer / Additive Synthesis: We use 101 harmonics in the harmonic synthesizer (i.e., a(t)'s dimension is 101). 
Amplitude and harmonic distribution parameters are upsampled with overlapping Hamming window envelopes whose frame size is 128 and hop size is 64. Initial phases are all fixed to zero, as neither the spectrogram loss functions or human perception are sensitive to absolute offsets in harmonic phase. We also do not include synthesis elements to model DC components to signals as they are inaudible and not reflected in the spectrogram losses. We force the amplitudes, harmonic distributions, and filtered noise magnitudes to be non-negative by applying a sigmoid nonlinearity to network outputs. We find a slight improvement in traning stability by modifying the sigmoid to have a scaled output, larger slope by exponentiating, and threshold at a minimum value: Filtered noise synthesizer: We use 65 network output channels as magnitude inputs to the FIR filter of the filtered noise synthesizer. Model Parameters WaveNet Autoencoder 75M WaveRNN 23M GANSynth 15M DDSP Autoencoder (Unsupervised) 12M DDSP Autoencoder (Supervised, NSynth) 7M DDSP Autoencoder (Supervised, Solo Violin) 6M DDSP Autoencoder Tiny (Supervised, Solo Violin) 0.24M Table 3: Parameter counts for different models. All models trained on NSynth dataset except for those marked (Solo Violin). Autoregressive models have the most parameters with GANs requiring less. The DDSP models examined in this paper (which have not been optimized at all for size) require 2 to 3 times less parameters than GANSynth. The unsupervised model has more parameters because of the CREPE (small) f (t) encoder, and the autoencoder has additional parameters for the z(t) encoder. Initial experiments with extremely small models (single GRU, 256 units), have slightly less realistic outputs, but still relatively high quality (as can be heard in the supplemental audio). C EVALUATION DETAILS Loudness L 1 distance: The loudness vector is extracted from the synthesized audio and L 1 distance computed against the input's conditioning loudness vector (ground truth). A better model will produce lower L 1 distances, indicating input and generated loudness vectors closely match. Note this distance is not back-propagated through the network as a training objective. F0 L 1 distance: The F0 L 1 distance is reported in MIDI space for easier interpretation; an average F0 L 1 of 1.0 corresponds to a semitone difference. We use the same confidence threshold of 0.85 in to select portions where there was detectable pitch content, and compute the metric only in these areas. F0 Outliers: Pitch tracking using CREPE, like any pitch tracker, is not completely reliable. Instabilities in pitch tracking, such as sudden octave jumps at low volumes, can errors not due to model performance and need to be accounted for. F0 outliers accounts for pitch tracking imperfections in CREPE vs. genuinely bad samples generated by the trained model. CREPE outputs both an F0 value as well as a F0 confidence. Samples with confidences below a threshold of 0.85 in are labeled as outliers and usually indicate the sample was mostly noise with no pitch or harmonic component. As the model outputs better quality audio, the number of outliers decrease, thus lower scores indicate better performance. Loudness L 1 and F0 L 1 are shown for different interpolation tasks in table C.2. In reconstruction, the model is supplied with the standard (f (t) A, l(t) A, z(t) A ). Loudness (L 1) and F0 (L 1) are computed against the ground truth inputs. 
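The F0 metric above is computed in MIDI space and only on frames where the pitch tracker is confident. Below is a small sketch using the standard Hz-to-MIDI conversion (69 + 12·log2(f/440)) and the 0.85 confidence threshold quoted above; the exact masking and aggregation details are assumptions.

```python
import numpy as np

def hz_to_midi(f_hz, eps=1e-7):
    return 69.0 + 12.0 * np.log2(np.maximum(f_hz, eps) / 440.0)

def f0_l1_midi(f0_true_hz, f0_pred_hz, confidence, threshold=0.85):
    """Mean L1 distance in MIDI (semitone) space, restricted to frames where the
    pitch tracker's confidence exceeds the threshold. An L1 of 1.0 corresponds
    to being off by one semitone on average."""
    mask = confidence >= threshold
    if not np.any(mask):
        return np.nan                          # no confidently pitched frames to score
    diff = np.abs(hz_to_midi(f0_true_hz[mask]) - hz_to_midi(f0_pred_hz[mask]))
    return float(np.mean(diff))
```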
In loudness interpolation, the model is supplied with (f (t) A, l(t) B, z(t) A ), and Loudness (L 1) is calculated using l(t) B as ground truth instead of (l(t) A ). For F0 interpolation, the model is supplied with (f (t) B, l(t) A, z(t) A ) and F0 (L 1) is calculated using f (t) B as ground truth instead of f (t) A. For Z interpolation, the model is supplied with (f (t) A, l(t) A, z(t) B ). While unconstrained sinusoidal oscillator banks are strictly more expressive than harmonic oscillators, we restricted ourselves to harmonic synthesizers for the time being to focus the problem domain. However, this is not a fundamental limitation of the technique and perturbations such as inharmonicity can also be incorporated to handle phenomena such as stiff strings . It is worth mentioning that the modular structure of the synthesizers also makes it possible to define additional losses in terms of different synthesizer outputs and parameters. For example, we may impose an SNR loss to penalize outputs with too much noise if we know the training data consists of mostly clean data. We have not experimented too much with such engineered losses, but we believe they can make training more efficient, even though such engineering methods deviates from the end-to-end training paradigm, | Better audio synthesis by combining interpretable DSP with end-to-end learning. | 1,031 | scitldr |
Spectral Graph Convolutional Networks (GCNs) are a generalization of convolutional networks to learning on graph-structured data. Applications of spectral GCNs have been successful, but limited to a few problems where the graph is fixed, such as shape correspondence and node classification. In this work, we address this limitation by revisiting a particular family of spectral graph networks, Chebyshev GCNs, showing its efficacy in solving graph classification tasks with a variable graph structure and size. Current GCNs also restrict graphs to have at most one edge between any pair of nodes; this restriction is sometimes implied by a dataset, but we relax it for all kinds of datasets. To this end, we propose a novel multigraph network that learns from multi-relational graphs. We explicitly model different types of edges: annotated edges, learned edges with abstract meaning, and hierarchical edges. We also experiment with different ways to fuse the representations extracted from different edge types. We achieve state-of-the-art results on a variety of chemical, social, and vision graph classification benchmarks. | A novel approach to graph classification based on spectral graph convolutional networks and its extension to multigraphs with learnable relations and hierarchical structure. We show state-of-the-art results on chemical, social and image datasets. | 1,032 | scitldr
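For readers unfamiliar with the Chebyshev GCNs the abstract builds on, below is a sketch of a standard Chebyshev spectral graph convolution layer: features are propagated with Chebyshev polynomials of the rescaled normalized graph Laplacian, and each polynomial order has its own weight matrix. This is background illustration of the underlying operator, not the paper's multigraph model; the dense-matrix formulation and eigenvalue rescaling are generic assumptions.

```python
import numpy as np

def chebyshev_gcn_layer(X, A, weights):
    """One Chebyshev spectral graph convolution.

    X       : (N, F_in) node features
    A       : (N, N) symmetric adjacency matrix
    weights : list of K (F_in, F_out) matrices, one per Chebyshev order
    Returns (N, F_out) features: sum_k T_k(L_rescaled) @ X @ W_k.
    """
    N = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(N) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]   # normalized Laplacian

    lam_max = np.linalg.eigvalsh(L).max()          # largest eigenvalue (often approximated as 2)
    L_hat = (2.0 / lam_max) * L - np.eye(N)        # rescale the spectrum to [-1, 1]

    T_prev, T_curr = X, L_hat @ X                  # T_0(L_hat) X and T_1(L_hat) X
    out = T_prev @ weights[0]
    if len(weights) > 1:
        out += T_curr @ weights[1]
    for k in range(2, len(weights)):
        T_next = 2.0 * (L_hat @ T_curr) - T_prev   # Chebyshev recurrence
        out += T_next @ weights[k]
        T_prev, T_curr = T_curr, T_next
    return out
```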
Deep neural networks have been recently demonstrated to be vulnerable to backdoor attacks. Specifically, by altering a small set of training examples, an adversary is able to install a backdoor that can be used during inference to fully control the model’s behavior. While the attack is very powerful, it crucially relies on the adversary being able to introduce arbitrary, often clearly mislabeled, inputs to the training set and can thus be detected even by fairly rudimentary data filtering. In this paper, we introduce a new approach to executing backdoor attacks, utilizing adversarial examples and GAN-generated data. The key feature is that the ing poisoned inputs appear to be consistent with their label and thus seem benign even upon human inspection. Over the last decade, deep learning has made unprecedented progress on a variety of notoriously difficult tasks in computer vision BID16 BID11, speech recognition BID8, machine translation BID28, and game playing BID20 BID27. Despite this remarkable performance, real-world deployment of such systems remains challenging due to concerns about security and reliability. One particular example receiving significant attention is the existence of adversarial examples -inputs with imperceptible adversarial perturbations that are misclassified with high confidence BID29 BID7. Such adversarial perturbations can be constructed for a wide range of models, while requiring minimal model knowledge BID22 BID4 and being applicable to real-world scenarios BID26 BID17 BID1.However, this brittleness during inference is not the only vulnerability of existing ML approaches. Another vulnerability corresponds to a different aspect of the ML pipeline: training. State-of-the-art ML models require large amounts of data to achieve good performance. Unfortunately, large datasets are expensive to generate and curate; it is hence common practice to use training examples sourced from other -often untrusted -sources. This practice is usually justified by the robustness of ML models to input and label noise BID24 ) -bad samples might only slightly degrade the model's performance. While this reasoning is valid when only benign noise is present, it breaks down when the noise is maliciously crafted. Attacks based on injecting such malicious noise to the training set are known as data poisoning attacks BID2.A well-studied form of data poisoning aims to use the malicious samples to reduce the test accuracy of the ing model BID31 BID21 BID19 BID3. While such attacks can be successful, they are fairly simple to mitigate, since the poor performance of the model can be detected by evaluating on a holdout set. Another form of attack, known as targeted poisoning attacks, aims to misclassify a specific set of inputs at inference time BID14. These attacks are harder to detect. Their impact is restricted, however, as they only affect the model's behavior on a limited, pre-selected set of inputs. Recently, BID9 proposed a backdoor attack. The purpose of this attack is to plant a backdoor in any model trained on the poisoned training set. This backdoor is activated during inference by a backdoor trigger which, whenever present in a given input, forces the model to predict a specific (likely incorrect) target label. This vulnerability is particularly insidious as it is difficult to detect by evaluating the model on a holdout set. 
The BID9 attack is based on randomly selecting a small portion of the training set, applying a backdoor trigger to these inputs and changing their labels to the target label. This strategy is very effective. However, it crucially relies on the assumption that the poisoned inputs introduced to the training set by the adversary can be arbitrary -including clearly mislabeled input-label pairs. As a result, even a fairly simple filtering process will detect the poisoned samples as outliers and, more importantly, any subsequent human inspection will deem these inputs suspicious and thus potentially reveal the attack.
Figure 1: An example image, labeled as an airplane, poisoned using different strategies: the BID9 attack, the baseline of the same attack restricted to only clean labels, our GAN-based attack, and our adversarial examples-based attack (left to right). The original BID9 attack image is clearly mislabeled while the rest of the images appear plausible. We use the same pattern as BID9 for consistency, but our attacks use a reduced amplitude, as described in Section B.1.
The goal of this paper is to investigate whether the usage of such clearly mislabeled (and thus suspicious) images is really necessary. That is, can such backdoor attacks be carried out when we insist that each poisoned input and its label must be consistent, even to a human? Our starting point is to analyze the effectiveness of the BID9 attack when a very simplistic data filtering technique is applied. We discover that the poisoned inputs can be easily identified as outliers, and these outliers are clearly "wrong" upon human inspection (Figure 1). Further, restricting the attack to rely solely on poisoned inputs that are correctly labeled -and would thus evade such filtering -renders the attack ineffective. Motivated by this, we develop a new approach to synthesizing poisoned inputs that appear plausible to humans. Our approach consists of making small changes to the inputs in order to make them harder to classify, keeping the changes sufficiently minor to ensure that the original label remains plausible. We perform this transformation with two different methods.
• GAN-based interpolation: we embed each input into the latent space of a GAN BID6 and interpolate poisoned samples towards embeddings of an incorrect class.
• Adversarial ℓp-bounded perturbations: we use an optimization method to maximize the loss of a pre-trained model on the poisoned inputs while staying within an ℓp-ball around the original input.
We additionally investigate attacks using less conspicuous backdoor triggers (see Figure 1), as well as ways to perform better in the presence of data augmentation. We find that both methods are significant improvements over the original attack when it is restricted to use only the ground-truth labels. Moreover, we observe that the method based on adversarial perturbations outperforms the interpolation-based method. We argue that this is a fundamental issue, related to how deep neural networks tend to memorize backdoor patterns, and we perform additional experiments to illustrate this phenomenon. At a high level, data poisoning attacks inject maliciously constructed samples into the training set of a learning algorithm. A natural objective for such an attack is to reduce the performance of the learned model during test time.
This reduction in performance can either target the entire test set -aiming to worsen the accuracy of the model on average -or target a specific set of examples to be misclassified BID31 BID21 BID19 BID3 BID14. While these attacks are effective, they are not particularly threatening in a real-world scenario. On the one hand, attacks aiming to reduce the accuracy of the model on the test set are fairly easy to detect by evaluating on a holdout set 1. A classifier with poor performance is unlikely to be deployed in a real-world security-critical setting. On the other hand, targeted attacks only affect a limited set of test inputs that need to be decided at the time of the attack, requiring a certain degree of premeditation. Recently, BID9 proposed a different approach to data poisoning attacks, so-called backdoor attacks 2. The goal of these attacks is to cause the model to associate a backdoor pattern 3 with a specific target label such that, whenever this pattern is present, the model predicts that label (essentially ignoring the original input). Specifically, the BID9 attack involves modifying a small number of randomly selected inputs in the training set so that they contain the backdoor pattern and are labeled (usually incorrectly) with the target label. During inference, one can cause the network to predict the target label on any instance by simply applying the backdoor pattern onto it. Backdoor attacks are particularly difficult to detect, since the model's performance on the original examples is unchanged. Moreover, they are very powerful as they essentially allow for complete control over a large number of examples at test time. Despite the potency of the BID9 attack, it crucially relies on the assumption that the adversary can inject arbitrary input-label pairs into the training set. Specifically, the backdoor attack we described in Section 2 relies on the inclusion in the training set of inputs that are clearly mislabeled to a human. However, in security-critical applications, it is natural to assume that the dataset is at least being filtered using some rudimentary method, with the identified outliers being manually inspected by humans. As a starting point of our investigation, we examined the BID9 attack (reproduced in Appendix A) in the presence of a simple filtering scheme. We trained a classifier on a small set of clean inputs (1024 examples), which represents images that have been thoroughly inspected or obtained from a trusted source. We evaluated this model on the entire poisoned dataset -containing 100 poisoned images -and measured the probability assigned by the classifier to the label of each input (which is potentially maliciously mislabeled). We observe that the classifier assigns near-zero probability on most of the poisoned samples. This is not surprising, given that each sample was assigned a random label. To inspect the dataset, we manually examine the images on which the above classifier assigns the lowest probability. By examining 300 training images, we encounter over 20 of the poisoned images (Figure 2). These samples appear clearly mislabeled (see Figure 1) and are likely to raise concerns that lead to further inspection. When the attack is restricted to only poisoning examples of the target class (i.e. the adversary is not allowed to change the label of poisoned samples), it becomes essentially ineffective (Figure 2).
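As a concrete illustration, the filtering scheme described above can be sketched in a few lines. The following assumes a classifier already trained on the small clean subset and a PyTorch-style data loader that also yields example indices; the function and variable names are ours, not from the paper.

```python
import torch
import torch.nn.functional as F

def rank_suspicious_examples(clean_model, poisoned_loader, num_to_inspect=300):
    """Score each (possibly poisoned) training example by the probability that a
    classifier trained on a small clean subset assigns to its label, and return
    the indices of the lowest-scoring examples for manual inspection."""
    clean_model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for x, y, idx in poisoned_loader:           # loader yields (image, label, dataset index)
            probs = F.softmax(clean_model(x), dim=1)
            label_prob = probs.gather(1, y.unsqueeze(1)).squeeze(1)
            scores.append(label_prob)
            indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    order = torch.argsort(scores)                   # lowest label probability first
    return indices[order[:num_to_inspect]]
```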
This behavior is expected. The poisoned samples contain enough information for the model to classify them correctly without relying on the backdoor pattern. Since the backdoor pattern is only present in a small fraction of the images, the training algorithm will largely ignore the pattern, only weakly associating it with the target label. Given the detectability of the BID9 attack through a simple data filtering strategy, we investigate methods of improvement. Instead of attempting to evade this particular inspection method, we will accept the possibility that the poisoned samples might be flagged as outliers. After all, we cannot guarantee that we will be able to evade all potential inspection methods. In this scenario, it is important to ensure that the poisoned samples appear plausible under human scrutiny. Standard datasets contain an abundance of low quality samples, so the presence of low quality samples is unlikely to raise suspicion. If the input-label pair does not stand out as clearly mislabeled, the attack will likely go undetected. Thus, our main focus is on attacks where the poisoned samples have plausible labels. We refer to these as clean-label attacks (the notion of clean labels was recently considered by BID25 in the context of targeted poisoning attacks).
1 If an ε fraction of examples is poisoned, the accuracy on a holdout set cannot be affected by more than ε.
2 The results of BID9 were originally focused on the transfer learning setting, but can be straightforwardly applied to the data poisoning setting BID5.
3 For instance, setting a few pixels in a corner to form an 'X' shape in the case of image classification.
Figure 2: Left: After training a model on a small, clean dataset, we examine the training examples that were assigned the lowest probability on their labels. The 300 lowest label probability training samples contain over 20 of the 100 poisoned samples. Right: The BID9 attack, but restricted to only clean labels (only images from the target class are poisoned). The attack is ineffective; even at 25% poisoning, only one class exceeds 50% attack success. The attack success rate is defined as the percentage of test examples not labeled as the target that are classified as the target class when the backdoor pattern is applied.
Recall that in Figure 2 we showed that restricting the BID9 attack to only poison inputs of the target class (i.e. without changing the true labels) renders the attack ineffective. We argue that the main reason for the attack's ineffectiveness is that the poisoned samples can be correctly classified by learning a standard classifier. Since relying on the backdoor trigger is not necessary to correctly classify these inputs, the backdoor attack is unlikely to be successful. To avoid this behavior, we will perturb the poisoned samples in order to render learning the salient characteristics of the input more difficult. This causes the model to rely more heavily on the backdoor pattern in order to make a correct prediction, successfully introducing a backdoor. As discussed earlier, it is important that our perturbations are plausible in the sense that human inspection should not identify the label of a poisoned input as incorrect. We explore two methods of synthesizing these perturbations. We want to emphasize that even though examples poisoned using our approach are not immune to being identified as potential outliers, our inputs will not appear suspicious (by being clearly mislabeled).
Generative models such as GANs BID6 and variational auto-encoders (VAEs) BID13 operate by learning an embedding of the data distribution into a low-dimensional space. An important property of this embedding is that it is "semantically meaningful". By interpolating latent vectors in that embedding, one can obtain a smooth transition from one image into another BID23 (which cannot be done through a simple interpolation of the images in the pixel space). Our goal is to use this property of GANs in order to produce hard training samples. We train a GAN on the training set. This provides us with a generator G: R^d → R^n that, given a random vector z in the d-dimensional latent space (referred to as an encoding), generates an image G(z) in the n-dimensional pixel space. In order to retrieve an encoding for each training image, we optimize over the space of latent encodings to find one that produces an image close to our target in ℓ2 distance. Formally, given a generator G and a target image x ∈ R^n to encode, we define the encoding of x using G to be
E_G(x) = argmin_{z ∈ R^d} ||G(z) − x||_2.
(This inversion method was also used in the context of defenses against adversarial examples BID12.) Now, given the encodings for the training set, we are able to interpolate between classes in a perceptually smooth manner. For some interpolation constant τ, we define the interpolation I_G between images x_1 and x_2 as
I_G(x_1, x_2, τ) = G((1 − τ) E_G(x_1) + τ E_G(x_2)).
Varying τ produces a smooth transition from x_1 to x_2 as seen in FIG1 (even though we are not able to perfectly encode x_1 and x_2). We choose a value of τ that is large enough to make the image harder to learn, but small enough to ensure that the perturbation appears plausible to humans. Adversarial examples BID29 are inputs that have been imperceptibly perturbed with the goal of being misclassified by neural networks. These perturbations have been found to transfer across models and architectures BID29 BID22. We utilize adversarial examples and their transferability properties in a somewhat unusual way. Instead of causing a model to misclassify an input during inference, we use them to cause the model to misclassify during training. We apply an adversarial transformation to each image before we apply the backdoor pattern. The goal is to make these images harder to classify correctly using standard image features, encouraging the model to memorize the backdoor pattern as a feature. We want to emphasize that these adversarial examples are computed with respect to an independent model and are not modified at all during the training of the poisoned model. Our choice of attack is ℓp-bounded perturbations constructed using projected gradient descent (PGD) BID18. For a fixed classifier C with loss L and input x, we construct the adversarial perturbation as
x_adv = argmax_{||x' − x||_p ≤ ε} L(x')
for some ℓp-norm and bound ε. We construct these perturbations based on adversarially trained models since these perturbations are more likely to resemble the target class for large ε BID30. Example poisoned samples are visualized in FIG2.
FIG3: Left: GAN-based interpolation with small τ has performance similar to the baseline, while τ ≥ 0.2 has substantially improved performance at 6% poisoning. Right: Attacks using adversarial perturbations resulted in substantially improved performance on the airplane class relative to the baseline, with performance improving as ε increases. Recall that the attack success rate is the percentage of test images classified incorrectly as the target class when the backdoor pattern is added.
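Returning to the GAN-based interpolation defined above, the encoding-by-optimization and interpolation steps can be sketched as follows, assuming a pre-trained generator G. This is a PyTorch-style sketch; the function names and the choice of optimizer are ours, while the 1000 steps and step size of 0.1 follow the experimental setup described later.

```python
import torch

def encode(G, x, latent_dim, steps=1000, lr=0.1):
    """Approximate E_G(x): find a latent vector z whose generated image G(z)
    is close to x in l2 distance, via gradient descent in latent space."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(G(z) - x, p=2)
        loss.backward()
        opt.step()
    return z.detach()

def interpolate(G, x1, x2, tau, latent_dim):
    """I_G(x1, x2, tau): decode a convex combination of the two encodings.
    tau = 0 approximately reconstructs x1; larger tau moves towards x2."""
    z1, z2 = encode(G, x1, latent_dim), encode(G, x2, latent_dim)
    return G((1 - tau) * z1 + tau * z2)
```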
FIG5: Left: The τ = 0.2 GAN-based interpolation attack shows improvement over the baseline (Figure 2), especially for the 1.5% and 6% poisoning percentages. Right: The ℓ2 norm-bounded attack with ε = 600 resulted in high attack success rates on all classes when poisoning a 1.5% or greater proportion of the target label data. Recall that the attack success rate is the percentage of test images classified incorrectly as the target class when the backdoor pattern is added. A per-class comparison can be found in Appendix C.2.
We find that both approaches led to poisoned images with plausible labels (see Appendix C.3.1 and C.3.2) when the attack is restricted to having small magnitude. Recall that our backdoor attack applies the backdoor trigger to the (slightly perturbed) poisoned inputs without changing their labels. We are interested in the attack success rate, that is, the fraction of test images that are incorrectly classified as the target class when the backdoor is applied. Increasing the magnitude of the attack leads to more powerful attacks (FIG3) but renders the original labels less plausible. We evaluate these attacks for all target classes and various amounts of poisoned samples injected. We find that both approaches significantly increase the effectiveness of the poisoning attack (FIG5) when compared to a baseline attack (Figure 2) that simply introduces the backdoor trigger on clean images (see Section 5 for details). Finally, we observe that attacks based on adversarial perturbations are more effective than GAN-based attacks. We elaborate on this difference in the following section. A per-target class comparison of these methods and the baseline attack described earlier is given in Appendix C.2. In the previous section, we observed that ℓp-bounded adversarial perturbations are more effective for backdoor attacks than our GAN-based interpolation method, especially when the allowed perturbation is large. This might seem surprising at first. Both methods render the images harder to classify without utilizing the backdoor, so we expect the resulting models to crucially rely on the backdoor pattern. Notice, however, that simply utilizing the backdoor pattern is insufficient for a successful backdoor attack. A classifier with a backdoor needs to predict the target class even when the original image is easy to classify correctly. In other words, the reliance on the backdoor pattern needs to be strong enough to overpower the entirety of the signal coming from salient image features. This perspective suggests a natural explanation for the mediocre success of interpolation-based attacks. Inputs created via interpolation do not have a strong enough signal for non-target classes, as their characteristics appear "smoothed out". The adversarially perturbed inputs, on the other hand, do contain such a signal, resulting in a strong reliance on the backdoor pattern. At inference time, this reliance is able to overcome the reliance on natural features. In order to further investigate this hypothesis, we perform experiments where we add Gaussian noise to poisoned inputs (see Appendix B.3). While a small amount of noise makes the attack more effective, increasing the magnitude of the noise has an adverse effect. Intuitively, the poisoned images no longer contain meaningful information about the label of the original image. Thus a classifier that only weakly relies on the backdoor can classify the images correctly. Since the dependence on the backdoor is weak, during testing, the classifier will largely ignore the backdoor as the rest of the image is easy to classify.
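The Gaussian-noise variant used in this comparison (detailed in Appendix B.3) can be sketched as follows; the noise is added before the backdoor trigger is applied. This is a NumPy sketch under our own naming, assuming uint8 images with pixel values in [0, 255].

```python
import numpy as np

def gaussian_noise_poison(image, std, rng=None):
    """Add zero-mean Gaussian noise (in pixel units) to an image to make it harder
    to classify; the backdoor trigger is applied to the result afterwards."""
    rng = rng or np.random.default_rng()
    noisy = image.astype(np.float32) + rng.normal(0.0, std, size=image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```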
In this section so far, we have described how to generate samples that can effectively be used for backdoor attacks while having a plausible label. However, the introduction of the backdoor pattern itself might make these inputs suspicious (see Figure 1). In order to make the attack more insidious, we experiment with backdoor patterns that are less likely to be detectable. We find that this does not have a significant impact on the success of the attack (see Appendix B.1 for details). The resulting set of poisoned images (after adversarial perturbation and addition of the reduced-amplitude pattern) does not differ significantly from the original images (Appendix C.3.3). In order to avoid introducing conflating factors into our study, we trained our models without data augmentation (standard data augmentation introduces flips and crops which might obscure the pattern). In Appendix B.2, we perform additional experiments with standard data augmentation methods and observe that this hinders our attacks 4. We find, however, that by modifying the attack to introduce additional backdoor patterns in different corners of the image (see Appendix FIG10), we recover the attack's success rate. Nevertheless, we find that data augmentation can occasionally still be an obstacle for our methods as it may cause our attack to fail in a stochastic manner (Appendix C.1). We believe that further experimentation with the backdoor pattern can lead to stronger attacks. Recall that our threat model is as follows. The attacker chooses a target class label L and a fraction of training inputs to poison. They then modify these inputs arbitrarily as long as they remain consistent with their original label and introduce a backdoor pattern to these inputs. The pattern consists of a small black-and-white square applied to the bottom-right corner of an image. We choose the same pattern as BID9 for consistency, but we note that understanding the impact of different pattern choices is an important direction for investigation. An example of this pattern applied to an otherwise unchanged image from the dataset is shown as the clean-label BID9 image in Figure 1. A classifier is then trained on this poisoned dataset. To evaluate the resulting network, we consider the data in the test set not labeled as the target class. Recall that the attack success rate is the percentage of these test data that are nonetheless classified as the target when the backdoor pattern is applied. All of our experiments are performed on the CIFAR-10 dataset BID15, containing 50 000 training examples (5000 for each of the ten classes). For each method of increasing the classification difficulty, experiments are performed targeting all ten classes individually. Furthermore, they are tested at each of the following poisoning proportions, which roughly form a quadrupling geometric series: 0.4%, 1.5%, 6%, 25%, and 100%. This series is chosen to evaluate the attack at a wide variety of scales of poisoning percentages 5. Note that these rates represent the fraction of examples poisoned from a single class. Thus, poisoning 6% of the examples of a target class corresponds to poisoning only 0.6% of the entire training set.
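The evaluation just described can be made concrete with a short sketch of the attack success rate computation. This is a PyTorch-style sketch with our own naming; `apply_trigger` stands in for whichever pattern-application routine is used.

```python
import torch

def attack_success_rate(model, test_loader, target_class, apply_trigger):
    """Fraction of test images not labeled as the target class that are
    nonetheless classified as the target once the backdoor trigger is applied."""
    model.eval()
    fooled, total = 0, 0
    with torch.no_grad():
        for x, y in test_loader:
            keep = y != target_class               # drop images already labeled as the target
            if keep.sum() == 0:
                continue
            preds = model(apply_trigger(x[keep])).argmax(dim=1)
            fooled += (preds == target_class).sum().item()
            total += int(keep.sum())
    return fooled / total
```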
In the following experiments, we use a standard residual network (ResNet) BID11 with three groups of residual layers with filter sizes of 16, 16, 32 and 64, and five residual units each. We use a momentum optimizer to train this network with a momentum of 0.9, a weight decay of 0.0002, a batch size of 50, batch normalization, and a step size schedule that starts at 0.1, reduces to 0.01 at 40 000 steps and further to 0.001 at 60 000 steps. The total number of training steps used is 100 000. We used this architecture and training procedure throughout our experiments and did not adjust it in any way. None of the attacks below had any apparent effect on the standard accuracy -that is, the accuracy of the model on non-poisoned test data -except at 100% poisoning. At that extreme, there is a substantial decline, with standard accuracy decreasing by up to 10 percentage points. We found that this decrease is due to the model relying entirely on the backdoor pattern and thus predicting incorrect labels for the entire target class when the pattern is absent. For our experiments, we train a WGAN BID10 6. In order to generate images similar to the training inputs, we optimize over the latent space using 1000 steps of gradient descent with a step size of 0.1, following the procedure of BID12. To improve the image quality and the ability to encode training set images, we train the GAN using only images of the two classes between which we interpolate. We compare attacks that use different degrees of GAN-based interpolation: τ = 0, 0.1, 0.2, 0.3. While our reproductions of the original images are noticeably different, we find that images generated with τ ≤ 0.2 typically remain plausible. An example of these interpolated images is shown in FIG1. A more complete set of examples at the tested values of τ is available in Appendix C.3.1. We observe that increasing τ results in more successful attacks (FIG3). τ = 0.2 was therefore chosen for further investigation, as it yields plausible images with improved performance. We investigate the τ = 0.2 GAN-based interpolation attack on all classes. This shows improvement over the baseline, especially for the 1.5% and 6% poisoning percentages (FIG5). A class-by-class comparison to the baseline is given in Appendix C.2. We construct adversarial examples using a PGD attack on adversarially trained models BID18 7. We used ℓp-adversarially trained models for constructing ℓp-bounded perturbations for p = 2, ∞, as these models have been found to produce adversarial examples that resemble images from other classes when large perturbations are allowed BID30. Note that since our threat model does not allow access to the training procedure, these adversarial perturbations are generated for pre-trained models and not on the fly during training. We compare attacks using ℓ2- and ℓ∞-norm adversarial perturbations of different magnitudes. For this experiment, we consider a random class (the airplane class), with the following maximum perturbations (ε) normalized to the range of pixel values: 300, 600 and 1200 for ℓ2-bounded examples, and 8, 16 and 32 for ℓ∞-bounded examples. There is a clear trend of increasing ε resulting in substantially improved performance (FIG3). At 1.5% poisoning, the middle and higher tested values of ε for each norm achieve over 70% attack success, despite the baseline of restricting the BID9 attack to clean labels having near 0% attack success. These results are shown in FIG3. These adversarial examples look plausible and, as ε increases, appear to interpolate towards other classes. An example of these perturbed images at each of the tested norms and values of ε is shown in FIG2.
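The PGD construction used here follows the formulation given earlier (maximize the surrogate classifier's loss within an ε-ball around the input). The following is a minimal ℓ∞ sketch in PyTorch style; the step size and iteration count are our assumptions, and an ℓ2 variant would project onto the ℓ2 ε-ball instead of clipping coordinate-wise.

```python
import torch
import torch.nn.functional as F

def pgd_linf(surrogate, x, y, eps=16/255, step=2/255, iters=40):
    """Maximize the surrogate classifier's loss within an l-infinity ball of
    radius eps around x; the backdoor trigger is applied to the result later."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                      # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # keep valid pixel values
    return x_adv.detach()
```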
The highest ε tested for each norm resulted in readily apparent distortion, while the lower values of ε resulted in plausible images. See Appendix C.3.2 for more details. Due to the performance and plausibility shown above, ℓ2-based perturbations with ε = 600 were chosen for further investigation. For every class, the attack success rate is substantially higher than the clean-label BID9 attack baseline on all but the lowest tested poisoning percentage (FIG5). We investigated the backdoor attacks of BID9 in the presence of a simple data filtering scheme. While their attack is powerful, it crucially relies on the addition of arbitrary, mostly mislabeled, inputs into the training set and can thus be detected by filtering. Human inspection of the identified outliers will clearly flag the poisoned samples as unnatural. We argue that, for a backdoor attack to be insidious, it must not rely on inputs that appear mislabeled upon examination. To remain successful under the clean-label restriction, we propose perturbing the poisoned inputs to render them more difficult to classify. We restrict the magnitude of these changes so that the true label remains plausible. We propose two methods for increasing classification difficulty: adversarial ℓp-bounded perturbations and GAN-based interpolation. We find that both methods introduce a backdoor more successfully than the clean-label adaptation of the BID9 attack. These findings demonstrate that backdoor attacks can be made significantly harder to detect than one might initially expect. This emphasizes the need for developing principled and effective methods for protecting ML models from such attacks.
A THE ORIGINAL ATTACK OF GU ET AL.
We replicate the experiments of BID9 on the CIFAR-10 dataset BID15. The original work considered the case where the model is trained by an adversary, since they focused on the transfer learning setting. The authors accordingly imposed essentially no constraints on the number of poisoned samples used. In contrast, we study the threat model where an attacker is only allowed to poison a limited number of samples in the dataset. We are thus interested in understanding the fraction of poisoned samples required to ensure that the resulting model indeed has an exploitable backdoor. In Figure A, we plot the attack success rate for different target labels and numbers of poisoned examples injected. We observe that the attack is very successful even with a small (∼ 75) number of poisoned samples. Note that the poisoning percentages here are calculated relative to the entire dataset. The x-axis thus corresponds to the same scale in terms of examples poisoned as the rest of the plots. While the attack is very effective, most image labels are clearly incorrect (Figure 1). Despite our above focus on the plausibility of the base image, the backdoor pattern itself could also cause plausibility problems if its presence appears unnatural. To mitigate this potential suspicion, we consider a modified backdoor pattern. Instead of entirely replacing the bottom-right 3-pixel-by-3-pixel square with the pattern, we perturb the original pixel values by a backdoor pattern amplitude. In pixels that are white in the original pattern, we add this amplitude to each color channel (i.e. red, green and blue). Conversely, for black pixels, we subtract this amplitude from each channel. We then clip these values to the normal range of pixel values (here, [0, 255]). Note that when the backdoor pattern amplitude is 255 or greater, this attack is always equivalent to applying the original backdoor pattern.
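A sketch of the reduced-amplitude trigger just described, for images stored as HxWx3 uint8 arrays (NumPy). The exact 3x3 black-and-white layout below is illustrative only, since the paper's specific pattern is not reproduced here.

```python
import numpy as np

# +1 where the original pattern is white, -1 where it is black; the exact layout
# used in the paper is not specified here, so this checkerboard is illustrative.
PATTERN_SIGN = np.array([[ 1, -1,  1],
                         [-1,  1, -1],
                         [ 1, -1,  1]], dtype=np.int16)

def apply_reduced_amplitude_trigger(image, amplitude=32):
    """Add/subtract `amplitude` in the bottom-right 3x3 corner of an HxWx3 uint8
    image, then clip back to [0, 255]; amplitude >= 255 reproduces the original
    pattern-replacement attack."""
    out = image.astype(np.int16)
    out[-3:, -3:, :] += amplitude * PATTERN_SIGN[:, :, None]
    return np.clip(out, 0, 255).astype(np.uint8)
```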
We extend our proposed adversarial example-based attack to reduced backdoor pattern amplitudes. We explore this attack with a random class (the dog class), considering backdoor pattern amplitudes of 16, 32 and 64. All (non-zero) backdoor pattern amplitudes resulted in substantial attack success rates at poisoning percentages of 6% and higher. Higher amplitudes conferred higher attack success rates. At the two lower poisoning percentages tested, the attack success rate was near zero. These results are shown in FIG8. Image plausibility is greatly improved by reducing the backdoor pattern amplitude. Examples of an image at varying backdoor pattern amplitudes are shown in FIG9. A more complete set of examples is available in Appendix C.3.3. We have chosen a backdoor pattern amplitude of 32 for further investigation as a balance between conspicuousness and attack success. We tested this attack on all classes, finding similar performance across the classes. These results are shown in FIG8. Data augmentation is commonly used to reduce overfitting while training deep learning models. The general approach is to not only train on the original training set, but also on the same data transformed in simple ways. Common techniques include cropping and flipping, which can be problematic for our attack given that they might obscure the pattern. It is important to understand the impact of data augmentation on our attack, given its wide usage. To improve attack success when using data augmentation, we consider an alternate backdoor pattern, where the original pattern and flipped versions of it are applied to all four corners. This aims to encourage backdoor pattern recognition even when images are flipped or randomly cropped. An example of this pattern (with our chosen amplitude of 32) applied to an example image is shown in FIG10. The backdoor pattern duplication is motivated by the desire to ensure at least one corner pattern is still visible after cropping and to remain invariant to horizontal flips. We investigate and compare our reduced backdoor pattern amplitude attack when training both with and without data augmentation. For each of these cases, we also compare the original (one-corner) and four-corner backdoor patterns. These initial experiments were performed on a random class (the frog class, FIG10). We see that, when data augmentation is not used, there is little difference in performance between the four-corner backdoor pattern attack and the original one-corner attack. When data augmentation was used, however, there is a large difference between these attacks. Use of the one-corner backdoor pattern resulted in substantially reduced attack success for all poisoning percentages, while the four-corner backdoor pattern attack achieves over 90% attack success rates for poisoning percentages of 6% and greater. These results are shown in FIG10. These results show that the performance improvement under data augmentation does not come primarily from the backdoor pattern simply being applied to more pixels. Rather, the four-corner pattern ensures at least one corner's backdoor pattern will remain visible after the data augmentation is applied.
FIG10: Right: Using the four-corner pattern does not provide a substantial benefit over the one-corner pattern when data augmentation is not used. When data augmentation is used, however, the difference in performance is stark, with the one-corner pattern achieving much lower attack success rates.
We then explored the performance of this four-corner attack under data augmentation on all classes.
For comparison, we similarly investigated the original, one-corner attack's performance under data augmentation. The one-corner attack resulted in a near-zero attack success rate across almost all of the classes and poisoning percentages. The four-corner attack showed generally similar results to the exploratory experiment. These results are shown in FIG11. However, some classes showed varying degrees of stochasticity in their resulting attack success rates. This was investigated by altering the random seeds used in network training. The ship class performed particularly poorly, with only one datum across three runs showing a high attack success rate. Other classes appeared to be poisoned successfully much more stably. The one- and four-corner all-class attack results are shown in FIG11. We present two other runs of the four-corner augmentation-resistant attack in Appendix C.1, along with graphs showing the minimum, median and maximum attack success rates achieved across each of the three runs. Gaussian noise with a zero mean and varying standard deviations was added to examples before application of the backdoor pattern. This was used as the method to increase the difficulty of the examples. As shown in FIG12, we found that there is some improvement at low standard deviations. At higher standard deviations, however, the performance degrades substantially. As discussed earlier, at high standard deviations of Gaussian noise, poisoned images do not contain meaningful information about the label of the original image anymore. Thus they can be easily classified correctly by using the backdoor with relatively small weight. Two additional runs of the final attack on all classes are given below. The random seed used in training was varied between the run shown in FIG11 and each run below. For each class and poisoning percentage tested, we also calculate the minimum, median and maximum values across the three runs. These results are presented below. We compare the performance of the baseline of the BID9 attack restricted to only clean labels, our GAN-based interpolation attack, and our adversarial perturbation-based attack for each class. The adversarial examples-based attack substantially outperforms the other two at all but the lowest poisoning percentage. The GAN-based attack usually outperforms the baseline, but by a smaller margin than the adversarial examples-based attack. Each row shows two sets of randomly chosen examples from a single class. In each set, the leftmost image is the original image from the CIFAR-10 dataset and the subsequent images are the corresponding image interpolated using a GAN. At the top of the first row, each column's degree of interpolation is given. The τ = 0 examples show that we were unable to perfectly encode the image. As τ increases, the images show increased distortion. Each row shows five pairs of examples from a single class. The left image in each pair is the original image from the CIFAR-10 dataset and the right image is the corresponding image perturbed using ℓ2-norm adversarial perturbations (bounded by ε = 600) and with the reduced-amplitude backdoor pattern applied (using an amplitude of 16). It should be noted that the poisoned images rarely change enough to appear mislabeled. | We show how to successfully perform backdoor attacks without changing training labels. | 1,033 | scitldr
Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network’s reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyperparameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance. Recent work has demonstrated that deep neural networks (DNNs) are capable of memorizing extremely large datasets such as ImageNet BID39. Despite this capability, DNNs in practice achieve low generalization error on tasks ranging from image classification BID17 to language translation BID37. These observations raise a key question: why do some networks generalize while others do not? Answers to these questions have taken a variety of forms. A variety of studies have related generalization performance to the flatness of minima and PAC-Bayes bounds BID18 BID20 BID27 BID14, though recent work has demonstrated that sharp minima can also generalize BID13. Others have focused on the information content stored in network weights BID0, while still others have demonstrated that stochastic gradient descent itself encourages generalization BID8 BID34 BID36. Here, we use ablation analyses to measure the reliance of trained networks on single directions. We define a single direction in activation space as the activation of a single unit or feature map or some linear combination of units in response to some input. We find that networks which memorize the training set are substantially more dependent on single directions than those which do not, and that this difference is preserved even across sets of networks with identical topology trained on identical data, but with different generalization performance. Moreover, we found that as networks begin to overfit, they become more reliant on single directions, suggesting that this metric could be used as a signal for early stopping. We also show that networks trained with batch normalization are more robust to cumulative ablations than networks trained without batch normalization and that batch normalization decreases the class selectivity of individual feature maps, suggesting an alternative mechanism by which batch normalization may encourage good generalization performance. Finally, we show that, despite the focus on selective single units in the analysis of DNNs (and in neuroscience; BID21 BID40 BID28 BID9), the class selectivity of single units is a poor predictor of their importance to the network's output.
In this study, we will use a set of perturbation analyses to examine the relationship between a network's generalization performance and its reliance upon single directions in activation space. We will then use a neuroscience-inspired measure of class selectivity to compare the selectivity of individual directions across networks with variable generalization performance and examine the relationship between class selectivity and importance. We analyzed three models: a 2-hidden layer MLP trained on MNIST, an 11-layer convolutional network trained on CIFAR-10, and a 50-layer residual network trained on ImageNet. In all experiments, ReLU nonlinearities were applied to all layers but the output. Unless otherwise noted, batch normalization was used for all convolutional networks BID19. For the ImageNet ResNet, top-5 accuracy was used in all cases. Partially corrupted labels As in BID39, we used datasets with differing fractions of randomized labels to ensure varying degrees of memorization. To create these datasets, a given fraction of labels was randomly shuffled and assigned to images, such that the distribution of labels was maintained, but any true patterns were broken. Ablations We measured the importance of a single direction to the network's computation by asking how the network's performance degrades once the influence of that direction was removed. To remove a coordinate-aligned single direction, we clamped the activity of that direction to a fixed value (i.e., ablating the direction). Ablations were performed either on single units in MLPs or on entire feature maps in convolutional networks. For brevity, we will refer to both of these as 'units.' Critically, all ablations were performed in activation space, rather than weight space. More generally, to evaluate a network's reliance upon sets of single directions, we asked how the network's performance degrades as the influence of increasing subsets of single directions was removed by clamping them to a fixed value (analogous to removing increasingly large subspaces within activation space). This analysis generates curves of accuracy as a function of the number of directions ablated: the more reliant a network is on low-dimensional activation subspaces, the more quickly the accuracy will drop as single directions are ablated. Interestingly, we found that clamping the activation of a unit to the empirical mean activation across the training or testing set was more damaging to the network's performance than clamping the activation to zero (see Appendix A.1). We therefore clamped activity to zero for all ablation experiments. Addition of noise As the above analyses perturb units individually, they only measure the influence of coordinate-aligned single directions. To test networks' reliance upon random single directions, we added Gaussian noise to all units with zero mean and progressively increasing variance. To scale the variance appropriately for each unit, the variance of the noise added was normalized by the empirical variance of the unit's activations across the training set. To quantify the class selectivity of individual units, we used a metric inspired by the selectivity indices commonly used in systems neuroscience (BID12; BID9).
Figure 1: Memorizing networks are more sensitive to cumulative ablations. Networks were trained on MNIST (2-hidden layer MLP, a), CIFAR-10 (11-layer convolutional network, b), and ImageNet (50-layer ResNet, c).
In a, all units in all layers were ablated, while in b and c, only feature maps in the last three layers were ablated. Error bars represent standard deviation across 10 random orderings of units to ablate.
The class-conditional mean activity was first calculated across the test set, and the selectivity index was then calculated as follows:
selectivity = (µ_max − µ_−max) / (µ_max + µ_−max)
with µ_max representing the highest class-conditional mean activity and µ_−max representing the mean activity across all other classes. For convolutional feature maps, activity was first averaged across all elements of the feature map. This metric varies from 0 to 1, with 0 meaning that a unit's average activity was identical for all classes, and 1 meaning that a unit was only active for inputs of a single class. We note that this metric is not a perfect measure of information content in single units; for example, a unit with a little information about every class would have a low class selectivity index. However, it does measure the discriminability of classes along a given direction. The selectivity index also identifies units with the same class tuning properties which have been highlighted in the analysis of DNNs BID21 BID38 BID11 BID40 BID28. However, in addition to class selectivity, we replicate all of our results using mutual information, which, in contrast to class selectivity, should highlight units with information about multiple classes, and we find qualitatively similar outcomes (Appendix A.5). We also note that while a class can be viewed as a highly abstract feature, implying that our results may generalize to feature selectivity, we do not examine feature selectivity in this work. Here, we provide a rough intuition for why a network's reliance upon single directions might be related to generalization performance. Consider two networks trained on a large, labeled dataset with some underlying structure. One of the networks simply memorizes the labels for each input example and will, by definition, generalize poorly ('memorizing network') while the other learns the structure present in the data and generalizes well ('structure-finding network'). The minimal description length of the model should be larger for the memorizing network than for the structure-finding network. As a result, the memorizing network should use more of its capacity than the structure-finding network, and by extension, more single directions. Therefore, if a random single direction is perturbed, the probability that this perturbation will interfere with the representation of the data should be higher for the memorizing network than for the structure-finding network 2. To test whether memorization leads to greater reliance on single directions, we trained a variety of network types on datasets with differing fractions of randomized labels and evaluated their performance as progressively larger fractions of units were ablated (see Sections 2.2 and 2.1). By definition, these curves must begin at the network's training accuracy (approximately 1 for all networks tested) and fall to chance levels when all directions have been ablated. To rule out variance due to the specific order of unit ablation, all experiments were performed with multiple random ablation orderings of units. As many of the models were trained on datasets with corrupted labels and, by definition, cannot generalize, training accuracy was used to evaluate model performance.
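The cumulative ablation procedure just described can be sketched as follows, assuming activations can be intercepted with forward hooks. This is a PyTorch-style sketch; the hook mechanics and the `eval_accuracy` helper are our assumptions, not the paper's implementation.

```python
import torch

def ablation_curve(model, layer, num_units, eval_accuracy, num_orders=10):
    """Accuracy as progressively larger random subsets of units in `layer` are
    clamped to zero; averaged over `num_orders` random ablation orderings."""
    ablated = set()

    def hook(_module, _inp, out):
        if ablated:
            idx = torch.tensor(sorted(ablated), device=out.device)
            out[:, idx] = 0.0          # clamp units (or whole feature maps) to zero
        return out

    handle = layer.register_forward_hook(hook)
    curves = []
    for _ in range(num_orders):
        order = torch.randperm(num_units).tolist()
        ablated.clear()
        curve = [eval_accuracy(model)]            # accuracy with 0 units ablated
        for unit in order:
            ablated.add(unit)
            curve.append(eval_accuracy(model))
        curves.append(curve)
    handle.remove()
    return torch.tensor(curves).mean(dim=0)       # mean ablation curve
```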
Consistent with our intuition, we found that networks trained on varying fractions of corrupted labels were significantly more sensitive to cumulative ablations than those trained on datasets comprised of true labels, though curves were not always perfectly ordered by the fraction of corrupted labels (Fig. 1). We next asked whether this effect was present if networks were perturbed along random bases. To test this, we added noise to each unit (see Section 2.2). Again, we found that networks trained on corrupted labels were substantially and consistently more sensitive to noise added along random bases than those trained on true labels (Fig. 2). The above results apply to networks which are forced to memorize at least a portion of the training set -there is no other way to solve the task. However, it is unclear whether these results would apply to networks trained on uncorrupted data. In other words, do the solutions found by networks with the same topology and data, but different generalization performance, exhibit differing reliance upon single directions? To test this, we trained 200 networks on CIFAR-10, and evaluated their generalization error and reliance on single directions. All networks had the same topology and were trained on the same dataset (unmodified CIFAR-10). Individual networks only differed in their random initialization (drawn from identical distributions), the data order used during training, and their learning rate 3. We found that the 5 networks with the best generalization performance were more robust to the ablation of single directions than the 5 networks with the worst generalization performance (Fig. 3a). To quantify this further, we measured the area under the ablation curve for each of the 200 networks and plotted it as a function of generalization error (Fig. 3b). Interestingly, networks appeared to undergo a discrete regime shift in their reliance upon single directions; however, this effect might have been caused by degeneracy in the set of solutions found by the optimization procedure, and we note that there was also a negative correlation present within clusters (e.g., top left cluster). These results demonstrate that the relationship between generalization performance and single direction reliance is not merely a side-effect of training with corrupted labels, but is instead present even among sets of networks with identical training data. This relationship raises an intriguing question: can single direction reliance be used to estimate generalization performance without the need for a held-out test set? And if so, might it be used as a signal for early stopping or hyperparameter selection? As a proof-of-principle experiment for early stopping, we trained an MLP on MNIST and measured the area under the cumulative ablation curve (AUC) over the course of training along with the train and test loss. Interestingly, we found that the point in training at which the AUC began to drop was the same point that the train and test loss started to diverge (Fig. 4a). Furthermore, we found that AUC and test loss were negatively correlated (Spearman's correlation: -0.728; Fig. 4b). As a proof-of-principle experiment for hyperparameter selection, we trained 192 CIFAR-10 models with different hyperparameter settings (96 hyperparameters with 2 repeats each; see Appendix A.2). We found that AUC and test accuracy were highly correlated (Spearman's correlation: 0.914; Fig. 4c),
and by performing random subselections of 48 hyperparameter settings, AUC selected one of the top 1, 5, and 10 settings 13%, 83%, and 98% of the time, respectively, with an average difference in test accuracy between the best model selected by AUC and the optimal model of only 1 ± 1.1% (mean ± std). These results suggest that single direction reliance may serve as a good proxy for hyperparameter selection and early stopping, but further work will be necessary to evaluate whether these results hold in more complicated datasets. Dropout Our experiments are reminiscent of using dropout at training time, and upon first inspection, dropout may appear to discourage networks' reliance on single directions BID35. However, while dropout encourages networks to be robust to cumulative ablations up until the dropout fraction used in training, it should not discourage reliance on single directions past that point. Given enough capacity, a memorizing network could effectively guard against dropout by merely copying the information stored in a given direction to several other directions. However, the network will only be encouraged to make the minimum number of copies necessary to guard against the dropout fraction used in training, and no more. In such a case, the network would be robust to dropout so long as all redundant directions were not simultaneously removed, yet still be highly reliant on single directions past the dropout fraction used in training. To test whether this intuition holds, we trained MLPs on MNIST with dropout probabilities ∈ {0.1, 0.2, 0.3} on both corrupted and unmodified labels. Consistent with the observation in BID4, we found that networks with dropout trained on randomized labels required more epochs to converge and converged to worse solutions at higher dropout probabilities, suggesting that dropout does indeed discourage memorization. However, while networks trained on both corrupted and unmodified labels exhibited minimal loss in training accuracy as single directions were removed up to the dropout fraction used in training, past this point, networks trained on randomized labels were much more sensitive to cumulative ablations than those trained on unmodified labels (Fig. 5a). Interestingly, networks trained on unmodified labels with different dropout fractions were all similarly robust to cumulative ablations. These results suggest that while dropout may serve as an effective regularizer to prevent memorization of randomized labels, it does not prevent over-reliance on single directions past the dropout fraction used in training. Batch normalization In contrast to dropout, batch normalization does appear to discourage reliance upon single directions. To test this, we trained convolutional networks on CIFAR-10 with and without batch normalization and measured their robustness to cumulative ablation of single directions. Networks trained with batch normalization were consistently and substantially more robust to these ablations than those trained without batch normalization (Fig. 5b). This suggests that in addition to reducing covariate shift, as has been proposed previously BID19, batch normalization also implicitly discourages reliance upon single directions.
Figure 6: Batch normalization decreases class selectivity and increases mutual information. Distributions of class selectivity (a) and mutual information (b) for networks trained with (blue) and without batch normalization (purple). Each distribution comprises 4 model instances trained on uncorrupted CIFAR-10.
Our results thus far suggest that networks which are less reliant on single directions exhibit better generalization performance. This may appear counter-intuitive in light of extensive past work in both neuroscience and deep learning which highlights single units or feature maps which are selective for particular features or classes BID21 BID38 BID11 BID40 BID28. Here, we will test whether the class selectivity of single directions is related to the importance of these directions to the network's output. First, we asked whether batch normalization, which we found to discourage reliance on single directions, also influences the distribution of information about class across single directions. We used the selectivity index described above (see Section 2.3) to quantify the discriminability between classes based on the activations of single feature maps across networks trained with and without batch normalization. Interestingly, we found that while networks trained without batch normalization exhibited a large fraction of feature maps with high class selectivity 4, the class selectivity of feature maps in networks trained with batch normalization was substantially lower (Fig. 6a). In contrast, we found that batch normalization increases the mutual information present in feature maps (Fig. 6b). These results suggest that batch normalization actually discourages the presence of feature maps with concentrated class information and rather encourages the presence of feature maps with information about multiple classes, raising the question of whether or not such highly selective feature maps are actually beneficial. We next asked whether the class selectivity of a given unit was predictive of the impact on the network's loss of ablating said unit. Since these experiments were performed on networks trained on unmodified labels, test loss was used to measure network impact. For MLPs trained on MNIST, we found that there was only a slight correlation (Spearman's correlation: 0.095) between a unit's class selectivity and the impact of its ablation, and that many highly selective units had minimal impact when ablated (FIG2). By analyzing convolutional networks trained on CIFAR-10 and ImageNet, we again found that, across layers, the ablation of highly selective feature maps was no more impactful than the ablation of non-selective feature maps (FIG2). In fact, in the CIFAR-10 networks, there was actually a negative correlation between class selectivity and feature map importance (Spearman's correlation: -0.428, FIG2). To test whether this relationship was depth-dependent, we calculated the correlation between class selectivity and importance separately for each layer, and found that the vast majority of the negative correlation was driven by early layers, while later layers exhibited no relationship between class selectivity and importance (FIG2). Interestingly, in all three networks, ablations in early layers were more impactful than ablations in later layers, consistent with theoretical observations BID29. Additionally, we performed all of the above experiments with mutual information in place of class selectivity, and found qualitatively similar results (Appendix A.5).
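For concreteness, the comparison between class selectivity and ablation importance can be sketched as follows (NumPy/SciPy under our own naming; the class-conditional mean activations and per-unit loss increases are assumed to have been computed already, following Sections 2.2 and 2.3).

```python
import numpy as np
from scipy.stats import spearmanr

def class_selectivity(class_means):
    """Selectivity index per unit from a (num_classes, num_units) array of
    class-conditional mean activations: (mu_max - mu_-max) / (mu_max + mu_-max)."""
    num_classes = class_means.shape[0]
    mu_max = class_means.max(axis=0)
    mu_rest = (class_means.sum(axis=0) - mu_max) / (num_classes - 1)
    return (mu_max - mu_rest) / (mu_max + mu_rest + 1e-12)

def selectivity_vs_importance(class_means, loss_increase_per_unit):
    """Correlate each unit's selectivity with the test-loss increase caused by
    ablating that unit (Spearman rank correlation)."""
    return spearmanr(class_selectivity(class_means), loss_increase_per_unit)
```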
As a final test, we compared the class selectivity to the L1-norm of the filter weights, a metric which has been found to be a successful predictor of feature map importance in the model pruning literature BID22. Consistent with our previous observations, we found that class selectivity was largely unrelated to the L1-norm of the filter weights, and if anything, the two were negatively correlated (Fig. A3, see Appendix A.4 for details). Taken together, these results suggest that class selectivity is not a good predictor of importance, and imply that class selectivity may actually be detrimental to network performance. Further work will be necessary to examine whether class and/or feature selectivity is harmful or helpful to network performance. Much of this work was directly inspired by BID39, and we replicate their results using partially corrupted labels on CIFAR-10 and ImageNet. By demonstrating that memorizing networks are more reliant on single directions, we also provide an answer to one of the questions they posed: is there an empirical difference between networks which memorize and those which generalize? Our work is also related to work linking generalization and the sharpness of minima BID18 BID20 BID27. These studies argue that flat minima generalize better than sharp minima (though BID13 recently found that sharp minima can also generalize well). This is consistent with our work, as flat minima should correspond to solutions in which perturbations along single directions have little impact on the network output. Another approach to generalization has been to contextualize it in information theory. For example, BID0 demonstrated that networks trained on randomized labels store more information in their weights than those trained on unmodified labels. This notion is also related to BID33, which argues that during training, networks proceed first through a loss minimization phase followed by a compression phase. Here again, our work is consistent, as networks with more information stored in their weights (i.e., less compressed networks) should be more reliant upon single directions than compressed networks. More recently, BID4 analyzed a variety of properties of networks trained on partially corrupted labels, relating performance and time-to-convergence to capacity. They also demonstrated that dropout, when properly tuned, can serve as an effective regularizer to prevent memorization. However, we found that while dropout may discourage memorization, it does not discourage reliance on single directions past the dropout probability. We found that class selectivity is a poor predictor of unit importance. This observation is consistent with a variety of recent studies in neuroscience. In one line of work, the benefits of neural systems which are robust to coordinate-aligned noise have been explored BID6. Perturbation analyses have been performed for a variety of purposes. In the model pruning literature, many studies have removed units with the goal of generating smaller models with similar performance BID22 BID3 BID24, and recent work has explored methods for discovering maximally important directions BID30. Recently, BID10 used cumulative ablations to measure network robustness, though the relationship to generalization was not explored. A variety of studies within deep learning have highlighted single units which are selective for features or classes BID21 BID38 BID11 BID40 BID28 BID1. Additionally, BID1 analyzed the minimum number of sufficient feature maps (sorted by a measure of selectivity) to achieve a given accuracy. However, none of the above studies has tested the relationship between a unit's class selectivity or information content and its necessity to the network's output.
BID7 have quantified a related metric, concept selectivity, across layers and networks, finding that units get more concept-selective with depth, which is consistent with our own observations regarding class selectivity (see Appendix A.3). However, they also observed a correlation between the number of concept-selective units and performance on the action40 dataset across networks and architectures. It is difficult to compare these directly, as the data used are substantially different as is the method of evaluating selectivity. Nevertheless, we note that BID7 measured the absolute number of concept-selective units across networks with different total numbers of units and depths. The relationship between the number of concept-selective units and network performance may therefore arise as a of a larger number of total units (if a fixed fraction of units is concept-selective) and increased depth (we both observed that selectivity increases with depth). In this work, we have taken an empirical approach to understand what differentiates neural networks which generalize from those which do not. Our experiments demonstrate that generalization capability is related to a network's reliance on single directions, both in networks trained on corrupted and uncorrupted data, and over the course of training for a single network. They also show that batch normalization, a highly successful regularizer, seems to implicitly discourage reliance on single directions. One clear extension of this work is to use these observations to construct a regularizer which more directly penalizes reliance on single directions. As it happens, the most obvious candidate to regularize single direction reliance is dropout (or its variants), which, as we have shown, does not appear to regularize for single direction reliance past the dropout fraction used in training (Section 3.3). Interestingly, these suggest that one is able to predict a network's generalization performance without inspecting a held-out validation or test set. This observation could be used in several interesting ways. First, in situations where labeled training data is sparse, testing networks' reliance on single directions may provide a mechanism to assess generalization performance without sacrificing training data to be used as a validation set. Second, by using computationally cheap empirical measures of single direction reliance, such as evaluating performance at a single ablation point or sparsely sampling the ablation curve, this metric could be used as a signal for early-stopping or hyperparameter selection. We have shown that this metric is viable in simple datasets (Section 3.2), but further work will be necessary to evaluate viability in more complicated datasets. Another interesting direction for further research would be to evaluate the relationship between single direction reliance and generalization performance across different generalization regimes. In this work, we evaluate generalization in which train and test data are drawn from the same distribution, but a more stringent form of generalization is one in which the test set is drawn from a unique, but overlapping distribution with the train set. The extent to which single direction reliance depends on the overlap between the train and test distributions is also worth exploring in future research. This work makes a potentially surprising observation about the role of individually selective units in DNNs. 
We found not only that the class selectivity of single directions is largely uncorrelated with their ultimate importance to the network's output, but also that batch normalization decreases the class selectivity of individual feature maps. This suggests that highly class selective units may actually be harmful to network performance. In addition, it implies than methods for understanding neural networks based on analyzing highly selective single units, or finding optimal inputs for single units, such as activation maximization BID15 ) may be misleading. Importantly, as we have not measured feature selectivity, it is unclear whether these will generalize to featureselective directions. Further work will be necessary to clarify all of these points. To remove the influence of a given direction, its value should be fixed or otherwise modified such that it is no longer dependent on the input. However, the choice of such a fixed value can have a substantial impact. For example, if its value were clamped to one which is highly unlikely given its distribution of activations across the training set, network performance would likely suffer drastically. Here, we compare two methods for ablating directions: ablating to zero and ablating to the empirical mean over the training set. Using convolutional networks trained on CIFAR-10, we performed cumulative ablations, either ablating to zero or to the feature map's mean (means were calculated independently for each element of the feature map), and found that ablations to zero were significantly less damaging than ablations to the feature map's mean FIG4. Interestingly, this corresponds to the ablation strategies generally used in the model pruning literature BID22 BID3 BID24. MNIST MLPs For class selectivity, generalization, early stopping, and dropout experiments, each layer contained 128, 512, 2048 and 2048 units, respectively. All networks were trained for 640 epochs, with the exception of dropout networks which were trained for 5000 epochs. CIFAR-10 ConvNets Convolutional networks were all trained on CIFAR-10 for 100 epochs. Layer sizes were: 64, 64, 128, 128, 128, 256, 256, 256, 512, 512, 512, with strides of 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, respectively. All kernels were 3x3. For the hyperparameter sweep used in Section 3.2, learning rate and batch size were evaluated using a grid search. ImageNet ResNet 50-layer residual networks BID17 were trained on ImageNet using distributed training with 32 workers and a batch size of 32 for 200,000 steps. Blocks were structured as follows (stride, filter sizes, output channels): (1x1, 64, 64, 256) x 2, (2x2, 64, 64, 256), (1x1, 128, 128, 512) x 3, (2x2, 128, 128, 512), (1x1, 256, 256, 1024) x 5, (2x2, 256, 256, 1024), (1x1, 512, 512, 2048) x 3. For training with partially corrupted labels, we did not use any data augmen- tation, as it would have dramatically increasing the effective training set size, and hence prevented memorization. Here, we evaluate the distribution of class selectivity as a function of depth. In both networks trained on CIFAR-10 (Fig. A2a) and ImageNet (Fig. A2b), selectivity increased as a function of depth. This is consistent with BID7, who show that concept-selectivity increases with depth. 
It is also consistent with BID2, who show depth increases the linear decodability of class information (though they evaluate linear decodability based on an entire layer rather than a single unit). A.4 RELATIONSHIP BETWEEN CLASS SELECTIVITY AND THE FILTER WEIGHT L1-NORM Importantly, our results on the lack of relationship between class selectivity and importance do not suggest that there are no directions which are more or less important to the network's output, nor do they suggest that these directions are not predictable; they merely suggest that class selectivity is not a good predictor of importance. As a final test of this, we compared class selectivity to the L1-norm of the filter weights, a metric which has been found to be strongly correlated with the impact of removing a filter in the model pruning literature BID22. Since the L1-norm of the filter weights is predictive of the impact of a feature map's removal, if class selectivity is also a good predictor, the two metrics should be correlated. In the ImageNet network, we found that there was no correlation between the L1-norm of the filter weights and the class selectivity (Fig. A3a), while in the CIFAR-10 network, we found there was actually a negative correlation (Fig. A3b). To examine whether mutual information, which, in contrast to class selectivity, highlights units with information about multiple classes, is a good predictor of importance, we performed the same experiments as in Section 3.4 with mutual information in place of class selectivity. We found that, while the results were a little less consistent (e.g., there appears to be some relationship in very early and very late layers in CIFAR-10), mutual information was generally a poor predictor of unit importance (FIG6). | We find that deep networks which generalize poorly are more reliant on single directions than those that generalize well, and evaluate the impact of dropout and batch normalization, as well as class selectivity on single direction reliance. | 1,034 | scitldr |
Typical amortized inference in variational autoencoders is specialized for a single probabilistic query. Here we propose an inference network architecture that generalizes to unseen probabilistic queries. Instead of an encoder-decoder pair, we can train a single inference network directly from data, using a cost function that is stochastic not only over samples, but also over queries. We can use this network to perform the same inference tasks as we would in an undirected graphical model with hidden variables, without having to deal with the intractable partition function. The can be mapped to the learning of an actual undirected model, which is a notoriously hard problem. Our network also marginalizes nuisance variables as required. We show that our approach generalizes to unseen probabilistic queries on also unseen test data, providing fast and flexible inference. Experiments show that this approach outperforms or matches PCD and AdVIL on 9 benchmark datasets. Learning the parameters of an undirected probabilistic graphical model (PGM) with hidden variables using maximum likelihood (ML) is a notably difficult problem (; ;). When all variables are observed, the range of applicable techniques is broadened (; ; ;), but the problem remains intractable in general. When hidden variables are present, the intractability is twofold: (a) integrating out the hidden variables (also a challenge in directed models) and (b) computing the partition function. The second problem is generally deemed to be harder . After learning, the probabilistic queries are in most cases not tractable either, so one has to resort to approximations such as belief propagation or variational inference. These approximations operate in the same way regardless of whether the model is directed, and do not need to compute the partition function. In general, ML learning is harder than inference both in directed and undirected models, but even more so in the latter case. Approximate inference via belief propagation (BP) or variational inference (VI) can be cast as an optimization problem. As such, it rarely has a closed-form solution and is instead solved iteratively, which is computationally intensive. To address this problem, one can use amortized inference. A prime example of this are variational autoencoders : a learned function (typically a neural network) is combined with the reparameterization trick (; Titsias and Lázaro-) to compute the posterior over the hidden variables given the visible ones. Although a variational autoencoder (VAE) performs inference much faster than actual VI optimization, this is not without limitations: they are specialized to answer a single predefined query. In contrast, BP and VI answer arbitrary queries, albeit usually need more computation time. The end goal of learning the parameters of a PGM is to obtain a model that can answer arbitrary probabilistic queries. A probabilistic query requests the distribution of a subset of the variables of the model given some (possibly soft) evidence about another subset of variables. This allows, for instance, to train a model on full images and then perform inpainting in a test image in an arbitrary region that was not known at training time. Since the end goal is to be able to perform arbitrary inference, in this work we suggest to learn a system that is able to answer arbitrary probabilistic queries and avoid ML learning altogether, which completely sidesteps the difficulties associated to the partition function. 
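A minimal sketch of the resulting training signal, for binary visible variables, is shown below; `qt_net` stands in for the QT-NN defined later and is assumed to take the masked evidence and the query mask and to return per-dimension probabilities. The query mask is resampled for every example, so the loss is stochastic over both samples and queries.

```python
import torch
import torch.nn.functional as F

def query_training_step(qt_net, v, p_output=0.5):
    """v: [batch, num_visible] binary sample from the training data.
    q: random query mask, 1 = input (evidence kept), 0 = output (to be predicted).
    Only the output dimensions contribute to the cross-entropy loss."""
    q = (torch.rand_like(v) > p_output).float()      # resampled at every step
    v_hat = qt_net(v * q, q)                          # network sees only the input dims
    per_dim_ce = F.binary_cross_entropy(v_hat, v, reduction="none")
    out_mask = 1.0 - q
    loss = (per_dim_ce * out_mask).sum() / out_mask.sum().clamp(min=1.0)
    return loss
```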
This puts directed and undirected models on equal footing in terms of usability. To this end, we first unroll inference (we will use BP, but other options are possible) over iterations into a neural network (NN) that outputs the of an arbitrary query, and then we train said NN to increase its prediction accuracy. At training time we randomize the queries, looking for a consistent parameterization of the NN that generalizes to new queries. The hope for existence of such a parameterization comes from BP actually working for arbitrary queries in a graphical model with a single parameterization. We call this approach query training (QT). The starting point is an unnormalized PGM parameterized by θ. Its probability density can be expressed as p(x; θ) = p(v, h; θ) ∝ exp(φ(v, h; θ)), where v are the visible variables available in our data and h are the hidden variables. A query is a binary vector q of the same dimension as v that partitions the visible variables in two subsets: One for which (soft) evidence is available (inputs) and another whose conditional probability we want to estimate (outputs). Learning undirected models via query training this is not without limitations: they are specialized to answer a single predefined query. In contrast, BP and VI answer arbitrary queries, albeit usually need more computation time. The end goal of learning the parameters of a PGM is to obtain a model that can answer arbitrary probabilistic queries. A probabilistic query requests the distribution of a subset of the variables of the model given some (possibly soft) evidence about another subset of variables. This allows, for instance, to train a model on full images and then perform inpainting in a test image in an arbitrary region that was not known at training time. Since the end goal is to be able to perform arbitrary inference, in this work we suggest to learn a system that is able to answer arbitrary probabilistic queries and avoid ML learning altogether, which completely sidesteps the di culties associated to the partition function. This puts directed and undirected models on equal footing in terms of usability. To this end, we first unroll inference (we will use BP, but other options are possible) over iterations into a neural network (NN) that outputs the of an arbitrary query, and then we train said NN to increase its prediction accuracy. At training time we randomize the queries, looking for a consistent parameterization of the NN that generalizes to new queries. The hope for existence of such a parameterization comes from BP actually working for arbitrary queries in a graphical model with a single parameterization. We call this approach query training (QT). The starting point is an unnormalized PGM parameterized by ✓. Its probability density can be expressed as p(x; ✓) = p(v, h; ✓) / exp((v, h; ✓)), where v are the visible variables available in our data and h are the hidden variables. A query is a binary vector q of the same dimension as v that partitions the visible variables in two subsets: One for which (soft) evidence is available (inputs) and another whose conditional probability we want to estimate (outputs). Figure 1: One step of query training. A random sample from the training data is split according to a random query mask in input and output dimensions. The input is processed inside the QT-NN by N identical stages, producing an estimation of the sample. The cross-entropy between the true and estimated outputs is computed. 
A random sample from the training data is split according to a random query mask in input and output dimensions. The input is processed inside the QT-NN by N identical stages, producing an estimation of the sample. The cross-entropy between the true and estimated outputs is computed. The query-trained neural network (QT-NN) follows from specifying a graphical model φ(v, h; θ), a temperature T and a number of inference timesteps N over which to run parallel BP. The general equations of the QT-NN are given next in Section 2.2, and the equations for the simple case in which the PGM is an RBM is provided in Appendix A. As depicted in Fig. 1, a QT-NN takes as input a sample v from the dataset and a query mask q. The query q blocks the network from accessing the "output" variables, and instead only offers access to the "input" variables. Which variables are inputs and which ones are outputs is precisely the information that q contains. Then the QT-NN produces as output an estimationv of the whole input sample. Obviously, we only care about how well the network estimates the variables that it did not see at the input. So we measure how well v matches the correct v in terms of cross-entropy (CE), but only for the variables that q regards as "output". Taking expectation wrt v and q, we get the loss function that we use to train the QT-NN We minimize this loss wrt θ, T via stochastic gradient descent, sampling from the training data and some query distribution. The number of QT-NN layers N is fixed a priori. One can think of the QT-NN as a more flexible version of the encoder in a VAE: instead of hardcoding inference for a single query (normally, hidden variables given visible variables), the QT-NN also takes as input a mask q specifying which variables are observed, and provides inference for unobserved ones. Note that h is never observed. For a given set of graphical model parameters θ and temperature T we can write a feedforward function that approximately resolves arbitrary inference queries by unrolling the parallel BP equations for N iterations. First, we combine the available evidence v and the query q into a set of unary factors. Unary factors specify a probability density function over a variable. Therefore, for each dimension inside v that q labels as "input", we provide a (Dirac or Kronecker) delta centered at the value of that dimension. For the "output" dimensions and hidden variables h we set the unary factor to an uninformative, uniform density. Finally, soft evidence, if present, can be incorporated through the appropriate density function. The of this process is a unary vector of factors u that contains an informative density exclusively about the inputs and whose dimensionality is the sum of the dimensionalities of v and h. Each dimension of u will be a real number for binary variables, and a full distribution in the general case. Once v and the query q are encoded in u, we can write down the equations of parallel BP over iterations as an NN with N layers, i.e., the QT-NN. To simplify notation, let us consider a factor graph that contains only pairwise factors. Then the probabilistic predictions of the QT-NN and the messages from each layer to the next can be written as: Here m (n) collects all the messages 1 that exit layer n − 1 and enter layer n. Messages have direction, so m (n) ij is different from m (n) ji. Observe how the input term u is re-fed at every layer. The output of the network is a beliefv i for each variable i, which is obtained by a softmax in the last layer. 
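The display equations themselves are not reproduced here. As an illustrative stand-in, the sketch below encodes the evidence and query into unary log-odds u (large ± values play the role of the deltas for observed binary variables, zeros the role of the uniform factors) and unrolls a fixed number of update steps with shared parameters and the unary term re-fed at every layer. The updates shown are simplified mean-field-style iterations for an RBM-like model, not the exact BP messages of Appendix A, and the temperature enters only as a logit scale.

```python
import torch

def encode_unary(v, q, num_hidden, big=20.0):
    """Unary log-odds: +/-big for observed ('input') visible variables,
    0 (i.e., uniform) for 'output' visibles and for all hidden variables."""
    u_v = q * (2.0 * v - 1.0) * big
    u_h = torch.zeros(v.shape[0], num_hidden, device=v.device)
    return u_v, u_h

def unrolled_inference(u_v, u_h, W, b_v, b_h, n_layers=10, T=1.0):
    """Simplified unrolled inference with parameters shared across all layers.
    W: [num_visible, num_hidden] pairwise parameters; b_v, b_h: unary parameters.
    The evidence term u is re-injected at every layer, as in the QT-NN."""
    v_hat = torch.sigmoid(u_v + b_v)
    h_hat = torch.sigmoid(u_h + b_h)
    for _ in range(n_layers):
        h_hat = torch.sigmoid((u_h + b_h + v_hat @ W) / T)
        v_hat = torch.sigmoid((u_v + b_v + h_hat @ W.t()) / T)
    return v_hat, h_hat
```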
All these equations follow simply from unrolling BP over iterations, with its messages encoded in log-space. The portion of the parameters θ relevant to the factor between variables i and j is represented by θ ij = θ ji, and the portion that only affects variable i is contained in θ i. Observe that all layers share the same parameters. The functions f θ ij (·) are directly derived from φ(x; θ) using the BP equations, and therefore inherit its parameters. Finally, parameter T is the "temperature" of the message passing, and can be set to T = 1 to retrieve the standard sum-product belief propagation or to 0 to recover max-product belief revision. Values in-between interpolate between sum-product and max-product and increase the flexibility of the NN. See Appendix A for the precise equations obtained when the PGM is an RBM. If the distribution over queries only contains queries with a single variable assigned as output (and the rest as input), and there are no hidden variables, the above cost function reduces to pseudo-likelihood training . Query training is superior to pseudo-likelihood (PL) in two ways: Firstly, it provides an explicit mechanism for handling hidden variables, and secondly and more importantly, it preserves learning in the face of high correlations in the input data, which in catastrophic failure when using PL. If two variables a and b are highly correlated, PL will fail to learn the weaker correlation between a and z, since b will always be available during training to predict a, rendering any correlation with z useless at training time. If at test time we want to predict a from z because b is not available, the prediction will fail. In contrast, query training removes multiple variables from the input, driving the model to better leverage all available sources of information. Early works in learning undirected PGMs relied on contrastive energies . More recent approaches are NVIL and AdVIL , with the latter being regarded as superior. We will use an RBM in our experiments and compare QT with PCD which is very competitive in this setting . We also show for AdVIL, although it is not necessarily expected to be superior to PCD for this model. We use exactly the same datasets and preprocessing used in the AdVIL paper, with the same RBM sizes, check for further details. The random queries are generated by assigning each variable to input or output with 0.5 chance. We report the normalized cross-entropy (NCE), which is the aggregated cross-entropy over the test data, divided by the cross-entropy of a uniform model under the same query (i.e., values below 1.0 mean a better-than-trivial model). 1. For a fully connected graph, the number of messages is quadratic in the number of variables, showing the advantage of a sparse connectivity pattern, which can be encoded in the PGM choice. Computing the NCE for QT is as simple as running the trained QT-NN. PCD and AdVIL, however, cannot solve arbitrary inference queries directly and one has to resort to slow Gibbs sampling in the learned model. Alternatively, one can turn the RBM weights learned by this methods into a QT-NN with T = 1 (essentially, running BP for a fixed number of iterations). We also provide those as PCD-BP and AdVIL-BP. For PCD we train for 1000 epochs and cross-validate the learning parameter. For AdVIL we use the code provided by the authors. For QT we unfold BP in N = 10 layers and use ADAM to learn the weights. The validation set is used to choose the learning rate and for early stopping. 
We use minibatches of size 500. The T parameter is learned during training. The are shown in Table 3. QT-NN produces significantly better for most datasets (marked in boldface), showing that it has learned to generalize to new probabilistic queries on unseen data. Query training is a general approach to learn to infer when the inference target is unknown at training time. It offers the following advantages: 1) no need to estimate the partition function or its gradient (the "sleep" phase of other common algorithms); 2) produces an inference network, which can be faster and more accurate than iterative VI or BP because its weights are trained to compensate for the imperfections of approximate inference run for a small number of iterations; 3) arbitrary queries can be solved. In contrast, a VAE is only trained to infer the posterior over the hidden variables, or some other constant query. Why would QT-NNs generalize to new queries or scale well? The worry is that only a small fraction of the exponential number of potential queries is seen during training. The existence of a single inference network that works reasonably well for many different queries follows from the existence of a single PGM in which BP can approximate inference. The discoverability of such a network from limited training data is not guaranteed. However, there is hope for it, since the amount of training data required to adjust the model parameters should scale with the number of these, and not with the number of potential queries. Just like training data should come from the same distribution as test data, the training queries must come from the same distribution the test queries to avoid "query overfitting". In future work we will show how QT can be used in more complex undirected models, such as grid MRFs. Other interesting research avenues are modifications to allow sample generation and unroll other inference mechanisms, such as VI. • We use 0 HV to represent a matrix of zeros of size H × V. • Similarly 1 V represents a matrix of ones of size V × 1. • When any of the above defined scalar functions is used with matrix arguments, the function is applied elementwise. Some observations: • The Hadamard product with q effectively removes the information from the elements of v not present in the query mask, replacing them with 0, which corresponds to a uniform binary distribution in logit space. • The output of the network isv andĥ, the inferred probability of 1 for both the visible and hidden units. The outputĥ is inferred but actually not used during training. • The computation of f w (x) as specified above is designed to be numerically robust. It starts by computing f MP w (x), which would be the value of f w (x) for a temperature T = 0, i.e., max-product message passing, and then performs a correction on top for positive temperatures. | Instead of learning the parameters of a graphical model from data, learn an inference network that can answer the same probabilistic queries. | 1,035 | scitldr |
A plethora of computer vision tasks, such as optical flow and image alignment, can be formulated as non-linear optimization problems. Before the resurgence of deep learning, the dominant family for solving such optimization problems was numerical optimization, e.g, Gauss-Newton (GN). More recently, several attempts were made to formulate learnable GN steps as cascade regression architectures. In this paper, we investigate recent machine learning architectures, such as deep neural networks with residual connections, under the above perspective. To this end, we first demonstrate how residual blocks (when considered as discretization of ODEs) can be viewed as GN steps. Then, we go a step further and propose a new residual block, that is reminiscent of Newton's method in numerical optimization and exhibits faster convergence. We thoroughly evaluate the proposed Newton-ResNet by conducting experiments on image and speech classification and image generation, using 4 datasets. All the experiments demonstrate that Newton-ResNet requires less parameters to achieve the same performance with the original ResNet. A wealth of computer vision problems (e.g., structure from motion , stereo , image alignment , optical flow (; ;) ) are posed as nonlinear optimization problems. Before the resurgence of the machine learning era, the dominant family for solving such optimization problems 1 was numerical optimization, e.g, Gauss-Newton (GN). Recently, it was proposed that the GN steps, called descent directions, can be learned and represented as a cascade regression to solve non-linear least square problems (Xiong & De la). With the advent of deep learning, the aforementioned ideas were combined with learnable feature representations using deep convolutional neural networks for solving problems such as alignment and stereo . In this paper, we first try to draw similarities between learning descent directions and the structure of the popular residual networks. Motivated by that, we further extend residual learning by adopting ideas from Newton's numerical optimization method, which exhibits faster convergence rate than Gauss-Newton based methods (both theoretically and empirically as we show in our experiments). ResNet is among the most popular architectures for approximating non-linear functions through learning. The core component of ResNet is the residual block which can be seen as a linear difference equation. That is, the t th residual block is expressed as x t`1 " x t`C x t for input x t. By considering the residual block as a discretization of Euler ODEs , each residual block expresses a learnable, first order descent direction. We propose to accelerate the convergence (i.e., employ fewer residual blocks) in approximation of non-linear functions by introducing a novel residual block that exploits second-order information in analogy to Newton's method in non-linear optimization. Since the (second order) derivative is not analytically accessible, we rely on the idea of Xiong & De la to learn the descent directions by exploiting second order information of the input. We build a deep model, called Newton-ResNet, that involves the proposed residual block. Newton-ResNet requires less residual blocks to achieve the same accuracy compared to original ResNet. This is depicted in Fig. 1; the contour 2 shows the loss landscape near the minimum of each method and indeed the proposed method requires fewer steps. 
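As a concrete illustration of the block this paper proposes (defined precisely in Section 3), below is a minimal PyTorch-style sketch of a residual block augmented with a learnable second-order term. The particular choices made here — 1×1 convolutions as the normalization of the quadratic term, a conv-BN-ReLU-conv transformation path, and zero initialization of α — are assumptions for illustration, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class SecondOrderResidualBlock(nn.Module):
    """x_{t+1} = phi( x_t + f(x_t) + alpha * N1(f(x_t)) * N2(f(x_t)) ),
    where f is the usual transformation path and * is an element-wise
    (Hadamard) product scaled by a learnable scalar alpha."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # 1x1 convolutions acting as the normalizations of the quadratic term.
        self.n1 = nn.Conv2d(channels, channels, 1, bias=False)
        self.n2 = nn.Conv2d(channels, channels, 1, bias=False)
        self.alpha = nn.Parameter(torch.zeros(1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        o = self.f(x)
        second_order = self.alpha * self.n1(o) * self.n2(o)
        return self.act(x + o + second_order)
```

With α initialized to zero the block starts out as a standard residual block, so the second-order interactions can be introduced gradually during training.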
Our contributions are as follows: • We first establish a conceptual link between residual blocks in deep networks and standard optimization techniques, such as Gauss-Newton. This motivates us to design a novel residual block that learns the descent directions with second order information (akin to Newton steps in nonlinear optimization). A deep network composed of the proposed residual blocks is coined as Newton-ResNet. • We show that Newton-ResNet can effectively approximate non-linear functions, by demonstrating that it requires less residual blocks and hence significantly less parameters to achieve the performance of the original ResNet. We experimentally verify our claim on four different datasets of images and speech in classification tasks. Additionally, we conduct experiments on image generation with Newton-ResNet-based GAN . • We empirically demonstrate that Newton-ResNet is a good function approximator even in the absence of activation functions, where the corresponding ResNet performs poorly. The literature on resnets is vast; we focus below on the perspectives of a) theoretical understanding, b) alternative architectures and c) modifications of the transformation path. A significant line of research is the theoretical understanding behind the performance of residual connections. The work of proves that arbitrarily deep linear residual networks have no spurious local optima; all critical points correspond to a global minimum. attribute the success of resnet to the norm presentation. More recently, proves that a network with residual connections is provably better than the corresponding network without the residuals. focus on the gradients in residual connections; they study the correlations during initialization and introduce an appropriate initialization. These works are orthogonal to ours; they methodologically study the theoretical properties of deep learning, while we focus on reducing the number of residual blocks required. same topology with different feature fusion method (addition and concatenation respectively). They propose a framework that generalizes both residual and dense connections. The work that is most closely related to ours is that of; they define a topology that includes residual connections and higher order correlations. However, we offer a new perspective on the higher order correlations. In addition, we experiment with a) large scale problems, b) with linear blocks that highway networks have not used. A popular line of research modifies the transformation path of each residual block. In ResNeXt and Inception the authors add group convolutions in the transformation path. the transformation path is a (set of) residual blocks itself, i.e. they obfuscate one residual block inside another. In wide residual networks they advocate for increased width of each block. All related works are complementary to ours, since we do not modify the transformation path modules. The applications of residual networks are diverse and often with impressive . Such applications include object detection/recognition , face recognition , generative models . However, these networks have tens or hundreds of residual blocks. A line of research that reduces the number of parameters is that of pruning the network . propose to prune the weights with small magnitude, while propose a meta-learning technique to improve heuristic pruning techniques. 
However, pruning does not reduce the training resources (it even increases the time because, for a single model, we train the network at least twice), and it is largely based on hand-engineered heuristics. That is, there is no solid understanding of the theoretical properties of pruning methods. To develop our intuition, we explore the linear residual block in sec. 3.1, i.e. the residual block without any activation functions. In sec. 3.2, we extend the proposed formulation in the presence of activation functions as typically used in ResNet. Figure 2: Schematic of the (a) original, (b) our residual block for the t-th layer. The path that includes C is referred to as the transformation path, while the other (with the identity transformation) is referred to as the shortcut path. The symbol C denotes the operations in the transformation path, e.g. convolutions. The symbols N_1, N_2 are normalization layers, e.g. batch normalization or 1×1 convolutions. The symbol * denotes an element-wise product. Before the introduction of the residual block, all neural networks were a composition of linear layers, e.g. fully-connected or convolutional layers, and activation functions. The (input) representation was transformed in each layer through a linear operation if we ignore the activation functions. The residual block of ResNet enables the input representation to pass through unchanged by introducing a two-pathway block consisting of a shortcut path and a transformation path. The t-th residual block (in a network) is x_{t+1} = x_t + C x_t for input x_t. That is, the residual block expresses a linear difference equation. We propose instead a new residual block that captures second-order information. The new residual block is x_{t+1} = x_t + C x_t + α (S x_t) * (C x_t) for input x_t, with S of the same dimensions as C and * an element-wise product. The scalar parameter α ∈ R is learnable and plays the role of scaling the significance of the second-order interactions. To reduce the number of parameters, we can share the parameters of S and C; we introduce some normalization in the quadratic term. The proposed residual block is then expressed as x_{t+1} = x_t + C x_t + α (N_1(C x_t)) * (N_2(C x_t)), with N_1, N_2 two normalization operators. The proposed residual block is depicted in Fig. 2. Frequently, activation functions are used in conjunction with the residual block. We consider a residual block with activation functions and two convolutions in the transformation path. If we define the function f_t(x) = C_2(φ(C_1 x)), the residual block is x_{t+1} = φ(x_t + f_t(x_t)), with φ denoting an activation function such as ReLU. To avoid cluttering the notation, batch normalization is ignored in the last equation. The proposed residual block in the presence of activation functions becomes x_{t+1} = φ(x_t + f_t(x_t) + α (N_1(f_t(x_t))) * (N_2(f_t(x_t)))). The proposed residual block can be used with different architectures, e.g. three convolutions or group convolutions. Implementation details: All the optimization-related hyper-parameters, e.g. the optimizer, the learning rate, the initializations, the number of epochs etc., remain the same as in the original ResNet. Further improvement can be obtained by tuning those values for our residual block, but this is out of our scope. Unless mentioned otherwise, each experiment is conducted 5 times and the average and the standard deviation are reported. The following four datasets are used in this work: 1. CIFAR10: This is a widely used dataset that contains 60,000 images of natural scenes. Each image is of resolution 32×32×3 and is classified in one of the 10 classes. 2.
CIFAR100 (Krizhevsky et al.): This is an extension of CIFAR10; it includes the same number of images but there are 100 classes. 3. ImageNet: The ImageNet 2012 dataset comprises 1.28 million training images and 50K validation images from 1000 different classes. We train networks on the training set and report the top-1 and top-5 error on the validation set. 4. Speech Commands: This newly released dataset contains 60,000 audio files; each audio file contains a single word of a duration of one second. There are 35 different words (classes), with each word having 1,500–4,100 recordings. Every audio file is converted into a mel-spectrogram of resolution 32×32. Below, we conduct an experiment on image classification on CIFAR10 in sec. 4.1; we modify the experiment by removing the activation functions, i.e. have networks linear with respect to the weights, in sec. 4.2. Subsequently, image classification experiments on CIFAR100 and ImageNet are conducted in sec. 4.3 and sec. 4.4 respectively. In addition to the image classification experiments, we show how the proposed residual block can be used for image generation in sec. 4.5. Furthermore, an experiment on audio classification is conducted in sec. 4.6. We utilize CIFAR10 as a popular dataset for classification. We train each method for 120 epochs with batch size 128. The SGD optimizer is used with an initial learning rate of 0.1. The learning rate is multiplied with a factor of 0.1 in epochs 40, 60, 80, 100. We use two ResNet architectures, i.e. ResNet18 and ResNet34, as baselines. Our model, called Newton-ResNet, is built with the proposed residual blocks; we add enough blocks to match the performance of the respective baseline. In Table 1 the two different ResNet baselines are compared against Newton-ResNet; the respective Newton-ResNet models have the same accuracy. However, each Newton-ResNet has ~40% fewer parameters than the respective baseline. In addition, we visualize the test accuracy for ResNet18 and the respective Newton-ResNet in Fig. 3. The test error of the two models is similar throughout the training; a similar phenomenon is observed for ResNet34 in Fig. 6. We remove all the activation functions, both from the transformation path and the output activation functions. The rest of the settings remain the same as in sec. 4.1. As can be noticed in Table 2, Newton-ResNet outperforms ResNet18 by a significant margin when removing the activation functions. It is worth noting that the performance of Newton-ResNet with/without activation functions differs by 7%, i.e. decent performance can be obtained without any activation functions. (Pseudocode fragment of the proposed block from Table 3: o ← φ(BN(conv(o))) on the transformation path; s ← φ2(norm(conv1×1(s))) for each of the lin_proj projections; x_{t+1} ← x_t + o + s.) Table 3: The differences of the proposed method from the original residual block are highlighted in blue. The x_proj, lin_proj are 1×1 convolutions added for normalization purposes in the proposed residual block. We verify the aforementioned classification results on CIFAR100. The settings remain the same as in sec. 4.1. As can be noticed in Table 4, the test accuracy of ResNet34 and Newton-ResNet is similar; however, Newton-ResNet has ~44% fewer parameters. The experiment of sec. 4.2 with the linear blocks is repeated on CIFAR100. That is, we remove all the activation functions and train the networks. The accuracy of each method is reported in Table 5. The difference observed in sec. 4.1 becomes even more pronounced.
That is, ResNet performs poorly in this case and is substantially outperformed by Newton-ResNet. We perform a large-scale classification experiment on ImageNet; due to the computational resources required, this experiment is conducted only once. Following standard practices, we utilize the following data augmentation techniques: normalization through mean RGB-channel subtraction, random resized crop to 224ˆ224, scale from 5% to 100%, aspect ratio from random horizontal flip. During inference, we perform the following augmentation techniques: normalization through mean RGB-channel subtraction, scale to 256ˆ256, and single center crop to 224ˆ224. All models are trained on a DGX station with four Tesla V100 (32GB) GPUs. We use Mxnet 4 and choose float16 instead of float32 to achieve 3.5ˆacceleration and half the GPU memory consumption. In our preliminary experiments, we noticed that the second order might cause numeric overflow in float16; this was not observed in the rest of the experiments that use float32. Hence, we use a tanh as a normalization for the second order term, i.e. the last term of. Optimization is performed using SGD with momentum 0.9, weight decay 1e´4 and a mini-batch size of 1024. The initial learning rate is set to 0.4 and decreased by a factor of 10 at 30, 60, and 80 epochs. Models are trained for 90 epochs from scratch, using linear warm-up of the learning rate during first five epochs according to. For other batch sizes due to the limitation of GPU memory, we linearly scale the learning rate (e.g. learning rate 0.1 for batch size 256). We report both the Top-1 and Top-5 single-crop validation error in Table 6. For a fair comparison, we report the from our training in both the original ResNet and Newton-ResNet 5. Newton-ResNet consistently improves the performance with an extremely small increase in computational complexity and model size. Remarkably, Newton-ResNet50 achieves a single-crop Top-5 validation error of 6.358%, exceeding ResNet50 (6.838%) by 0.48% and approaching the performance achieved by the much deeper ResNet101 network (6.068% Top-5 error). The loss and Top-1 error throughout the training are visualized in Fig. 4, which demonstrates that the proposed method performs favourably to the baseline ResNetwhen the same amount of residual blocks are used. Table 7, we remove all the "relu" activation functions for both baseline and the proposed method. Without "relu", Newton-ResNet50 achieves a single-crop Top-5 validation error of 9.114%, significantly exceeding ResNet50 (71.562%) and approaching the performance achieved by the "relu" version (6.068% Top-5 error). Deep discriminative networks include hundreds of layers, while their generative counterparts' depth is critical and restricted (mainly because they are hard to train and fit in existing hardware). We explore whether we can reduce the number of residual blocks in generative models. Generative Adversarial Networks (GAN) of have dominated the related literature due to their impressive visual . GANs include two modules, a generator and a discriminator, which are both implemented with resnet-based neural networks. The generator samples z from a prior distribution, e.g. uniform, and tries to model the target distribution; the discriminator tries to distinguish between the samples synthesized from the generator and the target distribution. GAN is typically optimized with an alternating gradient descent method. We select the architecture of (SNGAN) as a strong baseline on CIFAR10. 
The baseline includes 3 resnet blocks in the generator and 3 in the discriminator. We replace the original residual blocks with the proposed residual blocks; two such blocks in each module suffice to achieve the same performance. That is, we reduce by 1 the blocks in both the generator and the discriminator. In Table 8 the experimental is summarized. Note that the experiment is conducted 10 times and the mean and variance are reported. In Fig. 5 some random samples synthesized by the two methods are depicted; visually the generated samples are similar. The quantitative are added in Table 9. The two models share the same accuracy, however Newton-ResNet includes 38% less parameters. This is consistent with the experiments on classical image datasets, i.e. sec. 4.1. In this work, we establish a link between the residual blocks of ResNet architectures and learning decent directions in solving non-linear least squares (e.g., each block can be considered as a decent direction). We exploit this link and we propose a novel residual block that uses second order interactions as reminiscent of Newton's numerical optimization method (i.e., learning Newton-like descent directions). Newton-type methods are likely to converge faster than first order methods (e.g., Gauss-Newton). We demonstrate that in the proposed architecture this translates to less residual blocks (i.e., less decent directions) in the network for achieving the same performance. We conduct validation experiments on image and audio classification with residual networks and verify our intuition. Furthermore, we illustrate that with our block it is possible to remove the non-linear activation functions and still achieve competitive performance. | We demonstrate how residual blocks can be viewed as Gauss-Newton steps; we propose a new residual block that exploits second order information. | 1,036 | scitldr |
In competitive situations, agents may take actions to achieve their goals that unwittingly facilitate an opponent’s goals. We consider a domain where three agents operate: a user (human), an attacker (human or a software) agent and an observer (a software) agent. The user and the attacker compete to achieve different goals. When there is a disparity in the domain knowledge the user and the attacker possess, the attacker may use the user’s unfamiliarity with the domain to its advantage and further its own goal. In this situation, the observer, whose goal is to support the user may need to intervene, and this intervention needs to occur online, on-time and be accurate. We formalize the online plan intervention problem and propose a solution that uses a decision tree classifier to identify intervention points in situations where agents unwittingly facilitate an opponent’s goal. We trained a classifier using domain-independent features extracted from the observer’s decision space to evaluate the “criticality” of the current state. The trained model is then used in an online setting on IPC benchmarks to identify observations that warrant intervention. Our contributions lay a foundation for further work in the area of deciding when to intervene. When an agent is executing a plan to achieve some goal, it's progress may be challenged by unforeseen changes such as an unexpected modification to the environment or an adversary subverting the agent's goal. In these situations, a passive observer intervening to help the agent reach it's intended goal will be beneficial. Intervention is different from the typical plan recognition problem because we assume the observed agent pursues desirable goals while avoiding undesirable states. Therefore, the observer must monitor actions/state unobtrusively to predict trajectories of the observed agent (keyhole recognition) and assist the observed agent to safely complete the intended task or block the current step if unsafe. Consider a user checking email on a computer. An attacker who wants to steal the user's password makes several approaches: sending an email with a link to a phishing website and sending a PDF file attachment embedded with a keylogger. The user, despite being unaware of the attacker's plan, would like to complete the task of checking email safely and avoid the attacker's goal. Through learning, our observer can recognize risky actions the user may execute in the environment and ensure safety. The decision of when to intervene must be made judicially. Intervening too early may lead to wasted effort chasing down false positives, helpful warnings being ignored as a nuisances, or leaking information for the next attack. Intervening too late may in the undesirable state. Further, we are interested in assisting a human user with different skill levels, who would benefit more from customized intervention. To this end, we need to identify actions that warrant intervention over three different time horizons: critical action, which if unchecked will definitely trigger the undesirable state, mitigating action, which gives the user some time to react because the threat is not imminent and preventing actions, which allows for long term planning to help the user avoid threats. Based on the time horizon we are current in, we can then plan to correct course accordingly. In this work we focus on identifying the first horizon. 
Intervention is useful in both online settings, where undesirable states may arrive incrementally and in offline settings where observations are available prior to intervention. In this paper, we model online intervention in a competitive environment where three agents operate: a user (human), an attacker (human or a software) agent and an observer (a software) agent who will intervene the user. The observer passively monitors the user and the attacker competing to achieve different goals. The attacker attempts (both actively and passively) to leverage the progress made by a user to achieve its own goal. The attacker may mask domain knowledge available to the user to expand the attack vector and increase the likelihood of a successfull attack. The user is pursuing a desirable goal while avoiding undesirable states. Using domain-independant features, we train a decision tree classifier to help the observer decide whether to intervene. A variation of the relaxed plan graph BID0 ) models the desirable, undesirable and neutral states that are reachable at different depths. From the graph, we extract several domain independent features: risk, desirability, distances remaining to desirable goal and undesirable states and active landmarks percentage. We train a classifier to recognize an observation as a intervention point and evaluate the learned model on previously unseen observation traces to assess the accuracy. Furthermore, the domain independent features used in the classifier offer a mechanism to explain why the intervention occurred. In real-time, making the decision to intervene for each observation may be costly. We examine how the observer can establish a wait time without compromising accuracy. The contributions of this paper include: formalizing the online intervention problem as an intervention graph that extends the planning graph, introducing domainindependent features that estimate the criticality of the current state to cause a known undesirable state, presenting an approach that learns to classify an observation as intervention or not, incorporating salient features that are better predictors of intervention to generate explanations, and showing this approach works well with benchmarks. Before we formalize the problem, we present examples for two cases of the online intervention: the attacker is actively trying to make the user reach the undesirable state by leveraging the user's progress and the passive attacker introduces an undesirable state to the environment without the user's knowledge (i.e., a trap), where attacker masks the location of the trap and exploits the user's unfamiliarity with the domain to make the user reach the undesirable state. In both cases, the observer monitors the attacker and the users' actions. The user plans for a desirable goal state, G d. Given the unexpected modification to the domain model, executing this plan may likely cause the user to reach the undesirable state (G u). The observer is assumed to be familiar with the domain (regardless of attacker's attempts to mask information to the user) and has knowledge about commonly occurring goals such as G d and G u. The user would like to be interrupted if some action will trigger G u.Active Attacker: We use the IPC block-words domain BID5 to illustrate the active attacker's case. The observer is watching the user stacking blocks to spell a word. The domain contains 4 blocks: T, B, A, D. Figure 1 shows the undesirable state developing from initial state I. 
G d equals the word TAD, while G u equals the word BAD. The user can not recognize block B (indicated by dotted lines), which prevents the user from identifying states ing from performing operations on B such as stack and pick up, and therefore fail to circumvent G u on his own. The attacker will use block B to defeat the user and achieve G u.In the initial state (I), all blocks are on the table. The user's arm (solid line) and the attacker's arm (dotted line) are empty. In the next sequence of events, the observer sees that the user has picked up block A (S 1) and stacked A on D (S 2). Consider two alternative timelines T 1 and T 2 stemming from S 2. In T 1, the observer sees that the user has picked up T and the attacker has also picked up B. The next state shows that the user has stacked T on A to spell the word TAD and reached G d successfully. In timeline T 2, the attacker has succeeded in reaching G u by stacking B on A before the user stacked T on A, leveraging the user's progress. Passive Attacker: This case considers the 3x3 grid world domain BID12 shown in FIG1. The observer watches the user (white circle) navigating from a start point on the grid to reach G d point in 1-step actions. When executing a plan to reach G d, the user would like to avoid the trap at point X, G u but will not be able to do so unless the observer interrupted. Let us assume the observer sees the user's action ing in state S 1. Although the move indicates that the user is moving toward G u and G d, interruption is too early. In two alternative timelines T 1 (top right) and T 2 (bottom), the observer sees different moves. In T 1 the user has reached G d while avoiding G u, in which case the observer need not interrupt. However, in T 2 the user has reached G u, in which case it would have been helpful if the user was blocked before moving to. Our formulation of the intervention problem makes several assumptions about the three actors. Observer: intervention decisions are made in an online setting for each observation that appears incrementally and include actions executed by the attacker or the user. The goals G d or G u are known but the plans to reach G d or G u are hidden. The domains for which plan intervention problem is defined are discrete and all actions are assumed to be of unit cost. The observer has full observability in the domain and the environment is deterministic. Therefore, it can determine the actions that are immediately applicable in the current state. User: Follows a plan to reach G d, but may reach G u unwittingly. G u is hidden, but would like the observer's help to avoid G u. The user does not have full observability of the domain or the attacker's actions. Attacker: Follows a plan to reach G u. The attacker has full observability of the domain and the user's actions. Given these assumptions, the observer assesses the state after each observation. This requires the observer to hypothesize about possible interesting trajectories from current state and evaluate each trajectory in terms of their likelihood to cause G u. Following STRIPS BID4, we define a planning problem as a tuple P = F, A, I, G where F is the set of fluents, I ⊆ F is the initial state, G ⊆ F represents the set of goal states and A is the set of actions. Each action a ∈ A is a triple a = P re(a), Add(a), Del(a) that consists of preconditions, add and delete effects respectively, where P re(a), Add(a), Del(a) are all subsets of F. An action a is applicable in a state s if preconditions of a are true in s; pre(a) ∈ s. 
If an action a is executed in state s, it in a new state s = (s \ del(a) ∪ add(a)). The solution to P is a plan π = {a 1, . . ., a k} of length k that modifies I into G by execution of actions a 1,..., a k. The plan recognition problem defined by is a triple T = D, G, O where D = F, A, I is a planning domain, G is the set of goals, and m]. A solution to the plan recognition problem is a subset of goals G ∈ G for which an optimal plan P [G] satisfying O is produced. DISPLAYFORM0 Similarly, the plan intervention problem (I) also uses observations of actions. However, instead of using information gleaned from the observation trace to find the most likely plans (and goals), the intervention problem aims to assess the current state for it's ability to cause G u and identify whether or not the user needs to be blocked from making further progress. Unlike Ramirez and Geffeners' approach, the observations used in our solution are not noisy nor do they contain missing actions. This will be addressed in future work. Plan DISPLAYFORM1, and a decision tree classifier model M that combines a vector of domain-independant features to classify an obervation as requiring intervention or not. The extension to typical plan/goal recognition comes from the domain-independent feature vector, which will be discussed in section 3.3. A solution to I is a vector of decision points corresponding to actions in O indicating whether each action was identified as requiring intervention. To assess the criticality of the current state to cause G u, the observer enumerates action sequences that will transform the current state to G d. These action sequences and intermediate states make up the observer's decision space, which is a single-root directed acyclic connected graph S = V, E, where V is the set of vertices denoting possible states the user could be in until G d is reached, and E is the set of edges representing actions from A. We refer to this graph as the intervention graph. The root of the intervention graph indicates the current state. Leaves of the graph are goal states (i.e., G u and G d). A path from root of the tree to G u represents a candidate attack plan, while a path from root to leaf node containing G d represents a desirable plan. FIG2 illustrates the observer's decision space for unobserved actions extending from state S 1 in Figure 1. Some subtrees are hidden for simplicity. Given the initial state where all 4 blocks are on the table, the observer expects the next action to be one in the set (PICK-UP {T, D, A, B}), but B is hidden from the user. The attacker can execute any FIG3 illustrates how the user's plans to reach G d could fail in the presence of an active (left) or passive (right) attacker. In the case of an active attacker, given the assumption that the attacker does not backtrack to a previous state and only leverages progress made thus far, it can make four attempts to prevent the user from reaching G u by inserting the hidden block into the partially built stack. If the user achieves goal states 1 or 4 the user wins despite the attacker. If the observed actions indicate that the user is heading toward one of these two states, then an interrupt is unwarranted. State 3 is less ideal for the user but G u is not achieved. In state 2 the attacker has successfully reached G u. Observations leading to state 2 warrant interruption. DISPLAYFORM0 In the case of a passive attacker, the observer needs to hypothesize about likely goals of the user given the current state. 
FIG3 (right) shows three of many such plans the user may follow to reach G d. Paths 1, 2 and 3 all result in the user going past the undesirable state (marked x), and at some point in these observation sequences the user must be interrupted before G u is reached. In contrast, path 4 indicates a safe path and must not generate an interrupt. Algorithm 1 describes how the intervention graph is built. The intervention graph is similar to the relaxed planning graph (RPG), where each level consists of predicates that have been made true and actions a ∈ A whose preconditions are satisfied. Initially, before any observations have been made, the current state (i.e., root of the tree) is set to the initial state I. Next, using the domain theory D, actions a ∈ A whose preconditions are satisfied at the current state are added to the graph. Each action in level i spawns possible states for level i + 1. Calling the method recursively for each state until G d and G u are added to some subsequent level generates the observer's hypothesis space. As a new observation arrives, the root of the graph is changed to reflect the new state after the observation, and subsequent layers are modified to that effect. Similar to the RPG, we omit delete effects during construction, and construction terminates once G d is reached. The graph building algorithm does not add backtracking actions because they would create cycles. In Algorithm 1, for each action a ∈ A with Pre(a) ⊆ s i, the successor state s i+1 is generated and an edge e ← AddEdge(s, s i+1, a) is added to the graph. We extract a set of features from the intervention graph that help determine when to intervene. These features include: Risk, Desirability, Distance to G d, Distance to G u and Percentage of active undesirable landmarks in the current state. We use these features to train a decision tree. FIG4 illustrates a fragment of the intervention graph after PICK-UP A. Following the subtree extending from action STACK A D, both G u and G d can be reached. Unexpanded subtree T 1 also contains instances where the user can reach G d safely, without reaching G u. We will use FIG4 as a running example to discuss feature computation. Risk (R) quantifies how likely the effects of the current observation will lead to G u. R is also coupled with the uncertainty the observer has about the next observation. We model this uncertainty as a uniform probability distribution across the set of actions whose preconditions are satisfied in the current state. We define R as the posterior probability of reaching G u while the user is trying to achieve G d. Given the intervention graph, we extract paths from the root to any leaf containing G d, including the ones in which the user has been subverted to reach G u instead. By virtue of construction termination, G d will always be a leaf. R is computed for paths leading to state 2 in FIG3 (left), because in that state the attacker has won. In the passive attacker case, any path in the intervention graph that causes the user to reach point X before G d is reached qualifies as a candidate to compute R. Let Π candidates be the set of plans reaching G d and let |Π candidates | = n. The plan set Π u contains action sequences that reach state G u such that Π u ⊆ Π candidates, |Π u | = m and m ≤ n. We compute the posterior probability of reaching G u for a path π ∈ Π u using the chain rule of probability as P π = ∏ j=1..k P (α j | α 1, α 2, ..., α j−1), where α j ∈ A and k is the length of the path until G u is reached.
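The construction can be sketched as follows; this is a simplified stand-in for Algorithm 1 (ignoring delete effects as in the RPG, forbidding backtracking, and terminating branches once G d or G u is reached), reusing the hypothetical Action/applicable helpers from the earlier sketch.

```python
import itertools

class Node:
    _ids = itertools.count()
    def __init__(self, state):
        self.id = next(Node._ids)
        self.state = frozenset(state)
        self.edges = []                 # list of (action, child Node)

def satisfied(goal, state):
    return goal <= state

def build_intervention_graph(current, actions, g_d, g_u):
    """Expand the observer's decision space rooted at the current state."""
    root = Node(current)
    def expand(node, visited):
        # terminate a branch once a goal state is reached
        if satisfied(g_d, node.state) or satisfied(g_u, node.state):
            return
        for a in actions:
            if not applicable(a, node.state):
                continue
            succ = frozenset(node.state | a.add)   # relaxed successor: delete effects omitted
            if succ in visited:                    # no backtracking / cycles along a path
                continue
            child = Node(succ)
            node.edges.append((a, child))
            expand(child, visited | {succ})
    expand(root, {root.state})
    return root

def enumerate_paths(node, g_d, g_u, prefix=()):
    """Yield root-to-leaf action sequences ending in a goal state (G_d or G_u)."""
    if satisfied(g_d, node.state) or satisfied(g_u, node.state):
        yield prefix, node.state
        return
    for a, child in node.edges:
        yield from enumerate_paths(child, g_d, g_u, prefix + (a,))
```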
Then: R = (1/m) ∑ π∈Π u P π. There are six action sequences the observer might observe when the user is trying to achieve G d (n = 6) and only one of those six sequences will make the user reach G u (m = 1). Since we assumed full observability for the observer, the root of the tree (the current state) is assigned a probability of 1.0. Then, the actions that are immediately possible after the current state (STACK A B, STACK A D, STACK A T) are each assigned probabilities following a uniform distribution across the branching factor (0.33). For each applicable action in the current state, the resulting state gets the probability 1.0 × 0.33 = 0.33. Similarly, we apply the chain rule of probability for each following state and action level in the graph until G u first appears in the path. In this graph, G u appears two actions later and R = 0.08/1 = 0.08. Desirability (D) measures the effect of the observed action in helping the user pursue the desirable goal safely. It separates common harmless actions from avoidable ones and connects the observations to knowledge of the goals the user wants to achieve. Given Π candidates as the set of plans extracted from the intervention graph that reach G d with |Π candidates | = n, the plan set Π d contains action sequences that reach G d without reaching G u, Π d = Π candidates \ Π u. We compute the posterior probability of reaching G d without reaching G u for a path π ∈ Π d, using the chain rule of probability as P π = ∏ j=1..k P (α j | α 1, α 2, ..., α j−1), where α j ∈ A and k is the length of the path. Then: D = (1/(n − m)) ∑ π∈Π d P π. In FIG4, there are five action sequences in which the user achieves G d without reaching G u (n − m = 5). R and D are based on probabilities indicating the confidence the observer has about the next observation. We also use simple distance measures: distance to G u (δ u) and distance to G d (δ d). Both distances are measured in the number of actions required to reach a state containing G d or G u from the root of the intervention graph. Distance to G u (δ u) measures the distance to state G u from the current state in terms of the number of actions. As with the computations of R and D, given that Π candidates is the set of paths extracted from the intervention graph that reach G d with |Π candidates | = n, the path set Π u contains action sequences that reach state G u such that Π u ⊆ Π candidates, |Π u | = m and m ≤ n. We count s π, the number of edges (actions) before G u is reached for each path π ∈ Π u, and δ u is defined as the average of these distances: δ u = (1/m) ∑ π∈Π u s π if m > 0, and δ u = −1 otherwise. Here, −1 indicates that the undesirable state is not reachable from the current state. For the example problem illustrated in FIG4, δ u = 3/1 = 3. Distance to G d (δ d) measures the distance to G d from the current state. The path set Π d contains action sequences that reach G d without reaching G u, Π d = Π candidates \ Π u. We count t π, the number of edges until G d is achieved without reaching G u for each path π ∈ Π d. Then, δ d is defined as the average of these distances: δ d = (1/(n − m)) ∑ π∈Π d t π if n − m > 0, and δ d = −1 otherwise. Here, −1 indicates that G d cannot be reached safely from the current state. For the example problem illustrated in FIG4, δ d is computed analogously over the five safe paths. Percentage of active attack landmarks (L ac) captures the criticality of the current state toward contributing to G u. Landmarks BID6 are predicates (or actions) that must be true in every valid plan for a planning problem. We used the algorithm in BID6 to extract fact landmarks for the planning problem P = D, G u.
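A compact sketch of this feature computation follows, assuming paths have already been enumerated from the intervention graph (e.g. with the hypothetical enumerate_paths helper above); each path is assigned a probability by the chain rule under a uniform distribution over applicable actions, and the features are the averages defined above.

```python
def path_probability(branching_factors):
    """Chain rule with uniform action probabilities: product of 1/branching factor."""
    p = 1.0
    for b in branching_factors:
        p *= 1.0 / b
    return p

def compute_features(paths):
    """paths: list of (action_seq, branching_factors, reaches_g_u, steps_to_g_u).

    Returns Risk R, Desirability D, delta_u and delta_d as defined above."""
    n = len(paths)
    unsafe = [p for p in paths if p[2]]        # Pi_u: paths reaching G_u
    safe = [p for p in paths if not p[2]]      # Pi_d = Pi_candidates \ Pi_u
    m = len(unsafe)

    # for unsafe paths the probability is accumulated only up to the step reaching G_u
    R = sum(path_probability(p[1][:p[3]]) for p in unsafe) / m if m else 0.0
    D = sum(path_probability(p[1]) for p in safe) / (n - m) if n > m else 0.0
    delta_u = sum(p[3] for p in unsafe) / m if m else -1              # avg actions to G_u
    delta_d = sum(len(p[0]) for p in safe) / (n - m) if n > m else -1  # avg actions to G_d
    return R, D, delta_u, delta_d

# Toy check mirroring the running example: n = 6 paths, one reaches G_u after
# 3 actions with branching factors 3, 2, 2 -> R = (1/3 * 1/2 * 1/2) / 1 ~ 0.08.
```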
These landmarks are referred to as attack landmarks because they establish predicates that must be true to reach G u. The Landmark Generation Graph (LGG) BID6 for P in the active attacker case is shown in Figure 6, which contains the verified fact landmarks for P and their greedy-necessary orders; a box with multiple landmarks indicates fact landmarks that must be made true together. Predicates (ON B A) and (ON A D) correspond to G u. Predicates that are grouped must be made true together. When the observed actions activate any attack landmarks, it signals that an undesirable state is imminent. Landmarks have been successfully used in deriving heuristics in plan recognition BID18 and in generating alternative plans BID3. We compute a feature using attack landmarks: the percentage of active attack landmarks in the current state (L ac). To compute L ac for the example in FIG4, we count the number of landmark predicates that have become active (l) in the root of the intervention graph. Then L ac is given by the formula: L ac = (l / total number of attack landmarks) × 100. We train the decision tree classifier in supervised learning mode to categorize observed actions into two classes: "Y", indicating that the interruption is warranted, and "N", indicating that intervention is unwarranted. According to this policy, in the expanded sub-tree in FIG4 the path that reaches G u is labeled as follows: PICK-UP A (N), STACK A D (N), PICK-UP B (N), STACK B A (Y). The label for each action is indicated within brackets. We will make this labeled data set available for the community. Given a labeled observation set and corresponding feature vectors, we train the decision tree classifier with 10-fold cross validation. The trained model is then used to predict intervention for previously unseen intervention problems. We chose the decision tree as the classifier because the learned decision tree model had the highest accuracy in predicting intervention on new problems compared to two other classifiers: random forests BID2 and Naive Bayes. To generate training data we first created twenty planning problems for each benchmark domain. Then observation traces corresponding to each problem were generated. We enforced a limit of 100 observation traces per planning problem for the grid domains. These observation traces were provided as input to Algorithm 2, which takes a PDDL domain, a set of undesirable and desirable states and a probability distribution p as input; for each observation o ∈ O it applies action probabilities to the edges and state probabilities to the vertices of the intervention graph following p, and produces a relation V of observations and feature vectors. We train the decision tree classifier using the Weka framework (http://www.cs.waikato.ac.nz/ml/weka/). We selected J48, Weka's implementation of the C4.5 algorithm BID14, which builds a decision tree using the concept of information entropy. We chose the decision tree classifier for its ability to determine salient features for intervention, which facilitates generating explanations for the user. We focus on two questions: using domain-independent features indicative of the likelihood of reaching G u from the current state, can the intervening agent correctly interrupt to prevent the user from reaching G u? and if the user is not interrupted now, how long can the decision be delayed before G u is reached? To address the first question, we evaluated the performance of the learned model in predicting intervention on previously unseen problems.
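The sketch below illustrates the L ac feature and the classifier training step. The paper uses Weka's J48; a scikit-learn DecisionTreeClassifier with entropy splits and 10-fold cross-validation is used here only as an illustrative stand-in, and the random feature matrix stands in for the traces produced by Algorithm 2.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def active_landmark_percentage(current_state, attack_landmarks):
    """L_ac: percentage of attack landmarks already true in the current state."""
    active = sum(1 for lm in attack_landmarks if lm <= current_state)
    return 100.0 * active / len(attack_landmarks)

# X: one row per observation with features [R, D, delta_u, delta_d, L_ac];
# y: 1 if the observation warrants intervention ("Y"), 0 otherwise ("N").
# Illustrative random data stands in for the traces generated by Algorithm 2.
rng = np.random.default_rng(0)
X = rng.random((400, 5))
y = ((X[:, 0] > 0.5) & (X[:, 3] < 0.2)).astype(int)   # toy labelling rule

clf = DecisionTreeClassifier(criterion="entropy", max_depth=3)  # entropy splits, C4.5-like
scores = cross_val_score(clf, X, y, cv=10)
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
clf.fit(X, y)
```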
The experiment suite consists of the two example domains from Section 2. To these we added the Navigator and Ferry domains from the IPC benchmarks. In the Navigator domain, an agent simply moves from one point on a grid to a goal destination. In the Ferry domain, a single ferry moves cars between different locations. To simulate intervention in the active attacker case (the Block-Words domain), we chose word building problems. The words the user and the attacker want to build are different but share some common letters (e.g., TAD/BAD). The attacker is able to exploit the user's progress on stacking blocks to complete the word the attacker wants to build. In the Easy-IPC and Navigator domains, we designated certain locations on the grid as traps. The goal of the robot is to navigate to a specific point on the grid safely. In the Ferry domain a port is compromised, and a ferry carrying a car there results in an undesirable state. The ferry's objective is to transport cars to specified locations without passing a compromised port. In addition to the training data set, we also generated 3 separate instances of 20 problems each (60 in total) for the benchmark domains to produce testing data for the learned model. The three instances contained intervention problems that were different from the training instances, for example in the number of blocks in the domain (block-words), the size of the grid (navigator, easy-ipc), the accessible and inaccessible paths on the grid (navigator, easy-ipc), and the properties of artifacts in the grid (easy-ipc). For each instance we generated 10 observation traces for each planning problem (i.e., 200 observation traces per instance). We define a true-positive as the classifier correctly predicting "Y". A true-negative is an instance where the classifier correctly predicts "N". False-positives are instances where the classifier incorrectly predicts an observation as an interrupt. False-negatives are instances where the classifier incorrectly predicts an observation as not being an interrupt. When a human user receives an interruption, the user may like to know the reason. To extract salient features for intervention, we applied a correlation based feature selection technique in the data pre-processing step to identify the top four best predictors. Feature selection reduces the complexity of the model, makes the outcome of the model easier to interpret, and reduces over-fitting. The attribute selector in Weka uses Pearson's correlation to measure the predictive ability between nominal attributes and the class. Our feature vector consists of nominal attributes. Table 1 summarizes the top 4 correlated features for each domain. Risk is the best performing feature. The distance to the desirable state is the next best feature. The percentage of active attack landmarks was the weakest predictor of intervention across all benchmark domains and was removed from training. Interrupting at each observation: Assuming the decision to intervene is made for every observation, we calculated the true-positive rate (TPR = TP/(TP + FN)), false-positive rate (FPR = FP/(FP + TN)), true-negative rate (TNR = TN/(TN + FP)) and false-negative rate (FNR = FN/(FN + TP)). Table 2 summarizes the TPR, FPR, TNR and FNR for predicting intervention in unseen observation traces. The classifier works well in identifying intervention across domains. In line with our expectation, TPR and TNR are very high (> 95%) across domains and FNR and FPR are very low (< 5%). Because the accuracy remains consistent across test instances, we conclude that the model is reasonably tolerant to modifications of the domain such as grid sizes and the number of objects.
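For completeness, a small helper computing the four rates from binary predictions; the variable names are generic and not tied to the paper's evaluation code.

```python
def confusion_rates(y_true, y_pred):
    """Rates for binary interrupt prediction (1 = 'Y', interrupt warranted)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    return {
        "TPR": tp / (tp + fn) if tp + fn else 0.0,
        "FPR": fp / (fp + tn) if fp + tn else 0.0,
        "TNR": tn / (tn + fp) if tn + fp else 0.0,
        "FNR": fn / (fn + tp) if fn + tp else 0.0,
    }

print(confusion_rates([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))
```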
Delaying the interruption: In real-life, making the intervention decision for every observation may be costly. If we are intervening a human user, he may disregard frequent interruptions as noise. For this reason, we examine how to establish a wait time until intervention occurs for the first time. We used the feature (L ac) as a checkpoint for the intervening agent to wait safely without interrupting the user. We modified the observation traces to contain action sequences starting from a point where the current state contained 50% and 75% of active landmarks. For problem instances where 75% active landmark percentage was infeasible, we limited it to the maximum active landmark percentage. We used the same learned model to predict intervention for these modified traces. For each domain, row'Delayed50' in table 2 summarizes TPR, FPR, TNR, FNR for predicting interruptions for benchmark domains given that the decision is delayed until 50% <= L ac < 75%. The row'Delayed75' indicates that the decision was delayed until L ac >= 75%.Accuracy is not affected significantly by delaying the intervention from the chosen checkpoints. However, a negative effect of delaying intervention is missing true positives. We evaluated how the delay affects the percentage of true positive observations missed. TAB4 summarizes these . Intuitively, the longer the delay, a higher percentage of true positives will be missed. For the Blocks-Word domain, there is no effect between the the delay until 50% and 75%. In both cases the delaying the decision does not cause the intervening agent to miss any true positives. The most significant loss occurs in Navigator domain, where delay until 75% will cause a loss of 2%-28% while delaying until 50% is the safest choice. The Ferry domain exhibits a similar pattern where the delay until 75% landmarks become active will cause a loss of 8%-18%. We conclude that delaying interruptions can be controlled by the percentage of active landmarks in the current state and that for certain domains it is a trade off between loss of true-positives and the delay. When an observation that warrants intervention is identified intervening agent issues a warning (and an explanation) to the user. The user needs to take corrective/mitigating actions to avoid the undesirable state. The decision trees can help explain intervention. Decision trees generated for the benchmark domains are shown in FIG8. Combining the shallow trees and the definitions of the features allow us to generate a clear and succinct set of rules to explain intervention. For the Block-word domain, FIG8, the rule that explains intervention first looks at the value of Risk. If the risk is less than or equal to 0.5 then that observation does not qualify as an intervention point. By definitions, this means that from the current state there are multiple ways to reach the undesirable state, indicating the observation is a common action that can be perceived as harmless. Next, if the observation that has a risk level of grater than 0.5 (indicating there are fewer ways of reach the undesirable state and that it's imminent), next feature to look at is the distance to the undesirable state. If the distance is negative, indicating that execution of this step will trigger the undesirable state, then the observation warrants intervention. Otherwise the observation does not require intervention. 
With this decision tree, an explanation for intervention in Blocks-words domain can be developed as: The current step was intervened because the risk level is significant (> .5) and the effect of this observed action will trigger the undesirable state. For the passive attacker domains FIG8 -(b),(c),(d)) the learned model generated even simpler trees with only one feature being used to determine intervention. For Easy-IPC and Navigator domains, the Risk feature determines the class of an observation. This leads to generating explanations for the Easy-IPC and Navigator domains such as The current step was intervened because the risk level is significant (> .75 for Easy-IPC and > .5 for Navigator). For the Ferry domain, Distance to G d determines intervention. A negative value indicates that if the next step was executed there is no way to reach the desirable goal state without triggering the undesirable state. Thus an explanation of intervention for the Ferry domain will be: The current step was intervened because the effect of this step will make it impossible to reach the desired goal without triggering the undesirable state. Closely related areas of literature for this work is plan/goal recognition. Plan recognition is the problem of inferring the course of action (i.e., plan) an actor may take towards achieving a goal from a sequence of observations BID17 BID9. The constructed plan, if followed to completion, is expected to in states that correspond to goals of the actor, which in turn presupposes that the actor intends to achieve those goals. Plan/goal recognition approaches in the literature explore both domain-dependent and independent methods. In domain-dependent methods agents rely heavliy on domain knowledge for inference. For example, BID8 presents a solution that recognizes an agents adversarial intent by mapping observations made to date to a plan library. BID1 discuss how to construct and manipulate domain models that describe behaviors of adversaries in computer security domain and use these models to generate plans. Another approach uses Goal Driven Autonomy (GDA) that allows agents to continuously monitor the current plans execution and assess if the current state matches with expectation BID11. More recent work attempts to separate this knowledge dependency by allowing the agent to learn knowledge from observations (Jaidee, Muñoz-Avila, and W. Aha 2011). In contrast, domain-independent goal recognition that use planning to infer agents goals. Ramirez and Geffner (2009; used an existing planner to generate hypotheses from observations to infer a single agent's plan. Their approaches offer advantages of being more adaptive to input as well as exploiting existing planning systems and plan representations. Their first approach computed the set of goals that can be achieved by optimal plans that match the observations. The second approach removed the optimality constraint and computed a probability distribution across possible plans that could be generated from existing planners BID16 . BID10 introduced the worst-case distinctiveness (wcd) metric as a measurement of the ease of performing goal recognition in a domain. The wcd problem finds the longest sequence of actions an agent can execute while hiding its goal. They show that by limiting the set of available actions in the model wcd can be minimized, which will allow the agent to reveal it's goal as early as possible. 
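Because the learned trees are shallow, such explanations can be read directly off the decision path. The sketch below walks a fitted scikit-learn tree for one observation and verbalizes each test; the wording templates and feature names are illustrative, not the paper's explanation generator.

```python
FEATURE_NAMES = ["risk", "desirability", "distance to G_u", "distance to G_d", "L_ac"]

def explain(clf, x):
    """Turn the decision path of a shallow fitted tree into a sentence."""
    tree = clf.tree_
    node, reasons = 0, []
    while tree.children_left[node] != -1:          # not yet a leaf
        f, thr = tree.feature[node], tree.threshold[node]
        if x[f] <= thr:
            reasons.append("%s is at most %.2f" % (FEATURE_NAMES[f], thr))
            node = tree.children_left[node]
        else:
            reasons.append("%s exceeds %.2f" % (FEATURE_NAMES[f], thr))
            node = tree.children_right[node]
    # assumes labels encoded as {0: no intervention, 1: intervention}
    verdict = "intervened" if tree.value[node].argmax() == 1 else "not intervened"
    return "The current step was %s because %s." % (verdict, " and ".join(reasons))
```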
In online recognition, BID18 propose an approach that combines goal-mirroring and landmarks to infer the goal of an agent. Landmarks are used to minimize the number of hypotheses the agent has to evaluate, thus improving the effeciency of the recognition process. BID13 combines Ramirez and Geffener's plan recognition approach and leverages landmarks to counterplan and block an opponent's goal achievement. The main difference between plan intervention and recognition is that, in intervention the time intervention happens is critical. In plan recognition, identifying the plan at the right time is not a priority. The user's preferences in intervention (e.g., in-time, targetted intervention vs. prolonged and incremental) and the source of uncertainty in the environment (e.g., environment, attacker) complicate the intervening agent's decisioni and can be seen as trade-offs. Furthermore, our approach complements existing approaches by using a decision tree to identify events that warrant intervention and identifying salient features that may be useful in generating explanations to plan intervention. We formalized the online plan intervention problem in a competitive domain where an attacker both actively and passively attempts to leverage progress made by a user to achieve the attacker's own conflicting goals. We introduced the intervention graph, which models the decision space of an observer, whose goal is to support the user by blocking actions that allows the attacker to achieve his goal. We trained a classifier using domain-independent features extracted from the intervention graph to evaluate the criticality of the current state. The model predicts intervention with high accuracy for the benchmark domains. Our solution suffers from state space explosion for large domains. As an solution, we suggest sampling from alternative plans generated from off-the-shelf planners. This will also allow us to compare the proposed approach with existing online goal-recognition methods. The uncertainty model can be extended to limiting the observer's ability to fully perceive the current state. We recognize the attack models (for both active and passive cases) can be expanded to different threat models. For example, the attacker can behave as truly adversarial and undo progress the user has made so far and guide the user towards an entirely different goal. We will improve on explanations by suggesting actions that will help the user avoid the undesirable state when intervention occurs, instead of delegating the responsibility of being safe to the user, and integrating causal reasoning to explanations. These extensions lay a foundation for applying classical planning techniques for decision support and assistive agents. | We introduce a machine learning model that uses domain-independent features to estimate the criticality of the current state to cause a known undesirable state. | 1,037 | scitldr |
Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. However, manipulating such representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision. While previous work has focused on exploiting statistical independence to \textit{disentangle} latent factors, we argue that such requirement can be advantageously relaxed and propose instead a non-statistical framework that relies on identifying a modular organization of the network, based on counterfactual manipulations. Our experiments support that modularity between groups of channels is achieved to a certain degree on a variety of generative models. This allowed the design of targeted interventions on complex image datasets, opening the way to applications such as computationally efficient style transfer and the automated assessment of robustness to contextual changes in pattern recognition systems. Deep generative models, by learning a non-linear function mapping a latent space to the space observations, have proven successful at designing realistic images in a variety of complex domains (objects, animals, human faces, interior scenes). In particular, two kinds of approaches emerged as state-of-the-art (SOTA): Generative Adversarial Networks (GAN) , and Variational Autoencoders (VAE) . Efforts have been made to have such models produce disentangled latent representations that can control interpretable properties of images . However, the ing models are not necessarily mechanistic (or causal) in the sense that interpretable properties of an image cannot be ascribed to a particular part, a module, of the network architecture. Gaining access to a modular organization of generative models would benefit the interpretability and allow extrapolations, such as generating an object in a that was not previously associated with this object, as illustrated in a preview of our experimental in Fig. 1. Such extrapolations are an integral part of human representational capabilities (consider common expressions such as "like an elephant in a china shop") and consistent with the modular organization of its visual system, comprising specialized regions encoding objects, faces and places (see e.g.). Extrapolations moreover likely support adaptability to environmental changes and robust decision making . How to leverage trained deep generative architectures to perform such extrapolations is an open problem, largely due to the non-linearities and high dimensionality that prevent interpretability of computations performed in successive layers. In this paper, we propose a causal framework to explore modularity, which relates to the causal principle of Independent Mechanisms, stating that the causal mechanisms contributing to the overall generating process do not influence nor inform each other . 1 We study the effect of direct interventions in the network from the point of view that the mechanisms involved in generating data can be modified individually without affecting each other. This principle can be applied to generative models to assess how well they capture a causal mechanism . Causality allows to assay how an outcome would have changed, had some variables taken different values, referred to as a counterfactual . 
We use counterfactuals to assess the role of specific internal variables in the overall functioning of trained deep generative models, along with a rigorous definition of disentanglement in a causal framework. Then, we analyze this disentanglement in implemented models based on unsupervised counterfactual manipulations. We show empirically how VAEs and GANs trained on image databases exhibit modularity of their hidden units, encoding different features and allowing counterfactual editing of generated images. Related work. Our work relates to the interpretability of convolutional neural networks, which has been intensively investigated in discriminative architectures (; ; ; b; a). Generative models require a different approach, as the downstream effect of changes in intermediate representations are high dimensional. InfoGANs. β-VAEs and other works (; ; ;) address supervised or unsupervised disentanglement of latent variables related to what we formalize as extrinsic disentanglement of transformations acting on data points. We introduce the novel concept of intrinsic disentanglement to uncover the internal organization of networks, arguing that many interesting transformations are statistically dependent and are thus unlikely to be disentangled in the latent space. This relates to who proposed a framework based on interventions on internal variables of a GAN which, in contrast to our fully unsupervised approach, requires semantic information. suggest a definition of disentanglement based on group representation theory. Compared to this proposal, our approach (introduced independently in ) is more flexible as it applies to arbitrary continuous transformations, free from the strong requirements of representation theory (see Appendix F). Finally, an interventional approach to disentanglement has also be taken by , who focuses on extrinsic disentanglement in a classical graphical model setting and develop measures of interventional robustness based on labeled data. We introduce a general framework to formulate precisely the notion of disentanglement and bridge it to causal concepts. This theory section will be presented informally to ease high level understanding. Readers interested in the mathematical aspects can refer to Appendix A where we provide all details. FRAMEWORK We consider a generative model M that implements a function g M, which maps a latent space Z to a manifold Y M where the learned data points live, embedded in ambient Euclidean space Y (Fig. 2a). A sample from the model is generated by drawing a realization z from a prior latent variable distribution with mutually independent components, fully supported in Z. We will use the term representation to designate a mapping r from Y M to some representation space R (we also call r(y) the representation of a point y ∈ Y M ). In particular, we will assume (see Def. 6 and Prop. 6 M is a representation of the data, called the latent representation. Assuming the generative model is implemented by a non-recurrent neural network, we can use a causal graphical model representation of the entailed computational graph implementing the mapping g M through a succession of operations (called functional assignments in causal language), as illustrated in Fig. 2b, that we will call Causal Generative Model (CGM). 
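As a concrete (and deliberately tiny) illustration of such a computational graph, the sketch below builds a two-stage NumPy generator in which latent variables are first mapped to a set of internal channel activations and then to the output; it is a toy stand-in for the convolutional architectures considered later, not an implementation of any of them.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, CHANNELS, PIXELS = 8, 16, 64
W1 = rng.normal(size=(CHANNELS, LATENT))   # assignments of the internal variables V_k
W2 = rng.normal(size=(PIXELS, CHANNELS))   # downstream assignment of the output Y

def v_M(z):
    """Internal assignment: latent z -> channel activations (one scalar per channel)."""
    return np.maximum(W1 @ z, 0.0)

def g_tilde(v):
    """Endogenous mapping: channel activations -> output."""
    return W2 @ v

def g_M(z):
    """Latent mapping, the composition of the two steps above."""
    return g_tilde(v_M(z))
```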
In addition to the latent representation, we can then choose a collection of possibly multi-dimensional endogenous (internal) variables represented by nodes in the causal graph, such that the mapping g M is computed by composing the endogenous variable assignment v M with the endogenous mappingg M according to the diagram 1 Note that this is not a statistical independence; the quantities transformed by the mechanisms of course do influence each other and can be statistically dependent. (a) Independence of mechanisms Endogenous variables Latent A paradigmatic choice for these variables is the collection of output activation maps of each channel in one hidden layer of a convolutional neural network, as illustrated in Fig. 2b. As for the latent case, we use mild conditions that guaranteeg M to be left-invertible, defining the internal representation of the network (see Def. 6 and Prop. 4 in Appendix A). Given the typical choice of dimensions for latent and endogenous variables, the V k's are also constrained to take values in subsets V k M of smaller dimension than their Euclidean ambient space V k. As detailed in Appendix A, we will denote the endogenous image sets of the form ) k∈E, z ∈ Z for a subset of variables indexed by E (amounting to V M when E includes all endogenous variables). This CGM framework allows defining counterfactuals in the network following. Definition 1 (Unit level counterfactual, informal). Given CGM M, the interventional model M h is obtained by replacing assignments of the subset variables E and by the vector of assignments h. Then for any latent input z, called unit, the unit-level counterfactual is the output of Def. 1 is also in line with the concept of potential outcome . Importantly, counterfactuals induce a transformation of the output of the generative model. Given an embedded CGM, we call the transformation We introduce faithfulness of a counterfactual mapping to account for the fact that not all interventions on internal variables will in an output that could have been generated by the original model. In the context of generative model, non-faithful counterfactuals generate examples that leave the support of the distribution learned from data, possibly ing in an artifactual output (assigning a large value to a neuron may saturate downstream neurons), or allowing extrapolation to unseen data. The classical notion of disentangled representation (e.g. ;), posits individual latent variables "sparsely encode real-world transformations". Although the concept of real-world transformations remains elusive, this insight, agnostic to statistical concepts, has driven supervised approaches to disentangling representations, where relevant transformations are well-identified and manipulated explicitly using appropriate datasets and training procedures. In contrast, unsupervised learning approaches to disentanglement need to learn such real-world transformations from unlabeled data. In order to address this challenge, SOTA approaches seek to encode such transformations by changes in individual latent factors, and resort to a statistical notion of disentanglement, enforcing conditional independence between latent factors . This statistical approach leads to several issues: • The i.i.d. constraints on the prior distribution of latent variables, impose statistical independence between disentangled factors on the data distribution. This is unlikely for many relevant properties, counfounded by factors of the true data generating mechanisms (e.g. skin and hair color). 
• Independence constraints are not sufficient to specify a disentangled representation, such that the problem remains ill-posed . As a consequence, finding an appropriate inductive bias to learn a representation that benefits downstream tasks remains an open question. • To date, SOTA unsupervised approaches are mostly demonstrated on synthetic datasets, and beyond MNIST disentangling complex real world data has been limited to the well-calibrated CelebA dataset. On complex real-world datasets, disentangled generative models exhibit visual sample quality far below non-disentangled SOTA (e.g. BigGAN exploited in our work ). We propose an non-statistical definition of disentanglement by first phrasing mathematically the transformation-based insights . Consider a transformation T acting on the data manifold Y M. As illustrated by the commutative diagram of Fig. 2c, disentanglement of such property then amounts to having T correspond to a transformation T of the latent space that would act only on a single variable z k, using transformation f, leaving the other latent variables available to encode other properties. More explicitly we have It is then natural to qualify two transformations T 1 and T 2 as disentangled (from each other), whenever they modify different components of the latent representation (see Def. 9. This amounts to saying that the transformations follow the causal principle of independent mechanisms . Due to the fact that it relies on transformation of the latent representation, that are exogenous to the CGM, we call this notion extrinsic disentanglement. This "functional" definition has the benefit of being agnostic the the subjective choice of the property to disentangle, and to the statistical notion of independence. However, we can readily notice that, if applied to the latent space (where components are i.i.d. distributed), this functional notion of disentangled transformation still entails statistical independence between disentangled factors. We thus need to exploit a different representation to uncover possibly statistically related properties, but disentangled in the sense of our definition. As illustrated in the CGM of Fig. 2b, in contrast to latent variables, properties encoded by endogenous variables of the graphical model are not necessarily statistically independent due to common latent cause, but may still reflect interesting properties of the data that can be intervened on independently, following the principle of independence of mechanisms. We thus extend our definition of disentanglement to allow transformations of the internal variables of the network as follows. intrinsically disentangled with respect to a subset E of endogenous variables, if there is transformation T acting on the internal representation space such that for any endogenous value v where T (v) only affects the variables indexed by E. on variables that remain within the support of the original marginal distribution, it is sufficient that E and its complement E do not have common latent ancestors. This indicates that finding faithful counterfactuals can be used to learn disentangled transformations. Building on Sec. 2, we define modularity as a structural property of the internal representation, allowing (with the immediately following Prop. 2) to implement arbitrary disentangled transformations. Definition 4 (Modularity). A subset of endogenous variables E is called modular whenever. If E is modular, then any transformation applied to it staying within its input domain is disentangled. 
The proof is a natural extension of the proof of Proposition 1. Both the Definition and the Proposition have trivial extensions to multiple modules (along the line described in Appendix F). While we have founded this framework on a functional definition of disentanglement that applies to transformations, the link made here with an intrinsic property of the trained network allows us to define a disentangled representation as follows: consider of partition of the intermediate representation in several modules, such that their Cartesian product is a factorization of V M. We can call this partition a disentangled representation since any transformation applied to a given module leads to a valid transformation in the data space (it is relatively disentangled following Def. 9). Interestingly, we obtain that a disentangled representation requires the additional introduction of a partition of the considered set of latent variables into modules. This extra requirement was not considered in classical approaches to disentanglement as it was assumed that each single scalar variables could be considered as an independent module. Our framework provides an insight relevant to artificial and biological systems: as the activity of multiple neurons can be strongly tied together, the concept of representation may not be meaningful at the "atomic" level of single neurons, but require to group them into modules forming a "mesoscopic" level, at which each group can be intervened on independently. As stated in Sec. 2, a functional definition of disentanglement, leaves unanswered how to find relevant transformations. Prop. 1 and 2 provide the following hints: Once a modular structure is found in the network, a broad class of disentangled transformations are available, Transformations that stay within their input domain are good candidates of disentanglement, Counterfactual interventions implicitly defines transformation. We follow these guidelines by assigning a constant value v 0 to a subset of endogenous variables E to define counterfactuals (i.e. h is a constant function), aiming for faithful ones by constraining v 0 to belong to V E M. To avoid characterizing V E M, we rely on sampling from the (joint) marginal distribution of the variables in E. To illustrate the procedure, we consider a standard feed-forward multilayer neural network and choose endogenous variables to be the collection of all output activations of channels of a given layer. Let E be a subset of these channels, the hybridization procedure, illustrated in Fig. 3a goes as follows. We take two independent examples of the latent variable z 1 and z 2, that will generate two original examples of the output (y 1, y 2) = (g M (z 1), g M (z 2)) (that we call Original 1 and Original 2). We also memorize the tuple v(z 2) gathering values of variables indexed by E when generating Original 2, and v(z 1) the tuple of values taken by all other endogenous variables on this layer, but when generating Original 1. Assuming the choice of E identifies a modular structure, v(z 1) and v(z 2) would encode different aspects of their corresponding generated images, such that one can generate a hybrid example mixing these features by assigning the collection of output values of layer with the concatenated tuple (v(z 1), v(z 2)) and feeding it to the downstream part of the generator network. The above counterfactual hybridization framework allows assessing how a given module E affects the output of the generator. 
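A sketch of the hybridization itself, reusing the hypothetical toy generator above; channels in the module E take their values from the second latent sample, all other channels from the first.

```python
def hybridize(z1, z2, module_E):
    """Counterfactual output: channels in E are assigned from z2, the rest from z1."""
    v = v_M(z1).copy()
    idx = list(module_E)
    v[idx] = v_M(z2)[idx]
    return g_tilde(v)

# Original 1, Original 2 and the hybrid mixing their features on module E = {0, 3, 7}.
z1, z2 = rng.normal(size=LATENT), rng.normal(size=LATENT)
original_1, original_2 = g_M(z1), g_M(z2)
hybrid = hybridize(z1, z2, module_E={0, 3, 7})
```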
For this purpose we quantify its causal effect by repetitively generating pairs (z 1, z 2) from the latent space, where both vectors are sampled independently of each other. We then generate and collect hybrid outputs following the above described procedure for a batch of samples and use them to estimate an influence map as the mean absolute effect: where Y (z 1) = g M (z 1) is the non-intervened output of the generator for latent input z 1. In eq. 2, the difference inside the absolute value can be interpreted as a unit-level causal effect in the potential outcome framework , and taking the expectation is analogous to computing the average treatment effect. Our approach has however two specificities: we take the absolute value of the unit-level causal effects, as their sign may not be consistent across units, the is averaged over many interventions corresponding to different values of z 2. While IM has the same dimension as the output image, we then average it across color channels to get a single grayscale heat-map pixel map. We also define a scalar quantity to quantify the magnitude of the causal effect, the individual influence of module E, by averaging IM across output pixels. A challenge with the hybridization approach is to select the subsets E to intervene on, especially with networks containing a large amount of units or channels per layer. We use a fine to coarse approach to extract such groups, that we will describe in the context of convolutional layers. First, we estimate elementary influence maps (EIM) associated to each individual output channel c of each convolutional layer of the network (i.e. we set E = {c} in eq.). Then influence maps are grouped by similarity to define modules at a coarser scale, as we will describe in detail below. Representative EIMs for channels of convolutional layers of a VAE trained on the CelebA face dataset (see section) are shown in Supplementary Fig. 6 and suggest channels are functionally segregated, with for example some influencing finer face feature (eyes, mouth,...) and others affecting the of the image or the hair. This supports the idea that individual channels can be grouped into modules that are mostly dedicated to one particular aspect of the output. In order to achieve this grouping in an unsupervised way, we perform clustering of channels using their EIMs as feature vectors as follows. We first pre-process each influence map by: performing a local averaging with a small rectangular sliding window to smooth the maps spatially, thresholding the ing maps at the 75% percentile of the distribution of values over the image to get a binary image. After flattening image dimensions, we get a (channel×pixels) matrix S which is then fed to a Non-negative Matrix Factorization (NMF) algorithm with manually selected rank K, leading to the factorization S = WH. From the two ing factor matrices, we get the K cluster template patterns (by reshaping each rows of H to image dimensions), and the weights representing the contribution of each of these pattern to individual maps (encoded in W). Each influence map is then ascribed a cluster based on which template pattern contributes to it with maximum weight. The choice of NMF is justified by its success in isolating meaningful parts of images in different components . However, we will also compare our approach to the classical k-means clustering algorithm. In order to further justify our NMF based approach, we also introduce a toy generative model. Model 1. Consider Z a vector of K i.i.d. 
uniformly distributed RVs. Assume a neural network with one hidden layers composed of m vector variables V k such that with H k ∈ R n, n > 1 and S a strictly increasing activation function applied entry-wise to the components of each vector (e.g. a leaky ReLU). These endogenous variables are mapped to the output with matrices W k ∈ R m×n, m > nK. Assume additionally the following random choice for the model parameters: all coefficients of H k's are sampled i.i.d. from an arbitrary distribution that has a density with respect to the Lebesgue measure, there exists K sets of indices I k over [1, m] each containing at least one element l k ∈ I k such that for all j = k, l k / ∈ I j, For a given column of W k, coefficient in I k are sampled i.i.d. from an arbitrary distribution that has a density with respect to the Lebesgue measure, while the remaining coefficients are set to zero. The specific condition on the I k's enforced in encodes the assumption that there is an area in the image that is only influenced by one of the modules. For example, assuming a simple /object module pair, it encodes that the borders of the image never belong to the object while the center of the image never belong to . For this model, we get the following identifiability . Proposition 3. For Model 1, with probability 1 we have: The partition of the hidden layer entailed by the K vectors {V k} corresponds to a disentangled representation (i.e. each vector is modular relatively to the others). This justifies the use of NMF of a thresholded version of the influence map matrix computed for individual endogenous variables (to generate a binary matrix summarizing their significant influences on each output pixel). Moreover, the application of the sliding window is justified in order to enforce the similarity between the influence maps belonging to the same module, reflected by the condition on identical support I k for all columns of W k in Model 1, and favoring low-rank matrix factorization. 4.1 DCGAN, β-VAE AND BEGAN ON THE CELEBA DATASET We first investigated modularity of genrative models trained on the CelebFaces Attributes Dataset (CelebA) .We first used a basic architecture: a plain β-VAE (https://github.com/ yzwxx/vae-celebA . We ran the full procedure described in Sec. 3, comprised of EIM calculations, clustering of channels into modules, and hybridization of generator samples using these modules. Hybridization procedures were performed by intervening on the output of the intermediate convolutional layer (indicated in Supplemental Fig. 7). The are summarized in Supplemental Fig. 8. We observed empirically that setting the number of clusters to 3 leads consistently to highly interpretable cluster templates as illustrated in the figure, with one cluster associated to the , one to the face and one to the hair. This observation was confirmed by running the following cluster stability analysis: we partition at random the influence maps in 3 subsets, and we use this partition to run the clustering twice on two thirds of the data, both runs overlapping only on one third. The obtained clusters were then matched in order to maximize the label consistency (the proportion of influence maps assigned the same label by both runs) on the overlapping subset, and this maximum consistency was used to assess robustness of the clustering across the number of clusters. The consistency are provided in Supplemental Fig. 9 and show 3 clusters is a reasonable choice as consistency is large (> 90%) and drops considerably for 4 clusters. 
Moreover, these also show that the NMF-based clustering outperforms clustering with the more standard k-means algorithm. In addition, we also assessed the robustness of the clustering by looking at the cosine distance between the templates associated to matching clusters, averaged across clusters. The , also provided in Supplemental Fig. 9, are consistent with the above analysis with an average cosine similarity of.9 (scalar product between the normalized feature vectors) achieved with 3 clusters (maximum similarity is 1 for perfectly identical templates). Exemplary influence maps shown in Supplemental Fig. 8 (center panel) reflect also our general observation: some maps may spread over image locations reflecting different clusters. Interestingly, applying the hybridization procedure to the ing 3 modules obtained by clustering leads to a replacement of the features associated to the module we intervene on, as shown in Supplemental Fig. 8 (center panel), while respecting the overall structure of the image (no discontinuity introduced). For example, on the middle row we see the facial features of the Original 2 samples are inserted in the Original 1 image (shown on the left), while preserving the hair. While the β-VAE is designed for extrinsic disentanglement, further work has shown that it can prove suboptimal with respect to other approaches suggesting further work could investigate whether better extrinsic disentanglement could also favor intrinsic disentanglement. It is however important to investigate intrinsic disentanglement in models for which (extrinsic) disentanglement is not enforced explicitly. This is in particular the case of most GAN-like architectures, who typically outperform VAE-like approaches in terms of sample quality in complex image datasets. Interestingly, the above could also be reproduced in the official tensorlayer DCGAN implementation, equipped with a similar architecture (https://github.com/tensorlayer/dcgan) (see Appendix E). This suggests that our approach can be applied to models that have not been optimized for disentanglement. After these experiments with basic models, we used a pretrained (https: //github.com/Heumi/BEGAN-tensorflow) Boundary Equilibrium GAN (BEGAN) , which used to set a milestone in visual quality for higher resolution face images. The good quality and higher resolution of the generated images combined with the relatively simple generator architecture of BEGAN allows us to test our hypothesis with minimal modifications of the computational graph. Most likely due to the increase in the number of layers, we observed that obtaining counterfactuals with noticeable effects required interventions on channels from the same cluster in two successive layers. The shown in Fig. 3b, obtained by intervening on layers 5 and 6, reveal a clear selective transfer of features from Original 2 to Original 1. As the model was trained on face images cropped with a tighter frame than for the above models, leaving little room for the hair and , we observe only one module associated to these features (Fig. 3b, middle row) showing a clear hair transfer. The remaining two modules are now encoding different aspects of face features: eye contour/mouth/nose for the top row and eyelids/face shape for the bottom row module. 
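A sketch of the influence-map and clustering pipeline of Sec. 3 as applied in these experiments: per-channel influence maps are estimated by Monte-Carlo averaging of absolute counterfactual effects, smoothed, thresholded at the 75th percentile, and factorized with NMF; each channel is then assigned the cluster whose template contributes to it with maximum weight. The code assumes the toy generator and hybridize() of the earlier sketches, and uses scikit-learn's NMF as an illustrative substitute for the actual implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import NMF

# Assumes g_M, hybridize, rng, LATENT, CHANNELS, PIXELS from the earlier sketches.

def influence_map(channel, n_samples=200):
    """Elementary influence map: mean absolute effect of hybridizing one channel."""
    acc = np.zeros(PIXELS)
    for _ in range(n_samples):
        z1, z2 = rng.normal(size=LATENT), rng.normal(size=LATENT)
        acc += np.abs(hybridize(z1, z2, {channel}) - g_M(z1))
    return acc / n_samples

# (channels x pixels) matrix S of smoothed, thresholded elementary influence maps.
eims = np.stack([influence_map(c) for c in range(CHANNELS)])
eims = uniform_filter(eims.reshape(CHANNELS, 8, 8), size=(1, 3, 3)).reshape(CHANNELS, -1)
S = (eims >= np.percentile(eims, 75, axis=1, keepdims=True)).astype(float)

K = 3                                      # number of modules (clusters)
nmf = NMF(n_components=K, init="nndsvda", max_iter=500)
W = nmf.fit_transform(S)                   # contribution of each template to each map
H = nmf.components_                        # K template patterns over pixels
modules = W.argmax(axis=1)                 # cluster assignment per channel
print([np.where(modules == k)[0].tolist() for k in range(K)])
```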
We further evaluated the relative quality of the counterfactual images with respect to the original generated images using the Frechet Inception Distance (FID) (Table 2 in the appendix), supporting that the hybridization procedure only mildly affects the image quality, in comparison to the original samples. In order to check whether our approach could scale to high resolution generative models, and generalize to complex image datasets containing a variety of objects, we used the BigGAN-deep architecture , pretrained (https://tfhub.dev/deepmind/biggan-deep-256/1) on the ImageNet dataset (http://www.image-net.org/). This is a conditional GAN architecture comprising 12 so-called Gblocks, each containing a cascade of 4 convolutional layers (see Appendix C for details). Each Gblock also receives direct input from the latent variables and the class label, and is bypassed by a skip connection. We then checked that we were able to generate hybrids by mixing the features of different classes. As for the case of BEGAN, intervening on two successive layers within a Gblock was more effective to generate counterfactuals (examples are provided for the 7th Gblock). Examples provided in Fig. 4 (cock-ostrich) show that it is possible to generate high quality counterfactuals with modified while keeping a very similar object in the foreground. In a more challenging situation, with objects of different nature (Koala-teddy bear on the same figure), meaningful combinations of each original samples are still generated: e.g. a teddy bear in a tree (bottom row), or a "teddy-koala" merging teddy texture with the color of a koala on a uniform indoor with a wooden texture (top row). In order to investigate how the generated counterfactual images can be used to probe and improve the robustness of classifiers to contextual changes, we compared the ability of several SOTA pretrained classifier available on Tensorflow-hub (https://tfhub.dev/, see Appendix C for details) to recognize one of the original classes. Fig. 5 shows the average recognition rate of the most recognized original class (teddy-bear or koala), as a function of layer depth tends overall to increase. We first observe that high recognition rates are in line with the small pixel distance between hybrids and original when intervening at layers closest to the output (right panel). Interestingly, at intermediate blocks 5-6, there is a clear contrast between classifiers, with the Inception resnet performing better than the others. Interestingly, examples of non-consensual classification in Supplementary Table 3, together with the associated hybrids (Supplementary Fig. 17) suggest different SOTA classifiers rely on different aspects of the image content to take their decision (e.g. versus object). We introduced a mathematical definition of disentanglement, related it to the causal notion of counterfactual and used it for the unsupervised characterization of the representation encoded by different groups of channels in deep generative architectures. We found evidence for interpretable modules of internal variables in four different generative models trained on two complex real world datasets. Our framework opens a way to a better understanding of complex generative architectures and applications such as the style transfer of controllable properties of generated images at low computational cost (no further optimization is required), and the automated assessment of robustness of object recognition systems to contextual changes. 
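The robustness probe amounts to counting how often a pretrained classifier still reports one of the original classes on hybrid images. A minimal sketch follows, assuming hypothetical make_hybrid and classify functions standing in for the BigGAN hybridization and the TensorFlow-hub classifiers used above.

```python
def recognition_rate(make_hybrid, classify, class_a, class_b, block, n_pairs=100):
    """Fraction of hybrids whose top-1 prediction is still one of the original classes."""
    hits = 0
    for _ in range(n_pairs):
        hybrid = make_hybrid(class_a, class_b, block)   # intervene at one generator block
        hits += int(classify(hybrid) in (class_a, class_b))
    return hits / n_pairs

# Sweeping over generator depth reproduces the layer-wise trend of Fig. 5, e.g.:
# rates = {block: recognition_rate(make_hybrid, classify, "teddy bear", "koala", block)
#          for block in range(1, 13)}
```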
From a broader perspective, this research direction contributes to a better exploitation of deep neural networks obtained by costly and highly energy-consuming training procedures, by enhancing their interpretability and allowing them to be used for tasks their where not trained for. This offers a perspective on how more sustainable research in Artificial Intelligence could be fostered in the future. We rely on the assumption that a trained generator architecture can be exploited as a mechanistic model, such that parts of this model can be manipulated independently. A mathematical representation of such models can be given using structural causal models (SCMs, that rely on structural equations (SEs) of the form Y:= f (X 1, X 2, · · ·, X N,), denoting the assignment of a value to variable Y, computed from the values of other variables X k in the system under consideration, and of putative exogenous influences, imposed by factors outside the system under study. As in the above equation, we will use uppercase letters to indicate variables being the outcome of a structural assignment, while specific values taken by them will be lower case. SEs stay valid even if right-hand side variables undergo a change due to interventions (; , e.g.), and can model the operations performed in computational graphs of modern neural network implementations. Such graphs then depict SCMs made of interdependent modules, for which assignments' dependencies are represented by a directed acyclic graph G. Without loss of generality, we introduce a Causal Generative Model (CGM) M capturing the computational relations between a selected subset of variables comprising: the input latent variables {z k}, the generator's output Y (typically multidimensional), and a collection of possibly multi-dimensional endogenous (internal) variables forming an intermediate representation such that the generator's output can be decomposed into two successive steps as {Z k} → {V k} → Y. In a feed-forward neural network, one V k may for instance represent one channel of the output of a convolutional layer (e.g. after application of the ReLU non-linearity). where all Z k's are closed intervals, the CGM M = G(Z, S, G) comprises a directed acyclic graph G and a set S of N + 1 deterministic continuous structural equations that assign: The graph of an example CGM is exemplified on Fig. 2b, consisting of 3 endogenous variables, 2 latent inputs and the output. This aligns with the definition of a deterministic structural causal model by Pearl (2009, chapter 7), once our latent variables are identified with exogenous ones. CGMs have however specificities reflecting the structure of models encountered in practice. For instance, variable assignments may or may not involve latent/exogenous variables in their right-hand side, which is unusual in causal inference. This allows modeling feed-forward networks consisting in a first layer receiving latent inputs followed by a cascade of deterministic operations in downstream layers. The above definition guaranties several basic properties found in the computational graph of existing generative networks: all endogenous variables V k are unambiguously assigned once z is chosen, the output Y is unambiguously assigned once either z is chosen, or, alternatively, if an appropriate subset of V k's, such as Pa y, is assigned. This allows us to introduce several useful mappings. 
In an ideal case, while the support of the latent distribution covers the whole latent space Z, internal variables and outputs typically live on manifolds of smaller dimension than their ambient space. These can be defined as the images 4 of Z by operations of the graph: the out- ) k∈E, z ∈ Z for a subset of variables indexed by E, and V M when E includes all endogenous variables. Functions assigning Y from latent variables and from endogenous variables, respectively, are and we call them latent and endogenous mappings, respectively. Given the typical choice of dimensions for latent and endogenous variables, the V k's and Y are constrained to take values in subsets of their euclidean ambient space. We will assume thatg M and g M define proper embeddings, in particular implying that they are both invertible. We call a CGM satisfying these assumptions an embedded CGM. With this vocabulary we can for example verify the example of Fig. 2b contains exactly two layers (in green). Note g M andg M are well defined because the output can be unambiguously computed from their inputs by successive assignments along G, and are both surjective due to appropriate choices for domains and codomains. All defined image sets (V M, Y M, ...) are constrained by the parameters of M, and are typically not easy to characterize. For example V M is likely a strict subset of the Cartesian product k V k M. Importantly, the image set Y M of a trained model is of particular significance, as it should approximate at best the support of the data distribution we want to model. Learning the generator parameters such that Y M precisely matches the support of the target data distribution is arguably a major goal for generative models (see e.g.). As we will manipulate properties of the output, we restrict ourselves to transformations that respect the topology of Y M, and use embeddings as the basic structure for it, allowing inversion of g M. Definition 6 (Embedded CGMs). If f: X → Y is a continuous injective function with continuous inverse f Since Definition 5 imposes continuous structural equations,which is satisfied for all operations in standard generative models, injectivity of g M is the key additional requirement for embedded CGMs. Proposition 4. If Z of CGM M is compact (all Z k 's are bounded), then M is embedded if and only if g M is injective. Proof is provided in Appendix B. This implies that generative models based on uniformly distributed latent variables (the case of many GANs), provided they are injective, are embedded CGMs. While VAEs' latent space is typically not compact (due to the use of normally distributed latent variables), we argue that restricting it to a product of compact intervals (covering most of the probability mass) will in an embedded CGM that approximates the original one for most samples. Based on this precise framework, we can now provide the formal definitions and described informally in main text. The CGM framework allows defining counterfactuals in the network following. Definition 7 (Unit level counterfactual). Given CGM M, for a subset of endogenous variables E = {e 1, .., e n}, and assignment h of these variables, we define the interventional CGM M h obtained by replacing structural assignments for V |E by assignments {V e k := h k (z)} e k ∈E. Then for a given value z of the latent variables, called unit, the unit-level counterfactual is the output of Definition 7 is also in line with the concept of potential outcome . 
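The definitions above can be made concrete with a toy CGM in which latent inputs are mapped through endogenous variables to the output by deterministic structural assignments, and a unit-level counterfactual is obtained by re-running the graph with a subset E of those assignments clamped. The three assignments below are arbitrary toy functions chosen only to keep the sketch runnable; they are not taken from any trained model.

import math

def forward(z, interventions=None):
    """Evaluate the CGM for unit z; `interventions` maps variable names to
    clamped values, mimicking the interventional CGM M_h."""
    interventions = interventions or {}
    v = {}
    v["v1"] = interventions.get("v1", math.tanh(z[0] + 0.5 * z[1]))
    v["v2"] = interventions.get("v2", math.tanh(z[1] - z[0]))
    v["v3"] = interventions.get("v3", v["v1"] * v["v2"])
    y = v["v3"] + 0.1 * v["v1"]        # output assignment Y := f(Pa_y)
    return y, v

z = (0.3, -1.2)                        # one "unit": a fixed latent value
y_factual, v = forward(z)
# Unit-level counterfactual: what would Y have been for the same unit if v1
# had taken the value it receives under a different latent input z'?
_, v_other = forward((1.0, 0.7))
y_counterfactual, _ = forward(z, interventions={"v1": v_other["v1"]})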
Importantly, conterfactuals induce a transformation of the output of the generative model. Definition 2 (Counterfactual mapping). Given an embedded CGM, we call the continuous map Our approach then relates counterfactuals to a form of disentanglement allowing transformations of the internal variables of the network as follows. Definition 3 (Intrinsic disentanglement). In a CGM M, endomorphism T: Y M → Y M is intrinsically disentangled with respect to a subset E of endogenous variables, if it exists a transformation T of endogenous variables such that for any latent z ∈ Z, leading to the tuple of values v ∈ V M, where T (v) only affects the variables indexed by E. In this definition, Y (v) corresponds to the unambiguous assignment of Y based on endogenous values. 5 Fig. 2d illustrates this second notion of disentanglement, where the split node indicates that the value of V 2 is computed as in the original CGM (Fig. 2b) before applying transformation T 2 to the outcome. Intrinsic disentanglement relates to a causal interpretation of the generative model's structure in the sense that it expresses a form of robustness to perturbation of one of its subsystems. Counterfactuals represent examples of such perturbations, and as such, may be disentangled given their faithfulness. Following a stated in , since Z is compact and the codomain of g M is Hausdorff (because Euclidean), then a continuous (by definition) and injective g M is an embedding. In addition, g M injective impliesg M's are injective on their respective domains V M. Moreover, the V M's being image of a compact Z by a continuous mapping (by the CGM definition), they are compact, such that the respectiveg M's are also embeddings. Part 1: Proof of the equivalence between faithful and disentangled. One conditional is trivial: if a transformation is disentangled, it is by definition an endomorphism of Y M so the counterfactual mapping must be faithful. For the second conditional, let us assume a faithful Y E h and denote the (unambiguous) map from V to the output such that T is disentangled with respect to E in M. Proof of Proposition 5. The absence of common latent ancestor between E and E ensures that values in both subsets are unambiguously assigned by non-overlapping subsets of latent variables, A and B respectively, such that we can write This implies that the image set of this layer fully covers the Cartesian product of the image sets of the two subsets of variables, i.e., and guaranties that T is and endomorphism of V M for any choice of endomorphism T E. This further implies T is well defined and an endomorphism. Due to the i.i.d. assumption for components of Z and the structure following the sufficient condition of Prop. 1, it is clear that the subsets of endogenous variables associated to each V k are modular and the associated partition of the hidden layer is a disentangled representation. The choice of increasing dimensions as well as the i.i.d. sampling of the model parameters from a distribution with a density make sure the ing mapping is injective (and hence follow the embedded CGM assumptions of Def. 6) and that counterfactual hybridization of any component of V k will in an influence map whose support covers exactly I k. Finally, the conditions on the I k's and the thresholding approach guaranties a rank K binary factorization of the matrix B, with one factor gathering the indicator vectors associated to each V k and the uniqueness of this factorization is guaranteed by classical NMF identifiability , e.g. 
following (Diop et al.) [Theorem III,1]. VANILLA β-VAE AND DCGAN The β-VAE architecture is presented in Supplementary Fig. 7 and is very similar to the DCGAN architecture. Hyperparameters for both structures are specified in Table 1. We used the method proposed in for CelebA dataset. We used the pre-trained model with the same architecture as was used in the paper. It consists of three blocks of convolutional layers each followed by an upsampling layer. The filter size of convolutional layers is all over the generator. There is also skip connections in the model that is argued to increase the sharpness of images. Consult (, Figure 1) for architectural details. The pretrained model is taken from Tensorflow-hub (https://tfhub.dev/, we summarize below the main aspects of the architectures. We used the BigGan-deep architecture of as a pre-trained model on 256x256 ImageNet. We did not retrain the model. The architecture consists of several ResBlocks which are the building block of the generator. Each ResBlock contains BatchNorm-ReLU-Conv Layers followed by upsampling transformations and augmented with skip connections that bring fresh signal from the input to every ResBlock. for architectural details. We argue that the notion introduced in Table 1 Figure 6: Generation of influence maps. Example of influence maps generated by a VAE on the CelebA dataset (lighter pixel indicate larger variance and thus stronger influence of the perturbations on that pixel). Table 2: FID analysis of BEGAN hybrids. Distance between different pairs of classes (R: Real data, G: Generated data, Ck: Hybrids by intervention on cluster k). The distances are is computed for balanced number of examples (10k) for each class and normalized by the FID between the real data and the generated data. It can be seen in the table that Hybrids have a small distance to the generated and also to each other. This can be interpreted as closeness of the distribution of Hybrids to that of generated data suggesting that Hybridization produces visually plausible images. Fig. 13. Columns indicate intervened Gblock (from 4 to 6), rows indicate module (as ordered in Fig. 13). The entropy is computed using the probabilistic output for the 10 classes receiving top ranking across all hybrids, normalized to provide a total probability of 1. In particular for Gblock 6 (left column) we can see that the module with poorer quality leads to larger entropy values. Interestingly, entropy values are also much smaller for hybrids based on interventions on the (more abstract level) Gblock number 4. Overall, the suggests that object texture, which is well rendered in hybrids generated from Gblock 4, is a key information for the classifier's decision. Under review as a conference paper at ICLR 2020 Figure 15: Larger collection of hybrids for the BIGAN, between classes "cock" and "ostrich". Each panel corresponds to intervening on a different Gblock (from 4 to 6, from top to bottom respectively). Modules are fixed and extracted by the NMF algorithm using 3 clusters. Each row of hybrids corresponds to interventions on one of the extracted module, numbered on the right-hand side. Fig. 15 ). The entropy is computed using the probabilistic output for the 10 classes receiving top ranking across all hybrids, normalized to provide a total probability of 1. Interestingly, large entropy is obtained for first module of Gblock 5 (middle column), consistent with the fact the intervention generates a hybrid bird, mixing shape properties of both cock and ostrich. 
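A minimal sketch of the module-extraction pipeline that the influence maps and NMF clusters above refer to: each channel is perturbed, the per-pixel variance of the output gives that channel's influence map, and NMF on the stacked maps assigns channels to modules. The helper generate(z, channel=..., noise=...) and the perturbation scheme are assumptions made for illustration, not the exact procedure used to produce the figures.

import numpy as np
from sklearn.decomposition import NMF

def influence_maps(generate, n_channels, latent_dim, n_samples=32, sigma=1.0):
    """generate(z, channel=None, noise=0.0) -> image array (assumed helper).
    Returns an (n_channels, n_pixels) matrix of per-pixel output variances."""
    maps = []
    for c in range(n_channels):
        diffs = []
        for _ in range(n_samples):
            z = np.random.randn(latent_dim)
            base = generate(z)
            pert = generate(z, channel=c, noise=sigma * np.random.randn())
            diffs.append((pert - base).reshape(-1))
        maps.append(np.var(np.stack(diffs), axis=0))
    return np.stack(maps)

# M = influence_maps(generate, n_channels=64, latent_dim=64)
# loadings = NMF(n_components=3, init="nndsvd", max_iter=500).fit_transform(M)
# modules = loadings.argmax(axis=1)     # assign each channel to one module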
Table 3: The classification outcome of several discriminative models for three randomly chosen koala+teddy hybrids (see Figure 17). The purpose of this experiment is to investigate the use of the proposed intervention procedure for assessing the robustness of classifiers. As can be seen in the images, the resulting hybrids are roughly a teddy bear in a koala context. An ideal classifier should be sensitive to the object present in the scene, not to the contextual information: a teddy bear must still be classified as a teddy bear even if it appears in a tree, which is the koala environment in most of the koala images in the ImageNet dataset. The table shows that NASNet-large is more robust to this change of context than the other classifiers. Figure 17: Three koala+teddy hybrids used as inputs to the classifiers of Table 3 | We develop a framework to find modular internal representations in generative models and manipulate them to generate counterfactual examples. | 1,038 | scitldr |
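A sketch of the robustness probe summarized in Table 3: the same hybrid images are passed through several pretrained classifiers and we record how often each still predicts one of the two original classes. The torchvision models and the ImageNet class indices below are assumed stand-ins for the TensorFlow-Hub classifiers actually compared.

import torch
from torchvision import models

classifiers = {
    "resnet50": models.resnet50(pretrained=True).eval(),
    "densenet121": models.densenet121(pretrained=True).eval(),
}
TEDDY, KOALA = 850, 105        # ImageNet class ids, assumed for illustration

def survival_rate(hybrids):
    """hybrids: float tensor (N, 3, 224, 224), already ImageNet-normalized."""
    rates = {}
    with torch.no_grad():
        for name, net in classifiers.items():
            pred = net(hybrids).argmax(dim=1)
            keep = ((pred == TEDDY) | (pred == KOALA)).float().mean().item()
            rates[name] = keep
    return rates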
Catastrophic forgetting poses a grand challenge for continual learning systems, which prevents neural networks from protecting old knowledge while learning new tasks sequentially. We propose a Differentiable Hebbian Plasticity (DHP) Softmax layer which adds a fast learning plastic component to the slow weights of the softmax output layer. The DHP Softmax behaves as a compressed episodic memory that reactivates existing memory traces, while creating new ones. We demonstrate the flexibility of our model by combining it with existing well-known consolidation methods to prevent catastrophic forgetting. We evaluate our approach on the Permuted MNIST and Split MNIST benchmarks, and introduce Imbalanced Permuted MNIST — a dataset that combines the challenges of class imbalance and concept drift. Our model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting. A key aspect of human intelligence is the ability to continually adapt and learn in dynamic environments, a characteristic which is challenging to embed into artificial intelligence. Recent advances in machine learning (ML) have shown tremendous improvements in various problems, by learning to solve one complex task very well, through extensive training on large datasets with millions of training examples or more. Most of the ML models that we use during deployment assume that the real-world is stationary, where in fact it is non-stationary and the distribution of acquired data changes over time. Therefore, after learning is complete, and these models are fine-tuned with new data, performance degrades with respect to the original data. This phenomenon *Work done during an internship at Uber AI. † Work done while at Google Brain. known as catastrophic forgetting or catastrophic interference BID17 BID7 serves to be a crucial problem for deep neural networks (DNNs) that are tasked with continual learning BID26 or lifelong learning . In this learning paradigm, the goal is to adapt and learn consecutive tasks without forgetting how to perform previously learned tasks. Some of the real-world applications that typically require this kind of learning include perception for autonomous vehicles, recommender systems, fraud detection, etc. In most supervised learning methods, DNN architectures require independent and identically distributed (iid) samples from a stationary training distribution. However, for ML systems that require continual learning in the real-world, the iid assumption is easily violated when: There is concept drift or class imbalance in the training data distribution. Data representing all scenarios in which the learner is expected to perform are not initially available. In such situations, DNNs face the "stability-plasticity dilemma" BID6 BID0. This presents a continual learning challenge for models that need to balance plasticity (integrate new knowledge) and stability (preserve existing knowledge).Two major theories have been proposed to explain a human's ability to perform continual learning. The first theory is inspired by synaptic consolidation in the mammalian neocortex BID5 where a subset of synapses are rendered less plastic and therefore preserved for a longer timescale. The second theory is the complementary learning systems (CLS) theory BID16 BID23 BID12, which suggests that humans extract high-level structural information and store it in a different brain area while retaining episodic memories. 
Here, we extend the work on differentiable plasticity BID18 BID19 to a continual learning setting and develop a model that is capable of adapting quickly to changing environments as well as consolidating previous knowledge by selectively adjusting the plasticity of synapses. We modify the traditional softmax layer and propose to augment the slow weights with a set of plastic weights implemented using Differentiable Hebbian Plasticity (DHP). The model's slow weights learn deep representations of data and the fast weights implemented with DHP learn to quickly "auto-associate" the class labels to representations. We also demonstrate the flexibility of our model by combining it with recent task-specific synaptic consolidation based methods to overcoming catastrophic forgetting such as elastic weight consolidation BID11 BID28, synaptic intelligence and memory aware synapses. Our model unifies core concepts from Hebbian plasticity, synaptic consolidation and CLS theory to enable rapid adaptation to new unseen data, while consolidating synapses and leveraging compressed episodic memories to remember previous knowledge and mitigate catastrophic forgetting. Plastic Neural Networks: One of the major theories that have been proposed to explain a human's ability to learn continually is Hebbian learning BID8, which suggests that learning and memory are attributed to weight plasticity, that is, the modification of the strength of existing synapses according to variants of Hebb's rule BID24; BID22.Recent approaches in the meta-learning literature have shown that we can incorporate fast weights into a neural network BID21 BID25. BID21 augmented fully-connected (FC) layers preceding the softmax with a matrix of fast weights. Here, the fast weights were implemented with non-trainable Hebbian learning-based associative memory. BID25 proposed a softmax layer that can improve learning of rare classes by interpolating between Hebbian updates and stochastic gradient descent (SGD) updates on the output layer using an arbitrarily engineered scheduling scheme. BID19 proposed differentiable plasticity, which uses SGD to optimize the plasticity of each synaptic connection composed of a slow weight and a plastic (fast) weight. Although this approach served to be a powerful new method for training neural networks, it was mainly demonstrated on RNNs for solving simple tasks. Overcoming Catastrophic Forgetting: This work leverages two biologically inspired strategies to overcome the catastrophic forgetting problem: 1) Task-specific Synaptic Consolidation -Protecting old knowledge by dynamically adjusting the synaptic strengths to consolidate and retain memories. 2) CLS Theory -A dual memory system where, structural knowledge is acquired through slow learning via the neocortex and rapid learning via the hippocampus. There have been several notable works inspired by taskspecific synaptic consolidation for overcoming catastrophic forgetting BID11; BID1. All of these approaches propose a method to estimate the importance of each parameter or synapse, Ω k, where the least plastic synapses can retain memories for a long time and the more plastic synapses are considered less important. The Ω k and network parameters θ k are updated online or after learning task T n. Therefore, when learning new task T n, a regularizer is added to the original loss function L n (θ), so that we dynamically adjust the plasticity w.r.t. 
Ω k and prevent any changes to the important parameters of previously learned tasks: DISPLAYFORM0 where θ n−1 k are the learned network parameters after training on the previous n − 1 tasks and λ is a hyperparameter for the regularizer to control the amount of forgetting. In Elastic Weight Consolidation (EWC), BID11 use the diagonal values of an approximated Fisher information matrix for Ω k, and it is computed offline after training on a task is completed. BID28 proposed an online variant of EWC to improve scalability by ensuring the computational cost of the regularization term does not grow with the number of tasks. proposed an online method called Synaptic Intelligence (SI) for computing the parameter importance where, Ω k is the cumulative change in individual synapses over the entire training trajectory on a given task. Memory Aware Synapses (MAS) from BID1 measures Ω k by the sensitivity of the learned function to a perturbation in the parameters and use the cumulative change in individual synapses on the squared L2-norm of the penultimate layer. There have been numerous approaches based on CLS principles involving pseudo-rehersal (; BID2 BID3, episodic replay BID15 BID14 and generative replay . However, in our work, we are primarily interested in neuroplasticity techniques inspired from CLS theory for representing memories. showed how each synaptic connection can be composed of a fixed weight where slow learning stores long-term knowledge and a fast weight for temporary associative memory. Recent research in this vein has included replacing soft attention mechanism with fast weights in RNNs BID4, the Hebbian Softmax layer BID25, augmenting the FC layer with a fast weights matrix BID21, differentiable plasticity BID19 and neuromodulated differentiable plasticity BID20. However, all of these methods were focused on rapid learning on simple tasks or meta-learning over a distribution of tasks. Furthermore, they did not examine learning a large number of new tasks while, alleviating catastrophic forgetting in continual learning. In our model, each synaptic connection in the softmax layer has two weights: 1) The slow weights, θ ∈ R m×d, where m is the number of units in the final hidden layer. 2) A Hebbian plastic component of the same cardinality as the slow weights, composed of the plasticity coefficient, α, and the Hebbian trace, Hebb. The α is a scaling parameter for adjusting the magnitude of the Hebb. Hebb accumulates the mean activations of the penultimate layer for each target label in the mini-batch {y 1:B} of size B which are denoted byh ∈ R 1×m (refer to Algorithm 1). Given the activation of each neuron in h at the pre-synaptic connection i, the unnormalized log probabilities z at the post-synaptic connection j can be more formally computed using Eq. 2. Then, the softmax function is applied on z to obtain the desired logitsŷ thus,ŷ = softmax(z). The η parameter in Eq. 3 is a "learning rate" that learns how quickly to acquire new experiences into the plastic component. The η parameter also acts as a decay term to prevent instability caused by a positive feedback loop in the Hebbian traces. DISPLAYFORM0 The network parameters α i,j, η and θ i,j are optimized by gradient descent as the model is trained sequentially on different tasks in the continual learning setup. Hebb is initialized to zero only at the start of learning the first task T 1 and is automatically updated based on Algorithm 1 in the forward pass during training. 
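The following sketch gives one plausible PyTorch reading of Eqs. 2-3 and Algorithm 1: each output connection combines a slow weight, a learned plasticity coefficient and a Hebbian trace holding the mean hidden activation of each class seen so far. The exact decay form and the choice not to backpropagate through the trace update are simplifying assumptions.

import torch
import torch.nn as nn

class DHPSoftmax(nn.Module):
    def __init__(self, hidden_dim, n_classes):
        super().__init__()
        self.theta = nn.Parameter(0.01 * torch.randn(hidden_dim, n_classes))  # slow weights
        self.alpha = nn.Parameter(0.01 * torch.randn(hidden_dim, n_classes))  # plasticity coefficients
        self.eta = nn.Parameter(torch.tensor(0.1))                            # Hebbian "learning rate" / decay
        self.register_buffer("hebb", torch.zeros(hidden_dim, n_classes))      # Hebbian traces

    def forward(self, h, y=None):
        if self.training and y is not None:
            with torch.no_grad():                         # trace update kept out of the graph for simplicity
                for c in y.unique():
                    h_bar = h[y == c].mean(dim=0)         # mean activation for the active class
                    self.hebb[:, c] = (1 - self.eta) * self.hebb[:, c] + self.eta * h_bar
        logits = h @ (self.theta + self.alpha * self.hebb)  # slow weights plus plastic component
        return logits                                        # feed to softmax / cross-entropy as usual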
Specifically, the Hebbian update for the active class c in y 1:B is computed on line 6. This Hebbian update BID8, where w i,j is the change in weight at connection i, j and a k i, a k j denote the activation levels of neurons i and j, respectively, for the k th input. Therefore, in our model, w =h the Hebbian weight update, a i = h the hidden activations of the last hidden layer, a j = y the active target class in y 1:B and N = s the number of inputs for the corresponding class in y 1:B (see Algorithm 1). Across the model's lifetime, we only update Hebb during training and during test time, we use the most recent Hebbian traces to make predictions. The plastic component learns rapidly and performs sparse parameter updates to quickly store memory traces for each recent experience without interference from other similar recent experiences. Furthermore, the hidden activations corresponding to the same active class are accumulated into one vectorh, thus forming a compressed episodic memory in the Hebb to reflect individual episodic memory traces. This method improves learning of rare classes and speeds up binding of class labels to deep representations of the data. Updated Loss: Following the existing work for overcoming catastrophic forgetting such as EWC, Online EWC, SI and MAS (see Eq. 1), we regularize the loss L n (θ, α, η) and update the synaptic importance parameters of the network in an online manner. We rewrite Eq. 1 to obtain Eq. 4 and show that the network parameters θ i,j are the weights of the connections between pre-and post-synaptic activity, as seen in Eq. 2. DISPLAYFORM1 We adapt these existing consolidation approaches to our model and only compute the synaptic importance parameters on the slow weights of the network. The plastic part of our model can alleviate catastrophic forgetting of learned classes by optimizing the plasticity of the synaptic connections. We tested our continual learning approach on the Permuted MNIST, Imbalanced Permuted MNIST and Split MNIST benchmarks. We evaluated the methods based on the average classification accuracy on all previously learned tasks. To establish a baseline for comparison of well-known synaptic consolidation methods, we trained neural networks with Online EWC, SI and MAS, respectively, on all tasks in a sequential manner. In the Permuted MNIST and Imbalanced Permuted benchmarks we trained a multi-layered perceptron (MLP) network on a sequence of 10 tasks using plain SGD. Detailed descriptions of the hyperparameters and training setups for all benchmarks can be found in Appendix A.Permuted MNIST: In this benchmark, all of the MNIST pixels are permuted differently for each task with a fixed random permutation. Although the output domain is constant, the input distribution changes between tasks thus, there exists a concept drift. Figure 1 shows the average test accuracy as new tasks are learned. The network with DHP Softmax alone showed significant improvement in its ability to alleviate catastrophic forgetting across all tasks compared to the baseline finetuned vanilla MLP network we refer to as Finetune in Figure 1. Then we compared the performance with and without DHP Softmax using the synaptic consolidation methods. We find our DHP Softmax with synaptic consolidation maintains a higher test accuracy after T 10 tasks than without DHP Softmax for all variants. Figure 1. The average test accuracy on a sequence of Permuted MNIST tasks Tn=1:10. The average test accuracy after T10 tasks is given in the legend. 
Error bars correspond to SE on 10 trials. This benchmark is identical to the Permuted MNIST benchmark but, now each task is an imbalanced distribution. The statistics of the class distribution in each task are presented in Appendix A.2, Table 1. Figure 2 shows the average test accuracy as new tasks are learned. We see that DHP Softmax achieves 80.85% after learning 10 tasks, thus providing significant improvement over the standard neural network baseline of 76.4%. The significance of the compressed episodic memory mechanism in the Hebbian traces is more apparent in this benchmark because the plastic component allows rare classes that are encountered infrequently to be remembered for a longer period of time. We find that DHP Softmax with MAS achieves 88.8%; outperforming all other methods and across all tasks. Split MNIST: A sequence of T n=1:5 tasks are generated by splitting the original MNIST training dataset into binary classification problems (0/1, 2/3, 4/5, 6/7, 8/9), making the output spaces disjoint between tasks. Similar to , we trained a multi-headed MLP network on a sequence of 5 tasks. We compute the cross entropy loss at the softmax output layer only for the digits present in the current task, T n. We observe that DHP Softmax provides a 4.7% improvement on test performance compared to a finetuned MLP network (Figure 3). Also, combining DHP Softmax with task-specific consolidation consistently improves performance across all tasks T n=1:5. Figure 3. The average test accuracy on a sequence of 5 binary classification problems (0/1, 2/3, 4/5, 6/7, 8/9) from the original MNIST dataset. The average test accuracy after learning T5 tasks is given in the legend. Error bars refer to the SE on 10 trials. We have shown that the problem of catastrophic forgetting in continual learning environments can be alleviated by adding compressed episodic memory in the softmax layer through DHP. DHP Softmax alone showed noticeable improvement across all benchmarks when compared to a neural network with a traditional softmax layer. We demonstrated the flexibility of our model where, in addition to DHP Softmax, we can regularize the slow weights using EWC, SI or MAS to improve a model's ability to alleviate catastrophic forgetting. The approach where we combine DHP Softmax and MAS consistently leads to overall superior compared to other baseline methods on several benchmarks. This gives a strong indication that Hebbian plasticity enables neural networks to learn continually and remember distant memories, thus reducing catastrophic forgetting when learning from sequential datasets in dynamic environments. For the Imbalanced Permuted MNIST experiments shown in Figure 2, the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 400 for Online EWC BID28, λ = 1.0 for SI and λ = 0.1 for MAS. In SI, the damping parameter, ξ, was set to 0.1. Similar to the Permuted MNIST benchmark, to find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using a task sequence determined by a single seed. Across all experiments, we maintained the the same random probabilities detemined by a single seed to artificially remove training samples from each class. The hyperparameters of the synaptic consolidation methods (i.e. Online EWC, SI and MAS) remain the same with and without DHP Softmax, and the plastic components are not regularized. 
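For reference, the generic consolidation penalty of Eq. 1/4 that Online EWC, SI and MAS all instantiate can be sketched as below, applied only to the slow weights as described; omegas and old_params are assumed to be dictionaries of per-parameter importances and of the parameters stored after the previous task.

import torch

def consolidation_penalty(model, omegas, old_params, lam):
    """lam * sum_k Omega_k * (theta_k - theta_k^{n-1})^2 over regularized parameters."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in omegas:                     # only the regularized slow weights
            penalty = penalty + (omegas[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty

# total_loss = task_loss + consolidation_penalty(net, omegas, old_params, lam=400.0)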
| Hebbian plastic weights can behave as compressed episodic memory storage in neural networks, improving their ability to alleviate catastrophic forgetting in continual learning. | 1,039 | scitldr |
While real brain networks exhibit functional modularity, we investigate whether functional mod- ularity also exists in Deep Neural Networks (DNN) trained through back-propagation. Under the hypothesis that DNN are also organized in task-specific modules, in this paper we seek to dissect a hidden layer into disjoint groups of task-specific hidden neurons with the help of relatively well- studied neuron attribution methods. By saying task-specific, we mean the hidden neurons in the same group are functionally related for predicting a set of similar data samples, i.e. samples with similar feature patterns. We argue that such groups of neurons which we call Functional Modules can serve as the basic functional unit in DNN. We propose a preliminary method to identify Functional Modules via bi- clustering attribution scores of hidden neurons. We find that first, unsurprisingly, the functional neurons are highly sparse, i.e., only a small sub- set of neurons are important for predicting a small subset of data samples and, while we do not use any label supervision, samples corresponding to the same group (bicluster) show surprisingly coherent feature patterns. We also show that these Functional Modules perform a critical role in discriminating data samples through ablation experiment. While real brain networks exhibit functional modularity, we investigate whether functional modularity also exists in Deep Neural Networks (DNN) trained through back-propagation. Under the hypothesis that DNN are also organized in task-specific modules, in this paper we seek to dissect a hidden layer into disjoint groups of task-specific hidden neurons with the help of relatively wellstudied neuron attribution methods. By saying task-specific, we mean the hidden neurons in the same group are functionally related for predicting a set of similar data samples, i.e. samples with similar feature patterns. We argue that such groups of neurons which we call Functional Modules can serve as the basic functional unit in DNN. We propose a preliminary method to identify Functional Modules via biclustering attribution scores of hidden neurons. We find that first, unsurprisingly, the functional neurons are highly sparse, i.e., only a small subset of neurons are important for predicting a small subset of data samples and, while we do not use any label supervision, samples corresponding to the same group (bicluster) show surprisingly coherent feature patterns. We also show that these Functional Modules perform a critical role in discriminating data samples through ablation experiment. Also, these modules learn rich representations and are able to detect certain feature patterns demonstrated in a visual classification example. Modularity is generally encountered across a broad range of networks, including real brain neuronal networks, which means that the entire population of neurons can be parcellated into internally dense and externally sparse groups called modules or communities. And since that, researchers naturally think artificial neural networks also exhibits modularity, for example, Hinton et al. put forward neuron co-adaption that some hidden neurons in the same layer co-adapt together as a module for prediction. Co-adaption is only discussed in thought experiments supported by some biological inspirations and we do not know how to identify such co-adaptive neurons. However this phenomenon itself inspires Dropout [3; 7] which is arguably the most robust regularization technique for DNN. 
Identifying modularity in DNN remains difficult as community detection methods are generally not directly applicable on densely-connected acyclic graphs. We turn to the relatively well-studied neuron attribution methods and biclustering algorithm for help. Biclustering. is a data mining technique that simultaneously clusters the rows and columns of a matrix, and is especially popular in bioinformatics. We favor biclustering instead the standard clustering methods, which only cluster the rows or the columns of a matrix, for two reasons: 1) Practically, standard clustering methods can easily fail because of the curse of dimensionality. 2) From the hypothesis we really do not expect two neurons in the same neuron group to be similar for every stimulus (input data sample). We only expect neurons in the same group are functionally related for predicting a subset of similar data samples. Hopefully, this subset of samples should share similar feature patterns. That being said, there are many biclustering algorithms for different purposes as it is a very active research field. We choose to use spectral co-clustering because it produces biclusters of strong connections with no overlaps, which is the simplest case for us to start with. Attribution methods for hidden neurons. Neuron attribution, assigning a importance score to a neuron, is easier to do for artificial neural networks than real neural nets. Basically, the goal of neuron attribution is to assign a score a to a hidden neuron n that represents how much important this neuron is for predicting a sample x to a class y. In a sense, the attribution score measures the per-sample importance of a hidden neuron. While for real neurons this can be computed as, for example, Pearson correlation to some task, for DNN we can utilize the internal weights and the feedforward structure of DNN to compute attribution scores. There is a variety of neuron attribution methods. To name a few, Shrikumar et al. proposed DeepLIFT, originally designed to assign scores to input nodes and can be generalized to assign a score of importance to a specific hidden neuron. Sundararajan et al. proposed integrated gradients for attributing input neurons and later generalized it to total conductance [6; 1] for attribution hidden neurons. Leino et al. proposed an influence-based attribution method. Such methods have been demonstrated to be capable of identifying important hidden neurons that are relevant to a specific prediction on a class y given a data sample x. However, these methods do not consider that hidden neurons may be functionally related. At a high-level, our approach for finding Functional Modules in hidden neurons goes in two phases: 1) we first construct a neuron-sample matrix where each entry is a attribution score of a hidden neuron (row) for a input sample (column) and then 2) based on this matrix, we apply spectral coclustering, a biclustering algorithm to simultaneously group neurons and samples that have consistent high attribution values. Given a dataset X = {x} N of data samples, a pre-trained DNN f parameterized by θ such that y = f θ (x) is the prediction of sample x, and an attribution function a which can be any of [1; 4; 5; 6], the attribution score a(n, x, y) measures how much a hidden neuron n contributes to the prediction of sample x into class y. Construct the neuron-sample attribution matrix. 
We first specify the neurons to be from a given layer l and construct the neuron-sample matrix M N l * N data where N = |l| is the number of neurons in that layer and N data = |X| be the size of dataset X = x 1, x 2,..., x N. Each entry in the matrix represents e ij = e(n i, x j), where e is a embedding function measuring the contribution of neuron n i for predicting data sample x j. We compute e(n i, x j) = e ij = a(n i, x j,ŷ) − 1 |C| c a(n i, x j, c) whereŷ = f θ (x j) is the predicted class of x j and a can be any of the attritbution function among [1; 4; 5; 6]. We choose to use DeepLIFT score, and in experiments we find these attribution methods do not differ much. Spectral Co-clustering. Given the constructed matrix, we use spectral co-clustering to find biclusters with values higher than those in the corresponding other rows and columns. Given a predefined number k of biclusters, spectral coclustering algorithm treats the input data matrix as a bipartite graph and approximates the normalized cut of this graph to find heavy subgraphs. In the ed biclusters, each row (neuron) and each column (sample) belongs to exactly one bicluster with no overlaps. We first inspect DNNs for visual classification on MNIST. We fetch a pre-trained model that is used and evaluated in the DeepLIFT paper. The architecture is a 4 layer feedforward rectified model where the first two layers are convolutional layer followed by two fully-connected layer. (We also test on other DNN models with rectified neurons trained such as multilayer perceptron which has no convolutional layer, and the are very similar.) We use 10000 data samples from the test set and choose l to be the second convolutional layer of shape. The total number of hidden neurons in l is 1600. The choice of layer is rather arbitray, we choose it to be the second convolutional layer because we want the rows and columns to be roughly of the same magnitude, so that the neuron-sample matrix would have a good visualization, and that the biclustering algorithm can work properly. We compute the DeepLIFT score and construct the neuron-sample matrix. We set k = 10 in spectral co-clustering as is the number of classes for MNIST. Different attribution methods do not differ much in the qualitive observations, so we only present the comparision of different attribution methods on the ablation study in Figure 5b. The neuron-sample matrix is highly sparse. The constructed neuron-sample matrix is very sparse, as only a few entries have attribution score above zero (see Figure 1a). Rearranging the display of the matrix by bicluster label, we can observe a checkerboard pattern in Figure 1b, indicating that certain groups of hidden neurons are indeed highly-correlated for predicting certain data samples. (a) raw neuron-sample matrix (b) neuron-sample matrix rearranged with bicluster labels Figure 1: Visualization of the neuron-sample matrix (Better see in digital version) Each row corresponds to one of the 1600 neurons in the second convolutional layer (5 * 5 * 64). Each column represents one of 10000 data samples. We further report the distribution of attribution scores in Figures 2a and 2b, where the x-axis represents the attribute score and the y-axis represents the frequency. The overal distribution is centered around zero but inside one bicluster, the mass is concentrated on the positive side. 
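The two-phase procedure can be sketched as follows: build the neuron-by-sample matrix of centered attribution scores, then bicluster it with spectral co-clustering. The attribution(neuron, x, c) callable is a placeholder for any per-neuron attribution method, such as a DeepLIFT-style score, and is not implemented here.

import numpy as np
from sklearn.cluster import SpectralCoclustering

def build_matrix(attribution, neurons, data, predict, n_classes):
    """Entry (i, j) = a(n_i, x_j, y_hat) minus the mean attribution over all classes."""
    M = np.zeros((len(neurons), len(data)))
    for j, x in enumerate(data):
        y_hat = predict(x)
        for i, n in enumerate(neurons):
            scores = [attribution(n, x, c) for c in range(n_classes)]
            M[i, j] = attribution(n, x, y_hat) - np.mean(scores)
    return M

# M = build_matrix(deeplift_score, neurons, test_images, model_predict, n_classes=10)
# cc = SpectralCoclustering(n_clusters=10, random_state=0).fit(M)
# neuron_groups, sample_groups = cc.row_labels_, cc.column_labels_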
This indicates that the spectral coclustering algorithm does find good subgraphs in which a subset of hidden neurons typically has higher attribution scores on a subset of samples. Samples corresponding to the same bicluster show coherent feature patterns. We find that the samples in one bicluster, i.e., samples corresponding to the same Functional Module, show coherent and similar feature patterns. We pick a bicluster, randomly gather samples from it, and find that these samples show very similar patterns (Figures 3a and 3b) and belong to the same class. We also show the distribution of ground-truth labels across the k = 10 biclusters (Section 4). Note that despite not using any label supervision, our approach achieves very good unsupervised discrimination. Spatial relationship of the neurons in a module: is this modularity a result of convolution? Surprisingly, the neurons in a bicluster do not seem to have any spatial relationships, such as being in the same channel or centered at some position. We also test on multilayer perceptrons, where there are no convolutional layers, and the above phenomenon can still be observed. Ablation Study: Functional Modules are critical for discriminating samples. We check the performance of the DNN by removing hidden neurons by bicluster. In this experiment, we split the 10000 images X further into two sets, where the first 5000 samples serve as a validation set processed by our approach and the remaining 5000 samples are held out for testing. We compare the accuracy when hidden neurons in layer l are gradually ablated on the test set against several baselines for ablating neurons: ablation of random neurons, greedy ablation of top-important neurons, and ablation of neurons by module (bicluster) (ours). Ablation of a neuron is implemented by setting the activation value of this neuron to zero, as is the convention in previous studies [1; 6]. Note that we set k larger in order to get a smoother curve. As shown in Figure 5a, ablation by module achieves the most significant accuracy drop compared with greedy ablation of top-k neurons (all use the DeepLIFT score) and random ablation. This demonstrates that the Functional Modules found by spectral coclustering do play an important role in discriminating data samples. We also compare the effect of ablation with different attribution methods in Figure 5b and observe no significant difference among the different choices of attribution methods. We develop an approach to parcellate a hidden layer into functionally related groups which we call Functional Modules, by applying spectral coclustering on the attribution scores of hidden neurons. We find that the Functional Modules identify functionally related neurons in a layer and play an important role in discriminating data samples. One major limitation of this short paper is that we have not tested on more general cases, such as different layers, different activation functions, and different models trained on more diverse datasets. In order to gain generalizable insights, such a massive investigation is necessary. | We develop an approach to parcellate a hidden layer in DNN into functionally related groups, by applying spectral coclustering on the attribution scores of hidden neurons. | 1,040 | scitldr |
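A sketch of the ablation experiment described above: neurons are silenced one module (bicluster) at a time by zeroing their activations, and held-out accuracy is re-measured after each removal. forward_with_mask is an assumed helper that runs the network while multiplying layer l's activations by the given mask.

import numpy as np

def ablate_by_module(forward_with_mask, x_test, y_test, neuron_groups, order):
    """neuron_groups: array of bicluster labels per neuron; order: module ids to remove."""
    mask = np.ones(len(neuron_groups), dtype=np.float32)
    accuracies = []
    for g in order:
        mask[neuron_groups == g] = 0.0                       # silence every neuron in module g
        pred = forward_with_mask(x_test, mask).argmax(axis=1)
        accuracies.append((pred == y_test).mean())           # accuracy after cumulative ablation
    return accuracies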
Power-efficient CNN Domain Specific Accelerator (CNN-DSA) chips are currently available for wide use in mobile devices. These chips are mainly used in computer vision applications. However, the recent work of Super Characters method for text classification and sentiment analysis tasks using two-dimensional CNN models has also achieved state-of-the-art through the method of transfer learning from vision to text. In this paper, we implemented the text classification and sentiment analysis applications on mobile devices using CNN-DSA chips. Compact network representations using one-bit and three-bits precision for coefficients and five-bits for activations are used in the CNN-DSA chip with power consumption less than 300mW. For edge devices under memory and compute constraints, the network is further compressed by approximating the external Fully Connected (FC) layers within the CNN-DSA chip. At the workshop, we have two system demonstrations for NLP tasks. The first demo classifies the input English Wikipedia sentence into one of the 14 classes. The second demo classifies the Chinese online-shopping review into positive or negative. Power-efficient CNN Domain Specific Accelerator (CNN-DSA) chips are currently available for wide use. Sun et al. BID5;a) designed a two-dimensional CNN-DSA accelerator which achieved a power consumption of less than 300mW and an ultra power-efficiency of 9.3TOPS/Watt. All the processing is in internal memory instead of external DRAM. Demos on mobile and embedded systems show its applications in real-world implemen-Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute. Figure 1. Efficient On-Device Natural Language Processing system demonstration. The CNN-DSA chip is connected to Raspberry Pi through the USB interface. Keyboard sends the typing text input to Raspberry Pi through USB. A monitor is connected to Raspberry Pi through HDMI for display. On the monitor, it shows the introduction for the demo (zoom in to see details). There are two demos. The first demo classifies the input English Wikipedia sentence into one of the 14 ontologies. The second demo classifies the Chinese online-shopping review into positive or negative.tations. The 28nm CNN-DSA accelerator attains a 140fps for 224x224 RGB image inputs at an accuracy comparable to that of the VGG BID2.For Natural Language Processing tasks, RNN and LSTM models BID8 BID1 are widely used, which are different network architectures from the twodimensional CNN. However, the recent work of Super Characters method BID4 BID7 using twodimensional word embedding achieved state-of-the-art in text classification and sentiment analysis tasks, showcasing the promise of this new approach. The Super Characters method is a two-step method. In the first step, the characters of the input text are drawn onto a blank image, so that an image of the text is generated with each of its characters embedded by the pixel values in the two-dimensional space. The ing image is called the Super Characters image. In the second step, the generated Super Characters image is fed into a twodimensional CNN models for classification. The two- dimensional CNN models are trained for the text classification task through the method of Transfer Learning, which finetunes the pretrained models on large image dataset, e.g. ImageNet BID0, with the labeled Super Characters images for the text classsification task. 
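A minimal sketch of the first step of the Super Characters method, drawing the characters of the input text onto a blank square image that can then be fed to a two-dimensional CNN. The grid size, image size and font path are illustrative choices rather than the exact settings of the cited work.

from PIL import Image, ImageDraw, ImageFont

def super_characters(text, image_size=224, grid=8,
                     font_path="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"):
    cell = image_size // grid                          # side length of one character cell
    img = Image.new("L", (image_size, image_size), color=255)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, cell)
    for i, ch in enumerate(text[: grid * grid]):       # one character per cell, row by row
        row, col = divmod(i, grid)
        draw.text((col * cell, row * cell), ch, fill=0, font=font)
    return img

# super_characters("The tiger is the largest species among the Felidae").save("doc.png")

This is only a memory-write operation, which is why the pre-processing cost on the mobile device stays negligible.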
In this paper, we implemented NLP applications on mobile devices using the Super Characters method on a CNN-DSA chip as shown in Figure 1. It takes arbitrary text input from keyboard connecting to a mobile device (e.g. Raspberry Pi). And then the text is pre-processed into a Super Characters image and sent to the CNN-DSA chip to classify. After post-processing at the mobile device, the final will be displayed on the monitor. As shown in FIG0, the keyboard text input is preprocessed by the Raspberry Pi (or other mobile/embedded devices) to convert into a Super Characters image. This pre-processing is only a memory-write operation, which requires negligible computation and memory resources. The Super Characters method works well for Asian languages which has characters in squared shapes, such as Chinese, Japanese, and Korean. These glyphs are easier for CNN models to recognize than Latin languages such as English, which is alphabets-based in a rectangular shape and may have to break the words at line-changing. To improve the performance for English, a method of Squared English Word (SEW) is proposed to cast English word in a squared shape as a glyph BID6. FIG1 shows an example of this method. Basically, each word takes the same size of a square space lxl. Words with longer alphabets will have smaller space for each alphabet. Within the lxl space, the word with N alphabets will have each of its alpha in the square area of {l/ceil[sqrt(N)]} 2, where sqrt stands for square root, and ceil[.] is rounding to the top. The CNN-DSA chip receives the Super Characters im- The two-dimensional embedding in this image corresponds to the text of "The tiger is the largest species among the Felidae and classified in the genus Panthera. It is most recognizable for its dark vertical stripes on reddish-orange fur with a lighter underside." age through the USB connection to the mobile device. It outputs the classification scores for the 14 classes in the Wikipedia text classification demo. The classification scores mean the probabilities for classification but before softmax. The mobile device only calculate the argmax to display final classification on the monitor, which is also negligible computations. The CNN-DSA chip completes the complex CNN computations with low power less than 300mw. The CNN-DSA chip is a fast and low-power coprocessor. However, it does not directly support inner-product operations of the FC layers. It only supports 3x3 convolution, Relu, and max pooling. If the FC layers are executed on the mobile device, there will be increased requirements for memory, computation, and storage for the FC coefficients. And it will also spend more interface time with the CNN-DSA chip for transmitting the activation map from the chip, and also cost relative high power consumption for the mobile device to execute the inner-product operations. In order to address this problem, we proposed the GnetFC model, which approximates the FC layers using multiple layers of 3x3 convolutions. This is done by adding a sixth major layer with three sub-layers as shown in Figure 4. The Figure 4. Model architecture. The input is of size 224x224 with multiple channels, and the output is of size 14x1x1. The architecture within blue dashed square is the model loaded into the CNN-DSA chip.model is similar to VGG architecture except that it has six major layers, and the channels in the fifth major layer is reduced to 256 from the original 512 in order to save memory for the sixth layer due to the limitation of the on-chip memory. 
The sub-layers in each major layer has the same color. Each sub-layer name is followed by the detailed information in brackets, indicating the number of channels, bits-precision, and padding. The first five major layers has zero paddings at the image edge by one-pixel. But the sixth major layer has no padding for the three sublayers, which reduces the activation map from 7x7 through 5x5 and 3x3 and finally to 1x1. The output is of size 14x1x1, which is equal to an array of 14 scalars. The final classification can be simply obtained by an argmax operation on the 14 scalars. This reduces the system memory footprint on the mobile device and accelerate the inference speed. The memory of the CNN-DSA chip is built within the accelerator, so it is very power-efficient without wasting the energy for moving the bits from external DDR into internal SRAM. Thus the on-chip memory is very limited, which supports maximum 9MB for coefficients and activation map. As shown in Figure 4, the first two major layers uses 3-bits precision and the other four major layers uses 1-bit precision. All activations are presented by 5-bits in order to save on-chip data memory. The representation mechanism inside the accelerator supports up to four times compression with the 1-bit precision, and two times compression with the 3-bits precision. Due to the high compression rate, the convolutional layers in VGG16 with 58.9MB coefficients in floating precision could be compressed into only about 5.5MB within the chip. This is a more than 10x compression of the convolution layers. This compact representation has been proved to be successful on ImageNet BID0 ) standard training and testing data and achieved the same level of accuracy as floating point models with 71% Top1 accuracy. The compact CNN representation without accuracy loss is because of the redundancy in the original network. To efficiently use the on-chip memory, the model coefficients from the third major layers are only using 1-bit precision. For the first two major layers, 3-bits model coefficients are used as fine-grained filters from the original input image. And the cost on memory is only a quarter for the first major layer and a half for the second major layer if using the same 3-bits precision. The total model size is 2.8MB, which is more than 200x compression from the original VGG model with FC layers. It completes all the convolution and FC processing within the CNN-DSA chip for the classification task with little accuracy drop. The GnetFC model on the CNN-DSA chip on the Wikipedia demo obtains an accuracy of 97.4%, while the number for the original VGG model is 97.6%. The accuracy drop is mainly brought by the approximation in GnetFC model, and also partially because of the bit-precision compression. The accuracy drop is very little, but the savings on power consumption and increasing on the inference speed is significant. It consumes less than 300mw on the CNN-DSA chip, and the power for pre/postprocessing is negligible. The CNN-DSA chip processing time is 15ms, and the pre-processing time on mobile device is about 6ms. The time for post-processing is negligible, so the total text classification time is 21ms. It can process nearly 50 sentences in one second, which satisfies more than real-time requirement for NLP applications. We implemented efficient on-device NLP applications on a 300mw CNN-DSA chip by employing the twodimensional embedding used in the Super Characters method. 
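The sixth major layer that lets GnetFC avoid running FC layers on the host can be sketched as three unpadded 3x3 convolutions that shrink the 7x7 activation map to 5x5, 3x3 and finally 1x1 with 14 output channels, so that a simple argmax over the 14x1x1 output gives the class. The intermediate channel widths below are assumptions; the text only fixes the 256-channel input and the 14-way output.

import torch
import torch.nn as nn

gnetfc_head = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=0), nn.ReLU(),   # 7x7 -> 5x5
    nn.Conv2d(256, 128, kernel_size=3, padding=0), nn.ReLU(),   # 5x5 -> 3x3
    nn.Conv2d(128, 14, kernel_size=3, padding=0),                # 3x3 -> 1x1
)

scores = gnetfc_head(torch.randn(1, 256, 7, 7))    # shape (1, 14, 1, 1)
label = scores.flatten(1).argmax(dim=1)             # the only post-processing left on the device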
The two-dimensional embedding converts text into images, which are ready to be fed into the CNN-DSA chip for two-dimensional CNN computation. The demonstration system minimizes the power consumption of deep neural networks for text classification, with less than 0.2% accuracy drop from the original VGG model. Potential use cases for this demo system include intention recognition in a local-processing smart speaker or chatbot. | Deploy text classification and sentiment analysis applications for English and Chinese on a 300mW CNN accelerator chip for on-device application scenarios. | 1,041 | scitldr |
Comparing the inferences of diverse candidate models is an essential part of model checking and escaping local optima. To enable efficient comparison, we introduce an amortized variational inference framework that can perform fast and reliable posterior estimation across models of the same architecture. Our Any Parameter Encoder (APE) extends the encoder neural network common in amortized inference to take both a data feature vector and a model parameter vector as input. APE thus reduces posterior inference across unseen data and models to a single forward pass. In experiments comparing candidate topic models for synthetic data and product reviews, our Any Parameter Encoder yields comparable posteriors to more expensive methods in far less time, especially when the encoder architecture is designed in model-aware fashion. We consider the problem of approximate Bayesian inference for latent variable models, such as topic models , embedding models , and dynamical systems models . An important step in using such probabilistic models to extract insight from large datasets is model checking and comparison. While many types of comparison are possible , we focus on a problem that we call within-model comparison. Given several candidate parameter vectors θ 1, θ 2,..., all from the same space Θ ⊆ R D, our goal is to efficiently determine which parameter θ m is best at explaining a given dataset of N examples {x n} N n=1. Multiple ways exist to rank candidate parameters, including performance on heldout data or human-in-the-loop inspection. A principled choice is to select the parameter that maximizes the data's marginal likelihood: N n=1 log p(x n |θ m). For our latent variable models of interest, computing this likelihood requires marginalizing over a hidden variable h n: p(x n |θ m) = p(x n |h n, θ m)p(h n |θ m)dh n. This integral is challenging even for a single example n and model m. One promising solution is variational inference (VI). Using VI, we can estimate an approximate posterior q(h n |x n, θ m) over hidden variables. Approximate posteriors q can be used to compute lower bounds on marginal likelihood, and can also be helpful for human inspection of model insights and uncertainties. However, it is expensive to estimate a separate q at each example n and model m. In this paper, we develop new VI tools 1 that enable rapid-yet-effective within-model comparisons for large datasets. The need for within-model comparison (and our methods) is present in many practical modeling tasks. Here we discuss two possible scenarios, with some details specialized to our intended topic modeling applications . First, in human-in-the-loop scenarios, a domain expert may inspect some estimated parameter θ and then suggest an alternative parameter θ that improves interpretability. In topic modeling, this may mean removing "intruder words" to make topics more coherent . Second, in automated parameter learning scenarios, many algorithms propose data-driven transformations of the current solution θ into a new candidate θ, in order to escape the local optima common in non-convex optimization objectives for latent variable models , Examples include split-merge proposal moves or evolutionary algorithms . Across both these scenarios, new candidates θ arise repeatedly over time, and estimating approximate posteriors for each is essential to assess fitness yet expensive to perform for large datasets. Our contribution is the Any Parameter Encoder (APE), which amortizes posterior inference across models θ m and data x n. 
We are inspired by efforts to scale a single model to large datasets by using an encoder neural network (NN) to amortize posterior inference across data examples . Our key idea is that to additionally generalize across models, we feed model parameter vector θ m and data feature vector x n as input to the encoder. APE is applicable to any model with continuous hidden variables for which amortized inference is possible via the reparameterization trick. We consider a general family of probabilistic models that use parameter vector θ m to generate a dataset of N continuous hidden variables h n and observations x n via a factorized distribution: N n=1 p(h n |θ m)p(x n |h n, θ m). Our goal is fast-yet-accurate estimation of each example's local posterior p(h n |x n, θ m) for a range of model parameters θ 1, θ 2,... ∈ Θ. Topic Models. As a sample application, we focus on the Logistic Normal topic model from. Given known vocabulary size V, we observe N documents represented by count vectors x n (vector of size V counting the types of all T n words in document n). We model each x n as a mixture of K possible topics. Let hidden variable h nk be the probability that a word in document n is produced by topic k. Thus, h n ∈ ∆ K is a non-negative vector of size K that sums to one. We model h n with a Logistic Normal prior, with mean and covariance set to be similar to a sparse Dirichlet(0.01) prior for interpretability. Given h n, we model the observed word-count vector for document n with a Multinomial likelihood:. This is a document-specific mixture of topics, where each topic k is defined by a word probability vector θ k ∈ ∆ V. Our parameter of interest is the topic-word probability vector θ = {θ k} K k=1. VI Approximations for the Single Example Posterior. While the true posterior p(h n |x n, θ m) is usually intractable, we use variational inference (VI) to approximate it. We choose a simpler density q(h n |λ n) and optimize parameter λ n to minimize KL divergence from the true posterior. Inference reduces to the well-known evidence lower bound (ELBO) optimization problem given example x n and model θ m: Inference: Given several parameters of interest, we can perform model comparison by solving the above optimization problem separately for each θ m. However, this is expensive. Solving Eq. for a model θ m requires dozens of iterative updates of gradient ascent for each example. VI Amortized across Data Examples. and have sped up inference by setting per-example variational parameters λ n to the output of an encoder neural network (NN) instead of an iterative optimization procedure. The "Standard" encoder, with weights parameters φ, takes input data x n and produces variational parameters λ NN φ (x n). Inference for example n reduces to one fast forward pass: λ n ← λ NN φ (x n). While encoders often produce λ n with worse ELBO scores than optimal solutions to Eq. , they are preferred for their speed. However, for our model comparison goals the standard encoder is expensive, because for each parameter θ m of interest we must train separate specialized NN weights φ m. Contribution: VI Amortized over Model Parameters. Our goal is to enable rapid estimation of posteriors p(h n |x n, θ m) for many possible parameters θ 1, θ 2,... ∈ Θ (not all known in advance). We thus consider a family of approximating densities q that explicitly conditions on both a given data vector x n and the query parameter vector θ m. 
Again, we use a neural network to transform these inputs into the variational parameters, λ n ← λ NN φ (x n, θ m). We call this the Any Parameter Encoder. Unlike the earlier Standard Encoder, which trains φ for one specific θ, our approach can directly generalize to many θ. Encoder Architecture Design for Topic Models. Given the difficulty of posterior inference even for a single parameter θ, developing an effective Any Parameter Encoder requires careful selection of a NN architecture that can transform its two inputs, data x n and model θ, to produce accurate approximate posteriors. Following previous work , we use multi-layer perceptrons. We further suggest that an architecture designed to capture structure in the generative model should improve further. Our baseline "naive" architecture defines the input of the neural net as simply the concatenation of vector x n and vector θ. While simple, we suggest this will be difficult to train effectively given the size of the input ((K + 1)V for the topic model) and lack of inductive bias to prioritize the model's needed interactions between entries of x n and θ. As an improvement, we consider a model-aware encoder architecture. Our design is motivated by a view of posterior inference as roughly moment-matching when data is plentiful. For our topic model, each document's Multinomial likelihood has a mean vector equal to k h nk θ k = θh n, writing θ as a V × K matrix. This mean vector should be (roughly) equal to the observed word-frequency vector 1 Tn x n. If µ n is the mean of q(h n) and used as a plug-in estimate for h n, then we want to satisfy 1 Tn x n ≈ θµ n. Solving for µ n via least squares, we get µ n ≈ 1 Tn (θ T θ) −1 θ T x n, which we might simplify to a non-linear function of θ T x n. Thus, we suggest using the following model-aware encoder architecture: This model-aware architecture has encoder input dimension K, which is much smaller than (K + 1)V for the naive approach (and thus hopefully easier to train). Furthermore, this should provide desirable inductive bias to produce useful mean and covariance estimates. We emphasize that this design is specialized to the topic model, and further work is needed to develop model-aware architecture design strategies for general latent variable models. Training the Encoder. Training our encoder parameters φ requires an available set of M parameter vectors {θ m} M m=1 of interest. We choose these to be representative of the subset of Θ we wish to generalize well to. We then maximize ELBO across all M models: We use stochastic gradient ascent to solve for φ, using the reparameterization trick to estimate gradients for a minibatch of examples and models at each step. We can interpret this objective as an expectation over samples θ m from a target distribution over parameters. We compare our proposed Any Parameter Encoder (APE) to several other inference methods on two topic modeling tasks. For all VI methods, we choose q to be a Logistic Normal parameterized by a mean and a diagonal covariance. The appendix has complete details. APE. We consider both naive and model-aware encoder architectures described above. Both use MLPs with 2 layers with 100 units per hidden layer, selected via grid search. Baselines. We consider three baselines implemented in Pyro and PyTorch . First, Variational Inference (VI) uses gradient ascent to optimize Eq.. Second, we use Standard encoder VAEs for topic models . 
This encoder is specialized to a single parameter θ, with architecture size selected via grid search (similar to APE). Finally, we run Pyro's off-the-shelf implementation of Hamiltonian Monte Carlo with the No U-Turn Sampler (NUTS), though we expect specialized implementations to be more performant. Synthetic Data Experiments. We consider a V = 100 vocabulary dataset inspired by the "toy bars" of prior work. Using K = 20 true topics θ *, we sample a 500-document dataset. [Figure 1 caption: Left: ELBO vs. elapsed time for VI on a test set of 300 document-θ m combinations on synthetic data. We show a randomly-initialized run (black) and a warm-started run (green) initialized via our Any Parameter Encoder (red "X"). The randomly-initialized VI would require over 400 milliseconds (vertical line) to reach the quality our APE achieved in <20 ms. Right (Product Reviews): Kernel Density Estimation of the absolute difference between encoder ELBO and VI ELBO across different topics. APE results (red) are closer to VI (i.e. less error).] We consider M = 50,000 possible model parameters {θ m} M m=1, sampled from a symmetric, sparse Dirichlet prior over the vocabulary. Typical θ m look unlike the true topics θ *, as shown in the supplement, so inference must handle diversity well. We train our APE on 25 million possible document-θ pairs for two epochs, then evaluate on unseen document-θ pairs drawn from the same generative process. Product Reviews. We model 6,343 text documents of consumer product reviews. We use the V = 3000 most frequent vocabulary terms and K = 30 topics. We generate training topics in the same way as in the synthetic data experiments, and we evaluate on test topics found via Gibbs sampling with several separately initialized runs. Results: Encoder Design. Results comparing naive and model-aware encoder architectures are in Table 1. Our proposed model-aware input layer yields better heldout likelihoods than the naive alternative, which we suggest is due to its more effective inductive bias. Results: Quality-vs-Time Tradeoff. Comparing across Table 1 and Fig. 1, we see that while the Standard Encoder understandably fails to generalize across models, our Any Parameter Encoder achieves quality close to non-amortized VI and NUTS with a speed-up factor of over 100-1000x. APE can also provide a useful warm start initialization to VI. Results: Agreement in model comparison. Motivated by the need to rapidly assess proposal moves that escape local optima, we gather 10 different models and measure whether each encoder's ranking of a pair θ, θ′ on the test set agrees with VI's ranking. Table 1 shows that APE agrees with VI in 75% of 45 cases in the real data scenario, while the Standard Encoder agrees just 29% of the time. This suggests APE may be trustworthy for accept/reject decisions, though further work is needed to improve this number. Across two datasets and many model parameters, our Any Parameter Encoder produces posterior approximations that are nearly as good as expensive VI, but over 100x faster. Future opportunities include simultaneous training of parameters and encoders, and handling Bayesian nonparametric models where θ changes size during training. For encoder methods, the parameters {µ n, log σ 2 n} are the output of a shared encoder NN. For VI, these are free parameters of the optimization problem. Variational Inference (VI). We perform inference using gradient ascent to maximize the objective in Eq., learning a per-example mean and variance variational parameter.
We run gradient updates until our moving average loss (window of 10 steps) has improved by less than 0.001% of its previous value. For our VI runs from random initializations, we use the Adam optimizer with an initial learning rate of.01, decaying the rate by 50% every 5000 steps. For our warm-started runs, we use an initial learning rate of 0.0005. In practice, we ran VI multiple times with different learning rate parameters and took the best one. Table 1 only reports the time to run the best setting, not the total time which includes various restarts. Standard encoder. We use a standard encoder that closely matches the VAE for topic models in. The only architectural difference is the addition of a temperature parameter on the µ n vector before applying the softmax to ensure the means lie on the simplex. We found that the additional parameter sped up training by allowing the peakiness of the posterior to be directly tuned by a single parameter. We use a feedforward encoder with two hidden layers, each 100 units. We chose the architecture via hyperparameter sweeps. The total number of trainable parameters in the model is 24,721 on the synthetic data and 316,781 on the real data; this is compared to 16,721 and 19,781 parameters for model-aware APE. NUTS. For the Hamiltonian Monte Carlo (HMC) with the No U-Turn Sampler (NUTS) , we use a step size of 1 adapted during the warmup phase using Dual Averaging scheme. Upon inspection, we find that the method's slightly lower posterior predictive log likelihood relative to VI is due to its wider posteriors. We also find that the Pyro implementation is (understandably) quite slow and consequently warm-start the NUTS sampler using VI to encourage rapid mixing. We are aware that there exist faster, more specialized implementations, but we decided to keep our tooling consistent for scientific purposes. We generate a set of different models {θ 0, θ 1, ...θ M} from a symmetric Dirichlet prior with α = 0.1. We train our Any-Parameter Encoder in random batches of document-topic combinations. With 500 documents and 50,000 topics (i.e. D = 500, M = 50, 000), we have 25 million combinations in total. The topics used to generate the synthetic data represent "toy bars", inspired by . See Figure 2 for a visualization. We use this same toy bars-biased prior to generate all our topics in the holdout set, though the order of the topics is random. See For training both APE and the Standard encoder on the synthetic data, we use Adam with an exponential decay learning schedule, a starting learning rate of 0.01, and a decay rate of.8 every 50,000 steps. We find that this schedule tends to be fairly robust; these hyperparameters were used for both APE and the Standard encoder on both the synthetic and real data. We chose our initial learning rate via a learning rate finder posed in , and we train for 2 epochs with a batch size of 100. We train our standard VAE encoder on a single model with parameters θ drawn randomly from a symmetric Dirichlet prior with α = 0.1. To train the standard encoder, we pass in our model of interest to the decoder, holding its weights fixed as we perform stochastic backpropagation to update the encoder weights. The same thing happens for APE, though the same topics are additionally included as part of the input into the encoder. 
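To make the encoder architecture and training objective concrete, here is a hypothetical PyTorch sketch of the model-aware Any Parameter Encoder and one reparameterised ELBO step. Layer sizes follow the grid-searched values quoted above; the ReLU activations and the standard-normal KL term are simplifying assumptions (the paper uses a Logistic Normal prior whose moments approximate a sparse Dirichlet):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnyParameterEncoderSketch(nn.Module):
    """Model-aware APE sketch: the encoder input is theta^T x_n / T_n."""
    def __init__(self, n_topics, hidden=100):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_topics, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, n_topics)
        self.logvar_head = nn.Linear(hidden, n_topics)

    def forward(self, x, theta):
        # x: (B, V) word counts; theta: (V, K) topic-word probabilities
        pooled = (x @ theta) / x.sum(dim=-1, keepdim=True)  # model-aware input, shape (B, K)
        z = self.body(pooled)
        return self.mu_head(z), self.logvar_head(z)

def elbo_step(encoder, x, theta):
    mu, logvar = encoder(x, theta)
    std = torch.exp(0.5 * logvar)
    eta = mu + std * torch.randn_like(std)        # reparameterisation trick
    h = F.softmax(eta, dim=-1)                    # Logistic Normal sample on the simplex
    recon = (x * torch.log(h @ theta.t() + 1e-10)).sum(-1)   # Multinomial log-likelihood
    # KL to a standard normal prior on eta (simplification; the paper matches
    # the prior moments to a sparse Dirichlet(0.01)).
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    return (recon - kl).mean()
```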
We develop VAEs where the encoder takes a model parameter vector as additional input, so we can do rapid inference for many models | We develop VAEs where the encoder takes a model parameter vector as input, so we can do rapid inference for many models | 1,042 | scitldr |
The ability to transfer knowledge to novel environments and tasks is a sensible desiderata for general learning agents. Despite the apparent promises, transfer in RL is still an open and little exploited research area. In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample efficient. Our main contribution is Secret, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture. Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and exclusively modifies the reward function. Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm. To some, intelligence is measured as the capability of transferring knowledge to unprecedented situations. While the notion of intellect itself is hard to define, the ability to reuse learned information is a desirable trait for learning agents. The coffee test , presented as a way to assess general intelligence, suggests the task of making coffee in a completely unfamiliar kitchen. It requires a combination of advanced features (planning, control and exploration) that would make the task very difficult if not out of scope for the current state-of-the-art Reinforcement Learning (RL) agents to learn. On the other hand, it is solved trivially by humans, who exploit the universally invariant structure of coffee-making: one needs to fetch a mug, find coffee, power the coffee machine, add water and launch the brewing process by pushing the adequate buttons. Thus, to solve the coffee test, transfer learning appears necessary. Were we to possess a random kitchen simulator and a lot of compute, current transfer methods would still fall short of consistently reusing structural information about the task, hence also falling short of efficient adaptation. Credit assignment, which in RL refers to measuring the individual contribution of actions to future rewards, is by definition about understanding the structure of the task. By structure, we mean the relations between elements of the states, actions and environment rewards. In this work, we investigate what credit assignment can bring to transfer. Encouraged by recent successes in transfer based on supervised methods, we propose to learn to assign credit through a separate supervised problem and transfer credit assignment capabilities to new environments. By doing so, we aim at recycling structural information about the underlying task. To this end, we introduce SECRET (SElf-attentional CREdit assignment for Transfer), a transferable credit assignment mechanism consisting of a self-attentive sequence-to-sequence model whose role is to reconstruct the sequence of rewards from a trajectory of agent-environment interactions. It assigns credit for future reward proportionally to the magnitude of attention paid to past state-action pairs. SECRET can be used to incorporate structural knowledge in the reward function without modifying optimal behavior, as we show in various generalization and transfer scenarios that preserve the structure of the task. 
Existing backward-view credit assignment methods require adding auxiliary terms to the loss function used to train agents, which can have detrimental effects on the learning process (de), and rely on an external memory, which hinders the generality of their approach. SECRET does neither. Also, as we show in Sec. 3.1, the architecture we consider for SECRET has interesting properties for credit assignment. We elaborate on our novelty with respect to prior work in Sec. 4. We insist on the fact that the focus of our work is on transfer and that it is not our point to compete on credit assignment capabilities. We would like to emphasize several aspects about the generality of SECRET: 1) our method does not require any modification to the RL algorithm used to solve the tasks considered, 2) it does not require any modification to the agent architecture either, and 3) it does not alter the set of optimal policies we wish to attain. Moreover, our method for credit assignment is offline, and as a result, it can use interaction data collected by any means (expert demonstrations, replay memories, backup agent trajectories, ...). We believe that this feature is of importance for real-world use cases where a high number of online interactions is unrealistic but datasets of interactions exist as a byproduct of experimentation. Background We place ourselves in the classical Markov Decision Process (MDP) formalism. An MDP is a tuple (S, A, γ, R, P) where S is a state space, A is an action space, γ ∈ [0, 1) is a discount factor, and R: S × A × S → R is a bounded reward function that maps state-action pairs and the resulting state to the expected reward for taking such an action in such a state. Note that we choose a form that includes the resulting state in the definition of the reward function over the typical R: S × A → R. This is for consistency with objects defined later on. Finally, P: S × A → ∆ S is a Markovian transition kernel that maps state-action pairs to a probability distribution over resulting states, ∆ S denoting the simplex over S. An RL agent interacts with an MDP at a given timestep t by choosing an action a t ∈ A and receiving a resulting state s t+1 ∼ P (·|s t, a t) and a reward r t = R(s t, a t, s t+1) from the environment. A trajectory τ = (s i, a i, r i) i=1,...,T is a set of state-action pairs and resulting rewards accumulated in an episode. A subtrajectory is a portion of trajectory that starts at the beginning of the episode. The performance of an agent is evaluated by its expected discounted cumulative reward E[∑ ∞ t=0 γ t r t]. In a partially observable MDP (POMDP), the agent receives at each timestep t an observation o t ∼ O(·|s t) that contains partial information about the underlying state of the environment. 2 SECRET: SELF-ATTENTIONAL CREDIT ASSIGNMENT FOR TRANSFER SECRET uses previously collected trajectories from environments in a source distribution. A self-attentive sequence model is trained to predict the final reward in subtrajectories from the sequence of observation-action pairs. The distribution of attention weights from correctly predicted nonzero rewards is viewed as credit assignment. In target environments, the model gets applied to a small set of trajectories. We use the credit assigned to build a denser and more informative reward function that reflects the structure of the (PO)MDP.
The case where the target distribution is identical to the source distribution (in which we use held-out environments to assess transfer) will be referred to as generalization or in-domain transfer, as opposed to out-of-domain transfer where the source and the target distributions differ. Credit assignment as offline reward prediction We learn to assign credit through an offline reward prediction task, based on saved trajectories of agent-environment interactions. We create a sequence-to-sequence (seq2seq) model ) that takes as input the sequence of observation-action pairs and has to reconstruct the corresponding sequence of environment rewards. Being offline, the reward prediction task is learned separately from the RL task, and the reward prediction model does not share representations with the agent. This way, the representations learned for credit assignment do not affect or get mixed with the representations learned for control. Operating offline brings several advantages: one can directly interact with the replay memory of agents and even use expert demonstrations or arbitrary saved transitions as a source of supervision, which could be useful in settings where on-policy interactions are costly, such as robotics. We equip our seq2seq model with an attention mechanism and view the attention weights of the reward reconstruction task as our primary source of assigned credit. The motivation to do so is that the seq2seq model looks into the past to find predictive signal in order to reconstruct the reward, so observation-action pairs it attends to should be those which reduce its uncertainty about the future, in other words those that explain future reward and should be credited. On the use of observations In MDPs, environment states follow the Markov property: they summarize the history of previous interactions and are sufficient to predict the future. As such, predictive models are highly biased towards focusing on the current sequence element, which hinders credit assignment. Under that consideration, when dealing with MDPs, we turn states into observations by applying transformations that hide a certain amount of information from states and break the Markov assumption. For instance, in gridworlds with visual states, we crop the image and get a player centered image with a given window size. Doing so encourages the model to look into the past to find predictive signal, and allow us to track the relative importance given to each element to reconstruct the credit assigned. In POMDPs, this might be unnecessary depending on the amount of information shared between observations and true states. Self-attention for credit assignment Unlike other seq2seq architectures, self-attentive models like Transformers have direct computational paths between pairs of sequence elements, due to their representations that depend on projections of all sequence elements. This feature is key to long-term credit assignment. As an illustration, consider an RL task where the terminal reward depends only on the first observation, which is drawn randomly. Predicting the reward correctly requires to remember the first observation, which would be very challenging for a recurrent architecture whose memory goes through O(n) transformations, n being the size of the sequence. On the other hand, a self-attentive model directly accesses the value of the initial observation, which makes credit assignment easier. 
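As an illustration of the state-to-observation transformation described above, a possible player-centred crop for visual grid states (the zero-padding at the borders and the default window size are assumptions; the paper only specifies a player-centred window of a given size):

```python
import numpy as np

def crop_observation(state, agent_pos, window=5):
    """Return a player-centred crop of a (H, W, C) grid state.

    Cells outside the grid are zero-padded; window is assumed to be odd.
    """
    r = window // 2
    padded = np.pad(state, ((r, r), (r, r), (0, 0)), mode="constant")
    y, x = agent_pos
    return padded[y:y + window, x:x + window, :]
```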
Reward prediction architecture We use a Transformer decoder with a single self-attention layer and a single attention head. The model input is a sequence of observation-action couples (o t, a t) t=0,...,T. Each observation goes through a series of convolutional layers (for visual inputs) followed by a series of feed-forward layers. Each action representation, a one-hot vector in the discrete action case, is concatenated to the learned observation embedding. Those representations of dimensionality d i are combined with positional encoding (PE), fed to a self-attention layer and then to a position-wise feedforward layer that outputs logits for reward prediction classes. PE encodes the relative positions of sequence elements, see Appendix A for details. Self-attention is an attention mechanism with parameterization (W k, W q, W v), each matrix belonging to R d i ×d k, that puts sequence elements in relation by computing non-linear similarity scores for all pairs of elements in the sequence. To do so, each sequence element is mapped to a query vector that is matched against keys and values obtained from the previous elements. To be consistent with the goal of assigning credit, the model should not be able to peek into the future. Thus, we restrict the computational window of each sequence element to the information stored in representations of the previous elements in the sequence and its own by applying a causal mask M c to the result of the pairwise similarity computations, assigning a value of 0 to masked elements after the softmax. Let X = (x t) t=0,...,T ∈ R T ×d i denote the input sequence in a matrix form, x t being the result of internal computations of the model on its t-th input. In the same fashion, we note Z = (z t) t=0,...,T ∈ R T ×d k the sequence resulting from the application of self-attention. We then have Z = softmax(M c ⊙ (QK T)/√d k − C(1 − M c))V, where Q = XW q ∈ R T ×d k stores queries, K = XW k ∈ R T ×d k keys, and V = XW v ∈ R T ×d k values as linear projections of the input; d k stands for the dimension of the key vectors, M c ∈ {0, 1} T ×T is a binary matrix that acts as a causal mask (a lower triangular matrix), ⊙ is the Hadamard product and C is a large constant (10^9 in practice). Notably, the resulting observation-action representation can be viewed as a linear combination of the values of previous elements: z t = α t V. The vector α t contains the normalized attention weights for the prediction at timestep t and sums to 1. Since observations contain only a portion of their initial information, the fact that the model succeeds in the prediction task indicates that it reconstructed the missing information from its past. Therefore, attention weights themselves can be viewed as a form of credit assignment, and will be used as such in what follows. While performing regression on the rewards could also be an option, our experiments found that regression tends to converge to poor local optima. Consequently, we predict the sign of the experienced rewards: q(r) = sign(r), with sign(0) = 0. We chose the sign as the classification target for its invariance to the scale of the rewards. We use a weighted sequential cross-entropy as the loss function over the class-wise model predictions f θ,c, writing τ (o, a) the subtrajectory of τ ending with the observation-action couple (o, a) to translate the effect of the binary mask. We have found class weighting to be very important to keep the variance of prediction metrics across a variety of datasets of sampled trajectories low.
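A hypothetical PyTorch sketch of the single-head causal self-attention block described above, returning both the class logits and the attention weights α that serve as credit (the feed-forward embedding of observation-action pairs is assumed to be computed upstream, and layer sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttentionRewardHead(nn.Module):
    """Single-head causal self-attention over observation-action embeddings,
    followed by a position-wise classifier over reward-sign classes."""
    def __init__(self, d_in, d_k, n_classes=3, big_c=1e9):
        super().__init__()
        self.wq = nn.Linear(d_in, d_k, bias=False)
        self.wk = nn.Linear(d_in, d_k, bias=False)
        self.wv = nn.Linear(d_in, d_k, bias=False)
        self.classifier = nn.Linear(d_k, n_classes)
        self.big_c = big_c

    def forward(self, x):
        # x: (B, T, d_in) embeddings of (observation, action) couples
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5       # (B, T, T)
        mask = torch.tril(torch.ones(x.shape[1], x.shape[1], device=x.device))
        scores = scores * mask - self.big_c * (1.0 - mask)          # causal mask
        alpha = F.softmax(scores, dim=-1)                           # credit weights
        z = alpha @ v
        return self.classifier(z), alpha
```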
Generating trajectories To train SECRET, we generate a dataset of trajectories which contains a certain proportion of successful trajectories. If source environments are simple enough so that the task has a sufficient chance to be solved by acting randomly, we use a random policy to generate trajectories. For more complex distributions of environments, we use an RL agent (either trained or in the learning phase) to generate trajectories. We think purely exploratory methods could have advantages over using an RL agent and leave the study of their use for future work. In this subsection, we explain how we use credit assignment to make learning more sample-efficient. Reward shaping In RL, agents often deal with sparse rewards that make the learning process slow. Reward shaping is a technique that aims at densifying the reward so as to improve sample efficiency. It defines a class of reward functions that can be added to the original environment rewards without modifying the set of optimal policies. For a given MDP M = (S, A, γ, R, P), we define a new MDP M′ = (S, A, γ, R′, P) where R′ = R + F is the shaped reward and F the shaping. The reward shaping theorem states that if there exists a function φ such that F: (s, a, s′) → γφ(s′) − φ(s), then M and M′ admit the same set of optimal policies. φ is called a potential function. With domain knowledge, one can use reward shaping to design more informative reward functions without encouraging unwanted behavior. Nevertheless, shaping rewards requires good priors for the task and the potential function must often be engineered manually. Since SECRET weighs the contribution of observation-action pairs to future reward, we use it to derive a shaped reward that corresponds to the sum of future reward reachable from the underlying state, weighted by the attention calculated by the model. We explain the process in the following. Computing the potential function We define the redistributed return R ← τ of a trajectory τ as: where α i←j is the attention weight on (o i, a i) when predicting the reward r j and s i are environment states. Indeed, SECRET uses observations but we keep the states they are constructed from to compute the potential. In POMDPs, we recover an approximate state from the observation, either manually or through inference. In this work, we use a state constructed manually; see Appendix A for details. To compute the potential function, we generate a set D of trajectories as described in Sec. 2.1. Since we operate on trajectories, the same state-action pair can appear twice in a sequence and benefit from a different amount of attention, which is why we must include the first summation. In the reward shaping formalism, the potential function φ depends only on the state. To stay within its bounds, we define φ as the forwarded redistributed return. It is computed as the following estimate: Note that in practice we only redistribute individual rewards that were successfully predicted. Also, some states are generally missing from the data distribution induced by the set of trajectories used. For those states, we set the potential to 0, which results in a −φ(s) additional reward when transitioning to those states from the state s. As a result, it gives agents an incentive to stay on the support of the data distribution unless they encounter high-reward states. Because it relies on reward shaping, SECRET conserves optimal policies. We empirically find that agents learn faster with the resulting augmented reward function. A schematic sketch of this computation is given below.
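The sketch below is one possible concrete reading of the procedure above; the data layout (one attention row per step), the per-state averaging and the default discount value are assumptions, and filtering to correctly predicted nonzero rewards is assumed to happen upstream:

```python
from collections import defaultdict

def attentional_potential(trajectories, gamma=0.99):
    """Estimate a potential phi over visited states from attention-weighted rewards.

    trajectories: list of lists of (state, action, reward, alpha_row), where
    alpha_row[j] is the attention paid to step i when predicting the reward at
    step j (alpha_{i<-j} in the text).  States never seen keep potential 0.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for traj in trajectories:
        for i, (s_i, _a_i, _r_i, alpha_row) in enumerate(traj):
            credited = sum(alpha_row[j] * traj[j][2] for j in range(i, len(traj)))
            totals[s_i] += credited
            counts[s_i] += 1
    return {s: totals[s] / counts[s] for s in totals}

def shaped_reward(phi, s, s_next, env_reward, gamma=0.99):
    """Potential-based shaping F = gamma * phi(s') - phi(s) added to the environment reward."""
    return env_reward + gamma * phi.get(s_next, 0.0) - phi.get(s, 0.0)
```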
A way to look at it is that we densify the learning signal and bias the agent towards behaviors that encourage future rewards. We start by conveying intuition as to why SECRET should transfer to new environments. In fields other than RL, seq2seq models similar to that of SECRET have shown outstanding transfer capabilities (; ;), even in low-resource settings . In transfer scenarios that preserve the structure of the MDP, the optimal finegrained control sequence can vary drastically from one environment to another. This is why credit assignment is an interesting alternative to the transfer of weights: given an underlying environment state and a specific action, their contribution to future rewards is not fundamentally altered. Such scenarios include specific changes in the state (or observation) distribution and changes to the reward function that preserve the optimal policies. These also include changes in the dynamics of the environment, and though it affects credit assignment, we show later on that SECRET adapts surprisingly well to such scenarios. Another point that motivates the use of our method for transfer is the fact that we keep the representations learned for credit assignment separate from the control representations learned by agents. Indeed, de showed that RL representations were not optimal for transfer. Transfer setting We argue that transfer should be considered effective when agents learn to solve target tasks efficiently because efficiency gains in the target domain compound while the cost of training in the source is fixed. Hence, we use the Total Target Time Scenario metric to assess transfer. Nevertheless, collecting trajectories in the source domain can be costly. We report the number of trajectories used to train SECRET in each scenario. As before, SECRET is trained on episodes of interaction sampled from the source distribution. In each target environment, we sample multiple trajectories (see the following section for details about the policies used to generate the trajectories). We then compute the attentional potential function by calculating an estimate of the expected redistributed reward, as described in Sec. 2.2. In this section, we aim to answer the following questions: can SECRET improve the sample efficiency of learning for RL agents? Does it generalize and/or transfer? How does it compare to transfer baselines? Is the credit assigned by SECRET interpretable? The of complementary experiments are presented in Appendix B. The Triggers environment We introduce Triggers, an interpretable and customizable environment that we use to assess the quality of the credit inferred with our method. In Triggers, the agent is located in a two-dimensional bounded grid. Its actions consist solely of moving of one cell in one of the cardinal directions. Any action that would lead the agent outside the boundaries of the environment (as indicated by the walls in the figure) is ignored but still counted as an action taken by the agent. The goal of the agent (represented as a yellow square) is to activate all the switches (red squares) and then collect all the prizes (pink squares). Prizes are the only source of reward and give a −1 penalty unless all switches are activated, in which case they give a +1 bonus. Both prizes and switches disappear once collected. The main feature of Triggers is that every positive reward is conditional to the presence of a known subset of states in the agent history, and thus credit assignment can be assessed in a rigorous way. 
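For reference, the reward rule of Triggers can be summarised in a few lines (a minimal sketch; movement dynamics, walls and rendering are omitted, and the function and variable names are illustrative):

```python
def triggers_step_reward(cell_content, n_switches_remaining):
    """Reward rule of Triggers: switches give no reward; a prize gives +1 only
    once every switch has been activated, and -1 otherwise."""
    if cell_content == "prize":
        return 1.0 if n_switches_remaining == 0 else -1.0
    return 0.0
```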
In that setup, agents benefit from understanding the link between keys and doors. We hypothesized that our credit assignment mechanism might identify this relation and reward the agent for picking up keys. To assess this, we modified the setting so that picking up keys does not provide rewards. Additionally, the visual input is richer than the one from Triggers environments and the average number of steps per episode is extended. Finally, agents move and rotate across the room. Since picking up a key does not require actually seeing the key, it can be hard to know whether a key was taken and to predict further door-opening rewards. Trajectories are generated with a trained agent. Agents used We use tabular Q-learning for in-domain and out-of-domain experiments in Triggers, except for the transfer to environments with modified dynamics where we use Deep Q-Networks (DQN). We use Proximal Policy Optimization (PPO) agents for in-domain experiments in DMLab. We provide an analysis of the credit inferred by SECRET. The analysis is qualitative and quantitative, since we rely on both visual assessment and binary detection metrics. The process of evaluating credit assignment in Triggers goes as follows: we first generate trajectories and train the model. We then compare the credit assigned by SECRET on trajectories sampled from held-out environments to a ground-truth credit assignment. We build that ground truth by exploiting the exact knowledge of where triggers are. It is a vector that is 0 almost everywhere and 1 on state-action couples that precede the activation of a trigger. By doing so, we explicitly target the state-action couples whose resulting state is causally linked to the reward experienced later. We find the redistribution to be near optimal in simple instances of Triggers (see Fig. 3 -left): attention concentrates quasi-exclusively on state-action pairs that enable the collection of future reward. This is confirmed by precision-recall analysis: over the distribution of scenarios considered, a simple binarization heuristic over attention values yields an average precision of 0.96 for an average recall of 0.94. More information on the heuristic is in Appendix A. In keys_doors_puzzle, we adopt the same set of experiments. Since the agent can move backward and spin, in some scenarios it takes a key that is not in its line of sight. In addition, the granularity of the state space is such that off-by-one prediction errors are common but do not hinder the credit mechanism: attributing credit to the state-action couple preceding the collection of a key or the previous one leads to imperceptible changes in the resulting shaped rewards. Fig. 3 -right shows similar results as for Triggers. Appendix A also provides a heatmap for this task that shows that attention concentrates around the keys.
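The binarization heuristic mentioned above (detailed in Appendix A, where the threshold α = 0.2 is given) amounts to a thresholding step followed by standard binary detection metrics; a minimal sketch:

```python
import numpy as np

def credit_precision_recall(attention, ground_truth, alpha=0.2):
    """Binarise attention weights with threshold alpha and compare against the
    binary ground-truth credit vector (1 on steps that precede a trigger)."""
    pred = (np.asarray(attention) > alpha).astype(int)
    truth = np.asarray(ground_truth).astype(int)
    tp = int(np.sum(pred * truth))
    precision = tp / max(int(pred.sum()), 1)
    recall = tp / max(int(truth.sum()), 1)
    return precision, recall
```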
In-domain transfer For in-domain transfer, we transfer the representations for credit assignment to held-out instances of the same distribution over MDPs. For the Triggers environment, the RL agents are tabular Q-learners. For the DMLab environment, we use PPO agents and modify the original task: we do not reward the agent for collecting keys but only for opening doors, so that the attention can focus on the key positions. Note that this makes the task harder. As we display in Fig. 4, agents learn visibly faster to solve tasks when benefitting from SECRET in both environments. Out-of-domain transfer For out-of-domain transfer we use the Triggers environment and consider two scenarios that are hard for standard agents: transfer to bigger environments (see Fig. 5) and transfer to environments with inverted dynamics (see Fig. 6). In the bigger setting, direct weight transfer cannot be used since the visual input has bigger spatial dimensions. On the other hand, SECRET can be used since the transformation we apply to turn states into observations conserves the visual input dimensions. In the inverted dynamics setting, the effects of the agent's actions are inverted, which makes the task hard for transfer methods. In that setting, we compare the transferability of our mechanism to that of the representations learned by an agent equipped with deep function approximation. [Figure caption fragment: The effect of the shaping is exclusively beneficial, while transferring weights from the source task can be detrimental to the learning process.] To this end we use DQN agents and either train them from scratch in the target environments or start from the set of weights learned in the source environments (WT in Fig. 6). In both settings, shaping the rewards assists the agent in learning to solve the task. We display some results in Fig. 5 and Fig. 6. When transferring to bigger environments, the agent benefits very early on from the shaped reward, while also reaching better asymptotic performance.
Credit assignment Previous works investigated the role of attention mechanisms for credit assignment. propose SAB, a sparse attention mechanism used to derive a modified backpropagation algorithm. We draw inspiration from SAB but operate in the RL context without sparsity assumptions about the attention weights. RUDDER proposes to equip RL agents with an online method for credit assignment based on return decomposition. Our method operates offline and decomposes individual rewards. RUDDER requires a specific exploration scheme, an additional episodic replay buffer, a compute-heavy contribution analysis method and the addition of several auxiliary losses to the objective the RL agent optimizes. In comparison, SECRET is a lightweight method that does not deal with exploration. Crucially, the focus of is on online credit assignment while ours is on transfer. provide an agent with an external memory and the unsupervised task of reconstructing its inputs (both states and rewards). The agent uses memory reads as a way to identify related elements in sequences, and uses those to transfer the value of states providing delayed rewards to the bootstrapping target of contributing elements. In contrast, SECRET makes use of a non-autoregressive architecture, does not reconstruct states, makes use of reward shaping instead of modifying the update function and most importantly does not rely on an external memory. Recall Traces ) use a generative model that goes backward from high-reward states and samples state-action pairs that could have led to that state. SECRET also works backward from high-reward states but creates links to previous states from existing trajectories instead of sampling them. In this work, we investigated the role credit assignment could play in transfer learning and came up with SECRET, a novel transfer learning method that takes advantage of the relational properties of self-attention and transfers credit assignment instead of policy weights. We showed that SECRET led to improved sample efficiency in generalization and transfer scenarios in non-trivial gridworlds and a more complex 3D navigational task. To the best of our knowledge, this is the first line of work in the exciting direction of credit assignment for transfer. We think it would be worth exploring how SECRET could be incorporated into online reinforcement learning methods and leave this for future work. In this section we provide additional details about our experimental setup and the hyperparameters we use. In Triggers, episodes stop after 50 timesteps in 8x8 grids and 100 timesteps in 12x12 grids. Those values were chosen large enough such that the policies used to train the reward predictor have a chance to gather informative rewards. In DMLab, episodes stop after 900 timesteps, as defined by the standard settings. We use the same set of hyperparameters in all our experiments with few variation. In Triggers experiments, we use 128 units per dense layer, 32 convolutional filters and a single convolutional layer to process partial states and actions. We apply dropout at several places in the model as a regularizer. We use a dropout rate of 0.1 after dense layers, a dropout rate of 0.2 in the self-attention mechanism, and a dropout rate of 0.2 in the normalization blocks of the Transformer architecture. Since Transformers create representations via pooling, they need additional information so as to take into account the relative positions of sequence elements. 
Hence, we use the same positional encoding scheme as in , that is we add sine and cosine signals of varying frequencies to the sequence of input embeddings X emb before the self-attention layer: In these equations, t refers to the timestep in the sequence, i to the dimension of the embedding considered, and ν has a constant value of 10000. In DMLab experiments, we use two convolutional layers, 16 filters for each, and otherwise identical hyperparameters. Typically, we set the class weights in the loss function to w = w(−1) = 0.499, w = 0.02. Fig. 7 gives an overview of the whole architecture. In Sec. 3.1, we compare the attention vectors we get as outputs from the reward prediction model to ideal credit assignment with binary metrics. The ground truth we use is a binary vector of the size of the attention vector. Its values are 0 everywhere and 1 for timesteps that correspond to the activation of a Triggers. To do so, we introduce a simple heuristic to binarize the attention scalars: we consider all values above a threshold α to correspond to events to be credited. Then, we can measure precision and recall as in a binary classification paradigm. The precision and recall reported are the average precision and recall over 4 scenarios: Triggers with a 8x8 grid, 1 trigger and 1 reward; Triggers with a 8x8 grid, 1 trigger and 2 rewards; Triggers with a 8x8 grid, 2 triggers and 2 rewards; and Triggers with a 8x8 grid, 3 triggers and 1 reward. In each scenario, we train the model over a set of 40000 trajectories, each of which is drawn from a randomly sampled maze. Then, we apply it on 5000 trajectories from held-out environments and collect the attention weights corresponding to predictions on timesteps where the agent experiences positive reward. We use a fixed α of 0.2. We collect 40000 trajectories sampled from random policies to train the prediction reward model in all distributions of environments. In experiments involving tabular Q-learning we use online Q-learning with a learning rate of 0.1 and a constant greediness factor also equal to 0.1. For the out-of-domain transfer experiment with modified dynamics, we use a smaller version of the DQN architecture in. The first convolutional layer has 8 filters, a 3x3 kernel size and a stride of 2. The second and the third convolutional layers have both 16 filters, a 3x3 kernel size and a stride of 1. Those are completed by a feed-forward layer with 64 units followed by another feedforward layer with as many units as the number of available actions. The greediness factor is decayed linearly from 1 to 0.01 over 250000 steps in the environment at train time and has a constant value of 0.001 at test time. We use RMSProp as an optimizer with a base learning rate of 0.00025. We update the target network every 2000 steps and initially fill the replay buffer with 5000 transitions sampled following a random policy. The replay buffer has a maximum size of 1000000. We provide additional details about this setup: we train SECRET using 10000 trajectories sampled from a distribution of mazes that are generated randomly. These trajectories are sampled using an agent trained over the same distribution. We do so to increase the proportion of trajectories where rewards are experienced. Indeed, we found that using random policies yielded very few of these. Once the model is trained, we use it to compute the attentional potential function over a fixed maze. 
1000 trajectories are sampled on the fixed maze using the same policy as the one that generated the trajectories used to train the reward prediction model. Since consecutive frames can be very similar, we consider a positive reward prediction to be correct (and thus use the corresponding attention weights when estimating the potential) if it happens within 5 frames of a reward actually experienced in the environment. We then compare the performance of agents trained with the original reward function to those trained with the shaped reward. An important point is that we use the knowledge of the position and the keys the agent possesses to build the state used to compute the potential function. This information is not given to the agent. We create an approximate state as such: it is the concatenation of the discretized position and the identifier of the key possessed. The discretized position is the result of the Euclidean division by a cell size integer c s and is necessary since the DMLab position is continuous. c s is fixed and has the value of 50 in our experiments. We acknowledge that relying on a manually constructed state limits the generality of our approach, but we are confident that this limitation can be addressed in future work by using an estimate of the true state. All agents mentioned in that section are PPO learners with a learning rate of 0.00019, an entropy coefficient of 0.0011, 12 actors, and a discount factor γ = 0.99. They use generalized advantage estimation. In this section, we provide the results of additional experiments that are useful to understand several aspects of SECRET. We study the influence of the size of the window in the transformation used to turn states into observations in Triggers. The goal of this experiment is notably to study the effect of the degree of partial observability on the assigned credit, and also the effect of transferring a possibly badly assigned credit to an agent, through reward shaping. We consider the following window types: 3x3, 5x5 and 7x7 (odd numbers because the window is centered on the agent), as well as the full state. We place ourselves in the in-domain setting and evaluate SECRET on held-out environments from the same distribution as the source. Environments are 8x8 mazes with 3 triggers and 1 prize. We display reward prediction and credit assignment metrics for each window size in Table 1, and the average discounted return of tabular Q-learning agents for each window size in Fig. 9a. Notice that the considered reward prediction accuracy (Table 1) is weighted, such that all classes have the same importance. In this setting, we can observe that bigger window sizes are detrimental to the quality of the credit assigned, even if they help to predict reward (Table 1). This was to be expected: the more "observable" the sequence of observations used to train the reward predictor, the easier the reward prediction (in the full state case, it can be predicted solely from the current state-action couple), but also the less likely the assigned credit will be centered on the trigger. However, even with a low credit precision, the shaped reward still accelerates the learning process (Fig. 9a). We stated that class weighting was important to keep the variance of prediction metrics across a variety of datasets of sampled trajectories low, the classes (sign of the reward) being quite imbalanced. Here, we study the influence of the class weights considered in the loss of the reward prediction model in Triggers.
We consider the following class weights for the class corresponding to zero rewards: w(0) ∈ {1, 0.1, 0.01, 0.001}. The other class weights are fixed and have the value w(1) = w(−1) = 1. We place ourselves in the in-domain setting and evaluate SECRET on held-out environments from the same distribution as the source. Environments are 8x8 mazes with 3 triggers and 1 prize. We display reward prediction and credit assignment metrics for each class weight in Table 2, and the average discounted return of tabular Q-learning agents for each class weight in Fig. 9b. In that setting, putting heavier misclassification penalties on under-represented classes (−1 and 1) resulted in improved credit and RL performance. We study the influence of the amount of data used by SECRET in a Triggers task. The objective is to assess how data-demanding the proposed approach is, and the effect of a lack of data on the transfer. There are two types of data: the data from the source domain (used to train its reward prediction component), and the data from the target domain (used to build the potential function). We consider the following numbers of trajectories to train the self-attentive reward predictor: 500, 1000, 5000, 10000, 25000 and 50000. We place ourselves in the in-domain setting and evaluate SECRET on held-out environments from the same distribution as the source. Environments are 8x8 mazes with 3 triggers and 1 prize. We display reward prediction and credit assignment metrics for each number of trajectories in Table 3, and the average discounted return of tabular Q-learning agents for each number of trajectories in Fig. 9c. We observe that increasing the size of the dataset improves the results, both for the efficiency of the reward predictor (Table 3) and for the transfer (Fig. 9c). Having too little data can slow down learning compared to a vanilla agent (Fig. 9c), but it does not prevent the agent from learning to solve the task.
Table 3: reward prediction and credit assignment metrics as a function of the number of source episodes.
Number of source episodes | Reward prediction accuracy | Credit precision | Credit recall
500 | 0.53 ± 0.11 | 0.23 ± 0.37 | 0.09 ± 0.2
1000 | 0.65 ± 0.07 | 0.49 ± 0.33 | 0.52 ± 0.42
5000 | 0.79 ± 0.08 | 0.9 ± 0.30 | 0.9 ± 0.30
10000 | 0.84 ± 0.07 | 0.9 ± 0.30 | 0.9 ± 0.30
25000 | 0.88 ± 0.09 | 0.9 ± 0.30 | 0.9 ± 0.30
50000 | 0.83 ± 0.11 | 0.8 ± 0.4 | 0.8 ± 0.4
We consider the following numbers of trajectories sampled from the target environment to build the potential function: 100, 500, 1000, 2000, 5000, 10000. We place ourselves in the in-domain setting and evaluate SECRET on held-out environments from the same distribution as the source. Environments are 8x8 mazes with 3 triggers and 1 prize. We display the average discounted return of tabular Q-learning agents for each number of trajectories in the source domain in Fig. 9d. Here again, we observe that the number of episodes in the target distribution is an important parameter for SECRET, but it has less impact than the number of episodes in the source domain. In the main paper, we have only shown the distribution of attention for an in-domain scenario. Here, following the same protocol as in Sec. 3.1, we measure the distribution of attention weights in the two out-of-domain scenarios considered in our experiments: bigger mazes and inverted dynamics. The results are reported in Fig. 10. We observe a similar distribution of attention, peaked on the triggers. | Secret is a transfer method for RL based on the transfer of credit assignment. | 1,043 | scitldr
Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data. In this paper, we introduce two generic Variational Inference frameworks for generative models of Knowledge Graphs: the Latent Fact Model and the Latent Information Model. While traditional variational methods derive an analytical approximation for the intractable distribution over the latent variables, here we construct an inference network conditioned on the symbolic representation of entities and relation types in the Knowledge Graph, to provide the variational distributions. The new framework can create models able to discover underlying probabilistic semantics for the symbolic representation by utilising parameterisable distributions which permit training by back-propagation in the context of neural variational inference, resulting in a highly-scalable method. Under a Bernoulli sampling framework, we provide an alternative justification for commonly used techniques in large-scale stochastic variational inference, which drastically reduces training time at the cost of an additional approximation to the variational lower bound. The generative frameworks are flexible enough to allow training under any prior distribution that permits a re-parametrisation trick, as well as under any scoring function that permits maximum likelihood estimation of the parameters. Experiments display the potential and efficiency of this framework by improving upon multiple benchmarks with Gaussian prior representations. Code is publicly available on GitHub. In many fields, including physics and biology, being able to represent uncertainty is of crucial importance BID18. For instance, when link prediction in Knowledge Graphs is used for driving expensive pharmaceutical experiments, it would be beneficial to know the confidence of a model in its predictions. However, a significant shortcoming of current neural link prediction models BID13 BID38 -and for the vast majority of neural representation learning approaches -is their inability to express a notion of uncertainty. Furthermore, Knowledge Graphs can be very large and web-scale BID14 and often suffer from incompleteness and sparsity BID14. In a generative probabilistic model, we could leverage the variance in model parameters and predictions for finding which facts to sample during training, in an Active Learning setting BID22 BID17. BID16 use dropout for modelling uncertainty; however, this is only applied at test time. However, current neural link prediction models typically only return point estimates of parameters and predictions BID32, and are trained discriminatively rather than generatively: they aim at predicting one variable of interest conditioned on all the others, rather than accurately representing the relationships between different variables BID31; however, BID16 could still be applied to get uncertainty estimates for these models. The main argument of this article is that there is a lack of methods for quantifying predictive uncertainty in a knowledge graph embedding representation, which can only be utilised using probabilistic modelling, as well as a lack of expressiveness under fixed-point representations. This constitutes a significant contribution to the existing literature because we introduce a framework for creating a family of highly scalable probabilistic models for knowledge graph representation, in a field where such methods have been lacking.
We do this in the context of recent advances in variational inference, allowing the use of any prior distribution that permits a re-parametrisation trick, as well as any scoring function which permits maximum likelihood estimation of the parameters. In this work, we focus on models for predicting missing links in large, multi-relational networks such as FREEBASE. In the literature, this problem is referred to as link prediction. We specifically focus on knowledge graphs, i.e., graph-structured knowledge bases where factual information is stored in the form of relationships between entities. Link prediction in knowledge graphs is also known as knowledge base population. We refer to BID32 for a recent survey on approaches to this problem. A knowledge graph G {(r, a 1, a 2)} ⊆ R × E × E can be formalised as a set of triples (facts) consisting of a relation type r ∈ R and two entities a 1, a 2 ∈ E, respectively referred to as the subject and the object of the triple. Each triple (r, a 1, a 2) encodes a relationship of type r between a 1 and a 2, represented by the fact r(a 1, a 2).Link prediction in knowledge graphs is often simplified to a learning to rank problem, where the objective is to find a score or ranking function φ Θ r: E × E → R for a relation r that can be used for ranking triples according to the likelihood that the corresponding facts hold true. Recently, a specific class of link predictors received a growing interest BID32. These predictors can be understood as multi-layer neural networks. Given a triple x = (s, r, o), the associated score φ Θ r (s, o) is given by a neural network architecture encompassing an encoding layer and a scoring layer. In the encoding layer, the subject and object entities s and o are mapped to low-dimensional vector representations (embeddings) h s h(s) ∈ R k and h o h(o) ∈ R k, produced by an encoder h Γ: E → R k with parameters Γ. Similarly, relations r are mapped to h r h(r) ∈ R k. This layer can be pre-trained BID41 or, more commonly, learnt from data by back-propagating the link prediction error to the encoding layer BID4 BID32 BID38.The scoring layer captures the interaction between the entity and relation representations h s, h o and h r are scored by a function φ Θ (h s, h o, h r), parametrised by Θ. Other work encodes the entity-pair in one vector BID34. Summarising, the high-level architecture is defined as: DISPLAYFORM0 Ideally, more likely triples should be associated with higher scores, while less likely triples should be associated with lower scores. While the literature has produced a multitude of encoding and scoring strategies, for brevity we overview only a small subset of these. However, we point out that our method makes no further assumptions about the network architecture other than the existence of an argument encoding layer. Given an entity e ∈ E, the entity encoder h Γ is usually implemented as a simple embedding layer h Γ (e) [Γ] e, where Γ is an embedding matrix BID32. For pre-trained embeddings, the embedding matrix is fixed. Note that other encoding mechanisms are conceivable, such as; recurrent, graph convolution BID26 b) or convolutional neural networks BID13. DistMult DISTMULT BID43 represents each relation r using a parameter vector Θ ∈ R k, and scores a link of type r between (h s, h o, h r) using the following scoring function: DISPLAYFORM0 where ·, ·, · denotes the tri-linear dot product. 
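As a concrete reference for the embedding-lookup encoder and the DistMult scorer just described, here is a minimal PyTorch sketch. The class and argument names are ours; the encoder is the simple embedding-table choice mentioned above.

```python
import torch
import torch.nn as nn

class DistMult(nn.Module):
    """Minimal DistMult scorer: embedding-lookup encoder + tri-linear dot product."""
    def __init__(self, num_entities, num_relations, k):
        super().__init__()
        self.ent = nn.Embedding(num_entities, k)   # entity encoder: h_Gamma(e) = Gamma[e]
        self.rel = nn.Embedding(num_relations, k)  # relation embedding h_r

    def score(self, s, r, o):
        # <h_s, h_r, h_o>: element-wise product summed over the embedding dimension
        return (self.ent(s) * self.rel(r) * self.ent(o)).sum(dim=-1)
```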
ComplEx COMPLEX BID38 ) is an extension of DISTMULT using complex-valued embeddings while retaining the mathematical definition of the dot product. In this model, the scoring function is defined as follows: DISPLAYFORM1 k are complex vectors, x denotes the complex conjugate of x, Re (x) ∈ R k denotes the real part of x and Im (x) ∈ C k denotes the imaginary part of x. Let D {(τ 1, y 1),..., (τ n, y n)} denote a set of labelled triples, where τ i s i, p i, o i, and y i ∈ {0, 1} denotes the corresponding label, denoting that the fact encoded by the triple is either true or false. We can assume D is generated by a corresponding generative model. In the following, we propose two alternative generative models. In Figure 1's graphical model, we assume that the Knowledge Graph was generated according to the following generative model. Let V E × R × E the space of possible triples. where τ s, p, o, and h τ [h s, h p, h o] denotes the sampled embedding representations of s, o ∈ E and p ∈ R.Note that, in this model, the embeddings are sampled for each triple. As a consequence, the set of latent variables in this model is H {h τ | τ ∈ E × R × E}.The joint probability of the variables p θ (H, D) is defined as follows: DISPLAYFORM0 The marginal distribution over D is then bounded as follows, with respect to our variational distribution q: DISPLAYFORM1 Proposition 1 As a consequence, the log-marginal likelihood of the data, under the Latent Fact Model, is bounded by: DISPLAYFORM2 Proof. We refer the reader to the Appendix 6 for a detailed proof LFM's ELBO.Assumptions: LFM model assumes each fact of is a randomly generated variable, as well as a mean field variational distribution and that each training example is independently distributed. Note that this is an enormous sum over |D| elements. However, this can be approximated via Importance Sampling, or Bernoulli Sampling BID6. DISPLAYFORM0 By using Bernoulli Sampling, ELBO can be approximated by: DISPLAYFORM1 where p θ (s τ = 1) = b τ can be defined as the probability that for the coefficient s τ each positive or negative fact τ is equal to one (i.e is included in the ELBO summation). The exact ELBO can be recovered from setting b τ = 1.0 for all τ. We can define a probability distribution of sampling from D + and D − -similarly to Bayesian Personalised Ranking BID33, we sample one negative triple for each positive one -we use a constant probability for each element depending on whether it is in the positive or negative set. Proposition 2 The Latent Fact models ELBO can be estimated similarly using a constant probability for positive or negative samples, we end up with the following estimate: DISPLAYFORM2 where DISPLAYFORM3 In Figure 2's graphical model, we assume that the Knowledge Graph was generated according to the following generative model. Let V E × R × E the space of possible triples. DISPLAYFORM0 The marginal distribution over D is then defined as follows: DISPLAYFORM1 Proposition 3 The log-marginal likelihood of the data, under the Latent Information Model, is the following: DISPLAYFORM2 Proof. We refer the reader to the Appendix 6 for a detailed proof LIM's ELBO.Assumptions: LIM assumes each variable of information is randomly generated, as well as a mean field variational distribution and that each training example is independently distributed. 
This leads to a factorisation of the ELBO that seperates the KL term from the observed triples, making the approximation to the ELBO through Bernoulli sampling simpler, as the KL term is no longer approximated and instead fully computed. Similarly to Section 3.1.1, by using Bernoulli Sampling the ELBO can be approximated by: DISPLAYFORM0 Which can be estimated similarly using a constant probability for positive or negative samples, we end up with the following estimate:Proposition 4 The Latent Information Models ELBO can be estimated similarly using a constant probability for positive or negative samples, we end up with the following estimate: DISPLAYFORM1 where DISPLAYFORM2 Variational Deep Learning has seen great success in areas such as parametric/non-parametric document modelling BID29 BID30 and image generation BID25. Stochastic variational inference has been used to learn probability distributions over model weights BID3, which the authors named "Bayes By Backprop". These models have proven powerful enough to train deep belief networks BID40, by improving upon the stochastic variational bayes estimator BID25, using general variance reduction techniques. Previous work has also researched word embeddings within a Bayesian framework BID40, as well as researched graph embeddings in a Bayesian framework. However, these methods are expensive to train due to the evaluation of complex tensor inversions. Recent work by BID0 BID8 show that it is possible to train word embeddings through a variational Bayes BID2 framework. KG2E proposed a probabilistic embedding method for modelling the uncertainties in KGs. However, this was not a generative model. BID42 argued theirs was the first generative model for knowledge graph embeddings. However, their work is empirically worse than a few of the generative models built under our proposed framework, and their method is restricted to a Gaussian distribution prior. In contrast, we can use any prior that permits a re-parameterisation trick -such as a Normal (a) or von-Mises distribution.Later, BID27 ) proposed a generative model for graph embeddings. However, their method lacks scalability as it requires the use of the full adjacency tensor of the graph as input. Moreover, our work differs in that we create a framework for many variational generative models over multi-relational data, rather than just a single generative model over uni-relational data BID27 BID20. In a different task of graph generation, similar models have been used on graph inputs, such as variational auto-encoders, to generate full graph structures, such as molecules BID35 BID28.Recent work by BID9 ) constructed a variational path ranking algorithm, a graph feature model. This work differs from ours for two reasons. Firstly, it does not produce a generative model for knowledge graph embeddings. Secondly, their work is a graph feature model, with the constraint of at most one relation per entity pair, whereas our model is a latent feature model with a theoretical unconstrained limit on the number of existing relationships between a given pair of entities. We run each experiment over 500 epochs and validate every 50 epochs. Each KB dataset is separated into 80 % training facts, 10% development facts, and 10% test facts. During the evaluation, for each fact, we include every possible corrupted version of the fact under the local closed world assumption, such that the corrupted facts do not exist in the KB. 
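Before turning to the ranking evaluation, a small sketch of the Bernoulli-subsampled reconstruction term described above may be useful. The function and argument names are ours, and b_pos and b_neg stand for the constant per-set inclusion probabilities of Propositions 2 and 4.

```python
import torch

def subsampled_reconstruction(log_p_pos, log_p_neg, b_pos=1.0, b_neg=1.0):
    """Estimate of the ELBO reconstruction sum under Bernoulli sampling.

    log_p_pos / log_p_neg: log-likelihood terms log p(y | h_tau) evaluated on the
    sampled positive triples and on their sampled corruptions (one per positive).
    Dividing each term by its constant inclusion probability keeps the estimate
    unbiased for the full sums over the positive and negative triple sets.
    """
    return log_p_pos.sum() / b_pos + log_p_neg.sum() / b_neg
```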
Subsequently, we make a ranking prediction of every fact and its corruptions, summarised by mean rank and filtered hits@m. During training Bernoulli sampling to estimate the ELBO was used, with linear warm-up BID7, compression cost BID3, ADAM Glorot's initialiser for mean vectors BID19 and variance values initialised uniformly to embedding size −1. We experimented both with a N and a N (0, embedding size −1) prior on the latent variables. Table 1 shows definite improvements on WN18 for Variational ComplEx compared with the initially published ComplEX. We believe this is due to the well-balanced model regularisation induced by the zero mean unit variance Gaussian prior. Table 1 also shows that the variational framework is outperformed by existing non-generative models, highlighting that the generative model may be better suited at identifying and predicting symmetric relationships. WordNet18 (b) (WN18) is a large lexical database of English. WN18RR is a subset with only asymmetric relations. FB15K is a large collaboratively made dataset which covers a vast range of relationships and entities, with FB15K-257 BID37, with 257 relations -a significantly reduced number from FB15K due to being a similarly refined asymmetric dataset. We now compare our model to the previous state-of-the-art multi-relational generative model TransG BID42, as well as to a previously published probabilistic embedding method KG2E (similarly represents each embedding with a multivariate Gaussian distribution) on the WN18 dataset. Table 2 makes clear the improvements in the performance of the previous state-of-the-art generative multi-relational knowledge graph model. LFM has marginally worse performance than the state-of-the-art model on raw Hits@10. We conjecture two reasons may cause this discrepancy. Firstly, the fact the authors of TransG use negative samples provided only (True negative examples), whereas we generated our negative samples using the local closed world assumption (LCWA). Secondly, we only use one negative sample per positive to estimate the Evidence Lower Bound using Bernoulli sampling, whereas it is likely they used significantly more negative samples. Scoring Table 1: Filtered and Mean Rank (MR) for the models tested on the WN18, WN18RR, and FB15K datasets. Hits@m metrics are filtered. Variational written with a "V". *Results reported from BID39 ) and **Results reported from BID13 for ComplEx model. "-" in a table cell equates to that statistic being un-reported in the models referenced paper. Dataset Scoring Function MR Raw Hits@ Filtered Hits @ Raw Filter 10 1 3 10 WN18 KG2E 362 345 0.805 --0.932 TransG (Generative) BID42 345 FORMULA7 We split the analysis into the predictions of subject ((?, r, o)) or object ((s, r, ?)) for each test fact. Note all are filtered predictions, i.e., ignoring the predictions made on negative examples generated under LCWA -using a randomly corrupted fact (subject or object corruption) as a negative example. TAB3 shows that the relation "_derivationally_related_form", comprising 34% of test subject predictions, was the most accurate relation to predict for Hits@1 when removing the subject from the tested fact. Contrarily, "_member_of_domain_region" with zero Hits@1 subject prediction, making up less than 1% of subject test predictions. However, "_member_meronym " was the least accurate and prominent (8% of the test subject predictions) for subject Hits@1. 
We learn from this that even for a near state-of-the-art model there is a great deal of improvement to be gained among asymmetric modelling. TAB3, as before the relation "_derivationally_related_form" was the most accurate relation to predict Hits@1. TAB3 as it highlights the Latent Information Model's inability to achieve a high Hits@1 performance predicting objects for the "_hypernym" relation, which is significantly hindering model performance as it is the most seen relation in the test set-its involvement in 40% of object test predictions. These hint at the possibility that the slightly stronger of WN18 are due to covariances in our variational framework able to capture information about symbol frequencies. We verify this by plotting the mean value of covariance matrices, as a function of the entity or predicate frequencies FIG2 ). The plots confirm our hypothesis: covariances for the variational Latent Information Model grows with the frequency, and hence the LIM would put a preference on predicting relationships between less frequent symbols in the knowledge graph. This also suggests that covariances from the generative framework can capture genuine information about the generality of symbolic representations. We project the high dimensional mean embedding vectors to two dimensions using Probabilistic Principal Component Analysis (PPCA) BID36 to project the variance embedding vectors down to two dimensions using Non-negative Matrix Factorisation (NNMF) BID15. Once we have the parameters for a bivariate normal distribution, we then sample from the bivariate normal distribution 1,000 times and then plot a bi-variate kernel density estimate of these samples. By visualising these two-dimensional samples, we can conceive the space in which the entity or relation occupies. We complete this process for the subject, object, relation, and a randomly sampled corrupted entity (under LCWA) to produce a visualisation of a fact, as shown in FIG3. Figure 4 displays a clustering of the subject, object, and predicate that create a positive (true) fact. We also observe a separation between the items which generate a fact and a randomly sampled (corrupted) entity which is likely to create a negative (false) fact. The first test fact "(USA, Commonbloc0, Netherlands)" shows clear irrationality similarity between all objects in the tested fact, i.e. the vectors are pointing towards a south-east direction. We can also see that the corrupted entity Jordan is quite a distance away from the items in the tested fact, which is good as Jordan does not share a common bloc either USA or Netherlands. We used scoring functions which measure the similarity between two vectors, however, for more sophisticated scoring functions which distance/ similarity is not important to the end we would unlikely see such interpretable images. This analysis of the learnt distributions is evidence to support the notion of learnt probabilistic semantics through using this framework. We have successfully created a framework allowing a model to learn embeddings of any prior distribution that permits a re-parametrisation trick via any score function that permits maximum likelihood estimation of the scoring parameters. The framework reduces the parameter by one hyperparameter -as we typically would need to tune a regularisation term for an l1/ l2 loss term, however as the Gaussian distribution is self-regularising this is deemed unnecessary for matching state-ofthe-art performance. 
We have shown, from preliminary experiments, that these models display results competitive with current models. Overall, we believe this work will enable knowledge graph researchers to work towards the goal of creating models better able to express their predictive uncertainty. The score we acquire at test time, even through forward sampling, does not seem to differ much from the one obtained with the mean embeddings; finding ways to use the learnt uncertainty to positively impact the results therefore remains a fruitful path. We would also like to see additional exploration into various encoding functions, as we used only the most basic one for these experiments. We would also like to see more research into measuring how good the uncertainty estimate is. We would like to thank all members of the Machine Reading lab for useful discussions. | Working toward generative knowledge graph models to better estimate predictive uncertainty in knowledge inference. | 1,044 | scitldr |
We present a deep generative model, named Monge-Amp\`ere flow, which builds on continuous-time gradient flow arising from the Monge-Amp\`ere equation in optimal transport theory. The generative map from the latent space to the data space follows a dynamical system, where a learnable potential function guides a compressible fluid to flow towards the target density distribution. Training of the model amounts to solving an optimal control problem. The Monge-Amp\`ere flow has tractable likelihoods and supports efficient sampling and inference. One can easily impose symmetry constraints in the generative model by designing suitable scalar potential functions. We apply the approach to unsupervised density estimation of the MNIST dataset and variational calculation of the two-dimensional Ising model at the critical point. This approach brings insights and techniques from Monge-Amp\`ere equation, optimal transport, and fluid dynamics into reversible flow-based generative models. Generative modeling is a central topic in modern deep learning research BID20 which finds broad applications in image processing, speech synthesis, reinforcement learning, as well as in solving inverse problems and statistical physics problems. The goal of generative modeling is to capture the full joint probability distribution of high dimensional data and generate new samples according to the learned distribution. There have been significant advances in generative modeling in recent years. Of particular interests are the variational autoencoders (VAEs) BID33 BID47, generative adversarial networks (GANs) BID19, and autoregressive models BID18 BID34 BID55 a; BID44. Besides, there is another class of generative models which has so far gained less attention compared to the aforementioned models. These models invoke a sequence of diffeomorphism to connect between latent variables with a simple base distribution and data which follow a complex distribution. Examples of these flow-based generative models include the NICE and the RealNVP networks BID12, and the more recently proposed Glow model BID32. These models enjoy favorable properties such as tractable likelihoods and efficient exact inference due to invertibility of the network. A key concern in the design of flow-based generative models has been the tradeoff between the expressive power of the generative map and the efficiency of training and sampling. One typically needs to impose additional constraints in the network architecture BID12 BID46 BID34 BID44, which unfortunately weakens the model compared to other generative models. In addition, another challenge to generative modeling is how to impose symmetry conditions such that the model generates symmetry related configurations with equal probability. To further unleash the power of the flow-based generative model, we draw inspirations from the optimal transport theory BID56 BID57 BID45 and dynamical systems BID30. Optimal transport theory concerns the problem of connecting two probability distributions p(z) and q(x) via transportation z → x at a minimal cost. In this context, the Brenier theorem BID4 states that under the quadratic distance measure, the optimal generative map is the gradient of a convex function. This motivates us to parametrize the vector-valued generative map as the gradient of a scalar potential function, thereby formulating the generation process as a continuous-time gradient flow BID0. 
In this regard, a generative map is naturally viewed as a deterministic dynamical system which evolves over time. Numerical integration of the dynamical system gives rise to the neural network representation of the generative model. To this end, E proposes a dynamical system perspective to machine learning, wherein the training procedure is viewed as a control problem, and the algorithm like back-propagation is naturally derived from the optimal control principle BID36. Moreover, implemented the generative map as an ODE integration and employed efficient adjoint analysis for its training. In this paper, we devise the Monge-Ampère flow as a new generative model and apply it to two problems: density estimation of the MNIST dataset and variational calculation of the Ising model. In our approach, the probability density is modeled by a compressible fluid, which evolves under the gradient flow of a learnable potential function. The flow has tractable likelihoods and exhibits the same computational complexity for sampling and inference. Moreover, a nice feature of the MongeAmpère flow is that one can construct symmetric generative models more easily by incorporating the symmetries into the scalar potential. The simplicity and generality of this framework allow the principled design of the generative map and gives rise to lightweight yet powerful generative models. Consider latent variables z and physical variables x both living in R N. Given a diffeomorphism between them, x = x(z), the probability densities in the latent and physical spaces are related by p(z) = q(x) det ∂x ∂z. The Brenier theorem BID4 implies that instead of dealing with a multi-variable generative map, one can consider a scalar valued generating function x = ∇u(z), where the convex Brenier potential u satisfies the Monge-Ampère equation BID5 p DISPLAYFORM0 Given the densities p and q, solving the Monge-Ampère equation for u turns out to be challenging due to the nonlinearity in the determinant. Moreover, for typical machine learning and statistical physics problems, an additional challenge is that one does not even have direct access to both probability densities p and q. Instead, one only has independent and identically distributed samples from one of them, or one only knows one of the distributions up to a normalization constant. Therefore, solving the Brenier potential in these contexts is a control problem instead of a boundary value problem. An additional computational challenge is that even for a given Brenier potential, the righthand side of equation 1 involves the determinant of the Hessian matrix, which scales unfavorably as O(N 3) with the dimensionality of the problem. To address these problems, we consider the Monge-Ampère equation in its linearized form, where the transformation is infinitesimal BID56. We write the convex Brenier potential as u(z) = ||z|| 2 /2 + ϕ(z), thus x − z = ∇ϕ(z), and correspondingly ln q(x) − ln p(z) = − Tr ln I + DISPLAYFORM1 In the last equation we expand the logarithmic function and write the trace of a Hessian matrix as the Laplacian operator. Finally, taking the continuous-time limit → 0, we obtain DISPLAYFORM2 DISPLAYFORM3 such that x = z, p(x, 0) = p(z), and p(x, T) = q(x), where t denotes continuous-time and T is a fixed time horizon. For simplicity here, we still keep the notion of x, which now depends on time. The evolution of x from time t = 0 to T then defines our generative map. 
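The following LaTeX block restates, as a sketch under the definitions above, the Monge-Ampère relation and the two flow ODEs referred to as equations 2 and 3 (ε denotes the infinitesimal step of the linearisation; the trace of the Hessian is written as the Laplacian, as in the text).

```latex
% Monge--Ampere equation for the Brenier potential u, with x = \nabla u(z):
%   p(z) = q\!\left(\nabla u(z)\right)\,\det\!\left(\nabla^{2} u(z)\right).
% Writing u(z) = \lVert z\rVert^{2}/2 + \epsilon\,\varphi(z), so that x - z = \epsilon\,\nabla\varphi(z),
%   \ln q(x) - \ln p(z) = -\operatorname{Tr}\ln\!\left(I + \epsilon\,\nabla^{2}\varphi(z)\right)
%                        \approx -\epsilon\,\nabla^{2}\varphi(z).
% Taking the continuous-time limit \epsilon \to 0 gives the gradient flow:
\begin{align}
  \frac{\mathrm{d}x}{\mathrm{d}t} &= \nabla\varphi(x), \\
  \frac{\mathrm{d}\,\ln p(x,t)}{\mathrm{d}t} &= -\nabla^{2}\varphi(x).
\end{align}
```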
We notice that used a more general form of these two equations as a continuous-time normalizing flow BID46.The two ordinary differential equations (ODEs) compose a dynamical system, which describes the flow of continuous random variables and the associated probability densities under iterative changeof-variable transformation. To match p(x, T) and the target density q(x), one can optimize a functional I[p(x, T), q(x)] that measures the difference between p(x, T) and q(x). Thus, the training process amounts to solving an optimal control problem for the potential function: DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 which is the Liouville equation, namely, the continuity equation of a compressible fluid with density p(x, t) and velocity field v = ∇ϕ(x). Obeying the continuity equation ensures that the flow conserves the total probability mass. Moreover, the velocity field is curl free ∇ × v ≡ 0 and the fluid follows a form of gradient flow BID0. The irrotational flow matches one's heuristic expectation that the flow-based generative model transports probability masses. It should be stressed that although we use the optimal transport theory to motivate model architecture design, i.e., the gradient flow for generative modeling, we do not have to employ the optimal transport objective functions. The difference is that in generative modeling one typically fixes only one end of the probability density and aims at learning a suitable transformation to reach the other end. While for optimal transport one has both ends fixed and aims at minimizing the transportation cost. BID1; BID3 BID17 adapted the Wasserstein distances in the optimal transport theory as an objective function for generative modeling. We parametrize the potential function ϕ(x) using a feedforward neural network and integrate the ODEs and to transform the data and their log-likelihoods. Then, by applying the backpropagation algorithm through the ODE integration, we tune the parametrized potential function so that the probability density flows to the desired distribution 1. DISPLAYFORM0 Figure 2: Schematic illustration of two applications (a) unsupervised density estimation and (b) variational free energy calculation for a statistical mechanics problem. In both cases, we integrate equations 2 and 3 under a parametrized potential function ϕ(x), and optimize ϕ(x) such that the density at the other end matches to the desired one. In general, one can represent the potential function in various ways as long as the gradient and Laplacian are well defined. Here we adopt a densely connected neural network with only one hidden layer in our minimalist implementation. We use the softplus activation function in the hidden layers so that the potential function is differentiable to higher orders. The gradient and the Laplacian of the potential appearing in the right-hand side of equations 2 and 3 can be computed via automatic differentiation. We integrate equations 2 and 3 using a numerical ODE integrator. At each integration step one advances the ODE system by a small amount, as shown in FIG1. Therefore, the numerical integration of ODE is equivalent to the forward evaluation of a deep residual neural network BID27 BID14 BID39 BID6 BID41 BID48. Alternatively, the integration can also be viewed as a recurrent neural network in which one feeds the output back into the block iteratively. 
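A minimal PyTorch sketch may make one such integration step concrete. It assumes the one-hidden-layer softplus potential described above; the coordinate-wise Laplacian shown here is a simple exact choice (one backward pass per dimension), and a forward Euler step stands in for the fourth-order Runge-Kutta scheme used in the text.

```python
import torch
import torch.nn as nn

class Potential(nn.Module):
    """One-hidden-layer scalar potential phi(x) with softplus activations."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Softplus(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def grad_and_laplacian(phi, x):
    """Gradient and Laplacian of phi at x via automatic differentiation.
    `x` must have requires_grad=True (e.g. the initial sample z.requires_grad_())."""
    g = torch.autograd.grad(phi(x).sum(), x, create_graph=True)[0]      # d phi / dx
    lap = torch.zeros(x.shape[0], device=x.device)
    for i in range(x.shape[1]):                                          # exact trace of the Hessian
        lap = lap + torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0][:, i]
    return g, lap

def euler_step(phi, x, logp, dt):
    """One explicit step of dx/dt = grad phi and d(ln p)/dt = -laplacian phi."""
    g, lap = grad_and_laplacian(phi, x)
    return x + dt * g, logp - dt * lap
```

Starting from z = torch.randn(batch, dim).requires_grad_() with the standard-normal log-density as logp, repeating this step d times yields samples x and their log-likelihoods ln p(x, T), with gradients flowing back to the potential's parameters.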
In both pictures, the network heavily shares parameters in the depth direction since the gradient and Laplacian information of the same potential function is used in each integration step. As a , the network is very parameter efficient. We employ the fourth order Runge-Kutta scheme with a fixed time step, which is set such that the numerical integrator is accurate enough. Thus, the ODE integrator block shown in figure 1(b) contains four layers of neural networks. With a hundred integration steps, the whole ODE integration procedure corresponds to four hundreds layers of a neural network. At the end of the integration, we obtain samples x and their likelihoods ln p(x, T), which depends on the parametrized potential function ϕ(x). Differentiable optimization of such deep neural network is feasible since the integrator only applies small changes to the input signal of each layer. The deep generative model ing from the discretization of an ODE system shows some nice features compared to the conventional neural networks. At training time, there is then a tradeoff between the total number of integration steps and the expressibility of the potential function. Longer integration time corresponds to a deeper network, and hence a simpler transformation at each step. While at testing time, one can construct a variable depth neural network for the potential function. For example, one can use a larger time step and a smaller number of integration steps for efficient inference. Moreover, by employing a reversible ODE integrator, one can also integrate the equations 2 and 3 backward in time with the same computational complexity, and return to the starting point deterministically. We apply the Monge-Ampère flow to unsupervised density estimation of an empirical dataset and variational calculation of a statistical mechanics problem. Figure 2 illustrates the schemes of the two tasks. To draw samples from the model and to evaluate their likelihoods, we simulate the fluid dynamics by integrating the ODEs and. We use the KL divergence as the measure of dissimilarity between the model probability density and the desired ones. Moreover, we choose the base distribution at time t = 0 to be a simple Gaussian p(x, 0) = N (x). See Appendix C for a summary of the hyperparameters used in the experiments. First we perform the maximum likelihood estimation, which reduces the dissimilarity between the empirical density distribution π(x) of a given dataset and the model density p(x, T) measured by the KL-divergence D KL (π(x) p(x, T)). It is equivalent to minimize the negative log-likelihood (NLL): DISPLAYFORM0 To compute the model likelihood we start from MNIST samples and integrate backwards from time T to 0. By accumulating the change in the log-likelihood 0 T d ln p(x(t), t) we obtain an estimate of the objective function in equation 6. To model the MNIST data we need to transform the images into continuous variables. Following BID44, we first apply the jittering and dequantizing procedure to map the original grayscale MNIST data to a continuous space. Next, we apply the logit transformation to map the normalized input to the logit space x → logit(λ + (1 − 2λ)x), with λ = 10 −6. FIG2 shows the training and test NLL compared with the best performing MADE BID18, Real-NVP BID13, and MAF BID44 ) models, all reported in BID44. Note that we have selected the with standard Gaussian base distribution for a fair comparison. The test NLL of the present approach is lower than previously reported values, see TAB0 for a summary. 
The Monge-Ampère flow is quite parameter-efficient as it only uses about one-tenth of learnable parameters of the MAF model BID44.The way we carry out density estimation in figure 2(a) is equivalent to transforming the data distribution π(x) to the base Gaussian distribution BID44. BID43 BID38. Inset shows representative Ising configurations generated at epochs 0, 500, 1000, 1500, respectively. Each inset contains a 5 × 5 tile of a 16 2 Ising model. In the lower panel, the generated samples exhibit two domains because we still impose the Z 2 inversion symmetry in the network. Without imposing the inversion symmetry the model will generate almost all black/white images due to the ferromagnetic correlations.ization of the Gaussianization technique BID8. Conversely, by integrating the equations forward in time one can use the flow to map Gaussian noises to meaningful images. For variational calculation we minimize the reverse KL divergence between the model and a given Boltzmann distribution of a physical problem D KL p(x, T) DISPLAYFORM0 an unknown partition function due to its intractability. In practical this amounts to minimizing the following expectation DISPLAYFORM1 To draw samples from p(x, T) we start from the base distribution x ∼ p(x, 0) = N (x) and evolve the samples according to equation 2. We keep track of the likelihoods of these samples by integrating equation 3 together. The loss function equation 7 provides a variational upper bound to the physical free energy − ln Z, since DISPLAYFORM2 the objective function employs the reparametrization pathwise derivative for the gradient estimator, which exhibits low variance and scales to large systems BID33.We apply this variational approach to the Ising model, a prototypical problem in statistical physics. The Ising model exhibits a phase transition from ferromagnetic to paramagnetic state at the critical temperature BID43. Variational study of the Ising model at the critical point poses a stringent test on the probabilistic model. To predict the thermodynamics accurately, the flow has to capture long-range correlations and rich physics due to the critical fluctuation. In the continuous representation BID15 BID61 BID38, the energy function of the Ising model reads (Appendix B) DISPLAYFORM3 We set the coupling K ij = (1 + √ 2)/2 to be at the critical temperature of square lattice Ising model BID43, where i, j are nearest neighbors on the square periodic lattice. The expectation inside the bracket of equation 7 is the energy difference between the model and the target problem. Since one has the force information for both target problem and the network as well, one can introduce an additional term in the loss function for the force difference BID11 BID59 ).The Ising model on a periodic square lattice has a number of symmetries, e.g., the spin inversion symmetry, the spatial translational symmetry, and the D 4 rotational symmetry. Mathematically, the symmetries manifest themselves as an invariance of the energy function E(x) = E(gx), where g is the symmetry operator that inverts or permutes the physical variables. For physics applications, it is essential to have a generative model that respects these physical symmetries, such that they will generate symmetry-related configurations with equal probability. 
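As a concrete reference for the variational objective of equation 7 described above, here is a short sketch. It assumes samples and log-likelihoods produced by the flow as in the earlier snippet, and energy_fn stands for the continuous Ising energy of equation 8.

```python
def variational_free_energy(energy_fn, x, logp):
    """Monte-Carlo estimate of E_{x ~ p(x,T)}[E(x) + ln p(x,T)], an upper bound on -ln Z.

    x, logp: reparametrised samples pushed through the flow and their accumulated
    log-likelihoods, so gradients propagate back to the potential's parameters.
    """
    return (energy_fn(x) + logp).mean()
```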
Although simple symmetries such as the Z 2 inversion can be implemented by careful design of the network or simply averaging over the elements in the symmetry group BID38, respecting even more general symmetries in a generative model turns out to be a challenging problem. The Monge-Ampère flow naturally solves this problem by imposing the symmetry condition on the scalar potential function since the initial Gaussian distribution respects the maximum symmetry. There have been a number of studies about how to ensure suitable symmetries in the feedforward networks in the context of discriminative modeling (; BID10 . Here we employ a simpler approach to encode symmetry in the generating process. We express the potential as an average over all elements of the symmetry group G under consideration ϕ(x) = 1 |G| g∈Gφ (gx), where theφ in each term shares the same network parameters. At each step of the numerical integration, we sample only one term in the symmetric average to evaluate the gradient and the Laplacian of the potential. Thus, on average one restores the required symmetry condition in the generated samples. This is feasible both for training and data generation since equations 2 and 3 are both linear with respect to the potential function. The upper panel of Figure 4 shows that the variational loss equation 7 decreases towards the exact solution of the free energy (Appendix B). The achieved 1% relative accuracy in the variational free-energy is comparable to the previously reported value (0.5%) using a network which exploits the two-dimensional nature of the problem BID38. The inset shows the directly generated Ising configurations from the network at different training stage. At late stages, the network learned to produce domains of various shapes. One also notices that the generated samples exhibit a large variety of configurations, which respects the physical symmetries. Moreover, the network can estimate log-likelihoods for any given sample which is valuable for the quantitative study of the physical problem. The invertible flow can also perform inverse mapping of the energy function from the physical space to the latent space. Therefore, it can be used to accelerate Monte Carlo simulations, by either recommending the updates as a generative model BID28 BID40 or performing hybrid Monte Carlo in the latent space BID38. Figure 4 also compares the performances of the generative map with and without translational and rotational symmetries imposed. It is noticeable that using the same parameters in the network and the same training strategy, the variational free energy of the one without symmetry constraint is significantly higher than the one with symmetry constraint. Moreover, by inspecting the generated samples, one sees that the model distribution collapses to the ones with two horizontal domains. These symmetry breaking configuration correspond to a metastable local minimum of the free energy landscape. This comparison shows that imposing the necessary symmetry conditions is crucial for variational studies of physical problems using generative models Normalizing flows: The present approach is closely related to the normalizing flows, where one composes bijective and differentiable transformations for generative modeling . To scale to large dataset it is crucial to make the Jacobian determinant of the transformation efficiently computable. 
BID12; BID38; BID32 strive the balance between expressibility and efficiency of the bijective networks by imposing block structures in the Jacobian matrix. Our approach reduces the cubical scaling Hessian determinant to the linear scaling Laplacian calculation by taking the continuous-time limit. The continuoustime gradient flow is more flexible in the network design since bijectivity and the efficiently tractable Jacobian determinant is naturally ensured.(Inverse) autoregressive flows: The autoregressive property could be interpreted as imposing a triangular structure in the Jacobian matrix of the flow to reduce its computational cost. There is a trade-off between the forward and inverse transformation. Therefore, typically the autoregressive flows are used for density estimation BID44, while the inverse autoregressive flows are suitable for posterior in variational inference BID34. Moreover, these flows are only implicitly reversible, in the sense that the inverse calculation requires solving a nonlinear equation. However, the continuous-time gradient flow has the same time complexity for both the forward and inverse transformation. derived the continuous-time limit of the normalizing flows BID46, which is a general form of equations 2 and 3 without the irrational condition on the velocity field. Besides conceptual connection to the Brenier theorem in optimal transport theory, the motivation of using the gradient flow instead of more general transformations in is that it is convenient to encode symmetries in the scalar potential function. Furthermore, BID21 made use of the Hutchinson's trace estimator BID29 to simplify the computation of the trace of Jacobian, which we could also employ for the Laplacian in equation FORMULA3. BID50 FORMULA0 consider generative models based on diffusion processes. Our work is different since we consider deterministic and reversible advection flow of the fluid. There is, in general, no stochastic force in our simulation (except the random sampling of symmetric functions done in section 4.2). And, we always integrate the flow equation for a finite time interval instead of trying to reach a steady probability density in the asymptotic long time limit. Dynamical system and control based methods: In a more general perspective, our approach amounts to defining a dynamical system with a target terminal condition, which is naturally viewed as a control problem (E, 2017). Along with this direction, BID24 BID28 offer some insights into interpreting and solving this control problem. Moreover, the Neural ODE implements an efficient back-propagation scheme through blackbox ODE integrators based on adjoint computation, which we did not utilize at the moment. Gradient flow of compressible fluids in a learnable potential provides a natural way to set up deep generative models. The Monge-Ampère flow combines ideas and techniques in optimal transport, fluid dynamics, and differential dynamical systems for generative modeling. We have adopted a minimalist implementation of the Monge-Ampère flow with a scalar potential parameterized by a single hidden layer densely connected neural network. There are a number of immediate improvements to further boost its performance. First, one could extend the neural network architecture of the potential function in accordance with the target problem. For example, a convolutional neural network for data with spatial or temporal correlations. 
Second, one can explore better integration schemes which exactly respect the time-reversal symmetry to ensure reversible sampling and inference. Lastly, by employing the backpropagation scheme of through the ODE integrator one can reduce the memory consumption and achieve guaranteed convergence in the integration. Furthermore, one can employ the Wasserstein distances BID1 instead of the KLdivergence to train the Monge-Ampère flow. With an alternative objective function, one may obtain an even more practically useful generative model with tractable likelihood. One may also consider using batch normalization layers during the integration of the flow BID13 BID44. However, since the batch normalization breaks the physical interpretation of the continuous gradient flow of a fluid, one still needs to investigate whether it plays either a theoretical or a practical role in the continuous-time flow. Moreover, one can use a time-dependent potential ϕ(x, t) to induce an even richer gradient flow of the probability densities. BID2 has shown that the optimal transport flow (in the sense of minimizing the spatial-time integrated kinetic energy, which upper bounds the squared Wasserstein-2 distance) follows a pressureless flow in a time-dependent potential. The fluid moves with a constant velocity that linearly interpolates between the initial and the final density distributions. Practically, a time-dependent potential corresponds to a deep generative model without sharing parameters in the depth direction as shown in FIG1. Since handling a large number of independent layers for each integration step may be computationally inefficient, one may simply accept one additional time variable in the potential function, or parametrize ϕ(x, t) as the solution of another differential equation, or partially tie the network parameters using a hyper-network BID22.Besides applications presented here, the Monge-Ampère flow has wider applications in machine learning and physics problems since it inherits all the advantages of the other flow-based generative models BID12 BID46 BID34 BID13 BID44. A particular advantage of generative modeling using the Monge-Ampère flow is that it is relatively easy to impose symmetry into the scalar potential. It is thus worth exploiting even larger symmetry groups, such as the permutation for modeling exchangeable probabilities BID35. Larger scale practical application in statistical and quantum physics is also feasible with the Monge-Ampère flow. For example, one can study the physical properties of realistic molecular systems using Monge-Ampère flow for variational free energy calculation. Lastly, since the mutual information between variables is greatly reduced in the latent space, one can also use the Monge-Ampère flow in conjunction with the latent space hybrid Monte Carlo for efficient sampling BID38. To gain intuition about the probability density flow under the Monge-Ampère equation, we work out a close form solution of equations 2 and 3 in a one dimensional toy problem. Since a quadratic potential merely induces scale transformation to the data, a Gaussian distribution will remain to be a Gaussian with a different variance. Thus, we consider ϕ(x) = λx 2 /2, and p(x, 0) = N (x), and parametrize the time dependent density as p(x, t) = DISPLAYFORM0, and α = 1. Substitute this ansatz of the time-dependent probability density into equation 3, we have DISPLAYFORM1 where we used equation 2, i.e. = −λ, and α(t) = exp(−λt). 
Therefore, under a quadratic potential the fluid parcel moves with acceleration: the parcels that are far away from the origin move faster, and the width of the Gaussian distribution changes exponentially with time. The Ising model is a fundamental model in statistical mechanics which admits an exact solution in two dimensions BID43. The Ising model partition function reads Z_Ising = Σ_{s ∈ {±1}^N} exp(½ sᵀKs). To make it amenable to the flow approach we reformulate the Ising problem in an equivalent representation in terms of continuous variables following BID15 BID61 BID38. First, we offset the coupling to K + αI such that all of its eigenvalues are positive, e.g. the minimal eigenvalue of K + αI is 0.1. This step does not affect the physical property of the Ising model except inducing a constant offset in its energies. Next, using the Gaussian integration trick (Hubbard-Stratonovich transformation) we can decouple the Ising interaction with continuous auxiliary variables, Z_Ising ∝ Σ_s ∫ dx exp(−½ xᵀ(K + αI)⁻¹x + xᵀs); summing over the discrete spins s then yields the continuous energy function used in the main text. We list the hyperparameters of the network and training in TAB1: ε is the step size used in the fourth-order Runge-Kutta integration, d is the total number of integration steps (thus T = εd is the total integration time), h is the number of hidden neurons in the potential function, and B is the mini-batch size for training. | A gradient flow based dynamical system for invertible generative modeling | 1,045 | scitldr |
Graph Neural Networks (GNNs) are a class of deep models that operate on data with arbitrary topology and order-invariant structure represented as graphs. We introduce an efficient memory layer for GNNs that can learn to jointly perform graph representation learning and graph pooling. We also introduce two new networks based on our memory layer: Memory-Based Graph Neural Network (MemGNN) and Graph Memory Network (GMN) that can learn hierarchical graph representations by coarsening the graph throughout the layers of memory. The experimental results demonstrate that the proposed models achieve state-of-the-art results in six out of seven graph classification and regression benchmarks. We also show that the learned representations could correspond to chemical features in the molecule data. Graph Neural Networks (GNNs) are a class of deep architectures that operate on data with arbitrary topology represented as graphs such as social networks, knowledge graphs, molecules, point clouds, and robots. Unlike regular-structured inputs with spatial locality such as grids (e.g., images and volumetric data) and sequences (e.g., speech and text), GNN inputs are variable-size graphs consisting of permutation-invariant nodes and interactions among them. GNNs such as the Gated GNN (GGNN), Message Passing Neural Network (MPNN), Graph Convolutional Network (GCN), and Graph Attention Network (GAT) learn node embeddings through an iterative process of transferring, transforming, and aggregating the node embeddings from topological neighbors. Each iteration expands the receptive field by one hop, and after k iterations the nodes within k hops influence the node embeddings of one another. GNNs are shown to learn better representations compared to random walks, matrix factorization, kernel methods, and probabilistic graphical models. These models, however, cannot learn hierarchical representations as they do not exploit the graph compositionality. Recent work such as Differentiable Pooling (DiffPool), TopKPool, and Self-Attention Graph Pooling (SAGPool) defines parametric graph pooling layers that let models learn hierarchical graph representations by stacking interleaved layers of GNN and pooling layers. These layers cluster nodes in the latent space such that the clusters are meaningful with respect to the task. These clusters might be communities in a social network or potent functional groups within a chemical dataset. Nevertheless, these models are not efficient as they require an iterative process of message passing after each pooling layer. In this paper, we introduce a memory layer for joint graph representation learning and graph coarsening that consists of a multi-head array of memory keys and a convolution operator to aggregate the soft cluster assignments from different heads. The queries to a memory layer are node embeddings from the previous layer and the outputs are the node embeddings of the coarsened graph. The memory layer does not explicitly require connectivity information and, unlike GNNs, relies on global information rather than local topology. These properties make it more efficient and improve its performance. We also introduce two networks based on the proposed layer: Memory-based Graph Neural Network (MemGNN) and Graph Memory Network (GMN). MemGNN consists of a GNN that learns the initial node embeddings, and a stack of memory layers that learns hierarchical graph representations up to the global graph embedding.
GMN, on the other hand, learns the hierarchical representation purely based on memory layers and hence does not require message passing. Memory Augmented Neural Networks (MANNs) utilize external memory with differentiable read-write operators, allowing them to explicitly access past experiences, and are shown to enhance reinforcement learning, meta learning, few-shot learning, and multi-hop reasoning. Unlike RNNs, in which the memory is represented within their hidden states, the decoupled memory in MANNs lets them store and retrieve longer-term memories with fewer parameters. The memory can be implemented as a key-value memory such as neural episodic control and product-key memory layers, or as an array-structured memory such as the Neural Turing Machine (NTM), prototypical networks, memory networks, and Sparse Access Memory (SAM). Our memory layer consists of a multi-head array of memory keys. Graph Neural Networks (GNNs) use message passing to learn node embeddings over graphs. GraphSAGE learns embeddings by sampling and aggregating neighbor nodes whereas GAT uses attention to aggregate embeddings from all neighbors. GCN models extend the convolution to arbitrary topology. Spectral GCNs use spectral filters over the graph Laplacian to define the convolution in the Fourier domain. These models are less efficient compared to spatial GCNs which directly define the convolution on graph patches centered on nodes. Our memory layer uses a feed-forward network to learn the node embeddings. Graph pooling can be done globally or hierarchically. In the former, node embeddings are aggregated into a graph embedding using arithmetic operators such as sum or max, or set neural networks such as Set2Set and SortPool. In the latter, graphs are coarsened in each layer to capture the hierarchical structure. Non-parametric methods such as clique pooling, kNN pooling, and Graclus rely on topological information and are efficient, but are outperformed by parametric models such as edge contraction pooling. DiffPool trains two parallel GNNs to compute node embeddings and cluster assignments using a combination of classification loss, link prediction loss, and entropy loss, whereas Mincut pool trains a sequence of a GNN and an MLP using classification loss and the minimum cut objective. TopKPool computes a node score by learning a projection vector and then drops all the nodes except the top-scoring nodes. SAGPool extends TopKPool by using graph convolutions to take neighbor node features into account. We use a clustering-friendly distribution to compute the attention scores between nodes and clusters. We define a memory layer M^(l): R^{n_l × d_l} → R^{n_{l+1} × d_{l+1}} in layer l as a parametric function that takes in n_l query vectors of size d_l and generates n_{l+1} query vectors of size d_{l+1} such that n_{l+1} < n_l. The input and output queries represent the node features of the input graph and the coarsened graph, respectively. The memory layer learns to jointly coarsen the input nodes (i.e., pooling) and transform their features (i.e., representation learning). As shown in Figure 1, it consists of arrays of memory keys (i.e., multi-head memory) and a convolutional layer. Assuming |h| memory heads, a shared input query is compared against all the keys in each head, resulting in |h| attention matrices which are then aggregated into a single attention matrix using the convolution layer.
In a content addressable memory, the task of attending to memory (i.e., the addressing scheme) is formulated as computing the similarity between the memory keys and a given query q. Specifically, the attention weight of key k_j for query q is defined as w_j = softmax(d(q, k_j)) where d is a similarity measure, typically Euclidean distance or cosine similarity. The soft read operation on memory is defined as a weighted average over the memory keys: r = Σ_j w_j k_j.

Figure 1: The proposed architecture for hierarchical graph representation learning using the introduced memory layer. The query network projects the initial node attributes to a query embedding space and each memory layer jointly coarsens the input queries and transforms them into a new query space.

In this work, we treat the input queries Q^(l) ∈ R^{n_l × d_l} as the node embeddings of an input graph and treat the keys K^(l) ∈ R^{n_{l+1} × d_l} as the cluster centroids of the queries. To satisfy this assumption, we impose a clustering-friendly distribution as the distance metric between keys and a query. Following deep clustering work, we use the Student's t-distribution as a kernel to measure the normalized similarity between query q_i and key k_j as follows:

C_ij = (1 + ||q_i − k_j||² / τ)^{−(τ+1)/2} / Σ_{j'} (1 + ||q_i − k_{j'}||² / τ)^{−(τ+1)/2},

where C_ij is the normalized score between query q_i and key k_j (i.e., the probability of assigning node i to cluster j, or the attention score between query q_i and memory key k_j) and τ is the degree of freedom of the Student's t-distribution (i.e., the temperature). To increase the model capacity, we model the memory keys as a multi-head array. Applying a shared input query against the memory keys produces a tensor of cluster assignments [C_1, ..., C_{|h|}] ∈ R^{|h| × n_{l+1} × n_l} where |h| denotes the number of heads. To aggregate the heads into a single assignment matrix, we treat the heads and the matrix rows and columns as depth, height, and width in the standard convolution analogy and apply a convolution operator over them. Because there is no spatial structure, we use a [1 × 1] convolution to aggregate the information across heads, and therefore the convolution behaves as a weighted pooling that reduces the heads to a single matrix. The aggregated assignment matrix is computed as follows:

C^(l) = Γ_φ(C_1 || C_2 || ... || C_{|h|}),

where Γ_φ is a [1 × 1] convolutional operator parametrized by φ, || is the concatenation operator, and C^(l) is the aggregated soft assignment matrix. A memory read generates a value matrix V^(l) ∈ R^{n_{l+1} × d_l} that represents the coarsened node embeddings in the same space as the input queries and is defined as the product of the soft assignment scores and the original queries as follows:

V^(l) = C^(l) Q^(l).

The value matrix is fed to a single-layer neural network consisting of a weight matrix W^(l) ∈ R^{d_l × d_{l+1}} and a LeakyReLU activation function to project the coarsened embeddings from R^{n_{l+1} × d_l} into R^{n_{l+1} × d_{l+1}}, representing the output queries Q^(l+1):

Q^(l+1) = LeakyReLU(V^(l) W^(l)).

Thanks to these parametrized transformations, a memory layer can jointly learn the node embeddings and coarsen the graph end-to-end. The computed queries Q^(l+1) are the input queries to the subsequent memory layer M^(l+1). For graph classification, one can simply stack layers of memory up to the level where the input graph is coarsened into a single node representing the global graph embedding and then feed it to a fully-connected layer to predict the graph class as follows:

ŷ = softmax(W_c (M^(L−1) ∘ ... ∘ M^(0))(Q^0)),

where Q^0 = f_q(g) is the initial query embedding generated by the query network f_q over graph g. We introduce two architectures based on the memory layer: GMN and MemGNN.
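Before describing the two architectures, here is a minimal PyTorch sketch of the memory layer above. The module layout, the per-node normalisation, and the softmax applied after the 1×1 convolution are our assumptions rather than details of a released implementation, and the assignment matrix is stored node-by-cluster (the transpose of C^(l) as written above).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Coarsens n_l input queries into n_next output queries using `heads` arrays of keys."""
    def __init__(self, n_next, d_in, d_out, heads, tau=1.0):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(heads, n_next, d_in))  # per-head cluster centroids K^(l)
        self.agg = nn.Conv2d(heads, 1, kernel_size=1)                # the [1 x 1] conv over heads
        self.proj = nn.Linear(d_in, d_out, bias=False)               # W^(l)
        self.tau = tau

    def forward(self, q):                                            # q: (batch, n_l, d_in)
        b, n, d = q.shape
        heads, n_next, _ = self.keys.shape
        qh = q.unsqueeze(1).expand(b, heads, n, d).reshape(b * heads, n, d)
        kh = self.keys.unsqueeze(0).expand(b, heads, n_next, d).reshape(b * heads, n_next, d)
        dist2 = (torch.cdist(qh, kh) ** 2).view(b, heads, n, n_next)
        c = (1.0 + dist2 / self.tau) ** (-(self.tau + 1.0) / 2.0)    # Student's t kernel
        c = c / c.sum(dim=-1, keepdim=True)                          # normalise each node over clusters
        c = torch.softmax(self.agg(c).squeeze(1), dim=-1)            # aggregate heads; re-normalise (assumption)
        v = c.transpose(1, 2) @ q                                    # coarsened values, (batch, n_next, d_in)
        return F.leaky_relu(self.proj(v)), c
```

Stacking such layers with n_next decreasing down to 1 yields the single global graph embedding that is fed to the fully-connected classifier.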
These two architectures are different in the way that the query network is implemented. More specifically, GMN uses a feed-forward network for initializing the query: f q (g) = FFN θ (g), whereas MemGNN implements the query network as a message passing GNN: f q (g) = GNN θ (g). A GMN is a stack of memory layers on top of a query network f q (g) that generates the initial query embeddings without any message passing. Similar to set neural networks and transformers , graph nodes in a GMN are treated as a permutation-invariant set of embeddings. The query network projects the initial node attributes into an embedding space that represents the initial query space. Assume a training set D = [g 1, g 2, ..., g N] of N graphs where each graph is represented as g = (A, X, Y) and A ∈ {0, 1} n×n denotes the adjacency matrix, X ∈ R n×din is the initial node attribute, and Y ∈ R n is the graph label. Considering that the GMN model treats a graph as a set of permutation-invariant nodes and does not use message passing, and also considering that the memory layers do not rely on connectivity information, the topological information of each node should be somehow encoded into its initial embedding. Inspired by transformers , we encode this information along with the initial attribute into the initial query embeddings using a query network f q implemented as a two-layer feed-forward neural network: where W 0 ∈ R n×din and W 1 ∈ R 2din×d0 are the parameters of the query networks, and || is the concatenation operator. Unlike the GMN architecture, the query network in MemGNN relies on the iterative process of passing messages and aggregating them to compute the initial query Q 0: where query network G θ is an arbitrary parameterized message passing GNN (; ; ;). In our implementation of MemGNN, we use a modified variant of GAT . Specifically, we introduce an extension to the original GAT model called edge-based GAT (e-GAT) and use it as the query network. Unlike GAT, e-GAT learns attention weights not only from the neighbor nodes but also from the input edge attributes. This is especially important for data containing edge information (e.g., various bonds among atoms represented as edges in molecule datasets). In an e-GAT layer, attention score between two neighbor nodes is computed as follows. where i→j denote the embedding of node i and the embedding of the edge connecting node i to its one-hop neighbor node j in layer l, respectively. W n and W e are trainable node and edge weights and W is the parameter of a single-layer feed-forward network that computes the attention score. We jointly train the model using two loss functions: a supervised classification loss and an unsupervised clustering loss. The supervised loss denoted as L ent is defined as the cross-entropy loss between the predicted and true graph class labels. The unsupervised clustering loss is inspired by deep clustering methods (; ;). It encourages the model to learn clustering-friendly embeddings in the latent space by urging it to learn from high confidence assignments with the help of an auxiliary target distribution. The unsupervised loss is defined as the Kullback-Leibler (KL) divergence loss between the soft assignments C (l) and the auxiliary distribution P (l) as follows: For the target distributions P (l), we use the distribution proposed in which normalizes the loss contributions and improves the cluster purity while emphasizing on the samples with higher confidence. 
This distribution is defined as follows: We define the total loss as follows where L is the number of memory layers and λ is a scalar weight. We initialize the model parameters, the keys, and the queries randomly and optimize them jointly with respect to L using mini-batch stochastic gradient descent. To stabilize the training, the gradients of L ent are back-propagated batch-wise while the gradients of L KL. This technique has also been applied in to avoid trivial solutions in deep clustering problem. We use nine graph benchmarks including seven classification and two regression datasets to evaluate the proposed method. These datasets are commonly used in both graph kernel (; ; ; ; ;) and GNN (; ; ;) literature. The summary of these datasets is as follows (i.e., first two benchmarks are regression tasks and the rest are classification tasks): ESOL contains water solubility data for compounds. Lipophilicity contains experimental of octanol/water distribution of compounds. Bace provides quantitative binding for a set of inhibitors of human β-secretase 1 (BACE-1). DD is used to distinguish enzyme structures from non-enzymes. Enzymes is for predicting functional classes of enzymes. Proteins is used to predict the protein function from structure. COLLAB is for predicting the field of a researcher given her For more information about the datasets and implementation details refer to Appendix A.2 and A.1, repectively. To evaluate the performance of our models on DD, Enzymes, Proteins, and COLLAB datasets, we follow the experimental protocol in and perform 10-fold cross-validation and report the mean accuracy over all folds. We also report the performance of four kernel-based methods including Graphlet , shortest path , Weisfeiler-Lehman (WL) , and WL Optimal Assignment , and ten deep models. The shown in Table 1 suggest that: (i) our models significantly improve the performance on DD, Enzymes, and Proteins datasets by absolute margins of 14.49%, 4.75%, and 0.49% accuracy, respectively, (ii) both proposed models achieve better performance on these three datasets compared to the baselines, (iii) MemGNN outperforms GMN on COLLAB whereas GMN achieves better on the Enzymes, Proteins, and DD datasets. On COLLAB, our models are outperformed by a variant of DiffPpool (i.e., diffpool-det) and WL Optimal Assignment . The former is a GNN augmented with deterministic clustering algorithm 2, whereas the latter is a graph kernel method. We speculate that because of the high edge-to-node ratio of COLLAB, these augmentations help in extracting near-optimal cliques. For the ESOL and Lipophilicity datasets, we follow the evaluation protocol in and report the Root-Mean-Square Error (RMSE) for these regression benchmarks. Considering that these datasets contain initial edge attributes (refer to Appendix A.2 for further details), we train the MemGNN model and compare the to the baseline models reported in including graph-based methods such as GCN, MPNN, Directed Acyclic Graph (DAG) based models, Weave as well as other conventional methods such as Kernel Ridge Regression (KRR) and Influence Relevance Voting (IRV). Tables 2 and 5 show that our MemGNN model achieves state-ofthe-art by absolute margin of 0.04 and 0.1 RMSE on ESOL and Lipophilicity benchmarks, respectively. For further details on these datasets and the baselines see . We also achieve state-of-the-art on the Bace, Reddit-Binary, and Tox21 datasets. For more details see Appendix A.3. 
To investigate the effect of the proposed e-GAT model, we train our MemGNN model using both GAT and e-GAT models as the query network. Considering that the ESOL, Lipophilicity, and BACE datasets contain edge attributes, we use them as the benchmarks. Since nodes have richer features compared to edges, we set the node and edge feature dimensions to 16 and 4, respectively. The comparative performance evaluation of the two models on the ESOL dataset is shown in Appendix A.4 demonstrating that e-GAT achieves better on the validation set in each epoch compared to the standard GAT model. We observed the same effect on Lipophilicity and BACE datasets too. To investigate the effect of the topological embeddings on the GMN model, we evaluated three initial topological features including adjacency matrix, normalized adjacency matrix, and Random Walk with Restart (RWR). For further details on RWR, see section A.5. The suggested that using the adjacency matrix as the initial feature achieves the best performance. For instance, 10-fold cross validation accuracy of a GMN model trained on ENZYMES with adjacency matrix, normalized adjacency matrix, and RWR is 78.66%, 77.16%, and 77.33%, respectively. We investigated two methods to down-sample the neighbors in dense datasets such as COLLAB (i.e., average of 66 neighbors per node) to enhance the memory and computation. The first method randomly selects 10% of the edges whereas the second method ranks the neighbors based on their RWR scores with respect to the center node and then keeps the top 10% of the edges. We trained the MemGNN model on COLLAB using both sampling methods which ed in 73.9% and 73.1% 10-fold cross validation accuracy for random and RWR-based sampling methods respectively, suggesting that random sampling performs slightly better than a random walk sampling. We stipulate that although keys represent the clusters, the number of keys is not necessarily proportional to the number of the nodes in the input graphs. In fact, datasets with smaller graphs might have more meaningful clusters to capture. For example, molecules are comprised of numerous functional groups and yet the average number of nodes in the ESOL dataset is 13.3. Moreover, our experiments show that for ENZYMES with average number of 32.69 nodes, the best performance is achieved with 10 keys whereas for the ESOL dataset 64 keys in the best performance. In ESOL 8, 64, and 160 keys in RMSE of 0.56, 0.52, and 0.54, respectively. We also observed that keeping the number of parameters fixed, increasing the number of memory heads improves the performance. For instance, when the model is trained on ESOL with 160 keys and 1 head, it achieves RMSE of 0.54, whereas when trained with 32 keys of 5 heads, the same model achieves RMSE of 0.53. Intuitively, the memory keys represent the cluster centroids and enhance the model performance by capturing meaningful structures and coarsening the graph. To investigate this intuition, we used the learned keys to interpret the knowledge learned by the models through visualizations. Figure 2 visualizes the learned clusters over atoms (i.e., atoms with same color are within the same cluster) indicating that the clusters mainly consist of meaningful chemical substructures such as a carbon chain and a Hydroxyl group (OH) (i.e., Figure 2a), as well as a Carboxyl group (COOH) and a benzene ring (i.e., Figure 2b). 
From a chemical perspective, Hydroxyl and Carboxyl groups, and carbon chains have a significant impact on the solubility of the molecule in water or lipid. This confirms that the network has learned chemical features that are essential for determining the molecule solubility. It is noteworthy that we tried initializing the memory keys using K-Means algorithm over the initial node embeddings to warm-start them but we did not observe any significant improvement over the randomly selected keys. We proposed an efficient memory layer and two deep models for hierarchical graph representation learning. We evaluated the proposed models on nine graph classification and regression tasks and achieved state-of-the-art on eight of them. We also experimentally showed that the learned representations can capture the well-known chemical features of the molecules. Our study indicated that node attributes concatenated with corresponding topological embeddings in combination with one or more memory layers achieves notable without using message passing. We also showed that for the topological embeddings, the binary adjacency matrix is sufficient and thus no further preprocessing step is required for extracting them. Finally, we showed that although connectivity information is not directly imposed on the model, the memory layer can process node embeddings and properly cluster and aggregate the learned embeddings. Limitations: In section 4.2, we discussed that on the COLLAB dataset, kernel methods or deep models augmented with deterministic clustering algorithm achieve better performance compared to our models. Analyzing samples in this dataset suggests that in graphs with dense communities, such as cliques, our model lacks the ability to properly detect these dense sub-graphs. Moreover, the of the DD dataset reveals that our MemGNN model outperforms the GMN model which implies that we need message passing to perform better on this dataset. We speculate that this is because the DD dataset relies more on local information. The most important features to train an SVM on this dataset are surface features which have local behavior. This suggest that for data with strong local interactions, message passing is required to improve the performance. Future Directions: We are planning to introduce a model based on the MemGNN and GMN architectures that can perform node classification by attending to the node embeddings and centroids of the clusters from different layers of hierarchy that the node belongs to. We are also planning to investigate the representation learning capabilities of the proposed models in self-supervised setting. A APPENDIX We implemented the model with PyTorch and optimized it using Adam optimizer. We trained the model for a maximum number of 2000 epochs and decayed the learning rate every 500 epochs by 0.5. The model uses batch-normalization , skip-connections, LeakyRelu activation functions, and dropout for regularization. We decided the hidden dimension and number of model parameters using random hyper-parameter search strategy. The best performing hyper-parameters for the datasets are shown in Table 3. A.2 DATASET STATISTICS Table 4 indicates the statistics of the datasets we used for graph classification and regression tasks. In addition to discussed in section 4.2, we report the evaluation on three other graph classification benchmarks including BACE , Tox21 (, and Reddit-Binary datasets. 
The BACE dataset provides quantitative binding for a set of inhibitors of human β-secretase 1 (BACE-1), whereas the is Tox21 dataset is for predicting toxicity on 12 different targets. We follow the evaluation protocol in and report the Area Under the Curve Receiver Operating Characteristics (AUC-ROC) measure for this task. Moreover, considering that the BACE and Tox21 datasets contain initial edge attributes, we train the MemGNN model and compare its performance to the baseline models reported in . The shown in Table 5 suggest that our model achieves stateof-the-art by absolute margin of 4.0% AUC-ROC on BACE benchmark. The also suggest that our model is competitive with the state-of-the-art GCN model on the Tox21 dataset. Reddit-Binary dataset is for predicting the type of community (i.e., question-answer-based or discussion-based communities) given a graph of online discussion threads. In this dataset, nodes represent the users and edges denote interaction between users. To evaluate the performance of our models on this dataset, we follow the experimental protocol in and perform 10-fold cross-validation to evaluate the model performance and report the mean accuracy over all folds. The reported in Table 6 show that the introduced GMN model achieves state-of-the-art accuracy by absolute margin of 0.44%. In section 4.3.1, we introduced e-GAT. Figure 3a illustrates the RMSE on the validation set of the ESOL dataset for a MemGNN model using GAT and e-GAT implementation as its query network, respectively. Since ESOL is a regression benchmark, we also plotted the R 2 score in Figure 3b. As shown in these figures, e-GAT performs better compared to GAT on both metrics. Consider a weighted or unweighted graph G. A random agent starts traversing the graph from node i and iteratively walks towards its neighbors with a probability proportional to the edge weight that 0.859 ± 0.000 0.907 ± 0.000 Table 6: Mean validation accuracy over 10-folds. Method REDDIT-BINARY Graphlet 78.04 WL 68.2 DiffPool 85.95 TopKPool 74.70 SAGPool 73.90 GMN (ours) 86.39 MemGNN (ours) 85.55 connects them. The agent also may restart the traverse from the starting node i with probability p. Eventually, the agent will stop at node j with a probability called relevance score of node j with respect to node i . The relevance score of node i with every other node of the graph is summarized in t i in equation 12 where t i is the RWR corresponding to node i, p is the restart probability, A is the normalized adjacency matrix and e i is the one-hot vector representing node i. t i = pà t i + (1 − p) e i Solving this linear system in r i defined in equation 13 Now we can nominate the nodes with higher relevance score w.r.t. node i for receiving messages in an MeMGNN or use them as topological embeddings in a GMN. Note that the restart probability in equation 13 defines how far the agent can walk from the source node and therefore the t i can represent whether local or global structures around the node i. We used p = 0.5 in our studies. Calculating the inverse of the adjacency matrix of a big graph is costly. Although we could exactly compute it for all of our datasets but there are existing methods to make the estimation more efficient . Figure 5: Clusters learned by a MeMGNN for ESOL and LIPO dataset. Chemical groups like OH (hydroxyl group), CCl3, COOH (carboxyl group), CO (ketone group) as well as benzene rings have been recognized during the learning procedure. 
These chemical groups are highly active and have a great impact on the solubility of molecules. | We introduce an efficient memory layer that can learn representation and coarsen input graphs simultaneously without relying on message passing. | 1,046 | scitldr |
Deep neural networks require extensive computing resources, and can not be efficiently applied to embedded devices such as mobile phones, which seriously limits their applicability. To address this problem, we propose a novel encoding scheme by using {-1,+1} to decompose quantized neural networks (QNNs) into multi-branch binary networks, which can be efficiently implemented by bitwise operations (xnor and bitcount) to achieve model compression, computational acceleration and resource saving. Our method can achieve at most ~59 speedup and ~32 memory saving over its full-precision counterparts. Therefore, users can easily achieve different encoding precisions arbitrarily according to their requirements and hardware resources. Our mechanism is very suitable for the use of FPGA and ASIC in terms of data storage and computation, which provides a feasible idea for smart chips. We validate the effectiveness of our method on both large-scale image classification (e.g., ImageNet) and object detection tasks. tational acceleration, many solutions have been proposed, such as network sparse and pruning 23, low-rank approximation, architecture design BID2 10, 8, BID1 16], 24 model quantization BID0 13, 3, 9, 18, 14], and so on. constrain their weights to {−1, +1} 25 or {−1, 0, 1} and achieve limited acceleration by using simple accumulation instead of complicated 26 multiplication-accumulations. In particular, [2, 18, 4, BID10 24] quantize activation values and weights BID10 to bits and use bitwise logic operations to achieve extreme acceleration ratio in inference process but 28 they are suffering from significant performance degradation. However, most models are proposed 29 for fixed precision, and can not extend to other precision models. They easily fall into local optimal 30 solutions and face slow convergence speed in training process. In order to bridge the gap between 31 low-bit and full-precision and be applied to many cases, we propose a novel encoding scheme of 32 using {−1, +1} to easily decompose trained QNNs into multi-branch binary networks. Therefore, 33 the inference process can be efficiently implemented by bitwise operations to achieve model com-34 pression, computational acceleration and resource saving. As the basic computation in most neural network layers, matrix multiplication costs lots of resources 37 and also is the most time consuming operation. Modern computers store and process data in bina-38 ry format, thus non-negative integers can be directly encoded by {0, 1}. We propose a novel de- DISPLAYFORM0 All of the above operations consist of N multiplications and (N −1) additions. Based on the above 43 encoding scheme, the vector x can be encoded to binary form using M bits, i.e., DISPLAYFORM1 Then we convert the right-hand side of into the following form: DISPLAYFORM2 where DISPLAYFORM3. In such an encoding scheme, the number of represented states is not greater than 2 M. In addition, we encode another vector w with K-bit numbers in the same way. Therefore, the dot product of the 48 two vectors can be computed as follows: DISPLAYFORM0 From the above formulas, the dot product is decomposed into M ×K sub-operations, in which each 50 element is 0 or 1. Because of the restriction of encoding and without using the sign bit, the above 51 representation can only be used to encode non-negative integers. However, it's impossible to limit 52 the weights and the values of the activation functions to non-negative integers. 
In order to encode 53 both positive and negative integers, we propose a novel encoding scheme, which uses {-1, +1} as 54 the basic elements rather than {0, 1}. Then we can use multiple bitwise operations (i.e., xnor and 55 bitcount) to effectively achieve the above vector multiplications. Our operation mechanism can be 56 suitable for all vector/matrix multiplications. Besides fully connected layers, our mechanism is also 57 suitable for convolution and deconvolution layers in deep neural networks. ing of input data. Therefore, the dot product can be computed by the formula. Without other 63 judgment and mapping calculation, we use trigonometric functions as the basic encoding functions. In the end, we use the sign function to hard divide to -1 or +1. The mathematical expression can be 65 formulated as follows: In this section, we use the same network architecture described in for CIFAR-10 and choose DISPLAYFORM0 ResNet-18 as the basic network for ImageNet. It is very hard to train on large-scale training sets (e.g., ImageNet), and thus parameter initialization is particularly important. In particular, the well-trained 72 full-precision model parameters activated by ReLU can be directly used as initialization parameters 73 for our 8-bit quantized network. After fine-tuning dozens of epochs, 8-bit quantized networks can be 74 well-trained. Similarly, we use the 8-bit model parameters as the initialization parameters to train 7-75 bit quantized networks, and so on. We use the loss computed by quantized parameters to update full 76 precision parameters described as the straight-through estimator. TAB2 | A novel encoding scheme of using {-1, +1} to decompose QNNs into multi-branch binary networks, in which we used bitwise operations (xnor and bitcount) to achieve model compression, computational acceleration and resource saving. | 1,047 | scitldr |
We propose a novel method for incorporating conditional information into a generative adversarial network (GAN) for structured prediction tasks. This method is based on fusing features from the generated and conditional information in feature space and allows the discriminator to better capture higher-order statistics from the data. This method also increases the strength of the signals passed through the network where the real or generated data and the conditional data agree. The proposed method is conceptually simpler than the joint convolutional neural network - conditional Markov random field (CNN-CRF) models and enforces higher-order consistency without being limited to a very specific class of high-order potentials. Experimental demonstrate that this method leads to improvement on a variety of different structured prediction tasks including image synthesis, semantic segmentation, and depth estimation. Convolutional neural networks (CNNs) have demonstrated groundbreaking on a variety of different learning tasks. However, on tasks where high dimensional structure in the data needs to be preserved, per-pixel regression losses typically in unstructured outputs since they do not take into consideration non-local dependencies in the data. Structured prediction frameworks such as graphical models and joint CNN-graphical model-based architectures e.g. CNN-CRFs have been used for imposing spatial contiguity using non-local information BID13 BID2 BID24. The motivation to use CNN-CRF models stems from their ability to capture some structured information from second order statistics using the pairwise part. However, statistical interactions beyond the second-order are tedious to incorporate and render the models complicated BID0 BID12 ). Other approaches have used task-specific perceptual losses to solve this problem BID10.Generative models provide another way to represent the structure and spacial contiguity in large high-dimensional datasets with complex dependencies. Implicit generative models specify a stochastic procedure to produce outputs from a probability distribution. Such models are appealing because they do not demand parametrization of the probability distribution they are trying to model. Recently, there has been great interest in CNN-based implicit generative models using autoregressive BID4 and adversarial training frameworks BID16.Generative adversarial networks (GANs) BID6 can be seen as a two player minimax game where the first player, the generator, is tasked with transforming a random input to a specific distribution such that the second player, the discriminator, can not distinguish between the true and synthesized distributions. The most distinct feature of adversarial networks is the discriminator that assesses the discrepancy between the current and target distributions. The discriminator acts as a progressively precise critic of an increasingly accurate generator. Despite their structured prediction capabilities, such a training paradigm is often unstable and can suffer from mode collapse. However, recent work on spectral normalization (SN) and gradient penalty has significantly increased training stability BID7. Conditional GANs (cGANs) BID19 incorporate conditional image information in the discriminator and have been widely used for class conditioned image generation. 
To that effect, unlike in standard GANs, a discriminator for cGANs discriminates between DISPLAYFORM0 Adversarial loss (a) Concatenated Image Conditioning x y Adversarial loss DISPLAYFORM1 Discriminator models for image conditioning. We propose fusing the features of the input and the ground truth or generated image rather than concatenating.the generated distribution and the target distribution on pairs of samples y and conditional information x. For class conditioning, several unique strategies have been presented to incorporate class information in the discriminator BID23 BID22. However, a cGAN can also be conditioned by structured data such as an image. Such conditioning is much more useful for structured prediction problems. Since the discriminator in image conditioned-GANs has access to large portions of the image the adversarial loss can be interpreted as a learned loss that incorporates higher order statistics, essentially eliminating the need to manually design higher order loss functions. Consequently, this variation of cGANs has extensively been used for image-to-image translation tasks. However, the best way of incorporating image conditional information into a GAN is not always clear and methods of feeding generated and conditional images to the discriminator tend to use a naive concatenation approach. In this work we address this gap by proposing a discriminator architecture specifically designed for image conditioning. Such a discriminator can contribute to the promise of generalization GANs bring to structured prediction problems whereby a singular and simplistic setup can be used for capturing higher order non-local structural information from higher dimensional data without complicated modeling of energy functions. Contributions. We propose an approach to incorporating conditional information into a cGAN using a fusion architecture (Fig. 1b). In particular, we make the following key contributions:1. We propose a novel discriminator architecture optimized for incorporating conditional information in cGANs for structured prediction tasks. The method is designed to incorporate conditional information in feature space and thereby allows the discriminator to enforce higher-order consistency in the model. At the same time, this method is conceptually simpler than alternative structured prediction methods such as CNN-CRFs where higher-order potentials have to be manually incorporated in the loss function.2. We demonstrate the effectiveness of this method on a variety of structured prediction tasks including semantic segmentation, depth estimation, and generating real images from semantic masks. Our empirical study demonstrates that using a fusion discriminator is more effective in preserving high-order statistics and structural information in the data.2 RELATED WORK 2.1 CNN-CRF MODELS Models for structured prediction have been extensively studied in computer vision. In the past these models often entailed the construction of hand-engineered features. In 2015, BID15 demonstrated that a fully convolutional approach to semantic segmentation could yield state-ofthe-art at that time with no need for hand-engineering features. BID1 showed that post-processing the of a CNN with a conditional Markov random field led to significant improvements. Subsequent work by many authors have refined this approach by incorporating the CRF as a layer within a deep network and thereby enabling the parameters of both models to be learnt simultaneously BID11. 
Many researchers have used this approach for other structured prediction problems, including image-to-image translation and depth estimation BID14.In most cases CNN-CRF models only incorporate unary and pairwise potentials. Recent work by BID0 has investigated incorporating higher-order potentials into CNN-based models for semantic segmentation, and has found that while it is possible to learn the parameters of these potentials, they can be tedious to incorporate and render the model quite complex. There is a need for developing methods that can incorporate higher-order statistical information with out manual modeling of higher order potentials. Adversarial Training. Generative adversarial networks were introduced in BID6. A GAN consists of a pair of models (G, D), where G attempts to model the distribution of the source domain and D attempts to evaluate the divergence between the generative distribution q and the true distribution p. GANs are trained by training the discriminator and the generator in turn, iteratively refining both the quality of the generated data and the discriminator's ability to distinguish between p and q. The is that D and G compete to reach a Nash equilibrium that can be expressed by the training procedure. While GAN training is often unstable and prone to issues such as mode collapse, recent developments such as spectral normalization and gradient penalty have increased GAN training stability BID7. Furthermore, GANs have the advantage of being able to access the joint configuration of many variables, thus enabling a GAN to enforce higher-order consistency that is difficult to enforce via other methods BID16.Conditional GANs. A conditional GAN (cGAN) is a GAN designed to incorporate conditional information BID19. cGANs have shown promise for several tasks such as class conditional image synthesis and image-to-image translation BID19. There are several advantages to using the cGAN model for structured prediction, including the simplicity of the framework. Image conditioned cGANs can be seen as a structured prediction problem tasked with learning a new representation given an input image while making use of non-local dependencies. However, the method by which the conditional information should be incorporated into the model is often unmotivated. Usually, the conditional data is concatenated to some layers in the discriminator (often the input layers). A notable exception to this methodology is the projection cGAN, where for data is either known or assumed to follow certain simple distributions and a hard mathematical rule for incorporating conditional data can be derived from the underlying probabilistic graphical model. As mentioned in, the method is more likely to produce good if the data follows one of the prescribed distributions. For structured prediction tasks where the GAN framework has to be conditioned by an image, this is often not the case. In the following section we introduce the fusion discriminator and explain the motivation behind it. As mentioned, the most significant part of cGANs for structured prediction is the discriminator. The discriminator has continuous access to pairs of the generated data or real data y and the conditional information (i.e. the image) x. The cGAN discriminator can then be defined as, DISPLAYFORM0, where A is the activation function, and f is a function of x and y and θ represents the parameters of f. Let p and q designate the true and the generated distributions. 
The adversarial loss for the discriminator can then be defined as DISPLAYFORM1 Here, A represents the sigmoid function, D represents the conditional discriminator, and G represents the generator. By design, this frameworks allows the discriminator to significantly effect the generator BID6. The most common approach currently in use to incorporate conditional image information into a GAN is to concatenate the conditional image information to the input of the discriminator at some layer, often the first. Other approaches for conditional information fusion are limited to class conditional fusion where conditional information is often a one-hot vector rather than higher dimensional structured data. Since the discriminator classifies pairs of input and output images, concatenating high-dimensional data may not exploit some inherent dependencies in the structure of the data. Fusing the input and output information in an intuitive way such as to preserve the dependencies is instrumental in designing an adversarial framework with high structural capacity. We propose the use of a fusion discriminator architecture with two branches. The branches of this discriminator are convolutional neural networks, say φ(x) and ψ(y), that extract features from both the conditional data and the generated or real data respectively. The features extracted from the conditional data are then fused with the features from the real or generated data at various stages FIG0. The proposed discriminator architecture is similar to the encoder-part of the FuseNet architecture, which has been used to incorporate depth information from RGB-D images for semantic segmentation BID8. In FIG0, we illustrate a four layer and a VGG16-style fusion discriminator, in which both branches are similar in depth and structure to the VGG16 model BID26. The key ingredient of the fusion discriminator architecture is the fusion block, which combines the feature maps of x and y. The fusion layer (red, FIG0) is implemented as element-wise summation and is always inserted after a convolution → spectral normalization → ReLU instance. By making use of this fusion layer the discontinuities in the features maps of x and y are added into the y branch in order to enhance the overall feature maps. This preserves representation from both x and y. For structured prediction tasks x and y often have features that complement each other; for instance, in tasks like depth estimation, semantic segmentation, and image synthesis x and y all have complimentary features. Theoretical Motivation. When the data is passed through two networks with identical architectures and the activations at corresponding layers are added, the effect in general is to pass forward through the combined network (the upper branch in FIG0) a stronger signal than would be passed forward by applying an activation to concatenated data. To see this, suppose the k th feature map in the l th layer is denoted by hk. Let the weights and biases for this feature and layer be denoted W DISPLAYFORM0 T and b DISPLAYFORM1 Further, let h = x T y T T, where x and y represent the learned features from the conditional and real or generated data respectively. Assuming a ReLU activation function, DISPLAYFORM2 Based on the inequality in Eq. 4 we demonstrate that the fusion of the activations in ψ(x) and φ(y) produces a stronger signal than the activation on concatenated inputs. Indeed, strengthening some of the activations is not by any means a guarantee of improved performance in general. 
However, using the fusion discriminator not only increases the neuron-wise activation values but also preserves activations at different neuron locations. In the context of conditional GANs the fusing operation in the strongest signals being passed through the discriminator specifically at those places where the model finds useful information simultaneously in both the conditional data and the real or generated data. In addition, the model preserves a signal, albeit a weaker one, when either x or y contain useful information both for the learning of higher-level features and for the discriminator's eventual classification of the data. Empirical Motivation. We use gradient-weighted Class Activation Mapping (Grad-CAM) BID25 which uses the class-specific gradient information going into the final convolutional layer of a trained CNN to produce a coarse localization map of the important regions in the image. We visualized the outputs of a fusion and concatenated discriminator for several different tasks to observe the structure and strength of the signal being passed forward. We observed that the fusion discriminator architecture always had a visually strong signal at important features for the given task. Representative images from classifying x and y pairs as'real' for two different structured prediction tasks are shown in FIG1. This provides visual evidence that a fusion discriminator preserves more structural information from the input and output image pairs and classifies overlapping patches based on that information. Indeed, this is not evidence that a stronger signal will lead to a more accurate classification, but it is a heuristic justification that more representative features from x and y will be used to make the determination. In order to evaluate the effectiveness of the proposed fusion discriminator we conducted three sets of experiments on structured prediction problems: 1) generating real images from semantic masks (Cityscapes); 2) semantic segmentation (Cityscapes); 3) depth estimation (NYU v2). For all three tasks we used a U-Net based generator. We applied spectral normalization to all weights of the In order to demonstrate the structure preserving abilities of our discriminator we use the proposed setup in the image-to-image translation setting. We focus on the application of generating realistic images from semantic labels. This application has recently been studied for generating realistic synthetic data for self driving cars BID28 BID3. Unlike recent approaches where the objective is to generate increasingly realistic high definition (HD) images, the purpose of this experiment is to explore if a generic fusion discriminator can outperform a concatenated discriminator when using a simple generator. We used 2,975 training images from the Cityscapes dataset BID5 and re-scaled them to 256 × 256 for computational efficiency. The provided Cityscapes test set with 500 images was used for testing. Our ablation study focused on changing the discriminator between a standard 4-layer concatenation discriminator used in seminal image-to-image translation work, a combination of this 4-layer discriminator with spectral normalization (SN), a VGG-16 concatenation discriminator and the proposed 4-layer and VGG-16 fusion discriminators. Since standard GAN evaluation metrics such as inception score and FID can not directly be applied to image-to-image translation tasks we use an evaluation technique previously used for such image synthesis. 
To quantitatively evaluate and comparatively analyze the effectiveness of our proposed discriminator architecture we perform semantic segmentation on synthesized images and compare the similarity between the predicted segments and the input. The intuition behind this kind of experimentation is that if the generated images corresponds to the input label map an existing semantic segmentation model such as a PSPNet BID29 should be able to predict the input segmentation mask. Similar experimentation has been suggested in and. Table 1 reports segmentation both pixel-wise accuracy and overall intersection-over-union (IoU), the proposed fusion discriminator outperforms the concatenated discriminator by a large margin. Our is closer to the theoretical upper bound achieved by real images. This confirms that the fusion discriminator contributes to preserving more Figure 5: A comparative analysis of concatenation, projection and fusion discriminators on three different structured prediction tasks, i.e., image synthesis, semantic segmentation, and depth estimation. Table 1: PSPNet-based semantic segmentation IoU and accuracy scores using generated images from different discriminators. Our outperform concatenation-based methods by a large margin and is close to the accuracy and IoU on actual images (GT/Oracle). Mean IoU Pixel Accuracy 4-Layer Concat. structure in the output image. The fusion discriminator could be used with high definition images, however, such analysis is beyond the scope of the current study. Representative images for this task are shown in FIG2. Fig. 5 shows a comparative analysis of the concatenation, projection and fusion discriminators in an ablation study upto 550k iterations. The projection discriminator was modified image conditioning according to the explanation given in for the super-resolution task. Semantic segmentation is vital for visual scene understanding and is often formulated as a dense labeling problem where the objective is to predict the category label for each individual pixel. Semantic segmentation is a classical structured prediction problem and CNNs with pixel-wise loss often fail to make accurate predictions BID16. Much better have been achieved by incorporating higher order statistics in the image using CRFs as a post-processing step or jointly training them with CNNs BID2. It has been shown that incorporating higher order potentials continues to improve semantic segmentation improvement, making this an ideal task for evaluating the structured prediction capabilities of GANs and their enhancement using our proposed discriminator. Here, we empirically validate that the adversarial framework with the fusion discriminator can preserve more spacial context in comparison to CNN-CRF setups. We demonstrate that our proposed fusion discriminator is equipped with the ability to preserve higher order details. For comparative analysis we compare with relatively shallow and deep architectures for both concatenation and fusion discriminators. We also conduct an ablation study to analyze the effect of spectral normalization. The generator for all semantic segmentation experiments was a U-Net. For the experiment without spectral normalization, we trained each model for 950k iterations, which was sufficient for Depth estimation is another structured prediction task that has been extensively studied because of its wide spread applications in computer vision. 
As with semantic segmentation, both per-pixel losses and non-local losses such as CNN-CRFs have been widely used for depth estimation. Stateof-the art with depth estimation has been achieved using a hierarchical chain of non-local losses. We argue that it is possible to incorporate higher order information using a simple adversarial loss with a fusion discriminator. In order to validate our claims we conducted a series of experiments with different discriminators, similar to the series of experiments conducted for semantic segmentation. We used the Eigen testtrain split for the NYU v2? dataset containing 1449 images for training and 464 images for testing. We observed that as with image synthesis and semantic segmentation the fusion discriminator outperforms concatenation-based methods and pairwise CNN-CRF methods every time. Structured prediction problems can be posed as image conditioned GAN problems. The discriminator plays a crucial role in incorporating non-local information in adversarial training setups for structured prediction problems. Image conditioned GANs usually feed concatenated input and output pairs to the discriminator. In this research, we proposed a model for the discriminator of cGANs that involves fusing features from both the input and the output image in feature space. This method provides the discriminator a hierarchy of features at different scales from the conditional data, and thereby allows the discriminator to capture higher-order statistics from the data. We qualitatively demonstrate and empirically validate that this simple modification can significantly improve the general adversarial framework for structured prediction tasks. The presented in this paper strongly suggest that the mechanism of feeding paired information into the discriminator in image conditioned GAN problems is of paramount importance. The generator G tries to minimize the loss expressed by equation 5 while the discriminator D tries to maximize it. In addition, we impose an L1 reconstruction loss: DISPLAYFORM0 leading to the objective, DISPLAYFORM1 6.2 GENERATOR ARCHITECTURE We adapt our network architectures from those explained in. Let CSRk denote a Convolution-Spectral Norm -ReLU layer with k filters. Let CSRDk donate a similar layer with dropout with a rate of 0.5. All convolutions chosen are 4 × 4 spatial filters applied with a stride 2, and in decoders they are up-sampled by 2. All networks were trained from scratch and weights were initialized from a Gaussian distribution of mean 0 and standard deviation of 0.02. All images were cropped and rescaled to 256 × 256, were up sampled to 268 × 286 and then randomly cropped back to 256 × 256 to incorporate random jitter in the model. Decoder: CSRD512→CSRD1024→CSRD1024→CSR1024→CSR1024→CSR512→CSR256→CSR128The last layer in the decoder is followed by a convolution to map the number of output channels (3 in the case of image synthesis and semantic labels and 1 in the case of depth estimation). This is followed by a Tanh function. Leaky ReLUs were used throughout the encoder with a slope of 0.2, regular ReLUs were used in the decoder. Skip connections are placed between each layer l in the encoder and layer ln in the decoder assuming l is the maximum number of layers. The skip connections concatenate activations from the l th layer to layer (l − n) th later. | We propose a novel way to incorporate conditional image information into the discriminator of GANs using feature fusion that can be used for structured prediction tasks. | 1,048 | scitldr |
Intelligent creatures can explore their environments and learn useful skills without supervision. In this paper, we propose "Diversity is All You Need" (DIAYN), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy. On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping. In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward. We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks. Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning.

Deep reinforcement learning (RL) has been demonstrated to effectively learn a wide range of reward-driven skills, including playing games, controlling robots, and navigating complex environments. However, intelligent creatures can explore their environments and learn useful skills even without supervision, so that when they are later faced with specific goals, they can use those skills to satisfy the new goals quickly and efficiently. Learning skills without reward has several practical applications. Environments with sparse rewards effectively have no reward until the agent randomly reaches a goal state. Learning useful skills without supervision may help address challenges in exploration in these environments. For long-horizon tasks, skills discovered without reward can serve as primitives for hierarchical RL, effectively shortening the episode length. In many practical settings, interacting with the environment is essentially free, but evaluating the reward requires human feedback (BID7). Unsupervised learning of skills may reduce the amount of supervision necessary to learn a task. While we can take the human out of the loop by designing a reward function, it is challenging to design a reward function that elicits the desired behaviors from the agent. Finally, when given an unfamiliar environment, it is challenging to determine what tasks an agent should be able to learn. Unsupervised skill discovery partially answers this question.

Autonomous acquisition of useful skills without any reward signal is an exceedingly challenging problem. A skill is a latent-conditioned policy that alters the state of the environment in a consistent way. We consider the setting where the reward function is unknown, so we want to learn a set of skills by maximizing the utility of this set. Making progress on this problem requires specifying a learning objective that ensures that each skill individually is distinct and that the skills collectively explore large parts of the state space. In this paper, we show how a simple objective based on mutual information can enable RL agents to autonomously discover such skills. These skills are useful for a number of applications, including hierarchical reinforcement learning and imitation learning. We propose a method for learning diverse skills with deep RL in the absence of any rewards. We hypothesize that in order to acquire skills that are useful, we must train the skills so that they maximize coverage over the set of possible behaviors.
While one skill might perform a useless behavior like random dithering, other skills should perform behaviors that are distinguishable from random dithering, and therefore more useful. A key idea in our work is to use discriminability between skills as an objective. Further, skills that are distinguishable are not necessarily maximally diverse: a slight difference in states makes two skills distinguishable, but not necessarily diverse in a semantically meaningful way. To combat this problem, we want to learn skills that not only are distinguishable, but also are as diverse as possible. By learning distinguishable skills that are as random as possible, we can "push" the skills away from each other, making each skill robust to perturbations and effectively exploring the environment. By maximizing this objective, we can learn skills that run forward, do backflips, skip backwards, and perform face flops (see Figure 3).

Our paper makes five contributions. First, we propose a method for learning useful skills without any rewards. We formalize our discriminability goal as maximizing an information theoretic objective with a maximum entropy policy. Second, we show that this simple exploration objective results in the unsupervised emergence of diverse skills, such as running and jumping, on several simulated robotic tasks. In a number of RL benchmark environments, our method is able to solve the benchmark task despite never receiving the true task reward. In these environments, some of the learned skills correspond to solving the task, and each skill that solves the task does so in a distinct manner. Third, we propose a simple method for using learned skills for hierarchical RL and find that this method solves challenging tasks. Fourth, we demonstrate how the discovered skills can be quickly adapted to solve a new task. Finally, we show how the discovered skills can be used for imitation learning.

Previous work on hierarchical RL has learned skills to maximize a single, known reward function by jointly learning a set of skills and a meta-controller (e.g., BID2). One problem with joint training (also noted in prior work) is that the meta-policy does not select "bad" options, so these options do not receive any reward signal to improve. Our work prevents this degeneracy by using a random meta-policy during unsupervised skill-learning, such that neither the skills nor the meta-policy are aiming to solve any single task. A second important difference is that our approach learns skills with no reward. Eschewing a reward function not only avoids the difficult problem of reward design, but also allows our method to learn task-agnostic skills. Related work has also examined connections between RL and information theory and developed maximum entropy algorithms with these ideas (Haarnoja et al., 2018). Recent work has also applied tools from information theory to skill discovery. Prior methods use the mutual information between states and actions as a notion of empowerment for an intrinsically motivated agent. Our method maximizes the mutual information between states and skills, which can be interpreted as maximizing the empowerment of a hierarchical agent whose action space is the set of skills. Hausman et al., Florensa et al., and Gregor et al. (2016) showed that a discriminability objective is equivalent to maximizing the mutual information between the latent skill z and some aspect of the corresponding trajectory; these works considered settings with many tasks and reward functions, or with a single task reward. Three important distinctions allow us to apply our method to tasks significantly more complex than the gridworlds considered in this prior work. First, we use maximum entropy policies to force our skills to be diverse. Our theoretical analysis shows that including entropy maximization in the RL objective results in the mixture of skills being maximum entropy in aggregate. Second, we fix the prior distribution over skills, rather than learning it. Doing so prevents our method from collapsing to sampling only a handful of skills. Third, while the discriminator in prior work looks only at the final state, our discriminator looks at every state, which provides additional reward signal. These three crucial differences help explain how our method learns useful skills in complex environments.
Three important distinctions allow us to apply our method to tasks significantly more complex than the gridworlds in . First, we use maximum entropy policies to force our skills to be diverse. Our theoretical analysis shows that including entropy maximization in the RL objective in the mixture of skills being maximum entropy in aggregate. Second, we fix the prior distribution over skills, rather than learning it. Doing so prevents our method from collapsing to sampling only a handful of skills. Third, while the discriminator in while not converged do Sample skill z ∼ p(z) and initial state s0 ∼ p0(s) for t ← 1 to steps_per_episode do Sample action at ∼ π θ (at | st, z) from skill. Step environment: st+1 ∼ p(st+1 | st, at). Compute q φ (z | st+1) with discriminator. Set skill reward rt = log q φ (z | st+1) − log p(z) Update policy (θ) to maximize rt with SAC. Update discriminator (φ) with SGD.Figure 1: DIAYN Algorithm: We update the discriminator to better predict the skill, and update the skill to visit diverse states that make it more discriminable.the final state, our discriminator looks at every state, which provides additional reward signal. These three crucial differences help explain how our method learns useful skills in complex environments. Prior work in neuroevolution and evolutionary algorithms has studied how complex behaviors can be learned by directly maximizing diversity (a; b; ; ; ;). While this prior work uses diversity maximization to obtain better solutions, we aim to acquire complex skills with minimal supervision to improve efficiency (i.e., reduce the number of objective function queries) and as a stepping stone for imitation learning and hierarchical RL. We focus on deriving a general, information-theoretic objective that does not require manual design of distance metrics and can be applied to any RL task without additional engineering. Previous work has studied intrinsic motivation in humans and learned agents. BID3. While these previous works use an intrinsic motivation objective to learn a single policy, we propose an objective for learning many, diverse policies. Concurrent work BID0 draws ties between learning discriminable skills and variational autoencoders. We show that our method scales to more complex tasks, likely because of algorithmic design choices, such as our use of an off-policy RL algorithm and conditioning the discriminator on individual states. We consider an unsupervised RL paradigm in this work, where the agent is allowed an unsupervised "exploration" stage followed by a supervised stage. In our work, the aim of the unsupervised stage is to learn skills that eventually will make it easier to maximize the task reward in the supervised stage. Conveniently, because skills are learned without a priori knowledge of the task, the learned skills can be used for many different tasks. Our method for unsupervised skill discovery, DIAYN ("Diversity is All You Need"), builds off of three ideas. First, for skills to be useful, we want the skill to dictate the states that the agent visits. Different skills should visit different states, and hence be distinguishable. Second, we want to use states, not actions, to distinguish skills, because actions that do not affect the environment are not visible to an outside observer. For example, an outside observer cannot tell how much force a robotic arm applies when grasping a cup if the cup does not move. 
Finally, we encourage exploration and incentivize the skills to be as diverse as possible by learning skills that act as randomly as possible. Skills with high entropy that remain discriminable must explore a part of the state space far away from other skills, lest the randomness in their actions lead them to states where they cannot be distinguished. We construct our objective using notation from information theory: S and A are random variables for states and actions, respectively; Z ∼ p(z) is a latent variable, on which we condition our policy; we refer to the policy conditioned on a fixed Z as a "skill"; I(·; ·) and H[·] refer to mutual information and Shannon entropy, both computed with base e. In our objective, we maximize the mutual information between skills and states, I(S; Z), to encode the idea that the skill should control which states the agent visits. Conveniently, this mutual information dictates that we can infer the skill from the states visited. To ensure that states, not actions, are used to distinguish skills, we minimize the mutual information between skills and actions given the state, I(A; Z | S). Viewing all skills together with p(z) as a mixture of policies, we maximize the entropy H[A | S] of this mixture policy. In summary, we maximize the following objective with respect to our policy parameters, θ:

F(θ) ≜ I(S; Z) + H[A | S] − I(A; Z | S)    (1)
     = H[Z] − H[Z | S] + H[A | S, Z].    (2)

We rearranged our objective in Equation 2 to give intuition on how we optimize it. The first term encourages our prior distribution p(z) to have high entropy. We fix p(z) to be uniform in our approach, guaranteeing that it has maximum entropy. The second term suggests that it should be easy to infer the skill z from the current state. The third term suggests that each skill should act as randomly as possible, which we achieve by using a maximum entropy policy to represent each skill. As we cannot integrate over all states and skills to compute p(z | s) exactly, we approximate this posterior with a learned discriminator q_φ(z | s). Jensen's inequality tells us that replacing p(z | s) with q_φ(z | s) gives us a variational lower bound G(θ, φ) on our objective F(θ) (see BID1 for a detailed derivation):

F(θ) ≥ G(θ, φ) = H[A | S, Z] + E_{z∼p(z), s∼π(z)}[log q_φ(z | s) − log p(z)].

We implement DIAYN with soft actor critic (SAC), learning a policy π_θ(a | s, z) that is conditioned on the latent variable z. Soft actor critic maximizes the policy's entropy over actions, which takes care of the entropy term in our objective G. Following Haarnoja et al., we scale the entropy regularizer H[a | s, z] by α. We found empirically that α = 0.1 provided a good trade-off between exploration and discriminability. We maximize the expectation in G by replacing the task reward with the following pseudo-reward:

r_z(s, a) ≜ log q_φ(z | s) − log p(z).    (3)

We use a categorical distribution for p(z). During unsupervised learning, we sample a skill z ∼ p(z) at the start of each episode, and act according to that skill throughout the episode. The agent is rewarded for visiting states that are easy to discriminate, while the discriminator is updated to better infer the skill z from the states visited. Entropy regularization occurs as part of the SAC update. Unlike prior adversarial unsupervised RL methods (e.g., Sukhbaatar et al.), DIAYN forms a cooperative game, which avoids many of the instabilities of adversarial saddle-point formulations.
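To make the cooperative game concrete, the following is a minimal sketch of the two components that plug into the loop of Figure 1: the pseudo-reward of Equation 3 and a discriminator update. The linear-softmax discriminator, the state featurization, and the stubbed `sac_update` function are illustrative assumptions, not code from the paper; in the actual method, q_φ is a neural network and the policy step is an unmodified SAC update.

```python
import numpy as np

N_SKILLS = 20
p_z = np.full(N_SKILLS, 1.0 / N_SKILLS)           # fixed uniform prior p(z)

class SoftmaxDiscriminator:
    """q_phi(z | s): a linear-softmax classifier trained by SGD on -log q_phi(z | s)."""
    def __init__(self, state_dim, n_skills, lr=1e-2):
        self.W = np.zeros((n_skills, state_dim))
        self.b = np.zeros(n_skills)
        self.lr = lr

    def probs(self, s):
        logits = self.W @ s + self.b
        logits -= logits.max()                     # numerical stability
        e = np.exp(logits)
        return e / e.sum()

    def update(self, s, z):
        p = self.probs(s)
        grad = p.copy()
        grad[z] -= 1.0                             # gradient of -log p_z w.r.t. the logits
        self.W -= self.lr * np.outer(grad, s)
        self.b -= self.lr * grad

def pseudo_reward(disc, s, z):
    # Eq. 3: r_z(s) = log q_phi(z | s) - log p(z); non-negative once q_phi beats chance.
    return np.log(disc.probs(s)[z] + 1e-12) - np.log(p_z[z])

def sac_update(policy, transition, reward):
    # Placeholder: the method feeds `reward` into an otherwise unmodified SAC step here.
    pass
```

In the outer loop, a skill z is sampled once per episode, every transition is relabeled with `pseudo_reward`, and policy and discriminator updates alternate, which is the cooperative structure described above.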
On gridworlds, we can compute analytically that the unique optimum to the DIAYN optimization problem is to evenly partition the states between skills, with each skill assuming a uniform stationary distribution over its partition (proof in Appendix B). In the continuous and approximate setting, convergence guarantees would be desirable, but this is a very tall order: even standard RL methods with function approximation (e.g., DQN) lack convergence guarantees, yet such techniques are still useful. Empirically, we find DIAYN to be robust to random seed; varying the random seed does not noticeably affect the skills learned, and has little effect on downstream tasks (see Fig.s 4, 6, and 13). In this section, we evaluate DIAYN and compare to prior work. First, we analyze the skills themselves, providing intuition for the types of skills learned, the training dynamics, and how we avoid problematic behavior in previous work. In the second half, we show how the skills can be used for downstream tasks, via policy initialization, hierarchy, imitation, outperforming competitive baselines on most tasks. We encourage readers to view videos 3 and code 4 for our experiments. (We study the skills learned by DIAYN on tasks of increasing complexity, ranging from point navigation (2 dimensions) to ant locomotion (111 dimensions). We first applied DIAYN to a simple 2D navigation environment. The agent starts in the center of the box, and can take actions to directly move its (x, y) position. FIG1 illustrates how the 6 skills learned for this task move away from each other to remain distinguishable. Next, we applied DIAYN to two classic control tasks, inverted pendulum and mountain car. Not only does our approach learn skills that solve the task without rewards, it learns multiple distinct skills for solving the task. (See Appendix D for further analysis.)Finally, we applied DIAYN to three continuous control tasks BID6: half cheetah, hopper, and ant. As shown in Figure 3, we learn a diverse set of primitive behaviors for all tasks. For half cheetah, we learn skills for running forwards and backwards at various speeds, as well as skills for doing flips and falling over; ant learns skills for jumping and walking in many types of curved trajectories (though none walk in a straight line); hopper learns skills for balancing, hopping forward and backwards, and diving. See Appendix D.4 for a comparison with VIME. While DIAYN learns skills without a reward function, as an outside observer, can we evaluate the skills throughout training to understand the training dynamics. FIG1 shows how the skills for inverted pendulum and mountain car become increasingly diverse throughout training (FIG12 repeats this experiment for 5 random seeds, and shows that are robust to initialization). Recall that our skills are learned with no reward, so it is natural that some skills correspond to small task reward while others correspond to large task reward. Question 3. Does discriminating on single states restrict DIAYN to learn skills that visit disjoint sets of states?Our discriminator operates at the level of states, not trajectories. While DIAYN favors skills that do not overlap, our method is not limited to learning skills that visit entirely disjoint sets of states. FIG1 shows a simple experiment illustrating this. The agent starts in a hallway (green star), and can move more freely once exiting the end of the hallway into a large room. 
Because RL agents are incentivized to maximize their cumulative reward, they may take actions that initially give no reward to reach states that eventually give high reward. In this environment, DIAYN learns skills that exit the hallway to make them mutually distinguishable. The key difference from the most similar prior work on unsupervised skill discovery, VIC, is our decision to not learn the prior p(z). We found that VIC suffers from the "Matthew Effect": VIC's learned prior p(z) will sample the more diverse skills more frequently, and hence only those skills will receive training signal to improve. To study this, we evaluated DIAYN and VIC on the half-cheetah environment, and plotting the effective number of skills (measured as exp(H[Z])) throughout training (details and more figures in Appendix E.2). The figure to the right shows how VIC quickly converges to a setting where it only samples a handful of skills. In contrast, DIAYN fixes the distribution over skills, which allows us to discover more diverse skills. The perhaps surprising finding that we can discover diverse skills without a reward function creates a building block for many problems in RL. For example, to find a policy that achieves a high reward on a task, it is often sufficient to simply choose the skill with largest reward. Three less obvious applications are adapting skills to maximize a reward, hierarchical RL, and imitation learning. After DIAYN learns task-agnostic skills without supervision, we can quickly adapt the skills to solve a desired task. Akin to the use of pre-trained models in computer vision, we propose that DIAYN can serve as unsupervised pre-training for more sample-efficient finetuning of task-specific policies. Question 5. Can we use learned skills to directly maximize the task reward?We take the skill with highest reward for each benchmark task and further finetune this skill using the task-specific reward function. We compare to a "random initialization" baseline that is initialized from scratch. Our approach differs from this baseline only in how weights are initialized. We initialize both the policy and value networks with weights learned during unsupervised pretraining. Although the critic networks learned during pretraining corresponds to the pseudo-reward from the discriminator (Eq. 3) and not the true task reward, we found empirically that the pseudo-reward was close to the true task reward for the best skill, and initializing the critic in addition to the actor further sped up learning. FIG3 shows both methods applied to half cheetah, hopper, and ant. We assume that the unsupervised pretraining is free (e.g., only the reward function is expensive to compute) or can be amortized across many tasks, so we omit pretraining steps from this plot. On all tasks, unsupervised pretraining enables the agent to learn the benchmark task more quickly. In theory, hierarchical RL should decompose a complex task into motion primitives, which may be reused for multiple tasks. In practice, algorithms for hierarchical RL can encounter many problems: each motion primitive reduces to a single action BID2, the hierarchical policy only samples a single motion primitive , or all motion primitives attempt to do the entire task. In contrast, DIAYN discovers diverse, task-agnostic skills, which hold the promise of acting as a building block for hierarchical RL.Question 6. 
Are skills discovered by DIAYN useful for hierarchical RL?We propose a simple extension to DIAYN for hierarchical RL, and find that simple algorithm outperforms competitive baselines on two challenging tasks. To use the discovered skills for hierarchical RL, we learn a meta-controller whose actions are to choose which skill to execute for the next k steps (100 for ant navigation, 10 for cheetah hurdle). The meta-controller has the same observation space as the skills. As an initial test, we applied the hierarchical RL algorithm to a simple 2D point navigation task (details in Appendix C.2). FIG4 illustrates how the reward on this task increases with the number of skills; error bars show the standard deviation across 5 random seeds. To ensure that our goals were not cherry picked, we sampled 25 goals evenly from the state space, and evaluated each random seed on all goals. We also compared to Variational Information Maximizing Exploration (VIME) . Note that even the best random seed from VIME significantly under-performs DIAYN. This is not surprising: whereas DIAYN learns a set of skills that effectively partition the state space, VIME attempts to learn a single policy that visits many states. Next, we applied the hierarchical algorithm to two challenging simulated robotics environment. On the cheetah hurdle task, the agent is rewarded for bounding up and over hurdles, while in the ant navigation task, the agent must walk to a set of 5 waypoints in a specific order, receiving only a sparse reward upon reaching each waypoint. The sparse reward and obstacles in these environments make them exceedingly difficult for non-hierarchical RL algorithms. Indeed, state of the art RL algorithms that do not use hierarchies perform poorly on these tasks. FIG6 shows how DIAYN outperforms state of the art on-policy RL (TRPO (a)), off-policy RL (SAC ), and exploration bonuses (VIME). This experiment suggests that unsupervised skill learning provides an effective mechanism for combating challenges of exploration and sparse rewards in RL. If the number of possible skills grows exponentially with the dimension of the task observation, one might imagine that DIAYN would fail to learn skills necessary to solve some tasks. While we found that DIAYN does scale to tasks with more than 100 dimensions (ant has 111), we can also use a simple modification to bias DIAYN towards discovering particular types of skills. We can condition the discriminator on only a subset of the observation space, or any other function of the observations. In this case, the discriminator maximizes E[log q φ (z | f (s))]. For example, in the ant navigation task, f (s) could compute the agent's center of mass, and DIAYN would learn skills that correspond to changing the center of mass. The "DIAYN+prior" in FIG6 (right) shows how incorporating this prior knowledge can aid DIAYN in discovering useful skills and boost performance on the hierarchical task. (No other experiments or figures in this paper used this prior.) The key takeaway is that while DIAYN is primarily an unsupervised RL algorithm, there is a simple mechanism for incorporating supervision when it is available. Unsurprisingly, we perform better on hierarchical tasks when incorporating more supervision. Expert trajectories DIAYN imitations Figure 9: Imitating an expert: DIAYN imitates an expert standing upright, flipping, and faceplanting, but fails to imitate a handstand. Question 8. 
Can we use learned skills to imitate an expert?Aside from maximizing reward with finetuning and hierarchical RL, we can also use learned skills to follow expert demonstrations. One use-case is where a human manually controls the agent to complete a task that we would like to automate. Simply replaying the human's actions fails in stochastic environments, cases where closed-loop control is necessary. A second use-case involves an existing agent with a hard coded, manually designed policy. Imitation learning replaces the existing policy with a similar yet differentiable policy, which might be easier to update in response to new constraints or objectives. We consider the setting where we are given an expert trajectory consisting of states, without actions, defined as τ * = {(s i)} 1≤i≤N. Our goal is to obtain a feedback controller that will reach the same states. Given the expert trajectory, we use our learned discriminator to estimate which skill was most likely to have generated the trajectory. This optimization problem, which we solve for categorical z by enumeration, is equivalent to an M-projection : DISPLAYFORM0 We qualitatively evaluate this approach to imitation learning on half cheetah. Figure 9 (left) shows four imitation tasks, three of which our method successfully imitates. We quantitatively evaluate this imitation method on classic control tasks in Appendix G. In this paper, we present DIAYN, a method for learning skills without reward functions. We show that DIAYN learns diverse skills for complex tasks, often solving benchmark tasks with one of the learned skills without actually receiving any task reward. We further proposed methods for using the learned skills to quickly adapt to a new task, to solve complex tasks via hierarchical RL, and to imitate an expert. As a rule of thumb, DIAYN may make learning a task easier by replacing the task's complex action space with a set of useful skills. DIAYN could be combined with methods for augmenting the observation space and reward function. Using the common language of information theory, a joint objective can likely be derived. DIAYN may also more efficiently learn from human preferences by having humans select among learned skills. Finally, the skills produced by DIAYN might be used by game designers to allow players to control complex robots and by artists to animate characters. The log p(z) term in Equation 3 is a baseline that does not depend on the policy parameters θ, so one might be tempted to remove it from the objective. We provide a two justifications for keeping it. First, assume that episodes never terminate, but all skills eventually converge to some absorbing state (e.g., with all sensors broken). At this state, the discriminator cannot distinguish the skills, so its estimate is log q(z | s) = log(1/N), where N is the number of skills. For practical reasons, we want to restart the episode after the agent reaches the absorbing state. Subtracting log(z) from the pseudo-reward at every time step in our finite length episodes is equivalent to pretending that episodes never terminate and the agent gets reward log(z) after our "artificial" termination. Second, assuming our discriminator q φ is better than chance, we see that q φ (z | s) ≥ p(z). Thus, subtracting the log p(z) baseline ensures our reward function is always non-negative, encouraging the agent to stay alive. Without this baseline, an optimal agent would end the episode as soon as possible. 
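For concreteness, the skill-retrieval step used for imitation above, which we solve by enumerating the categorical z, can be sketched as follows. The `disc.probs` interface is assumed to expose the discriminator's categorical q_φ(· | s), mirroring the earlier sketch; it is not the authors' released code.

```python
import numpy as np

def retrieve_skill(disc, expert_states, n_skills):
    """Return the skill z maximizing sum_t log q_phi(z | s_t) over a state-only expert trajectory."""
    scores = np.zeros(n_skills)
    for s in expert_states:
        scores += np.log(disc.probs(np.asarray(s, dtype=float)) + 1e-12)
    z_hat = int(np.argmax(scores))
    # The per-step score doubles as an estimate of how closely the retrieved skill
    # will track the expert, before the imitation policy is ever executed.
    return z_hat, scores[z_hat] / max(len(expert_states), 1)
```

Summing log probabilities over time implements the product over states in the M-projection objective, and the normalized score provides the confidence estimate used when deciding whether to attempt an imitation.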
For simple environments, we can compute an analytic solution to the DIAYN objective. For example, consider an N × N gridworld, where actions are to move up/down/left/right. Any action can be taken in any state, but the agent will stay in place if it attempts to move out of the gridworld. We use (x, y) to refer to states, where x, y ∈ {1, 2, ..., N}. For simplicity, we assume that, for every skill, the distribution of states visited exactly equals that skill's stationary distribution over states. To clarify, we will use π_z to refer to the policy for skill z. We use ρ_{π_z} to indicate skill z's stationary distribution over states, and ρ̂_{π_z} as the empirical distribution over states within a single episode. Our assumption is equivalent to saying ρ_{π_z}(s) = ρ̂_{π_z}(s) for all s ∈ S. One way to ensure this is to assume infinite-length episodes. We want to show that a set of skills that evenly partitions the state space is the optimum of the DIAYN objective for this task. While we will show this only for the 2-skill case, the 4-skill case is analogous. The optimum policies for a set of two skills are those which evenly partition the state space. We will show that a top/bottom partition is one such (global) optimum. The left/right case is analogous.
Lemma B.1. A pair of skills with the state distributions given below (and shown in FIG8) is an optimum for the DIAYN objective with no entropy regularization (α = 0):

ρ_{π_1}(x, y) = 2/N² if y ≤ N/2 and 0 otherwise;    ρ_{π_2}(x, y) = 2/N² if y > N/2 and 0 otherwise.    (4)

Before proving Lemma B.1, we note that there exist policies that achieve these stationary distributions. FIG8 shows one such policy, where each arrow indicates a transition with probability 1/4. Note that when the agent is in the bottom row of yellow states, it does not transition to the green states, and instead stays in place with the remaining probability mass.
Proof. Recall that the DIAYN objective with no entropy regularization is F(θ) = H[Z] − H[Z | S]. Because the skills partition the states, we can always infer the skill from the state, so H[Z | S] = 0. By construction, the prior distribution over Z is uniform, so H[Z] = log 2 is maximized. Thus, a set of two skills that partition the state space maximizes the un-regularized DIAYN objective.
Next, we consider the regularized objective. In this case, we will show that while an even partition is not perfectly optimal, it is "close" to optimal, and its "distance" from optimal goes to zero as the gridworld grows in size. This analysis will give us additional insight into the skills preferred by the DIAYN objective.
Lemma B.2. A pair of skills with the state distributions given in Equation 4 achieves a DIAYN objective within a factor of O(1/N) of the optimum, where N is the gridworld size.
Proof. Recall that the DIAYN objective with entropy regularization is F(θ) = H[A | S, Z] + H[Z] − H[Z | S]. We have already computed the second two terms in the previous proof: H[Z | S] = 0 and H[Z] = log 2. For computing the first term, it is helpful to define the set of "border states" for a particular skill as those that neighbor the other skill's partition. For skill 1 defined in FIG8, these are the states adjacent to skill 2's half of the grid. At border states, a skill cannot act fully at random without risking a transition into the other skill's partition, so the entropy term H[A | S, Z] conflicts with discriminability at states along the border between two skills. Everything else being equal, this conflict encourages DIAYN to produce skills that have small borders, as shown in Figure 11. [Figure 11: The DIAYN objective prefers skills that (Left) partition states into sets with short borders and (Right) correspond to bottleneck states.]
For example, in a gridworld with dimensions N < M, a pair of skills that split along the first dimension (producing partitions of size (N, M/2)) would achieve a larger (better) objective than skills that split along the second dimension. This same intuition, that DIAYN seeks to minimize the border length between skills, results in DIAYN preferring partitions that correspond to bottleneck states (see Figure 11b). In our experiments, we use the same hyperparameters as those used for SAC, with one notable exception. For the Q function, value function, and policy, we use neural networks with 300 hidden units instead of 128 units. We found that increasing the model capacity was necessary to learn many diverse skills. When comparing the "skill initialization" to the "random initialization" in Section 4.2, we use the same model architecture for both methods. To pass the skill z to the Q function, value function, and policy, we simply concatenate z to the current state s_t. As in prior work, epochs are 1000 episodes long. For all environments, episodes are at most 1000 steps long, but may be shorter. For example, the standard benchmark hopper environment terminates the episode once it falls over. FIG1 shows up to 1000 epochs, which corresponds to at most 1 million steps. We found that learning was most stable when we scaled the maximum entropy objective (H[A | S, Z] in Eq. 1) by α = 0.1. We use this scaling for all experiments. The cheetah hurdle environment is a modification of HalfCheetah-v1, where we added boxes with shape H = 0.25m, W = 0.1m, D = 1.0m, where the width dimension is along the same axis as the cheetah's forward movement. We placed the boxes every 3 meters, starting at x = −1m. The ant navigation environment is a modification of Ant-v1. To improve stability, we follow Pong et al. and lower the gear ratio of all joints to 30. The goals are the corners of a square, centered at the origin, with a side length of 4 meters: [(2, 2), (2, −2), (−2, −2), (−2, 2)]. The ant starts at the origin, and receives a reward of +1 when its center of mass is within 0.5 meters of the correct next goal. Each reward can only be received once, so the maximum possible reward is +5. For the 2D navigation experiment shown in FIG4, we first learned a set of skills on the point environment. Next, we introduced a reward function r_g(s) = −||s − g||₂², penalizing the distance from the agent's state to some goal, and applied the hierarchical algorithm above. In this task, the DIAYN skills provided sufficient coverage of the state space that the hierarchical policy only needed to take a single action (i.e., choose a single skill) to complete the task. [FIG1: Objectives. We plot the two terms from our objective (Eq. 1) throughout training. While the entropy regularizer (blue) quickly plateaus, the discriminability term (orange) continues to increase, indicating that our skills become increasingly diverse without collapsing to deterministic policies. This plot shows the mean and standard deviation across 5 seeds for learning 20 skills in the half cheetah environment. Note that log(1/20) ≈ −3, setting a lower bound for log q_φ(z | s).] To provide further intuition into our approach, FIG1 plots the two terms in our objective throughout training. Our skills become increasingly diverse throughout training without converging to deterministic policies. To illustrate the stability of DIAYN to random seeds, we repeated the experiment in FIG1 for 5 random seeds. FIG12 illustrates that the random seed has little effect on the training dynamics. Question 9.
Does entropy regularization lead to more diverse skills? DISPLAYFORM0 To answer this question, we apply our method to a 2D point mass. The agent controls the orientation and forward velocity of the point, with is confined within a 2D box. We vary the entropy regularization α, with larger values of α corresponding to policies with more stochastic actions. With small α, we learn skills that move large distances in different directions but fail to explore large parts of the state space. Increasing α makes the skills visit a more diverse set of states, which may help with exploration in complex state spaces. It is difficult to discriminate skills when α is further increased. DISPLAYFORM1 Figure 15: Task reward of skills learned without reward: While our skills are learned without the task reward function, we evaluate each with the task reward function for analysis. The wide range of rewards shows the diversity of the learned skills. In the hopper and half cheetah tasks, many skills achieve large task reward, despite not observing the task reward during training. As discussed in prior work , standard model-free algorithms trained directly on the task reward converge to scores of 1000 -3000 on hopper, 1000 -5000 on cheetah, and 700 -2000 on ant. In FIG3, we take the skills learned without any rewards, and evaluate each of them on the standard benchmark reward function. We compare to random (untrained) skills. The wide distribution over rewards is evidence that the skills learned are diverse. For hopper, some skills hop or stand for the entire episode, receiving a reward of at least 1000. Other skills aggressively hop forwards or dive backwards, and receive rewards between 100 and 1000. Other skills fall over immediately and receive rewards of less than 100. The benchmark half cheetah reward includes a control penalty for taking actions. Unlike random skills, learned skills rarely have task reward near zero, indicating that all take actions to become distinguishable. Skills that run in place, flop on their nose, or do backflips receive reward of -100. Skills that receive substantially smaller reward correspond to running quickly backwards, while skills that receive substantially larger reward correspond to running forward. Similarly, the benchmark ant task reward includes both a control penalty and a survival bonus, so random skills that do nothing receive a task reward near 1000. While no single learned skill learns to run directly forward and obtain a task reward greater than 1000, our learned skills run in different patterns to become discriminable, ing in a lower task reward. Question 10. Does DIAYN explore effectively in complex environments?We apply DIAYN to three standard RL benchmark environments: half-cheetah, hopper, and ant. In all environments, we learn diverse locomotion primitives, as shown in Figure 3. Despite never receiving any reward, the half cheetah and hopper learn skills that move forward and achieve large task reward on the corresponding RL benchmarks, which all require them to move forward at a fast pace. Half cheetah and hopper also learn skills that move backwards, corresponding to receiving a task reward much smaller than what a random policy would receive. Unlike hopper and half cheetah, the ant is free to move in the XY plane. While it learns skills that move in different directions, most skills move in arcs rather than straight lines, meaning that we rarely learn a single skill that achieves large task reward on the typical task of running forward. 
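The post-hoc analysis described above, scoring skills that were trained without any reward under the benchmark task reward, reduces to rolling out each frozen skill and averaging its return. The sketch below assumes the classic Gym-style `env.reset()`/`env.step()` interface and a `policy.act(obs, z)` call for acting with a fixed skill; both interfaces are illustrative, not the paper's code.

```python
import numpy as np

def evaluate_skills_on_task(env, policy, n_skills, episodes=3, max_steps=1000):
    """Roll out each frozen skill and score it with the benchmark reward it never saw."""
    returns = np.zeros(n_skills)
    for z in range(n_skills):
        total = 0.0
        for _ in range(episodes):
            obs = env.reset()
            for _ in range(max_steps):
                obs, reward, done, _ = env.step(policy.act(obs, z))
                total += reward
                if done:
                    break
        returns[z] = total / episodes
    return returns   # e.g. np.argmax(returns) picks the skill to finetune on the task
```

The resulting array of per-skill returns is what underlies the reward histograms discussed here, and its argmax is the skill used for the finetuning comparison in the adaptation experiments.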
In the appendix, we visualize the objective throughout training. In FIG4, we evaluate all skills on three reward functions: running (maximize X coordinate), jumping (maximize Z coordinate) and moving (maximize L2 distance from origin). For each skill, DIAYN learns some skills that achieve high reward. We compare to single policy trained with a pure exploration objective (VIME ). Whereas previous work (e.g., Pathak et al. FORMULA0 ; BID4) finds a single policy that explores well, DIAYN optimizes a collection of policies, which enables more diverse exploration. FIG4: Exploration: We take DIAYN skills learned without a reward function, and evaluate on three natural reward functions: running, jumping, and moving away from the origin. For all tasks, DIAYN learns some skills that perform well. In contrast, a single policy that maximizes an exploration bonus (VIME) performs poorly on all tasks. We used our method as a starting point when comparing to VIC in Section 4.2. While p(z) is fixed in our method, we implement VIC by learning p(z). In this section, we describe how we learned p(z), and show the effect of learning p(z) rather than leaving it fixed. We choose p(z) to optimize the following objective, where p z (s) is the distribution over states induced by skill s: DISPLAYFORM0 For clarity, we define p t z (s) as the distribution over states induced by skill z at epoch t, and define t (z) as an approximation of E[log p(z | s)] using the policy and discriminator from epoch t: DISPLAYFORM1 Noting that p(z) is constrained to sum to 1, we can optimize this objective using the method of Lagrange multipliers. The corresponding Lagrangian is DISPLAYFORM2 Setting the derivative equal to zero, we get DISPLAYFORM3 and finally arrive at DISPLAYFORM4 Figure 17: Effect of learning p(z): We plot the effective number of skills that are sampled from the skill distribution p(z) throughout training. Note how learning p(z) greatly reduces the effective number on inverted pendulum and mountain car. We show from 3 random seeds for each environment. Figure 18: Learning p(z) with varying number of skills: We repeat the experiment in Figure 4 for varying sizes of z. Regardless of the size of z, learning p(z) causes the effective number of skills to drop to less than 10. The two subplots show the same data (Left) on a linear scale and (Right) logarithmic scale. We plot the mean and standard deviation across 3 random seeds. In this section, we briefly discuss the effect of learning p(z) rather than leaving it fixed. To study the effect of learning p(z), we compared the entropy of p(z) throughout training. When p(z) is fixed, the entropy is a constant (log ≈ 3.9). To convert nats to a more interpretable quantity, we compute the effective number of skills by exponentiation the entropy: effective num. skills e Figure 17 shows the effective number of skills for half cheetah, inverted pendulum, and mountain car. Note how the effective number of skills drops by a factor of 10x when we learn p(z). This observation supports our claim that learning p(z) in learning fewer diverse skills. FIG6 is a repeat of the experiment in FIG5, where we varying the dimension of z. Note that the dimension of z equals the maximum number of skills that the agent could learn. We observe that the effective number of skills plummets throughout training, even when using a high-dimensional vector for z. In this section, we visualize the skills learned for inverted pendulum and mountain car without a reward. 
Not only does our approach learn skills that solve the task without rewards, it learns multiple distinct skills for solving the task. FIG13 shows the X position of the agent across time, within one episode. For inverted pendulum FIG13, we plot only skills that solve the task. Horizontal lines with different X coordinates correspond to skills balancing the pendulum at different positions along the track. The periodic lines correspond to skills that oscillate back and forth while balancing the pendulum. Note that skills that oscillate have different X positions, amplitudes, and periods. For For every skill, we collect one trajectory and plot the agent's X coordinate across time. For inverted pendulum (top), we only plot skills that balance the pendulum. Note that among balancing skills, there is a wide diversity of balancing positions, control frequencies, and control magnitudes. For mountain car (bottom), we show skills that achieve larger reward (complete the task), skills with near-zero reward, and skills with very negative reward. Note that skills that solve the task (green) employ varying strategies. mountain car FIG13, skills that climb the mountain employ a variety of strategies for to do so. Most start moving backwards to gather enough speed to summit the mountain, while others start forwards, then go backwards, and then turn around to summit the mountain. Additionally, note that skills differ in when the turn around and in their velocity (slope of the green lines). Figures 20, 21, and 22 show more skills learned without reward. Given the expert trajectory, we use our learned discriminator to estimate which skill was most likely to have generated the trajectory:ẑ = arg max z Π st∈τ * q φ (z | s t)As motivation for this optimization problem, note that each skill induces a distribution over states, p z p(s | z). We use p * to denote the distribution over states for the expert policy. With a fixed prior distribution p(z) and a perfect discriminator q φ (z | s) = p(z | s), we have p(s | z) ∝ q φ (z | s) as a function of z. Thus, Equation G is an M-projection of the expert distribution over states onto the family of distributions over states, P = {p z}:arg min DISPLAYFORM0 For clarity, we omit a constant that depends only on p *. Note that the use of an M-projection, rather than an I-projection, helps guarantee that the retrieved skill will visit all states that the expert visits BID5. In our experiments, we solve Equation 5 by simply iterating over skills. The "expert" trajectories are actually generated synthetically in these experiments, by running a different random seed of our algorithm. A different seed is used to ensure that the trajectories are not actually produced by any of the currently available skills. Of course, in practice, the expert trajectories might be provided by any other means, including a human. For each expert trajectory, we retrieve the closest DIAYN skillẑ using Equation 4.2.3. Evaluating q φ (ẑ | τ *) gives us an estimate of the probability that the imitation will match the expert (e.g., for a safety critical setting). This quantity is useful for predicting how accurately our method will imitate an expert before executing the imitation policy. In a safety critical setting, a user may avoid attempting tasks where this score is low. We compare our method to three baselines. The "low entropy" baseline is a variant on our method with lower entropy regularization. The "learned p(z)" baseline learns the distribution over skills. 
Note that Variational Intrinsic Control is a combination of the "low entropy" baseline and the "learned p(z)" baseline. Finally, the "few skills" baseline learns only 5 skills, whereas all other methods learn 50. FIG1 shows the aggregated across 600 imitation tasks. The X-axis shows the discriminator score, our estimate for how well the imitation policy will match the expert. The Y-axis shows the true distance between the trajectories, as measured by L2 distance in state space. For all methods, the distance between the expert and the imitation decreases as the discriminator's score increases, indicating that the discriminator's score is a good predictor of task performance. Our method consistently achieves the lowest trajectory distance among all methods. The "low entropy" baseline is slightly worse, motivating our decision to learn maximum entropy skills. When imitating tasks using the "few skills" baseline, the imitation trajectories are even further from the expert trajectory. This is expected -by learning more skills, we obtain a better "coverage" over the space of skills. A "learn p(z)" baseline that learns the distribution over skills also performs poorly. Recalling that is a combination of the "low entropy" baseline and the "learn p(z)" baseline, this plot provides evidence that using maximum entropy policies and fixing the distribution for p(z) are two factors that enabled our method to scale to more complex tasks. | We propose an algorithm for learning useful skills without a reward function, and show how these skills can be used to solve downstream tasks. | 1,049 | scitldr |
Lexical ambiguity, i.e., the presence of two or more meanings for a single word, is an inherent and challenging problem for machine translation systems. Even though the use of recurrent neural networks and attention mechanisms are expected to solve this problem, machine translation systems are not always able to correctly translate lexically ambiguous sentences. In this work, I attempt to resolve the problem of lexical ambiguity in English--Japanese neural machine translation systems by combining a pretrained Bidirectional Encoder Representations from Transformer (BERT) language model that can produce contextualized word embeddings and a Transformer translation model, which is a state-of-the-art architecture for the machine translation task. These two proposed architectures have been shown to be more effective in translating ambiguous sentences than a vanilla Transformer model and the Google Translate system. Furthermore, one of the proposed models, the Transformer_BERT-WE, achieves a higher BLEU score compared to the vanilla Transformer model in terms of general translation, which is concrete proof that the use of contextualized word embeddings from BERT can not only solve the problem of lexical ambiguity, but also boost the translation quality in general. Lexical ambiguity, i.e., the presence of two or more meanings for a single word, is an inherent and challenging problem for machine translation systems. Even though the use of recurrent neural networks (RNN) and attention mechanisms are expected to solve this problem, machine translation systems are not always able to correctly translate lexically ambiguous sentences. In this work, we attempt to resolve the problem of lexical ambiguity in English-Japanese neural machine translation systems by combining a pretrained Bidirectional Encoder Representations from Transformer (BERT) language model that can produce contextualized word embeddings and a Transformer translation model, which is a state-of-the-art architecture for the machine translation task. These two proposed architectures have been shown to be more effective in translating ambiguous sentences than a vanilla Transformer model and the Google Translate system. Furthermore, one of the proposed models, the Transformer BERT−WE, achieves a higher BLEU score compared to the vanilla Transformer model in terms of general translation, which is concrete proof that the use of contextualized word embeddings from BERT can not only solve the problem of lexical ambiguity, but also boosts the translation quality in general. Machine translation is one of the most important tasks in the field of natural language processing. In 2014, Sutskever and his fellow researchers at Google introduced the sequence-to-sequence (seq2seq) model , marking the advent of neural machine translation (NMT) in a breakthrough in the field of machine translation. Since then, seq2seq models have been growing rapidly, evolving from a purely recurrent neural network (RNN)-based encoder-decoder model to recurrence-free models that rely on convolution or attention mechanisms . The Transformer architecture , which is based on attention mechanism, is currently the standard model for machine translation tasks because of its effectiveness and efficiency. It also provides a foundation for the advent of state-of-the-art language models, such as Bidirectional Encoder Representations from Transformer (BERT) and GPT-2 . 
Section 2 shows how seq2seq models transformed from a purely RNN-based encoder-decoder model to a transformer model that relies entirely on attention mechanism. Although many significant improvements have been made in the NMT field, lexical ambiguity is still a problem that causes difficulty for machine translation models. show that the performance of RNNbased seq2seq model decreases as the number of senses for each word increases. Section 3 demonstrates that even modern translation models, such as Google Translate, cannot translate some lexically ambiguous sentences and forms hypotheses concerning some causes of this problem. Section 4 describes the BERT language model and explains why BERT vector representations can help resolve the problem of lexical ambiguity. Subsequently, two context-aware machine translation architectures that integrate pretrained BERT and Transformer models are proposed in section 5. For comparison purposes, a vanilla Transformer was built with the same set of hyperparameters and trained with the same settings as the proposed models. Finally, the three models were evaluated based on two criteria: i.e., the capability to produce good translations in general and the ability to translate lexically ambiguous sentences. The evaluation and sample translations are shown in section 6.3. 2 Neural machine translation 2.1 Sequence-to-sequence model NMT is an approach to machine translation, where a large neural network model learns to predict the likelihood of a sequence of words given a source sentence in an end-to-end fashion. The neural network model used for machine translation is called a seq2seq model, which is composed of an encoder and a decoder. RNN and its variants such as long short-term memory (LSTM) and gated recurrent unit (GRU) have been a common choice to build a seq2seq model. The encoder, which is a multilayered RNN cell, encodes the input sequence x into a fixed-sized vector v, which is essentially the last hidden state of the encoder's RNN. The decoder, which is another RNN, maps this context vector to the target sequence y. In other words, a seq2seq model learns to maximize the conditional probability: where T and S are the lengths of the input sentence of the source language and the output sentence of the target language, respectively. The attention mechanism proposed by , is a significant improvement to seq2seq models. By using the attention mechanism, each position in the decoder can selectively focus on all positions in the encoder instead of relying entirely on the last hidden state of the encoder, which consequently boosts the model's capability to learn long-term dependencies. Basically, the attention mechanism is a mapping of a query and a set of key-value pairs to an output vector. Each query vector represents a decoder's hidden state h t, while the key and values vectors represent all the encoder's hidden states h s. The output vector c t is a weighted sum of the value vectors, where the weight corresponding to each value vector is computed by an alignment function. where the value of the similarity function describes to what extent an input at position s and an output at position t match. The two most commonly used similarity functions are additive attention and multiplicative attention, which were proposed by and respectively, as shown in Eq. 3. 2.3 Transformer The Transformer model was first introduced by , which removes all recurrence and relies entirely on selfattention. 
In a self-attention layer, all queries, keys, and values come from the same place. As a result, each position can attend to all other positions in the same sequence, which dispenses with the need for recurrence. This architecture not only outperforms RNN-based seq2seq models in terms of translation quality, but is also parallelizable and requires less time to train. Thus, it has replaced the RNN-based seq2seq model as the de facto standard in neural machine translation. Like other seq2seq models, a Transformer consists of an encoder and a decoder, as shown in Figure 1. The encoder is a stack of N = 6 identical layers, each of which is composed of two linked sublayers: a multihead attention mechanism and a fully connected feed-forward network. The decoder stack also consists of N = 6 identical layers. Unlike the encoder, each decoder layer is composed of three consecutive sublayers: a masked multihead attention, a multihead attention mechanism, and a fully connected feed-forward network. The first attention sublayer in each decoder layer performs self-attention, while the second one attends to the output of the encoder stack. In both the encoder and decoder, each sublayer's input and output are added using a residual connection and normalized using layer normalization. All sublayers in the model and the embedding layers produce outputs of dimension d_model = 512. The Transformer model uses a multihead attention mechanism, i.e., many attention functions are performed simultaneously in the same attention layer. Specifically, all queries, keys, and values of dimension d_model are projected h = 8 times with different learned linear projections to dimensions d_k, d_k, and d_v, respectively. Each attention head produces output vectors of dimension d_v, which are concatenated and linearly projected one more time to produce the final output vectors of dimension d_model. The alignment function used in the Transformer model is called scaled dot-product attention:

Attention(Q, K, V) = softmax(QKᵀ / √d_k) V.

Each fully connected feed-forward network in the model consists of a hidden layer with ReLU activation and an output layer, producing outputs of dimensions d_ff = 2048 and d_model = 512, respectively. The self-attention mechanism uses symmetrical matrix multiplication; therefore, it cannot capture information about the sequence order. Thus, in addition to word embedding layers, it is necessary to add some information about the relative or absolute positions of the tokens in the sequence. The positional encodings are added to the outputs of the embedding layers before being fed to the encoder and decoder stacks. For the model to attend to relative positions, the sine and cosine functions of different frequencies are used to generate the positional encodings:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model)),    PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)),

where pos is the position and i is the dimension. Lexical ambiguity, which is also called semantic ambiguity, can be defined as the presence of more than one meaning for a single word. Words that possess two or more possible meanings are called homographs. Translating homographs is not a trivial task because their meanings vary based on their contexts. For example, given the sentence "The fisherman went to the bank.", the word "bank" may refer to "a financial institution" or "the side of a river." In this case, it is acceptable to interpret this sentence in two ways. However, given another sentence, "A fisherman is sitting on the bank.", it is unreasonable to interpret the word "bank" as "a financial institution" in this case.
However, it is challenging for machine translation systems to produce a correct translation of this sentence, as shown in a later part of the paper. Even though many advancements have been made in the field of neural machine translation to date, contemporary translation systems still struggle to deal with semantic ambiguity. For instance, although the sentence "He left a book on the table and left the room." contains two instances of the word "left" with different meanings, Google Translate is able to translate it correctly, rendering the two senses with distinct Japanese expressions. On the other hand, Google Translate misinterprets the word "bank" in the sentence "A fisherman is sitting on the bank." and thus translates it into "漁師が銀行に座っています。". One hypothesis for the cause of this problem is that machine translation models use only one embedding vector to represent a homograph, even though the senses of a single homograph can be completely unrelated. Consequently, if a machine translation model fails to understand the meaning of a homograph from the context, it will tend to choose the dominant meaning. Furthermore, another hypothesis is that the parallel corpus used to train the Google Translate system does not provide enough information for the system to understand the context of the latter example. This problem of semantic ambiguity can be addressed if both of the following conditions are satisfied: 1. Different senses of a homograph are represented by different embedding vectors. To achieve this, the model needs to understand the meanings of homographs from the context. 2. The training set must be exceptionally large so that the model can properly understand the context of unseen sentences. Although it is possible to obtain a large monolingual data set, it becomes difficult when it comes to finding a parallel corpus. An extensive pretrained language representation model, BERT supports transfer learning and finetuning on a wide range of tasks, such as question answering and language inference. BERT is composed of multiple layers of bidirectional Transformer encoders, with 12 layers for the BERT BASE model and 24 layers for the BERT LARGE model. As a result, BERT can learn bidirectional word representations by conditioning on both left and right contexts, which outperforms unidirectional language models, such as ELMo and OpenAI-GPT. BERT is simultaneously pretrained on two different tasks: masked language modeling and next sentence prediction. For the masked language modeling task, 15% of all tokens in each sequence are selected at random to be predicted. If a token is chosen, it is replaced with a [MASK] token 80% of the time, with a random token 10% of the time, or with the unchanged token 10% of the time. BERT learns to predict the masked tokens instead of regenerating the entire input. [Figure 2: The BERT architecture] For the next sentence prediction task, BERT is trained on a collection of concatenated sentence pairs and tries to predict whether the two sentences in each pair are contiguous. Consequently, the pretrained BERT is capable of understanding the relationship between two sentences and can be finetuned on downstream tasks, such as question answering and natural language inference. The corpora used for pretraining BERT are the BooksCorpus (800M words) and English Wikipedia (2,500M words). BERT uses WordPiece embeddings with a vocabulary of 30,000 tokens. Each sequence input to BERT starts with a [CLS] token followed by two concatenated sentences, which are separated by a special [SEP] token.
Since the pretrained BERT generates the vector representation for a word by considering all other words in the same sequence, this vector can change dynamically with the context where it is used. In other words, BERT can produce different embeddings for different senses of the same word. To examine this feature of BERT, a pretrained BERT was used to generate the vector representations of words in different examples: 1. "There is an old man fishing on the bank." 2. "It's on the north bank of the Thames." 3. "a house on the banks of the River Severn" 4. " He jumped in and swam to the opposite bank." 5. "Many of these banks issue both credit and debit cards." 6. " My salary is paid directly into my bank." 7. "A group of ten international banks is to underwrite and sell the bonds." The words bank and banks in the first four examples mean "the edge of a river," while the ones in the next three examples mean "a financial institution." After word embeddings of dimension 768 are extracted from the pretrained BERT, t-SNE algorithm is used to extract the two most significant features of each word and the reduced word embeddings are visualized as shown in Figure 3. It can be clearly seen that the points representing the words "bank" and "banks" are clustered in two separate groups based on their meaning. Furthermore, another interesting point is that the words "bank" and "banks," which mean "the edge of a river" are located near related words such as "river," "fishing," and "swam," while the ones meaning "a financial institution" are near to some monetary terms such as "credit" and "mortgage." In the original Transformer architecture, each word in the predefined vocabulary list is represented by only one embedding vector. These word embeddings are trained as the model's parameters, therefore depend greatly on the limited training set. Apparently, the original Transformer does not satisfy the conditions mentioned in section 3. Consequently, it is unable to correctly translate semantically ambiguous sentences, as demonstrated in section 6.3 The BERT BASE model was integrated into a Transformer translation model to address lexical ambiguity in neural machine translation. Specifically, two architectures are proposed: Transformer BERT−WE using the pretrained BERT as input word embedding layer and Transformer BERT−Encoder replacing the encoder stack of a Transformer with the pretrained BERT model. The outputs of the last ten layers of the BERT were extracted and averaged. All the parameters of the pretrained BERT were kept unchanged during the training phase of both models. In this work, we implement and compare the performance of three models: a baseline Transformer, a Transformer BERT−WE, and a Transformer BERT−Encoder, which share the same hyperparameters for comparison purposes. We denote the number of layers in both the encoder and decoder stacks as N, the dimension of embedding layers and all sublayers' output as d model, the dimension of the inner layer in every fully connected feed-forward layer as d f f, and the number of attention heads as h. Due to the lack of memory capacity, N is set to 3, as opposed to N = 6 in the original Transformer paper. In addition, to match the hidden size h = 768 of the pretrained BERT model, d model and d f f are set to 768 and 3072 respectively. Dropout is applied to the output of each sublayer and the sum of the embeddings and positional encodings in both the encoder and decoder with a rate of 0.1. 
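Both proposed models consume contextualized vectors produced by the frozen BERT as described above: the hidden states of the last ten layers are extracted and averaged into one 768-dimensional vector per subword. A minimal sketch of that extraction step is given below using the Hugging Face transformers package; this toolkit choice and the helper names are assumptions for illustration, not details taken from the paper.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
bert.eval()   # BERT parameters stay frozen, as in both proposed models

def contextual_embeddings(sentence):
    """Return BERT's subword tokens and one 768-d vector per token (mean of the last 10 layers)."""
    inputs = tokenizer(sentence, return_tensors="pt")       # adds [CLS] ... [SEP]
    with torch.no_grad():
        hidden_states = bert(**inputs).hidden_states         # tuple: embedding layer + 12 layers
    avg = torch.stack(hidden_states[-10:]).mean(dim=0)       # shape (1, seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return tokens, avg[0]

tokens, vectors = contextual_embeddings("There is an old man fishing on the bank.")
bank_vector = vectors[tokens.index("bank")]   # fed to the Transformer (BERT-WE) or to a t-SNE probe
```

The same per-token vectors serve two purposes here: they are the inputs replacing the trainable embedding layer in Transformer_BERT-WE, and they are the representations projected with t-SNE for the "bank" visualization in Figure 3.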
The models are trained with Japanese-English Subtitle Corpus (JESC) , consisting of over 3.2 million sentence pairs. This data set includes casual language, colloquialisms, expository writing, and narrative discourse. The train/val/testsplits are of size 3,237,374 / 2000 / 2001. Rakuten MA (morphological analyzer) is used to tokenize Japanese sentences and tokens that appear at least 10 times are shortlisted. Likewise, as for the baseline Transformer model, English sentences are tokenized by using nltk library and tokens are shortlisted in the same manner. By contrast, for BERT Transformer models, the BERT's internal word tokenizer and vocabulary are used. All out-of-vocabulary words are replaced with a special [UNK] token. For the pretrained BERT to effectively generate vector representations, special tokens [CLS] and [SEP] are added to the beginning and the end of the input English sentences, respectively. Over 6,000 sentences that contain homographs are extracted from the IWSLT 2017 EnglishJapanese data set to evaluate the performance of the models on ambiguous sentences. The Bilingual Evaluation Understudy (BLEU) score is used as the evaluation metric to assess the models' performance. The three models were trained with the same training settings. The batch size was set to 32 and AdamOptimizer with decayed learning rate used. The initial learning rate is set to 0.0001. The models are evaluated on a validation set after each training epoch. When validation loss increases for the first time, the learning rate starts decreasing with the rate of 0.33 per epoch. The training process is stopped when validation loss increases again. It takes from 1.5 to 2 days to finish training a model on a GTX 1080Ti. The models were evaluated based on two criteria: general translation and homograph translation. The JESC test set was used to evaluate the models' capability of translating English sentences in general, while the IWSLT 2017 data set was used to evaluate the models' ability to correctly translate semantically ambiguous sentences. The of BLEU score evaluation are shown in Table 1. It can be clearly seen that the Transformer BERT−WE model outperforms the other two models in both evaluations, achieving a BLEU score of 20.31 on JESC test set and 8.67 on IWSLT 2017 data set. Transformer BERT−Encoder model's performance is slightly worse than the vanilla Transformer in terms of general translation; however, it outperforms the vanilla Transformer when it comes to translating homographs. As shown in Table 2, Google Translate and the vanilla Transformer wrongly translate the word "bank" in the two given English sentences. By contrast, the two models Transformer BERT−WE and Transformer BERT−Encoder can correctly translate the word "bank" into "土手" or "岸", which means "the edge of a river" in Japanese. Another approach to solving lexical ambiguity was proposed by . The authors proved that the performance of translation models degrades as the number of senses for each word increases. They concatenated embedding vectors from a word sense disambiguation (WSD) system to a translation model's word embeddings and applied gating functions to generate contextualized word embeddings. The contextualized word embeddings are fed into the encoder of an RNN-based seq2seq model to generate translations. Their model was trained on three different language pairs: English-German, English-French, and English-Chinese. 
Using pretrained word embeddings was empirically proved to increase the BLEU score for the machine translation task. compared the performance of different translation systems that used either random initialization or pretraining on both source and target languages. The word embeddings used in their experiments were trained by using the Common Bag of Words (CBOW) algorithm . According to their , using pretrained word embeddings, especially on the source language side, considerably boosts the performance of machine translation systems . In this work, we demonstrate that lexical ambiguity is an inherent problem that contemporary machine translation systems cannot completely address, hypothesize two causes of the problem, and prove that this issue can be addressed by using contextualized word embeddings that dynamically change based on the context of given words. In addition, the BERT language model is demonstrated to be effective at generating contextualized word representations and two machine translation architectures that integrate pretrained BERT and Transformer translation models are proposed. The two architectures are shown to be able to translate semantically ambiguous sentences effectively. Furthermore, the Transformer BERT−WE model outperforms the vanilla Transformer model, proving that our approach can not only resolve the problem of lexical ambiguity, but also increases the translation quality in general. | The paper solves a lexical ambiguity problem caused from homonym in neural translation by BERT. | 1,050 | scitldr |
This paper focuses on the synthetic generation of human mobility data in urban areas. We present a novel and scalable application of Generative Adversarial Networks (GANs) for modeling and generating human mobility data. We leverage actual ride requests from ride sharing/hailing services from four major cities in the US to train our GANs model. Our model captures the spatial and temporal variability of the ride-request patterns observed for all four cities on any typical day and over any typical week. Previous works have succinctly characterized the spatial and temporal properties of human mobility data sets using the fractal dimensionality and the densification power law, respectively, which we utilize to validate our GANs-generated synthetic data sets. Such synthetic data sets can avoid privacy concerns and be extremely useful for researchers and policy makers on urban mobility and intelligent transportation. Ride sharing or hailing services have disrupted urban transportation in hundreds of cities around the globe (; BID2 . In United States, it has been estimated that between 24% to 43% of the population have used ride-sharing services in 2018 BID21 . Uber alone operates in more than 600 cities around the globe BID22 . Ride sharing services have turned urban transportation into a convenient utility (available any place at any time), and become an important part of the economy in large urban areas BID8.Ride request data from ride sharing services can potentially be of great value. Data gathered from ride sharing services could be used to provide insights about traffic and human mobility patterns which are essential for intelligent transportation systems. Ride requests in major cities with high penetration by such services exhibit spatial and temporal variability. Modeling of such variability is a challenging problem for researchers. Moreover, there are still unresolved challenges, such as: optimal algorithms for dynamic pooling of ride requests BID1, real-time preplacement of vehicles BID12, and city scale traffic congestion prediction BID17 and avoidance. Access to large amount of actual ride request data is essential to understanding and addressing these challenges. Data from ride sharing services have been used for real-time sensing and analytics to yield insights on human mobility patterns BID25 BID11. Each city exhibits a different pattern of urban mobility -there could be cultural or economical factors governing these patterns. If ride sharing services constitute a significant percentage of the riders in a city, can we build models from ride request data to model urban mobility for the whole city and provide societal benefit without compromising personal privacy? This question motivates us to explore the potential of using Generative Adversarial Networks (GANs) to generate synthetic ride request data sets that exhibit very similar attributes as the actual ride request data sets. This work proposes a novel approach of generating synthetic ride request data sets using GANs. This approach involves viewing ride requests as a (temporal) sequence of (spatial) images of ride request locations. The approach uses GANs to match the properties of the synthetic data sets with that of real ride request data sets. Many recent works using neural networks have looked at demand prediction BID29 BID30 and traffic prediction at intersections BID28. In our work, we are looking at generating actual ride requests for both spatially and temporally granular intervals. 
Also, we compare and validate the spatial and temporal variations of the synthetic data sets with those of the real data sets. In dealing with the large amount of data for many cities and long training times for GANs, we develop effective ways to parallelize and scale our GANs training runs using large CPU clusters on AWS. We present our GANs scaling approach and experimental results, and show that significant reductions in training times can be achieved.

Figure 1: Ride requests for a small region of downtown San Francisco for a typical week day. Each figure shows the aggregated ride-locations (red dots) over a period of an hour. Each red dot may represent one or more ride-locations. Ride density varies spatially and temporally.

In this section, we introduce the actual (real) ride request data sets used for our GANs training and evaluation. We use the real data sets to compare with and validate the GANs-generated synthetic data sets. Our real ride request data sets consist of all the ride requests for an entire week for the four cities. There is a strong repeating pattern from week to week as shown in FIG0; hence the week-long data should be quite representative. For all four cities, the ride sharing services have significant penetration, so we believe the ride request data sets also reflect the overall urban mobility patterns for these cities. Our real data sets are actual ride requests for four cities over a one-week period from ride sharing services operating in the United States. Each ride request in the data set includes: request time and pickup location (latitude & longitude), and drop-off time and location (latitude & longitude). For this work we focus on ride request time and pickup location for generating pickup locations, and on ride request time and drop-off location for generating drop-off locations. After training independent GANs models for pickup and drop-off locations, we generate synthetic locations using GANs and leverage a graph generator approach BID4 BID11 to pair pickup and drop-off locations into synthetic ride requests. The trajectory or optimal route for a ride is not within the scope of this work. For the rest of the paper, we use the term ride-locations to refer to both pickup and drop-off locations wherever they can be used interchangeably. We perform temporal and spatial quantization of the raw ride request data from the ride sharing services. We partition the entire week into 2016 time intervals of 5 minutes each, and lump together all the ride requests within each interval. We partition the area of the entire city into small squares with side length of 50 meters, and lump together all the ride-locations occurring within the same square. Each square area is then represented by a single pixel in a 2-D image, with the gray-scale intensity of the pixel reflecting the number of ride-locations in that square area (in a time interval). No ride-locations in an area is denoted by zero pixel intensity; positive integers (1, 2, 3, ...) as pixel intensities denote the number of ride-locations in the square area. Combining the temporal and spatial quantizations, the real ride request data set for each city becomes a time sequence of images, with each image spatially capturing all the ride requests occurring in a particular 5-min interval; a sketch of this binning step is given below. The actual ride requests in every city exhibit distinct patterns of variability in both the spatial dimension (over the geographical area of the city) and the temporal dimension (over each day and over each week).
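A minimal numpy sketch of the space-time quantization described above follows; the bounding box, the meters-per-degree conversion, and the synthetic request coordinates are placeholders, and the actual pipeline may differ in detail.

import numpy as np

def ride_requests_to_images(times_s, lats, lons, lat0, lon0, lat1, lon1,
                            cell_m=50.0, interval_s=300, week_s=7 * 24 * 3600):
    """Bin ride-locations into a (2016, H, W) stack of gray-scale images.

    times_s: seconds since start of week; lats/lons: ride-location coordinates.
    (lat0, lon0)-(lat1, lon1): bounding box of the city (placeholder values below).
    """
    m_per_deg_lat = 111_320.0                            # approximate meters per degree
    m_per_deg_lon = 111_320.0 * np.cos(np.deg2rad((lat0 + lat1) / 2))
    H = int(np.ceil((lat1 - lat0) * m_per_deg_lat / cell_m))
    W = int(np.ceil((lon1 - lon0) * m_per_deg_lon / cell_m))
    T = week_s // interval_s                             # 2016 five-minute snapshots
    images = np.zeros((T, H, W), dtype=np.uint16)
    t = (np.asarray(times_s) // interval_s).astype(int) % T
    r = ((np.asarray(lats) - lat0) * m_per_deg_lat / cell_m).astype(int).clip(0, H - 1)
    c = ((np.asarray(lons) - lon0) * m_per_deg_lon / cell_m).astype(int).clip(0, W - 1)
    np.add.at(images, (t, r, c), 1)                      # pixel intensity = request count
    return images

# Example with synthetic requests over a roughly 1 km x 1 km patch of San Francisco.
rng = np.random.default_rng(0)
imgs = ride_requests_to_images(rng.integers(0, 7 * 24 * 3600, 10_000),
                               rng.uniform(37.77, 37.78, 10_000),
                               rng.uniform(-122.42, -122.41, 10_000),
                               37.77, -122.42, 37.78, -122.41)
print(imgs.shape, imgs.sum())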
In Figure 1, this variability is illustrated. The ride request density is at its highest at 6pm, and continually decreases over time until 3am. Spatially there are dense patches of ride requests, and these dense patches can shift with time, reflecting shifting concentrations of commuters in different areas at different times of day. We observe similar repeating patterns of temporal and spatial variability for all four cities. Previous works have been able to characterize these temporal and spatial variability patterns BID11. A graph can be used to model the ride requests within a 5-min interval, with nodes representing pickup and drop-off locations and a directed edge connecting the pickup node to the drop-off node. It was shown in BID11 that the size and density of this Ride Request Graph (RRG) evolve in time in response to the fluctuation of ride requests during each day and throughout each week. It was observed that these ride request graphs obey the Densification Power Law (DPL) property, similar to other graphs modeling human behavior such as social networking graphs and publication citation graphs BID16. It was further observed that the ride request graphs for each city exhibit a distinct exponent of the DPL, and that this DPL Exponent (α) can be viewed as a succinct quantitative characterization of the temporal variability of the ride request patterns for that city. For any time snapshot t:

e(t) ∝ n(t)^α,

where e(t) and n(t) are the number of edges and the number of nodes, respectively, formed by all ride requests occurring in the time interval t. Edge weights denote the number of requests from the same source (pickup) to destination (drop-off) nodes in time snapshot t. The number of edges grows according to a specific exponential power (α) of the number of nodes. There is also a comparable quantitative characterization of the spatial variability of the ride request patterns for each city. The actual geographical locations of the nodes of the ride request graphs are not explicitly represented, and therefore another characterization is needed. The Correlation Fractal Dimension BID24 BID0 provides a succinct description of a k-dimensional point-set, giving statistics about the distribution of points; it provides a quantitative measure of self-similarity. The spatial distribution of ride requests in each time interval can be viewed as a point-set image. We can measure the Correlation Fractal Dimension (D_2) as described in BID12. Values of the correlation fractal dimension computed for each time snapshot t fall within a range for each city, indicating the degree of self-similarity and the consistent weekly pattern. For our 2-dimensional space, we impose a 2-D grid with squares of side ε. For the i-th square, let C_{ε,i} be the count of requests falling in that square. The correlation fractal dimension is then defined as

D_2 ≡ ∂ log( Σ_i C_{ε,i}² ) / ∂ log ε,

which for self-similar data sets we expect to be constant over a certain range of ε BID27. We observe that this range varies across our four cities, and each city exhibits a distinct value range for its correlation fractal dimension (D_2). We use the Densification Power Law Exponent (α) and the Correlation Fractal Dimension (D_2) to capture and characterize the temporal and spatial variability, respectively, of the ride request patterns for each city. The RRG created for every time snapshot captures ridership fluctuations over time; nodes in an RRG do not encode any spatial information. A sketch of how D_2 can be estimated from the binned counts of a single snapshot follows.
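The following minimal numpy sketch estimates D_2 from a single snapshot as the log-log slope described above. The ε-range and the synthetic point sets are placeholders, chosen only so the two sanity checks are easy to verify: points on a line should give D_2 close to 1, and points filling the plane should give D_2 close to 2.

import numpy as np

def correlation_fractal_dimension(points, eps_list):
    """Estimate D2 as the slope of log(sum_i C_{eps,i}^2) versus log(eps)."""
    points = np.asarray(points, dtype=float)
    log_eps, log_s2 = [], []
    for eps in eps_list:
        cells = np.floor(points / eps).astype(np.int64)        # grid cell of each point
        _, counts = np.unique(cells, axis=0, return_counts=True)
        log_eps.append(np.log(eps))
        log_s2.append(np.log(np.sum(counts.astype(float) ** 2)))
    slope, _ = np.polyfit(log_eps, log_s2, 1)                   # least-squares slope
    return slope

rng = np.random.default_rng(1)
line = np.stack([rng.uniform(0, 10_000, 50_000), np.zeros(50_000)], axis=1)
plane = rng.uniform(0, 10_000, size=(50_000, 2))
eps_range = np.geomspace(50, 1600, 6)                           # placeholder fractal range (meters)
print(correlation_fractal_dimension(line, eps_range),
      correlation_fractal_dimension(plane, eps_range))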
Therefore, we compute Correlation Fractal Dimension for each time snapshot to capture the spatial distribution of both pickup and dropoff locations. The temporal evolution, and spatial distribution at any give time snapshot capture the dynamics of ride requests. We use these two parameters independently to confirm the similarity between the real data sets and the GANs generated synthetic data sets. We can claim strong similarity if the values of these two parameters (α and D 2) of the synthetic data sets match closely the values of the same two parameters of the real data sets. Generative Adversarial Networks learn to generate high quality samples i.e. sample from the data distribution p(x). Previous works by BID3 BID14 ) synthesized images of a higher quality using GANs which were hard for humans to distinguish from real images. Conditional GANs are an extension of GANs to sample from a conditional distribution given each image has an associated label which is true for our case of ride requests. In our framework, we would apply conditional GANs using ride request data in the form of images; similar to as shown in Figure 1 but without the base map shown in color. GANs learn a mapping from a random noise vector z to output image x. Conditional GANs learn a mapping from noise vector z and a label y to x BID18 BID5. The additional variable in the model allows to generate and discriminate samples conditioned on y. The generator accepts noise data z along with y to produce an image. The discriminator accepts an image x and condition y to predict the probability under condition y that x came from the empirical data distribution rather than from the generative model. The objective function can be expressed as: DISPLAYFORM0 where G tries to minimize to this objective function against an adversarial D that tries to maximize it. Every image is assigned a label from the set {0, 1, 2, ..., 23} representing the hour of a day. All twelve instances of five minute snapshots within an hour are assigned the same hour label 3. To accelerate our training using multiple machines, we exploit spatial parallelism by dividing the entire geographical region of a city into an array of blocks. FIG1 illustrates the division of San Francisco downtown into nine blocks. Keeping our image size similar to MNIST BID23, each block is set to represent an image of size 24×24 pixels, with each pixel representing one 50m×50m square area. Hence, each block covers an area of 1200m × 1200m. Each block, representing a grey scale image of 24 × 24 pixels, depicts all the ride-locations in that block. Separate images are formed for pickup and drop-off locations; models trained are also separate for pickup and drop-off locations. Each image of a block is labeled with a time interval (for our experiments, the hour in a day) which is similar for both images created from pickup and drop-off locations. The synthetically generated images from an array of blocks with the same time interval label are combined by stitching together all the processed blocks of a city. The generator network takes an input of a 100-dimensional Gaussian noise sample as well as a onehot vector encoding of the time snapshot to be generated. It has a single, fully-connected hidden layer without any convolution BID6 consisting of 128 ReLU-activated neurons which then passes to a sigmoid activated output layer with the same number of output neurons as the total number of pixels in each block. 
The discriminator network has a single hidden layer of 128 ReLU-activated neurons with a single sigmoid activated output neuron. We find that small networks are appropriate for the training data and allow for a quick and stable convergence to be achieved between the discriminator and the generator. Using relatively simple network architectures makes it possible to ensure that the discriminator and generator are evenly matched such that the loss for either network does not saturate early in the training process. In addition to the standard GANs architecture of generator and discriminator, an additional network is introduced which is referred to as the classifier BID15; it is pre-trained on the training data with the five minute label of the data serving as the classification target. In this way the time information that is encoded into the synthetic data by the generator network is then decoded by the classifier network. The generator is then trained on a weighted sum of the loss from both the classifier and discriminator networks as shown in the following equation: DISPLAYFORM0 where β is a tune-able hyper-parameter. This allows for more explicit loss attribution such that the generator receives two different error signals; one indicating the realism of the synthetic data and the other indicating accuracy relative to the conditioning values. By experiments using MNIST data and BID15, we found adding a classifier increases the efficiency of the training process and in higher quality synthetic data while incurring considerably less training time than other conditional GANs architectures we have experimented. In this section, we present the cloud infrastructure used for running our experiments. We also present performance on scaling our GANs workloads on the cloud infrastructure. All experiments are conducted on Amazon Web Services (AWS) using c5.18x instances with each instance containing an Intel Xeon Scalable Processor with 72 virtual cores (vCores) running at 3.0GHz and 144 GB of RAM. In this work we set the block size for each of the four cities to be 1200 × 1200 meters; each block is trained separately. Enlarging the block size will increase the computational time for training; and the complexity of the model can potentially impact scalability. The total number of blocks for each city are shown in TAB0. The number of blocks are mainly determined by the size of the greater metropolitan area of each city. To help enhance the scalability of our GANs workload across multiple nodes we make use of Ray BID19 from Berkeley, a distributed framework for AI Applications, to efficiently parallelize our workload across cluster of CPU nodes on AWS. Ray provides a convenient API in Python to scale deep learning workloads for numerous libraries, and support for heterogeneous resources like CPUs and GPUs. We also make use of Intel's Math Kernel Library Intel (2018b) (MKL) which provides machine learning libraries for supporting operations like activation (ReLU), inner product, and other useful functions BID9. Using Ray we scale our training runs by using from 2 to 8 c5.18x instances (containing from 144 cores to 576 cores) on AWS. The scalability are shown in Figure 4. As can be seen increasing the number of c5.18X Xeon CPU instances can significantly reduce the GANs training time up to 8 c5.18x instances. For the city of Los Angeles, the training time can be reduced from over one hour to less than 20 minutes. For New York City the training time can be reduced to just minutes. 
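A minimal PyTorch sketch of the generator update with the combined loss L_G = L_disc + β · L_class is shown below. The layer sizes follow the description above (100-dimensional noise, one 128-unit ReLU hidden layer, 24 × 24-pixel blocks, hour labels); the optimizer settings, the value of β, and the exact conditioning of the discriminator are illustrative assumptions rather than the authors' exact configuration.

import torch
import torch.nn as nn

K_PIX, N_HOURS, NOISE = 24 * 24, 24, 100

generator = nn.Sequential(nn.Linear(NOISE + N_HOURS, 128), nn.ReLU(),
                          nn.Linear(128, K_PIX), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(K_PIX + N_HOURS, 128), nn.ReLU(),
                              nn.Linear(128, 1), nn.Sigmoid())
# The classifier is pre-trained on real blocks to predict the time label, then frozen.
classifier = nn.Sequential(nn.Linear(K_PIX, 128), nn.ReLU(), nn.Linear(128, N_HOURS))
for p in classifier.parameters():
    p.requires_grad_(False)

bce, ce = nn.BCELoss(), nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
beta = 1.0                                    # placeholder weight for the classifier loss

def generator_step(batch_size=32):
    z = torch.randn(batch_size, NOISE)
    labels = torch.randint(0, N_HOURS, (batch_size,))
    onehot = torch.eye(N_HOURS)[labels]
    fake = generator(torch.cat([z, onehot], dim=1))
    # Realism signal from the discriminator plus time-accuracy signal from the classifier.
    d_out = discriminator(torch.cat([fake, onehot], dim=1))
    loss = bce(d_out, torch.ones_like(d_out)) + beta * ce(classifier(fake), labels)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

print(generator_step())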
Running times for sampling ride requests from the trained models and stitching the images of all the blocks together are significantly less than the training times, and are not included in these . We also conduct our GANs scaling experiments using GPU instances on AWS. In our initial experiments we observe no real performance improvements using GPUs. Training time using GPUs on AWS was observed to be 5.93 hours on a p3.8xlarge instance using NVIDIA's Multi-Process Service (MPS) . With MPS, the GPU utilization is close to maximum by running multiple of our small GANs training jobs in parallel on a single GPU. Although, the number of jobs which could be executed in parallel on a GPU are not that many in comparison to Xeons. Scaling on GPUs requires more investigation. In this work, we show that it is possible to achieve very nice scalability of our GANs workload using only CPU cores supported by Intel's MKL library and Berkeley's Ray framework. The correlation fractal dimension (D 2) gives a bound on the number of ride requests within a geographical region. This is an essential characteristic to match for the data set we are generating using GANs. In TAB3, we provide the fractal range for each city within which the fractal dimension remains constant. It is important to note that the fractal range for each city differs. The fractal range provides the range for which the data exhibits statistical self-similarity BID0. The variation in the fractal ranges for the different cities can be attributed to the geographical shape of the city for which the ride requests are generated. We hypothesize that due to Los Angeles's sprawling nature, a larger is needed to observe self-similar patterns in comparison to the other three cities, which have a more corridor-like geographical region. Table 2: Summary of measured correlation fractal dimensions (D 2) for four cities; computed over a day for every hour using pickup locations of real and synthetic data sets. One may also interpret D 2 as a way to measure the fidelity of generated images to that from real data. Comparison of the ranges of values of D 2, in terms of min, max, and mean values, for the real and the synthetic data sets are fairly close although not identical. In most instances the mean value for D 2 is lower for the synthetic data sets in comparison to the real data sets. We believe this discrepancy in the values of D 2 require further investigation. Recent works to improve capture learning of highresolution details of an image BID13 can potentially benefit the learning for our ride request images. DPL provides a characterization of the temporal evolution of ride requests. In the top row of Figure 5 we observe the plot of the DPL exponents α (slop of the line) based on the temporal patterns of the real data sets. For the ride request graph to obey DPL properties, we use graph generator proposed by BID11 to connect source and destination locations. In the bottom row of Figure 5 we see the same based on the synthetic data sets. We can see that the DPL exponent values α correlated quite nicely with that from the real data sets for New York, Chicago, and San Francisco. Figure 5: DPL plots from real data (top row) and synthetic data (bottom row) for four cities. The red line is the least square fit of the form y = Cx α, where y and x are number of edges and nodes respectively. R 2 ≈ 1.00 for all of them. 
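The least-squares fit of e(t) = C · n(t)^α shown in Figure 5 can be reproduced with a few lines of numpy; the node and edge counts below are synthetic placeholders standing in for the per-snapshot RRG statistics.

import numpy as np

def fit_dpl_exponent(num_nodes, num_edges):
    """Fit e(t) = C * n(t)**alpha by least squares in log-log space."""
    log_n, log_e = np.log(num_nodes), np.log(num_edges)
    alpha, log_c = np.polyfit(log_n, log_e, 1)
    pred = alpha * log_n + log_c
    r2 = 1.0 - np.sum((log_e - pred) ** 2) / np.sum((log_e - log_e.mean()) ** 2)
    return alpha, np.exp(log_c), r2

# Placeholder counts; in the paper these come from one RRG per 5-minute snapshot.
rng = np.random.default_rng(2)
n = rng.integers(200, 5000, size=2016).astype(float)
e = 0.8 * n ** 1.15 * np.exp(rng.normal(0, 0.05, size=2016))
alpha, C, r2 = fit_dpl_exponent(n, e)
print(f"alpha = {alpha:.3f}, C = {C:.3f}, R^2 = {r2:.3f}")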
For Los Angeles, the synthetic exponent is higher than the real observed value; the geographical region for LA is much larger and due to many prominent regions of high request density, the model may likely suffer from bias towards generating more requests in prominent regions leading to a faster increase of the number of edges connecting nodes present in high density regions. Another validation of our GANs approach is provided in Figure 6. Here we observe temporal variation of ride requests in terms of the volume of ride requests generated for each hour of a typical weekday. We see that for all four cities, the temporal variation of the synthetic data sets match quite well the temporal variation exhibited by the actual data set. The emergence of ride sharing services and the availability of extensive data sets from such services are creating unprecedented opportunities for: 1) doing city-scale data analytics on urban transportation for supporting Intelligent Transportation Systems (ITS); 2) improving the efficiency of ride sharing services; 3) facilitating real-time traffic congestion prediction; and 4) providing new public services for societal benefit. Moreover, the power of neural networks for machine learning has allowed the creation of useful models which can capture human behavior and dynamic real-world scenarios. The key contributions of this paper include:• We map the ride requests of ride sharing services into a time sequence of images that capture both the temporal and spatial attributes of ride request patterns for a city.• Based on extensive real world ride request data, we introduce a GANs based workflow for modeling and generating synthetic and realistic ride request data sets for a city.• We further show that our GANs workload can be effectively scaled using Xeon CPU clusters on AWS, in reducing training times from hours to minutes for each city.• Using previous work on modelling urban mobility patterns, we validate our GANs generated data sets for ride requests for four major US cities, by comparing the spatial and temporal properties of the GANs generated data sets against that of the real data sets. There are other promising avenues for further research. Some open research topics include: Figure 6: Plots for four cities highlighting the temporal variability of ride requests visible in both real and our model (predicted) for ride request generation. The pattern is representative of any typical day of week.• Using the GANs generated data sets for experiments on new algorithms for dynamic ride pooling, real-time pre-placement of vehicles, and real-time traffic congestion prediction. • Using the GANs generated data sets for conducting experiments on what-if scenarios related to traffic congestion prediction and mitigation, and planning for future development of transportation infrastructures. We are currently pursuing these research topics. As our GANs generated data sets are used in our follow up research, we plan to further validate the synthetic data sets by comparing our research with from using the real data sets. We plan to continue to tune our GANs models and generate improved synthetic data sets that can be made available for other researchers. | This paper focuses on the synthetic generation of human mobility data in urban areas using GANs. | 1,051 | scitldr |
While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements. Consequently, model size reduction has become an utmost goal in deep learning. Following the classical bits-back argument, we encode the network weights using a random sample, requiring only a number of bits corresponding to the Kullback-Leibler divergence between the sampled variational distribution and the encoding distribution. By imposing a constraint on the Kullback-Leibler divergence, we are able to explicitly control the compression rate, while optimizing the expected loss on the training set. The employed encoding scheme can be shown to be close to the optimal information-theoretical lower bound, with respect to the employed variational family. On benchmarks LeNet-5/MNIST and VGG-16/CIFAR-10, our approach yields the best test performance for a fixed memory budget, and vice versa, it achieves the highest compression rates for a fixed test performance. Traditional approaches to model compression usually rely on three main techniques: pruning, quantization and coding. For example, Deep Compression BID3 proposes a pipeline employing all three of these techniques in a systematic manner. From an information-theoretic perspective, the central routine is coding, while pruning and quantization can be seen as helper heuristics to reduce the entropy of the empirical weight-distribution, leading to shorter encoding lengths BID11. Also, the recently proposed Bayesian Compression BID9 falls into this scheme, despite being motivated by the so-called bits-back argument BID7 which theoretically allows for higher compression rates.1 While the bits-back argument certainly motivated the use of variational inference in Bayesian Compression, the downstream encoding is still akin to Deep Compression (and other approaches). In particular, the variational distribution is merely used to derive a deterministic set of weights, which is subsequently encoded with Shannonstyle coding. This approach, however, does not fully exploit the coding efficiency postulated by the bits-back argument.1 Recall that the bits-back argument states that, assuming a large dataset and a neural network equipped with a weight-prior p, the effective coding cost of the network weights is KL(q||p) = Eq[log q p], where q is a variational posterior. However, in order to realize this effective cost, one needs to encode both the network weights and the training targets, while it remains unclear whether it can also be achieved for network weights alone. In this paper, we step aside from the pruning-quantization pipeline and propose a novel coding method which approximately realizes bits-back efficiency. In particular, we refrain from constructing a deterministic weight-set but rather encode a random weight-set from the full variational posterior. This is fundamentally different from first drawing a weight-set and subsequently encoding it -this would be no more efficient than previous approaches. Rather, the coding scheme developed here is allowed to pick a random weight-set which can be cheaply encoded. By using from BID4, we show that such a coding scheme always exists and that the bits-back argument indeed represents a theoretical lower bound for its coding efficiency. Moreover, we propose a practical scheme which produces an approximate sample from the variational distribution and which can indeed be encoded with this efficiency. 
Since our algorithm learns a distribution over weight-sets and derives a random message from it, while minimizing the resulting code length, we dub it Minimal Random Code Learning (MIRACLE). All preceding works BID2 BID9 BID10 BID1 essentially use the following coding scheme, or a (sometimes sub-optimal) variant of it. After a deterministic weight-set w* has been obtained, involving potential pruning and quantization techniques, one interprets w* as a sequence of i.i.d. variables and assumes the coding distribution (i.e. a dictionary)

p'(w) = (1/N) Σ_{j=1}^{N} δ_{w*_j}(w),

where δ_x denotes the Dirac delta at x. According to Shannon's source coding theorem BID11, w* can be coded with no less than N·H[p'] nats (H denotes the Shannon entropy), which is asymptotically achieved by Huffman coding BID8, like in BID3. However, note that the Shannon lower bound can also be written as

N·H[p'] = −log p̄'(w*) = KL(δ_{w*} || p̄'),

where we set p̄'(w) = Π_i p'(w_i). Thus, these Shannon-style coding schemes are in some sense optimal when the variational family is restricted to point-measures, i.e. deterministic weights. By extending the variational family to comprise more general distributions q, the coding length KL(q||p) could potentially be drastically reduced. In the following, we develop one such method which exploits the uncertainty represented by q in order to encode a random weight-set with a short code. Consider the scenario where we want to train a neural network but our memory budget is constrained. As illustrated in the previous section, a variational approach offers, in principle, a simple and elegant solution. Now, similar to BID9, we first fix a suitable network architecture, select an encoding distribution p and a parameterized variational family q_φ for the network weights w. We consider, however, a slightly different variational objective, related to the β-VAE BID6, in order to be able to constrain the compression size using the penalty factor β:

L(φ) = E_{w ~ q_φ}[ log p(D | w) ] − β · KL(q_φ || p).

This objective directly reflects our goal of achieving both a good training performance (loss term) and being able to represent our model with a short code (model complexity), at least according to the bits-back argument. After obtaining q_φ by maximizing this objective, a weight-set drawn from q_φ will perform comparably to a deterministically trained network, since the variance of the loss term is small compared to its mean, and since the KL term regularizes the model. Thus, our declared goal is to draw a sample from q_φ such that this sample can be encoded as efficiently as possible. It turns out that the expected message length E[|M|] required for sampling from q_φ is lower-bounded by the mutual information between the data D and the weights w BID4 BID5:

E[|M|] ≥ I(D; w).

Harsha et al. BID4 provide a constructive proof that this lower bound can be well approximated using a variant of rejection sampling. However, this algorithm is in fact intractable, because it requires keeping track of the acceptance probabilities over the whole sample domain. We propose a method to produce an approximate sample from q_φ that can be cheaply encoded. First, K = exp(KL(q_φ||p)) samples are drawn from p, using the shared random generator. Subsequently, we craft a discrete proxy distribution q̃, which has support only on these K samples, and where the probability mass of each sample is proportional to the importance weight a_k = q_φ(w_k)/p(w_k). Finally, we draw a sample from q̃ and return its index k*.
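A minimal numpy sketch of this encode/decode pair for a single weight block is given below. It assumes a factorized Gaussian variational distribution q_φ and a standard normal encoding distribution p, and it fixes the number of shared samples K per block in advance (a per-block budget of 16 bits here); these choices are illustrative. The method itself only requires that q_φ and p can be evaluated and that encoder and decoder share the random generator, here via a common seed.

import numpy as np

def miracle_encode_block(mu, sigma, K=2**16, seed=0):
    """Return the index k* of an approximate sample from q = N(mu, diag(sigma^2)).

    K is the fixed per-block budget: sending k* costs log2(K) bits (16 here),
    so the block's KL(q || p) should stay below roughly log(K) nats.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    rng = np.random.default_rng(seed)                    # shared random generator
    w = rng.standard_normal((K, mu.size))                # K candidate weight-sets from p = N(0, I)
    # log importance weights  log a_k = log q(w_k) - log p(w_k), per sample
    log_a = np.sum(-0.5 * ((w - mu) / sigma) ** 2 - np.log(sigma) + 0.5 * w**2, axis=1)
    probs = np.exp(log_a - log_a.max())
    probs /= probs.sum()                                 # discrete proxy distribution
    return int(rng.choice(K, p=probs))

def miracle_decode_block(k_star, dim, K=2**16, seed=0):
    """Recover the encoded weights from the index alone by replaying the generator."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((K, dim))[k_star]

# Example: a 20-weight block whose posterior is concentrated away from the prior mean.
mu, sigma = np.full(20, 0.3), np.full(20, 0.9)
k = miracle_encode_block(mu, sigma)
w_hat = miracle_decode_block(k, dim=20)
print(k, np.round(w_hat[:5], 3))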
Since any number 0 ≤ k * < K can be easily encoded with KL(q φ ||p) nats, we achieve our aimed coding efficiency. Decoding the sample is easy: simply draw the k * th sample w k * from the shared random generator (e.g. by resetting the random seed). While this algorithm is remarkably simple and easy to implement, it can be shown that it produces a close-to unbiased sample from q φ BID0 BID5.Furthermore, an immediate caveat is that the number K of required samples grows exponentially in KL(q φ ||p), which is clearly infeasible for encoding a practical neural network. To deal with this issue, the weights are randomly split into groups each with a small, fixed allowance of nats such that drawing exp(KL(q φblock ||p block)) ≈ 10 6 samples can be done efficiently. The experiments 2 were conducted on two common benchmarks, LeNet-5 on MNIST and VGG-16 on CIFAR-10, using a Gaussian distribution with diagonal covariance matrix for q φ. As baselines, we used three recent state-of-the-art methods, namely Deep Compression BID3, Weightless encoding BID10 and Bayesian Compression BID9. The performance of the baseline methods are quoted from their respective source materials. We see that MIRACLE is Pareto-better than the competitors: for a given test error rate, we achieve better compression, while for a given model size we achieve lower test error FIG0 ). In this paper we followed through the philosophy of the bits-back argument for the goal of coding model parameters. Our algorithm is backed by solid recent information-theoretic insights, yet it is simple to implement. We demonstrated that it outperforms the previous state-of-the-art. An important question remaining for future work is how efficient MIRACLE can be made in terms of memory accesses and consequently for energy consumption and inference time. There lies clear potential in this direction, as any single weight can be recovered by its group-index and relative index within each group. By smartly keeping track of these addresses, and using pseudo-random generators as algorithmic lookup-tables, we could design an inference machine which is able to directly run our compressed models, which might lead to considerable savings in memory accesses. | This paper proposes an effective coding scheme for neural networks that encodes a random set of weights from a variational distribution. | 1,052 | scitldr |
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image. The underlying principle is that neural networks trained on large datasets have empirically been shown to be able to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, or prior, a noisy image can be denoised by finding the closest image in the range of the prior. However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network's parameters. In this paper we consider the problem of denoising an image from additive Gaussian noise, assuming the image is well described by a deep neural network with ReLU activation functions, mapping a k-dimensional latent space to an n-dimensional image. We state and analyze a simple gradient-descent-like iterative algorithm that minimizes a non-convex loss function, and provably removes a fraction of (1 - O(k/n)) of the noise energy. We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data. We consider the image or signal denoising problem, where the goal is to remove noise from an unknown image or signal. In more detail, our goal is to obtain an estimate of an image or signal y* ∈ R^n from y = y* + η, where η is unknown noise, often modeled as a zero-mean white Gaussian random variable with covariance matrix (σ²/n) I. Image denoising relies on modeling or prior assumptions on the image y*. For example, suppose that the image y* lies in a k-dimensional subspace of R^n denoted by Y. Then we can estimate the original image by finding the closest point in ℓ2-distance to the noisy observation y on the subspace Y. The corresponding estimate, denoted by ŷ, obeys

‖ŷ − y*‖² ≲ (k/n) σ²

with high probability (throughout, ‖·‖ denotes the ℓ2-norm). Thus, the noise energy is reduced by a factor of k/n over the trivial estimate ŷ = y, which does not use any prior knowledge of the signal. The denoising rate shows that the more concise the image prior or image representation (i.e., the smaller k), the more noise can be removed. If, on the other hand, the prior (the subspace, in this example) does not include the original image y*, then the error bound increases, since we would remove a significant part of the signal along with the noise when projecting onto the range of the signal prior. Thus a concise and accurate prior is crucial for denoising. Real-world signals rarely lie in a priori known subspaces, and the last few decades of image denoising research have developed sophisticated and accurate image models or priors and algorithms. Examples include models based on sparse representations in overcomplete dictionaries such as wavelets and curvelets, and algorithms based on exploiting self-similarity within images BID4. A prominent example of the latter class of algorithms is the BM3D BID4 algorithm, which achieves state-of-the-art performance for certain denoising problems. However, the nuances of real-world images are difficult to describe with handcrafted models. Thus, starting with the paper that proposed learning sparse representations based on training data, it has become common to learn concise representations for denoising (and other inverse problems) from a set of training images. In 2012, Burger et al. BID2 applied deep networks to the denoising problem, by training a deep network on a large set of images.
Since then, deep learning based denoisers have set the standard for denoising. The success of deep network priors can be attributed to their ability to efficiently represent and learn realistic image priors, for example via autodecoders and generative adversarial models. Over the last few years, the quality of deep priors has significantly improved. As this field matures, priors will be developed with even smaller latent code dimensionality and more accurate approximation of natural signal manifolds. Consequently, the representation error from deep priors will decrease, and thereby enable even more powerful denoisers. As the influence of deep networks in inverse problems grows, it becomes increasingly important to understand their performance at a theoretical level. Given that most optimization approaches for deep learning are first-order gradient methods, a justification is needed for why they do not get stuck in local minima. The closest theoretical work to this question is BID1, which solves a noisy compressive sensing problem with generative priors by minimizing empirical risk. Under the assumption that the network is Lipschitz, they show that if the global optimizer can be found, which is in principle NP-hard, then a signal estimate is recovered to within the noise level. While the Lipschitzness assumption is quite mild, the resulting theory does not provide justification for why global optimality can be reached. The most closely related work establishing theoretical reasons for why gradient methods do not get stuck in local minima when using deep generative priors for solving inverse problems shows global favorability for optimization of the noiseless empirical risk function: specifically, the existence of a descent direction outside of balls around the global optimizer and around a negative multiple of it in the latent space of the generative model. That work does not provide a specific algorithm which provably estimates the global minimizer, nor does it provide an analysis of the robustness of the problem with respect to noise. In this paper, we propose the first algorithm for solving denoising with deep generative priors that provably finds an approximation of the underlying image. Following that lead, we assume an expansive Gaussian model for the deep generative network in order to establish this result.

Contributions: The goal of this paper is to analytically quantify the denoising performance of deep-prior based denoisers. Specifically, we characterize the performance of a simple and efficient algorithm for denoising based on a d-layer generative neural network G: R^k → R^n, with k < n, and random weights. In more detail, we propose a gradient method with a tweak that attempts to minimize the least-squares loss f(x) = (1/2) ‖G(x) − y‖² between the noisy image y and an image in the range of the prior, G(x). While f is non-convex, we show that the gradient method yields an estimate x̂ obeying

‖G(x̂) − G(x*)‖² ≲ (k/n) σ²

with high probability, where the notation ≲ absorbs a constant factor depending on the number of layers of the network and on its expansivity, as discussed in more detail later. Our result shows that the denoising rate of a deep-prior based denoiser is determined by the dimension of the latent representation. We also show in numerical experiments that this rate, shown analytically for random priors, is also achieved experimentally by priors learned from real imaging data. A small numerical illustration of the k/n scaling, using the simple subspace prior from the introduction, is given next.
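Before turning to the loss surface and the algorithm, the following minimal numpy sketch illustrates the k/n scaling for the simple subspace prior discussed in the introduction; the dimensions, noise level, and number of trials are placeholders.

import numpy as np

rng = np.random.default_rng(3)
n, k, sigma = 784, 10, 1.0

# Orthonormal basis U of a k-dimensional subspace containing the signal y*.
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
y_star = U @ rng.standard_normal(k)

trials, err = 200, 0.0
for _ in range(trials):
    eta = rng.standard_normal(n) * sigma / np.sqrt(n)   # noise with E||eta||^2 = sigma^2
    y = y_star + eta
    y_hat = U @ (U.T @ y)                               # closest point in the subspace
    err += np.sum((y_hat - y_star) ** 2)

print("empirical MSE:", err / trials)                   # approximately sigma^2 * k / n
print("sigma^2 k / n:", sigma**2 * k / n)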
FIG0: Loss surface f(x) = ‖G(x) − G(x*)‖, with x* = [1, 0], of an expansive network G with ReLU activation functions, k = 2 nodes in the input layer, and n_2 = 300 and n_3 = 784 nodes in the hidden and output layers, respectively, with random Gaussian weights in each layer. The surface has a critical point near −x*, a global minimum at x*, and a local maximum at 0.

We consider the problem of estimating a vector y* ∈ R^n from a noisy observation y = y* + η. We assume that the vector y* belongs to the range of a d-layer generative neural network G: R^k → R^n, with k < n. That is, y* = G(x*) for some x* ∈ R^k. We consider a generative network of the form

G(x) = relu( W_d relu( W_{d−1} ··· relu( W_1 x ) ··· ) ),

where relu(x) = max(x, 0) applies entrywise, W_i ∈ R^{n_i × n_{i−1}} are the weights in the i-th layer, n_i is the number of neurons in the i-th layer, and the network is expansive in the sense that k = n_0 < n_1 < ··· < n_d = n. The problem at hand is: given the weights of the network W_1, ..., W_d and a noisy observation y, obtain an estimate ŷ of the original image y* such that ‖ŷ − y*‖ is small and ŷ is in the range of G. As a way to solve this problem, we first obtain an estimate of x*, denoted by x̂, and then estimate y* as G(x̂). In order to estimate x*, we minimize the empirical risk objective

f(x) = (1/2) ‖G(x) − y‖².

Since this objective is non-convex, there is no a priori guarantee of efficiently finding the global minimum. Approaches such as gradient methods could in principle get stuck in local minima, instead of finding a global minimizer that is close to x*. However, as we show in this paper, under appropriate conditions, a gradient method with a tweak, introduced next, finds a point that is very close to the original latent parameter x*, with the distance to the parameter x* controlled by the noise. In order to state the algorithm, we first introduce a useful quantity. For analyzing which rows of a matrix W are active when computing relu(Wx), we let

W_{+,x} := diag(Wx > 0) W.

For a fixed weight matrix W, the matrix W_{+,x} zeros out the rows of W that do not have a positive dot product with x. Alternatively put, W_{+,x} contains the weights of only those neurons that are active for the input x. We also define W_{1,+,x} = (W_1)_{+,x} = diag(W_1 x > 0) W_1 and, for the deeper layers,

W_{i,+,x} = diag( W_i W_{i−1,+,x} ··· W_{1,+,x} x > 0 ) W_i.

The matrix W_{i,+,x} consists only of the weights of the neurons in the i-th layer that are active if the input to the first layer is x. We are now ready to state our algorithm: a gradient method with a tweak informed by the loss surface of the function to be minimized. Given a noisy observation y, the algorithm starts with an arbitrary initial point x_0 ≠ 0. At each iteration i = 0, 1, ..., the algorithm computes the step direction

ṽ_{x_i} = ( W_{d,+,x_i} ··· W_{1,+,x_i} )^T ( G(x_i) − y ),

which is equal to the gradient of f if f is differentiable at x_i. It then takes a small step opposite to ṽ_{x_i}. The tweak is that before each iteration, the algorithm checks whether f(−x_i) is smaller than f(x_i), and if so, negates the sign of the current iterate x_i. This tweak is informed by the loss surface. To understand this step, it is instructive to examine the loss surface for the noiseless case in FIG0. It can be seen that while the loss function has a global minimum at x*, it is relatively flat close to −x*. In expectation, there is a critical point that is a negative multiple of x*, with the property that the curvature in the ±x* direction is positive and the curvature in the orthogonal directions is zero. Further, around approximately −x*, the loss function is larger than around the optimum x*.
As a simple gradient descent method (without the tweak) could potentially get stuck in this region, the negation check provides a way to avoid converging to it. Our algorithm is formally summarized as Algorithm 1 below.

Algorithm 1
Require: Weights of the network W_i, noisy observation y, and step size α > 0
1: Choose an arbitrary initial point x_0 ∈ R^k \ {0}
2: for i = 0, 1, 2, ... do
3:   if f(−x_i) < f(x_i) then
4:     x_i ← −x_i
5:   end if
6:   ṽ_{x_i} ← ( W_{d,+,x_i} ··· W_{1,+,x_i} )^T ( G(x_i) − y )
7:   x_{i+1} ← x_i − α ṽ_{x_i}
8: end for

Other variations of the tweak are also possible. For example, the negation check in Steps 3-4 could be performed only after a convergence criterion is satisfied; if a lower objective is achieved by negating the latent code, the gradient descent can then be continued until a convergence criterion is satisfied again. For our analysis, we consider a fully-connected generative network G: R^k → R^n with Gaussian weights and no bias terms. Specifically, we assume that the weights W_i are independently and identically distributed as N(0, 2/n_i), but we do not require them to be independent across layers. Moreover, we assume that the network is sufficiently expansive.

Expansivity condition. We say that the expansivity condition with constant ε > 0 holds if

n_i > c ε^{-2} log(1/ε) · n_{i−1} log n_{i−1}   for all i,

where c is a particular numerical constant. In a real-world generative network the weights are learned from training data and are not drawn from a Gaussian distribution. Nonetheless, the motivation for selecting Gaussian weights for our analysis is as follows:
1. The empirical distribution of weights from deep neural networks often has statistics consistent with Gaussians; AlexNet is a concrete example BID0.
2. The field of theoretical analysis of recovery guarantees for deep learning is nascent, and Gaussian networks permit theoretical results because of well-developed theories for random matrices.
3. It is not clear which non-Gaussian distribution for the weights would be superior from the joint perspective of realism and analytical tractability.
4. Truly random nets, such as in the Deep Image Prior, are increasingly becoming of practical relevance; thus, theoretical advances on random nets are of independent interest.
We are now ready to state our main result.

Theorem 1. Consider a network with the weights in the i-th layer, W_i ∈ R^{n_i × n_{i−1}}, i.i.d. N(0, 2/n_i) distributed, and suppose that the network satisfies the expansivity condition for some ε ≤ K/d^90. Also, suppose that the noise variance obeys the bound in DISPLAYFORM1. Consider the iterates of Algorithm 1 with stepsize α = K_4/d². Then there exists a number of steps N, upper bounded by the expression DISPLAYFORM2 involving f(x_0) and ‖x*‖, such that after N steps the iterates of Algorithm 1 obey the error bound DISPLAYFORM3 with probability at least 1 − 2e^{−2 (DISPLAYFORM4)}. Here the K_i are numerical constants, and x_0 is the initial point in the optimization.

The error bound consists of two terms: the first is controlled by ε, and the second depends on the noise. The first term is negligible if ε is chosen sufficiently small, but that comes at the expense of the expansivity condition becoming more stringent. The second term in the bound is more interesting and controls the effect of the noise. Specifically, for sufficiently small ε, our result guarantees that after sufficiently many iterations, the distance of the iterate x_i to x* is controlled by the noise level (DISPLAYFORM5).
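A minimal numpy sketch of Algorithm 1 on a small random network of the kind assumed by Theorem 1 is given below; the step size, the fixed number of iterations (in place of the stopping rule implied by N in the theorem), and the problem sizes are placeholder choices rather than the constants from the theorem.

import numpy as np

def generate_net(k=2, dims=(300, 784), seed=4):
    rng = np.random.default_rng(seed)
    sizes = (k,) + tuple(dims)
    # W_i has shape (n_i, n_{i-1}) with i.i.d. N(0, 2/n_i) entries.
    return [rng.standard_normal((sizes[i + 1], sizes[i])) * np.sqrt(2 / sizes[i + 1])
            for i in range(len(dims))]

def G(weights, x):
    for W in weights:
        x = np.maximum(W @ x, 0.0)
    return x

def f(weights, x, y):
    return 0.5 * np.sum((G(weights, x) - y) ** 2)

def step_direction(weights, x, y):
    # v = (W_{d,+,x} ... W_{1,+,x})^T (G(x) - y), built layer by layer.
    J, h = np.eye(len(x)), x
    for W in weights:
        pre = W @ h
        W_plus = W * (pre > 0)[:, None]        # zero out rows of inactive neurons
        J, h = W_plus @ J, np.maximum(pre, 0.0)
    return J.T @ (h - y)

def algorithm1(weights, y, alpha=0.02, iters=2000, seed=5):
    x = np.random.default_rng(seed).standard_normal(weights[0].shape[1])
    for _ in range(iters):
        if f(weights, -x, y) < f(weights, x, y):   # negation check (Steps 3-4)
            x = -x
        x = x - alpha * step_direction(weights, x, y)
    return x

weights = generate_net()
x_star = np.array([1.0, 0.0])
y_clean = G(weights, x_star)
noise = np.random.default_rng(6).standard_normal(y_clean.size)
y = y_clean + 0.5 * noise / np.sqrt(y_clean.size)
x_hat = algorithm1(weights, y)
print("||x_hat - x_star|| =", np.linalg.norm(x_hat - x_star))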
One can show that G is Lipschitz in a region around x*, in the sense that ‖G(x̂) − G(x*)‖ ≲ ‖x̂ − x*‖ there. Thus, the theorem guarantees that our algorithm yields the denoising rate σ²k/n and, as a consequence, that denoising based on a generative deep prior provably reduces the energy of the noise in the original image by a factor of k/n. We note that the intention of this paper is to show rate-optimality of recovery with respect to the noise power, the latent code dimensionality, and the signal dimensionality. As a result, no attempt was made to establish optimal bounds with respect to the scaling of constants or powers of d. The bounds provided in the theorem are highly conservative in the constants and in the dependency on the number of layers, d, in order to keep the proof as simple as possible. Numerical experiments shown later reveal that the parameter range for successful denoising is much broader than the constants suggest. As this is the first result of its kind for rigorous analysis of denoising performance by deep generative networks, we anticipate that the results can be improved in future research, as has happened for other problems, such as sparsity-based compressed sensing and phase retrieval.

To prove our main result, we make use of a deterministic condition on G, called the Weight Distribution Condition (WDC), and then show that Gaussian W_i, as given by the statement of Theorem 1, are such that W_i/√2 satisfies the WDC with the appropriate probability for all i, provided the expansivity condition holds. Our main result, Theorem 1, continues to hold for any weight matrices such that W_i/√2 satisfy the WDC. The condition concerns the spatial arrangement of the network weights within each layer. We say that the matrix W ∈ R^{n × k} satisfies the Weight Distribution Condition with constant ε if for all nonzero x, y ∈ R^k,

‖ Σ_{i=1}^{n} 1_{⟨w_i, x⟩ > 0} 1_{⟨w_i, y⟩ > 0} · w_i w_i^T − Q_{x,y} ‖ ≤ ε,   with   Q_{x,y} = ((π − θ_0)/(2π)) I_k + (sin θ_0 / (2π)) M_{x̂↔ŷ},

where w_i ∈ R^k is the i-th row of W; M_{x̂↔ŷ} ∈ R^{k×k} is the matrix such that x̂ ↦ ŷ, ŷ ↦ x̂, and z ↦ 0 for all z ∈ span({x, y})^⊥; x̂ = x/‖x‖ and ŷ = y/‖y‖; θ_0 = ∠(x, y); and 1_S is the indicator function of the event S. The norm on the left hand side is the spectral norm. Note that an elementary calculation gives that Q_{x,y} = E[ Σ_{i=1}^{n} 1_{⟨w_i, x⟩ > 0} 1_{⟨w_i, y⟩ > 0} · w_i w_i^T ] for w_i ~ N(0, I_k/n). As the rows w_i correspond to the neural network weights of the i-th neuron in a layer given by W, the WDC provides a deterministic property under which the set of neuron weights within the layer given by W are distributed approximately like a Gaussian. The WDC could also be interpreted as a deterministic property under which the neuron weights are distributed approximately like a uniform random variable on a sphere of a particular radius. Note that if x = y, then Q_{x,y} is an isometry up to a factor of 1/2.

In this section we briefly discuss another important scenario to which our results apply, namely regularizing inverse problems using deep generative priors. Approaches that regularize inverse problems using deep generative models BID1 have empirically been shown to improve over sparsity-based approaches; see the literature for a review of applications in imaging and for an application in Magnetic Resonance Imaging showing a significant performance improvement over conventional methods. Consider an inverse problem where the goal is to reconstruct an unknown vector y* ∈ R^n from m < n noisy linear measurements:

y = A y* + η,

where A ∈ R^{m × n} is called the measurement matrix and η is zero-mean Gaussian noise with covariance matrix (σ²/n) I, as before. As before, assume that y* lies in the range of a generative prior G, i.e., y* = G(x*) for some x*.
As a way to recover x˚, consider minimizing the empirical risk objective f pxq " 1 2}AGpxq´z}, using Algorithm 1, with Step 6 substituted bỹ v xi " pAΠ DISPLAYFORM1 t pAGpx i q´yq, to account for the fact that measurements were taken with the matrix A.Suppose that A is a random projection matrix, for concreteness assume that A has i.i.d. Gaussian entries with variance 1{m. One could prove an analogous as Theorem 1, but with ω " DISPLAYFORM2 . . . n d q, (note that n has been replaced by m). This extension shows that, provided is chosen sufficiently small, that our algorithm yields an iterate x i obeying DISPLAYFORM3 where again À absorbs factors logarithmic in the n i's, and polynomial in d. Proving this would be analogous to the proof of Theorem 1, but with the additional assumption that the sensing matrix A acts like an isometry on the union of the ranges of DISPLAYFORM4,xi, analogous to the proof in . This extension of our shows that Algorithm 1 enables solving inverse problems under noise efficiently, and quantifies the effect of the noise.2 A formula for Mx Øŷ is as follows. If θ0 " =px,ŷq P p0, πq and R is a rotation matrix such thatx andŷ map to e1 and cos θ0¨e1`sin θ0¨e2 respectively, then Mx Øŷ " R t¨c os θ0 sin θ0 0 sin θ0´cos θ0 0 0 0 0 k´2‚ R, where 0 k´2 is a k´2ˆk´2 matrix of zeros. If θ0 " 0 or π, then Mx Øŷ "xx t or´xx t, respectively. 3 To do this calculation, take x " e1 and y " cos θ0¨e1`sin θ0¨e2 without loss of generality. Then each entry of the matrix can be determined analytically by an integral that factors in polar coordinates. We hasten to add that the paper BID1 ) also derived an error bound for minimizing empirical loss. However, the corresponding (for example Lemma 4.3) differs in two important aspects to our . First, the in BID1 only makes a statement about the minimizer of the empirical loss and does not provide justification that an algorithm can efficiently find a point near the global minimizer. As the program is non-convex, and as non-convex optimization is NP-hard in general, the empirical loss could have local minima at which algorithms get stuck. In contrast, the present paper presents a specific practical algorithm and proves that it finds a solution near the global optimizer regardless of initialization. Second, the in BID1 considers arbitrary noise η and thus can not assert denoising performance. In contrast, we consider a random model for the noise, and show the denoising behavior that the ing error is no more than Opk{nq, as opposed to}η} 2 « Op1q, which is what we would get from direct application of the in BID1. In this section we provide experimental evidence that corroborates our theoretical claims that denoising with deep priors achieves a denoising rate proportional to σ 2 k{n. We consider both a synthetic, random prior, as studied theoretically in the paper, as well as a prior learned from data. All our are reproducible with the code provided in the supplement. We start with a synthetic generative network prior with ReLu-activation functions, and draw its weights independently from a Gaussian distribution. We consider a two-layer network with n " 1500 neurons in the output layer, 500 in the middle layer, and vary the number of input neurons, k, and the noise level, σ. We next present simulations showing that if k is sufficiently small, our algorithm achieves a denoising rate proportional to σk{n as guaranteed by our theory. 
Towards this goal, we generate Gaussian inputs x˚to the network and observe the noisy image y " Gpx˚q`η, η " N p0, σ 2 {nIq. From the noisy image, we first obtain an estimatex of the latent representation by running Algorithm 1 until convergence, and second we obtain an estimate of the image asŷ " Gpxq. In the left and middle panel of Figure 3, we depict the normalized mean squared error of the latent representation, MSEpx, x˚q, and the mean squared error in the image domain, MSEpGpxq, Gpx˚qq, where we defined MSEpz, z 1 q "}z´z 1 } 2. For the left panel, we fix the noise variance to σ 2 " 0.25, and vary k, and for the middle panel we fix k " 50 and vary the noise variance. The show that, if the network is sufficiently expansive, guaranteed by k being sufficiently small, then in the noiseless case (σ 2 " 0), the latent representation and image are perfectly recovered. In the noisy case, we achieve a MSE proportional to σ 2 k{n, both in the representation and image domains. We also observed that for the problem instances considered here, the negation trick in step 3-4 of Algorithm 1 is often not necessary, in that even without that step the algorithm typically converges to the global minimum. Having said this, in general the negation step is necessary, since there exist problem instances that have a local minimum opposite of x˚. We next consider a prior learned from data. Technically, for such a prior our theory does not apply since we assume the weights to be chosen at random. However, the numerical presented in this section show that even for the learned prior we achieve the rate predicted by our theory pertaining to a random prior. Towards this goal, we consider a fully-connected autoencoder parameterized by k, consisting of an decoder and encoder with ReLu activation functions and fully connected layers. We choose the number of neurons in the three layers of the encoder as 784, 400, k, and those of the decoder as k, 400, 784. We set k " 10 and k " 20 to obtain two different autoencoders. We train both autoencoders on the MNIST training set. We then take an image y˚from the MNIST test set, add Gaussian noise to it, and denoise it using our method based on the learned decoder-network G for k " 10 and k " 20. Specifically, we estimate As suggested by the theory pertaining to decoders with random weights, if k is sufficiently small, and thus the network is sufficiently expansive, the denoising rate is proportional to σ 2 k{n. Right panel: Denoising of handwritten digits based on a learned decoder with k " 10 and k " 20, along with the least-squares fit as dotted lines. The learned decoder with k " 20 has more parameters and thus represents the images with a smaller error; therefore the MSE at σ " 0 is smaller. However, the denoising rate for the decoder with k " 20, which is the slope of the curve is larger as well, as suggested by our theory. the latent representationx by running Algorithm 1, and then setŷ " Gpxq. See Figure 2 for a few examples demonstrating the performance of our approach for different noise levels. We next show that this achieves a mean squared error (MSE) proportional to σ 2 k{n, as suggested by our theory which applies for decoders with random weights. We add noise to the images with noise variance ranging from σ 2 " 0 to σ 2 " 6. In the right panel of Figure 3 we show the MSE in the image domain, MSEpGpxq, Gpx˚qq, averaged over a number of images for the learned decoders with k " 10 and k " 20. 
We observe an interesting tradeoff: The decoder with k " 10 has fewer parameters, and thus does not represent the digits as well, therefore the MSE is larger than that for k " 20 for the noiseless case (i.e., for σ " 0). On the other hand, the smaller number of parameters in a better denoising rate (by about a factor of two), corresponding to the steeper slope of the MSE as a function of the noise variance, σ 2. Theorem 2. Consider a network with the weights in the i-th layer, W i P R niˆni´1, i.i.d. N p0, 1{n i q distributed, and suppose that the network satisfies the expansivity condition for some ď K{d 90 . Also, suppose that the noise variance obeys DISPLAYFORM0 Consider the iterates of Algorithm 1 with stepsize α " K 4 DISPLAYFORM1 Then, there exists a number of steps N upper bounded by DISPLAYFORM2 d}x˚} such that after N steps, the iterates of Algorithm 1 obey DISPLAYFORM3 with probability at least 1´2e´2 DISPLAYFORM4. are numerical constants, and x 0 is the initial point in the optimization. As mentioned in Section 4.1, our proof makes use of a deterministic condition, called the Weight Distribution Condition (WDC), formally defined in Section 4.1. The following proposition establishes that the expansivity condition ensures that the WDC holds: Lemma 3 (Lemma 9 in ). Fix P p0, 1q. If the entires of W i P R niˆni´1 are i.i.d. N p0, 1{n i q and the expansivity condition n i ą c ´2 logp1{ qn i´1 log n i´1 holds, then W i satisfies the WDC with constant with probability at least 1´8n i e´K 2 ni´1 . Here, c and K are numerical constants. We note that the form of dependence of n i on can be read off the proofs of Lemma 10 in . It follows from Lemma 3, that the WDC holds for all W i with probability at least 1´ř DISPLAYFORM5 In the remainder of the proof we work on the event that the WDC holds for all W i. Recall that the goal of our algorithm is to minimize the empirical risk objective DISPLAYFORM0 where y:" Gpx˚q`η, with η " N p0, σ 2 {nIq. Our rely on the fact that outside of two balls around x " x˚and x "´ρ d x˚, with ρ d a constant defined below, the direction chosen by the algorithm is a descent direction, with high probability. Towards this goal, we use a concentration argument, similar to the arguments used in . First, define Λ x:" Π 1 i"d W i,`,x (with W i,`,x defined in Section 3) for notational convenience, and note that the step direction of our algorithm can be written as DISPLAYFORM1 Note that at points x where G (and hence f) is differentiable, we have thatṽ x " ∇f pxq. The proof is based on showing thatṽ x concentrates around a particular h x P R k, defined below, that is a continuous function of nonzero x, x˚and is zero only at x " x˚and x "´ρ d x˚. The definition of h x depends on a function that is helpful for controlling how the operator x Þ Ñ W`, x x distorts angles, defined as: DISPLAYFORM2 π´θq cos θ`sin θ π¯.With this notation, we define DISPLAYFORM3 where θ 0 " =px, x˚q and θ i " gpθ i´1 q. Note that h x is deterministic and only depends on x, x˚, and the number of layers, d. In order to bound the deviation ofṽ x from h x we use the following two lemmas, bounding the deviation controlled by the WDC and the deviation from the noise: Lemma 4 (Lemma 6 in ). Suppose that the WDC holds with ă 1{p16πd 2 q 2 . Then, for all nonzero x, x˚P R k, DISPLAYFORM4 @ pΠ DISPLAYFORM5 DISPLAYFORM6 Proof. Equation FORMULA34 and FORMULA35 are Lemma 6 in . Regarding FORMULA36, note that the WDC implies that }W i,`,x } 2 ď 1{2` . 
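The angle map g that appears in the definition of h_x can be explored numerically. The closed form used below, g(theta) = arccos(((pi - theta) cos theta + sin theta)/pi), is our reading of the garbled display above and should be treated as an assumption; it is at least consistent with the bounds pi/(i+1) <= theta_i <= 3pi/(i+3) derived later for the sequence started at theta_0 = pi.

```python
# Numerical sketch of the angle map g and the sequence theta_i = g(theta_{i-1})
# started at theta_0 = pi.  The closed form of g is an assumption inferred from
# the text; the bracketed bounds are the ones quoted later in the appendix.
import numpy as np

def g(theta):
    return np.arccos(((np.pi - theta) * np.cos(theta) + np.sin(theta)) / np.pi)

theta = np.pi
for i in range(1, 11):
    theta = g(theta)
    lower_b, upper_b = np.pi / (i + 1), 3 * np.pi / (i + 3)
    inside = lower_b - 1e-12 <= theta <= upper_b + 1e-12
    print(f"i={i:2d}  theta_i={theta:.4f}  bounds [{lower_b:.4f}, {upper_b:.4f}]  ok={inside}")
```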
It follows that DISPLAYFORM7 where the last inequalities follow by our assumption on .Lemma 5. Suppose the WDC holds with ă 1{p16πd 2 q 2, that any subset of n i´1 rows of W i are linearly independent for each i, and that η " N p0, σ 2 {nIq. Then the event DISPLAYFORM8, ω :" DISPLAYFORM9 holds with probability at least 1´2e´2 k log n .As the cost function f is not differentiable everywhere, we will make use of the generalized subdifferential in order to reference the subgradients at nondifferentiable points. For a Lipschitz functionf defined from a Hilbert space X to R, the Clarke generalized directional derivative off at the point x P X in the direction u, denoted byf o px; uq, is defined byf o px; uq " lim sup yÑx,tÓ0fpy`tuq´f pyq t, and the generalized subdifferential off at x, denoted by Bf pxq, is defined byBf pxq " tv P R k | xv, uy ďf o px; uq, for all u P X u. Since f pxq is a piecewise quadratic function, we have DISPLAYFORM10 where conv denotes the convex hull of the vectors v 1, . . ., v t, t is the number of quadratic functions adjoint to x, and v i is the gradient of the i-th quadratic function at x. Lemma 6. Under the assumption of Lemma 5, and assuming that E noise holds, we have that, for any x ‰ 0 and any v x P Bf pxq, DISPLAYFORM11 . In particular, this holds for the subgradient v x "ṽ x .Proof. By, Bf pxq " convpv 1,... v t q for some finite t, and thus v x " a 1 v 1`... a t v t for some a 1,..., a t ě 0, ř i a i " 1. For each v i, there exists a w such that v i " lim tÓ0ṽx`tw. On the event E noise, we have that for any x ‰ 0, for anyṽ x P Bf pxq DISPLAYFORM12, where the last inequality follows from Lemmas 4 and 5 above. The proof is concluded by appealing to the continuity of h x with respect to nonzero x, and by noting that DISPLAYFORM13 where we used the inequality above and that ř i a i " 1.We will also need an upper bound on the norm of the step direction of our algorithm: Lemma 7. Suppose that the WDC holds with ă 1{p16πd 2 q 2 and that the event E noise holds with DISPLAYFORM14 . Then, for all x, DISPLAYFORM15 where K is a numerical constant. Proof. Define for convenience ζ j " DISPLAYFORM16 . We have DISPLAYFORM17 ď dK 2 d maxp}x}, }x˚}q, where the second inequality follows from the definition of h x and Lemma 6, the third inequality uses |ζ j | ď 1, and the last inequality uses the assumption ω ď 2´d {2}x˚} 8π. We are now ready to prove Theorem 2. The logic of the proof is illustrated in Figure 4. Recall that x i is the ith iterate of x as per Algorithm 1. We first ensure that we can assume throughout that x i is bounded away from zero: Lemma 8. Suppose that WDC holds with ă 1{p16πd 2 q 2 and that E noise holds with ω in DISPLAYFORM0. Moreover, suppose that the step size in Algorithm 1 satisfies 0 ă α ă DISPLAYFORM1 In particular, if α " K2 d {d 2, then N is bounded by a constant times d 4 .We can therefore assume throughout this proof that x i R Bp0, K 0}x˚}q, K 0 " 1 32π. We prove Theorem 2 by showing that if }h x } is sufficiently large, i.e., if the iterate x i is outside of set DISPLAYFORM2 Sβ Sβ Figure 4: Logic of the proof: Starting at an arbitrary point, Algorithm 1 moves away from 0, at least till its iterates are outside the gray ring, as 0 is a local maximum; and once an iterate x i leaves the gray ring around 0, all subsequent iterates will never be in the white circle around 0 again (see Lemma 8). 
Then the algorithm might move towards´ρ d x˚, but once it enters the dashed ball around´ρ d x˚, it enters a region where the function value is strictly larger than that of the dashed ball around x˚, by Lemma 10. Thus steps 3-5 of the algorithm will ensure that the next iterate x i is in the dashed ball around x˚. From there, the iterates will move into the region Sβ, since outside of Sβ Y Sβ the algorithm chooses a descent direction in each step (see the argument around equation FORMULA0). The region Sβ is covered by a ball of radius r, by Lemma 9, determined by the noise and. DISPLAYFORM3 then the algorithm makes progress in the sense that f px i`1 q´f px i q is smaller than a certain negative value. The set S β is contained in two balls around x˚and´ρx˚, whose radius is controlled by β:Lemma 9. For any β ď 1 64 2 d 12, DISPLAYFORM4 Here, ρ d ą 0 is defined in the proof and obeys ρ d Ñ 1 as d Ñ 8.Note that by the assumption ω ď DISPLAYFORM5 and Kd? ď 1, our choice of β in obeys β ď 1 64 2 d 12 for sufficiently small K 1, K, and thus Lemma 9 yields: DISPLAYFORM0 were we define the radius r " DISPLAYFORM1 d{2, where K 2, K 3 are numerical constants. Note that hat the radius r is equal to the right hand side in the error bound in our theorem. In order to guarantee that the algorithm converges to a ball around x˚, and not to that around´ρ d x˚, we use the following lemma:Lemma 10. Suppose that the WDC holds with ă 1{p16πd 2 q 2 . Moreover suppose that E noise holds, and that ω in the event E noise obeys ω 2´d {2}x˚}2 ď K 9 {d 2, where K 9 ă 1 is a universal constant. Then for any φ d P rρ d, 1s, it holds that DISPLAYFORM2 for all x P Bpφ d x˚, K 3 d´1 0}x˚}q and y P Bp´φ d x˚, K 3 d´1 0 }x˚}q, where K 3 ă 1 is a universal constant. In order to apply Lemma 10, define for convenience the two sets:Sβ:"S β X Bpx˚, rq, and DISPLAYFORM3 By the assumption that Kd? ď 1 and ω ď K 1 d´1 6 2´d {2}x˚}, we have that for sufficiently small K 1, K, DISPLAYFORM0 Thus, the assumptions of Lemma 10 are met, and the lemma implies that for any x P Sβ and y P Sβ, it holds that f pxq ą f pyq. We now show that the algorithm converges to a point in Sβ. This fact and the negation step in our algorithm (line 3-5) establish that the algorithm converges to a point in Sβ if we prove that the objective is nonincreasing with iteration number, which will form the remainder of this proof. Consider i such that x i R S β. By the mean value theorem (, Theorem 8.13), there is a t P r0, 1s such that forx i " x i´t αṽ xi there is a vx i P Bf px i q, where Bf is the generalized subdifferential of f, obeying DISPLAYFORM1 In the next subsection, we guarantee that for any t P r0, 1s, vx i withx i " x i´t αṽ xi is close toṽ xi: DISPLAYFORM2 Applying FORMULA0 to FORMULA0 yields DISPLAYFORM3 where we used that αK 7 DISPLAYFORM4 12, by our assumption on the stepsize α being sufficiently small. Thus, the maximum number of iterations for which x i R S β is f px 0 q12{pα min i}ṽ xi } 2 q. We next lower-bound }ṽ xi }. We have that on E noise, for all x R S β, with β given by. DISPLAYFORM5 where the second inequality follows by the definition of S β and Lemma 6, and the third inequality follows from our definition of β in. Thus, In order to conclude our proof, we remark that once x i is inside a ball of radius r around x˚, the iterates do not leave a ball of radius 2r around x˚. To see this, note that by and our choice of stepsize, DISPLAYFORM6 This concludes our proof. 
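One ingredient used repeatedly in the lemma proofs that follow, notably in the proof of Lemma 5, is that projecting N(0, sigma^2/n I_n) noise onto a k-dimensional subspace gives a squared norm concentrating around sigma^2 k/n. A quick Monte Carlo check of that scaling, with an arbitrary fixed subspace standing in for the range of Lambda_x and with illustrative dimensions and trial counts, looks as follows.

```python
# Sketch: empirical check that projecting N(0, sigma^2/n I_n) noise onto a
# k-dimensional subspace yields a squared norm concentrating around sigma^2*k/n.
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma2, trials = 1500, 20, 1.0, 2000

B, _ = np.linalg.qr(rng.normal(size=(n, k)))     # orthonormal basis of a k-dim subspace
vals = []
for _ in range(trials):
    eta = rng.normal(0.0, np.sqrt(sigma2 / n), size=n)
    p = B @ (B.T @ eta)                          # projection P eta
    vals.append(float(p @ p))

print("mean ||P eta||^2 :", np.mean(vals))
print("sigma^2 * k / n  :", sigma2 * k / n)
```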
The remainder of the proof is devoted to prove the lemmas used in this section. We next prove the implication. Consider x i R Bp0, 2K 0 }x˚}q, and note that DISPLAYFORM7 where the second inequality follows from, the third inequality from }x i } ě 2K 0 }x˚}, and finally the last inequality from our assumption on the stepsize α. This concludes the proof of.Proof of: It remains to prove. We start with proving x,ṽ x ă 0. For brevity of notation, let Λ z " DISPLAYFORM8 The first inequality follows from FORMULA35 and FORMULA36, and the second inequality follows from our assumption on ω. Therefore, for any x P Bp0, 1 16π }x˚}q, x,ṽ x ă 0, as desired. We next show that, for any x P Bp0, DISPLAYFORM9 where the second inequality is from FORMULA35 and FORMULA36. This concludes the proof of.A.5 PROOF OF LEMMA 5 DISPLAYFORM10 where P Λx is a projector onto the span of Λ x. As a consequence, }P Λx η} 2 is χ 2 -distributed random variable with k-degrees of freedom scaled by σ{n. A standard tail bound (see (?, p. 43)) yields that, for any β ě k, DISPLAYFORM11 Next, we note that by applying Lemmas 13-14 from (, Proof of Lem. 15)) 4, with probability one, that the number of different matrices Λ x can be bounded as DISPLAYFORM12 where the second inequality holds for logp10q ď k{4 logpn 1 q. To see this, note that pn DISPLAYFORM13 is implied by kpd logpn 1 q`pd´1q logpn 2 q`. . . logpn d qq ě kd 2 {4 logpn 1 q ě d 2 logp10q. Thus, by the union bound, DISPLAYFORM14 where n " n d . Recall from that }Λ x } ď 13 12. Combining this inequality with }q x } 2 ď }Λ x } 2 }P Λx η} 2 concludes the proof. A.6 PROOF OF LEMMA 9We now show that h x is away from zero outside of a neighborhood of x˚and´ρ d x˚. We prove Lemma 9 by establishing the following: Lemma 12. Suppose 64d 6? β ď 1. Define DISPLAYFORM15 where q θ 0 " π and q DISPLAYFORM16 DISPLAYFORM17 Additionally, DISPLAYFORM18 Proof. Without loss of generality, let }x˚} " 1, x˚" e 1 andx " r cos θ 0¨e1`r sin θ 0¨e2 for θ 0 P r0, πs. Let x P S β.First we introduce some notation for convenience. Let DISPLAYFORM19 π´θ j π, r " }x} 2, M " maxpr, 1q. DISPLAYFORM20 By inspecting the components of h x, we have that x P S β implies |´ξ`cos θ 0 pr´ζq| ď βM DISPLAYFORM21 Now, we record several properties. We have:θ i P r0, π{2s for i ě 1 DISPLAYFORM22 DISPLAYFORM23 θ 0 " π`O 1 pδq ñ θ i " q θ i`O1 piδq θ 0 " π`O 1 pδq ñ |ξ| ď δ π DISPLAYFORM24 We now establish. Observe 0 ă gpθq ď`1 3π`1 θ˘´1 ":gpθq for θ P p0, πs. As g andg are monotonic increasing, we have q θ i " g˝ip q θ 0 q " g˝ipπq ďg˝ipπq "`i 3π`1 π˘´1 " 3π i`3. Similarly, gpθq ě p 1 π`1 θ q´1 implies that q θ i ě π i`1, establishing.We now establish. Using and θ i ď q θ i, we have DISPLAYFORM25 where the last inequality can be established by showing that the ratio of consecutive terms with respect to d is greater for the product in the middle expression than for d´3.We establish by using the fact that |g 1 pθq| ď 1 for all θ P r0, πs and using the same logic as for (, Eq. 17).We now establish. As θ 0 " π`O 1 pδq, we have θ i " q θ i`O1 piδq. Thus holds. Next, we establish that x P S β ñ r ď 4d, and thus M ď 4d. ď 4d. Thus, we have x P S β ñ r ď 4d ñ M ď 4d. Next, we establish that we only need to consider the small angle case (θ 0 « 0) and the large angle case (θ 0 « π), by considering the following three cases:(Case I) sin θ 0 ď 16d 4 β: We have θ 0 " O 1 p32d 4 βq or θ 0 " π`O 1 p32d 4 βq, as 32d 4 β ă 1.. Using this inequality in, we have |ξ| ď βM`β? 
βq, as θ 0 ě π{2 and as β ă 1.At least one of the Cases I,II, or III hold. Thus, we see that it suffices to consider the small angle case θ 0 " O 1 p32d 4 βq or the large angle case θ 0 " π`O 1 p8πd 4 ? βq. Small Angle Case. Assume θ 0 " O 1 pδq with δ " 32d 4 β. As θ i ď θ 0 ď δ for all i, we have 1 ě ξ ě p1´δ π q d " 1`O 1 p 2δd π q provided δd{π ď 1{2 (which holds by our choice δ " 32d 4 β by A.8 PROOF OF LEMMA 13It holds that}x´y} ě 2 sinpθ x,y {2q minp}x}, }y}q, @x, y sinpθ{2q ě θ{4, @θ P r0, πs d dθ gpθq P r0, 1s @θ P r0, πslogp1`xq ď x @x P r´0.5, 1s logp1´xq ě´2x @x P r0, 0.75s where θ x,y " =px, yq. We recall the ,, and FORMULA30 | By analyzing an algorithms minimizing a non-convex loss, we show that all but a small fraction of noise can be removed from an image using a deep neural network based generative prior. | 1,053 | scitldr |
Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all. Recent successes of deep neural networks have spanned many domains, from computer vision BID16 to speech recognition BID7 and many other tasks. Convolutional networks excel at tasks related to vision, while recurrent neural networks have proven successful at natural language processing tasks, e.g., at machine translation BID31 BID2. But in each case, the network was designed and tuned specifically for the problem at hand. This limits the impact of deep learning, as this effort needs to be repeated for each new task. It is also very different from the general nature of the human brain, which is able to learn many different tasks and benefit from transfer learning. The natural question arises: Can we create a unified deep learning model to solve tasks across multiple domains? The question about multi-task models has been studied in many papers in the deep learning literature. Natural language processing models have been shown to benefit from a multi-task approach a long time ago BID5, and recently multi-task machine translation models have even been shown to exhibit zero-shot learning when trained on multiple languages. Speech recognition has also been shown to benefit from multi-task training BID27, as have some vision problems, such as facial landmark detection BID36. But all these models are trained on other tasks from the same domain: translation tasks are trained with other translation tasks, vision tasks with other vision tasks, speech tasks with other speech tasks. Multi-modal learning has been shown to improve learned representations in the unsupervised setting BID22 and when used as a priori known unrelated tasks BID24. But no competitive multi-task multi-modal model has been proposed, so the above question remains unanswered. In this work, we take a step toward positively answering the above question by introducing the MultiModel architecture, a single deep-learning model that can simultaneously learn multiple tasks from various domains. Concretely, we train the MultiModel simultaneously on the following 8 corpora: Code available at redacted. WSJ speech corpus, used for sentence-level speech recognition. ImageNet dataset BID25, used for image classification. COCO image captioning dataset BID17, used for image captioning. WSJ parsing dataset BID18, used for constituency parsing.
These corpora were chosen as they are commonly used for machine learning the respective tasks: speech-to-text, image classification, captioning, parsing and translation. The model learns all of these tasks and achieves good performance: not state-of-the-art at present, but above many task-specific models studied in recent past (see the Section 3 for details). FIG0 illustrates some decodes taken directly from the model: it is clear that it can caption images, categorize them, translate to French and German and construct parse trees. While the MultiModel is only a first step and will be improved in the future, two key insights are crucial to making it work at all and are our main contributions. Small modality-specific sub-networks convert into a unified representation and back from it. To allow training on input data of widely different sizes and dimensions, such as images, sound waves and text, we need sub-networks to convert inputs into a joint representation space. We call these sub-networks modality nets as they are specific to each modality (images, speech, text) and define transformations between these external domains and a unified representation. We design modality nets to be computationally minimal, promoting heavy feature extraction and ensuring that the majority of computation is performed within the domain-agnostic body of the model. Since our model is auto-regressive, modality nets need to both convert the inputs into the unified representation and later convert from this representation into the output space. Two design decisions were important:• The unified representation is variable-size. While a fixed-size representation is tempting and easier to implement, it creates a bottleneck and limits the performance of the model. • Different tasks from the same domain share modality nets. We avoid creating a sub-network for every task, and prefer only to create one for every input modality. For example, all translation tasks share the same modality-net (and vocabulary), no matter for which language pair. This encourages generalization across tasks and allows to add new tasks on the fly. Computational blocks of different kinds are crucial for good on various problems. The body of the MultiModel incorporates building blocks from mutiple domains. We use depthwiseseparable convolutions, an attention mechanism, and sparsely-gated mixture-of-experts layers. These blocks were introduced in papers that belonged to different domains and were not studied before on tasks from other domains. For example, separable convolutions were introduced in the Xception Figure 2: The MultiModel, with modality-nets, an encoder, and an autoregressive decoder.architecture BID4 and were not applied to text or speech processing before. On the other hand, the sparsely-gated mixture-of-experts BID29 had been introduced for language processing tasks and has not been studied on image problems. We find that each of these mechanisms is indeed crucial for the domain it was introduced, e.g., attention is far more important for languagerelated tasks than for image-related ones. But, interestingly, adding these computational blocks never hurts performance, even on tasks they were not designed for. In fact we find that both attention and mixture-of-experts layers slightly improve performance of MultiModel on ImageNet, the task that needs them the least. The MultiModel consists of a few small modality-nets, an encoder, I/O mixer, and an autoregressive decoder, as depicted in Figure 2. 
As already said above, the encoder and decoder are constructed using 3 key computational blocks to get good performance across different problems: Convolutions allow the model to detect local patterns and generalize across space. Attention layers allow to focus on specific elements to improve performance of the model. Sparsely-gated mixture-of-experts gives the model capacity without excessive computation cost. We start by describing the architecture of each of these 3 blocks and then introduce the encoder, decoder and the architecture of our modality-nets. To perform local computation, we use blocks of convolutions with ReLU non-linearities and normalization. A block of convolutions gets as input a tensor of shape [batch size, sequence length, feature channels] and returns a tensor of the same shape, processed as follows. For convolution operations, we use depthwise separable convolutions, studied for images in BID4, in a way similar to BID12. Depthwise separable convolutions are a parameterand computationally-efficient variant of the traditional convolution. They are defined by a convolution on each feature channel separately, followed by a pointwise convolution to project to the desired feature depth. We refer the reader to BID4 for a complete definition; here we will denote a depthwise separable convolution with weights W h×w corresponding to f kernels of size h × w applied to an input tensor x with stride s and dilated by a factor d (see BID35) as SepConv d,s,f (W, x). Note that subscripts for stride, dilation and output size are omitted when dilation d or stride s are equal to 1, or output size f is equal to the input's feature depth. We use convolutions in blocks that consist of three components: a ReLU activation of the inputs, followed by a SepConv, followed by layer normalization. Layer normalization BID1 acts over the h hidden units of the layer below, computing layer-wise statistics for each batch example and normalizing accordingly. These normalized units are then scaled and shifted by scalar learned parameters G and B respectively, producing the final units to be activated by a non-linearity. The complete convolution step is therefore defined as: DISPLAYFORM0 The convolutional steps are composed into blocks by stacking them and adding residual connections BID9 as depicted in FIG3. We use stacks of four convolutional blocks with two skip-connections between the stack input and the outputs of the second and fourth convolutional steps, and with the first two having 3 × 1 kernels and the next two having 15 × 1 kernels, with the final one dilated by 8 to provide a wide receptive field. We also add 40% dropout at the end of each block, so the complete block is defined as follows: DISPLAYFORM1 For attention, we use a multi-head dot-product attention mechanism inspired by BID2 and similar to , as depicted in FIG3. The inputs to the attention layer are two tensors: a source tensor and a target tensor both with the shape [batch size, sequence length, feature channels] The target tensor is additively composed with a timing signal and mixed using two convolutional blocks. This mixed tensor is then self-attended using a multi-head dot-product attention, which is a dot-product attention with inputs split into g = 8 separate tensors representing each attention head, as shown in FIG3. The timing signals are the main difference between this attention mechanism and the ones used previously. They allow this content-based attention to focus based on their position. 
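As a concrete reference for the convolutional step and block defined above, here is a rough sketch assuming a PyTorch implementation (the system described later in the paper was built in TensorFlow). The kernel sizes, dilation, residual placement, and dropout rate follow the description above; the channel count, padding choices, and module organisation are illustrative assumptions.

```python
# Sketch (assumed PyTorch) of ConvStep / ConvBlock operating on tensors shaped
# [batch, length, channels], as described in the text.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SepConv1d(nn.Module):
    """Depthwise separable 1-D convolution that preserves sequence length."""
    def __init__(self, channels, kernel, dilation=1):
        super().__init__()
        pad = (kernel - 1) * dilation // 2
        self.depthwise = nn.Conv1d(channels, channels, kernel, padding=pad,
                                   dilation=dilation, groups=channels)
        self.pointwise = nn.Conv1d(channels, channels, 1)

    def forward(self, x):                      # x: [batch, length, channels]
        x = x.transpose(1, 2)                  # Conv1d expects [batch, channels, length]
        x = self.pointwise(self.depthwise(x))
        return x.transpose(1, 2)

class ConvStep(nn.Module):
    """ReLU -> SepConv -> LayerNorm, as in the ConvStep definition above."""
    def __init__(self, channels, kernel, dilation=1):
        super().__init__()
        self.conv = SepConv1d(channels, kernel, dilation)
        self.norm = nn.LayerNorm(channels)     # learned scale and shift over hidden units

    def forward(self, x):
        return self.norm(self.conv(F.relu(x)))

class ConvBlock(nn.Module):
    """Four ConvSteps, residuals from the block input after steps 2 and 4, 40% dropout."""
    def __init__(self, channels):
        super().__init__()
        self.step1 = ConvStep(channels, 3)
        self.step2 = ConvStep(channels, 3)
        self.step3 = ConvStep(channels, 15)
        self.step4 = ConvStep(channels, 15, dilation=8)   # wide receptive field
        self.drop = nn.Dropout(0.4)

    def forward(self, x):
        h = self.step2(self.step1(x)) + x      # first skip connection from block input
        h = self.step4(self.step3(h)) + x      # second skip connection from block input
        return self.drop(h)

block = ConvBlock(channels=64)
out = block(torch.randn(2, 50, 64))            # [batch=2, length=50, channels=64]
print(out.shape)                               # torch.Size([2, 50, 64])
```

The timing signals that the attention block adds to the target tensor are described next.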
They are constructed by concatenating sine and cosine curves: The source tensor is finally passed through two different pointwise convolutions to generate the memory keys K and values V and the query keys, memory keys and memory values are used to apply the attention mechanism between the self-attended target and the source (see FIG3). DISPLAYFORM0 We use sparsely-gated mixture-of-experts layers of the same kind as introduced in BID29: A mixture-of-experts layer consists of a number of simple feed-forward neural networks (experts) and a trainable gating network which selects a sparse combination of the experts to process each input. We refer the reader to BID29 for details as we use exactly the architecture described there. In particular, during training we select k = 4 experts out of the whole expert pool and add the additional load-balancing cost as in BID29. In each of the two mixture-of-experts layers in our model, we use a pool of 240 experts when training on 8 problems jointly, and 60 experts when training on each problem separately. The body of the MultiModel consists of 3 parts: the encoder that only processes the inputs, the mixer that mixes the encoded inputs with previous outputs (autoregressive part), and a decoder that processes the inputs and the mixture to generate new outputs. The encoder, mixer and decoder are structured similarly to previous fully convolutional sequence to sequence models such as ByteNet or WaveNet (van den), but differ in the computational blocks that are used. We depict their architecture in FIG3. As can be seen there, the encoder consists of 6 repeated convolutional blocks (described before) with a mixture-of-experts layer in the middle. The mixer consists of an attention block and 2 convolutional blocks. The decoder consists of 4 blocks of convolutions and attention, with a mixture-of-experts layer in the middle. Crucially, the convolutions in the mixer and decoder are padded on the left, so they can never access any information in the future. This allows the model to be autoregressive, and this convolutional autoregressive generation scheme offers large receptive fields over the inputs and past outputs, which are capable of establishing long term dependencies. To allow the decoder to produce outputs for different tasks even with the same modality, we always start decoding with a command-token, such as To-English or To-Parse-Tree. We learn an embedding vector corresponding to each of the tokens during training. We have 4 modality nets, for language (text data), images, audio, and categorical data. For all predictions, we use the cross-entropy loss, per subword-unit on text, per category on classification. Our language-based data is all tokenized using the same vocabulary with 8k subword-units, following the method from BID28. The language input modality takes a sequence of tokens ending in a termination token. This sequence of tokens is mapped to the correct dimensionality for the body using a learned embedding. On the output side, the language modality takes the decoded output of the body and performs a learned linear mapping, followed by a Sof tmax, ing in a probability distribution over the token vocabulary. DISPLAYFORM0 The image input modality is analogous to the Xception entry flow BID4. The input image's feature depth is gradually deepened using residual convolution blocks which we call ConvRes and define as follows: DISPLAYFORM0 DISPLAYFORM1 The categorical output modality is analogous to the Xception exit flow BID4. 
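The elided display for the timing signal can be approximated by the usual concatenation of sine and cosine curves at geometrically spaced timescales, sketched below; the base constant of 10^4 and the even split of channels between sines and cosines are assumptions borrowed from common practice, not values stated here.

```python
# Sketch of a sine/cosine timing signal of shape [length, channels], added to the
# target tensor before attention so that content-based attention can use position.
import numpy as np

def timing_signal(length, channels, max_timescale=1.0e4):
    assert channels % 2 == 0
    positions = np.arange(length)[:, None]                        # [length, 1]
    num_scales = channels // 2
    # Geometrically spaced inverse timescales, one per sine/cosine pair.
    inv_timescales = np.exp(-np.log(max_timescale) *
                            np.arange(num_scales) / max(num_scales - 1, 1))
    scaled = positions * inv_timescales[None, :]                   # [length, channels/2]
    return np.concatenate([np.sin(scaled), np.cos(scaled)], axis=1)

signal = timing_signal(length=50, channels=64)
print(signal.shape)          # (50, 64); broadcast-add this to [batch, 50, 64] inputs
```

The categorical output modality introduced just above then continues with the reshaping and pooling steps described next.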
If the network inputs are two-dimensional data such as image or spectral audio data, then the one-dimensional output from the model body is first reshaped into two-dimensions again, followed by progressive down-sampling: DISPLAYFORM0 GlobalAvgP ool denotes a mean taken across all spatial and temporal dimensions. We accept audio input in the form of a 1-dimensional waveform over time BID8 BID23 BID26 or as a 2-dimensional spectrogram. Both the waveform and spectral input modalities use a stack of 8 ConvRes blocks from the ImageInputM odality (Section 2.5.2). The i th block has the form: l i = ConvRes(l i−1, 2 i). The spectral modality does not perform any striding along the frequency bin dimension, preserving full resolution in the spectral domain. The modalities of the MultiModel allows to perform a training step on a batch of data from any of the 8 tasks we consider. For example, when making a training step on a batch of translation data, only the language modality sub-network will be activated. Training will then update the parameters of the language modality and all shared parameters, i.e., those in input encoder, mixer and decoder. MultiModel can be trained on a single machine, but we used distributed training for the multi-task runs. When training jointly on 8 tasks, we had a separate worker training on each task, while the shared parameters of the model were on a parameter server and were updated asynchronously. When training on a single task, we used only a single worker training for a similar number of steps. In all training runs report below we used the same set of hyper-parameters and the Adam optimizer BID15 with gradient clipping. We will release the implementation as open-source together with the details of our setup and all used hyper-parameters. The MultiModel architecture draws from eariler encoder-decoder architectures applied to neural machine translation. Earlier sequence-to-sequence models for translation BID31 BID2 BID32 ).memory cells BID10 ). Convolutional architectures yielded good on word-level neural machine translation starting from BID13 and later in BID20. These early models used a standard RNN on top of the convolution to generate the output and had a bottleneck there that hurt performance, especially on longer sentences, similarly to the limitations of RNN sequence-to-sequence models without attention BID31. Fully convolutional neural machine translation without this bottleneck was presented in BID11. The model in BID11 ) (Extended Neural GPU) used a recurrent stack of gated convolutional layers, while the model in ) (ByteNet) did away with recursion and used left-padded convolutions in the decoder. This idea, introduced in WaveNet (van den and also used in MultiModel (see above) significantly improves efficiency. Depthwise separable convolutions were first studied by Sifre BID30 ) and later they were used to get good on large-scale image classification with Xception BID4. We implemented the MultiModel architecture described above using TensorFlow and trained it in a number of configurations. We focused our experiments so as to answer the following questions: How far is the MultiModel trained on 8 tasks simultaneously from state-of-the-art ? How does training on 8 tasks simultaneously compare to training on each task separately? How do the different computational blocks discussed above influence different tasks?In answering the above questions, we don't always consider all 8 problems. 
Especially the 4 translation problems behave very similarly, so we decided to not include them all in each comparison but we focused on the more varied problems instead. To answer question, we compare the performance of the 8-problem MultiModel with state-of-theart in Table 1. We use the standard top-5 accuracy metric for ImageNet and the standard BLEU metric for translation (scored with MOSES on newstest2014 while newstest2013 was used as the development set). We did not invest much time yet in tuning hyper-parameters of the MultiModel, so we believe that the difference seen there will become much smaller with more tuning. The we achieve are similar to the ones task-specific models get without heavy tuning, e.g., on English-French translation we improve on the recent Extended Neural GPU BID11.To answer question, we compare the MultiModel trained jointly with MultiModel trained separately just on a single task. Since we are comparing different instantiations of the same model, we report two internal metrics: the negative log-perplexity and per-token accuracy (measured on the development set). As can be seen from the in TAB3, the joint 8-problem model performs similarly to single-model on large tasks, and better, sometimes significantly, on tasks where less data is available, such as parsing. The large improvement on parsing seen in TAB3 is not that surprising taking into account the large number of text data in translation tasks. But we were curious if training parsing just with ImageNet, a seemingly unrelated task, would also bring any improvements. This is indeed the case, as can be seen in Table 3. The difference in performance is significant, and since we use both dropout and early stopping, we conjecture that it is not related to over-fitting. Rather, it seems, there are computational primitives shared between different tasks that allow for some transfer learning even between such seemingly unrelated tasks as ImageNet and parsing. Table 3: Results on training parsing alone, with ImageNet, and with 8 other tasks. We report log-perplexity, per-token accuracy, and the percentage of fully correct parse trees. To answer question, we check how training without the mixture-of-experts layers or without the attention mechanism influences performance on different problems. Since both these mechanisms were designed with machine translation in mind, we check the English-French translation. But we also include ImageNet, since this is the problem that stands the least to benefit from those blocks. In fact, one could expect that removing these blocks will improve performance on ImageNet alone if they were truly useless for this task. In contrast, we see in TAB5 that these blocks either don't affect or slightly improve performance. This leads us to conclude that mixing different computation blocks is in fact a good way to improve performance on many various tasks. | Large scale multi-task architecture solves ImageNet and translation together and shows transfer learning. | 1,054 | scitldr |
Machine learning algorithms for controlling devices will need to learn quickly, with few trials. Such a goal can be attained with concepts borrowed from continental philosophy and formalized using tools from the mathematical theory of categories. Illustrations of this approach are presented on a cyberphysical system: the slot car game, and also on Atari 2600 games. There is a growing need for algorithms that control cyberphysical systems to learn with very little data how to operate quickly in a partially-known environment. Many reinforcement-learning (RL) solutions using neural networks (NN) have proved to work well with emulators, for instance with the Atari 1 2600 games BID17, or with real systems such as robots BID11. However, these state-of-the-art approaches need a lot of training data, which may not be obtainable within the allowed time frame or budget. This work thus started as an alternative approach to teach computers to learn quickly to perform as efficiently as the existing solution with approximately one percent of the training data, time, and computing resources. We first review reinforcement learning methods for Markov Decision Processes (MDP) and Partially Observable MDP (POMDP). We then explain the motivation behind our continental-philosophyinspired approach. We describe the two classes of problems on which we focus: the bijective case, which may lead to playing by imitating, and the category-based approach, which should lead to a more innovative behavior of the control algorithms. Both approaches rely on knowledge accumulated during previous experiences, as in Lifelong Machine Learning BID6.These two approaches are illustrated by from both a commercial slot car game controlled by an 8-bit Arduino system, and from Atari 2600 video games running within the Arcade Learning Environment (ALE, see BID1). The development of Artificial Intelligence (AI) owes much to games, which have become one of the classical test-beds for algorithms. Slot car games, for instance, are used to evaluate the performance of decision-making systems. The image-processing, RL-based approach presented in BID11 estimates a car's position on the track thanks to a multilayer perceptron with convolutional layers. It controls the car by applying one out of four possible voltage levels. Training the perceptron takes twelve hours, and learning the control strategy needs another half an hour. The faster solution by BID22 to autonomous slot cars relies on added acceleration sensors and an embedded microcontroller to first create a map of curved and straight tracks. The control algorithm then sets the target velocity and controls it using a Phase-Locked Loop. TM are trademarks of their respective owners and will be written without the trademark symbol for clarity in the remainder of this document. Our system learns to rank with human players in less than a minute, without embedded sensors, for both known and unknown circuits, which correspond to the aforementioned bijective and categorybased approaches. The sensors are a lap counter, and voltage and current from the track. Video games also become increasingly useful for providing a cyber-physical representation of our environment. Indeed, realism turns out to be one of the main focuses for game and character designers (depending on the game intentions). Nevertheless, due to technical limitations and the need for easily described problems, older video-gaming systems, such as the Atari 2600 console, are the go-to systems. 
They provide a wide variety of situations ranging from mazes (Pac-Man-style games) and action games (such as Frogger) to ball-and-paddle games (Breakout, Pong). Although these different problems require varied strategies to be tackled by a standard human player, they all involve decision-making and, therefore, have been modeled as MDPs BID17.This framework allows the implementation of many different methods. Some works, such as BID17 use a rescaled picture of the playing area as an input for a deep Q-network (DQN) in order to select the best action available to the agent. Another possibility is the use of classic searching and planning methods in order to guide the agent, such as the Iterated Width algorithm BID15 or tree search algorithms such as Monte-Carlo Tree Search (MCTS) BID20 to compute the best possible action for the agent. A less common method is Shallow Reinforcement Learning BID14: although this relies on a simpler linear representation, it obtains similar to those of the non-linear approaches. Finally, Apprenticeship Learning BID2 and Inverse RL BID12 can also be used to train a more humanlike agent which is close to the aforementioned methods in terms of efficiency. We will show that our system learns how to play unknown games in a few thousand frames with a score on par with or better than humans. The success of NN is partially due to the very large amount of data that is used, even if the programmer does not know exactly what happens in the NN (like in black box systems). Two problems are thus the huge quantity of data needed, and the training time required. Moreover, the attribution of good coefficients in the learning phase is very difficult -or impossible -to be interpreted, making validation very difficult. NN are able to learn and generalize, but we do not know exactly how. To improve or complete the NN approach, we propose an approach that tries to explain and use explicitly how an AI can learn, extract features, categorize and generalize. To do this, we place the theoretical elements necessary for such high level abilities directly in the method. These abilities may also emerge in NN after many elementary computations (additions, multiplications, comparisons) occur at each artificial neuron. If this is what effectively happens in each biological neuron, we postulate that intelligence also consists in higher level intellectual operations. In other words, we do not want to reduce intelligence to basic computation -even if it is biologically the case. As we do not want higher level abilities to emerge (or not) after long training times, we explicitly place these high level abilities (such as categorizing and generalizing) directly in our theoretical framework. Thus we can follow some aspects of Dreyfus' critique of AI presented in BID8. This author claims that AI researchers should focus more on what human intelligence is in itself and not only refer to the computer model: considering the brain as a computer, and intelligence as the use of software. More precisely, in the case of RL with NN: considering intelligence as a collection of elementary computations that organize themselves after much training to reach a reward goal. We postulate that it could have happened in such a way over the course of human development, but human intelligence has much evolved. It can produce categorization and generalization not merely for a simple reward, but for the goal of understanding. This is what our AI tries to do. 
Dreyfus also often refers to authors such as Heidegger, Husserl or Foucault, whose work later became known as continental philosophy. This name was given by analytic philosophers who were often Anglo-Saxon in origin, the earliest being Russell, Frege and Wittgenstein. Analytic philosophy received much influence from mathematical logic that emerged at the end of the 19th century. It tries to clarify philosophical issues by logical analysis, postulating that only philosophical statements verifiable through empirical observations are meaningful (principle of logical positivism). This principle, according to analytic philosophers, is not respected by continental philosophers. Continental philosophy includes a range of French and German doctrines from the 19th and 20th centuries: German idealism, phenomenology, existentialism (influenced by Kierkegaard and Nietzsche), hermeneutics, structuralism, post-structuralism, psychoanalytic theory and object-oriented ontology. These philosophies are all contrary to the analytic movement. If we had to project AI in this debate (analytic versus continental), we could say that Dreyfus criticizes early AI for favoring the analytic tradition and for neglecting the continental one. Of course machines are computers, and computing is closer in nature to logic than phenomenology, metaphysics or psychoanalysis. Continental philosophy, however, can perhaps help understand and describe what human intelligence is, especially for high level abilities, like learning, categorizing, generalizing and understanding. It could possibly then improve the quality and efficiency of human-intelligence-based AI.To summarize, we propose to design our AI using an approach based on certain elements of continental philosophy. This philosophy is described, for lack of a more precise and widely-accepted definition, in terms of its opposition to analytical philosophy. In the next sections, we will propose some connections between this philosophy and existing mathematical theories. We express the logic of our AI at the level of entities, and not at a sample or at a pixel level. In a way, this is similar to working at the morpheme level in structural linguistics, as defined by de , which is the smallest meaningful unit of a language. That implies to start with an analysis of the sampled signals (in one or two dimensions) to detect entities. These entities are like our everyday life objects: tracks (straights and curves), cars, balls, paddles, walls. They are geometrically organized in a space and can be described by cartesian coordinates. We have defined a distance between them that measures how far two entities E and F are one from one another 2. The data is collected at each sample time so that we can construct a timeline and provide an elementary cinematic newtonian model of the situation. This comes from a very old idea of developmental psychology (see for example the works of BID21) that the child starts his cognitive development by the skill of experiencing the world through movement and senses (Piaget called it the sensorimotor stage). But the perceptive world is not a wild set of disordered primitive sensations. They are organized in objects (we call them entities) that take place in a space and can move during time 3. Thus we do not want to take into account all the samples (voltage and current for the slot car, pixels' colors for images) as the fundamental level of knowledge. 
We shall try to organize them as soon as possible as entities that occur in space and time (and not wait for them to emerge, or not, after a very long learning process). These entities, like the objects of cognitive psychologists, have some properties: relative consistency, continuity of movement, permanence of existence and characteristics (sizes, color, shape). These properties are part of our approach, in the sense that our AI can look for rectangular entities with a particular position, speed and size 4. In more complex games, this rectangular form approach could be too simple, but it is adequate for the Atari 2600 games that we study. One of the main critiques formulated by Dreyfus against the old AI philosophy is the epistemological assumption that claims that all activities can be formalized in terms of predictive rules or laws. In this context, the learning phase consists of determining these rules (that is, their parameters). Then, the system has to apply them by looking for objects or general characteristics of the whole organization of samples, that are like those used in the learning phase. But what about new objects, never seen before, that could appear? Such a strict and trivial application of the epistemological assumption would lead to ignore them. It could also be a principle of precaution to ignore new objects. On the other hand, there could be a principle of curiosity or adventure. Clever machines could be more efficient were they curious as explained in BID19. Referring to the work of Alison Gopnik and Laura Schulz, developmental psychologists at Berkeley and at the Massachusetts Institute of Technology, respectively, it explains that babies naturally gravitate to objects that surprise them rather than to those they are used to, to achieve some extrinsic goal. An AI that only focuses on application of predictive rules will miss the advantages of curiosity. We will use this curiosity to further develop our AI in our next work. If the epistemological assumption of usual AI could be useful for chemistry or physics, because they are context-free, it could be a contradiction in terms with psychology, and behavior understanding. Dreyfus argued that human problem solving depends rather on our sense of the context, that is the natural feeling, understanding or intuition of what is important and interesting given a situation. The world is not just made of objects: it contains subjects. In particular, in the games we consider, there is a representative of what we call the "Me": the entity that is controlled by actions. This point of view allows for a more efficient approach than computing all the possible combinations of the available symbols. This is exactly what we do when we ask our AI to look as soon as possible for some important features (entities and the "Me"). BID7 referred to the Heideggerian concept of Dasein (which means "being there", for a human being confronted with such issues as personhood and mortality), which is a specific way of Being-in-the-world (another Heideggerian concept that considers it as a unity, saying that it is not appropriate to distinguish strictly between the Being and the world that it is in) BID9.In other words, one of the first things that our AI must do is to identify the "Me" from amongst all the listed entities. It is not an implicit potential of a huge number of trainings, like in some RL processes. Moreover, being the "Me" does not mean only to be lead by actions. It also implies to struggle for life. 
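One way to make the entity level concrete for Atari frames is to group same-coloured pixels into bounding boxes and to estimate velocities by matching boxes across consecutive frames. The colour-keyed grouping and nearest-centre matching in the sketch below are simplifying assumptions for illustration, not the exact detector used by the system.

```python
# Sketch: extract rectangular "entities" (one bounding box per non-background
# colour) from a frame and estimate their velocity from the previous frame.
import numpy as np

def find_entities(frame, background=0):
    """frame: 2-D array of colour indices -> list of (colour, x, y, w, h)."""
    entities = []
    for colour in np.unique(frame):
        if colour == background:
            continue
        ys, xs = np.nonzero(frame == colour)
        entities.append((int(colour), int(xs.min()), int(ys.min()),
                         int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)))
    return entities

def with_velocities(prev_entities, entities):
    """Attach (vx, vy) to each entity by matching the nearest same-colour box."""
    out = []
    for colour, x, y, w, h in entities:
        candidates = [(px, py) for pc, px, py, pw, ph in prev_entities if pc == colour]
        if candidates:
            px, py = min(candidates, key=lambda c: (c[0] - x) ** 2 + (c[1] - y) ** 2)
            out.append((colour, x, y, w, h, x - px, y - py))
        else:
            out.append((colour, x, y, w, h, 0, 0))
    return out

frame0 = np.zeros((10, 10), dtype=int); frame0[2:4, 2:4] = 5
frame1 = np.zeros((10, 10), dtype=int); frame1[2:4, 3:5] = 5
print(with_velocities(find_entities(frame0), find_entities(frame1)))
# [(5, 3, 2, 2, 2, 1, 0)] -> the entity moved one pixel to the right
```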
We can say that the "Me" is driven by some life impulses, and that it is attracted to the good objects (that we call friends) and wants to go away from the bad ones (the enemies). Thus postulate that among all the entities, some are friends (those whose contact implies a reward or avoids loss of lives) and some are enemies (those whose contact implies loss of lives). The AI has to distinguish as soon as possible the friends versus the enemies of the "Me", without waiting for this to emerge from millions of trials. After that, the survival strategy is simple: try to meet the friends unless there is an enemy close to the "Me", in which case the first thing to do is to flee. One of the most efficient tools that humans use to understand new situations is the ability to make analogies between past and present. For example, if the AI knows how to play the game Breakout, we expect that it will be able to transpose this ability to a (partially) analogous game, Pong. In particular, we hope to soon use mathematical tools to transpose a policy from one problem (for instance a game) to another. Such a theory is proposed by BID3 for PONDP (Partially Observable Non Deterministic Problems).The problem is that ideal situations where two problems have exactly the same number of states and isomorphic structures are very rare. Nevertheless, there are mathematical tools that can be used to identify non isomorphic structures like equivalence of categories in category theory BID16 5. The theory of category is a powerful tool in modern mathematics that appeared in the mid20th century in topological and geometrical contexts, after the mathematical logic, based on set theory. If mathematical logic was a great source of inspiration for analytic philosophy, category theory could inspire and support continental ideas. The association between category theory and continental philosophy is proposed by and we will follow this path in our work. In a very simplistic way, we could say that if analytic philosophy analyses situations, by distinguishing states (or objects), continental philosophy provides syntheses, setting higher new levels of being (Beings, concepts, types). Whereas in set-theory-based logic, identification is reduced to identity and bijective relations, category theory provides richer descriptions of objects by the introduction of arrows between objects, allowing new kind of identifications. The reader familiar with category theory may find obvious the rest of this paragraph. However, as most Machine Learning tools rely on set theory and not category theory, we try to illustrate below the added value of this mathematical framework. The reader is nevertheless referred to for a thorough and in-depth explanation of category theory. A category C is a collection of objects with arrows between some of them, so that we can compose them. It is something like an oriented graph. In C, an arrow a: A → B is called an isomorphism if it is invertible, that is if there is an arrow b: B → A, such that ba = Id B and ab = Id A. If it is the case we say that the object A and B are isomorphic. The relation of isomorphism defines an equivalence relation on the collection of objects of C. We note the quotient C/. If F: C → C is an equivalence of categories, it induces a real bijection F: (C/) → (C /) between the classes of isomorphic objects even if F is not bijective. We do not identify the objects (or the states) of two situations one-to-one, we identify the types (or classes) of these states. 
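Returning to the survival strategy stated earlier in this section (meet the friends unless an enemy is close to the "Me", in which case flee first), it reduces to a few lines of decision logic. The danger radius and the discrete action set in the sketch below are illustrative assumptions.

```python
# Sketch of the survival policy: flee the nearest enemy when it is too close,
# otherwise move toward the nearest friend.  Threshold and action set are assumed.
import math

ACTIONS = ["left", "right", "up", "down", "noop"]

def step_toward(me, target, away=False):
    dx, dy = target[0] - me[0], target[1] - me[1]
    if away:
        dx, dy = -dx, -dy
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

def survival_policy(me, friends, enemies, danger_radius=12.0):
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    if enemies:
        nearest_enemy = min(enemies, key=lambda e: dist(me, e))
        if dist(me, nearest_enemy) < danger_radius:
            return step_toward(me, nearest_enemy, away=True)   # flee first
    if friends:
        return step_toward(me, min(friends, key=lambda f: dist(me, f)))
    return "noop"

print(survival_policy(me=(10, 10), friends=[(30, 10)], enemies=[(14, 10)]))  # -> "left"
```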
This process can be very useful in the context of observable problems. Let's consider two nonempty sets (of states) C and C not necessary of the same cardinals. Let's suppose that we have two functions of observation f: C → O and f: C → O. Let's assume that they are surjective (if not, we can restrict O and O). The sets of observations O and O will define some types of states. For each o ∈ O, we say that all the states x ∈ C that are observed as o (f (x) = o), have the type T o. This defines a natural equivalence relation R f on the set C: ∀x, y ∈ C, x R f y if and only if f (x) = f (y).In terms of categories, we put an invertible arrow between two objects x and y of C iff x R f y (iff stands for if and only if). This makes C a category, where all arrows are invertible and such that C/ is exactly the quotient C/R f: the set of types of states of C. It is well known that the surjection f: C → O induces a bijectionf: (C/) → O between the set of type and the set of observations. This is obvious since the types as been defined by the observations.f is actually the inverse of DISPLAYFORM0 We do exactly the same with C and f.Suppose now, and this is very important, that the sets of observations O ad O have the same cardinality by the means of a bijection G: O → O. Thus, we can define a bijection F =f DISPLAYFORM1. This bijection between the sets of types can be induced by an equivalence of categories F: C → C defined as follows: for every x ∈ C, let's call o = f (x) and chose an arbitrary x ∈ f −1 (G(o)), and define F (x) = x. If C and C do not have the same cardinality, F has no chance to be bijective, but F is. F sends every state x to a state x of the "same" type (up to G). This is the way that we identify (not necessarily bijectively) C and C. Thus, if we have a strategy to play in C, we can transpose it in C thanks to F.The use of the theory of category in the ability to formalize a wide variety of games and situations. An illustration of this would be the ease with which a human player can switch from the Atari 2600 game Breakout to the very similar Pong. This ease can be transposed into the formalism of categories. However, even a much more concrete system such as the slot car described in section 2.1 and experimented on in section 4.1 can be transcribed into the formalism of categories 6.Let us define the following sets:• {C, C} is the set of categories (one per configuration of the track).• {N, N} is the number of sections per configuration of the track.• {s, s ∈ [1, N]}, {s, s ∈ [1, N]} are the possible locations of the car on the circuit. The location is obtained by counting the number of sections the car has passed in its current lap. We note (u, i) s (Resp. (u, i) s ) the voltage and current measured when the car crosses section s (Resp. s) Let 1 ≤ s 0 ≤ N (Resp. 1 ≤ s 0 ≤ N) be the current position of the car in configuration C (Resp. C).• Let k be a straight section and l a curve section of C.• The player influences (u, i) s with the controller, which leads to the policy π defined by. DISPLAYFORM2 We want to identify C and C, to transpose the policy π from C to C. The states of C are the locations s of the car in the circuit. Similarly, the states of C are the s. If N = N and s 0 = s 0, we can define a bijection between C and C and easily transpose π. But if N = N or s 0 = s 0 it is impossible to define such a bijection. Nevertheless, if we turn C and C into categories by defining some arrows, we will be able to define an equivalence of categories F: C → C. 
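Computationally, the identification F amounts to transferring the policy through observation types rather than through individual states. The toy sketch below anticipates the curve/straight observable f = h o g defined in the next paragraph; the voltage and current values and the threshold inside h are invented for illustration.

```python
# Sketch: transfer a section-wise policy from circuit C to a circuit C' of a
# different length by going through the type (curve or straight) of each section,
# i.e. pi'(s') = pi_by_type[h(observation(s'))].  Observations and threshold are made up.
def h(observation):
    """Map a (voltage, current) observation to a type: 1 = curve, 2 = straight."""
    voltage, current = observation
    return 1 if current > 0.8 else 2          # illustrative threshold

# Policy learned on circuit C, expressed per type rather than per state.
pi_by_type = {1: "low speed", 2: "high speed"}

# Unknown circuit C' with a different number of sections (no bijection with C).
observations_c_prime = [(9.0, 0.9), (12.0, 0.4), (12.0, 0.5), (8.5, 1.0), (11.5, 0.4)]
pi_prime = [pi_by_type[h(obs)] for obs in observations_c_prime]
print(pi_prime)   # ['low speed', 'high speed', 'high speed', 'low speed', 'high speed']
```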
To define these arrows, let's use the observable f defined on the states s of C as follows f: C → {1, 2} with f = h • g where g and h are such that g(s) = (u, i) s and h ((u, i) s ) = 1 if s is a curve, and 2 otherwise. We define f on the s of C the same way, i.e. f: C → {1, 2} with f = h • g and g and h playing the same roles as g and h on the states of C.We can put an invertible arrow between two states of C iff they have the same image by f, and an invertible arrow between two states of C iff they have the same image by f. We then define F: C → C by equation. DISPLAYFORM3 It is easy to see (if the exact definitions are known) that FORMULA3 is an equivalence of categories that allows to transfer π from C to C. F induces a bijection F between the sets of classes (or types of position): DISPLAYFORM4 We finally obtain F (C) = C (types of curves) and F (S) = S (types of straights).This example of systematic categorization and generalization proves that we do not work at the level of states but that type of states are considered instead. Results of this approach are presented for a cyberphysical system: a slot car circuit, and for a simulated system: Atari 2600 video games. The focus on the slot car experimental setup arose from the need to validate the approach on a cyberphysical system. With its imperfect actuators such as a brushed, direct-current (DC) motor, imperfect contacts such as metallic brushes on strips, it allowed us to evaluate the approach while dealing with a wide range of signals from a real system. Moreover, its wide availability and low cost allowed to duplicate the test-bed so as to widen the span of the validations. On the other hand, the configuration is simple, as there is only one entity with dynamic behavior: the "Me' is the slot car. The enemies are located at unknown curvilinear abscissas where a high velocity is detrimental to the "Me". The setup is based on a Scalextric MINI Challenge Set C1320T. We have replaced the mechanical lap counter by a digital omnipolar Hall effect sensor DVR5033 from Texas Instruments. The current is sensed via a 1 Ω resistor in series with the metal strips carrying the power. A spectrum analysis of both the voltage and the current showed components in these signals above 350 Hz. The antialiasing, second-order filter was designed with a cut-off frequency f c = 31 Hz. The Design-to-Cost approach, classic in high-volume manufacturing, led to the now unusual choice of a real-pole filter G(s) = 1/(sRC +1) 2, where s is the Laplace variable, approximated by a Cauer Resistor Capacitor (RC) ladder network BID0. The values R and C are chosen thanks to f c = 1/(2πRC). Moreover, the scaled values of the second RC network, R/d and Cd with {d ∈ R : d > 0}, are computed to meet the specifications of the maximum magnitude error e(d) between G(s) and G a (s), the transfer function of the Cauer RC ladder defined by G a (s) = 1/ (sRC) 2 + s(d + 2)RC + 1. The value of e(d) is given by equation. We chose d = 0.1 to have less than 0.5 dB error, with no sensible impact on the later computations. An implementation with two identical RC sections (i.e. d = 1) would lead to e = 3.5 dB, which would degrade the overall performance. Both the voltage and the current are filtered by such ladder networks before being sampled at f s = 100 Hz:as there are no components in the power spectrum between f s /2 and 350 Hz, there is no aliasing. 
DISPLAYFORM0 The algorithms are written in C language and run in real-time on an Arduino Mega 2560 which has 8192 bytes of Random Access Memory (RAM). The analog signals are sampled and quantized by the integrated analog to digital converter in the microcontroller, with the sampling period defined by t s = 1/f s, and the sampling time being kt s with k ∈ N.The bijective case for the slot car relies on an three-step imitation procedure:1. A human player first drives the car for n laps, with n = 3 in our experiments.2. The K sampled voltages v(kt s) and currents i(kt s) of the shortest lap (with corresponding t best lap time) are stored in RAM for 0 ≤ k < K, to be replayed by the AI.3. An optimization method (Newton) minimizes the difference between the AI's lap time and t best by scaling the recorded samples v(kt s) used to generate the Pulse-Width Modulation (PWM) control signal. The analogy-based approach relies on two modules: the reward module, and the decision module described below. As in traditional RL, our approach relies on a reward from the environment. This reward is based on three variables: the lap time (measured directly with the lap counter), the presence of the car on the track (binary information), and the fact that the car is moving (also binary information). The algorithm that we designed to provide this reward constantly monitors the car so as to detect that it did not crash (i.e. that it did not leave the track when the velocity was too high) or that it did not stop (when the current was too low to move the car). Both detectors are based on k-nearest neighbors algorithms (k-NN) applied to the voltage and the current. They are implemented as boolean tests on the signals after comparison with some thresholds, to speed up the execution of the algorithm on the microcontroller. As an illustration, a crash can be detected when the voltage is high and the current is near zero: it means that there is no more load (no DC motor) in contact with the strips, even though the voltage is still applied. Using this reward model, the AI can successfully pilot the car on previously unencountered tracks. It does not replay scaled samples of any human driving. The only information reused by the algorithm is the safe speed: it does not trigger the "car crash" reward signal, yet it maintains the car in motion, thus not triggering the "car stop" reward signal. As the circuit is unknown, the bijective case cannot be used: there is no bijection between circuits. The algorithm only relies on the analogy-based approach and transposes knowledge previously acquired for a different circuit configuration thanks to equation FORMULA2. This knowledge -a safe speed for a given s -is transposed via non-bijective analogies presented in 3.3 with the function h((u, i) s ) evaluated with a classifier. Any classifier can be used, including unsupervised learning methods, as the two classes are clustered and separated. For simplicity, we used a k-NN.In practice, the analogy-based approach starts on the unknown circuit with the safe speed. The algorithm infers in real time, from only current and voltage measurements, whether the car is in a configuration that we humans call either curve or straight. The algorithm then chooses the best control signal based on its previous experiences (best in order to reach the goal of decreasing lap time while staying on the track). 
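For illustration, a hedged sketch of the crash and stop detectors of the reward module described above is given below. The paper implements them as boolean threshold tests (distilled from k-NN decisions on voltage and current) in C on the microcontroller; the Python version, the threshold values, and the reward shaping here are placeholders, not the real implementation.

```python
V_HIGH = 8.0   # volts: supply still applied while no load is drawing current (assumed value)
I_LOW  = 0.05  # amps:  too little motor current to keep the car moving        (assumed value)

def car_crashed(u, i):
    # High voltage but (almost) no current: the motor has left the strips.
    return u > V_HIGH and i < I_LOW

def car_stopped(u, i):
    # Current too low to move the car while the supply voltage is normal.
    return i < I_LOW and u <= V_HIGH

def reward(lap_time, u, i):
    if car_crashed(u, i) or car_stopped(u, i):
        return -1.0              # strongly penalize losing the car
    return 1.0 / lap_time        # otherwise, faster laps earn more reward (illustrative shaping)
```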
Even though we use the terms "straight" and "curve" in our explanation, the algorithm simply classifies current and voltage to choose a control signal so as to stay on the track while decreasing the lap time. The algorithm uses this past knowledge (the control signal for each class) in a previously unencountered situation. In this way, it generalizes its strategy and adapts to a radically different case: circuit 2 differs from circuit 1, and a replay of a recorded strategy learned on one circuit or scaled recorded samples of the human driving would fail on the second circuit. The experiments described in this article are conducted on two circuit configurations of different complexity presented in FIG0. The of our experiments for the bijective and the analogy cases are summarized in table 1. Values are tabulated as the mean and the standard deviation from the mean, except for the best lap which is the shortest lap time among all laps. We noticed that the first of eight consecutive laps is always the slowest one for the eight human subjects. The AI, which starts with no previous information, only relies on a safe speed as described in 4.1.1 using a constant PWM of 39% of the full speed. The analogy-based AI, which does not replay any recorded samples of a human driving, improves lap times in less than ten laps, even on an unknown track. On the longest and most complex circuit configuration (circuit 2), it almost ranks best, as tabulated on the line "Final lap". While the final human lap time is lower than the final AI lap time (2.29 s vs 2.52 s for circuit 1, 3.08 s vs 3.13 s for circuit 2), the human unsurprisingly exhibits a higher standard deviation from the mean (140 ms vs 80 ms for circuit 1, 540 ms vs 20 ms for circuit 2). Future improvements of the AI on the unknown track will include an optimization of the two speeds transposed by the function h((u, i) s ): only a safe speed was used during our experiments, leading to no car crash for the AI, contrary to some laps by the humans and thus not taken into account. Lastly, the bijective strategy -imitating the best human lap -also leads to the best lap time. However, contrary to the solution with analogies, it only works for an identical circuit. This means that while the best bijective (imitation) lap time (2.65 s) for circuit 2 is lower, thus better than the final lap time for the analogy (adaptive speed) AI (3.13 s), this strategy can only be used on circuit 2 and cannot lead to a generalization. It is only mentioned here as it gives an empirical lower bound for the lap time on a given circuit. To summarize the slot car case, we implemented the theoretical method exposed in section 3.3 that allows the AI to reuse previously acquired knowledge on a new circuit where a replay of recorded samples of a human driving would lead to an immediate car crash. Even though there is no bijection between the different circuits, in practice this theory allows to generalize knowledge to any different circuit (within the limits imposed by the size of the available RAM). While the slot car allowed us to validate the approach on real analog signals in a simple configuration, the ALE allowed us to validate the approach on more complex configurations while dealing with signals already sampled coming from the emulator. The concepts of entities with "Me" and life impulse introduced in 3.2 are also used to play Atari 2600 games. 
Our proof-of-concept is based on the detection of such entities thanks to image processing: Sobel operator (center image on figure 2) and bounding-box detection (right image on figure 2). It relies on the library. The entity "Me" is found using system identification. Signals such as impulses and pseudorandom sequences BID13 are sent to ALE to first detect the entities affected by these signals, then to build a dynamic model of the "Me". One or a few entities are controllable: they are the "Me". Their shapes can change during the gameplay, such as the paddle in Breakout, thus the possibility to identify different entities as the "Me". These measurements also update the probability functions p(E, F) for entities E and F that the contact between these entities changes the score, in a way similar to the reward function in RL. From these functions p, friends and enemies are inferred, leading to a basic survival strategy outlined in 3.2.The tests are carried out with the settings from BID17: the AI plays for a maximum of 5 minutes. We choose to use the DQN as the reference: the reason being that this publication is one of the most cited in relation to Atari 2600 games, and is the de-facto benchmark to which one must refer. Although we aim to control cyberphysical systems, we needed to validate the versatility of our approach by first testing it on this standard. We fully replicated the setup using code made publicly available by the authors, and we obtained the same as the publication. We were thus able to extract the score for the DQN for a low number of training frames, so as to compare with our approach. TAB2 for a training time of 10 000 frames (less than 3 minutes), which is 20 000 times less frames than the average training standard reported in BID17. While the DQN achieves better with millions of training frames, our AI reaches decent scores with comparatively much fewer frames, as plotted for Breakout on figure 3. A preliminary analysis of what really occurs while our algorithm learns is as follows: during the first thousand frames, the "me" is not yet correctly identified, as the digits, the ball and the paddle move randomly when controlled by the pseudorandom sequence. Once the identification has converged to the only system that the AI directly controls -the paddle, neither the ball nor the digits -, the algorithm looks for friends and enemies. It also detects that the ball is a friend, as it sometimes increases the score (when it breaks a brick). The best scores are in the range of 200 points which, after analysis, corresponds to partially destroyed rows of bricks. It never destroys all the bricks, as it sometimes misses the ball, especially when it looses the "me", for instance when the paddle's size changes or when the paddle disappears according to the basic image processing algorithm. Moreover, we noticed that the movement of the "me" under control of the algorithm sometimes never reaches a steady-state: it oscillates by a few pixels at a frequency of 5.4 Hz. Our explanation from a control perspective is as follows: the "me" can be approximated by a second-order system, and the control strategy is almost equivalent to a proportional controller. This, in the context of Linear-Time-Invariant (LTI) systems, would already explain the oscillations. Moreover, the strong non-linearities present both in the control input and the non-LTI "me" also explain in part these oscillations. 
The input being quantized to only three values (left, nothing, right), the closed-loop system generates a signal similar to limit-cycles. These oscillations, in turn, are responsible for many of the balls missed by the algorithm. To summarize the on the Atari games, we implemented a few of the concepts presented in 3.1: the notion of entities rather than samples or pixels, and the "me" with the behavior introduced in 3.2. This led to a learning time of a few thousand frames to get a better than human score on Breakout, however it still does not match the best score reached by the DQN after millions of frames on Pong. Continental philosophy lead us to formalize a mathematical concept to control an agent evolving in a world, whether it is simulated or real. The power of this framework was illustrated by the theoretical example of the slot car on unknown circuits. Results from experiments with a real slot car, using real analog signals confirmed our expectations, even though it only used a basic survival approach. Moreover, the same basic survival strategy was applied to two Atari 2600 games and showed the same trend: even though not as skilled as, for instance, DQN-based agents trained with two hundred million frames, our AI reached in less than ten thousand frames scores that DQN met after learning with a few million frames. The next steps are to apply the transposition properties to the Atari games, as we did for the slot car, which should further decrease the learning time when playing a new game. Moreover, going beyond the basic survival strategy will be mandatory to reach higher scores: approaches based on Monte-Carlo Tree Search will be investigated. | Continental-philosophy-inspired approach to learn with few data. | 1,055 | scitldr |
Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales. Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to audio generation. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. WaveGAN is capable of synthesizing one second slices of audio waveforms with global coherence, suitable for sound effect generation. Our experiments demonstrate that—without labels—WaveGAN learns to produce intelligible words when trained on a small-vocabulary speech dataset, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. We compare WaveGAN to a method which applies GANs designed for image generation on image-like audio feature representations, finding both approaches to be promising. Synthesizing audio for specific domains has many practical applications in creative sound design for music and film. Musicians and Foley artists scour large databases of sound effects to find particular audio recordings suitable for specific scenarios. This strategy is painstaking and may in a negative outcome if the ideal sound effect does not exist in the library. A better approach might allow a sound artist to explore a compact latent space of audio, taking broad steps to find the types of sounds they are looking for (e.g. footsteps) and making small adjustments to latent variables to finetune (e.g. a large boot lands on a gravel path). However, audio signals have high temporal resolution, and strategies that learn such a representation must perform effectively in high dimensions. Generative Adversarial Networks (GANs) are one such unsupervised strategy for mapping low-dimensional latent vectors to high-dimensional data. The potential advantages of GAN-based approaches to audio synthesis are numerous. Firstly, GANs could be useful for data augmentation in data-hungry speech recognition systems. Secondly, GANs could enable rapid and straightforward sampling of large amounts of audio. Furthermore, while the usefulness of generating static images with GANs is arguable, there are many applications (e.g. Foley) for which generating sound effects is immediately useful. But despite their increasing fidelity at synthesizing images (; BID2), GANs have yet to be demonstrated capable of synthesizing audio in an unsupervised setting. A naïve solution for applying image-generating GANs to audio would be to operate them on imagelike spectrograms, i.e., time-frequency representations of audio. This practice of bootstrapping image recognition algorithms for audio tasks is commonplace in the discriminative setting . In the generative setting however, this approach is problematic as the most perceptually-informed spectrograms are non-invertible, and hence cannot be listened to without lossy estimations or learned inversion models .Recent work (van den ;) has shown that neural networks can be trained with autoregression to operate on raw audio. Such approaches are attractive as they dispense with engineered feature representations. However, unlike with GANs, the autoregressive setting in slow generation as output audio samples must be fed back into the model one at a time. 
In this work, we investigate both waveform and spectrogram strategies for generating one-second slices of audio with GANs.1 For our spectrogram approach (SpecGAN), we first design a spectrogram representation that allows for approximate inversion, and bootstrap the two-dimensional deep convolutional GAN (DCGAN) method to operate on these spectrograms. In WaveGAN, our waveform approach, we flatten the DCGAN architecture to operate in one dimension, ing in a model with the same number of parameters and numerical operations as its twodimensional analog. With WaveGAN, we provide both a starting point for practical audio synthesis with GANs and a recipe for modifying other image generation methods to operate on waveforms. We primarily envisage our method being applied to the generation of short sound effects suitable for use in music and film. For example, we trained a WaveGAN on drums, ing in a procedural drum machine designed to assist electronic musicians (demo chrisdonahue.com/wavegan). However, human evaluation for such domain-specific tasks would require expert listeners. Therefore, we also consider a speech benchmark, facilitating straightforward assessment by human annotators. Specifically, we explore a task where success can easily be judged by any English speaker: generating examples of spoken digits "zero" through "nine".Though our evaluation focuses on a speech generation task, we note that it is not our goal to develop a text-to-speech synthesizer. Instead, our investigation concerns whether unsupervised strategies can learn global structure (e.g. words in speech data) implicit in high-dimensional audio signals without conditioning. Our experiments on speech demonstrate that both WaveGAN and SpecGAN can generate spoken digits that are intelligible to humans. On criteria of sound quality and speaker diversity, human judges indicate a preference for the audio generated by WaveGAN compared to that from SpecGAN. GANs learn mappings from low-dimensional latent vectors z ∈ Z, i.i.d. samples from known prior P Z, to points in the space of natural data X. In their original formulation , a generator G: Z → X is pitted against a discriminator D: X → in a two-player minimax game. G is trained to minimize the following value function, while D is trained to maximize it: DISPLAYFORM0 In other words, D is trained to determine if an example is real or fake, and G is trained to fool the discriminator into thinking its output is real. demonstrate that their proposed training algorithm for Equation 1 equates to minimizing the Jensen-Shannon divergence between P X, the data distribution, and P G, the implicit distribution of the generator when z ∼ P Z. In this original formulation, GANs are notoriously difficult to train, and prone to catastrophic failure cases. Instead of Jensen-Shannon divergence, BID1 suggest minimizing the smoother Wasserstein-1 distance between generated and data distributions DISPLAYFORM1 where f L ≤ 1: X → R is the family of functions that are 1-Lipschitz. To minimize Wasserstein distance, they suggest a GAN training algorithm (WGAN), similar to that of , for the following value function: DISPLAYFORM2 Figure 1: First eight principal components for 5x5 patches from natural images (left) versus those of length-25 audio slices from speech (right). Periodic patterns are unusual in natural images but a fundamental structure in audio. Figure 2: Depiction of the transposed convolution operation for the first layers of the DCGAN (left) and WaveGAN (right) generators. 
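For reference, the three value functions whose placeholders appear in the GAN background above are most likely the standard formulations from Goodfellow et al. and Arjovsky et al.; reconstructed forms are given below.

```latex
% Reconstruction of the equations the DISPLAYFORM placeholders above most likely refer to:
% the original GAN value function, the Wasserstein-1 distance, and the WGAN value function.
\begin{align}
V(D, G) &= \mathbb{E}_{x \sim P_X}\left[\log D(x)\right]
         + \mathbb{E}_{z \sim P_Z}\left[\log\left(1 - D(G(z))\right)\right] \\
W(P_X, P_G) &= \sup_{\lVert f \rVert_L \leq 1}
        \mathbb{E}_{x \sim P_X}\left[f(x)\right] - \mathbb{E}_{x \sim P_G}\left[f(x)\right] \\
V_{\mathrm{WGAN}}(D_w, G) &= \mathbb{E}_{x \sim P_X}\left[D_w(x)\right]
        - \mathbb{E}_{z \sim P_Z}\left[D_w(G(z))\right]
\end{align}
```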
DCGAN uses small (5x5), twodimensional filters while WaveGAN uses longer (length-25), one-dimensional filters and a larger upsampling factor. Both strategies have the same number of parameters and numerical operations. We motivate our design choices for WaveGAN by first highlighting the different types of structure found in audio versus images. One way to illustrate the differences between audio and images is by examining the axes along which these types of data vary most substantially, i.e. by principal component analysis. In Figure 1, we show the first eight principal components for patches from natural images and slices from speech. While the principal components of images generally capture intensity, gradient, and edge characteristics, those from audio form a periodic basis that decompose the audio into constituent frequency bands. In general, natural audio signals are more likely to exhibit periodicity than natural images. As a consequence, correlations across large windows are commonplace in audio. For example, in a waveform sampled at 16 kHz, a 440 Hz sinusoid (the musical note A4) takes over 36 samples to complete a single cycle. This suggests that filters with larger receptive fields are needed to process raw audio. This same intuition motivated van den in their design of WaveNet, which uses dilated convolutions to exponentially increase the model's effective receptive field with linear increase in layer depth. We base our WaveGAN architecture off of DCGAN which popularized usage of GANs for image synthesis. The DCGAN generator uses the transposed convolution operation (Figure 2) to iteratively upsample low-resolution feature maps into a high-resolution image. Motivated by our above discussion, we modify this transposed convolution operation to widen its receptive field. Specifically, we use longer one-dimensional filters of length 25 instead of two-dimensional filters of size 5x5, and we upsample by a factor of 4 instead of 2 at each layer (Figure 2). We modify the discriminator in a similar way, using length-25 filters in one dimension and increasing stride from 2 to 4. These changes in WaveGAN having the same number of parameters, numerical operations, and output dimensionality as DCGAN.Because DCGAN outputs 64x64 pixel images -equivalent to just 4096 audio samples -we add one additional layer to the model ing in 16384 samples, slightly more than one second of audio at 16 kHz. This length is already sufficient for certain sound domains (e.g. sound effects, voice commands), and future work adapting megapixel image generation techniques could expand the output length to more than a minute. We requantize the real data from its 16-bit integer representation (linear pulse code modulation) to 32-bit floating point, and our generator similarly outputs floating point waveforms. A complete description of our model is in Appendix D.In summary, we outline our modifications to the DCGAN method which in WaveGAN. This straightforward recipe already produces reasonable audio, and further contributions outlined below and in Appendix A serve to refine .1. Flatten 2D convolutions into 1D (e.g. 5x5 2D convolution becomes length-25 1D).2. Increase the stride factor for all convolutions (e.g. stride 2x2 becomes stride 4).3. Remove batch normalization from the generator and discriminator.4. Train using the WGAN-GP strategy. 
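A minimal PyTorch sketch of the resulting generator follows the recipe above: length-25 filters, stride-4 transposed convolutions, no batch normalization, and one extra layer so the output reaches 16384 samples. The model-size multiplier d = 64 and the 100-dimensional latent vector are assumptions taken from the appendix description; this is a sketch of the architecture, not the reference implementation.

```python
import torch
import torch.nn as nn

class WaveGANGenerator(nn.Module):
    def __init__(self, d=64, z_dim=100):
        super().__init__()
        self.d = d
        self.fc = nn.Linear(z_dim, 16 * 16 * d)           # -> reshape to (16d, 16)

        def up(cin, cout):
            # output length = (L-1)*4 - 2*11 + 25 + 1 = 4L, i.e. upsample by 4 per layer
            return nn.ConvTranspose1d(cin, cout, kernel_size=25, stride=4,
                                      padding=11, output_padding=1)

        self.deconv = nn.ModuleList([
            up(16 * d, 8 * d),   # length 16    -> 64
            up(8 * d, 4 * d),    # length 64    -> 256
            up(4 * d, 2 * d),    # length 256   -> 1024
            up(2 * d, d),        # length 1024  -> 4096
            up(d, 1),            # length 4096  -> 16384 (about one second at 16 kHz)
        ])

    def forward(self, z):
        x = torch.relu(self.fc(z).view(-1, 16 * self.d, 16))
        for i, layer in enumerate(self.deconv):
            x = layer(x)
            x = torch.tanh(x) if i == len(self.deconv) - 1 else torch.relu(x)
        return x                                           # (batch, 1, 16384), in [-1, 1]

waveform = WaveGANGenerator()(torch.randn(8, 100))         # 8 random one-second clips
```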
Phase shuffle n=1 DISPLAYFORM0 Figure 3: At each layer of the Wave-GAN discriminator, the phase shuffle operation perturbs the phase of each feature map by Uniform ∼ [−n, n] samples, filling in the missing samples (dashed outlines) by reflection. Here we depict all possible outcomes for a layer with four feature maps (n = 1).Generative image models that upsample by transposed convolution (such as DCGAN) are known to produce characteristic "checkerboard" artifacts in images . Periodic patterns are less common in images (Section 3.1), and thus the discriminator can learn to reject images that contain them. For audio, analogous artifacts are perceived as pitched noise which may overlap with frequencies commonplace in the real data, making the discriminator's objective more challenging. However, the artifact frequencies will always occur at a particular phase, allowing the discriminator to learn a trivial policy to reject generated examples. This may inhibit the overall optimization problem. To prevent the discriminator from learning such a solution, we propose the phase shuffle operation with hyperparameter n. Phase shuffle randomly perturbs the phase of each layer's activations by −n to n samples before input to the next layer (Figure 3). We apply phase shuffle only to the discriminator, as the latent vector already provides the generator a mechanism to manipulate the phase of a ant waveform. Intuitively speaking, phase shuffle makes the discriminator's job more challenging by requiring invariance to the phase of the input waveform. While a minority of recent research in discriminative audio classification tasks has used raw audio input , most of these approaches operate on spectrogram representations of audio. A generative model may also benefit from operating in such a time-frequency space. However, commonly-used representations in the discriminative setting are uninvertible. With SpecGAN, our frequency-domain audio generation model, we design a spectrogram representation that is both well-suited to GANs designed for image generation and can be approximately inverted. Additionally, to facilitate direct comparison, our representation is designed to use the same dimensionality per unit of time as WaveGAN (16384 samples yield a 128x128 spectrogram).To process audio into suitable spectrograms, we first perform the short-time Fourier transform with 16 ms windows and 8 ms stride, ing in 128 frequency bins 2 linearly spaced from 0 to 8 kHz. We take the magnitude of the ant spectra and scale amplitude values logarithmically to better-align with human perception. We then normalize each frequency bin to have zero mean and unit variance. This type of preprocessing is commonplace in audio classification, but produce spectrograms with unbounded values-a departure from image representations. We therefore clip the spectra to 3 standard deviations and rescale to [−1, 1]. Through an informal listening test, we determined that this clipping strategy did not produce an audible difference during inversion. Once our dataset has been processed into this format, we operate the DCGAN algorithm on the ant spectra. To render the ant generated spectrograms as waveforms, we first invert the steps of spectrogram preprocessing described above, ing in linear-amplitude magnitude spectra. We then employ the iterative Griffin-Lim algorithm with 16 iterations to estimate phase and produce 16384 audio samples. To facilitate human evaluation, our experimentation focuses on the Speech Commands Dataset . 
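Returning to the phase shuffle operation of Section 3.3, a hedged PyTorch sketch is given below: each example's feature maps are shifted by an offset drawn uniformly from [-n, n] samples and the samples exposed at the boundary are filled in by reflection, as in Figure 3. Drawing one offset per example (rather than per channel or per batch) is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def phase_shuffle(x, n):
    """x: discriminator activations of shape (batch, channels, length)."""
    if n == 0:
        return x
    out = torch.empty_like(x)
    shifts = torch.randint(-n, n + 1, (x.size(0),))
    for b, k in enumerate(shifts.tolist()):
        if k == 0:
            out[b] = x[b]
        elif k > 0:
            # shift right by k; reflect to fill the k missing samples on the left
            out[b:b + 1] = F.pad(x[b:b + 1, :, :-k], (k, 0), mode="reflect")
        else:
            # shift left by |k|; reflect to fill the missing samples on the right
            out[b:b + 1] = F.pad(x[b:b + 1, :, -k:], (0, -k), mode="reflect")
    return out
```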
This dataset consists of many speakers recording individual words in uncontrolled recording conditions. We explore a subset consisting of the spoken digits "zero" through "nine" and refer to this subset as the Speech Commands Zero Through Nine (SC09) dataset. While this dataset is intentionally reminiscent of the popular MNIST dataset of written digits, we note that examples from SC09 are much higher dimensional (R 16000) than examples from MNIST (R 28×28=784).These ten words encompass many phonemes and two consist of multiple syllables. Each recording is one second in length, and we do not attempt to align the words in time. There are 1850 utterances of each word in the training set, ing in 5.3 hours of speech. The heterogeneity of alignments, speakers, and recording conditions make this a challenging dataset for generative modeling. Our baseline configuration for WaveGAN excludes phase shuffle. We compare this to the performance of WaveGAN with phase shuffle (n ∈ {2, 4}) and a variant of WaveGAN which uses nearest-neighbor upsampling rather than transposed convolution . Hoping to reduce noisy artifacts, we also experiment with adding a wide (length-512) post-processing filter to the output of the generator and learning its parameters with the rest of the generator variables (details in Appendix A.1). We use the WGAN-GP algorithm for all experiments, find-ing it to produce reasonable where others (; ; BID1 failed. We compare the performance of these configurations to that of SpecGAN.We also perform experiments on four other datasets with different characteristics FIG0):1. Drum sound effects (0.7 hours): Drum samples for kicks, snares, toms, and cymbals 2. Bird vocalizations (12.2 hours): In-the-wild recordings of many species 3. Piano (0.3 hours): Professional performer playing a variety of Bach compositions 4. Large vocab speech (TIMIT) (2.4 hours): Multiple speakers, clean BID12 We train our networks using batches of size 64 on a single NVIDIA P100 GPU. During our quantitative evaluation of SC09 (discussed below), our WaveGAN networks converge by their early stopping criteria (inception score) within four days (200k iterations, around 3500 epochs), and produce speech-like audio within the first hour of training. Our SpecGAN networks converge more quickly, within two days (around 1750 epochs). On the other four datasets, we train WaveGAN for 200k iterations representing nearly 1500 epochs for the largest dataset. Unlike with autoregressive methods (van den ;), generation with WaveGAN is fully parallel and can produce an hour of audio in less than two seconds. We list all hyperparameters in Appendix E. Evaluation of generative models is a fraught topic. demonstrate that quantitative measures of sample quality are poorly correlated with each other and human judgement. Accordingly, we use several quantitative evaluation metrics for hyperparameter validation and discussion, and also evaluate our most promising models with human judges.6.1 propose the inception score, which uses a pre-trained Inception classifier to measure both the diversity and semantic discriminability of generated images, finding that the measure correlates well with human judgement. Given model scores P (y | x) with marginal P (y), the inception score is defined as exp(E x D KL (P (y | x)||P (y))), and is estimated over a large number of samples (e.g. 50k). For n classes, this measure ranges from 1 to n, and is maximized when the model is completely confident about each prediction and predicts each label equally often. 
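As a concrete reference, the inception score defined above can be computed from the pre-trained classifier's posteriors over a large set of generated clips as in the following numpy sketch.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: array of shape (num_examples, num_classes); each row sums to 1.

    Computes exp(E_x D_KL(P(y|x) || P(y))) over the given examples.
    """
    marginal = probs.mean(axis=0, keepdims=True)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

# e.g. probs = classifier.predict(generated_clips) for ~50k generated examples
```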
We will use this measure as our primary quantitative evaluation method and early stopping criteria. To measure inception score, we train an audio classifier on SC09. Our classifier first computes a short-time Fourier transform of the input audio with 64 ms windows and 8 ms stride. This representation is projected to 128 frequency bins equally spaced on the Mel scale from 40 Hz to 7800 Hz. Amplitudes are scaled logarithmically and normalized so that each bin has zero mean and unit variance. We process this perceptually-informed representation with four layers of convolution and pooling, projecting the to a softmax layer with 10 classes. We perform early stopping on the minimum negative log-likelihood of the validation set; the ant model achieves 93% accuracy on the test set. Because this classifier observes spectrograms, our spectrogramgenerating models may have a representational advantage over our waveform-generating models. Inception score has two trivial failure cases in which a poor generative model can achieve a high score. Firstly, a generative model that outputs a single example of each class with uniform probability will be assigned a high score. Secondly, a generative model that overfits the training data will achieve a high score simply by outputting examples on which the classifier was trained. We use two indicators metrics to determine if a high inception score has been caused by either of these two undesirable cases. Our first indicator, |D| self, measures the average Euclidean distance of a set of 1k examples to their nearest neighbor within the set (other than itself). A higher |D| self indicates higher diversity amongst samples. Because measuring Euclidean distance in time-domain audio poorly represents human perception, we evaluate distances in the same frequency-domain representation as our classifier from Section 6.1.Our second indicator, |D| train, measures the average Euclidean distance of 1k examples to their nearest neighbor in the real training data. If the generative model simply produces examples from the training set, this measure will be 0. We report |D| train and |D| self relative to those of the test set. While inception score is a useful metric for hyperparameter validation, our ultimate goal is to produce examples that are intelligible to humans. To this end, we measure the ability of human annotators on Amazon Mechanical Turk to label the generated audio. Using our best WaveGAN and SpecGAN models as measured by inception score, we generate random examples until we have 300 for each digit (as labeled by our classifier from Section 6.1)-3000 total. In batches of ten random examples, we ask annotators to label which digit they perceive in each example, and compute their accuracy with respect to the classifier's labels (random accuracy would be 10%). After each batch, annotators assign subjective values of 1-5 for criteria of sound quality, ease of intelligibility, and speaker diversity. We report accuracy (n = 3000) and mean opinion scores (n = 300) in TAB0. Results for our evaluation appear in TAB0. We also evaluate our metrics on the real training data, the real test data, and a version of SC09 generated by a parametric speech synthesizer BID4. We also compare to SampleRNN and two public implementations of WaveNet (van den), but neither method produced competitive (details in Appendix B), and we excluded them from further evaluation. 
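For completeness, the two indicator metrics introduced above can be computed as in the following numpy sketch, assuming the examples have already been mapped into the classifier's frequency-domain representation and flattened to vectors.

```python
import numpy as np

def pairwise_dists(a, b):
    # squared-norm expansion avoids materializing a (len(a), len(b), dim) tensor
    d2 = (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2.0 * a @ b.T
    return np.sqrt(np.maximum(d2, 0.0))

def d_self(feats):
    d = pairwise_dists(feats, feats)
    np.fill_diagonal(d, np.inf)              # exclude each example itself
    return d.min(axis=1).mean()

def d_train(feats, train_feats):
    return pairwise_dists(feats, train_feats).min(axis=1).mean()

# Both values are reported relative to the same quantities computed on the test set.
```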
These autoregressive models have not previously been examined on small-vocabulary speech data, and their success at generating full words has only been demonstrated when conditioning on rich linguistic features. Sound examples for all experiments can be found at chrisdonahue.com/wavegan_examples.While the maximum inception score for SC09 is 10, any score higher than the test set score of 8 should be seen as evidence that a generative model has overfit. Our best WaveGAN model uses phase shuffle with n = 2 and achieves an inception score of 4.7. To compare the effect of phase shuffle to other common regularizers, we also tried using 50% dropout in the discriminator's activations, which ed in a lower score. Phase shuffle decreased the inception score of SpecGAN, possibly because the operation has an exaggerated effect when applied to the compact temporal axis of spectrograms. Most experiments produced |D| self (diversity) values higher than that of the test data, and all experiments produced |D| train (distance from training data) values higher than that of the test data. While these measures indicate that our generative models produce examples with statistics that deviate from those of the real data, neither metric indicates that the models achieve high inception scores by the trivial solutions outlined in Section 6.2.Compared to examples from WaveGAN, examples from SpecGAN achieve higher inception score (6.0 vs. 4.7) and are labeled more accurately by humans (66% vs. 58%). However, on subjective criteria of sound quality and speaker diversity, humans indicate a preference for examples from WaveGAN. It appears that SpecGAN might better capture the variance in the underlying data compared to WaveGAN, but its success is compromised by sound quality issues when its spectrograms are inverted to audio. It is possible that the poor qualitative ratings for examples from SpecGAN are primarily caused by the lossy Griffin-Lim inversion and not the generative procedure itself. We see promise in both waveform and spectrogram audio generation with GANs; our study does not suggest a decisive winner. For a more thorough investigation of spectrogram generation methods, we point to follow-up work BID10.Finally, we train WaveGAN and SpecGAN models on the four other domains listed in Section 5. Somewhat surprisingly, we find that the frequency-domain spectra produced by WaveGAN (a timedomain method) are visually more consistent with the training data (e.g. in terms of sharpness) than those produced by SpecGAN FIG0 Much of the work within generative modeling of audio is within the context of text-to-speech. Textto-speech systems are primarily either concatenative or parametric. In concatenative systems, audio is generated by sequencing small, prerecorded portions of speech from a phonetically-indexed dictionary . Parametric systems map text to salient parameters of speech, which are then synthesized by a vocoder BID8; see for a comprehensive review. Some of these systems use learning-based approaches such as a hidden Markov models , and separately-trained neural networks pipelines to estimate speech parameters. Recently, several researchers have investigated parametric speech synthesis with end-to-end neural network approaches that learn to produce vocoder features directly from text or phonetic embeddings BID0;;; ). These vocoder features are synthesized to raw audio using off-the-shelf methods such as WORLD and Griffin-Lim , or trained neural vocoders (; ;). 
All of these methods are supervised: they are trained to map linguistic features to audio outputs. Several approaches have explored unsupervised generation of raw audio. van den propose WaveNet, a convolutional model which learns to predict raw audio samples by autoregressive modeling. WaveNets conditioned on rich linguistic features have widely been deployed in textto-speech systems, though they have not been demonstrated capable of generating cohesive words in the unconditional setting. BID9 BID7 all use GANs in combination with unstructured losses to map spectrograms in one domain to spectrograms in another. BID5 use GANs to map musical performance images into spectrograms. We present WaveGAN, the first application of GANs to unsupervised audio generation. WaveGAN is fully parallelizable and can generate hours of audio in only a few seconds. In its current form, WaveGAN can be used for creative sound design in multimedia production. In our future work we plan to extend WaveGAN to operate on variable-length audio and also explore a variety of label conditioning strategies. By providing a template for modifying image generation models to operate on audio, we hope that this work catalyzes future investigation of GANs for audio synthesis. Post-processing filters reject frequencies corresponding to noise byproducts created by the generative procedure (top). The filter for speech boosts signal in prominent speech bands, while the filter for bird vocalizations (which are more uniformly-distributed in frequency) simply reduces noise presence. Generative models that upsample by transposed convolution are known to produce characteristic "checkerboard" artifacts in images , artifacts with particular spatial periodicities. The discriminator of image-generating GANs can learn to reject images with these artifacts because they are uncommon in real data (as discussed in Section 3.1). However, in the audio domain, the discriminator might not have such luxury as these artifacts correspond to frequencies which might rightfully appear in the real data. While checkerboard artifacts are an annoyance in image generation, they can be devastating to audio generation . While our eye may perceive these types of periodic distortions as an intrusive texture, our ear perceives them as an abrasive tone. To characterize these artifacts in WaveGAN, we measure its impulse response by randomly initializing it 1000 times and passing unit impulses to its first convolutional layer. In FIG2, we plot the average of these responses in the frequency domain. The response has sharp peaks at linear multiples of the sample rates of each convolutional layer (250 Hz, 1 kHz, 4 kHz, etc.). This is in agreement with our informal observation of from WaveGAN, which often have a pitched noise close to the musical note B (247 × 2 n Hz).Below, we will discuss strategies we designed to mitigate these artifacts in WaveGAN. We experiment with adding a post-processing filter to the generator, giving WaveGAN a simple mechanism to filter out undesirable frequencies created by the generative process. This filter has a long window (512 samples) allowing it to represent intricate transfer functions, and the weights of the filter are learned as part of the generator's parameters. In FIG2, we compare the postprocessing filters that WaveGAN learns for human speech and bird vocalizations. 
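A rough sketch of the impulse-response measurement described at the start of this appendix is given below. It reuses the WaveGANGenerator sketch from earlier, feeds a unit impulse into the transposed-convolution stack, and averages the magnitude spectra over many random initializations; zeroing the biases and omitting the nonlinearities are simplifications intended to expose only the upsampling filters' response.

```python
import numpy as np
import torch

def average_impulse_response(num_inits=1000, fs=16000):
    spectra = []
    for _ in range(num_inits):
        g = WaveGANGenerator()                     # sketch class defined earlier
        for layer in g.deconv:
            torch.nn.init.zeros_(layer.bias)       # keep only the filters' response
        impulse = torch.zeros(1, 16 * g.d, 16)
        impulse[..., 0] = 1.0                      # unit impulse into the conv stack
        with torch.no_grad():
            x = impulse
            for layer in g.deconv:
                x = layer(x)                       # nonlinearities omitted (simplification)
            y = x.squeeze().numpy()
        spectra.append(np.abs(np.fft.rfft(y)))
    freqs = np.fft.rfftfreq(y.shape[0], d=1.0 / fs)
    return freqs, np.mean(spectra, axis=0)         # expect peaks near 250 Hz, 1 kHz, 4 kHz, ...
```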
The filters boost signal in regions of the frequency spectrum that are most prominent in the real data domain, and introduce notches at bands that are artifacts of the generative procedure as discussed in the previous section. Transposed convolution upsamples signals by inserting zeros in between samples and applying a learned filterbank. This operation introduces aliased frequencies, copies of pre-existing frequencies shifted by multiples of the new Nyquist rate, into the upsampled signal. While aliased frequencies are usually seen as undesirable artifacts of a bad upsampling procedure, in the generative setting their existence may be crucial for producing fine-grained details in the output. We experiment with three other upsampling strategies in WaveGAN: nearest-neighbor, linear and cubic interpolation, all of which attenuate aliased frequencies. In FIG3, we compare these strategies visually. While nearest neighbor upsampling ed in similar audio output to transposed convolution, linear and cubic interpolation strategies ed in qualitatively poor audio output (sound examples: chrisdonahue.com/wavegan_examples). We hypothesize that the aliased frequencies produced by upsampling convolutions may be more critical to audio generation than image generation. We developed our WaveGAN and SpecGAN models primarily to address the task of steerable sound effect generation. This is an inherently different task than text to speech (TTS), however autoregressive waveform models (e.g. WaveNet (van den) and SampleRNN ) that were developed for TTS can also be used to model and generate waveforms unconditionally. Hence, a comparison to these models for our task is reasonable. One upside of autoregressive models for our task is that they have the potential to produce high-quality audio. Potential downsides are 1) these models take several orders of magnitude longer to generate waveforms, and 2) they do not learn a compact latent space of waveforms, causing useful sound generation tasks like continuous exploration and interpolation to be impossible. We attempt to train two public implementations of WaveNet (ImplA 3 and ImplB 4) and SampleRNN failed to produce cohesive words (you can judge for yourself from our sound examples at the bottom chrisdonahue.com/wavegan_examples). This poor subjective performance is echoed by weak inception scores (weaker than any in TAB0): 1.07 ± 0.05, 1.29 ± 0.03, 2.28 ± 0.19 for WaveNet ImplA, WaveNet ImplB, and SampleRNN respectively. Note that these inception scores were calculated on far fewer examples (< 1k) than all of the scores listed in TAB0 (which were computed on 50k examples). This is because it took over 24 hours to produce even a thousand one-second examples with these methods (whereas our methods produce 50k examples in a few seconds).Autoregressive methods have not been demonstrated capable of learning to synthesize coherent words without conditioning on rich linguistic features. We are not claiming that these methods cannot learn to synthesize full words, merely that three open-source implementations were unable to do so with default parameters. We want to be clear that our intent is not to disparage autoregressive waveform methods as these methods were developed for a different task, and hence we excluded these poor scores from our table to avoid sending the wrong message. 
Instead, we hope to highlight that these implementations produced that were noncompetitive for our problem domain, and less useful (due to slowness and lack of a latent space) for creative generation of sound effects. Non GAN-activated cat WaveGAN activated cat SpecGAN activated cat As our improved throughout the course of this research, our cats became quite intrigued by the synthetic bird vocalizations produced by WaveGAN FIG4 ). While this was of course not a formal experiment, we did find this to be encouraging evidence that our method might be capable of producing audio that could additionally convince non-human animals. (n, 1024, 2d) Phase Shuffle (n = 2) (n, 1024, 2d) Conv1D (Stride=4) (25, 2d, 4d) (n, 256, 4d) LReLU (α = 0.2) (n, 256, 4d) Phase Shuffle (n = 2) (n, 256, 4d) Conv1D (Stride=4) (25, 4d, 8d) (n, 64, 8d) LReLU (α = 0.2) (n, 64, 8d) Phase Shuffle (n = 2) (n, 64, 8d) Conv1D (Stride=4) (25, 8d, 16d) (n, 16, 16d) LReLU (α = 0.2) (n, 16, 16d) Reshape (n, 256d) Dense (256d, 1) (n, 1) In TAB2, we list the full architectures for our WaveGAN generator and discriminator respectively. In TAB4, we list the same for SpecGAN. In these tables, n is the batch size, d modifies model size, and c is the number of channels in the examples. In all of our experiments in this paper, c = 1. All dense and convolutional layers include biases. No batch normalization is used in WaveGAN or SpecGAN. In TAB5, we list the values of these and all other hyperparameters for our experiments, which constitute our out-of-the-box recommendations for WaveGAN and SpecGAN. Adam (α = 1e−4, β1 = 0.5, β2 = 0.9) | Learning to synthesize raw waveform audio with GANs | 1,056 | scitldr |
The difficulty of obtaining sufficient labeled data for supervised learning has motivated domain adaptation, in which a classifier is trained in one domain, source domain, but operates in another, target domain. Reducing domain discrepancy has improved the performance, but it is hampered by the embedded features that do not form clearly separable and aligned clusters. We address this issue by propagating labels using a manifold structure, and by enforcing cycle consistency to align the clusters of features in each domain more closely. Specifically, we prove that cycle consistency leads the embedded features distant from all but one clusters if the source domain is ideally clustered. We additionally utilize more information from approximated local manifold and pursue local manifold consistency for more improvement. Results for various domain adaptation scenarios show tighter clustering and an improvement in classification accuracy. Classifiers trained through supervised learning have many applications , but it requires a great deal of labeled data, which may be impractical or too costly to collect. Domain adaptation circumvents this problem by exploiting the labeled data available in a closely related domain. We call the domain where the classifier will be used at, the target domain, and assume that it only contains unlabeled data {x t}; and we call the closely related domain the source domain and assume that it contains a significant amount of labeled data {x s, y s}. Domain adaptation requires the source domain data to share discriminative features with the target data . In spite of the common features, a classifier trained using only the source data is unlikely to give satisfactory in the target domain because of the difference between two domains' data distributions, called domain shift . This may be addressed by fine-tuning on the target domain with a small set of labeled target data, but it tends to overfit to the small labeled dataset . Another approach is to find discriminative features which are invariant between two domains by reducing the distance between the feature distributions. For example, domain-adversarial neural network (DANN) achieved remarkable using generative adversarial networks (GANs) . However, this approach still has room to be improved. Because the classifier is trained using labels from the source domain, the source features become clustered, and they determine the decision boundary. It would be better if the embedded features from the target domain formed similar clusters to the source features in class-level so that the decision boundary does not cross the target features. Methods which only reduce the distance between two marginal distributions bring the features into general alignment, but clusters do not match satisfactorily, as shown in Fig. 1(a). As a consequence, the decision boundary is likely to cross the target features, impairing accuracy. In this work, we propose a novel domain adaptation method to align the manifolds of the source and the target features in class-level, as shown in Fig. 1(b). We first employ label propagation to evaluate the relation between manifolds. Then, to align them, we reinforce the cycle consistency that is the correspondence between the original labels in the source domain and the labels that are propagated from the source to the target and back to the source domain. The cycle consistency draws features from both domains that are near to each other to converge, and those that are far apart to diverge. 
The proposed method exploits manifold information using label propagation which had not been taken into account in other cycle consistency based methods. As a , our approach outperforms other baselines on various scenarios as demonstrated in Sec. 4. Moreover, the role of cycle consistency is theoretically explained in Sec. 3.2 that it leads to aligned manifolds in class-level. To acquire more manifold information within the limited number of mini-batch samples, we utilize local manifold approximation and pursue local manifold consistency. In summary, our contributions are as follows: • We propose a novel domain adaptation method which exploits global and local manifold information to align class-level distributions of the source and the target. • We analyze and demonstrate the benefit of the proposed method over the most similar baseline, Associative domain adaptation (AssocDA) . • We present the theoretical on why the proposed cycle consistency leads to class-level manifold alignment, bringing better in domain adaptation. • We conduct extensive experiments on various scenarios and achieve the state-of-the-art performance. Unsupervised Domain Adaptation It has been shown that the classification error in the target domain is bounded by that in the source domain, the discrepancy between the domains and the difference in labeling functions. Based on this analysis, a number of works have endeavored to train domain-confusing features to minimize the discrepancy between the domains (; ; ; 2017). Maximum mean discrepancy can be used as a measure of domain discrepancy. In an approach inspired by GANs, a domain confusion can be converted into a minmax optimization. While minimization of domain discrepancy can be effective in reducing the upper bound on the error, it does not guarantee that the feature representation in the target domain is sufficiently discriminative. To address this issue, several techniques had been proposed. Explicit separation of the shared representation from the individual characteristics of each domain may enhance the accuracy of the model . This approach has been implemented as a network with private and shared encoders and a shared decoder. The centroid and prototype of each category can be used for class-level alignment . An alternative to such featurespace adaptation techniques is the direct conversion of target data to source data (; ;). Those proposed methods intend to transfer the style of images to another domain while preserving the content. This performs well on datasets containing Figure 2: Overview of our method. The feature generator G projects the input data into the feature space. The dashed line means weight sharing. The embedded source features f s and the target features f t are organized into a graph and then used together to evaluate cycle consistency through label propagation. The embedding classifier C learns from the source ground-truth labels. The discriminator D determines whether features originated in the source or the target domain. images that are similar at the pixel-level; they are problematic when the mapping between high-level features and images is complicated . Metric Learning Metric learning is learning an appropriate metric distance to measure the similarity or dissimilarity between data . Reducing the distances between similar data and increasing the distances between distinct data has shown to improve the accuracy of a classifier. 
Metric learning is particularly beneficial when very little labeled data is available, which is the situation for domain adaptation. combined metric learning and unsupervised domain adaptation with the enforcement of cycle consistency. In particular, the inner products of source features and target features with the same label are maximized, and minimized between features with different labels. AssocDA enforces the feature alignment between the source and target by forcing the two step round trip probability to be uniform in the same class and to vanish between different classes. Graph-based learning is closely related to metric learning, in that it achieves clustering using distance information. Label consistency is usually assumed, meaning that adjacent data tend to have the same labels . Label propagation has improved the performance of semi-supervised learning by enforcing label consistency by propagating labels from labeled to unlabeled data. To overcome need for fixed graphs to be provided in advance, the distances between each node can be adaptively learned , as in metric learning, and this increases accuracy in both semi-supervised and few-shot learning. Our algorithm, shown in Fig. 2, uses label propagation and cycle consistency to learn features from the source and the target domains which are both 1) indistinguishable each other and 2) close when placed within the same class, but distant when placed in different classes. The details are as follows. Manifold learning extracts intrinsic structures from both unlabeled and labeled data. We obtain these structures by constructing a graph whose vertexes are the embedded features and whose edges are the relations between data. We first embed the input data in the feature space, using the feature generator composed of convolutional layers following previous work . Subsequently, a fully connected graph is constructed according to the distances between the features. The edge weights W ij between the input data x i, x j are determined from the feature vectors using Gaussian similarity, ), where f i, f j are the embedded feature vectors of x i, x j, and σ is a scale parameter. It is known that graph-based methods are sensitive to the scale parameter σ. A large σ in an uniformly connected graph that disregards the latent structure of the data, while a small σ produces a sparse graph which fails to express all the relationship between the data. To adapt σ according to the embedded features, we take σ as a trainable variable to be learned during training. Label propagation is a method of manifold regularization, which in turn produces a classifier that is robust against small perturbations. Label propagation can be seen as a repeated random walk through the graph of features using an affinity matrix to assign the labels of target data . A label matrix y n ∈ R (Ns+Nt)×C refers to the labels assigned to data in both domains at the n-th step random walk. The dimension of y n is determined by N s, N t, and C which are the numbers of source and target data points and the number of classes, respectively. The first N s rows of y n contain the labels of the source data, and the remaining N t rows contain the labels of the target data. The initial label vector y 0 contains y s for the source data, which is one-hot coded ground-truth labels and zero vectors for the target data. The one step of the random walk transforms the label vector as follows: where,. 
W ts is a similarity matrix between the target and source data, and W tt is a similarity matrix which represents the interrelations in the target data. These are described in the Sec. 3.1. The normalization operation normalize(·) transforms the sum of each row to 1. The identity matrix in the normalized transition matrix T signifies that the labels of source data do not change because its labels are already known. In graph theory, these source data points would be called absorbing nodes. In label propagation, the labels of the target domain is assigned to the propagated labelsŷ t by infinite transition, formulated asŷ tt T ts y s, which converges as follows In our method,ŷ t is used to obtain the propagated labels of the source data in the same way aŝ y s = (I − T ss) −1 T stŷ t where T ss and T st are defined analogous to T tt and T ts, so that we can learn the features of which clusters match each other. We then refer to the property thatŷ s should be the same as the original label y s as cycle consistency. Pursuing cycle consistency forces not perfectly aligned features to move toward the nearest cluster, as shown in Fig. 3. The following theorem shows that enforcing cycle consistency on ideally clustered source data will segregate different classes of the source and the target data and gather the same classes. Theorem 1. Let {e i |1 ≤ i ≤ C} be the standard bases of C-dimensional Euclidean space. For the sake of simplicity, source data x 1, x 2, · · ·, x Ns are assumed to be arranged so that the first n 1 data belong to class 1, the n 2 data to class 2, and so forth. Assume that 1) the source data is ideally clustered, in the sense that T ss has positive values if the row and the column are the same class and zero otherwise, i.e., T ss = diag(T 1, T 2, · · ·, T C), the block diagonal where T i is a n i × n i positive matrix for i = 1, 2, · · ·, C and 2)ŷ s = y s. Then for all 1 ≤ j ≤ C, there exists a nonnegative vector v j ∈ R Ns such that 1) the part where source data belongs to j th class (from th element to [n 1 + n 2 + · · · + n j] th element) are positive and the other elements are all zero and 2) v j T stŷ t e i = 0 for all 1 ≤ i ≤ C, i = j. Proof. The illustration and the proof is given in Appx. A. In Thm. 1,ŷ t e i refers to the assigned probability as i th class to the target data. The implies that if a target data is enough to be predicted as i th class through label propagation, i.e., i th elements of the row inŷ t corresponding to the target data is nonzero, then the elements of T st which represent the transitions from source data of all but i th class to the target data should vanish, i.e., the target data is segregated from the source data in different classes. As described in Sec. 3.4, we employed DANN to prevent the target data distribution to be distinct from the source data distribution. If a column of T st is a zero vector, the feature of the corresponding target data for the column is considerably distant from all source data features. However, minimizing the DANN loss makes target features lie around source features, and thus each column of T st is not likely to be a zero vector. Combining this conjecture with Thm. 1, each row ofŷ t has only one nonzero value, i.e., every target data belongs to only one cluster. We thus argue that by pursuing this property, generator can learn more discriminative shared features, and classification performance may improve. 
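Putting the pieces of Sections 3.1 and 3.2 together, a hedged numpy sketch of the graph construction, the closed-form propagation to the target, and the reverse propagation used for the cycle-consistency check is given below. The 1/σ² scaling of the Gaussian kernel is a standard choice and an assumption, since the exact similarity expression is garbled in the extracted text.

```python
import numpy as np

def gaussian_similarity(f_a, f_b, sigma):
    d2 = ((f_a[:, None, :] - f_b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def row_normalize(*blocks):
    w = np.concatenate(blocks, axis=1)
    w = w / w.sum(axis=1, keepdims=True)
    split_at = np.cumsum([b.shape[1] for b in blocks])[:-1]
    return np.split(w, split_at, axis=1)

def propagate(f_s, f_t, y_s, sigma):
    """f_s, f_t: embedded features; y_s: (N_s, C) one-hot source labels."""
    # Forward propagation with absorbing source nodes:
    #   y_t_hat = (I - T_tt)^{-1} T_ts y_s
    T_ts, T_tt = row_normalize(gaussian_similarity(f_t, f_s, sigma),
                               gaussian_similarity(f_t, f_t, sigma))
    y_t_hat = np.linalg.solve(np.eye(len(f_t)) - T_tt, T_ts @ y_s)

    # Backward propagation for the cycle:
    #   y_s_hat = (I - T_ss)^{-1} T_st y_t_hat
    T_st, T_ss = row_normalize(gaussian_similarity(f_s, f_t, sigma),
                               gaussian_similarity(f_s, f_s, sigma))
    y_s_hat = np.linalg.solve(np.eye(len(f_s)) - T_ss, T_st @ y_t_hat)
    return y_t_hat, y_s_hat
```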
Cycle consistency is enforced by minimizing the l 1 loss L cycle betweenŷ s and y s: Comparison with AssocDA The proposed method has some resemblance with AssocDA in that they both consider the similarities and transitions between data. However, we argue that AssocDA is a special case of our method. First, our method exploits manifold over each domain by taking relations within the same domain into account through label propagation, whereas AssocDA only considers relations across the domains. Specifically, in Eq. 1, our method utilizes both T ts and T tt, but AssocDA ignores T tt which often has useful information about the target data manifold. Second, AssocDA forces the two-step transition to be uniform within the same class. This strict condition may drive the source features of each class to collapse to one mode and can cause overfitting. On the contrary, our method only constrains source data to preserve its original labels after the label propagation. Thus, it does not require all source data be close to each other within the same class; it allows moderate intra-class variance. The experiment in Sec. 4.1 and Fig. 4 support these arguments and visualize the effect of the differences. As shown in Thm. 1, the introduced cycle consistency utilizes graph based global manifold information and enforces the source and target features to be aligned in class-level. However, in practice, the limited size of mini-batch may restrict the available information of graph. The knowledge from the local manifold of each sample, in this case, can complement the global manifold information. In this regard, we additionally pursue local manifold consistency that the output should not be sensitive to small perturbations in the local manifold, as suggested elsewhere (; ;). Concretely, localized GAN (LGAN) is employed to approximate the local manifold of each data and sample a marginally perturbed image along the local manifold from the given data. LGAN allows it as LGAN focuses on learning and linking patches of local manifolds in its training procedure. The difference between the predicted label of the perturbed image and that of the original image is minimized to impose local manifold consistency of the classifier as follows: where, C, G and G L are the embedding classifier, the feature generator and the LGAN generator, respectively. LGAN generator, G L (x, z), takes an image x and noise z to generate locally perturbed image along the approximated local manifold. H(·, ·) denotes cross entropy. µ and η are coefficients for the source and the target local manifold consistency loss, respectively. Our method learns a clustered feature representation that is indistinguishable across the source and target domains through the training process as follows: where, D is the discriminator. α and β are coefficients for the last two terms and λ is a scheduling parameter described in Appx B.1. L class is a widely used cross-entropy loss for labeled source data and L dann is a GAN loss : where discriminator's output D(·) is the probability that the input originated from the source domain. From the metric learning perspective, L class serves to separate the source features according to their ground-truth labels, which supports the assumption in Thm. 1, the ideally clustered source features. Subsequently, L dann takes a role in moving the target features toward the source features, but it is insufficient to produce perfectly aligned clusters. 
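The local manifold consistency term can be sketched in the same spirit. Below, a generic perturbation function stands in for the LGAN generator G_L(x, z), which is the main assumption of this sketch; H is implemented as the cross entropy between the prediction on the perturbed image and the prediction on the original image, and detaching the original prediction is likewise an illustrative choice.

```python
import torch
import torch.nn.functional as F

def perturb(x, scale=0.05):
    # Placeholder for the LGAN generator G_L(x, z): any small perturbation of x.
    return x + scale * torch.randn_like(x)

def local_consistency_loss(classifier, generator, x_src, x_tgt, mu=0.01, eta=0.1):
    def consistency(x):
        p_orig = F.softmax(classifier(generator(x)), dim=1).detach()
        log_p_pert = F.log_softmax(classifier(generator(perturb(x))), dim=1)
        # cross entropy H(C(G(x)), C(G(G_L(x, z)))) averaged over the batch
        return -(p_orig * log_p_pert).sum(dim=1).mean()

    return mu * consistency(x_src) + eta * consistency(x_tgt)
```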
Our cycle loss L cycle and local loss L local facilitate clustering by enforcing cycle consistency and local manifold consistency. We present a toy example to empirically demonstrate the effect of our proposed cycle loss using manifold information compared to the most similar method, AssocDA. We designed synthetic dataset in 2-dimensional feature space with two classes as illustrated in the leftmost of Fig. 4. The source data lie vertically and the target data are slightly tilted and translated. The second column shows the negative gradients of AssocDA loss and our cycle loss with respect to each data. Negative gradients can be interpreted as the movement of features at each iteration. The third and fourth are the updated features using gradient descent in the middle and at the end of feature updates 1. As argued in Sec. 3.2, AssocDA does not consider the transition within the same domain and thus target data which are close to source data with different label (points inside red circles in the second column) are strongly attracted to them. On the other hand, the gradients of the cycle loss are much smaller than AssocDA. We speculate that it is because the attractions from source data in the same class are propagated through target data manifold. As a , AssocDA leads some data to move in wrong direction, being misclassified, while cycle loss brought correctly aligned manifolds. In addition, AssocDA attracts all features too close at the end of updates, which may cause overfitting. Last but not least, our cycle loss aligned source and target clusters correctly without the aid of dann loss. We thus argue that our method is complementary to DANN rather than an extension. We show the performance of the proposed method on two real visual dataset. First dataset, which we call by Digit & Object dataset, includes digit dataset such as SVHN and Synthetic Digits (DIGITS), Gradients of loss Progress at 150 steps Progress at 600 steps Figure 5: Visualization of learned features using t-SNE. Circles and x markers respectively indicate the source and target features. Colors correspond to labels. In all cases, the features from two domains form similar and tight clusters, which is the key objective of our method. and object dataset such as STL and CIFAR. We used ImageCLEF-DA as second dataset for more challenging benchmark. We employed three networks as previous work (; ; . A network with two convolutional layers and two fully connected layers for digit dataset and a network with nine convolutional layers and one fully connected layer for object dataset were implemented. Pretrained ResNet was used for ImageCLEF-DA dataset. More details on training settings, adaptation scenarios and an experiment on non-visual dataset are provided in Appx. B.1, B.2 and E. Tab. 1 compares the accuracy of our method on Digit & Object dataset with that of other approaches. For our method, we reported the of three models, one with local loss (L), another with cycle loss (C) and the other with both losses (C+L). Our algorithm outperformed the others on most of the tasks. In the most experiments, the performance of the proposed method was better than the state-ofthe-art. This suggests that enforcing alignment in addition to domain-invariant embedding reduces the error-rate. PixelDA showed superior performance on MNIST→MNIST-M, but it is attributable to the fact that PixelDA learns transferring the style of images at a pixel level which is similar to the way MNIST-M is generated from MNIST. T-SNE embeddings in Fig. 
5 indicates that the learned features are well aligned and clustered. Tab. 2 reports the on ImageCLEF-DA dataset experiments. The performance of our method was better than or comparable to those of other baselines. Especially, our method outperforms which also aims to learn clustered and aligned features. Although the objectives are related, the approaches are quite different. Our method utilizes the manifolds of the source and the target domain through label propagation and cycle consistency, whereas CAT considers the distance between two samples for clustering and the distance between the first-order statistics of distributions for alignment. We argue that the better performance is attributed to utilizing manifold information beyond one to one relations of which benefits are explained in Sec. 4.1. Throughout ImageCLEF-DA experiments, the proposed method without the local loss achieved better accuracy compared to that with the local loss. Approximation of the local manifold on ImageCLEF-DA generated by LGAN was slightly worse than that on Digit & Object dataset; perturbed image was blurred and semantically invariant with the original image. Hence, we speculate that the performance of the proposed method may be improved with better local manifold approximation. In this paper, we proposed a novel domain adaptation which stems from the objective to correctly align manifolds which might in better performance. Our method achieved it, which was supported by intuition, theory and experiments. In addition, its superior performance was demonstrated on various benchmark dataset. Based on graph, our method depends on how to construct the graph. Pruning the graph or defining a similarity matrix considering underlying geometry may improve the performance. Our method also can be applied to semi supervised learning only with slight modification. We leave them as future work. A PROOF OF THEOREM 1 Theorem 1. Let {e i |1 ≤ i ≤ C} be the standard bases of C-dimensional Euclidean space. For the sake of simplicity, source data x 1, x 2, · · ·, x Ns are assumed to be arranged so that the first n 1 data belong to class 1, the n 2 data to class 2, and so forth. Assume that 1) the source data is ideally clustered, in the sense that T ss has positive values if the row and the column are the same class and zero otherwise, i.e., T ss = diag(T 1, T 2, · · ·, T C), the block diagonal where T i is a n i × n i positive matrix for i = 1, 2, · · ·, C and 2)ŷ s = y s. Then for all 1 ≤ j ≤ C, there exists a nonnegative vector v j ∈ R Ns such that 1) the part where source data belongs to j th class (from [n 1 + n 2 + · · · + n j−1 + 1] th element to [n 1 + n 2 + · · · + n j] th element) are positive and the other elements are all zero and 2) v j T stŷ t e i = 0 for all 1 ≤ i ≤ C, i = j. From the assumption, T ss is a block diagonal matrix of which block elements are T 1,T 2,· · ·,T C. v j is all zero except n j elements in the middle of v j. The n j elements are all positive and their indices correspond to those of T j in T ss. In the proof, the left eigenvector u j of T j will be substituted to this part. Proof. From the Perron-Frobenius Theorem that positive matrix has a real and positive eigenvalue with positive left and right eigenvectors, T j, the block diagonal element of T ss, has a positive left eigenvector u j with eigenvalue λ j for all j = 1, 2, · · · C. 
Then, as shown below, v j = (0 0 ··· 0 u j 0 ··· 0) where n 1 + n 2 + · · · + n j−1 zeros, u j and n j+1 + n j+2 + · · · + n C zeros are concatenated, is a left eigenvector of T ss with eigenvalue λ j by the definition of eigenvector. From the label propagation, we have,ŷ By multiplying v j (I − T ss) on the left and e i on the right to the both sides in Equation 13 and combining with the assumptionŷ s = y s, we have, The last zero comes from the definition of v j. In this subsection, we offer the modified version of Thm. 1 when the source features are slightly perturbed from the ideally clustered condition and the other assumption y s =ŷ s holds. We start from representing T ss as follows to indicate the perturbation. where, δT ss is assumed to be sufficiently small under infinite norm and T ss is a block diagonal transition matrix when the source features are ideally clustered as stated in Thm. 1. In the proof above, we showed eigenvalue λ j and its corresponding eigenvector v j of T j. According to perturbation theory of eigenvalue and eigenvector , the eigenvector can be approximated by first order when the perturbation is small. More generally and precisely, where, the norm is vector or matrix 2-norm and m j is determined by T ss. For the sake of simplicity, we use Big-O notation in Eq. 19 and Eq. 20. Now, we reuse Eq. 16 from the proof of Theorem 1 since it is still valid under the modified condition. We apply Eq. 19 to the right hand side as follows, where i = j. Eq. 24 holds because only j th block elements of v j are nonzero. We also used the fact that y s is bounded by 0 and 1. Similarly, the left hand side of Eq. 22 can be transformed as follows, The second term of Eq. 26 holds because T st andŷ t are bounded by 0 and 1. Finally, by combining Eq. 24 and Eq. 26, we have, Eq. 27 implies that if the perturbation is sufficiently small i.e., ||δT ss || ∞ << 1 and a target data is enough to be predicted as i th class through label propagation, then the transitions from source data of all but i th class to the target data is negligible because v j is positive for j th block and zero for others. It is the same with the of Theorem 1. In addition, the more strongly the target data is classified as i th class i.e., the corresponding element ofŷ t becomes greater, the smaller the transitions from source data in the other classes are, indicating the segregation against the other classes. Practically, the coefficients for L cycle and L cycle are scheduled to facilitate the clustering of source features correctly at the early stage of training. Thus we may assume that T ss is marginally perturbed around the ideally clustered one when our cycle loss takes effect. Scheduling the effect of losses To reduce the effect of noisy signal from L dann and L cycle during the early stages of training, a weight balance factor λ = 2 1+exp(−γ·p) − 1 is applied in Eq. 6. A constant γ determines the rate of increase of λ; p is the progress of training, which proceeds from 0 to 1. The parameter was introduced to make a classifier less sensitive to the erroneous signals from the discriminator in the beginning. Throughout the experiments, γ was set to 10. Hyperparameter Although it would be ideal to avoid utilizing labels from the target domain in the hyperparameter optimization, it seems that no globally acceptable method exists for this. One possibility is reverse validation scheme but this may not be accurate enough to estimate test accuracy . 
In addition , applications exist where the labeled target domain data is available at the test phase but not at the training phase. Hence, we adopted the protocol of that exploits a small set of labeled target domain data as a validation set; 256 samples for the Amazon review experiment and 1,000 samples for the other experiments (; 2017;). During training, Adam optimizer with learning rate of 10 −3 was utilized. Exponential moving averaging was applied to the optimization trajectory. It is an inherent characteristic of our method that each data sample affects the graph structure. So it is important for each class sample in each batch to represent its classes accurately. In other words, the transition matrix can be corrupted by biases in the samples. Therefore, the number of data samples in each class in a batch should be sufficient to avoid any likely bias. To address this problem, we performed experiments with batch size of up to 384 and observed very little improvement beyond a batch size of 128. So we fixed the batch size to 128 for Digit & Object dataset. For the ImageCLEF-DA dataset, we set the batch size to 36 because of limited computing resource. MNIST → MNIST-M The MNIST database of hand-written digits consists of digit images with 10 classes and MNIST-M consists of MNIST digits blended with natural color patches from the BSDS500 dataset . In addition, following other work the colors of the MNIST images were inverted randomly, because their colors are always white on black, whereas the MNIST-M images exhibit various colors. MNIST ↔ USPS USPS is another dataset of hand-written images of digits, with 10 classes. USPS contains 16×16 images and the size of the USPS image is upscaled to 28×28, which is the size of the MNIST image in our experiment. The evaluation protocol of CYCADA is adopted. SVHN → MNIST The Street View House Numbers (SVHN) dataset consists of images of house numbers acquired by Google Street View. The natural images that it contains, are substantially different from the line drawings in the MNIST dataset. The size of each MNIST image is upscaled to 32×32, which is the size of SVHN images. SYN DIGITS → SVHN SYN DIGITS dataset is synthetic number dataset which is similar to the SVHN dataset . The most significant difference between the SYN DIGITS dataset and the SVHN dataset is the untidiness in the of real images. CIFAR ↔ STL Both CIFAR dataset and STL dataset are 10-class datasets that contain images of animals and vehicles. Not overlapped classes are removed to make a 9-class domain adaptation task . We used the larger network only for this experiment. 2 The twelve common classes of three publicly available dataset (Caltech-256, ImageNet ILSVRC2012, and PASCAL VOC2012) are selected to form visual domain adaptation tasks. We perform all six possible adaptation scenarios among these three dataset. We searched hyperparameters within α = {0, 0.01, 0.1, 1}, β = {0.01, 0.1, 1}, µ = {0, 0.01} and η = {0, 0.1}. Perturbation to the LGAN generator, i.e. z, is fixed to 0.5 for all experiments. The best hyperparameters for each task is shown in Table. 3. Setting an appropriate value for the scale parameter, σ, is important because it has a substantial role in determining the transition matrix, T. Therefore, we conducted several experiments with fixing σ to various values. For these experiments, we excluded L local to observe the effect of σ.' Adapt' means that the σ is learned to adapt according to the embedded features. 
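For reference, the weight balance factor described in B.1 above is simple enough to state directly:

```python
import math

def balance_factor(progress, gamma=10.0):
    # lambda = 2 / (1 + exp(-gamma * p)) - 1, with p the training progress in [0, 1]
    return 2.0 / (1.0 + math.exp(-gamma * progress)) - 1.0

# lambda ramps from 0 towards 1, muting the noisy loss terms early in training
print([round(balance_factor(p), 3) for p in (0.0, 0.1, 0.5, 1.0)])
```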
For four out of five scenarios, fixing σ to 1 performed better than fixing it to 0.1 or 10. With this observation, we initialized σ to 1 took it as a trainable variable. The of adaptively learning σ is reported at the bottom row of the table. Compared to fixing σ to 1, adaptively learning σ achieved better accuracy and had a lower standard deviation range which means that it is more stable. We also would like to highlight that our model is robust to the initial value of σ. We conducted extensive experiments with initializing σ to 0.1, 1 and 10 and taking it as a trainable variable. Except for SVHN → MNIST transfer task with setting initial σ value to 10, the initial value of σ has a minute influence to the accuracy. We believe that adaptively learning the scale parameter can be usefully employed in any other graph-based method. The learned σ values for various scenarios are as follows. It seems that σ adaptively learns its value according to the transfer task, regardless of its initialization. We tried l 2 loss and cross entropy loss to enforce cycle consistency as well. We excluded L local to compare the effectiveness of these functions. For all Digit dataset adaptation experiments, evaluating cycle consistency with l 1 norm achieved the highest accuracy. We speculate that l 1 norm is more numerically stable or provides more effective gradients than other functions in this case. The Amazon Reviews dataset provides a non-visual domain for domain adaptation experiments. It contains reviews of books, DVDs, electronics, and kitchen appliances encoded as 5,000-dimensional feature vectors containing unigrams and bigrams of the texts with binary labels. Four-and five-star reviews are labeled'positive'; reviews with fewer stars are labeled'negative'. We used 2,000 labeled source data and 2,000 unlabeled target data for training, and between 3,000 to 6,000 target data for testing. Tab. 8 shows that our method performs better than DANN , VFAE and ATT on the Amazon Reviews data in six out of twelve experiments. Our method was more accurate than DANN in nine out of twelve settings, showing approximately 2.0% higher classification accuracy on average. | A novel domain adaptation method to align manifolds from source and target domains using label propagation for better accuracy. | 1,057 | scitldr |
We propose a new method for training neural networks online in a bandit setting. Similar to prior work, we model the uncertainty only in the last layer of the network, treating the rest of the network as a feature extractor. This allows us to successfully balance between exploration and exploitation due to the efficient, closed-form uncertainty estimates available for linear models. To train the rest of the network, we take advantage of the posterior we have over the last layer, optimizing over all values in the last layer distribution weighted by probability. We derive a closed form, differential approximation to this objective and show empirically over various models and datasets that training the rest of the network in this fashion leads to both better online and offline performance when compared to other methods. Applying machine learning models to real world applications almost always involves deploying systems in dynamic, non-stationary environments. This dilemma requires models to be constantly re-updated with new data in order to maintain a similar model performance across time. Of course, doing this usually requires the new data to be relabeled, which can be expensive or in some cases, impossible. In many situations, this new labeled data can be cheaply acquired by utilizing feedback from the user, where the feedback/reward indicates the quality of the action taken by the model for the given input. Since the inputs are assumed independent, this task can be framed in the contextual bandit setting. Learning in this setting requires a balance between exploring uncertain actions (where we risk performing sub optimal actions) and exploiting actions the model is confident will lead to high reward (where we risk missing out on discovering better actions).Methods based on Thompson sampling (TS) or Upper Confidence Bounds (UCB) provide theoretically BID1 BID0 and empirically established ways BID12 BID3 for balancing exploration/exploitation in this setting. Unfortunately, both methods require estimation of model uncertainty. While this can be done easily for most linear models, it is a difficult and open problem for large neural network models underlying many modern machine learning systems. An empirical study by BID17 shows that having good uncertainty estimates is vital for neural networks learning in a bandit setting. Closed formed uncertainty estimations (and online update formulas) are available for many linear models. Since the last layer of many neural networks are usually (generalized) linear models, a straightforward way for learning neural networks in a bandit setting is to estimate the uncertainty (as a distribution over weights) on the last layer only, holding the previous layers fixed as feature functions which provide inputs to the linear model. This method (and variants thereof) has been proposed in bandit settings BID17 BID13 as well as other related settings (; BID16 BID11 and has been shown to work surprisingly well considering its simplicity. This style of methods, which we refer to as Bayesian last layer or BLL methods, also has the advantage of being both relatively model-agnostic and scalable to large models. Of course, BLL methods come with the tacit assumption that the feature functions defined by the rest of the network output good (linearly separable) representations of our inputs. This means that, unless the input data distribution is relatively static, the rest of the network will need to be updated in regular intervals to maintain low regret. 
In order to maintain low regret, the retraining objective must: 1) allow new data to be incorporated quickly into the learned model, and 2) prevent previously learned information from being quickly forgotten. Previous papers retrain BLL methods simply by sampling minibatches from the entire pool of previously seen data and maximizing log-likelihood over these minibatches, which fails to meet the first criteria above. In this paper we present a new retraining objective for BLL methods meeting both requirements. We avoid retraining the last layer with the entire network (throwing out the uncertainty information we learned about the last layer) or retraining with the last layer fixed (fixing the last layer to the mean of its distribution). Instead, we utilize the uncertainty information gathered about the last layer, and optimize the expected log-likelihood of both new and old data, marginalizing 1 over the entire distribution we have on the last layer. This gives a more robust model that performs relatively well over all likely values of the last layer. While this objective cannot be exactly computed, we derive a closed form, differentiable, approximation. We show that this approximation meets both criteria above, with a likelihood term to maximize that depends only on the new data (meeting the first point), and a quadratic regularization term that is computed only with previously seen data (meeting the second point). We show empirically that this method improves regret on the most difficult bandit tasks studied in BID17. We additionally test the method on a large state-of-the-art recurrent model, creating a bandit task out of a paraphrasing dataset. Finally, we test the method on convolutional models, constructing a bandit task from a benchmark image classification dataset. We show that our method is fast to adapt to new data without quickly forgetting previous information. Contextual bandits are a well researched class of sequential decision problems (; BID10, of which, many variants exist. In this paper, we mainly consider the multiclass contextual bandit problem studied in BID7 . The problem takes place over T online rounds. On round t, the learner receives a context x t, predicts a class label y t, and receives binary reward r t indicating whether the chosen label is correct. No other information is received about the other classes not picked. In our setting, we assume each class c (the arms of the bandit) has associated with it a vector z c and that the probability of a class label is modeled by: DISPLAYFORM0, where σ is the logistic function and f θ is a feature function parameterized by θ. In our case, f θ defines a neural network, while z c can be seen as the last layer of the network 2.Our goal is to get low regret, R = T i r * i − T i r i, where r * i is the optimal reward at step i. The key to getting low regret is employing a policy for balancing exploration and exploitation. If we capture the uncertainty in each z c at time t by modeling its posterior distribution over previous data D t−1 as a multivariate Gaussian, z c ∼ p(z c |D t−1), then we can easily deploy sound exploration strategies such as Thompson sampling or UCB. If we hold f θ fixed, then we can easily model this distribution by doing an online Bayesian regression on the outputs of f θ, which gives us closed form formulas for updating the posterior over the last layer (specifically, its mean and covariance) given a single datapoint.3 When f θ is a neural network, then this is an instance of a BLL method. 
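As an illustration of the online phase, the sketch below keeps a Gaussian posterior over one arm vector z_c and updates it in closed form from the features f_θ(x). Treating each arm as a Bayesian linear regression with a fixed noise variance is a simplification of the logistic arms used in the paper, and the prior and noise scales are assumptions of this sketch.

```python
import numpy as np

class BayesianLastLayerArm:
    """Gaussian posterior over one arm vector z_c, updated online from f = f_theta(x)."""

    def __init__(self, dim, prior_var=1.0, noise_var=1.0):
        self.precision = np.eye(dim) / prior_var   # Sigma^-1, starts at the prior
        self.b = np.zeros(dim)                     # precision-weighted mean
        self.noise_var = noise_var

    def update(self, f, reward):
        # rank-one posterior update after observing (f, reward) for this arm
        self.precision += np.outer(f, f) / self.noise_var
        self.b += reward * f / self.noise_var

    def mean_cov(self):
        cov = np.linalg.inv(self.precision)
        return cov @ self.b, cov

def thompson_choose(arms, f, rng):
    # sample an arm vector from each posterior and pick the highest predicted reward
    scores = [rng.multivariate_normal(*arm.mean_cov()) @ f for arm in arms]
    return int(np.argmax(scores))
```

During the online phase only the chosen arm's posterior is touched, so the per-round cost stays at a single rank-one update plus a solve for the sampled arm vectors.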
BLL methods have been shown to be an effective, model agnostic, and scalable way to deal with exploration problems involving neural networks. Previous work has found them to be a pragmatic method for obtaining approximate uncertainty estimates for exploration BID13 BID17 BID16 BID2 and as proxies for Gaussian processes in both Bayesian Optimization problems and general regression tasks BID11.If f θ is fixed, then z c can be updated efficiently in an online manner. An unanswered question still remains however: how does one actually update and learn f θ? If you don't care about achieving low regret (ie you only care about offline performance), then the answer is easy; just gather your data, train f θ offline, possibly with off-policy learning methods (; BID6, and learn the Bayesian regression post-hoc. Of course, if you are concerned about online performance (regret) then this is not a viable option. A training method for f θ must take care of two things: when do we update the feature functions and what do we update them with? A reasonable answer to the former question is to update on a fixed schedule (every T rounds). In this paper, we focus on answering the latter questions of which there are two obvious solutions, each with a corresponding problem: Sample minibatches only from recent data. Problem: We may overfit on this data and forget old information. Sample minibatches uniformly from the set of all collected data (both recent and old). Problem:We have lost the ability to adapt quickly to new data. If the input distribution suddenly shifts, we will likely have to wait many iterations before newer data becomes common enough in our minibatch samples, all while our regret increases. One thing to consider is that when it comes time to train our feature functions, we have access to a distribution over our last layer. If, for example, our distribution has suddenly shifted, then the last layer distribution should have more variance, ideally placing density on last layers that do well on older data, as well as those that fit well to the new data. If the distribution remains the same, then the variance should be low and density should be placed on relatively few values. Intuitively, we can get the best of both worlds (ability to adapt or retain information when needed) by optimizing over all values of the last layer weighted by their probability. In the next section, we derive a local approximation to this objective that shows this indeed is the case. Let D t and D t−1 be the most recent set of data collected online, and the set of all previously collected data, respectively. Additionally, assume a zero mean Gaussian prior over the last layer, p(z|θ) that is constant with respect to θ. Recall that during online training we fix the parameters θ = θ t−1, and model the distribution Q = p(z c |D t, D t−1, θ t−1). We want a value of θ such that both our data and the last layer values drawn from Q are likely. Thus our objective is to maximize: DISPLAYFORM0 We can write the marginal likelihood as a convolution between a logistic function and a Gaussian (based on our assumption of zero mean Gaussian prior p(z|θ)), which can be approximated in closed form as per MacKay FORMULA1: DISPLAYFORM1 Where µ is the mean of p(z|θ). Since we have a zero mean prior, the above term evaluates to σ whose value is a constant 1 2. Using this , we can rewrite equation FORMULA1 as: DISPLAYFORM2 Where c is a constant entropy term which can be ignored. 
For the first expectation term, we can use a second order Taylor expansion around E z∼Q [log p(D t |z, θ)] to get a closed form approximation 4: DISPLAYFORM3 This approximation was used in and shown to work well empirically. The expectations in equation FORMULA4 The KL term in equation FORMULA3 can also be approximated locally with a second Taylor expansion around the current value of θ = θ t−1. Let K(θ) = KL(Q||p(z|D t−1, θ)). Then, the second order Taylor expansion around θ = θ t−1 is: DISPLAYFORM4 Utilizing properties of KL divergence, as well as equation FORMULA2, it can be derived 5 that K (θ t−1) will evaluate to 0, and K (θ t−1) will evaluate to βF P, where β = DISPLAYFORM5 and F P is the Fisher Information Matrix of P = p(z|D t−1, θ t−1). Getting rid of constants, we can write the local KL approximation (when θ is close to θ t−1) as: DISPLAYFORM6 The term β defines the ratio between the expected data likelihood given the last layer z distributed as z ∼ p(z|D t−1, θ t−1) and the expected data likelihood given the last layer is distributed under the prior p(z|θ). This indicates that the better our previous values of θ t−1 and z explain the data, the more we should regularize when incorporating new data (i.e raising the value of β). In practice, these values may be computed or approximated, however for efficiency, we treat β as a hyperparameter, and linearly increase it throughout the learning process. Our final objective to optimize is thus: DISPLAYFORM7 Notice that the old data is now only used to calculate the Fisher information matrix F P and is not actually involved in the optimization. Thus, the optimization (at least temporarily) over all our data can be done by simply drawing minibatches from the new data only, while the old data is only used to calculate the regularization term. The practical benefit of this is that the regularization term can be easily computed in parallel while doing the online Bayesian regression and collecting new data. The quadratic regularization term shares similarities to objective functions in continual learning which aim to prevent catastrophic forgetting BID8 ). Combining the online learning and the network retraining stages described in the previous section gives us the general form of the iterative algorithm we study in this paper. The algorithm alternates between two stages:Online Phase: As input, take in a set of data D t−1, the posteriors (one for each arm) of the last layer conditioned on previous data p(z|D t−1) as well as a fixed value of f θ. This phase takes place over a series of T online rounds. In every round, the learner receives a context x i, and uses the posteriors over the last layer to decide which arm to pull (via Thompson sampling or UCB). The learner receives feedback y i upon pulling the arm c, and updates the posterior over z c with it. After T rounds, the learner outputs the updated posteriors over z, and the data collected this phase, D t.Offline/Retraining Phase: As input, take in D t, D t−1, and the posteriors over z. Retrain f θ using method described in Section 3. Set D t−1 = D t ∪D t−1. Recompute the posteriors over z conditioned on D t−1 using the new value of f θ. Output D t−1, f θ, and the updated posteriors p(z|D t−1).The marginalization method described in Section 3 is one type of retraining method. We compare it against two other methods in the next section and present for various experiments. We evaluate our technique across a diverse set of datasets and underlying models. 
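Before turning to the experiments, the retraining step implied by the final objective can be sketched as follows: the likelihood term is computed on minibatches of the newly collected data only, while the old data enters solely through a quadratic penalty around θ_{t−1} weighted by β and the Fisher information. Using a diagonal empirical Fisher and an ordinary supervised loss as the negative log-likelihood are simplifications made for this sketch.

```python
import torch

def diagonal_fisher(model, old_loader, loss_fn):
    # empirical diagonal Fisher of the previously seen data (diagonal approximation)
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in old_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(old_loader), 1) for n, f in fisher.items()}

def retrain_step(model, optimizer, new_batch, loss_fn, fisher, theta_old, beta):
    x, y = new_batch
    nll = loss_fn(model(x), y)                    # likelihood term on new data only
    penalty = sum((fisher[n] * (p - theta_old[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    loss = nll + beta * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# theta_old is a snapshot of the parameters at the end of the online phase:
# theta_old = {n: p.detach().clone() for n, p in model.named_parameters()}
```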
As an initial sanity check, we first evaluate our method on the three most difficult (but low dimensional) bandit tasks analyzed in BID17. We next look at two higher dimensional problems and models; one being a Natural Language Processing (NLP) task using a state-of-the-art recurrent model and the other being a vision task using a convolutional model. In particular, we look at the degree at which each method can achieve both good online performance (regret) and good offline performance (offline test set accuracy), even in the face of large shifts in the data distribution. We provide additional details on hyperparameters, experimental setup, and dataset information in Appendix 7.4. All experiments are run 5 times, and reported with the mean and standard error. We evaluate all the datasets against the baseline presented in BID17, as well as a variant of our proposed method. Bandit feedback. In this setting the models are trained using bandit feedback as the label:Marginalize. This is our method of marginalizing over all values of the last layer for the neural network training. Minibatches are sampled from the new data only and the regularization term is computed from the old data. Sample New. This baseline creates minibatches using only the newly collected data, optimizing the likelihood of the new data only. It is equivalent to our method with a regularization constant of zero. As mentioned in Section 2.1, this method is good at adapting to new data but has a drawback of forgetting the old information. Sample All. This is the retraining method presented in BID17. In this method, a set number minibatches are created by uniformly sampling from all collected data (both old and new). SGD gradient updates are then performed using these batches. This method is slow to adapt but retains older information (refer Section 2.1).Full feedback. In this setting models are trained using all the labels for the datasets:Batch Train. When evaluating the offline accuracy, we also give the for a model that has been trained on the shuffled data in batch mode, with all the labels for the dataset (i.e. full feedback). This measures how well we could do given we had access to all the labels (instead of just the bandit feedback), and trained in a normal offline setting. Surprisingly, as we see in some cases, training online with marginalization sometimes performs comparable to training offline. We first confirm that our method gives good online performance on simpler, but previously studied, problems in BID17.We present on the three hardest bandit tasks analyzed in BID17, the Census, Jester, and Adult dataset. The bandit problems for these datasets are defined as in previous work:Census and Adult. Both these datasets are used for multiclass classification problem. Census dataset has 9 classes whereas Adult dataset consists of 14 classes. For both datasets the bandit problem is created as follows: for each class we assign an arm, and each arm is associated with a logistic regression (parametrized by a vector) that takes a context as input and returns the expected reward (0 or 1) for selecting the arm. In the online round we receive a context (feature vector) and pick an arm according to some policy (like UCB), and receive a reward. Only the picked arm is updated in each round. Jester BID5 This dataset consists of jokes with their user rating. For the bandit problem, the model receives a context representing a user along with 8 jokes out of which it is required to make recommendation of 1 joke. 
In this setting each joke is defined as a bandit arm. The problem here is similar to above with the exception that each arm is associated with a linear regression and outputs the predicted user rating for the selected joke. The reward returned is the actual user rating for the selected joke. As previously done in BID17, we use a two layer MLP as the underlying model, using the same configuration across all methods. For Marginalize and Sample New, we perform the retraining after 1000 rounds. For Sample All we update after 100 rounds just like BID17. In Table 1 we report the average cumulative regret as well as the cumulative regret relative to a policy that selects arms uniformly at random. We report the for both Thompson Sampling (TS) and for UCB policies. Results are similar for either UCB and TS which shows that policies does not influence performance of the training mechanisms. On most of the tasks both Marginalize (our method) and Sample New outperforms Sample All (method used in BID17) in terms of cumulative regret. Both Marginalize and Sample New techniques are very similar in performance for the three datasets. All the three datasets used in this experiment are low dimensional, static, and relatively easy to learn, hence there is not much history to retain for Sample New technique. In the next section we will present on larger datasets and also evaluate where we will show that our method performs better than Sample New. Next we evaluate our method with a bigger and more complex underlying model on the NLP domain. We selected Bilateral Multi-Perspective Matching (BiMPM) , a recurrent model that performs well on several sentence matching tasks, including the paraphrase identification task, to evaluate our method. The goal of the paraphrase identification task is to determine whether a sentence is a paraphrase of another sentence, i.e., whether they convey the same meaning. To evaluate whether our algorithm is robust to shifts in data distribution we combined two different paraphrase identification datasets: i) The Quora Question Pairs dataset (Quora), 6 which contains 400,000 question pairs from the QA website Quora.com, and ii) The MSR Paraphrase Corpus (MSR) BID4, which contains 5,800 pairs of sentences extracted from news articles. To create an online training dataset we concatenate the MSR training set to a sample of 10,000 examples from the Quora training dataset 7.We run the online algorithms on this dataset to report the regret values, while we report the offline performance on the MSR and Quora test sets. We use UCB as our search strategy, as it performs similarly to Thompson sampling and runs much faster in our implementation. We analyze the following two bandit tasks:Multiclass. Like the previous datasets, we create a bandit problem by treating each class as an arm parameterized by a vector and the contexts as the individual data instances. A reward 1 is awarded for identifying correctly if the two sentences in the pair are paraphrase. For each method, we perform an offline retraining after 1,000 online rounds. Pool. Like the multiclass task, the pool based task occurs over a series of rounds. On each round, the model receives a pool of k(=3) instances, and must select one of them for the user. After that the model receives a reward based on its selection. The goal of the model is to learn a scoring function that predicts the expected reward for selecting a certain instance, while at the same time trying to keep regret low. 
This setting can be seen as an instance of the bandit problem formulation described in . In our case, our instances are candidate paraphrase pairs, where the model gets a reward of 1 for returning a valid paraphrase pair, and 0 otherwise. We use the same implementation and hyperparameters for BiMPM as in . For Marginalize and Sample All, we perform the retraining every 500 rounds. Sample New performed poorly offline on this setting and is thus updated every 1,000 rounds. In TAB2 we show that our method Marginalize outperforms both Sample All and Sample New techniques for both multiclass and pool based tasks. Sample All and Sample New have comparable cumulative regret. Sample New has worse offline accuracy on Quora dataset (because it forgets old information), while it has better offline accuracy on MSR (because it is able to adapt quicker). For Batch train, both multiclass and pool based tasks are same-a binary classification problem. Batch train performs only slightly better than our method in terms of offline accuracy, where Batch train gets full feedback, while our method only gets partial (bandit) feedback. FIG1 further shows that when the data distribution changes (switching form Quora to MSR in the pool based task) Marginalize and Sample New are able to adapt much faster than Sample All. Overall Marginalize achieved a lower regret as well as higher offline accuracy for both the bandit settings. We additionally evaluate our method using a convolutional neural network, which is a common network architecture for computer vision applications. Table 3: Image classification Bandit Task on CIFAR-10 dataset. We use CIFAR-10 dataset BID9 ) for this experiment. It is a commonly used dataset for image classification task consisting of 60,000 images from 10 different classes. Similar to the Quora/MSR tasks, we simulate a domain shift by concatenating together two datasets. In this case we create two data sets from CIFAR-10 by partitioning the dataset into images depicting animals (6 classes) and images depicting transportation (4 labels). As above, we analyze two bandit tasks:Multiclass. We define the multiclass bandit task similarly as above, for each of the 10 classes in CIFAR-10, we assign one arm. At each round, the model receives an image, guesses a class, and receives feedback (0 or 1) for this class only. The task is considerably more difficult than the multiclass paraphrase bandit due to the number of classes. We use 1,000 rounds for the retraining frequency for all methods. Pool. We also define a pool based bandit, similar to the pool based paraphrase bandit, with pool size k = 5. In this case we turn CIFAR-10 info a binary classification task. We select the two most difficult classes to classify (airplanes and birds, according to the confusion matrix of our base CNN model) in CIFAR-10, and denote these as the positive class. Like the previous pool task, the learner receives a pool of images and must select one. A reward of 1 is given for selecting an image belonging to the positive class, 0 otherwise. As done in the previous pool task, the data is sorted as to simulate a change in domain. We use a standard convolutional neural network architecture for this task, detailed in Appendix 7.5. We use 500 rounds for the retraining frequency for all methods. In Table 3 we present for the image classification bandit task, using average cumulative regret and offline accuracy as evaluation metrics. 
Again, Sample New performs better than Sample All for cumulative regret but under-performs in the offline setting. As expected, our method performs well for both cumulative regret and offline setting. For the multiclass task, our method performs significantly lower than batch train. This is not too surprising, for two reasons: i) Training a CNN architecture takes many more epochs over the data to converge (∼ 20 in our case) which is not achieved in a bandit setting; ii) CIFAR-10 has 10 classes, each defining an arm and in our setting; the bandit algorithms only gets feedback for one class in each round, compared to the full feedback received in batch train. Effectively, the number of labels per class in cut by a factor of 10. This is not as much an issue in the pool task, where we can see the between batch train and the bandit algorithms are comparable. In this paper we proposed a new method for training neural networks in a bandit setting. We tackle the problem of exploration-exploitation by estimating uncertainty only in the last layer, allowing the method to scale to large state-of-the-art models. We take advantage of having a posterior over the last layer weights by optimizing the rest of the network over all values of the last layer. We show that method outperforms other methods across a diverse set of underlying models, especially in online tasks where the distribution shifts rapidly. We leave it as future work to investigate more sophisticated methods for determining when to retrain the network, how to set the weight (β) of the regularization term in a more automatic way, and its possible connections to methods used for continual learning. We utilize the same notation here as in Section 3. First, we show how to arrive at equation from the main objective, equation FORMULA1. DISPLAYFORM0 By marginalizing over our zero mean Gaussian prior and using equation FORMULA2, it follows that the last expectation is approximately a constant, and can be removed. The second expectation is equal to the negative cross entropy between Q = p(z|D t, D t−1, θ t−1) and P θ = p(z|D t−1, θ). Using the fact that KL(Q||P θ) = CE(Q||P θ) − H(Q), we can replace the negative cross entropy with −KL(Q||P θ) − H(Q), where H(Q) is the entropy of Q which is constant and can be ignored, yielding equation.We next derive the KL term. As mentioned, we approximate the KL term locally around θ t−1 with a second order Taylor expansion. Again, let K(θ) = KL(p(z|D t, D t−1, θ t−1)||p(z|D t−1, θ)). The second order Taylor expansion around θ = θ t−1 is: DISPLAYFORM1 We can rewrite K (θ) with respect to θ as: DISPLAYFORM2 This can also be done for K. Let ∇ i indicate either the gradient (i = 1) or the Hessian (i = 2). Then we can rewrite the above expectation with respect to the distribution P = p(z|D t−1, θ t−1) instead: DISPLAYFORM3 Using the fact that p(z|D t, D t−1, θ t−1) = p(D t |θ t−1) −1 p(D t |z, θ t−1)p(z|D t−1, θ t−1), we can rewrite the above as: DISPLAYFORM4 the UCB is straightforward; if Σ is the covariance of our posterior over z, and x is the context, then with probability at least 1 − δ, the term (1 + ln(2/δ)/2) f θ (x) T Σf θ (x) is an upper confidence bound. The term α = (1 + ln(2/δ)/2) is treated as a hyperparameter. The arm c chosen is the one whose parameter vector z c maximizes the following: DISPLAYFORM5 The hyperparameter values that are used for all methods across all tasks (the global hyperparameters) are presented in TAB6. 
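As a companion to the UCB description above, a minimal sketch of the arm scoring; it reuses mean_cov from the earlier last-layer sketch and, as stated there, treats α as a hyperparameter.

```python
import numpy as np

def ucb_choose(arms, f, alpha=1.5):
    # score = posterior mean prediction + alpha * sqrt(f^T Sigma f)
    scores = []
    for arm in arms:
        mean, cov = arm.mean_cov()
        scores.append(mean @ f + alpha * np.sqrt(f @ cov @ f))
    return int(np.argmax(scores))
```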
The hyper parameters for the low dimensional bandit tasks (we uses the same values for each low dimensional dataset), the paraphrase bandit (multiclass and pool), and the image classification bandit (multiclass and pool) are presented in Table 5. The meanings of the non obvious hyperparameter values are described below:Retrain Epochs: For Sample New and Marginalize; how many times to pass over the new data D t.Update Frequency: How many online rounds to do before updating the rest of the network offline. Num Updates: For Sample All; the number of batches to uniformly sample (and update with) from the entire distribution. Table 5: Method specific experiment details for all tasks. In this section we detail the architectures used for each task. We use the same underlying model for each method. Low dimensional task models. For the Low dimensional tasks, we utilize the same exact Multi Layer Perceptron (MLP) models as used in BID17. The model is a 2 layer MLP with a hidden layer dimension of 100, with ReLu activation functions. Paraphrase task. The paraphrase task uses the same BiMPM architecture and parameters used in. We refer readers to the corresponding paper for details. Image classification task. The convolutions architecture we use is defined as: (i) Two 3 × 3 convolutional layers with 64 filters and ReLu activations; (ii) A 2 × 2 max pooling layer with a stride of 2; (iii) Dropout layer with drop probability 0.25; (iv) A 3 × 3 and 2 × 2 convolutional layer both with 128 filters and ReLu activations; (v) A 2 × 2 max pooling layer with a stride of 2; (vi) Dropout layer with drop probability 0.25; (vii) A fully connected layer with 1024 units, followed by a tanh activation, followed by another fully connected layer with 100 units and a tanh activation. We utilize tanh activations at the end as per , who note that ReLu activations lead to difficulties in estimating the uncertainty. | This paper proposes a new method for neural network learning in online bandit settings by marginalizing over the last layer | 1,058 | scitldr |
Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation. Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. The aggregation is more robust and aligns better with the neural network than any single explanation method. Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural network and human decision processes. Despite the great success of neural networks, especially in classic visual recognition problems, explaining the networks' decisions remains an open research problem. This is due in part to the complexity of the visual recognition problem and in part to the basic 'ill-posedness' of the explanation task. This challenge is amplified by the fact that there is no agreement on what a sufficient explanation is and how to evaluate an explanation method. Many different explanation strategies and methods have been proposed. Focusing on visual explanations for individual decisions, most methods either use a backpropagation approach or aim to construct a simpler linear model with an intuitive explanation. The plethora of explanation approaches is a signature of the high-level epistemic uncertainty of the explanation task. This paper is motivated by a key insight in machine learning: Ensemble models can reduce both bias and variance compared to applying a single model. A related approach was pursued for functional visualization in neuroimaging. Here we for the first time explore the potential of aggregating explanations of individual visual decisions for reducing epistemic uncertainty in neural networks. We test the hypothesis that ensembles of multiple explanation methods are more robust than any single method. This idea is analyzed theoretically and evaluated empirically. We discuss the properties of the aggregate explanations and provide visual evidence that they combine features, hence are more complete and less biased than individual schemes. Based on this insight, we propose two ways to aggregate explanation methods, AGG-Mean and AGG-Var. In experiments on ImageNet, MNIST, and FashionMNIST, the aggregates identify relevant parts of the image more accurately than any single method. Second, we introduce IROF (Iterative Removal Of Features) as a new approach to quantitatively evaluate explanation methods without relying on human evaluation. We circumvent the problems of high correlation between neighboring pixels as well as the human bias that are present in current evaluation methods. The open problem of explainability is reflected in a large body of recent work. We focus on generating visual explanations for single samples. The first work in this direction was Saliency Maps (SM), which proposed backpropagating the output onto the input to gain an understanding of a neural network decision. The relevance for each input dimension is extracted by taking the gradient of the output w.r.t. the input. This idea was extended into Guided Backpropagation (GB) by applying ReLU non-linearities after each layer during the backpropagation. Compared to Saliency, this removes visual noise in the explanation. Grad-CAM (GC) is an explanation method developed for use with convolutional neural networks.
By backpropagating relevance through the dense layers and up-sampling the evidence for the convolutional part of the network, the method obtains coarse heatmaps that highlight relevant parts of the input image. Integrated Gradients (IG) sums up the gradients from linearly interpolated pictures between a baseline, e.g. a black image, and the actual image. SmoothGrad (SG) filters out noise from a basic saliency map by creating many samples of the original input with Gaussian noise. The final saliency map is the average over all samples. Finally, we also consider LIME. In contrast to the other methods, LIME is not based on backpropagation. Instead, it approximates the neural network with a linear model locally around the input to be explained. The coefficients of the linear model for the respective input dimensions give the importance of each dimension. Compared to the other methods this is much more computationally expensive, as it requires many passes through the neural network. The evaluation of explanation methods is a relatively recent topic with few systematic approaches. To our knowledge, the first quantitative approach evaluated an explanation method by flipping pixels to their opposite and comparing the decrease in output with the relevance attributed to the pixel by the explanation method. As the authors note, this only works for low-dimensional input. This work was later followed up on: by dividing high-dimensional images into squares, the follow-up method becomes feasible for high-dimensional inputs. Squares with high relevance (as measured by the explanation method) consecutively get replaced with noise sampled from the uniform distribution. The difference between the original output and the output for the degraded images indicates the quality of the explanation method. ROAR is another quantitative approach to evaluate explanation methods. For each explanation method, the relevance maps are extracted over the entire training set. The training set is degraded by setting different percentages of the pixels with the highest relevance to the mean, and the network is retrained. Each retrained network is evaluated on the test set. The accuracy on the test set decreases dependent on the percentage of pixels set to the mean. This requires retraining the same architecture multiple times for each explanation method, at a high computational cost. A different approach to evaluate explanation methods, called Sensitivity-n, is based on the notion that the decrease in output when a number of inputs are canceled out should be equal to the sum of their relevances. For a range of n (between 1 and the total number of inputs), a hundred subsets of the input are sampled. For each n, the Pearson Correlation Coefficient (PCC) between the decrease in output, when the subset of features is removed, and the sum of their relevances is reported. The result is a curve of the PCC dependent on the percentage of the input being removed. For a good explanation method, the PCC will decrease slowly.
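For reference, the two simplest attribution methods discussed above reduce to a few lines of PyTorch; summing absolute gradients over color channels is one common convention and an assumption of this sketch.

```python
import torch

def saliency_map(model, x, target_class):
    # SM: gradient of the target-class score with respect to the input image x (C, H, W)
    x = x.clone().requires_grad_(True)
    model(x.unsqueeze(0))[0, target_class].backward()
    return x.grad.abs().sum(dim=0)

def smoothgrad(model, x, target_class, n_samples=25, noise_std=0.1):
    # SG: average the saliency maps of noisy copies of the input
    acc = torch.zeros(x.shape[1:])
    for _ in range(n_samples):
        acc += saliency_map(model, x + noise_std * torch.randn_like(x), target_class)
    return acc / n_samples
```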
We assume a neural network F: X → y with X ∈ R m×m and a set of explanation methods {e j} J j=1 with e j: X, y, F → E with E ∈ R m×m. We write E j,n for the explanation obtained for X n with method e j and denote the mean aggregate explanation asē withĒ n = 1 J J j=1 E j,n. While we assume the input to be an image ∈ R m×m, this method is generalizable to inputs of other dimensionalities as well. To get a theoretical understanding of the benefit of aggregation, we hypothesize the existence of a'true' explanationÊ n. This allows us to quantify the error of an explanation method as the mean squared difference between the'true' explanation and an explanation procured by an explanation method, i.e. the MSE. For clarity we subsequently omit the notation for the neural network. We write the error of explanation method j on image X n as err j,n = ||E j,n −Ê n || 2 with is the MSE of the aggregate. The typical error of an explanation method is the mean error over all explanation methods With these definitions we can do a standard bias-variance decomposition . Accordingly we can show the error of the aggregate will be less that the typical error of explanation methods, A detailed calculation is given in appendix A.1. The error of the aggregate MSE(Ē) is less than the typical error of the participating methods. The difference -a'variance' term -represents the epistemic uncertainty and only vanishes if all methods produce identical maps. By taking the average over all available explanation methods, we reduce the variance of the explanation compared to using a single method. To obtain this average, we normalize all input heatmaps such that the relevance over all pixels sum up to one. This reflects our initial assumption that all individual explanation methods are equally good estimators. We refer to this approach as AGG-Mean. size' map by dividing the mean aggregate locally with its standard deviation . Intuitively, this will assign less relevance to segments with high disagreement between methods. For stability, we divide not directly by the local variance but add a constant to the estimate of the local variance. This can be interpreted as a smoothing regularizer or a priori information regarding epistemic and aleatoric uncertainties. We refer to this approach as AGG-Var. where σ(E j∈J,n) is the point-wise standard deviation over all explanations j ∈ J for X n In section 4 we will evaluate and compare AGG-Mean and AGG-Var against basic explanation methods. Figure 1: Quantitative evaluation with IROF: Relevant segments as identified by an explanation method get consecutively replaced by the mean colour over the entire dataset. The IROF score of an explanation method is the integrated decrease in the class score over the number of removed segments. Quantitative evaluation is a recurring problem with explainability methods. This is especially true for high-dimensional input, such as images, where important features are made up of locally highly correlated pixels. If the information in one pixel is lost, this will not change the overall feature and should therefore not in a changed output score. The relevance values of single pixels are not indicative of the feature's importance as a whole. We circumvent this problem by utilizing conventional image segmentation, a well-explored area. By first dividing the image into coherent segments, we avoid the interdependency between the inputs. 
Methodology We assume a neural network F: X → y with X ∈ R^{m×m} and a set of explanation methods {e_j}_{j=1}^J with e_j: (X, y, F) → E, where E ∈ R^{m×m}. Furthermore we partition each image X_n into a set of segments {S_n^l}_{l=1}^L using a given segmentation method, with s_{n,i,j}^l = 1 indicating that pixel x_{n,i,j} belongs to segment l. Computing the mean importance of each segment according to a given explanation method j, two segments can be directly compared against each other. By sorting the segments in decreasing order of importance according to the explanation method, we get a ranking of how relevant each segment of the image is. We use X_n^l to indicate X_n with the l segments with highest mean relevance replaced with the mean value. Taking F(X_n^l)_y repeatedly with increasing l ∈ {0, ..., L} results in a curve of the class score dependent on how many segments of the image are removed. Dividing this curve by F(X_n^0)_y normalizes the scores to be within [0, 1] and makes curves comparable between input samples and networks. If an explanation method works well, it will attribute high relevance to segments important for classification. As segments with high relevance are removed first, the score for the target class will go down faster. By computing the area over the curve (AOC) for the class score curve and averaging over a number of input samples, we can identify the method that identifies relevant areas of the image more reliably. For a good explanation method, the AOC will be higher. We refer to this evaluation method as the iterative removal of features (IROF). The IROF for a given explanation method e_j is expressed as IROF(e_j) = (1/N) Σ_{n=1}^N AOC( F(X_n^l)_y / F(X_n^0)_y ). This approach is a quantitative comparison of two or more explainability methods that does not rely on human evaluation or alignment between human and neural network reasoning. For each explanation method the workflow produces a single value, enabling convenient comparison between two or more explanation methods. If the AOC is higher, the explanation method captures more information about the neural network classification. IROF is dependent on having meaningful segments in the input, as natural images do. Dividing up text or non-natural images such as EEG into meaningful and independent segments does not have a natural solution and is left for future research. We first present empirical validation of our proposed evaluation technique IROF in section 4.2. Subsequently we evaluate the aggregation of explanation techniques against the vanilla techniques with IROF, Sensitivity-n and qualitative evaluation. In appendix A.6.1 we compare aggregated methods on a dataset of human-annotated heatmaps. We tested our method on five neural network architectures that were pre-trained on ImageNet: VGG19, Xception, Inception, ResNet50 and ResNet101. Additionally, we ran experiments on a CNN trained on the MNIST and FashionMNIST datasets. We compared the aggregation methods against Saliency (SM), Guided Backpropagation (GB), SmoothGrad (SG), Grad-CAM (GC) and Integrated Gradients (IG) to have a selection of attribution-based methods. Additionally we compared against LIME as a method that is not based on attribution but rather on local approximation. The aggregations are based on all attribution-based methods. Some of the methods result in positive and negative evidence. We only considered positive evidence for the ImageNet tasks to compare methods against each other.
To check that this does not corrupt the methods, we compared the methods that do contain negative against their filtered version and found negligible difference between the two versions of a method in the used metrics. For Agg-Mean we introduced an additional parameter, to the divisor. In our experiments we set to be ten times the mean σ over the entire dataset. A good evaluation method should be able to reject the null hypothesis (a given explanation method is no better than random choice) with high confidence. We use this as motivation to evaluate and compare IROF by calculating the paired t-test of an explanation method versus random guessing. This is done with multiple explanation methods and networks, to reduce the impact of the explanation method. We compare IROF and pixel removal with mean value and black as a replacement value respectively. Additionally we compare against as explained in section 2. we set the 10% most relevant segments to the mean value over the dataset. For pixel removal, we set the equivalent number of pixels to the mean value. The percentage of segments or pixels being removed was chosen ad hoc. If the difference in degradation between random choice and the explanation method is high, the explanation method reports meaningful information. Since we compare the same explanation method on the same neural network with different evaluation methods, the p-values only contain information about how meaningful the evaluation method is. We computed IROF and pixel removal with black or mean replacement values and compared the p-value changes dependent on the number of samples. Results are shown in fig. 2 (extended in appendix A.6). In table 1 we provide for forty images in tabular form for an easier overview (other methods in appendix A.6). On forty images, all evaluation methods produce p-values below 0.05. Thus, all evaluation methods can distinguish between random guessing and an explanation method. However, IROF can reject the null hypothesis (the explanation method does not contain any information), with much higher confidence with the same number of samples for any configuration. We can conclude that IROF is more sensitive to the explanation method than pixel removal or , making it the better choice to quantitatively evaluate an explanation method. the IROF as described in section 3.2. We include two non-informative baselines. Random randomly chooses segments to remove. Sobel is a sobel edge detector. Neither of them contain information about the neural network. All explanation methods have a lower IROF than the random baseline on all architectures tested, indicating that all methods contain information about the image classification. Except for LIME, all methods also surpass the stronger baseline, SOBEL. The ranking of unaggregated methods varies considerably between architectures. This variance indicates that the accuracy of an explanation method depends on the complexity and structure of the neural network architecture. For all architectures AGG-Mean and AGG-Var have a lower IROF score than any non-aggregated method. For ResNet101 the difference between the best unaggregated method and the aggregated methods is especially large. We hypothesize, that the benefit of aggregating explanation methods increases for more complex neural network with large epistemic uncertainty on the explanation. We can empirically confirm that aggregating methods improves over unaggregated methods and more reliably identifies parts of the images that are relevant for classification. 
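For reference, the IROF procedure evaluated in this section can be sketched as follows for a single image. This is a simplified sketch under our own assumptions: it uses scikit-image's SLIC for segmentation, a predict function returning class probabilities, the per-image mean colour instead of the dataset mean as the replacement value, and a simple Riemann-sum approximation of the area over the curve.

```python
import numpy as np
from skimage.segmentation import slic

def irof_single(predict, image, heatmap, target, n_segments=300):
    # Segment the image and score each segment by its mean relevance.
    segments = slic(image, n_segments=n_segments)
    order = sorted(np.unique(segments),
                   key=lambda l: heatmap[segments == l].mean(),
                   reverse=True)

    base = predict(image)[target]
    degraded = image.copy()
    scores = [1.0]
    # Remove segments in decreasing order of relevance and track the
    # normalized class score after each removal.
    for l in order:
        degraded[segments == l] = image.mean(axis=(0, 1))
        scores.append(predict(degraded)[target] / base)
    # Area over the curve: higher means the relevance was better placed.
    return 1.0 - np.mean(scores)
```

Averaging this value over a set of test images gives the per-method IROF score reported in the tables; the random and Sobel baselines are obtained by replacing heatmap with random values or a Sobel edge map.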
We show heatmaps for individual examination for each of the methods in fig. 3 and compare qualitatively (large version of fig. 3 in appendix A.6.2). While visual evaluation of explanations for neural networks can be misleading, there is no better way available of checking whether any given explanation method agrees with intuitive human understanding . Additionally, we compute alignment between human-annotated images and the explanation methods in??, using the human benchmark for evaluation introduced in. AGG-Var combines features of the aggregrated methods by attributing relevance on the classified object as a whole, but considering smaller details such as the face of an animal as more relevant. It is a combination of the detail-oriented and shape-oriented methods. Compared to SmoothGrad, which concentrates on one isolated feature, the relevance is more evenly distributed and aligned with our human intuition that classification takes context into account and does not rely on e.g. only the beak of a bird. We can conclude that combining explainability methods provides a meaningful visual improvement over single methods. To quantitatively compare explanation methods on a low-dimensional input we use Sensitivity-n . The exact procedure is described in section 2. We compare on MNIST and FashionMNIST , two low-dimensional dataset with a basic CNN 2 (architecture in appendix). We follow the procedure suggested in and test on a hundred randomly sampled subsets for 1000 randomly sampled test images. The number of pixels in the set n is chosen at fifteen points logarithmically spaced between 10 and 780 pixels. As described in section 2, for a range of n (between 1 and the total number of inputs) a hundred subsets of the input features are removed. For each n, the average Pearson Correlation Coefficient (PCC) between the decrease in output and the relevance of the removed output features is reported. The is a curve of the PCC dependent on the removed percentage. We show in fig. 4. AGG-Mean and AGG-Var perform in range of the best methods. For the CNN trained on FashionMNIST, AGG-Mean and AGG-Var perform better than unaggregated methods. For the CNN trained on MNIST, Guided Backpropagation and AGG-Mean perform best. For both networks (trained on FashionMNIST and MNIST respectively), SmoothGrad and GradCAM perform considerably worse than the other methods. In summary, aggregation seems to not be as beneficial when applied to a low-dimensional, "easier" tasks such as MNIST as it is for ImageNet, performing in range of the best unaggregated method. We hypothesize that this is because there is less epistemic uncertainty in explanations for less complex tasks and network architectures. In this work we gave a simple proof that aggregating explanation methods will perform at least as good as the typical individual method. In practice, we found evidence that aggregating methods outperforms any single method. We found this evidence substantiated across quantitative metrics. While our show that different vanilla explanation methods perform best on different network architectures, an aggregation supersedes all of them on any given architecture. Additionally we proposed a novel way of evaluation for explanation methods that circumvents the problem of high correlation between pixels and does not rely on visual inspection by humans, an inherently misleading metric. All currently available explanation methods have weaknesses that are inherent to the approach and include significant noise in the heatmap (; ;). 
A natural way to mitigate this issue and reduce noise is to combine multiple explanation methods. Ensemble methods have been used for a long time to reduce the variance and bias of machine learning models. We apply the same idea to explanation methods and build an ensemble of explanation methods. We assume a neural network F: X → y with X ∈ R mxm and a set of explanation methods {e j} J j=1 with e j: X, y, F → E with E ∈ R mxm. We write E j,n for the explanation obtained for X n with method e j and denote the mean aggregate explanation asē withĒ n = 1 J J j=1 E j,n. While we assume the input to be an image ∈ R mxm, this method is generalizable to inputs of other dimensions as well. We define the error of an explanation method as the mean squared difference between a hypothetical'true' explanation and an explanation procured by the explanation method, i.e. the MSE. For this definition we assume the existence of the hypothetical'true' explanationÊ n for image X n. For clarity we subsequently omit the notation for the neural network. We write the error of explanation method j on image X n as err j,n = ||E j,n −Ê n || 2 with and MSE(Ē) = 1 N n ||Ē n −Ê n || 2 is the MSE of the aggregate. The typical error of an explanation method is represented by the mean The error of the aggregate MSE(Ē) is less than the typical error of the participating methods. The difference -a'variance' term -represents the epistemic uncertainty and only vanishes if all methods produce identical maps. In section 3.1 we showed theoretically that the average MSE of two or more explanation methods will always be higher than the error of the averaged of those methods. Empirically, we test this for IROF with combinations of any two methods for ResNet101 and show the in fig. 5. For any two methods, the matrix shows the ratio between the aggregate method IROF and the average IROF of the aggregated methods. The aggregate IROF is always lower, confirming our theoretical . An important part of our method is the choice of explanation methods to be included in the aggregation. In our work we focus on backpropagation-based methods, since they tend to be computationally cheap. In our opinion this makes for a more realistic use case, not only for human but also for machine post processing. In contrast, locality based methods such as LIME or the method by require many forward passes, since their methods essentially "learn" what parts of the input are relevant. We included LIME in our experiments to have a not-backpropagation based method included. A.5.1 GENERAL We use SLIC for image segmentation due to availability and quick run time . Preliminary experiments with Quickshift showed similar . SLIC was chosen over Quickshift due to the quicker run time. The number of segments was set to 300 ad hoc. fig. 1 shows the same procedure with 100 segments for the sake of clarity. For AGG-Var, we add a constant to the denominator. We set this constant to 10 times the mean std, a value chosen empirically after trying values in the range of times the mean. Evaluations were run with a set random seed for reproducibility. Stddev were reported either for each individual or if they were non-significant in the caption to avoid cluttering the . Since our work does not include computationally heavy training, we did not record the exact computing infrastructure. The training for both models was equivalent. 
The architecture was as follows:
We report p-values for evaluating with 50 images on ResNet101 in the manner described in section 4.2 in tabular form to provide a clear overview. We want to quantify whether an explanation method agrees with human judgement on which parts of an image should be important. While human annotation is expensive, there exists a previously introduced benchmark for human evaluation. The benchmark includes ninety images of categories in the ImageNet Challenge (ten images were excluded due to the category not being in the ImageNet challenge) and provides annotations of relevant segments that ten human test subjects found important. Example images are shown in fig. 8.
Figure 8: Example images from the benchmark with human-annotated overlays.
While human evaluation is not a precise measure, we still expect some correlation between neural network and human judgement. To test the alignment, we calculate the cosine similarity between each explanation and the human annotation, similarity(e_j) = (1/N) Σ_n ⟨E_{j,n}, A_n⟩ / (||E_{j,n}|| ||A_n||), where A_n denotes the human-annotated relevance mask for X_n. Since the images in this dataset are 224×224 pixels, we only compute the cosine similarity for the network architectures where pretrained networks with this input size were available. We see that AGG-Mean and AGG-Var perform on par with the best methods (SmoothGrad and GradCAM). While the aggregated methods perform better than the average explanation method, they do not surpass the best method. When we combine the two best-performing single methods, SmoothGrad and GradCAM, we surpass each individual method. We hypothesize that this is because the epistemic uncertainty is reduced by the aggregate.
| We show in theory and in practice that combining multiple explanation methods for DNN benefits the explanation. | 1,059 | scitldr |
We present SOSELETO (SOurce SELEction for Target Optimization), a new method for exploiting a source dataset to solve a classification problem on a target dataset. SOSELETO is based on the following simple intuition: some source examples are more informative than others for the target problem. To capture this intuition, source samples are each given weights; these weights are solved for jointly with the source and target classification problems via a bilevel optimization scheme. The target therefore gets to choose the source samples which are most informative for its own classification task. Furthermore, the bilevel nature of the optimization acts as a kind of regularization on the target, mitigating overfitting. SOSELETO may be applied to both classic transfer learning, as well as the problem of training on datasets with noisy labels; we show state of the art on both of these problems. Deep learning has made possible many remarkable successes, leading to state of the art algorithms in computer vision, speech and audio, and natural language processing. A key ingredient in this success has been the availability of large datasets. While such datasets are common in certain settings, in other scenarios this is not true. Examples of the latter include "specialist" scenarios, for instance a dataset which is entirely composed of different species of tree; and medical imaging, in which datasets on the order of hundreds to a thousand are common. A natural question is then how one may apply the techniques of deep learning within these relatively data-poor regimes. A standard approach involves the concept of transfer learning: one uses knowledge gleaned from the source (data-rich regime), and transfers it over to the target (data-poor regime). One of the most common versions of this approach involves a two-stage technique. In the first stage, a network is trained on the source classification task; in the second stage, this network is adapted to the target classification task. There are two variants for this second stage. In feature extraction (e.g.), only the parameters of the last layer (i.e. the classifier) are allowed to adapt to the target classification task; whereas in fine-tuning (e.g. BID12), the parameters of all of the network layers (i.e. both the features/representation and the classifier) are allowed to adapt. The idea is that by pre-training the network on the source data, a useful feature representation may be learned, which may then be recycled -either partially or completely -for the target regime. This two-stage approach has been quite popular, and works reasonably well on a variety of applications. Despite this success, we claim that the two-stage approach misses an essential insight: some source examples are more informative than others for the target classification problem. For example, if the source is a large set of natural images and the target consists exclusively of cars, then we might expect that source images of cars, trucks, and motorcycles might be more relevant for the target task than, say, spoons. However, this example is merely illustrative; in practice, the source and target datasets may have no overlapping classes at all. As a , we don't know a priori which source examples will be important. Thus, we propose to learn this source filtering as part of an end-to-end training process. The ing algorithm is SOSELETO: SOurce SELEction for Target Optimization. Each training sample in the source dataset is given a weight, corresponding to how important it is. 
The shared source/target representation is then optimized by means of a bilevel optimization. In the interior level, the source minimizes its classification loss with respect to the representation parameters, for fixed values of the sample weights. In the exterior level, the target minimizes its classification loss with respect to both the source sample weights and its own classification layer. The sample weights implicitly control the representation through the interior level. The target therefore gets to choose the source samples which are most informative for its own classification task. Furthermore, the bilevel nature of the optimization acts as a kind of regularization on the target, mitigating overfitting, as the target does not directly control the representation parameters. Finally, note that the entire processtraining of the shared representation, target classifier, and source weights -happens simultaneously. We pause here to note that the general philosophy behind SOSELETO is related to the literature on instance reweighting for domain adaptation, see for example BID32. However, there is a crucial difference between SOSELETO and this literature, which is related to the difference between domain adaptation and more general transfer learning. Domain adaptation is concerned with the situation in which there is either full overlap between the source and target label sets; or in some more recent work BID43, partial but significant overlap. Transfer learning, by contrast, refers to the more general situation in which there may be zero overlap between label sets, or possibly very minimal overlap. (For example, if the source consists of natural images and the target of medical images.) The instance reweighting literature is concerned with domain adaptation; the techniques are therefore relevant to the case in which source and target have the same labels. SOSELETO is quite different: it makes no such assumptions, and is therefore a more general approach which can be applied to both "pure" transfer learning, in which there is no overlap between source and target label sets, as well as domain adaptation. (Note also a further distinction with domain adaptation: the target is often -though not always -taken to be unlabelled in domain adaptation. This is not the case for our setting of transfer learning.)Above, we have illustrated how SOSELETO may be applied to the problem of transfer learning. However, the same algorithm can be applied to the problem of training with noisy labels. Concretely, we assume that there is a large noisy dataset, as well as a much smaller clean dataset; the latter can be constructed cheaply through careful hand-labelling, given its small size. Then if we take the source to be the large noisy dataset, and the target to the small clean dataset, SOSELETO can be applied to the problem. The algorithm will assign high weights to samples with correct labels and low weights to those with incorrect labels, thereby implicitly denoising the source, and allowing for an accurate classifier to be trained. The remainder of the paper is organized as follows. Section 2 presents related work. Section 3 presents the SOSELETO algorithm, deriving descent equations as well as convergence properties of the bilevel optimization. Section 4 presents of experiments on both transfer learning as well as training with noisy labels. Section 5 concludes. 
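For reference, the bilevel structure just described can be written compactly as follows. This is our own summary, using the notation introduced formally in Section 3: θ are the shared representation parameters, φ^s and φ^t the source and target classifiers, α the per-example source weights, and ℓ the per-example classification loss.

```latex
\begin{aligned}
&\min_{\alpha \in [0,1]^{n_s},\;\phi^t}\;
  \mathcal{L}_T\!\left(\theta^*(\alpha),\,\phi^t\right)
  && \text{(exterior level: target loss)}\\
&\text{s.t.}\quad
  \left(\theta^*(\alpha),\,\phi^{s*}(\alpha)\right) \in
  \arg\min_{\theta,\,\phi^s}\;
  \frac{1}{n_s}\sum_{j=1}^{n_s} \alpha_j\,
  \ell\!\left(F(x_j^s;\theta,\phi^s),\,y_j^s\right)
  && \text{(interior level: weighted source loss)}
\end{aligned}
```

As described in Section 3, the interior argmin is replaced in practice by a single gradient step, which is what makes joint end-to-end training feasible.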
Transfer learning As described in Section 1, the most common techniques for transfer learning are feature extraction and fine-tuning, see for example and BID12, respectively. An older survey of transfer learning techniques may be found in BID25. Domain adaptation BID28 is concerned with transferring knowledge when the source and target classes are the same. Earlier techniques aligned the source and target via matching of feature space statistics; BID19; subsequent work used adversarial methods to improve the domain adaptation performance BID10; BID36;.In this paper, we are more interested in transfer learning where the source and target classes are different. A series of recent papers BID20; BID26 BID4 b) address domain adaptation that is closer to our setting. In particular, BID5 examines "partial transfer learning", the case in which there is partial overlap between source and target classes (particularly when the target classes are a subset of the source). This setting is also dealt with in BID3. BID11 examine the scenario where the source and target classes are completely different. Similar to SOSELETO, they propose selecting a portion of the source dataset. However, the selection is not performed in an end-to-end fashion, as in SOSELETO; rather, selection is performed prior to training, by finding source examples which are similar to the target dataset, where similarity is measured by using filter bank descriptors. Another recent work of interest is, which focuses on a slightly different scenario: the target consists of a very small number of labelled examples (i.e. the few-shot regime), but a very large number of unlabelled examples. Training is achieved via an adversarial loss to align the source and the target representations, and a special entropy-based loss for the unlabelled part of the data. Instance reweighting for domain adaptation is a well studied technique, demonstrated e.g. in Covariate Shift methods Shimodaira FORMULA6; BID31. In these works, the source and target label spaces are the same. We, however, allow for different -even entirely nonoverlapping -classes in the source and target. Crucially, we do not make assumptions on the similarity of the distributions nor do we explicitly optimize for it. The same distinction applies for the recent work of, and for the partial overlap assumption of BID43. In addition, these two works propose an unsupervised approach, whereas our proposed method is completely supervised. Covariate shift determines the weighting for an instance as the ratio of its probability of being in the training set and being in the prediction set. Consequently, the feature vectors are used in re-weighting, regardless of their labels. This renders covariate shift unsuitable for handling noisy labels. Our re-weighing scheme is instead gradient-based and as we show next performs well in this task. Learning with noisy labels Classification with noisy labels is a longstanding problem in the machine learning literature, see the review paper BID9 and the references therein. Within the realm of deep learning, it has been observed that with sufficiently large data, learning with label noise -without modification to the learning algorithms -actually leads to reasonably high accuracy BID14; BID34.The setting that is of greatest interest to us is when the large noisy dataset is accompanied by a small clean dataset. 
BID33 introduce an additional noise layer into the CNN which attempts to adapt the output to align with the noisy label distribution; the parameters of this layer are also learned. BID39 use a more general noise model, in which the clean label, noisy label, noise type, and image are jointly specified by a probabilistic graphical model. Both the clean label and the type of noise must be inferred given the image, in this case by two separate CNNs. consider the same setting, but with additional information in the form of a knowledge graph on labels. Other recent work on label noise includes BID27, which shows that adding many copies of an image with noisy labels to a clean dataset barely dents performance; , in which two separate networks are simultaneously trained, and a sample only contributes to the gradient descent step if there is disagreement between the networks (if there is agreement, that probably means the label is wrong); and BID8, which analyzes theoretically the situations in which CNNs are more and less resistant to noise. A pair of papers BID18; combine ideas of learning with label noise with instance reweighting. Bilevel optimization Bilevel optimization problems have a nested structure: the interior level (sometimes called the lower level) is a standard optimization problem; and the exterior level (sometimes called the upper level) is an optimization problem where the objective is a function of the optimal arguments from the interior level. A branch of mathematical programming, bilevel optimization has been extensively studied within this community BID6;. For recent developments, readers are referred to the review paper BID30. Bilevel optimization has been used in both machine learning, e.g. BID1 and computer vision, e.g. BID24. We have two datasets. The source set is the data-rich set, on which we can learn extensively. It is denoted by {(x DISPLAYFORM0, where as usual x s i is the i th source training image, and y s i is its corresponding label. The second dataset is the target set, which is data-poor; but it is this set which ultimately interests us. That is, the goal in the end is to learn a classifier on the target set, and the source set is only useful insofar as it helps in achieving this goal. The target set is denoted DISPLAYFORM1, and it is assumed that is much smaller than the source set, i.e. n t n s .Our goal is to exploit the source set to solve the target classification problem. The key insight is that not all source examples contribute equally useful information in regards to the target problem. For example, suppose that the source set consists of a broad collection of natural images; whereas the target set consists exclusively of various breeds of dog. We would assume that any images of dogs in the source set would help in the target classification task; images of wolves might also help, as might cats. Further afield it might be possible that objects with similar textures as dog fur might be useful, such as rugs. On the flip side, it is probably less likely that images of airplanes and beaches will be relevant (though not impossible). However, the idea is not to come with any preconceived notions (semantic or otherwise) as to which source images will help; rather, the goal is to let the algorithm choose the relevant source images, in an end-to-end fashion. We assume that the source and target classifier networks have the same architecture, but different network parameters. 
In particular, the architecture is given by DISPLAYFORM2 where φ is last layer, or possibly last few layers, and θ constitutes all of the remaining layers. We will refer to φ colloquially as the "classifier", and to θ as the "features" or "representation". (This is consistent with the usage in related papers, see for example .) Now, the source and target will share features, but not classifiers; that is, the source network will be given by F (x; θ, φ s), whereas the target network will be F (x; θ, φ t). The features θ are shared between the two, and this is what allows for transfer learning. The weighted source loss is given by DISPLAYFORM3 where α j ∈ is a weight assigned to each source training example; and (·, ·) is a per example classification loss, in this case cross-entropy. The use of the weights α j will allow us to decide which source images are most relevant for the target classification task. The target loss is standard: DISPLAYFORM4 As noted in Section 1, this formulation allows us to address both the transfer learning problem as well as learning with label noise. In the former case, the source and target may have non-overlapping label spaces; high weights will indicate which source examples have relevant knowledge for the target classification task. In the latter case, the source is the noisy dataset, the target is the clean dataset, and they share a classifier (i.e. φ t = φ s) as well as a label space; high weights will indicate which source examples do not have label noise, and are therefore reliable. In either case, the target is much smaller than the source. The question now becomes: how can we combine the source and target losses into a single optimization problem? A simple idea is to create a weighted sum of source and target losses. Unfortunately, issues are likely to arise regardless of the weight chosen. If the target is weighted equally to the source, then overfitting may likely given the small size of the target. On the other hand, if the weights are proportional to the size of the two sets, then the source will simply drown out the target. A more promising idea is to use bilevel optimization. Specifically, in the interior level we find the optimal features and source classifier as a function of the weights α, by minimizing the source loss: DISPLAYFORM5 In the exterior level, we minimize the target loss, but only through access to the source weights; that is, we solve: min DISPLAYFORM6 Why might we expect this bilevel formulation to succeed? The key is that the target only has access to the features in an indirect manner, by controlling which source examples are included in the source classification problem. Thus, the target can influence the features chosen, but only in this roundabout way. This serves as an extra form of regularization, mitigating overfitting, which is the main threat when dealing with a small set such as the target. Implementing the bilevel optimization is rendered somewhat challenging due to the need to solve the optimization problem in the interior level. Note that this optimization problem must be solved at every point in time; thus, if we choose to solve the optimization for the exterior level via gradient descent, we will need to solve the interior level optimization at each iteration of the gradient descent. This is clearly inefficient. Furthermore, it is counter to the standard deep learning practice of taking small steps which improve the loss. Thus, we instead propose the following procedure. 
At a given iteration, we will take a gradient descent step for the interior level problem: DISPLAYFORM7 where m is the iteration number; λ p is the learning rate (where the subscript p stands for "parameters", to distinguish it from a second learning rate for α, to appear shortly); and Q(θ, φ s) is a matrix whose j th column is given by DISPLAYFORM8 Thus, Equation leads to an improvement in the features θ, for a fixed set of source weights α. Note that there will be an identical descent equation for the classifier φ s, which we omit for clarity. Given this iterative version of the interior level of the bilevel optimization, we may now turn to the exterior level. Plugging Equation into Equation FORMULA6 gives the following problem: DISPLAYFORM9 DISPLAYFORM10 where we have suppressed Q's arguments for readability. We can then take a gradient descent step of this equation, yielding: DISPLAYFORM11 where in the final line, we have made use of the fact that λ p is small. Of course, there will also be a descent equation for the classifier φ t. The ing update scheme is quite intuitive: source example weights are update according to how well they align with the target aggregated gradient. We have not yet dealt with the weight constraint. That is, we would like to explicitly require that each α j ∈. We may achieve this by requiring α j = σ(β j) where the new variable β j ∈ R, and σ: R → is a sigmoid-type function. As shown in Appendix A, for a particular piecewise linear sigmoid function, replacing the Update Equation with a corresponding update equation for β is equivalent to modifying Equation to read DISPLAYFORM12 where CLIP clips the values below 0 to be 0; and above 1 to be 1. FORMULA7 and FORMULA12, along with the descent equations for the source and target classifiers φ s and φ t. As usual, the whole operation is done on a mini-batch basis, rather than using the entire set; note that if processing is done in parallel, then source minibatches are taken to be non-overlapping, so as to avoid conflicts in the weight updates. SOSELETO is summarized in Algorithm 1. Note that the target derivatives ∂L t /∂θ and ∂L t /∂φ t are evaluated over a target mini-batch; we suppress this for clarity. In terms of time-complexity, we note that each iteration requires both a source batch and a target batch; assuming identical batch sizes, this means that SOSELETO requires about twice the time as the ordinary source classification problem. Regarding space-complexity, in addition to the ordinary network parameters we need to store the source weights α. Thus, the additional relative spacecomplexity required is the ratio of the source dataset size to the number of network parameters. This is obviously problem and architecture dependent; a typical number might be given by taking the source dataset to be Imagenet ILSVRC-2012 (size 1.2M) and the architecture to be ResNeXt-101 BID40 (size 44.3M parameters), yielding a relative space increase of about 3%.Convergence properties SOSELETO is only an approximation to the solution of a bilevel optimization problem. As a , it is not entirely clear whether it will even converge. In Appendix B, we demonstrate a set of sufficient conditions for SOSELETO to converge to a local minimum of the target loss L t. We briefly discuss some implementation details. In all experiments, we use the SGD optimizer without learning rate decay, and we use λ α = 1. 
We initialize the α-values to be 1, and in practice clip them to be in the slightly expanded range [0, 1.1]; this allows more relevant source points some room to grow. Other settings are experiment specific, and are discussed in the relevant sections. To illustrate how SOSELETO works on the problem of learning with noisy labels, we begin with a synthetic experiment, see FIG0. The setting is straightforward: the source dataset consists of 500 points which lie in R 2. There are two labels / classes, and the ideal separator between the classes is the y-axis. However, of the 500 points, 100 are corrupted: that is, they lie on the wrong side of the separator. This is shown in FIG0, in which one class is shown as white triangles and the second as black pluses. The target dataset is a set of 50 points, which are "clean", in the sense that they lie on the correct sides of the separator. (For the sake of simplicity, the target set is not illustrated.) SOSELETO is run for 100 epochs. In FIG0 and 1(c), we choose a threshold of 0.1 on the weights α, and colour the points accordingly. In particular, in FIG0 (b) the clean (i.e. correctly labelled) instances which are above the threshold are labelled in green, while those below the threshold are labelled in red; as can be seen, all of the clean points lie above the threshold for this choice of threshold, meaning that SOSELETO has correctly identified all of the clean points. In FIG0 (c), the noisy (i.e. incorrectly labelled) instances which are below the threshold are labelled in green; and those above the threshold are labelled in red. In this case, SOSELETO correctly identifies most of these noisy labels by assigning them small weights (below 0.1); in fact, 92 out of 100 points are assigned such small weights. The remaining 8 points, those shown in red, are all near the separator, and it is therefore not very surprising that SOSELETO mislabels them. All told, using this particular threshold the algorithm correctly accounts for 492 out of 500 points, i.e. 98.4%. FIG0. In FIG0 (e), a plot is shown of mean weight vs. training epoch for clean instances and noisy instances; the width of each plot is the 95% confidence interval of the weights of that type. All weights are initialized at 0.5; after 100 epochs, the clean instances have a mean weight of about 0.8, whereas the noisy instances have a mean weight of about 0.05. The evolution is exactly as one would expect. FIG0 (e) examines the role of the threshold, chose as 0.1 in the above discussion; although 0.1 is a good choice in this case, the good behaviour is fairly robust to choices in the range of 0.1 to 0.4. We now turn to a real-world setting of the problem of learning with label noise. We use a noisy version of CIFAR-10 , following the settings used in BID33; BID39. In particular, an overall noise level is selected. Based on this, a label confusion matrix is chosen such that the diagonal entries of the matrix are equal to one minus the noise level, and the off-diagonals are chosen randomly (while maintaining the matrix's stochasticity). Noisy labels are then sampled according to this confusion matrix. We run experiments for various overall noise levels. The target consists of a small clean dataset. CIFAR-10's train set consists of 50K images; of this 50K, both BID33; BID39 set aside 10K clean examples for pre-training, a necessary step in both of these algorithms. In contrast, we use a smaller clean dataset of half the size, i.e. 5K examples while the rest of the 45K samples are noisy. 
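Before the comparison, the joint update of Section 3 can be summarized in a short PyTorch-style sketch, as it would be applied in this noisy-label setting (where the source and target share a classifier, so src_head and tgt_head would be the same module). This is a simplified sketch under our own assumptions: the names (features, src_head, tgt_head, alpha, src_idx) are ours, the per-example gradient alignment is obtained with a double-backward trick that is an implementation choice on our part, and the sign of the α update follows the verbal description above ("weights are updated according to how well source gradients align with the aggregated target gradient") rather than the exact published equations.

```python
import torch
import torch.nn.functional as F

def soseleto_step(features, src_head, tgt_head, alpha,
                  xs, ys, src_idx, xt, yt,
                  lambda_p=1e-4, lambda_alpha=1.0):
    # One joint update of the shared representation theta (= features),
    # the classifiers phi_s / phi_t and the per-example source weights alpha.
    theta = list(features.parameters())
    phi_t = list(tgt_head.parameters())

    # Target loss; gradients w.r.t. theta (for the alpha update) and phi_t.
    tgt_loss = F.cross_entropy(tgt_head(features(xt)), yt)
    grads_t = torch.autograd.grad(tgt_loss, theta + phi_t)
    g_t, g_phi_t = grads_t[:len(theta)], grads_t[len(theta):]

    # Weighted source loss; alpha_b is a leaf copy of the batch weights so
    # that the alignment term below can be differentiated w.r.t. them.
    alpha_b = alpha[src_idx].clone().requires_grad_(True)
    per_example = F.cross_entropy(src_head(features(xs)), ys, reduction="none")
    src_loss = (alpha_b * per_example).mean()
    src_params = theta + list(src_head.parameters())
    grads_s = torch.autograd.grad(src_loss, src_params, create_graph=True)

    # d(alignment)/d(alpha_j) = inner product of example j's source gradient
    # with the target gradient g_t, obtained via double backward.
    align = sum((gs * gt).sum() for gs, gt in zip(grads_s[:len(theta)], g_t))
    d_alpha = torch.autograd.grad(align, alpha_b)[0]

    with torch.no_grad():
        # Weights grow for source examples whose gradients also reduce the
        # target loss; clip to the slightly expanded range [0, 1.1].
        alpha[src_idx] = (alpha[src_idx]
                          + lambda_alpha * lambda_p * d_alpha).clamp(0.0, 1.1)
        # Interior step: theta and phi_s descend the weighted source loss.
        for p, g in zip(src_params, grads_s):
            p -= lambda_p * g
        # Exterior step for phi_t only; the target never updates theta directly.
        for p, g in zip(phi_t, g_phi_t):
            p -= lambda_p * g
```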
We compare our to the two state of the art methods BID33; BID39, as they both address the same setting as we do -the large noisy dataset is accompanied by a small clean dataset, with no extra side-information available. In addition, we compare with the baseline of simply training on the noisy labels without modification. In all cases, Caffes CIFAR-10 Quick cif architecture has been used. For SOSELETO, we use the following settings: λ p = 10 −4, the target batch-size is 32, and the source batch-size is 256. We use a larger source batch-size to enable more α-values to be affected quickly. Results are shown in TAB1 for three different overall noise levels, 30%, 40%, and 50%. Performance is reported for CIFAR-10's test set, which is of size 10K. (Note that the competitors' performance numbers are taken from BID39 .) SOSELETO achieves state of the art on all three noise levels, with considerably better performance than both BID33 and BID39: between 2.6% to 3.2% absolute improvement. Furthermore, it does so in each case with only half of the clean samples used in BID33 BID39.We perform further analysis by examining the α-values that SOSELETO chooses on convergence, see Figure 4.2. To visualize the , we imagine thresholding the training samples in the source set on the basis of their α-values; we only keep those samples with α greater than a given threshold. By increasing the threshold, we both reduce the total number of samples available, as well as change the effective noise level, which is the fraction of remaining samples which have incorrect labels. We may therefore plot these two quantities against each other, as shown in Figure 4.2; we show three plots, one for each noise level. Looking at these plots, we see for example that for the 30% noise level, if we take the half of the training samples with the highest α-values, we are left with only about 4% which have incorrect labels. We can therefore see that SOSELETO has effectively filtered out the incorrect labels in this instance. For the 40% and 50% noise levels, the corresponding numbers are about 10% and 20% incorrect labels; while not as effective in the 30% noise level, SOSELETO is still operating as designed. Further evidence for this is provided by the large slopes of all three curves on the righthand side of the graph. We now examine the performance of SOSELETO on a transfer learning task. In order to provide a challenging setting, we choose to (a) use source and target sets with disjoint label sets, and (b) use a very small target set. In particular, the source dataset is chosen to the subset of Google Street View House Numbers (SVHN) BID23 corresponding to digits 0-4. SVHN's train set is of size 73,257 images, with about half of those belonging to the digits 0-4. The target dataset is a very small subset of MNIST BID16 corresponding to digits 5-9. While MNIST's train set is of size 60K, with 30K corresponding to digits 5-9, we use very small subsets: either 20 or 25 images, with equal numbers sampled from each class (4 and 5, respectively). Thus, as mentioned, there is no overlap between source and target classes, making it a true transfer learning (rather than domain adaptation) problem; and the small target set size adds further challenge. Furthermore, this task has already been examined in.We compare our with the following techniques. 
Target only, which indicates training on just the target set; standard fine-tuning; Matching Nets BID38, a few-shot technique which is relevant given the small target size; fine-tuned Matching Nets, in which the previous is then fine-tuned on the target set; and two variants of the Label Efficient Learning technique -one which includes fine-tuning plus a domain adversarial loss, and the other the full technique presented in. Note that besides the target only and fine-tuning approaches, all other approaches depend on unlabelled target data. Specifically, they use all of the remaining MNIST 5-9 examples -about 30,000 -in order to aid in transfer learning. SOSELETO, by contrast, does not make use of any of this data. For each of the above methods, the simple LeNet architecture BID16 was used. For SOSELETO, we use the following settings: λ p = 10 −2, the source batch-size is 32, and the target batch-size is 10 (it is chosen to be small since the target itself is very small). Additionally, the SVHN images were resized to 28 × 28, to match the MNIST size. The performance of the various methods is shown in TAB2, and is reported for MNIST's test set which is of size 10K. We have divided TAB2 into two parts: those techniques which use the 30K examples of unlabelled data, and those which do not. SOSELETO has superior performance to all of the techniques which do not use unlabelled data. Furthermore, SOSELETO has superior performance to all of the techniques which do use unlabelled data, except the Label Efficient technique. It is noteworthy in particular that SOSELETO outperforms the few-shot techniques, despite not being designed to deal with such small amounts of data. In Appendix C we further analyze which SVHN instances are considered more useful than others by SOSELETO, by transfering all of SVHN classes to MNSIT 5-9. Two-stage SOSELETO Finally, we note that although SOSELETO is not designed to use unlabelled data, one may do so using the following two-stage procedure. Stage 1: run SOSELETO as described above. Stage 2: use the learned SOSELETO classifier to classify the unlabelled data. This will now constitute a dataset with noisy labels, and SOSELETO can now be run in the mode of training with label noise, where the noisily labelled unsupervised data is now the source, and the target remains the same small clean set. In the case of n t = 25, this procedure elevates the accuracy to above 92%. We have presented SOSELETO, a technique for exploiting a source dataset to learn a target classification task. This exploitation takes the form of joint training through bilevel optimization, in which the source loss is weighted by sample, and is optimized with respect to the network parameters; while the target loss is optimized with respect to these weights and its own classifier. We have derived an efficient algorithm for performing this bilevel optimization, through joint descent in the network parameters and the source weights, and have analyzed the algorithm's convergence properties. We have empirically shown the effectiveness of the algorithm on both learning with label noise, as well as transfer learning problems. An interesting direction for future research involves incorporating an additional domain alignment term into SOSELETO, in the case where the source and target dataset have overlapping labels. We note that SOSELETO is architecture-agnostic, and thus may be easily deployed. 
Furthermore, although we have focused on classification tasks, the technique is general and may be applied to other learning tasks within computer vision; this is an important direction for future research. Recall that our goal is to explicitly require that α j ∈. We may achieve this by requiring DISPLAYFORM0 where the new variable β j ∈ R, and σ(·) is a kind of piecewise linear sigmoid function. Now we will wish to replace the Update Equation, the update for α, with a corresponding update equation for β. This is straightforward. Define the Jacobian ∂α/∂β by ∂α ∂β ij = ∂α i ∂β jThen we modify Equation to read DISPLAYFORM1 The Jacobian is easy to compute analytically: where CLIP clips the values below 0 to be 0; and above 1 to be 1. DISPLAYFORM2 SOWETO is only an approximation to the solution of a bilevel optimization problem. As a , it is not entirely clear whether it will even converge. In this section, we demonstrate a set of sufficient conditions for SOWETO to converge to a local minimum of the target loss L t.To this end, let us examine the change in the target loss from iteration m to m + 1: DISPLAYFORM0 Now, we can use the evolution of the weights α. Specifically, we substitute Equation FORMULA11 | Learning with limited training data by exploiting "helpful" instances from a rich data source. | 1,060 | scitldr |
We present a novel approach for training neural abstract architectures which in- corporates (partial) supervision over the machine’s interpretable components. To cleanly capture the set of neural architectures to which our method applies, we introduce the concept of a differential neural computational machine (∂NCM) and show that several existing architectures (e.g., NTMs, NRAMs) can be instantiated as a ∂NCM and can thus benefit from any amount of additional supervision over their interpretable components. Based on our method, we performed a detailed experimental evaluation with both, the NTM and NRAM architectures, and showed that the approach leads to significantly better convergence and generalization capabilities of the learning phase than when training using only input-output examples. Recently, there has been substantial interest in neural abstract machines that can induce programs from examples BID2; BID4;; BID7; BID11; BID14; BID18; BID20; BID23; BID24. While significant progress has been made towards learning interesting algorithms BID8, ensuring the training of these machines converges to the desired solution can be very challenging. Interestingly however, even though these machines differ architecturally, they tend to rely on components (e.g., neural memory) that are more interpretable than a typical neural network (e.g., an LSTM). A key question then is:Can we somehow provide additional amounts of supervision for these interpretable components during training so to bias the learning towards the desired solution?In this work we investigate this question in depth. We refer to the type of supervision mentioned above as partial trace supervision, capturing the intuition that more detailed information, beyond inputoutput examples, is provided during learning. To study the question systematically, we introduce the notion of a differential neural computational machine (∂NCM), a formalism which allows for clean characterization of the neural abstract machines that fall inside our class and that can benefit from any amount of partial trace information. We show that common architectures such as Neural Turing Machines (NTMs) and Neural Random Access Machines (NRAMs) can be phrased as ∂NCMs, useful also because these architectures form the basis for many recent extensions, e.g., BID8; BID9; BID11. We also explain why other machines such as the Neural Program Interpreter (NPI) BID18 or its recent extensions (e.g., the Neural Program Lattice BID15) cannot be instantiated as an ∂NCM and are thus restricted to require large (and potentially prohibitive) amounts of supervision. We believe the ∂NCM abstraction is a useful step in better understanding how different neural abstract machines compare when it comes to additional supervision. We then present ∂NCM loss functions which abstractly capture the concept of partial trace information and show how to instantiate these for both the NTM and the NRAM. We also performed an extensive evaluation for how partial trace information affects training in both architectures. Overall, our experimental indicate that the additional supervision can substantially improve convergence while leading to better generalization and interpretability. To provide an intuition for the problem we study in this work, consider the simple task of training an NTM to flip the third bit in a bit stream (called Flip3rd) -such bitstream tasks have been extensively studied in the area of program synthesis (e.g., BID10 ; BID17). 
An example input-output pair for this task could be examples, our goal is to train an NTM that solves this task. An example NTM that generalizes well and is understandable is shown in FIG0. Here, the y-axis is time (descending), the x-axis is the accessed memory location, the white squares represent the write head of the NTM, and the orange squares represent the read head. As we can see, the model writes the input sequence to the tape and then reads from the tape in the same order. However, in the absence of richer supervision, the NTM (and other neural architectures) can easily overfit to the training set -an example of an overfitting NTM is shown in FIG0. Here, the traces are chaotic and difficult to interpret. Further, even if the NTM generalizes, it can do so with erratic reads and writes, an example of which is shown in FIG0. Here, the NTM learns to read from the third bit (circled) with a smaller weight than from other locations, and also reads and writes erratically near the end of the sequence. This model is less interpretable than the one in FIG0 because it is unclear how the model knows which the third bit actually is, or why a different read weight would help flip that bit. In this work we will develop principled ways for guiding the training of a neural abstract machine towards the behavior shown in FIG0. For instance, for Flip3rd, providing partial trace information on the NTM's read heads for 10% of the input-output examples is sufficient to bias the learning towards the NTM shown in FIG0 100% of the time. To capture the essence of our method and illustrate its applicability, we now define the abstract notion of a neural computational machine (NCM). NCMs mimic classic computational machines with a controller and a memory, and generalize multiple existing architectures. Our approach for supervision with partial trace information applies to all neural architectures expressible as NCMs. A useful feature of the NCM abstraction is that it clearly delineates end-to-end differentiable architectures BID7's NTM, BID14's NRAM), which can train with little to no trace supervision, from architectures that are not end-to-end differentiable BID18's NPI) and hence require a certain minimum amount of trace information. In the follow-up section, we show how to phrase two existing neural architectures (NTMs and NRAMs) as an NCM.An NCM is a triple of functions: a processor, a controller, and a loss:Processor The processor π: W × C × M → B × M performs a pre-defined set of commands C, which might involve manipulating memories in M. The commands may produce additional feedback in B. Also, the processor's operation may depend on parameters in W.Controller The controller κ: W × B × Q × I → C × Q × O decides which operations the machine performs at each step. It receives external inputs from I and returns external outputs in O. It can also receive feedback from the processor and command it to do certain operations (e.g., memory read). The decisions the controller takes may depend on its internal state (from Q). The controller can also depend on parameters in W. For instance, if the controller is a neural network, then the network's weights will range over W. Loss Function The loss function L e: Trace × E → R indicates how close a trace τ ∈ Trace of an execution of the machine (defined below) is to a behavior from a set E. The loss function provides a criterion for training a machine to follow a prescribed set of behaviors, and hence we impose certain differentiability conditions. 
We require that the loss surface is continuous and piecewise differentiable with respect to the weights w ∈ W for all examples e and inputs x with traces τ (w, x): DISPLAYFORM0 Execution The execution of the machine begins with an input sequence x = {x t} n 1 and initial values of the controller state q 0, memory m 0, and processor feedback b 0. At each time step t = 1... n, controller and processor take turns executing according to the following equations: DISPLAYFORM1 A trace τ (w, x, b 0, m 0, q 0) = {(c t, b t, q t, y t, m t)} n 1 records these quantities' values at each time step. We will occasionally write τ C, τ B,... for the trace projected onto one of its components c, b,.... ∂NCMs Note that the differentiability conditions that we impose on the loss do not imply that any of the NCM functions π, κ and L e are continuous or differentiable. They indeed can be highly discontinuous as in NCMs like BID21's memory networks with a hard attention mechanism, or as in BID18's neural programmer-interpreters. In order to fix these discontinuities and recover a differentiable loss surface, these architectures train with strong supervision only: the training examples e ∈ E must provide a value for every traced quantity that comes from a discontinuous parameter. In contrast, what we call differentiable neural computational machines (∂NCM), have κ, π and L e continuous and piecewise differentiable. In this case, the loss surface is differentiable with respect to every parameter. Thus, there is no need to specify corresponding values in the examples, and so we can train with as much trace information as available. We now show how NTMs and NRAMs can be instantiated as ∂NCMs. NTM as ∂NCM An Neural Turing Machine (NTM) BID7 FIG1 ) has access to a memory M ∈ R c×n of c cells of n real numbers each. We suppose the machine has one read head and one write head, whose addresses are, respectively, the probability vectors r, w ∈ {1...c}. At every time step, the read head computes the expected value m ∈ R n of a random cell at index i ∼ r. This value together with the current input are fed into a controller neural network, which then decides on several commands. It decides what fraction e ∈ R n to erase and how much a ∈ R n to add to the cells underneath the write head. The write head stores the tape expected after a random modification at index i ∼ w. Then the controller indicates the head movement with two probability vectors ∆r, ∆w ∈ {−1,0,+1} which are convolved with the respective head addresses (the actual addressing mechanism is more involved, but we omit it for brevity) Finally, the controller produces the current output value. In terms of NCMs, the NTM's variables fall into the following classes: DISPLAYFORM0 Each of these variables change over time according to certain equations (see Appendix A for details).The processor π and the controller κ functions for each time step satisfy: DISPLAYFORM1 The standard loss function L e for the NTM simply includes a term, such as cross-entropy or L 2 distance, for the machine output at every time step. Each of these compare the machine output to the respective values contained in the examples e ∈ E.NRAM as ∂NCM A Neural Random Access Machine (NRAM) BID14 is a neural machine designed for ease of pointer (de-) referencing. An NRAM has a variable sized memory M ∈ R c×c whose size varies between runs. It also has access to a register file r ∈ R n×c with a constant number n of registers. Both the memory and the registers store probability vectors over {1 . . . c}. 
The controller receives no external inputs, but at each time step reads the probability that a register assigns to 0. It also produces no external output, except a probability f ∈ for termination at the current time step. The output of the run is considered to be the final memory state. Unlike the NTM, computation in the NRAM is performed by a fixed sequence of modules. Each module implements a simple integer operation/memory manipulation lifted to probability vectors. For example, addition lifts to convolution, while memory access is like that of the NTM. At every time step the controller organizes the sequence of modules into a circuit, which is then executed. The circuit is encoded by a pair of probability distributions per module, as shown in FIG1. These distributions specify respectively which previous modules or registers will provide a given module first/second arguments. The distributions are stacked in the matrices a and b. A similar matrix c is responsible for specifying what values should be written to the registers at the end of the time step. The NCM instantiation of an NRAM is the following: DISPLAYFORM2 The equations that determine these quantities can be found in Appendix B. The processor function π and the controller function κ expressed in terms of these quantities are: DISPLAYFORM3 The loss of the NRAM is more complex than the NTM loss: it is an expectation with respect to the probability distribution p of termination time, as determined by the termination probabilities f t (see Appendix B). For every t = 1... k, the loss considers the negative log likelihood that the i-th memory cell at that time step equals the value e i provided in the example, independently for each i: DISPLAYFORM4 Incorporating supervision during NCM training can be helpful with: (i) convergence: additional bias may steer the minimization of the NCM's loss function L e, as much as possible, away from local minima that do not correspond to good solutions, (ii) interpretability: the bias can also be useful in guiding the NCM towards learning a model that is more intuitive/explainable to a user (especially if the user already has an intuition on what it is that parts of the model should do), and (iii) generalization: the bias can steer the NCM towards solutions which minimize not just the loss on example of difficulties it has seen, but on significantly more difficult examples. The way we provide additional supervision to NCMs is, by encoding, for example, specific commands issued to the processor, into extra loss terms. Let us illustrate how we can bias the learning with an NTM. Consider the task of copying the first half of an input sequence {x t} 2l 1 into the second half of the machine's output {y t} 2l 1, where the last input x l from the first half is a special value indicating that the first half ended. Starting with both heads at position 1, the most direct solution is to consecutively store the input to the tape during the first half of the execution, and then recall the stored values during the second half. In such a solution, we expect the head positions to be: DISPLAYFORM0 To incorporate this information into the training, we add loss terms that measure the cross-entropy (H) between p(t) and w t as well as between q(t) and r t. Importantly, we need not add terms for every time-step, but instead we can consider only the corner cases where heads change direction: DISPLAYFORM1 H(p(t), w t ) + H(q(t), r t ). We now describe the general shape of the extra loss terms for arbitrary NCMs. 
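Before turning to the general formulation, here is a concrete PyTorch-style sketch of the corner-case hints just described for the copy task: it adds the cross-entropies H(p(t), w_t) and H(q(t), r_t) only at the few time steps where a head changes direction. It assumes the NTM exposes its soft head addresses at every time step and that the target address distributions are supplied by the user; this is a sketch, not the authors' implementation.

```python
import torch

def corner_hint_loss(write_addr, read_addr, corner_hints, eps=1e-8):
    """Cross-entropy hint loss  sum_t H(p(t), w_t) + H(q(t), r_t)  evaluated only
    at the corner time steps where the heads are expected to change direction.

    write_addr, read_addr : (T, cells) tensors of soft head addresses w_t, r_t.
    corner_hints          : dict mapping a time step t to a pair of target
                            address distributions (p_t, q_t), each of shape (cells,).
    """
    loss = write_addr.new_zeros(())
    for t, (p_t, q_t) in corner_hints.items():
        loss = loss - (p_t * torch.log(write_addr[t] + eps)).sum()
        loss = loss - (q_t * torch.log(read_addr[t] + eps)).sum()
    return loss
```

For the copy task, corner_hints would contain one-hot targets only at the time steps where the heads reverse direction, matching the corner supervision used in the experiments.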
Since, typically, we can interpret only the memory and the processor in terms of well-understood operations, we will consider loss terms only for the memory state and the communication flow between the controller and the processor. We leave the controller's hidden state unconstrained -this also permits us to use the same training procedure with different controllers. The generic loss is expressed with four loss functions for the different components of an NCM trace: DISPLAYFORM0 For each part α ∈ {C, B, O, M}, we provide hints (t, v, µ) ∈ σ α that indicate a time step t at which the hint applies, an example v ∈ E α for the relevant component, and a weight w ∈ R of the hint. The weight is included to account for hints having a different importance at different time-steps, but also to express our confidence in the hint, e.g., hints coming from noisy sources would get less weight. A subtrace σ is a collection of hints used for a particular input-output example e. We call it a subtrace because, typically, it contains hints for a proper subset of the states traced by the NCM during execution. The net loss for a given input-output example and subtrace equals the original loss L e added to the weighted losses for all the hints, scaled by a constant factor λ: DISPLAYFORM1 For NTMs, we allow hints on the output y, the addresses r and w, and the tape M. We include extra loss terms for the memory state only (all other loss terms are zero): DISPLAYFORM0 Unlike the output and addresses, values on the tape are interpreted according to an encoding internal to the controller (which emerges only during training). Forcing the controller to use a specific encoding for the tape, as we do with NTM output, can have a negative effect on training (in our experiments, training diverged consistently). To remedy this, we do not apply loss to the tape directly but to a decoded version of a cell on the tape. While a decoder might find multiple representations and overfit, we found that it forced just enough consistency to improve the convergence rate. The decoder itself is an auxiliary network φ trained together with the NTM, which takes a single cell from memory as input. The output of the decoder is compared against the expected value which should be in that cell: DISPLAYFORM1 For all subtraces we provide in our experiments with NTMs, the hints have the same unit weight. For NRAMs, we hint which connections should be present in the circuit the controller constructs at each step, including the ones for register updates. An example circuit is shown in FIG2. In terms of an NCM, this amounts to providing loss for commands and no loss for anything else. We set the loss to the negative log likelihood of the controller choosing specific connections revealed in the hint: DISPLAYFORM0 In our experiments, we observed that assigning higher weight to hints at earlier timesteps is crucial for convergence of the training process. For a hint at time-step t, we use the weight µ = (t + 1) −2. A possible reason for why this helps is that the machine's behavior at later time-steps is highly dependent on its behavior at the early time-steps. Thus, the machine cannot reach a later behavior that is right before it fixes its early behavior. Unless the behavior is correct early on, the loss feedback from later time-steps will be mostly noise, masking the feedback from early time-steps. Other Architectures The NCM can be instantiated to architectures as diverse as a common LSTM network or End-To-End Differentiable Memory Networks. 
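Putting the pieces above together, the sketch below assembles the net loss of Eq. 8 — the original loss L_e plus the λ-scaled, µ-weighted per-hint losses over the trace components — together with the NRAM-style negative log-likelihood hint on circuit connections and the (t + 1)^-2 weighting schedule. The function and dictionary names are illustrative only.

```python
import torch

def hinted_loss(base_loss, trace, subtrace, component_losses, lam=1.0):
    """Net loss  L(e, sigma) = L_e + lam * sum over hints of  mu * loss_alpha(trace_alpha[t], v).

    trace            : dict mapping a component name alpha in {"C", "B", "O", "M"}
                       to a list of per-time-step tensors.
    subtrace         : dict mapping alpha to a list of hints (t, v, mu).
    component_losses : dict mapping alpha to a loss function loss_alpha(pred, target).
    """
    total = base_loss
    for alpha, hints in subtrace.items():
        loss_fn = component_losses[alpha]
        for (t, v, mu) in hints:
            total = total + lam * mu * loss_fn(trace[alpha][t], v)
    return total

def nram_circuit_hint(dist_over_sources, chosen_source):
    """NLL of the controller picking the hinted connection in the circuit."""
    return -torch.log(dist_over_sources[chosen_source] + 1e-8)

def nram_hint_weight(t):
    """Earlier time steps matter more for convergence: mu = (t + 1) ** -2."""
    return (t + 1) ** -2
```

Keeping the hints in a per-example dictionary also makes it easy to supply subtraces for only a fraction of the training examples, which is exactly the density knob varied in the experiments.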
Any programming inducing neural network with at least partially interpretable intermediate states for which the dataset contains additional hints could be considered a good candidate for application of this abstraction. We evaluated our NCM supervision method on the NTM and NRAM architectures. For each of the two architectures we implemented a variety of tasks and experimented with different setups of trace supervision. The main questions that we address are: (i) does trace supervision help convergence, interpretability, and generalization? (ii) how much supervision is needed to train such models? Below, we summarize our findings -further details are provided in the appendix. Figure 5: The number of initial runs which generalized for Flip3rd. The first dimension listed in the rows controls the execution details revealed in a subtrace, while the second dimension (the density column) controls the proportion of examples that receive extra subtrace supervision. We measured how often we successfully trained an NTM that achieves strong generalization. We consider a model to generalize if relative to the training size limit n, it achieves perfect accuracy on all of tests of size ≤ 1.5n, and perfect accuracy on 90% of the tests of size ≤ 2n. FIG3 reports the average improvement compared to a baseline using only I/O examples. We ran experiments with four different tasks and various types of hints (cf. Appendices C, E). Some of the hint types are: read and write specify respective head addresses for all time steps; address combines the previous two; corner reveals the head addresses, but only when the heads change direction; value gives value for a single cell. Except for three cases, trace supervision helped improve generalization. Here, RepeatFlip3d is most challenging, with baseline generalizing only 5% of the time (cf. Appendix I). Here we have the largest improvement with extra supervision: corner type of hints achieve eight-fold increase in success rate, reaching 40%. Another task with an even larger ratio is RepeatCopyTwice (cf. Appendix), where success increases from 15.5% to 100%.In addition to this experiment, we performed an extensive evaluation of different setups, varying the global λ parameter of the loss Eq. 8, and providing hints for just a fraction of the examples. The full are in Appendix I; here we provide those for RepeatFlip3d in Table 5. The table reveals that the efficacy of our method heavily depends on these two parameters. The best in this case are for the read/corner type of hints 1 2 / 1 10 of the time, with λ ∈ {0.1, 1}. The best for other tasks are achieved with different setups. Generally, our is that training with traces 50% of the time usually improves performance (or does not lower it much) when compared to the best method. This observation raises the interesting question of what the best type and amount of hints are for a given task. Finally, we observed that in all cases where training with trace supervision converged, it successfully learned the head movements/tape values we had intended. This show that trace supervision can bias the architecture towards more interpretable behaviors. In those cases, the NTM learned consistently sharper head positions/tape values than the baseline, as FIG4 shows for Flip3rd. BID16 reporting that ListK for example generalizes poorly, even when trained with noise in the gradient, curriculum learning, and an entropy bonus. 
We observed that when run on an indefinite number of examples with the correct number of timesteps and a correct module sequence, Swap and Increment would in fact occasionally generalize perfectly, but did not have the resources to run such indefinite tests with Permute, ListK, and Merge. FIG5 demonstrates that when training had finished, either because it had ended early or had reached 5000 training examples (our upper bound), generalization would in fact be on average significantly better than the baseline the more hints that were used for all tasks. Here, number of hints used seemed to be a sufficient predictor for the quality of the trained model. The effect of increasing supervision on the quality of the trained model was so strong that not even noise in the input was able to significantly hinder generalization. In FIG5, we corrupted a single character in the output examples for the Permute problem in 10% of the examples. We found that without any extra hints, no convergence was seen after training was complete, whereas with just corner subtraces, the generalization was nearly optimal. Furthermore, we found that noise in the trace does not seriously harm performance. We corrupted a single hint for 20% of the traces of the Increment task using otherwise full supervision, as can be seen in the NoisyFull line of FIG0. We presented a method for incorporating (any amount of) additional supervision into the training of neural abstract machines. The basic idea was to provide this supervision (called partial trace information) over the interpretable components of the machine and to thus more effectively guide the learning towards the desired solution. We introduced the ∂NCM architecture in order to precisely capture the neural abstract machines to which our method applies. We showed how to formulate partial trace information as abstract loss functions, how to instantiate common neural architectures such as NTMs and NRAMs as ∂NCMs and concretize the ∂NCM loss functions. Our experimental indicate that partial trace information is effective in biasing the learning of both NTM's and NRAM's towards better converge, generalization and interpretability of the ing models. The controller for the NTM consists of the networks ϕ, ψ y, ψ e, ψ a, χ r, χ w, which operate on the variables:x -in q -controller state r -read address ∆r -change in r e -erase M -tape y -out m -read value w -write address ∆w -change in w a -addThe equations that describe NTM executions are: DISPLAYFORM0 The controller of the NRAM consists of the networks ϕ, ψ a, ψ b, ψ c, ψ f, which operate on the variables:a -lhs circuit b -rhs circuit c -register inputs o -module outputs r -register state M -memory tape h -controller state f -stop probability. The equations that describe the NRAM execution are: DISPLAYFORM0 DISPLAYFORM1 For all of our NTM experiments we use a densely connected feed-forward controller. There are two architectural differences from the original NTM BID7 that helped our baseline performance: the feed-forward controller, the erase and the add gates use tanh activation; the output layer uses softmax. In the original architecture these are all logistic sigmoids. For the newly introduced tape decoder (active only during training) we used two alternative implementations: a tanh-softmax network, and a single affine transformation. We tested the NTM's learning ability on five different tasks for sequence manipulation, two of which have not been previously investigated in this domain. 
These tasks can be found in Appendix E.We performed experiments using several combination of losses as summarized in Appendix F. The observed training performance per task is shown in Appendix I, with rows corresponding to the different loss setups. The corner setup differs from the address setup in that the example subtraces were defined only for a few important corner cases. For example in RepeatCopyTwice, the write head was provided once at the beginning of the input sequence, and once at the end. Similarly, the read head was revealed at the beginning and at the end of every output repetition. In all other setups we provide full subtraces (defined for all time steps).The supervision amount can be tuned by adjusting the λ weight from Equation 8. Further, we can also control the fraction of examples which get extra subtrace supervision (the density row in Figure I). The performance metric we use is the percentage of runs that do generalize after 100k iterations for the given task and supervision type. By generalize we mean that the NTM has perfect accuracy on all testing examples up to 1.5× the size of the max training length, and also perfect accuracy on 90% of the testing examples up to 2× the maximum training length. We used a feed-forwad controller with 2 × 50 units, except for RepeatCopyTwice, which uses 2 × 100 units. For training we used the Adam optimizer BID12, a learning rate of 10 −3 for all tasks except RepeatFlip3d and Flip3rd which use 5 · 10 −4. The lengths of the training sequences for the first four tasks are from 1 to 5, whereas the generalization of the model was tested with sequences of lengths up to 20. For Flip3rd and RepeatFlip3d, the training sequence length was up to 16, whereas the testing sequences have maximum length of 32. Like in the NTM, we use a densely connected two layer feed forward controller for our experiments, and use ReLU as the activation function. We make no modifications to the original architecture, and use noise with the parameter η = 0.3 as suggested by BID16, and curriculum learning as described by BID22. We stop training once we get to a difficulty specified by the task, and increase the difficulty once 0 errors were found on a new testing batch of 10 samples. Each training iteration trains with 50 examples of the currently randomly sampled difficulty. Regardless of whether the model had converged, training is stopped after 5000 samples were used. Such a low number is used to replicate the potential conditions under which such a model might be used. As with the NTM, the Adam optimizer was used. The specific tasks we use are described in Appendix G, and the specific kinds of supervision we give are described in Appendix H. The λ we used here was 40. The system was implemented using PyTorch. Every input sequence ends with a special delimiter x E not occurring elsewhere in the sequence Copy -The input consists of generic elements, x 1... x n x E. The desired output is x 1... x n x E.RepeatCopyTwice -The input is again a sequence of generic elements, x 1... x n x E. The desired output is the input copied twice x 1... x n x 1... x n x E. Placing the delimiter only at the end of the output ensures that the machine learns to keep track of the number of copies. Otherwise, it could simply learn to cycle through the tape reproducing the given input indefinitely. We kept the number of repetitions fixed in order to increase baseline task performance for the benefit of comparison. DyckWords -The input is a sequence of open and closed parentheses, x 1... 
x n x E. The desired output is a sequence of bits y 1... y n x E such that y i = 1 iff the prefix x 1... x i is a balanced string of parentheses (a Dyck word). Both positive and negative examples were given. Flip3rd -The input is a sequence of bits, x 1 x 2 x 3... x n x E. The desired output is the same sequence of bits but with the 3rd bit flipped: x 1 x 2x3... x n x E. Such a task with a specific index to be updated (e.g., 3rd) still requires handling data dependence on the contents of the index (unlike say the Copy task).RepeatFlip3d -The input is a sequence of bits, x 1 x 2 x 3 x 4 x 5 x 5... x E. The desired output is the same sequence of bits but with every 3rd bit flipped: DISPLAYFORM0 F NTM SUBTRACES value traces provide hints for the memory at every timestep as explained in Equation FORMULA0.read -provides a hint for the address of the read head at every timestep.write -provides a hint for the address of the write head at every timestep.address -provides hints for the address of both the read and the write head at every timestep.addr+val -provides value, read and write hints for every timestep.corner -provides hints for the address of both the read and the write head at every "important" timestep -we decided what important means here depends on which task we are referring to. In general, we consider the first and last timesteps important, and also any timestep where a head should change direction. For example, in RepeatCopyTwice for an example of size n with e repeats, we'd provide the heads at timesteps 0, n, 2n, 3n..., en. Below we describe all the tasks we experimented with. We predominantly picked tasks that the NRAM is known to have trouble generalizing on. We did not introduce any new tasks, and more detailed descriptions of these tasks can be found in BID14.Swap -Provided two numbers, a and b and an array p, swap p[a] and p [b]. All elements but that in the last memory cell are not zero. Increment -Given an array p, return the array with one added to each element. All elements but that in the last cell for the input are not zero. Elements can be zero in the output. Permute -Given two arrays p and q return a new array s such that DISPLAYFORM0 The arrays p and q are preceded by a pointer, a, to array q. The output is expected to be a, DISPLAYFORM1 ListK -Given a linked list in array form, and an index k return the value at node k. Merge -given arrays p and q, and three pointers a, b, c to array p, q, and the output sequence (given as zeros initially), place the sorted combination of p and q into the output location. The following table describes the specific NRAM instantiation used for each task. The default sequence (def) is the one described by BID14. The number of timesteps is usually dependent on the length of the problem instance, M (equivalently the word size or difficulty), and in the case of ListKwas given with respect to the argument k. The difficulty (D) was simply the length of the sequence used. H NRAM SUBTRACES For each of the tasks listed Appendix G, we hand coded a complete circuit for every module and every timestep we would provide. The following subtraces types describe how we provide hints based on this circuit. None -provides no hints. Full -provides the entire circuit. SingleHint -provides a random hint at a random timestep. SingleTimestep -provides the entire circuit at a random timestep. Corners -provides the entire circuit at the first and last timesteps. Registers -provides hints for the registers at every timestep. 
Modules -provides hints for the modules at every timestep. Which Details to Reveal for NTM? The first dimension listed in the rows of the tables of Figure I controls the execution details revealed in a Subtrace. We use subtraces showing either the addresses without the tape values, only the read heads or the write heads, or even weaker supervision in a few corner cases. In tasks Copy FIG7 ), RepeatCopyTwice (FIG7) and DyckWords FIG7, it is frequently the case that when the NTM generalizes without supervision, it converges to an algorithm which we are able to interpret. For them, we designed the addr+val traces to match this algorithm, and saw increases in generalization frequency of at least 45%. It can be concluded that when additionally provided supervision reflects the interpretable "natural" behavior of the NTM, the learning becomes significantly more robust to changes in initial weights. Additionally, for tasks Flip3rd FIG7 ) and RepeatFlip3d FIG7 ), both the baseline and other supervision types are outperformed by training with read supervision. It is also notable that corner supervision in RepeatFlip3d achieves highest improvement over the baseline, 60% over 5%. In essence, this means that providing only a small part of the trace can diminish the occurrence of local minima in the loss function. How Often to Reveal for NTM? The second dimension controls the proportion of examples that receive extra subtrace supervision (the density columns in Figure I). For Flip3rd, RepeatCopyTwice and DyckWords we observed that having only a small number of examples with extra supervision leads to models which are more robust to initial weight changes than the baseline, although not necessarily always as robust as providing supervision all the time. A couple of interesting cases stand out. For Flip3rd with 10% corner subtraces and λ = 1, we find a surprisingly high rate of generalization. Providing address traces 10% of the time when training RepeatCopyTwice leads to better performance all the time. For RepeatFlip3d, write traces at 1% frequency and λ = 0.1 generalize 30% of the time vs. 5% for baseline. While the type of trace which works best varies per task, for each task there exists a trace which can be provided only 1% of the time and still greatly improve the performance over the baseline. This suggests that a small amount of extra supervision can improve performance significantly, but the kind of supervision may differ. It is an interesting research question to find out how the task at hand relates to the optimal kind of supervision. FIG0: The average number of errors on the test set for each task and subtrace type once trained. FORMULA0 with that of the NRAM using the full tracer for Merge. For this experiment, a maximum of 10000 samples were used for the DNGPU and 5000 for the NRAM. The DNGPU was run out of the box from the code supplied by the authors. 20 runs were averaged for the DNGPU and 38 runs for the NRAM. One can deduce that while neither is able to generalize this task perfectly, the simpler and easier to understand architecture, NRAM, does generalize better with fewer examples when those examples come with richer supervision. The NRAM is parametrized by one or more straight-line partial programs, i.e., programs with no branching and no loops, chosen by register states. The machine runs in a loop, repeatedly selecting the program for that register state then executing it. The programs are expressend in a simple single-assignment imperative language. 
Each program statement i invokes one of the modules of the architecture and assigns the of the invocation to a local variable x i. That variable cannot be changed later. The final program statement is a parallel-asignment that modifies the machine registers r 1... r k. The values that appear in assignments/invocations can be: variables in scope, machine registers, or holes?. These values are not used directly during execution: the actual values needs to be supplied by the NRAM controller. The values are only used as hints for the controller during training, with the whole? denoting no hint. We can describe the language in an EBNF-style grammar: P 1::= S 1 P i::= P i−1; S i P::= P 1; R 1 | P 2; R 2 |...An example program for the Increment task would be the following: DISPLAYFORM0 x 2 ← READ(r 1); x 3 ← ADD(x 2, x 1); x 4 ← WRITE(r 1, x 3); x 5 ← ADD(r 1, x 1); DISPLAYFORM1 Here, the controller is encouraged to read the memory at the location stored in the first register r 1, add one to it, then store it back, and then increment the the first register. An alternative to the trace-based approach is to make the controller produce values only for the holes, and use directly the specified variable/register arguments. This way, only the unspecified parts of the program are learned. This is, for example, the approach taken by ∂Forth BID0. There, programs are expressed in a suitably adapted variant of the Forth programming language, which is as expressive as the language discussed above, but less syntactically constrained. The drawback of this alternative is that whenever an argument other than a whole is specified, one must also specify the time steps to which it applies in all possible executions and not just the training ones. That is why, typically, these values are specified either for all or for none of the time steps. In the following examples, we will describe the register states using "0", "! 0" and "-" meaning respectively that a register has 0, that it contains anything but zero, or that it can contain anything. For any register pattern.x 1 ← READ(r 0); x 2 ← W RIT E(0, x 1); x 3 ← READ(r 1); x 4 ← ADD(x 3, x 1); x 5 ← READ(x 4); x 6 ← W RIT E(r 1, x 5); x 7 ← IN C(r 1); x 8 ← DEC(x 1); x 9 ← LT (x 7, x 8); r 0 ← 0; r 1 ← x 7; r 2 ← x 9; r 3 ← 0; | We increase the amount of trace supervision possible to utilize when training fully differentiable neural machine architectures. | 1,061 | scitldr |
Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important. In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters. We provide a set of efficient algorithms for learning and prediction in BQNs without the need to sample from their parameters or activations, which not only allows for differentiable learning in quantized models but also reduces the variance in gradients estimation. We evaluate BQNs on MNIST, Fashion-MNIST and KMNIST classification datasets compared against bootstrap ensemble of QNNs (E-QNN). We demonstrate BQNs achieve both lower predictive errors and better-calibrated uncertainties than E-QNN (with less than 20% of the negative log-likelihood). A Bayesian approach to deep learning considers the network's parameters to be random variables and seeks to infer their posterior distribution given the training data. Models trained this way, called Bayesian neural networks (BNNs) , in principle have well-calibrated uncertainties when they make predictions, which is important in scenarios such as active learning and reinforcement learning . Furthermore, the posterior distribution over the model parameters provides valuable information for evaluation and compression of neural networks. There are three main challenges in using BNNs: Intractable posterior: Computing and storing the exact posterior distribution over the network weights is intractable due to the complexity and high-dimensionality of deep networks. Prediction: Performing a forward pass (a.k.a. as probabilistic propagation) in a BNN to compute a prediction for an input cannot be performed exactly, since the distribution of hidden activations at each layer is intractable to compute. Learning: The classic evidence lower bound (ELBO) learning objective for training BNNs is not amenable to backpropagation as the ELBO is not an explicit function of the output of probabilistic propagation. These challenges are typically addressed either by making simplifying assumptions about the distributions of the parameters and activations, or by using sampling-based approaches, which are expensive and unreliable (likely to overestimate the uncertainties in predictions). Our goal is to propose a sampling-free method which uses probabilistic propagation to deterministically learn BNNs. A seemingly unrelated area of deep learning research is that of quantized neural networks (QNNs), which offer advantages of computational and memory efficiency compared to continuous-valued models. QNNs, like BNNs, face challenges in training, though for different reasons: (4.1) The non-differentiable activation function is not amenable to backpropagation. (4.2) Gradient updates cease to be meaningful, since the model parameters in QNNs are coarsely quantized. In this work, we combine the ideas of BNNs and QNNs in a novel way that addresses the aforementioned challenges in training both models. We propose Bayesian quantized networks (BQNs), models that (like QNNs) have quantized parameters and activations over which they learn (like BNNs) categorical posterior distributions. BQNs have several appealing properties: • BQNs solve challenge due to their use of categorical distributions for their model parameters. 
• BQNs can be trained via sampling-free backpropagation and stochastic gradient ascent of a differentiable lower bound to ELBO, which addresses challenges, and above. • BQNs leverage efficient tensor operations for probabilistic propagation, further addressing challenge. We show the equivalence between probabilistic propagation in BQNs and tensor contractions , and introduce a rank-1 CP tensor decomposition (mean-field approximation) that speeds up the forward pass in BQNs. • BQNs provide a tunable trade-off between computational resource and model complexity: using a refined quantization allows for more complex distribution at the cost of more computation. • Sampling from a learned BQN provides an alternative way to obtain deterministic QNNs. In our experiments, we demonstrate the expressive power of BQNs. We show that BQNs trained using our sampling-free method have much better-calibrated uncertainty compared with the stateof-the-art Bootstrap ensemble of quantized neural networks (E-QNN) trained by. More impressively, our trained BQNs achieve comparable log-likelihood against Gaussian Bayesian neural network (BNN) trained with stochastic gradient variational Bayes (SGVB) (the performance of Gaussian BNNs are expected to be better than BQNs since they allows for continuous random variables). We further verify that BQNs can be easily used to compress (Bayesian) neural networks and obtain determinstic QNNs. Finally, we evaluate the effect of mean-field approximation in BQN, by comparing with its Monte-Carlo realizations, where no approximation is used. We show that our sampling-free probabilistic propagation achieves similar accuracy and log-likelihood -justifying the use of mean-field approximation in BQNs. In Appendix A, we survey different approaches for training Bayesian neural networks including sampling-free assumed density filtering (; ; Hernández-;), sampling-based variational inference (; ;), as well as sampling-free variational inference , probabilistic neural networks (; ;), quantized neural network (; ; ; ; ; ; ; ; ;), and tensor networks and tensorial neural networks (; Orús, 2014; ; 2017; ; ;). • We propose an alternative evidence lower bound (ELBO) for Bayesian neural networks such that optimization of the variational objective is compatible with the backpropagation algorithm. • We introduce Bayesian quantized networks (BQNs), establish a duality between BQNs and hierarchical tensor networks, and show prediction a BQN is equivalent to a series of tensor contractions. • We derive a sampling-free approach for both learning and inference in BQNs using probabilistic propagation (analytical inference), achieving better-calibrated uncertainty for the learned models. • We develop a set of fast algorithms to enable efficient learning and prediction for BQNs. Notation. We use bold letters such as θ to denote random variables, and non-bold letters such as θ to denote their realizations. We abbreviate Pr [θ = θ] of N data points, we aim to learn a neural network with model parameters θ that predict the output y ∈ Y based on the input x ∈ X. We first solve the learning problem to find an approximate posterior distribution Q(θ; φ) over θ with parameters φ such that Q(θ; φ) ≈ Pr[θ|D]. We then solve the prediction problem to compute the predictive distribution Pr[y|x, D] for arbitrary input x = x given Q(θ; φ). For notational simplicity, we will omit the conditioning on D and write Pr [y|x, D] as Pr [y|x] in what follows. 
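Because the parameters of a BQN are discrete, the approximate posterior Q(θ; φ) can be parameterized directly by per-weight categorical logits. The sketch below, written for binary weights Q = {−1, 1}, also shows how sampling from (or taking the argmax of) each categorical yields a deterministic QNN; the class and method names are ours and purely illustrative.

```python
import torch
import torch.nn.functional as F

class BinaryWeightPosterior(torch.nn.Module):
    """Mean-field categorical posterior Q(theta; phi) over quantized weights taking
    values in Q = {-1, +1}; phi are per-weight logits stored in log-space."""

    def __init__(self, shape):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(*shape, 2))   # one logit per value in Q
        self.register_buffer("values", torch.tensor([-1.0, 1.0]))

    def probs(self):
        return F.softmax(self.logits, dim=-1)                      # Q(theta_k = -1), Q(theta_k = +1)

    def sample(self):
        idx = torch.distributions.Categorical(logits=self.logits).sample()
        return self.values[idx]                                    # a concrete QNN weight tensor

    def map_estimate(self):
        return self.values[self.logits.argmax(dim=-1)]             # MAP: most probable quantized weights
```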
In order to address the prediction and learning problems in BNNs, we analyze these models in their general form of probabilistic graphical models (shown in Figure 3b in Appendix B). Let h (l), θ and h (l+1) denote the inputs, model parameters, and (hidden) outputs of the l-th layer respectively. We assume that θ (l)'s are layer-wise independent, i.e. Q(θ; φ) = Computing the predictive distribution Pr[y|x, D] with a BNN requires marginalizing over the random variable θ. The hierarchical structure of BNNs allows this marginalization to be performed in multiple steps sequentially. In Appendix B, we show that the predictive distribution of h (l+1) given input x = x can be obtained from its preceding layer h (l) by This iterative process to compute the predictive distributions layer-by-layer sequentially is known as probabilistic propagation (; Hernández-;). With this approach, we need to explicitly compute and store each intermediate is a function of x). Therefore, probabilistic propagation is a deterministic process that computes ψ (l+1) as a function of ψ (l) and φ (l), which we denote as Challenge in Sampling-Free Probabilistic Propagation. If the hidden variables h (l)'s are continuous, Equation generally can not be evaluated in closed form as it is difficult to find a family of parameterized distributions P for h (l) such that h (l+1) remains in P under the operations of a neural network layer. Therefore most existing methods consider approximations at each layer of probabilistic propagation. In Section 4, we will show that this issue can be (partly) addressed if we consider the h (l)'s to be discrete random variables, as in a BQN. Objective Function. A standard approach to finding a good approximation Q(θ; φ) is variational inference, which finds φ such that the KL-divergence KL(Q(θ; φ)||Pr [θ|D] ) from Q(θ; φ) to Pr[θ|D] is minimized. In Appendix B, we prove that to minimizing the KL-divergence is equivalent to maximizing an objective function known as the evidence lower bound (ELBO), denoted as L(φ). where Probabilistic Backpropagation. Optimization in neural networks heavily relies on the gradientbased methods, where the partial derivatives ∂L(φ)/∂φ of the objective L(φ) w.r.t. the parameters φ are obtained by backpropagation. Formally, if the output produced by a neural network is given by a (sub-)differentiable function g(φ), and the objective L(g(φ)) is an explicit function of g(φ) (and not just an explicit function of φ), then the partial derivatives can be computed by chain rule: Published as a conference paper at ICLR 2020 The learning problem can then be (approximately) solved by first-order methods, typically stochastic gradient descent/ascent. Notice that For classification, the function g(φ) returns the probabilities after the softmax function, not the categorical label; An additional regularizer R(φ) on the parameters will not cause difficulty in backpropagation, given ∂R(φ)/∂φ is easily computed. Challenge in Sampling-Free Probabilistic Backpropagation. Learning BNNs is not amenable to standard backpropagation because the ELBO objective function L(φ) in (4b) is not an explicit (i.e. implicit) function of the predictive distribution g(φ) in (4a): Although L n (φ) is a function of φ, it is not an explicit function of g n (φ). Consequently, the chain rule in Equation on which backpropagation is based is not directly applicable. Alternative Evidence Lower Bound. 
We make learning in BNNs amenable to backpropagation by developing a lower bound is an explicit function of the from the forward pass.) With L n (φ) in hand, we can (approximately) find φ by maximizing the alternative objective via gradient-based method: In Appendix C.1, we proved one feasible L n (φ) which only depends on second last output h (L−1). ) is deterministic given input x and all parameters before the last layer θ Analytic Forms of L n (φ). While the lower bound in Theorem 3.1 applies to BNNs with arbitrary distributions P on hidden variables h, Q on model parameters θ, and any problem setting (e.g. classification or regression), in practice sampling-free probabilistic backpropagation requires that L n (φ) can be analytically evaluated (or further lower bounded) in terms of φ (L−1) and θ (L−1). This task is nontrivial since it requires redesign of the output layer, i.e. the function of Pr[y|h (L−1), θ (L−1) ]. In this paper, we develop two layers for classification and regression tasks, and present the classification case in this section due to space limit. Since L n (φ) involves the last layer only, we omit the superscripts/subsripts of, x n, y n, and denote them as h, ψ, φ, x, y. with K the number of classes) be the pre-activations of a softmax layer (a.k.a. logits), and φ = s ∈ R + be a scaling factor that adjusts its scale such that are pairwise independent (which holds under mean-field approximation) and The regression case and proofs for both layers are deferred to Appendix C. While Section 3 provides a general solution to learning in BNNs, the solution relies on the ability to perform probabilistic propagation efficiently. To address this, we introduce Bayesian quantized networks (BQNs) -BNNs where both hidden units h (l)'s and model parameters θ (l)'s take discrete values -along with a set of novel algorithms for efficient sampling-free probabilistic propagation in BQNs. For simplicity of exposition, we assume activations and model parameters take values from the same set Q, and denote the degree of quantization as D = |Q|, (e.g. Q = {−1, 1}, D = 2). Lemma 4.1 (Probabilistic Propagation in BQNs). After quantization, the iterative step of probabilistic propagation in Equation is computed with a finite sum instead of an integral: and a categorically distributed h (l) in h (l+1) being categorical as well. The equation holds without any assumption on the operation Notice all distributions in Equation are represented in high-order tensors: Suppose there are I input units, J output units, and K model parameters at the l-th layer, then and h (l+1) ∈ Q J, and their distributions are characterized by respectively. Therefore, each step in probabilistic propagation is a tensor contraction of three tensors, which establishes the duality between BQNs and hierarchical tensor networks . Since tensor contractions are differentiable w.r.t. all inputs, BQNs thus circumvent the difficulties in training QNNs , whose outputs are not differentiable w.r.t. the discrete parameters. This is not surprising: if we consider learning in QNNs as an integer programming (IP) problem, solving its Bayesian counterpart is equivalent to the approach to relaxing the problem into a continuous optimization problem . Complexity of Exact Propagation. The computational complexity to evaluate Equation is exponential in the number of random variables O(D IJK), which is intractable for quantized neural network of any reasonable size. We thus turn to approximations. 
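For a toy layer, the tensor contraction of Lemma 4.1 can be evaluated literally with an einsum, which makes the duality with tensor networks explicit. In the sketch below the conditional Pr[h^(l+1) | h^(l), θ^(l)] is a 0/1 indicator tensor for h' = sign(θ_1 h_1 + θ_2 h_2) (ties broken toward +1, an arbitrary choice) and the joint input and weight distributions are taken uniform for illustration; the exponential blow-up of this exact computation is precisely what motivates the approximation introduced next.

```python
import numpy as np
from itertools import product

Q = np.array([-1.0, 1.0])            # quantization set, D = 2

# Toy layer: one output unit h' = sign(theta_1 * h_1 + theta_2 * h_2), ties sent to +1.
# Conditional indicator Pr[h' | h, theta] as a 0/1 tensor with axes (h_1, h_2, theta_1, theta_2, h').
cond = np.zeros((2, 2, 2, 2, 2))
for i1, i2, k1, k2 in product(range(2), repeat=4):
    out = 1 if Q[k1] * Q[i1] + Q[k2] * Q[i2] >= 0 else 0
    cond[i1, i2, k1, k2, out] = 1.0

P_h = np.full((2, 2), 0.25)          # joint distribution over the two input units
Q_theta = np.full((2, 2), 0.25)      # joint distribution over the two weights

# Propagation: P[h'] = sum_{h, theta} Pr[h' | h, theta] Q(theta) P(h)  -- a tensor contraction.
P_out = np.einsum("ijklm,kl,ij->m", cond, Q_theta, P_h)
print(P_out)                          # categorical distribution over Q for the output unit
```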
We propose a principled approximation to reduce the computational complexity in probabilistic propagation in BQNs using tensor CP decomposition, which factors an intractable high-order probability tensor into tractable lower-order factors . In this paper, we consider the simplest rank-1 tensor CP decomposition, where the joint distributions of P and Q are fully factorized into products of their marginal distributions, thus equivalent to the mean-field approximation . With rank-1 CP decomposition on, the tensor contraction in reduces to a standard Tucker contraction where each term of ψ k parameterizes a single categorical variable. In our implementation, we store the parameters in their log-space, i.e. Q(θ Fan-in Number E. In a practical model, for the l-th layer, an output unit h k} according to the connectivity pattern in the layer. We denote the set of dependent input units and parameters for h, and define the fan-in number E for the layer as max j I Complexity of Approximate Propagation. The approximate propagation reduces the computational complexity from O(D IJK) to O(JD E), which is linear in the number of output units J if we assume the fan-in number E to be a constant (i.e. E is not proportional to I). Different types of network layers have different fan-in numbers E, and for those layers with E greater than a small constant, Equation is inefficient since the complexity grows exponential in E. Therefore in this part, we devise fast(er) algorithms to further lower the complexity. Small Fan-in Layers: Direct Tensor Contraction. If E is small, we implement the approximate propagation through tensor contraction in Equation. The computational complexity is O(JD E) as discussed previously. See Appendix D.1 for a detailed discussion. Medium Fan-in Layers: Discrete Fourier Transform. If E is medium, we implement approximate propagation through fast Fourier transform since summation of discrete random variables is equivalent to convolution between their probability mass function. See Appendix D.2 for details. With the fast Fourier transform, the computational complexity is reduced to O(JE 2 D log(ED)). Large Fan-in Layers: Lyapunov Central Limit Theorem. In a typical linear layer, the fan-in E is large, and a super-quadratic algorithm using fast Fourier transform is still computational expensive. Therefore, we derive a faster algorithm based on the Lyapunov central limit theorem (See App D.3) With CLT, the computational complexity is further reduced to O(JED). Remarks: Depending on the fan-in numbers E, we adopt CLT for linear layers with sufficiently large E such as fully connected layers and convolutional layers; DFT for those with medium E such as average pooling layers and depth-wise layers; and direct tensor contraction for those with small E such as shortcut layers and nonlinear layers. In this section, we demonstrate the effectiveness of BQNs on the MNIST, Fashion-MNIST, KM-NIST and CIFAR10 classification datasets. We evaluate our BQNs with both multi-layer perceptron (MLP) and convolutional neural network (CNN) models. In training, each image is augmented by a random shift within 2 pixels (with an additional random flipping for CIFAR10), and no augmentation is used in test. In the experiments, we consider a class of quantized neural networks, with both binary weights and activations (i.e. Q = {−1, 1}) with sign activations σ(·) = sign(·). 
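For the binary setting just described (Q = {−1, 1} with sign activations), the CLT path of Section 4.3 reduces to a few lines: under the mean-field assumption the products θ_k h_k are independent ±1 variables, the pre-activation sum is approximated by a Gaussian with the accumulated mean and variance, and the sign turns that Gaussian back into a Bernoulli over {−1, 1}. In this sketch p_h and p_w are assumed to hold the per-unit probabilities of the value +1 (e.g. taken from the learned posterior); it is an illustrative reconstruction, not the reference code.

```python
import torch

def clt_sign_linear(p_h, p_w, eps=1e-6):
    """Sampling-free propagation through h' = sign(sum_k theta_k * h_k) with
    activations and weights in {-1, +1}, via the Lyapunov CLT approximation.

    p_h : (batch, K)  probability that each input unit equals +1
    p_w : (J, K)      probability that each weight equals +1
    returns (batch, J): probability that each output unit equals +1
    """
    m_h, m_w = 2.0 * p_h - 1.0, 2.0 * p_w - 1.0      # means of the +/-1 variables
    K = p_h.shape[1]

    # Each product theta_k * h_k has mean m_w[k]*m_h[k] and variance 1 - (m_w[k]*m_h[k])^2.
    mean = m_h @ m_w.t()                              # (batch, J): mean of the pre-activation sum
    var = K - (m_h ** 2) @ (m_w ** 2).t()             # (batch, J): variance of the sum

    # CLT: pre-activation ~ N(mean, var);  P(h' = +1) = P(sum >= 0) = Phi(mean / std).
    std = torch.sqrt(var.clamp_min(eps))
    return torch.distributions.Normal(0.0, 1.0).cdf(mean / std)
```

The per-output Bernoulli obtained this way feeds the next layer in the same form, so the whole forward pass stays sampling-free.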
For BQNs, the distribution parameters φ are initialized by Xavier's uniform initializer, and all models are trained by ADAM optimizer for 100 epochs (and 300 epochs for CIFAR10) with batch size 100 and initial learning rate 10 −2, which decays by 0.98 per epoch. Table 1: Comparison of performance of BQNs against the baseline E-QNN. Each E-QNN is an ensemble of 10 networks, which are trained individually and but make predictions jointly. We report both NLL (which accounts for prediction uncertainty) and 0-1 test error (which doesn't account for prediction uncertainty). All the numbers are averages over 10 runs with different seeds, the standard deviation are exhibited following the ± sign. Training Objective of BQNs. To allow for customized level of uncertainty in the learned Bayesian models, we introduce a regularization coefficient λ in the alternative ELBO proposed in Equation (i.e. a lower bound of the likelihood), and train the BQNs by maximizing the following objective: where λ controls the uncertainty level, i.e. the importance weight of the prior over the training set. Baselines. We compare our BQN against the baseline -Bootstrap ensemble of quantized neural networks (E-QNN). Each member in the ensemble is trained in a non-Bayesian way , and jointly make the prediction by averaging over the logits from all members. Note Evaluation of BQNs. While 0-1 test error is a popular metric to measure the predictive performance, it is too coarse a metric to assess the uncertainty in decision making (for example it does not account for how badly the wrong predictions are). Therefore, we will mainly use the negative log-likelihood (NLL) to measure the predictive performance in the experiments. Once a BQN is trained (i.e. an approximate posterior Q(θ) is learned), we consider three modes to evaluate the behavior of the model: analytic inference (AI), Monte Carlo (MC) sampling and Maximum a Posterior (MAP) estimation: 1. In analytic inference (AI, i.e. our proposed method), we analytically integrate over Q(θ) to obtain the predictive distribution as in the training phase. Notice that the exact NLL is not accessible with probabilistic propagation (which is why we propose an alternative ELBO in Equation), we will report an upper bound of the NLL in this mode. 2. In MC sampling, S sets of model parameters are drawn independently from the posterior posterior θ s ∼ Q(θ), ∀s ∈ [S], and the forward propagation is performed as in (non-Bayesian) quantized neural network for each set θ s, followed by an average over the model outputs. The difference between analytic inference and MC sampling will be used to evaluate (a) the effect of mean-field approximation and (b) the tightness of the our proposed alternative ELBO. 3. MAP estimation is similar to MC sampling, except that only one set of model parameters θ is obtained θ = arg max θ Q(θ). We will exhibit our model's ability to compress a Bayesian neural network by comparing MAP estimation of our BQN with non-Bayesian QNN. Expressive Power and Uncertainty Calibration in BQNs. We report the performance via all evaluations of our BQN models against the Ensemble-QNN in Table 1 and Figure 1. Compared to E-QNNs, our BQNs have significantly lower NLL and smaller predictive error (except for Fashion-MNIST with architecture CNN). As we can observe in Figure 1, BQNs impressively achieve comparable NLL to continuous-valued BNN, with slightly higher test error. As our model parameters only take values {−1, 1}, small degradation in predictive accuracy is expected. 
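The MC-sampling and MAP evaluation modes used above amount to drawing deterministic QNNs from the learned posterior, as in the sketch below; analytic inference instead reuses the propagation of Section 4. Here forward_fn(x, theta) is an assumed helper that runs a quantized forward pass with a fixed weight assignment, and the posterior object is assumed to expose sample() and map_estimate() as in the earlier sketch.

```python
import torch

@torch.no_grad()
def mc_predict(forward_fn, posterior, x, num_samples=10):
    """MC mode: draw S deterministic QNNs from Q(theta; phi) and average their
    predictive probabilities (averaging logits, as in the E-QNN baseline, is a
    simple variant)."""
    probs = torch.stack([torch.softmax(forward_fn(x, posterior.sample()), dim=-1)
                         for _ in range(num_samples)])
    return probs.mean(dim=0)

@torch.no_grad()
def map_predict(forward_fn, posterior, x):
    """MAP mode: a single QNN with the most probable weights, i.e. a compressed
    deterministic model distilled from the BQN."""
    return torch.softmax(forward_fn(x, posterior.map_estimate()), dim=-1)
```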
Evaluations of Mean-field Approximation and Tightness of the Alternative ELBO. If analytic inference (by probabilistic propagation) were computed exactly, the evaluation metrics would have been equal to the ones with MC sampling (with infinite samples). Therefore we can evaluate the approximations in probabilistic propagation, namely mean-field approximation in Equation and relaxation of the original ELBO in Equation, by measuring the gap between analytic inference and MC sampling. As shown in Figure 2, such gaps are small for all scenarios, which justifies the approximations we use in BQNs. To further decouple these two factors of mean-field approximation and relaxation of the original ELBO, we vary the regularization coefficient λ in the learning objective. For λ = 0 (where the prior term is removed), the models are forced to become deterministic during training. Since the deterministic models do not have mean-field approximation in the forward pass, the gap between analytic inference and MC-sampling reflects the tightness of our alternative ELBO. As λ increases, the gaps increases slightly as well, which shows that the mean-field approximation becomes slightly less accurate with higher learned uncertainty in the model. Table 3: Bayesian Model compression through direct training of Ensemble-QNN vs a Monte-Carlo sampling on our proposed BQN. Each ensemble consists of 5 quantized neural networks, and for fair comparison we use 5 samples for Monte-Carlo evaluation. All the numbers are averages over 10 runs with different seeds, the standard deviation are exhibited following the ± sign. interpreted as another approach to compress a BQN, which reduces the original size to its S/64 (with the same number of bits as an ensemble of S QNNs). In Tables 2 and 3, we compare the models by both approaches to their counterparts (a single QNN for MAP, and E-QNN for MC sampling) trained from scratch as in. For both approaches, our compressed models outperform their counterparts (in NLL). We attribute this to two factors: (a) QNNs are not trained in a Bayesian way, therefore the uncertainty is not well calibrated; and (b) Non-differentiable QNNs are unstable to train. Our compression approaches via BQNs simultaneously solve both problems. We present a sampling-free, backpropagation-compatible, variational-inference-based approach for learning Bayesian quantized neural networks (BQNs). We develop a suite of algorithms for efficient inference in BQNs such that our approach scales to large problems. We evaluate our BQNs by Monte-Carlo sampling, which proves that our approach is able to learn a proper posterior distribution on QNNs. Furthermore, we show that our approach can also be used to learn (ensemble) QNNs by taking maximum a posterior (or sampling from) the posterior distribution. assuming g n (φ) can be (approximately) computed by sampling-free probabilistic propagation as in Section 2. However, this approach has two major limitations: (a) the Bayes' rule needed to be derived case by case, and analytic rule for most common cases are not known yet. (b) it is not compatible to modern optimization methods (such as SGD or ADAM) as the optimization is solved analytically for each data point, therefore difficult to cope with large-scale models. Sampling-based Variational inference (SVI), formulates an optimization problem and solves it approximately via stochastic gradient descent (SGD). 
The most popular method among all is, Stochastic Gradient Variational Bayes (SGVB), which approximates L n (φ) by the average of multiple samples (; ;). Before each step of learning or prediction, a number of independent samples of the model parameters {θ s} S s=1 are drawn according to the current estimate of Q, i.e. θ s ∼ Q, by which the predictive function g n (φ) and the loss L n (φ) can be approximated by where f n (θ) = Pr[y n |x n, θ] denotes the predictive function given a specific realization θ of the model parameters. The gradients of L n (φ) can now be approximated as This approach has multiple drawbacks: (a) Repeated sampling suffers from high variance, besides being computationally expensive in both learning and prediction phases; (b) While g n (φ) is differentiable w.r.t. φ, f n (θ) may not be differentiable w.r.t. θ. One such example is quantized neural networks, whose backpropagation is approximated by straight through estimator (Our approach considers a wider scope of problem settings, where the model could be stochastic, i.e.] is an arbitrary function. considers the case that all parameters θ are Gaussian distributed, whose sampling-free probabilistic propagation requires complicated approximation . Quantized Neural Networks These models can be categorized into two classes: Partially quantized networks, where only weights are discretized ; Fully quantized networks, where both weights and hidden units are quantized (; ; ; ;). While both classes provide compact size, low-precision neural network models, fully quantized networks further enjoy fast computation provided by specialized bit-wise operations. In general, quantized neural networks are difficult to train due to their non-differentiability. Gradient descent by backpropagation is approximated by either straight-through estimators or probabilistic methods (; ;). Unlike these papers, we focus on Bayesian learning of fully quantized networks in this paper. Optimization of quantized neural networks typically requires dedicated loss function, learning scheduling and initialization. For example, considers pre-training of a continuous-valued neural network as the initialization. Since our approach considers learning from scratch (with an uniform initialization), the performance could be inferior to prior works in terms of absolute accuracy. Tensor Networks and Tensorial Neural Networks Tensor networks (TNs) are widely used in numerical analysis , quantum physiscs (Orús, 2014), and recently machine learning (; 2017) to model interactions among multi-dimensional random objects. Various tensorial neural networks (TNNs) have been proposed that reduce the size of neural networks by replacing the linear layers with TNs. Recently, points out the duality between probabilistic graphical models (PGMs) and TNs. I.e. there exists a bijection between PGMs and TNs. Our paper advances this line of thinking by connecting hierarchical Bayesian models (e.g. Bayesian neural networks) and hierarchical TNs. The problem settings of general Bayesian model and Bayesian neural networks for supervised learning are illustrated in Figures 3a and 3b using graphical models.... General Bayesian model Formally, the graphical model in Figure 3a implies the joint distribution of the model parameters θ, the observed dataset D = {(x n, y n)} N n=1 and any unseen data point (x, y) is factorized as follows: [y|x, θ]. 
In other words, we assume that the samples (x n, y n)'s (and unseen data point (x, y)) are are identical and independent distributed according to the same data distribution; and x n (or x) and θ together predict the output y n (or y) according to the same conditional distribution. Notice that the factorization above also implies the following equations: With these implications, the posterior predictive distribution Pr[y|x, D] can now expanded as: dθ where we approximate the posterior distribution Pr[θ|D] by a parameterized distribution Q(θ; φ). const. where L n (φ) is the expected log-likelihood, which reflects the predictive performance of the Bayesian model on the data point (x n, y n); and R(φ) is the KL-divergence between Q(θ; φ) and its prior Pr[θ], which reduces to entropy H(Q) if the prior of θ follows a uniform distribution. Hierarchical Bayesian Model A Bayesian neural network can be considered as a hierarchical Bayesian model depicted in Figure 3b, which further satisfies the following two assumptions: Assumption B.1 (Independence of Model Parameters θ (l) ). The approximate posterior Q(θ; φ) over the model parameters θ are partitioned into L disjoint and statistically independent layers {θ (l) } L−1 l=0 (where each φ (l) parameterizes θ (l) in the l-th layer) such that: satisfy the Markov property that h (l+1) depends on the input x only through its previous layer h (l): where we use short-hand notations h (: l) and θ (: l) to represent the sets of previous layers {h (k) } l k=0 and {θ (k) } l k=0. For consistency, we denote h = x and h (L) = y. Proof of probabilistic prorogation Based on the two assumptions above, we provide a proof for probabilistic propagation in Equation as follows: C ALTERNATIVE EVIDENCE LOWER BOUND AND ITS ANALYTIC FORMS C.1 ALTERNATIVE EVIDENCE LOWER BOUND (PROOF FOR THEOREM 3.1) The steps to prove the inequality almost follow the ones for probabilistic propagation above: where the key is the Jensen's inequality is not random variable (typical for an output layer), L n (φ) can be simplified as: where we write Pr[(L−1) can be obtained by differentiating over Equation, while other gradients ∂L n (φ)/φ (: L−2) further obtained by chain rule: which requires us to compute ∂L n (φ)/∂ψ (L−1) and ∂ψ can be derived from Equation, ∂ψ (L−1) /∂φ (: L−2) can be obtained by backpropagating outputs of the (L − 2) th layer obtained from probabilistic propagation in Equation. In other words: is a function of all parameters from previous layers φ (: L−2), and if each step can be obtained by iterative chain rule. In this part, we first prove the alternative evidence lower bound (ELBO) for Bayesian neural networks with softmax function as their last layers. Subsequently, we derive the corresponding backpropagation rule for the softmax layer. Finally, we show a method based on Taylor's expansion to approximately evaluate a softmax layer without Monte Carlo sampling. Theorem C.1 (Analytic Form of L n (φ) for Classification). Let h ∈ R K (with K the number of classes) be the pre-activations of a softmax layer (a.k.a. logits), and φ = s ∈ R + be a scaling factor that adjusts its scale such that are pairwise independent (which holds under mean-field approximation) and ) and s is a deterministic parameter. Then L n (φ) can be further upper bound by the following analytic form: Proof. The lower bound follows by plugging Pr [y|h, s] and Pr[h k |x] into Equation. where the last equation follows where the under-braced term is unity since it takes the form of Gaussian distribution. 
From Equation to, we use the Jensen's inequality to achieve a lower bound for integral of log-sum-exp. The bound can be tighten with advanced techniques in. Derivatives of L n (φ) in To use probabilistic backpropagation to obtain the gradients w.r.t. the parameters from previous layers, we first need to obtain the derivatives w.r.t. Furthermore, the scale s can be (optionally) updated along with other parameters using the gradient Prediction with Softmax Layer Once we learn the parameters for the Bayesian neural network, in principle we can compute the predictive distribution of y by evaluating the following equation: where we denote the softmax function as Unfortunately, the equation above can not be computed in closed form. The most straight-forward work-around is to approximate the integral by Monte Carlo sampling: for each h k we draw S samples {h independently and compute the prediction: Despite its conceptual simplicity, Monte Carlo method suffers from expensive computation and high variance in estimation. Instead, we propose an economical estimate based on Taylor's expansion. First, we expand the function c (h) by Taylor's series at the point µ (up to the second order): Before we derive the forms of these derivatives, we first show the terms of odd orders do not contribute to the expectation. For example, if c (h) is approximated by its first two terms (i.e. a linear function), Equation can be written as where the second term is zero by the symmetry of Pr[h k |x] around µ k (or simply the definition of µ k 's). Therefore, the first-order approximation exactly in a (deterministic) softmax function of the mean vector µ. In order to incorporate the variance into the approximation, we will need to derive the exact forms of the derivatives of c (h). Specifically, the first-order derivatives are obtained from the definition of c (h). and subsequently the second-order derivatives from the first ones: with these derivatives we can compute the second-order approximation as The equation above can be further written in vector form as: In this part, we develop an alternative evidence lower bound (ELBO) for Bayesian neural networks with Gaussian output layers, and derive the corresponding gradients for backpropagation. Despite the difficulty to obtain an analytical predictive distribution for the output, we show that its central moments can be easily computed given the learned parameters. Theorem C.2 (Analytic Form of L n (φ) for Regression). Let h ∈ R I be the output of last hidden layer (with I the number of hidden units), and φ = (w, s) ∈ R I × R + be the parameters that define the predictive distribution over output y as Suppose the hidden units {h k} K k=1 are pairwise independent (which holds under mean-field approximation), and each h i has mean µ i and variance ν i, then L n (φ) takes an analytic form: where (Proof. The Equation is obtained by plugging Pr[y|h; w, s] into Equation. where the long summation in the first term can be further simplified with notations of µ and ν: where w •2 denotes element-wise square, i.e. Derivatives of L n (φ) in Equation It is not difficult to show that the gradient of L n (φ) can be backpropagated through the last layer. by computing derivatives of L n (φ) w.r.t. 
µ and ν: Furthermore, the parameters {w, s} can be updated along with other parameters with their gradients: Prediction with Gaussian Layer Once we determine the parameters for the last layer, in principle we can compute the predictive distribution Pr[y|x] for the output y given the input x according to Unfortunately, exact computation of the equation above for arbitrary output value y is intractable in general. However, the central moments of the predictive distribution Pr[y|x] are easily evaluated. Consider we interpret the prediction as y = w h +, where ∼ N (0, s), its mean and variance can be easily computed as Furthermore, if we denote the (normalized) skewness and kurtosis of h i as γ i and κ i: Then the (normalized) skewness and kurtosis of the prediction y are also easily computed with the In this section, we present fast(er) algorithms for sampling-free probabilistic propagation (i.e. evaluating Equation). According to Section 4, we divide this section into three parts, each part for a specific range of fan-in numbers E. If E is small, tensor contraction in Equation is immediately applicable. Representative layers of small E are shortcut layer (a.k.a. skip-connection) and what we name as depth-wise layers. Shortcut Layer With a skip connection, the output h (l+1) is an addition of two previous layers h (l) and h (m). Therefore and the distribution of h (l+1) can be directly computed as Depth-wise Layers In a depth-wise layer, each output unit h Depth-wise layers include dropout layers (where θ (l) are dropout rates), nonlinear layers (where θ (l) are threshold values) or element-wise product layers (where θ (l) are the weights). For both shortcut and depth-wise layers, the time complexity is O(JD 2) since E <= 2. In neural networks, representative layers with medium fan-in number E are pooling layers, where each output unit depends on a medium number of input units. Typically, the special structure of pooling layers allows for faster algorithm than computing Equation directly. Max and Probabilistic Pooling For each output, a max pooling layer picks the maximum value from corresponding inputs, i.e. h i, while a probabilistic pooling layer selects the value the inputs following a categorical distribution, i.e. Pr[h For both cases, the predictive distribution of h (l+1) j can be computed as Prob: P (h where is the culminative mass function of P . Complexities for both layers are O(ID). Average Pooling and Depth-wise Convolutional Layer Both layers require additions of a medium number of inputs. We prove a convolution theorem for discrete random variables and show that discrete Fourier transform (DFT) (with fast Fourier transform (FFT)) can accelerate the additive computation. We also derive its backpropagation rule for compatibility of gradient-based learning. Then C v (f) is the element-wise product of all Fourier transforms C ui (f), i.e. Proof. We only prove the theorem for two discrete random variable, and the extension to multiple variables can be proved using induction. Now consider, where b = b 1 + b 2 and B = B 1 + B 2. Denote the probability vectors of u 1, u 2 and v as P 1 ∈ B1−b1, P 2 ∈ B2−b2 and P ∈ B−b respectively, then the entries in P are computed with P 1 and P 2 by standard convolution as follows: The relation above is usually denoted as P = P 1 * P 2, where * is the symbol for convolution. 
Now define the characteristic functions C, C 1, and C 2 as the discrete Fourier transform (DFT) of the probability vectors P, P 1 and P 2 respectively: where R controls the resolution of the Fourier transform (typically chosen as R = B − b + 1, i.e. the range of possible values). In this case, the characteristic functions are complex vectors of same length R, i.e. C, C 1, C 2 ∈ C R, and we denote the (functional) mappings as C = F(P) and C i = F i (P i). Given a characteristic function, its original probability vector can be recovered by inverse discrete Fourier transform (IDFT): which we denote the inverse mapping as P = F −1 (C) and P i = F −1 i (C i). Now we plug the convolution in Equation into the characteristic function C(f) in (92a) and rearrange accordingly: The equation above can therefore be written as C = C 1 •C 2, where we use • to denote element-wise product. Thus, we have shown summation of discrete random variables corresponds to element-wise product of their characteristic functions. With the theorem, addition of E discrete random variables can be computed efficiently as follows where F denotes the Fourier transforms in Equations (93a) and (93b). If FFT is used in computing all DFT, the computational complexity of Equation is O(ER log R) = O(E 2 D log(ED)) (since R = O(ED)), compared to O(D E) with direct tensor contraction. Backpropagation When fast Fourier transform is used to accelerate additions in Bayesian quantized network, we need to derive the corresponding backpropagation rule, i.e. equations that relate ∂L/∂P to {∂L/∂P i} I i=1. For this purpose, we break the computation in Equation into three steps, and compute the derivative for each of these steps. where in (100b) we use C/C i to denote element-wise division. Since P i lies into real domain, we need to project the gradients back to real number in (100c). Putting all steps together: In this part, we show that Lyapunov central limit approximation (Lyapunov CLT) accelerates probabilistic propagation in linear layers. For simplicity, we consider fully-connected layer in the derivations, but the can be easily extended to types of convolutional layers. We conclude this part by deriving the corresponding backpropagation rules for the algorithm. Linear Layers Linear layers (followed by a nonlinear transformations σ(·)) are the most important building blocks in neural networks, which include fully-connected and convolutional layers. A linear layer is parameterized by a set of vectors θ (l)'s, and maps where u. Let v j = σ(ṽ j) = σ(i∈I(j) θ ji u i ) be an activation of a linear layer followed by nonlinearity σ. Suppose both inputs {u i} i∈I and parameters {θ ji} i∈I(j) have bounded variance, then for sufficiently large |I(j)|, the distribution ofṽ j converges to a Gaussian distribution N (μ j,ν j) with mean and variance as Published as a conference paper at ICLR 2020 for CNN, we use a 4-layers network with two 5 × 5 convolutional layers with 64 channels followed by 2 × 2 average pooling, and two fully-connected layers with 1024 hidden units. For CIFAR10, we evaluate our models on a smaller version of VGG , which consists of 6 convolutional layers and 2 fully-connected layers: 2 x 128C3 -MP2 -2 x 256C3 -MP2 -2 x 512C3 -MP2 -1024FC -SM10. Table 4: Performance of different networks in terms of RMSE. The numbers for BQN are averages over 10 runs with different seeds, the standard deviation are exhibited following the ± sign. The for PBP, EBP are from , and the one for NPN is from . 
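To make the convolution-theorem argument above concrete, the following NumPy sketch adds two independent discrete random variables by multiplying their discrete Fourier transforms elementwise and transforming back; the probability vectors and the padded resolution R are illustrative, and the same construction extends to E variables by multiplying E transforms elementwise before the inverse transform.

import numpy as np

def add_discrete_rvs_fft(P1, P2):
    # P1, P2: probability vectors over consecutive integer supports of lengths n1 and n2.
    # The sum variable has support of length R = n1 + n2 - 1, so both vectors are zero-padded
    # to length R before taking the DFT (this matches choosing the resolution R = B - b + 1).
    R = len(P1) + len(P2) - 1
    C1 = np.fft.rfft(P1, n=R)        # characteristic function of the first variable
    C2 = np.fft.rfft(P2, n=R)        # characteristic function of the second variable
    P = np.fft.irfft(C1 * C2, n=R)   # elementwise product in Fourier domain = convolution
    return np.clip(P, 0.0, None)     # clip tiny negative values caused by floating point error

# Sanity check against direct convolution:
P1 = np.array([0.2, 0.5, 0.3])
P2 = np.array([0.6, 0.4])
assert np.allclose(add_discrete_rvs_fft(P1, P2), np.convolve(P1, P2), atol=1e-10)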
| We propose Bayesian quantized networks, for which we learn a posterior distribution over their quantized parameters. | 1,062 | scitldr
Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we first show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspired by this observation, we propose cascade adversarial training, which transfers the knowledge of the end of adversarial training. We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained. We also propose to utilize embedding space for both classification and low-level (pixel-level) similarity learning to ignore unknown pixel level perturbation. During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial). Experimental show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhances the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks. We show that combining those two techniques can also improve robustness under the worst case black box attack scenario. Injecting adversarial examples during training (adversarial training), BID1 BID3 increases the robustness of a network against adversarial attacks. The networks trained with one-step methods have shown noticeable robustness against onestep attacks, but, limited robustness against iterative attacks at test time. To address this challenge, we have made the following contributions:Cascade adversarial training: We first show that iteratively generated adversarial images transfer well between networks when the source and the target networks are trained with the same training method. Inspired by this observation, we propose cascade adversarial training which transfers the knowledge of the end of adversarial training. In particular, we train a network by injecting iter FGSM images (section 2.1) crafted from an already defended network (a network trained with adversarial training) in addition to the one-step adversarial images crafted from the network being trained. The concept of using already trained networks for adversarial training is also introduced in BID9. In their work, purely trained networks are used as another source networks to generate one-step adversarial examples for training. On the contrary, our cascade adversarial training uses already defended network for iter FGSM images generation. Low level similarity learning: We advance the previous data augmentation approach by adding additional regularization in deep features to encourage a network to be insensitive to adversarial perturbation. In particular, we inject adversarial images in the mini batch without replacing their corresponding clean images and penalize distance between embeddings from the clean and the adversarial examples. There are past examples of using embedding space for learning similarity of high level features like face similarity between two different images BID8 BID7 ). Instead, we use the embedding space for learning similarity of the pixel level differences between two similar images. The intuition of using this regularization is that small difference on input should not drastically change the high level feature representation. 
We train ResNet models BID2 on MNIST BID6 and CIFAR10 dataset BID4 ) using the proposed adversarial training. We first show low level similarity learning improves robustness of the network against adversarial images generated by one-step and iterative methods compared to the prior work. We show that modifying the weight of the distance measure in the loss function can help control trade-off between accuracies for the clean and adversarial examples. Together with cascade adversarial training and low-level similarity learning, we achieve accuracy increase against unknown iterative attacks, but at the expense of decreased accuracy for one-step attacks. We also show our cascade adversarial training and low level similarity learning provide much better robustness against black box attack. One-step fast gradient sign method (FGSM), referred to as "step FGSM", generates adversarial image X adv by adding sign of the gradients w.r.t. the clean image X multiplied by ∈ as shown below BID1: DISPLAYFORM0 One-step target class method generates X adv by subtracting sign of the gradients computed on a target false label as follows: DISPLAYFORM1 We use least likely class y LL as a target class and refer this method as "step ll".Basic iterative method, referred to as "iter FGSM", applies FGSM with small α multiple times. DISPLAYFORM2 We use α = 1, number of iterations N to be min (+ 4, 1.25). Clip X, is elementwise clipping function where the input is clipped to the range [max(0, X −), min(255, X +)].Iterative least-likely class method, referred to as "iter ll", is to apply "step ll" with small α multiple times. DISPLAYFORM3 Carlini and Wagner attack BID0 ) referred to as "CW" solves an optimization problem which minimizes both an objective function f (such that attack is success if and only if f (X adv) < 0) and a distance measure between X adv and X.Black box attack is performed by testing accuracy on a target network with the adversarial images crafted from a source network different from the target network. Lower accuracy means successful black-box attack. When we use the same network for both target and source network, we call this as white-box attack. Adversarial training: is a form of data augmentation where it injects adversarial examples during training. In this method, k examples are taken from the mini batch B (size of m) and the adversarial examples are generated with one of step method. The k adversarial examples replaces the corresponding clean examples when making mini batch. Below we refer this adversarial training method as "Kurakin's". Figure 1: Correlation between adversarial noises from different networks for each. Shaded region shows ± 0.1 standard deviation of each line. Ensemble adversarial training BID9: is essentially the same with the adversarial training, but uses several pre-trained vanilla networks to generate one-step adversarial examples for training. Below we refer this adversarial training method as "Ensemble".3 PROPOSED APPROACH We first show transferability between purely trained networks and adversarially trained networks under black box attack. We use ResNet BID2 ) models for CIFAR10 classification. We first train 20-layer ResNets with different methods (standard training, adversarial training) and use those as target networks. We re-train networks (standard training and adversarial training) with the different initialization from the target networks, and use the trained networks as source networks. 
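A minimal PyTorch sketch of the attack generators defined above ("step FGSM", "step ll" and "iter FGSM"); the pixel range [0, 255], the per-step size alpha = 1 and the iteration count min(epsilon + 4, 1.25 * epsilon) follow the description in the text, while the function and argument names are illustrative.

import torch
import torch.nn.functional as F

def step_fgsm(model, x, y, eps):
    # X_adv = X + eps * sign(grad_X J(X, y_true))
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return torch.clamp(x + eps * grad.sign(), 0, 255).detach()

def step_ll(model, x, eps):
    # X_adv = X - eps * sign(grad_X J(X, y_LL)), with y_LL the least-likely class
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y_ll = logits.argmin(dim=1)
    loss = F.cross_entropy(logits, y_ll)
    grad, = torch.autograd.grad(loss, x)
    return torch.clamp(x - eps * grad.sign(), 0, 255).detach()

def iter_fgsm(model, x, y, eps, alpha=1.0):
    # Apply FGSM with small alpha for N = min(eps + 4, 1.25 * eps) steps, clipping each
    # iterate to the eps-ball around the clean image and to the valid pixel range.
    n_iter = max(1, int(min(eps + 4, 1.25 * eps)))
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # Clip_{X, eps}
        x_adv = torch.clamp(x_adv, 0, 255).detach()
    return x_adv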
Experimental details and model descriptions can be found in Appendix A and B.In table 1, we report test accuracies under black box attack. We first observe that high robustness against one-step attack between defended networks (R20 K2 -> R20 K), and low robustness between undefended networks (R20 2 -> R20). This observation shows that error surfaces of neural networks are driven by the training method and networks trained with the same method end up similar optimum states. It is noteworthy to observe that the accuracies against step attack from the undefended network (R20 2) are always lower than those from defended network (R20 K2). Possible explanation for this would be that adversarial training tweaks gradient seen from the clean image to point toward weaker adversarial point along that gradient direction. As a , one-step adversarial images from defended networks become weaker than those from undefended network. Transferability (iterative attack): We observe "iter FGSM" attack remains very strong even under the black box attack scenario but only between undefended networks or defended networks. This is because iter FGSM noises (X adv -X) from defended networks resemble each other. As shown in figure 1, we observe higher correlation between iter FGSM noises from a defended network (R20 K) and those from another defended network (R20 K2).Difficulty of defense/attack under the black box attack scenario: As seen from this observation, it is efficient to attack an undefended/defended network with iter FGSM examples crafted from another undefended/defended network. Thus, when we want to build a robust network under the black box attack scenario, it is desired to check accuracies for the adversarial examples crafted from other networks trained with the same strategy. Inspired by the observation that iter FGSM images transfer well between defended networks, we propose cascade adversarial training, which trains a network by injecting iter FGSM images crafted from an already defended network. We hypothesize that the network being trained with cascade adversarial training will learn to avoid such adversarial perturbation, enhancing robustness against iter FGSM attack. The intuition behind this proposed method is that we transfer the knowledge of the end of adversarial training. In particular, we train a network by injecting iter FGSM images crafted from already defended network in addition to the one-step adversarial images crafted from the network being trained. We advance the algorithm proposed in by adding low level similarity learning. Unlike, we include the clean examples used for generating adversarial images in the mini batch. Once one step forward pass is performed with the mini batch, embeddings are followed by the softmax layer for the cross entropy loss for the standard classification. At the same time, we take clean embeddings and adversarial embeddings, and minimize the distance between the two with the distance based loss. The distance based loss encourages two similar images (clean and adversarial) to produce the same outputs, not necessarily the true labels. Thus, low-level similarity learning can be considered as an unsupervised learning. By adding regularization in higher embedding layer, convolution filters gradually learn how to ignore such pixel-level perturbation. We have applied regularization on lower layers with an assumption that low level pixel perturbation can be ignored in lower hierarchy of networks. 
However, adding regularization term on higher embedding layer right before the softmax layer showed best performance. The more convolutional filters have chance to learn such similarity, the better the performance. Note that cross entropy doesn't encourage two similar images to produce the same output labels. Standard image classification using cross entropy compares ground truth labels with outputs of a network regardless of how similar training images are. The entire training process combining cascade adversarial training and low level similarity learning is shown in figure 2. We define the total loss as follows: DISPLAYFORM0 are the ing embeddings from X i and X adv i, respectively. m is the size of the mini batch, k (≤ m/2) is the number of adversarial images in the mini batch. λ is the parameter to control the relative weight of classification loss for adversarial images. λ 2 is the parameter to control the relative weight of the distance based loss L dist in the total loss. Bidirectional loss minimizes the distance between the two embeddings by moving both clean and adversarial embeddings as shown in the left side of the figure 3. We tried N = 1, 2 and found not much difference between the two. We report the with N = 2 for the rest of the paper otherwise noted. When N = 2, L dist becomes L2 loss. DISPLAYFORM1 Pivot loss minimizes the distance between the two embeddings by moving only the adversarial embeddings as shown in the right side of the figure 3. DISPLAYFORM2 In this case, clean embeddings (E i) serve as pivots to the adversarial embeddings. In particular, we don't back-propagate through the clean embeddings for the distance based loss. The intuition behind the use of pivot loss is that the embedding from a clean image can be treated as the ground truth embedding. We first analyze the effect of low level similarity learning on MNIST. We train ResNet models BID2 with different methods (standard training, Kurakin's adversarial training and adversarial training with our distance based loss). Experimental details can be found in Appendix A. TAB2 shows the accuracy for MNIST test dataset for different types of attack methods. As shown in the table, our method achieves better accuracy than Kurakin's method for all types of attacks with a little sacrifice on the accuracy for the clean images. Even though adversarial training is done only with "step ll", additional regularization increases robustness against unknown "step FGSM", "iter ll", "iter FGSM" and CW L ∞ attacks. This shows that our low-level similarity learning can successfully regularize the one-step adversarial perturbation and its vicinity for simple image classification like MNIST. To visualize the embedding space, we modify 20-layer ResNet model where the last fully connected layer (64x10) is changed to two fully connected layers (64x2 and 2x10). We re-train networks with standard training, Kurakin's method and our pivot loss on MNIST. 1 In figure 4, we draw embeddings (dimension=2) between two fully connected layers. As seen from this figure, adversarial images from the network trained with standard training cross the decision boundary easily as increases. With Kurakin's adversarial training, the distances between clean and adversarial embeddings are minimized compared to standard training. And our pivot loss further minimizes distance between the clean and adversarial embeddings. 
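The combined objective described above (cross-entropy on clean and adversarial examples plus the pivot distance between their embeddings) can be sketched as follows. The assumption that the model returns both the embedding right before the softmax layer and the logits, and the example values of lambda and lambda2, are illustrative; the essential detail is that the clean embedding is detached so that it acts as a fixed pivot and only the adversarial embedding is pulled toward it.

import torch
import torch.nn.functional as F

def adversarial_pivot_loss(model, x_clean, x_adv, y, lam=0.3, lam2=0.01):
    # model(x) is assumed to return (embedding, logits), where the embedding is the layer
    # feeding the softmax classifier; x_clean and x_adv are paired clean/adversarial batches.
    emb_clean, logits_clean = model(x_clean)
    emb_adv, logits_adv = model(x_adv)

    # Classification terms: clean images keep weight 1, adversarial images weight lam.
    ce_clean = F.cross_entropy(logits_clean, y)
    ce_adv = F.cross_entropy(logits_adv, y)

    # Pivot loss: distance between the two embeddings, with the clean embedding detached so
    # that gradients flow only through the adversarial branch.
    dist = ((emb_adv - emb_clean.detach()) ** 2).sum(dim=1).mean()

    return ce_clean + lam * ce_adv + lam2 * dist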
Note that our pivot loss also decreases absolute value of the embeddings, thus, higher λ 2 will eventually in overlap between distinct embedding distributions. We also observe that intra class variation of the clean embeddings are also minimized for the network trained with our pivot loss as shown in the scatter plot in figure 4 (c).1 Modified ResNet models showed slight decreased accuracy for both clean and adversarial images compared to original ResNet counterparts, however, we observed similar trends (improved accuracy for iterative attacks for the network trained with pivot loss) as in table 2. We train 20-layer ResNet models with pivot loss and various λ 2 s for CIFAR10 dataset to study effects of the weight of the distance measure in the loss function. Figure 5 shows that a higher λ 2 increases accuracy of the iteratively generated adversarial images. However, it reduces accuracy on the clean images, and increasing λ 2 above 0.3 even in divergence of the training. This is because embedding distributions of different classes will eventually overlap since absolute value of the embedding will be decreased as λ 2 increases as seen from the section 4.2. In this experiment, we show that there exists clear trade-off between accuracy for the clean images and that for the adversarial images, and we recommend using a very high λ 2 only under strong adversarial environment.5 CASCADE ADVERSARIAL TRAINING ANALYSIS We further study the transferability of iter FGSM images between various architectures. To this end, we first train 56-layer ResNet networks (Kurakin's, pivot loss) with the same initialization. Then we train another 56-layer ResNet network (Kurakin's) with different initialization. We repeat the training for the 110-layer ResNet networks. We measure correlation between iter FGSM noises from different networks. Figure 6 (a) shows correlation between iter FGSM noises crafted from Kurakin's network and those from Pivot network with the same initialization. Conjectured from, we observe high corre- lation between iter FGSM noises from networks with the same initialization. Correlation between iter FGSM noises from the networks with different initialization, however, becomes lower as the network is deeper as shown in figure 6 (b). Since the degree of freedom increases as the network size increases, adversarially trained networks prone to end up with different states, thus, making transfer rate lower. To maximize the benefit of the cascade adversarial training, we propose to use the same initialization for a cascade network and a source network used for iterative adversarial examples generation. We first compare a network trained with Kurakin's method and that with pivot loss. We train 110-layer ResNet models with/without pivot loss and report accuracy in table 3. We observe our lowlevel similarity learning further improves robustness against iterative attacks compared to Kurakin's adversarial training. However, the accuracy improvements against iterative attacks (iter FGSM, CW) are limited, showing regularization effect of low-level similarity learning is not sufficient for the iterative attacks on complex color images like CIFAR10. This is different from MNIST test cases where we observed significant accuracy increase for iterative attacks only with pivot loss. We observe label leaking phenomenon reported in happens even though we don't train a network with step FGSM images. 
Additional analysis for this phenomenon is explained in Appendix D.Next, we train a network from scratch with iter FGSM examples crafted from the defended network, R110 P. We use the same initialization used in R110 P as discussed in 5.1. In particular, iter FGSM images are crafted from R110 P with CIFAR10 training images for = 1,2,..., 16, and those are used randomly together with step ll examples from the network being trained. We train cascade networks with/without pivot loss. We also train networks with ensemble adversarial training BID9 with/without pivot loss for comparison. The implementation details for the trained models can be found in Appendix B.We find several meaningful observations in table 3. First, ensemble and cascade models show improved accuracy against iterative attack although at the expense of decreased accuracy for onestep attacks compared to the baseline defended network (R110 K). Additional data augmentation from other networks enhances the robustness against iterative attack, weakening label leaking effect caused by one-step adversarial training. Second, our low-level similarity learning (R110 P,E, R110 P,C) further enhances robustness against iterative attacks including fully unknown CW attack (especially for =4). Additional knowledge learned from data augmentation through cascade/ensemble adversarial training enables networks to learn partial knowledge of perturbations generated by an iterative method. And the learned iterative perturbations become regularized further with our low-level similarity learning making networks robust against unknown iterative attacks. During this process, clean embeddings from other classes might also be moved toward the decision boundary which in decreased accuracy for the clean images. We finally perform black box attack analysis for the cascade/ensemble networks with/without pivot loss. We report black box attack accuracy with the source networks trained with the same method, but with different initialization from the target networks. The reason for this is adversarial examples transfer well between networks trained with the same strategy as observed in section 3.1. We re-train 110-layer ResNet models using Kurakin's, cascade and ensemble adversarial training with/without low-level similarity learning and use those networks as source networks for black-box attacks. Baseline 110-layer ResNet model is also included as a source network. Target networks are the same networks used in table 3. We found iter FGSM attack ed in lower accuracy than step FGSM attack, thus, report iter FGSM attack only in table 4.We first observe that iter FGSM attack from ensemble models (R110 E2, R110 P,E2) is strong ( in lower accuracy) compared to that from any other trained networks.2 Since ensemble models learn various perturbation during training, adversarial noises crafted from those networks might be more general for other networks making them transfer easily between defended networks. Second, cascade adversarial training breaks chicken and egg problem. (In section 3.1, we found that it is efficient to use a defended network as a source network to attack another defended network.) Even though the transferability between defended networks is reduced for deeper networks, cascade network (R110 K,C) shows worst case performance against the attack not from a defended network, but from a purely trained network (R110 2). 
Possible solution to further improve the worst case robustness would be to use more than one network as source networks (including pure/defended networks) for iter FGSM images generation for cascade adversarial training. Third, ensemble/cascade networks together with our low-level similarity learning (R110 P,E, R110 P,C) show better worst case accuracy under black box attack scenario. This shows that enhancing robustness against iterative white box attack also improves robustness against iterative black box attack. We performed through transfer analysis and showed iter FGSM images transfer easily between networks trained with the same strategy. We exploited this and proposed cascade adversarial training, a method to train a network with iter FGSM adversarial images crafted from already defended networks. We also proposed adversarial training regularized with a unified embedding for classification and low-level similarity learning by penalizing distance between the clean and their corresponding adversarial embeddings. Combining those two techniques (low level similarity learning + cascade adversarial training) with deeper networks further improved robustness against iterative attacks for both white-box and black-box attacks. However, there is still a gap between accuracy for the clean images and that for the adversarial images. Improving robustness against both one-step and iterative attacks still remains challenging since it is shown to be difficult to train networks robust for both one-step and iterative attacks simultaneously. Future research is necessary to further improve the robustness against iterative attack without sacrificing the accuracy for step attacks or clean images under both white-box attack and black-box attack scenarios. We perform 24x24 random crop and random flip on 32x32 original images. We generate adversarial images with "step ll" after these steps otherwise noted. We use stochastic gradient descent (SGD) optimizer with momentum of 0.9, weight decay of 0.0001 and mini batch size of 128. For adversarial training, we generate k = 64 adversarial examples among 128 images in one mini-batch. We start with a learning rate of 0.1, divide it by 10 at 4k and 6k iterations, and terminate training at 8k iterations for MNIST, and 48k and 72k iterations, and terminate training at 94k iterations for CIFAR10. Ensemble models Pre-trained models R20 E, R20 P,E, R110 E, R110 P,E R20 3, R110 3 R110 E2, R110 P,E2 R20 4, R110 4 Cascade models Pre-trained model R20 K,C, R20 P,C R20 P R110 K,C, R110 P,C R110 P R110 K,C2, R110 P,C2R110 P Figure 7: Argument to the softmax vs. in test time. "step ll", "step FGSM" and "random sign" methods were used to generate test-time adversarial images. Arguments to the softmax were measured by changing for each test method and averaged over randomly chosen 128 images from CIFAR10 test-set. Blue line represents true class and the red line represents mean of the false classes. Shaded region shows ± 1 standard deviation of each line. We draw average value of the argument to the softmax layer for the true class and the false classes to visualize how the adversarial training works as in figure 7. Standard training, as expected, shows dramatic drop in the values for the true class as we increase in "step ll" or "step FGSM direction. With adversarial training, we observe that the value drop is limited at small and our method even increases the value in certain range upto =10. Note that adversarial training is not the same as the gradient masking. 
As illustrated in figure 7, it exposes gradient information, however, quickly distort gradients along the sign of the gradient ("step ll" or "step FGSM) direction. We also observe improved (broader margins than baseline) for "random sign" added images even though we didn't inject random sign added images during training. Overall shape of the argument to the softmax layer in our case becomes smoother than Kurakin's method, suggesting our method is good for pixel level regularization. Even though actual value of the embeddings for the true class in our case is smaller than that in Kurakin's, the standard deviation of our case is less than Kurakin's, making better margin between the true class and false classes. We observe accuracies for the "step FGSM" adversarial images become higher than those for the clean images ("label leaking" phenomenon) by training with "step FGSM" examples as in. Interestingly, we also observe "label leaking" phenomenon even without providing true labels for adversarial images generation. We argue that "label leaking" is a natural of the adversarial training. To understand the nature of adversarial training, we measure correlation between gradients w.r.t. different images (i.e. clean vs. adversarial) as a measure of error surface similarity. We measure correlation between gradients w.r.t. clean vs. "step ll" image, clean vs. "step FGSM" image, clean vs. "random sign" added image, and "step ll" image vs. "step FGSM" image for three trained networks (a) R20, (b) R20 K and (c) R20 P (Ours) in TAB10 shows that black box attack between trained networks with the same initialization tends to be more successful than that between networks with different initialization as explained in. In table 9, our method (R20 P 2) is always better at one-step and iterative black box attack from defended networks (R20 K, R20 P) and undefended network (R20) than Kurakin's method (R20 B2). However, it is hard to tell which method is better than the other one as explained in the main paper. In table 10, we show black box attack accuracies with the source and the target networks switched from the table 4. We also observe that networks trained with both low-level similarity learning and cascade/ensemble adversarial training (R110 P,C2, R110 P,E2) show better worst-case performance than other networks. Overall, iter FGSM images crafted from ensemble model families (R110 E, R110 P,E) remain strong on the defended networks. such that X + δ ∈ n where, the function f is defined such that attack is success if and only if f (X + δ) < 0, δ is the target perturbation defined as X adv −X, c is the parameter to control the relative weight of function f in the total cost function, and τ is the control threshold used to penalize any terms that exceed τ.Since CW L ∞ attack is computationally expensive, we only use 100 test examples (10 examples per each class). We search adversarial example X adv with c ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10, 20} and τ ∈ {0.02, 0.04, ..., 0.6} for MNIST and c ∈ {0.1, 0.3, 1, 3, 10, 30, 100} and τ ∈ {0.001, 0.002, ..., 0.01, 0.012, ..., 0.02, 0.024, ..., 0.04, 0.048, ..., 0.08} for CIFAR10. We use Adam optimizer with an initial learning rate of 0.01/c since we found constant initial learning rate for c · f (X + δ) term is critical for successful adversarial images generation. We terminate the search after 2,000 iterations for each X, c and τ. If f (X + δ) < 0 and the ing ||δ|| ∞ is lower than the current best distance, we update X adv. 
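A sketch of the CW L-infinity search just described: for each (c, tau) pair, the perturbation delta is optimized with Adam at learning rate 0.01/c for a fixed number of steps, and the best adversarial example is kept whenever the attack objective is negative and the L-infinity norm of delta improves. The exact per-pixel penalty shown here (a hinge on |delta_i| - tau) and the assumption that f_obj is a differentiable scalar objective are illustrative simplifications, not a verbatim reproduction of the original procedure.

import torch

def cw_linf_search(f_obj, x, c_list, tau_list, num_steps=2000):
    # f_obj(x_adv) < 0 is taken to mean the attack succeeded; the penalty term discourages
    # any pixel of delta from exceeding tau in magnitude.
    best_adv, best_dist = None, float('inf')
    for c in c_list:
        for tau in tau_list:
            delta = torch.zeros_like(x, requires_grad=True)
            opt = torch.optim.Adam([delta], lr=0.01 / c)
            for _ in range(num_steps):
                loss = torch.clamp(delta.abs() - tau, min=0).sum() + c * f_obj(x + delta)
                opt.zero_grad()
                loss.backward()
                opt.step()
            with torch.no_grad():
                linf = delta.abs().max().item()
                if f_obj(x + delta).item() < 0 and linf < best_dist:
                    best_adv, best_dist = (x + delta).clone(), linf
    return best_adv, best_dist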
FIG5 shows cumulative distribution function of for 100 successful adversarial examples per each network. We report the number of adversarial examples with > 0.3*255 for MNIST and that with > 2 or 4 for CIFAR10. As seen from this figure, our approaches provide robust defense against CW L ∞ attack compared to other approaches. | Cascade adversarial training + low level similarity learning improve robustness against both white box and black box attacks. | 1,063 | scitldr |
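A sketch of how a cascade training mini-batch can be assembled as described in the cascade analysis above: k images receive an adversarial counterpart drawn either from a cache of iter FGSM images pre-generated by the already defended source network or from "step ll" images generated on the fly by the network being trained, and the clean images are kept in the batch so that clean/adversarial pairs remain available for the pivot distance term. The cache layout, the 50/50 mixing probability and the function names are illustrative assumptions.

import torch

def build_cascade_batch(model, x_clean, y, cached_iter_fgsm, eps_choices, k, step_ll_fn):
    # cached_iter_fgsm[eps] holds iter FGSM images pre-generated from an already defended
    # source network (same initialization as the network being trained) for each eps.
    idx = torch.randperm(x_clean.size(0))[:k]
    x_adv = x_clean[idx].clone()
    for j, i in enumerate(idx.tolist()):
        eps = eps_choices[torch.randint(len(eps_choices), (1,)).item()]
        if torch.rand(1).item() < 0.5:
            x_adv[j] = cached_iter_fgsm[eps][i]                    # from the defended source network
        else:
            x_adv[j] = step_ll_fn(model, x_clean[i:i + 1], eps)[0]  # from the network in training
    # Clean images are not replaced: both versions go into the batch, and idx records the pairing
    # needed for the pivot distance term.
    return torch.cat([x_clean, x_adv], dim=0), torch.cat([y, y[idx]], dim=0), idx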
Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities ``solve'' the exploding gradient problem, we show that this is not the case and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice. We explain why exploding gradients occur and highlight the {\it collapsing domain problem}, which can arise in architectures that avoid exploding gradients. ResNets have significantly lower gradients and thus can circumvent the exploding gradient problem, enabling the effective training of much deeper networks, which we show is a consequence of a surprising mathematical property. By noticing that {\it any neural network is a residual network}, we devise the {\it residual trick}, which reveals that introducing skip connections simplifies the network mathematically, and that this simplicity may be the major cause for their success. Arguably, the primary reason for the recent success of neural networks is their "depth", i.e. their ability to compose and jointly train nonlinear functions so that they co-adapt. A large body of work has detailed the benefits of depth (e.g. ; BID13 ; BID9 ; ; ;).The exploding gradient problem has been a major challenge for training very deep feedforward neural networks at least since the advent of gradient-based parameter learning . In a nutshell, it describes the phenomenon that as the gradient is backpropagated through the network, it may grow exponentially from layer to layer. This can, for example, make the application of vanilla SGD impossible for networks beyond a certain depth. Either the step size is too large for updates to lower layers to be useful or it is too small for updates to higher layers to be useful. While this intuitive notion is widely understood, there are important gaps in the foundational understanding of this phenomenon. In this paper, we take a significant step towards closing those gaps. To begin with, there is no well-accepted metric for determining the presence of pathological exploding gradients. Should we care about the length of the gradient vector? Should we care about the size of individual components of the gradient vector? Should we care about the eigenvalues of the Jacobians of individual layers? Depending on the metric used, different strategies arise for combating exploding gradients. For example, manipulating the width of layers a suggested by e.g. BID3; can greatly impact the size of gradient vector components but leaves the length of the gradient vector relatively unchanged. The underlying problem is that it is unknown whether exploding gradients according to any of these metrics necessarily lead to training difficulties. There is a large body of evidence that gradient explosion defined by some metric when paired with some optimization algorithm on some architectures and datasets is associated with poor (e.g. ;). But, can we make general statements about entire classes of algorithms and architectures?Algorithms such as RMSprop , Adam or vSGD are light modifications of SGD that rescale different parts of the gradient vector and are known to be able to lead to improved training outcomes. This raises an another important unanswered question. 
Are exploding gradients merely a numerical quirk to be overcome by simply rescaling different parts of the gradient vector or are they reflective of an inherently difficult optimization problem that cannot be easily tackled by simple modifications to a stock algorithm?It has become a common notion that techniques such as introducing normalization layers (e.g. , BID6, BID12,) or careful initial scaling of weights (e.g. , BID14, ,) largely eliminate exploding gradients by stabilizing forward activations. This notion was espoused in landmark papers. The paper that introduced batch normalization states:In traditional deep networks, too-high learning rate may in the gradients that explode or vanish, as well as getting stuck in poor local minima. Batch Normalization helps address these issues. The paper that introduced ResNet (b) states:Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients, which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization and intermediate normalization layers,... We argue that these claims are overly optimistic. While scaling weights or normalizing forward activations can reduce gradient growth defined according to certain metrics in certain situations, these techniques are not effective in general and can cause other problems even when they are effective. We intend to add nuance to these ideas which have been widely adopted by the community (e.g. BID12 ; BID7). In particular, we intend to correct the misconception that stabilizing forward activations is sufficient for avoiding exploding gradients (e.g.).ResNet (b) and other neural network architectures utilizing skip connections (e.g. ,) have been highly successful recently. While the performance of networks without skip connections starts to degrade when depth is increased beyond a certain point, the performance of ResNet continues to improve until a much greater depth is reached. While favorable changes to properties of the gradient brought about by the introduction of skip connections have been demonstrated for specific architectures (e.g. ; BID7), a general explanation for the power of skip connections has not been given. Our contributions are as follows:1. We introduce the'gradient scale coefficient' (GSC), a novel measurement for assessing the presence of pathological exploding gradients (section 2). It is robust to confounders such as network scaling (section 2) and layer width (section 3) and can be used directly to show that training is difficult (section 4). Therefore, we propose the unification of research on the exploding gradient problem under this metric.2. We demonstrate that exploding gradients are in fact present in a variety of popular MLP architectures, including architectures utilizing techniques that supposedly combat exploding gradients. We show that introducing normalization layers may even exacerbate the exploding gradient problem (section 3).3. We show that exploding gradients as defined by the GSC are not a numerical quirk to be overcome by rescaling different parts of the gradient vector, but are indicative of an inherently complex optimization problem and that they limit the depth to which MLP archi-tectures can be effectively trained, rendering very deep MLPs effectively much shallower (section 4). To our knowledge, this is the first time such a link has been established.4. 
For the first time, we show why exploding gradients are likely to occur in deep networks even when the forward activations do not explode (section 5). We argue that this is a fundamental reason for the difficulty of constructing very deep trainable networks.5. For the first time, we define the'collapsing domain problem' for training very deep feedforward networks. We show how this problem can arise precisely in architectures that avoid exploding gradients via careful initial scaling of weights and that it can be at least as damaging to the training process (section 6).6. For the first time, we show that the introduction of skip connections has a strong gradientreducing effect on deep network architectures in general. We detail the surprising mathematical relationship that makes this possible (section 7).7. We introduce the'residual trick' (section 4), which reveals that ResNets are a mathematically simpler version of networks without skip connections and thus approximately achieve what we term the'orthogonal initial state'. This provides, we argue, the major reason for their superior performance at great depths as well as an important criterion for neural network design in general (section 7).In section 8, we conclude and derive practical recommendations for designing and training deep networks as well as key implications of our work for deep learning research. In the appendix in section B, we provide further high-level discussion. In section B.1, we discuss related work including the relationship of exploding gradients with other measures of network trainability, such as eigenspectrum analysis , shattering gradients BID7, trajectory lengths , covariate shift (e.g. ) and Hessian conditioning (e.g. BID12). Recently, the behavior of neural networks at great depth was analyzed using mean field theory (; ; ; BID3 and dynamical systems theory BID10 BID0 . We discuss these lines of work in relation to this paper in sections B.1.1 and B.1.2 respectively. We discuss the implications of our work for the vanishing gradient problem in section B.2. We compare the exploding gradient problem as it occurs in feedforward networks to the exploding and vanishing gradient problems in RNNs (e.g.) in section B.3. In section B.4, we highlight open research questions and potential future work. For the purpose of this paper, we define a neural network f as a succession of layers f l, 0 ≤ l ≤ L, where each layer is a vector-to-vector transformation. We assume a prediction framework, where the'prediction layer' f 1 is considered to output the prediction of the network and the goal is to minimize the value of the error layer f 0 over the network's prediction and the true label y, summed over some dataset D.arg min θ E, where E = 1 |D| (x,y)∈D f 0 (y, f 1 (θ 1, f 2 (θ 2, f 3 (..f L (θ L, x)..))))Note that in contrast to standard notation, we denote by f L the lowest layer and by f 0 the highest layer of the network as we are primarily interested in the direction of gradient flow. Let the dimensionality / width of layer l be d l with d 0 = 1 and the dimensionality of the data input x be d. Each layer except f 0 is associated with a parameter sub-vector θ l that collectively make up the parameter vector θ = (θ 1, .., θ L). This vector represents the trainable elements of the network. Depending on the type of the layer, the sub-vector might be empty. For example, a layer composed of tanh nonlinearities has no trainable elements, so its parameter sub-vector is empty. We call these layers'unparametrized'. 
In contrast, a fully-connected linear layer has trainable weights, which are encompassed in the parameter sub-vector. We call these layers'parametrized'.We say a network that has layers f 0 through f L has'nominal depth' L. In contrast, we say the'compositional depth' is equal to the number of parametrized layers in the network, which is the quantity that is commonly referred to as "depth". For example, a network composed of three linear layers, two tanh layers and a softmax layer has nominal depth 6, but compositional depth 3.Let the'quadratic expectation' Q of a random variable X be defined as Q[X] = E[X 2] 1 2, i.e. the generalization of the quadratic mean to random variables. Similarly, let the'inverse quadratic expectation' Q −1 of a random variable X be defined as DISPLAYFORM0 However, this notion is insufficient because we can construct networks that can be trained successfully yet have Jacobians that grow exponentially at arbitrary rates. In a nutshell, all we have to do to construct such a network is to take an arbitrary network of desired depth that can be trained successfully and scale each layer function f l and each parameter sub-vector θ l by R −l for some constant R > 1. During training, all we have to do to correct for this change is to scale the gradient sub-vector corresponding to each layer by R −2l.Proposition 1. Consider any r > 1 and any neural network f which can be trained to some error level in a certain number of steps by some gradient-based algorithm. There exists a network f that can also be trained to the same error level as f and to make the same predictions as f in the same number of steps by the same algorithm, and has exponentially growing Jacobians with rate r. (See section E.1 for details.)Therefore, we need a definition of'exploding gradients' different from'exponentially growing Jacobians' if we hope to derive from it that training is intrinsically hard and not just a numerical issue to be overcome by gradient rescaling. Note that all propositions and theorems are stated informally in the main body of the paper, for the purpose of readability and brevity. In the appendix in sections E and F respectively, they are re-stated in rigorous terms, proofs are provided and the practicality of conditions is discussed. In this section, we outline our definition of'exploding gradients' which can be used to show hardness of training. It does not suffer from the confounding effect outlined in the previous section. Definition 1. Let the'quadratic mean norm' or'qm norm' of an m × n matrix A be the quadratic mean of its singular values where the sum of squares is divided by its right dimension n. If s 1, s 2,.., s min(m,n) are the singular values of A, we write: n An equivalent definition would be ||A|| qm = Q u ||Au|| 2, where u is a uniformly random unit length vector. In plain language, it measures the expected impact the matrix has on the length of a vector with uniformly random orientation. The qm norm is closely related to the L 2 norm via √ n||A|| qm = ||A|| 2. We use ||.|| 2 to denote the L 2 norm of both vectors and matrices. Definition 2. Let the'gradient scale coefficient (GSC)' for 0 ≤ l ≤ k ≤ L be as follows: DISPLAYFORM0 GSC(k, l, f, θ, x, y) = ||J Definition 3. 
We say that the network f (θ) has'exploding gradients with rate r and intercept c' at some point (x, y) if for all k and l we have GSC(k, l, f, θ, x, y) ≥ cr k−l, and in particular GSC(l, 0, f, θ, x, y) ≥ cr l.Of course, under this definition, any network of finite depth has exploding gradients for sufficiently small c and r. There is no objective threshold for c and r beyond which exploding gradients become pathological. Informally, we will say that a network has'exploding gradients' if the GSC can be well-approximated by an exponential function. The GSC combines the norm of the Jacobian with the ratio of the lengths of the forward activation vectors. In plain language, it measures the size of the gradient flowing backward relative to the size of the activations flowing forward. Equivalently, it measures the relative sensitivity of layer l with respect to small random changes in layer k. Proposition 2. GSC(k, l) measures the quadratic expectation of the relative size of the change in the value of f l in response to a small random change in f k. (See section E.2 for details.)What about the sensitivity of layers with respect to the parameter? For fully-connected linear layers, we obtain a similar relationship. Proposition 3. When f k is a fully-connected linear layer without trainable bias parameters and θ k contains the entries of the weight matrix, GSC(k, l) DISPLAYFORM1 measures the quadratic expectation of the relative size of the change in the value of f l in response to a small random change in θ k. Further, if the weight matrix is randomly initialized, Q DISPLAYFORM2 For reasons of space and mathematical simplicity, we focus our analysis for now on multi-layer perceptrons (MLPs) which are comprised only of fully-connected linear layers with no trainable bias parameters, and unparametrized layers. Therefore we also do not use trainable bias and variance parameters in the normalization layers. Note that using very deep MLPs with some architectural limitations as a testbed to advance the study of exploding gradients and related problems is a wellestablished practice (e.g. BID7 ;)., we focus on training error rather than test error in our analysis as we do not consider the issue of generalization. While exploding gradients have important implications for generalization, this goes beyond the scope of this paper. In section 2.2, we showed that we can construct trainable networks with exponentially growing Jacobians by simple multiplicative rescaling of layers, parameters and gradients. Crucially, the GSC is invariant to this rescaling as it affects both the forward activations and the Jacobian equally, so the effects cancel out. In this section, we show that exploding gradients exist in a range of popular MLP architectures. Consider the decomposability of the GSC. Proposition 5. Assuming the approximate decomposability of the norm of the product of Jaco- DISPLAYFORM0 This suggests that as long as the GSC of individual layers is approximately r > 1, we may have an exponential growth of GSC(k, l) in k − l. In FIG3, we show GSC(l, 0) for seven MLP architectures. A linear layer is followed by (i) a ReLU nonlinearity ('ReLU'), (ii) layer normalization BID6 followed by a ReLU nonlinearity ('layer-ReLU'), (iii) batch normalization plus ReLU ('batch-ReLU'), (iv) tanh, (v) layer norm plus tanh ('layer-tanh'), (vi) batch norm plus tanh ('batch-tanh'), (vii). All networks have compositional depth 50 (i.e. 50 linear layers) and each layer has 100 neurons. 
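Following the sensitivity interpretation of the GSC (Proposition 2), the quantity GSC(l, 0) plotted in FIG3 can be estimated numerically without forming Jacobians explicitly: perturb the activation produced by a given layer by a small random vector of fixed relative size and record the relative change of the error. The sketch below assumes the network is given as a list of callables applied from the input side and that error_fn returns a scalar; both are illustrative conventions rather than the paper's notation.

import torch

def estimate_gsc_to_error(layers, error_fn, x, y, j, eps=1e-3, num_probes=16):
    # layers: callables applied in order from the input side; error_fn(pred, y) plays the role
    # of the error layer f_0. Estimates the relative sensitivity of the error to a small random
    # relative perturbation of the activation produced by layers[j], i.e. GSC(l, 0) with f_l
    # being that layer. Values growing exponentially as j moves toward the input indicate
    # exploding gradients in the sense of Definition 3.
    with torch.no_grad():
        h = x
        for f in layers[:j + 1]:
            h = f(h)                     # activation whose influence on the error is probed
        tail = layers[j + 1:]
        def run_tail(a):
            for f in tail:
                a = f(a)
            return error_fn(a, y)
        base = run_tail(h)
        sq = []
        for _ in range(num_probes):
            u = torch.randn_like(h)
            u = u / u.norm()
            pert = run_tail(h + eps * h.norm() * u)
            sq.append(((pert - base) / (eps * base.abs())) ** 2)
        return torch.stack(sq).mean().sqrt()   # quadratic mean of relative error changes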
Both data input and labels are Gaussian noise and the error layer computes the dot product between the label and the prediction. The entries of weight matrices are dawn from independent Gaussian distributions with mean zero. Weight matrix entries for ReLU architectures are initialized with variance 2 100 as suggested by , weight matrix entries for tanh architectures with variance 1 100 as suggested by and BID14, and weight matrix entries for SeLU architectures with variance 1 100 as suggested by. For further experimental details, see section I.We find that in four architectures (batch-ReLU, layer-tanh, batch-tanh and SeLU), GSC(l, 0) grows almost perfectly linearly in log-space. This corresponds to gradient explosion. We call those architectures'exploding architectures'. Among these architectures, a range of techniques that supposedly reduce or eliminate exploding gradients are used: careful initial scaling of weights, normalization layers, SeLU nonlinearities. Adding normalization layers may even bring about or exacerbate exploding gradients. The exploding architectures have all been designed to have stable forward activations and they exhibit gradient explosion under any reasonable metric. In light of proposition 4, it is not surprising that these techniques are not effective in general at combating exploding gradients as defined by the GSC, as this metric is invariant under multiplicative rescaling. Normalization layers are used to scale the activations. Carefully choosing the initial scale of weights corresponds to a multiplicative scaling of weights. SeLU nonlinearities, again, act to scale down large activations and scale up small activations. While these techniques may of course impact the GSC by changing the fundamental mathematical properties of the network (as can be seen, for example, when comparing ReLU and batch-ReLU), they do not reduce it simply by virtue of controlling the size of forward activations. In contrast, the other three architectures (ReLU, layer-ReLU and tanh) do not exhibit exploding gradients. However, this apparent advantage comes at a cost, as we further explain in section 5.All curves in FIG3 exhibit small jitters. This is because we plotted the value of the GSC at every linear layer, every normalization layer and every nonlinearity layer in the graph and then connected the points corresponding to these values. Layers were placed equispaced on the x axis in the order they occurred in the network. Not every type of layer affects the GSC equally. In fact, we find that as gradients pass through linear layers, they tend to shrink relative to forward activations. In the exploding architectures, this is more than counterbalanced by the relative increase the gradient experience as it passes through e.g. normalization layers. Despite these layer-dependent differences, it is worth noting that each individual layer used in the architectures studied has only a small impact on the GSC. This would not be true for either the forward activations or gradients taken by themselves. For example, passing through a ReLU layer reduces the length of both activation and gradient vector by ≈ √ 2. This relative invariance to individual layers suggests that the GSC measures not just a superficial quantity, but a deep property of the network. This hypothesis is confirmed in the following sections. Finally, we note that the GSC is also robust to changes in width and depth. 
Changing the depth has no impact on the rate of explosion of the four exploding architectures as the layer-wise GSC, i.e. GSC(l+1, l), is itself independent of depth. In FIG3, we also show the results for the SeLU architecture where each layer contains 200 neurons instead of 100 ('SeLU wide'). We found that the rate of gradient explosion decreases slightly when width increases. We also studied networks with exploding architectures where the width oscillated from layer to layer. GSC(k, 0) still increased approximately exponentially and at a similar rate to corresponding networks with constant width. A summary of these results can be found in table 1.

In this section, we introduce the concept of 'effective depth' as defined for the ResNet architecture in prior work. We denote a residual network by writing each layer f_l (except f_0) as the sum of a fixed initial function i_l and a residual function r_l. We define the optimization problem for a residual net analogously to equation 1, with each layer taking the form $f_l(\theta_l) = i_l + r_l(\theta_l)$ (equation 2). Let's assume for the sake of simplicity that the dimension of each layer is identical. In that case, the initial function for ResNet is generally chosen to be the identity function. Then, writing the identity matrix as I, we have

$$\frac{df_0}{dx} = \frac{df_0}{df_1}\prod_{l=1}^{L}\left(I + \frac{dr_l}{df_{l+1}}\right).$$

Multiplying out, this becomes the sum of $2^L$ terms. Almost all of those terms are the product of approximately L/2 identity matrices and L/2 residual Jacobians. However, if the operator norm of the residual Jacobians is less than p for some p < 1, the norm of terms decreases exponentially in the number of residual Jacobians they contain. Let the terms in $\frac{df_0}{dx}$ containing λ or more residual Jacobians be called 'λ-residual' and let $res_\lambda$ be the sum of all λ-residual terms. Then

$$\|res_\lambda\| \le \left\|\frac{df_0}{df_1}\right\| \sum_{j=\lambda}^{L} \binom{L}{j} p^{j}.$$

Again, if p < 1, the right hand side decreases exponentially in λ for sufficiently large λ, for example when λ > L/2, so the combined size of λ-residual terms is exponentially small. It has been argued that the full set of network layers does not jointly co-adapt during training because the information necessary for such co-adaptation is contained in terms that contain many or all residual Jacobians. Only sets of layers of size at most λ where $res_\lambda$ is not negligibly small co-adapt. The largest such λ is called the 'effective depth' of the network. Under this view, if the effective depth is less than the compositional depth of a residual network, the network is not really as deep as it appears, but rather behaves as an ensemble of relatively shallow networks. This argument is bolstered by the success of the stochastic depth training technique, where random sets of residual functions are deleted for each mini-batch update. The concept of effective depth was originally introduced somewhat informally. We give our formal definition in section D. There, we also provide a more detailed discussion of the concept and point out limitations.

Now we make a crucial observation. Any neural network can be expressed as a residual network as defined in equation 2. We can simply choose arbitrary initial functions i_l and define $r_l(\theta_l) := f_l(\theta_l) - i_l$. Specifically, if we train a network f from some fixed initial parameter $\theta^{(0)}$, we can set $i_l := f_l(\theta_l^{(0)})$. Then training begins with all residual functions being zero functions. Therefore, all analysis devised for ResNet that relies on the small size of the residual Jacobians can then be brought to bear on arbitrary networks. We term this the 'residual trick'. Indeed, the effective depth analysis does not rely on the network having skip connections in the computational sense, but only on the mathematical framework of equation 2. Therefore, as long as the operator norms of the residual Jacobians $\frac{dr_l}{df_{l+1}}$ are small, f is effectively shallow.
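As a concrete illustration of the residual trick in the simplest case of fully-connected linear layers, the following sketch (our own; the "trained" weights are made up to stand in for the result of training) splits each weight matrix into its fixed initial part and a residual part and reports the operator norm of the residual, the quantity whose smallness the effective depth argument relies on.

```python
# Our own minimal sketch of the residual trick for fully-connected linear layers.
# Each trained weight matrix is split into the fixed initial matrix plus a residual
# matrix; the operator norm of the residual is then tracked per layer.
import numpy as np

rng = np.random.default_rng(0)
width, depth = 100, 50

W_init = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width)) for _ in range(depth)]
# placeholder for trained weights: initial weights plus some learned change
W_trained = [W0 + 0.05 * rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
             for W0 in W_init]

# residual trick: i_l is the layer at its initial parameter, r_l is the difference;
# for a linear layer this is simply a split of the weight matrix
W_residual = [Wt - W0 for Wt, W0 in zip(W_trained, W_init)]

for l in range(0, depth, 10):
    op_norm = np.linalg.norm(W_residual[l], ord=2)   # largest singular value
    print(f"layer {l}: operator norm of residual weight matrix = {op_norm:.3f}")
```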
Terminology From now on, we will make a distinction between the terms 'ResNet' and 'residual network'. The former will be used to refer to networks that have an architecture as in He et al. (2016b) that uses skip connections. The latter will be used to refer to arbitrary networks expressed in the framework of equation 2. Networks without skip connections will be referred to as 'vanilla networks'.

In this section, we will show that an exploding gradient as defined by the GSC causes the effective training time of deep MLPs to be exponential in depth and thus limits the effective depth that can be achieved. The proof is based on the insight that the relative size of a gradient-based update ∆θ_l on θ_l is bounded by the inverse of the GSC if that update is to be useful. The basic assumption underlying gradient-based optimization is that the function optimized is locally approximated by a linear function as indicated by the gradient. Any update made based on a local gradient computation must be small enough so that the updated value lies in the region around the original value where the linear approximation is sufficiently accurate. Let's assume we apply a random update to θ_l with relative magnitude $\frac{1}{GSC(l,0)}$. Then under the local linear approximation, according to proposition 3, this would change the output f_0 approximately by a value with quadratic expectation f_0. Hence, with significant probability, the error would become negative. This is not reflective of the true behavior of the function f_0 in response to changes in θ_l of this magnitude. Since gradient-based updates impact the function value even more than random updates, useful gradient-based updates are even more likely to be bounded in relative magnitude by $\frac{1}{GSC(l,0)}$. In a nutshell, if $\frac{1}{GSC(l,0)}$ decreases exponentially in l, so must the relative size of updates. So for a residual function to reach a certain size relative to the corresponding initial function, an exponential number of updates is required. But to reach a certain effective depth, a certain magnitude of λ-residual terms is required and thus a certain magnitude of residual functions relative to corresponding initial functions is required, and thus exponentially many updates.

Theorem 1. Under certain conditions, if an MLP has exploding gradients with explosion rate r and intercept c on some dataset, then there exists a constant c' such that training this neural network with a gradient-based algorithm to have effective depth λ takes at least $c'cr^{\lambda}$ updates.

Importantly, the lower bound on the number of updates required to reach a certain effective depth stated by theorem 1 is independent of the nominal depth of the network. While the constant c' depends on some constants that arise in the conditions of the theorem, as long as those constants do not change when depth is increased, neither does the lower bound.

Corollary 1. In the scenario of theorem 1, if the number of updates to convergence is bounded, so is effective depth.

Here we simply state that if we reach convergence after a certain number of updates, but theorem 1 indicates that more would be required to attain a greater effective depth, then that greater effective depth is unreachable with that algorithm. To practically validate our theory of limited effective depth, we train our four exploding architectures (batch-ReLU, layer-tanh, batch-tanh and SeLU) on CIFAR10. All networks studied have a compositional depth of 51 (i.e. 51 linear layers) and 100 neurons in each layer except for the input, prediction and error layers. Full experimental details can be found in section I.
First, we determined the approximate best step size for SGD for each individual linear layer. We started by pre-training the highest layers of each network with a small uniform step size until the training classification error was below 85%, but at most for 10 epochs. Then, for each linear layer, we trained only that layer for 1 epoch with various step sizes while freezing the other layers. The step size that achieved the lowest training classification error after that epoch was selected. Note that we only considered step sizes that induce relative update sizes of 0.1 or less, because larger updates can cause weight instability. The full algorithm for step size selection and a justification is given in section I.4; a simplified sketch appears below.

In FIG5, we show the relative update size induced on each linear layer by what was selected to be the best step size, as well as $\frac{1}{GSC(l,0)}$ as a dashed line. In section 4.3, we argued that $\frac{1}{GSC(l,0)}$ is an upper bound for the relative size of a useful update. We find that this bound holds and is conservative except for a small number of outliers. Even though our algorithm for determining the best step size for each layer gives noisy results, there is a clear trend that lower layers require relatively smaller updates, and that this effect is more pronounced if the gradient explodes with a larger rate. Therefore the foundational assumption underlying theorem 1 holds. We then smoothed these best step size estimates and trained each network for 500 epochs with those smoothed estimates. Periodically, we scaled all step sizes jointly by 1/3. In FIG5, we show the training classification error of each architecture. There is a trend that architectures with less gradient explosion attain a lower final error. Note that, of course, all these error values are still much higher than the state of the art on CIFAR10. This is not a drawback however, as the goal of this section is to study and understand pathological architectures rather than find optimal ones. Those architectures, by definition, attain high errors. In FIG5, we show the GSC across the entire network, i.e. GSC(L, 0), as training progresses. During the initial pre-training phase, this value drops significantly but later regains or even exceeds its original value. In FIG5, the dashed line indicates the inverse of GSC(l, 0) for each l after pre-training. We find that the GSC actually falls below 1 as the gradient passes through the pre-trained layers, but then resumes explosion once it reaches the layers that were not pre-trained. We find this behavior surprising and unexpected. We conclude that nonstandard training procedures can have a significant impact on the GSC but that there is no evidence that when all layers are trained jointly, which is the norm, the GSC either significantly increases or decreases during training.
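The simplified sketch of the layer-wise step size search referenced above follows. It is our own approximation, not the exact algorithm of section I.4: the pre-training phase is omitted, `train_one_epoch` and `training_error` are placeholder callables supplied by the user, and the 0.1 cap on relative update size is checked on the cumulative change over the epoch rather than per update.

```python
# Simplified, assumption-laden sketch of the per-layer step size search: train only the
# chosen layer for one epoch at several candidate step sizes, discard step sizes whose
# induced relative update exceeds 0.1, and keep the step size with the lowest error.
import copy
import torch

def search_layer_step_size(model, layer_name, train_one_epoch, training_error,
                           candidates=(1e-4, 1e-3, 1e-2, 1e-1)):
    best_err, best_lr = float("inf"), None
    for lr in candidates:
        trial = copy.deepcopy(model)
        layer = dict(trial.named_modules())[layer_name]
        for p in trial.parameters():              # freeze everything ...
            p.requires_grad_(False)
        for p in layer.parameters():              # ... except the chosen layer
            p.requires_grad_(True)
        opt = torch.optim.SGD(layer.parameters(), lr=lr)
        before = torch.cat([p.detach().flatten().clone() for p in layer.parameters()])
        train_one_epoch(trial, opt)               # one epoch in which only this layer updates
        after = torch.cat([p.detach().flatten() for p in layer.parameters()])
        if (after - before).norm() / before.norm() > 0.1:
            continue                              # discard step sizes causing weight instability
        err = training_error(trial)
        if err < best_err:
            best_err, best_lr = err, lr
    return best_lr
```

In the actual experiments, such a search is repeated for every linear layer while the other layers stay frozen, and the resulting per-layer estimates are then smoothed before full training.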
We then went on to measure the effective depth of each network. We devised a conservative, computationally tractable estimate of the cumulative size of updates that stem from λ-residual terms. See section D.2 for details. The effective depth depicted in FIG5 is the largest value of λ such that this estimate has a length exceeding $10^{-6}$. As expected, none of the architectures reach an effective depth equal to their compositional depth, and there is a trend that architectures that use relatively smaller updates achieve a lower effective depth. It is worth noting that the effective depth increases most sharply at the beginning of training. Once all step sizes have been multiplied by 1/3 several times, effective depth no longer changes significantly while the error, on the other hand, is still going down. This suggests that, somewhat surprisingly, high-order co-adaptation of layers takes place towards the beginning of training and that as the step size is reduced, layers are fine-tuned relatively independently of each other. SeLU and especially batch-tanh reach an effective depth close to their compositional depth according to our estimate. In FIG5, we show the operator norm of the residual weight matrices after training. All architectures except SeLU, which has a GSC(L, 0) close to 1 after pre-training, show a clear downward trend in the direction away from the error layer. If this trend were to continue for networks that have a much greater compositional depth, then those networks would not achieve an effective depth significantly greater than our 51-linear layer networks.

It has been argued that a limited effective depth indicates a lack of high-order co-adaptation. We wanted to verify that our networks, especially layer-tanh and batch-ReLU, indeed lack these high-order co-adaptations by using a strategy independent of the concept of effective depth to measure this effect. We used Taylor expansions to do this. Specifically, we replaced the bottom k layers of the fully-trained networks by their first-order Taylor expansion around the initial functions. See section G for how this is done. This reduces the compositional depth of the network by k − 2. In FIG5, we show the training classification error in response to compositional depth reduction. We find that the compositional depth of layer-tanh and batch-ReLU can be reduced enormously without suffering a significant increase in error. In fact, the resulting layer-tanh network of compositional depth 15 greatly outperforms the original batch-tanh and batch-ReLU networks. This confirms that these networks lack high-order co-adaptations. Note that cutting the depth by using the Taylor expansion not only eliminates high-order co-adaptations among layers, but also co-adaptations of groups of 3 or more layers among the bottom k layers. Hence, we expect the increase in error induced by removing only high-order co-adaptations to be even lower than what is shown in FIG5. Unfortunately, this cannot be tractably computed. Finally, we trained each of the exploding architectures with a single step size, shared across all layers and determined by grid search, instead of custom layer-wise step sizes. As expected, the final error was higher. The results are found in table 2.

Summary For the first time, we established a direct link between exploding gradients and severe training difficulties that cannot be overcome by gradient rescaling. These difficulties arise in MLPs composed of the most popular layer types, even those that utilize techniques that stabilize forward activations which are believed to combat exploding gradients. The gradient scale coefficient not only underpins this analysis, but is largely invariant to the confounders of network scaling (proposition 4), layer width and individual layers (section 3). Therefore we argue the GSC is the best metric for the study of exploding gradients in general.

We used minibatches of size 1000 to train all architectures except batch-ReLU, for which we conducted full-batch training.
When minibatches were used on batch-ReLU, the training classification error stayed above 89% throughout training. (Random guessing achieves a 90% error.) In essence, no learning took place. This is because of the pathological interplay between exploding gradients and the noise inherent in batch normalization. Under batch normalization, the activations at a neuron are normalized by their mean and standard deviation. These values are estimated using the current batch. Hence, if a minibatch has size b, we expect the noise induced by this process to have relative size ≈ 1/√b. But we know that according to proposition 2, under the local linear approximation, this noise leads to a change in the error layer of relative size ≈ GSC/√b. Hence, if the GSC between the error layer and the first batch normalization layer is larger than √b, learning should be seriously impaired. For the batch-ReLU architecture, this condition was satisfied and consequently, the architecture was untrainable using minibatches. Ironically, the gradient explosion that renders the noise pathological was introduced in the first place by adding batch normalization layers. Note that techniques exist to reduce the dependence of batch normalization on the current minibatch, such as using running averages. Other prominent techniques that induce noise and thus can cause problems in conjunction with large gradients are dropout, stochastic nonlinearities (e.g. BID15) and network quantization (e.g. BID2).

Why do exploding gradients occur? As mentioned in section 3, gradients explode with rate r > 1 as long as we have (A) $GSC(k, l) \approx GSC(l+1, l) \cdot GSC(l+2, l+1) \cdots GSC(k, k-1)$ and (B) $GSC(l+1, l) \approx r$ for all k and l. It turns out that we can show both of these hold in expectation under fairly realistic conditions if we view the network parameter as a random variable.

Theorem 2. Under certain conditions, for any neural network f with random parameter θ composed of layer functions f_l that are surjective endomorphisms on the hypersphere, where the absolute singular values of the Jacobian of each layer are IID and differ by at least ε with probability δ, gradients explode in expectation with rate r(δ, ε). (See section F.2 for details.)

Proof Summary. Consider a surjective endomorphism f_l on the hypersphere and a random input x distributed uniformly on that hypersphere. Surjectivity implies that the absolute determinant of the Jacobian, in expectation over the input, is at least 1. The absolute determinant is the product of the absolute singular values. If those absolute singular values are IID and their expected product is at least 1, the expectation of each absolute singular value is also at least 1. So if these singular values are sufficiently different from each other with sufficient probability, the expected quadratic mean of the singular values is at least r > 1. Since both input and output of f_l have length 1, the expected GSC will also be at least r.

While the conditions of theorem 2 cannot be fulfilled exactly, it nevertheless reveals an important insight. Exploding gradients tend to arise in practice even when forward activations are stable because in order to preserve the domain of the forward activations from layer to layer, Jacobians of individual layers need to have unit absolute determinants in expectation, and this tends to cause their qm norm values to be greater than 1, and then these values tend to compound exponentially. The theorem is stated for layers that are surjective endomorphisms on the hypersphere.
Let 'length-only layer normalization' (LOlayer) be a function that divides its input vector by its length. Then a sequence of a tanh / SeLU layer, a linear layer and a LOlayer layer, all of the same width, when viewed as a single macro-layer, is a surjective endomorphism on the hypersphere, as shown in proposition 6. Consequently, both LOlayer-tanh and LOlayer-SeLU exhibit exploding gradients (table 1). Layer-tanh and SeLU explode at very similar rates to LOlayer-tanh and LOlayer-SeLU respectively.

Proposition 6. Any endomorphism on the hypersphere composed of (i) a strictly monotonic, continuous nonlinearity σ with σ(0) = 0, (ii) multiplication with a full-rank matrix and (iii) length-only layer normalization is bijective. (See section E.6 for the proof.)

Theorem 2 presents two clear avenues for avoiding exploding gradients: (i) use non-surjective layer functions, (ii) ensure that Jacobians get progressively closer to multiples of orthogonal matrices as we go deeper. It turns out that these are exactly the strategies employed by ReLU and ResNet respectively to avoid exploding gradients, and we will discuss these in the next two sections.

In the previous section, we showed how surjective endomorphisms can exhibit exploding gradients. This suggests that we can avoid exploding gradients by non-surjectivity, i.e. if we reduce the domain of the forward activations from layer to layer. Informally, this can be understood as follows. Consider some layer function f_l. If we shrink its co-domain by a factor c, we reduce the eigenvalues of the Jacobian and hence its qm norm by c. If we also ensure that the length of the output stays the same, the GSC is also reduced by c. Similarly, inflating the co-domain would cause the qm norm to increase. This suggests that in neural network design, we can actively trade off exploding gradients and shrinkage of the domain and that eliminating one effect may exacerbate the other. This is precisely what we find in practice. Returning to figure 1, we now turn our attention to the middle graph (1B). Here, we plot the standard deviation of the activation values at each neuron across datapoints in the layers before each nonlinearity layer ('pre-activations'), averaged over all neurons in the same layer. The four exploding architectures exhibit a near constant standard deviation, whereas the other three architectures (ReLU, layer-ReLU and tanh) exhibit a rapidly collapsing standard deviation, which shows that the activations corresponding to different datapoints become more and more similar with depth. We term a layer-to-layer shrinkage of the domain the 'collapsing domain problem'. But why is this effect a problem? Two reasons.

A collapsing domain causes pseudo-linearity If the pre-activations that are fed into a nonlinearity are highly similar, the nonlinearity can be well-approximated by a linear function. In the tanh architecture we studied, for example, activation values become smaller and smaller as they are propagated forward. If the pre-activations of a tanh nonlinearity have small magnitude, the tanh nonlinearity can be approximated by the identity function. But if a tanh layer is approximately equal to an identity layer, the entire network becomes equivalent to a linear network. We say the network becomes 'pseudo-linear'. Of course, linear networks of any depth have the representational capacity of a linear network of depth 1 and are unable to model nonlinear functions.
Hence, a tanh network that is pseudo-linear beyond compositional depth k approximately has the representational capacity of a compositional depth k + 1 tanh network. Based on the decrease in pre-activation standard deviation exhibited by the tanh architecture in FIG3, a reasonable estimate is that the network of compositional depth 50 has the representational capacity of a network of compositional depth 10. Similarly, for a ReLU nonlinearity, if either all or most pre-activations are positive or all or most pre-activations are negative, the nonlinearity can be approximated by a linear function. If all or most pre-activations are positive, ReLU can be approximated by the identity function. If all or most pre-activations are negative, ReLU can be approximated by the zero function. In FIG3, we plot the proportion of pre-activations for each neuron in a nonlinearity layer that are positive or negative, whichever is smaller, averaged over each layer. We call this metric 'sign diversity'. For both ReLU and layer-ReLU, sign diversity decreases rapidly to cause pseudo-linearity from at least, say, the 20th linear layer onwards. None of the four exploding architectures suffers a significant loss in sign diversity.

A collapsing domain is an exploding gradient in disguise In theorem 1, we used the fact that the output of the error layer of the network was positive to bound the size of a useful gradient-based update. In other words, we used the fact that the domain of the error layer was bounded. However, the collapsing domain problem causes not just a reduction of the size of the domain of the error layer, but of all intermediate layers. Hence, we expect the largest useful update to shrink in proportion with the reduction of the size of the domain. Therefore, we suspect that a collapsing domain will ultimately have the same effect on the largest useful update size of each layer as exploding gradients, that is to reduce them and thus cause a low effective depth.

In table 2, we show the final error values achieved by training ReLU, layer-ReLU and tanh on CIFAR10. The errors are substantially higher than those achieved by the exploding architectures, except for batch-ReLU. Also, training with layer-wise step sizes did not help compared to training with a single step size. In FIG2, we show the estimated best relative update sizes for each layer. This time, there is no downward trend towards lower layers, which is likely why training with a single step size is "sufficient". Interestingly, we note that the difference between the $\frac{1}{GSC}$ bound and the empirically optimal relative update sizes is now much larger than it is for the exploding architectures (see FIG5). This suggests that indeed, collapsing domains may similarly reduce the optimal relative update size, just as exploding gradients do. In FIG2, we find that again, the effective depth reached is significantly lower than the compositional depth of the network and is comparable to that of architectures with exploding gradients. In FIG2 and H, we plot the pre-activation standard deviation and sign diversity at the highest nonlinearity layer throughout training. Interestingly, pseudo-linearity declines significantly early in training. The networks become less linear through training.
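The two collapsing-domain metrics used above are straightforward to compute from a matrix of pre-activations. The following sketch is our own illustration, not code from the paper; the array shapes and toy data are assumptions made for the example.

```python
# Our own illustration: computing pre-activation standard deviation and sign diversity
# from a matrix of pre-activations of shape (num_datapoints, num_neurons) collected
# at one nonlinearity layer.
import numpy as np

def preactivation_std(pre):
    # standard deviation across datapoints at each neuron, averaged over neurons
    return pre.std(axis=0).mean()

def sign_diversity(pre):
    # per neuron, the fraction of positive or negative pre-activations, whichever is
    # smaller, averaged over neurons; 0.5 is maximal diversity, values near 0 indicate
    # that a ReLU layer is close to pseudo-linear
    frac_pos = (pre > 0).mean(axis=0)
    return np.minimum(frac_pos, 1.0 - frac_pos).mean()

# toy example: pre-activations that are tightly clustered and almost all positive
rng = np.random.default_rng(0)
pre = rng.normal(loc=2.0, scale=0.1, size=(1000, 100))
print(preactivation_std(pre), sign_diversity(pre))   # small std, diversity near 0
```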
Summary In neural network design, there is an inherent tension between avoiding exploding gradients and preserving the domain of forward activations. Avoiding one effect can bring about or exacerbate the other. Both effects are capable of severely hampering training. This tension is brought about by the discrepancy of determinant and qm norm of layer-wise Jacobians and is a foundational reason for the difficulty in constructing very deep trainable networks. In this paper, we do not give a rigorous definition of the collapsing domain problem, because it is hard to assess and measure. A number of metrics exist which all apply in somewhat different situations. We already discussed two metrics: pre-activation standard deviation and pre-activation sign diversity. In mean field theory, activation correlation plays a prominent role (section B.1.1). In fact, mean field theory is a formidable tool for statically estimating the properties of specific architectures, such as explosion rate and pre-activation standard deviation. See e.g. the mean field literature for an analysis of tanh and BID5 for an analysis of batch-ReLU. We discuss collapsing domains further in the future work section B.4.

ResNet and related architectures that utilize skip connections have been very successful recently. One reason for this is that they can be successfully trained to much greater depths than vanilla networks. In this section, we show how skip connections are able to greatly reduce the GSC and thus largely circumvent the exploding gradient problem. Informally, we say a function f_b is 'k-diluted' if it can be written as $f_b(F_{b+1}) = s_b(F_{b+1}) + \rho_b(F_{b+1})$, where s_b is multiplication with a matrix and the length of $s_b(F_{b+1})$ is, in expectation, k times that of $\rho_b(F_{b+1})$. k-dilution expresses the idea that the kinds of functions that f_b represents are of a certain form if s_b is restricted to matrix multiplication. The larger the value of k, the more ρ_b is "diluted" by a linear function, bringing f_b itself closer and closer to a linear function. Note that the identity function can be viewed as matrix multiplication with the identity matrix. Theorem 3 shows that k-dilution with large k brings about a relatively large amount of gradient reduction, and therefore ResNet can be trained successfully to "unreasonably" great depths for general architectures.

To validate our theory, we repeated the experiments in figure 1 with 5 ResNet architectures: layer-ReLU, batch-ReLU, layer-tanh, batch-tanh and layer-SeLU. Each residual block is bypassed by an identity skip connection and composed of 2 sub-blocks of 3 layers each: first a normalization layer, then a nonlinearity layer, and then a fully-connected layer, similar to He et al. (2016b). Comparing FIG9 to FIG3, we find that the gradient growth is indeed much lower for ResNet compared to vanilla networks, with much of it taking place in the lower layers. In FIG9 we find that the rate of domain collapse for layer-ReLU, as measured by pre-activation sign diversity, is also significantly slowed. We then went on to check whether the gradient reduction experienced is in line with theorem 3. We measured the dilution level k_b and growth rate r_b at each residual block b and then replaced the growth rate with $1 + (k_b^2 + 1)(r_b - 1)$, as sketched below. The result of this post-processing is found in FIG9. Indeed, the GSC of the exploding architectures now again grows almost linearly in log space, with the exception of batch-ReLU in the lowest few layers. The explosion rates closely track those in FIG3, though they are slightly higher. This confirms that the estimate of the magnitude of gradient reduction from theorem 3 is quite accurate.
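A minimal sketch of that post-processing step follows; the helper name and the per-block numbers are our own inventions, chosen only to illustrate the computation described above.

```python
# Hedged sketch: given a measured per-block GSC growth rate r_b of a ResNet and the
# measured dilution level k_b (the ratio of the typical size of the skip-path output
# to that of the residual-block output), recover the dilution-corrected growth rate.
def dilution_corrected_rate(r_b: float, k_b: float) -> float:
    # undo the gradient reduction attributable to k_b-dilution
    return 1.0 + (k_b ** 2 + 1.0) * (r_b - 1.0)

measured = [(1.020, 3.0), (1.010, 5.0), (1.005, 8.0)]   # made-up (r_b, k_b) per residual block
for b, (r_b, k_b) in enumerate(measured):
    print(f"block {b}: measured rate {r_b:.3f} -> corrected rate "
          f"{dilution_corrected_rate(r_b, k_b):.3f}")
```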
We then repeated the CIFAR10 experiments depicted in figure 2 with our 5 ResNet architectures. The results are shown in figure 5. As expected, in general, ResNet enables higher relative update sizes, achieves lower error, a higher effective depth and is less "robust" to Taylor approximation than vanilla networks. The only exception to this trend is the layer-SeLU ResNet when compared to the SeLU vanilla network, which already has a relatively slowly exploding gradient to begin with. Note that the severe reduction of the GSC persists throughout training (figure 5C). Also see table 2 to compare final error values. Note that in order to make the effective depth in figure 5D comparable to those in FIG5, we applied the residual trick to ResNet. We let the initial function i encompass not just the skip function s, but also the initial block function ρ. Hence, we set $i_b := s_b + \rho_b(\theta_b^{(0)})$ and $r_b(\theta_b) := \rho_b(\theta_b) - \rho_b(\theta_b^{(0)})$. Note that our effective depth values for ResNet are much higher than those reported previously. This is because we use a much more conservative estimate of this intractable quantity for both ResNet and vanilla networks. Gradient reduction is achieved not just by identity skip connections but, as theorem 3 suggests, also by skip connections that multiply the incoming value with a Gaussian random matrix. Results for those skip connections can be found in table 1.

It has been argued that deep ResNets behave like an ensemble of relatively shallow networks. We argue that comparable vanilla networks often behave like ensembles of even shallower networks. It has also been argued that deep ResNets are robust to lesioning. Additionally, we argue that comparable vanilla networks are often even more robust to depth reduction when considering the first order Taylor expansion.

k-dilution has its limits. Any k-diluted function with large k is close to a linear function. Hence, we can view k-dilution as another form of pseudo-linearity that can damage representational capacity. It also turns out that under similar conditions to those used in theorem 3, dilution only disappears slowly as diluted functions are composed. If the diluting linear functions s_b are the identity functions, this corresponds to feature refinement as postulated in prior work.

Proposition 7. Under certain conditions, the composition of B random functions that are k_b-diluted in expectation, respectively, is approximately $\left(\prod_{b}\left(1 + \frac{1}{k_b}\right) - 1\right)^{-1}$-diluted.

More simply, assume all the k_b are equal to some k. Ignoring higher-order terms, the composition is approximately $\frac{k}{B}$-diluted, so it takes on the order of k layers to eliminate that dilution. This indicates that the overall amount of gradient reduction achievable through dilution without incurring catastrophic pseudo-linearity is limited. The power of our theory lies in exposing the GSC-reducing effect of skip connections for general neural network architectures. As far as we know, all comparable previous works (e.g. BID7) demonstrated similar effects only for specific architectures. Our argument is not that certain ResNets achieve a certain level of GSC reduction, but that ResNet users have the power to choose the level of GSC reduction by controlling the amount of dilution. For example, while the level of dilution increases as we go deeper in the style of ResNet architecture we used for experiments in this section, this need not be so. The skip function s and block function ρ can be scaled with constants to achieve arbitrary, desired levels of dilution. Alternatively, instead of putting all normalization layers in the residual blocks, we can insert them between blocks / skip connections. This would keep the dilution level constant and hence cause gradients to explode again, though at a lower rate compared to vanilla networks. We have shown how ResNets achieve a reduced gradient via k-dilution. And just as with effective depth, the residual trick allows us to generalize this notion to arbitrary networks.

Definition 5.
We say a residual network f(θ) has an 'orthogonal initial state' if each initial function i_l is multiplication with an orthogonal matrix or a slice / multiple thereof and r_l(θ_l) is the zero function. Any network that is trained from an (approximate) orthogonal initial state can benefit from reduced gradients via dilution to the extent to which initial and residual function are uncorrelated. (See section F.3 for more information.) ResNet is a style of architecture that achieves this, but it is far from being the only one. BID7 introduced the 'looks-linear initialization' (LLI) for ReLU networks, which not only achieves an approximate orthogonal initial state, but also outperformed ResNet in their experiments. We detail this initialization scheme in section H. In table 2, we show that a simple ReLU network with LLI can achieve an even lower training error than ResNet on CIFAR10. In figure 6C, we find that indeed LLI reduces the gradient growth of batch-ReLU drastically not just in the initialized state, but throughout training even as the residual functions grow beyond the size achieved under Gaussian initialization (compare figure 6E to 2E and 4E). DiracNet achieves an approximate orthogonal initial state in a very similar way to LLI. An even simpler but much less powerful strategy is to initialize weight matrices as orthogonal matrices instead of Gaussian matrices; a sketch of this strategy is given below. This reduces the gradient growth in the initialized state somewhat (table 1).

Using the ensemble view of very deep networks reveals another significant disadvantage of non-orthogonal initial functions. The output computed by an ensemble member must pass through the initial functions of the layers not contained in that ensemble member to reach the prediction layer. Therefore, having non-orthogonal initial functions is akin to taking a shallow network and adding additional, untrainable non-orthogonal layers to it. This has obvious downsides such as a collapsing domain and/or exploding gradient, and an increasingly unfavorable eigenspectrum of the Jacobian. One would ordinarily not make the choice to insert such untrainable layers. While there has been some success with convolutional networks where lower layers are not trained (e.g. Saxe et al.; He et al.), it is not clear whether such networks are capable of outperforming other networks where such layers are trained. While skip connections do not resolve the tension between exploding gradients and collapsing domains, they reduce the pathology by avoiding unnecessary non-orthogonality contained in the initial function. The big question is now: What is the purpose of not training a network from an orthogonal initial state? We are not aware of such a purpose. Since networks with orthogonal initial functions are mathematically simpler than other networks, we argue they should be the default choice. Using non-orthogonality in the initial function, we believe, is what requires explicit justification.
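The simpler strategy mentioned above, drawing weight matrices as random orthogonal matrices (or slices thereof, matching the wording of definition 5), can be sketched as follows; the function name and the gain handling are our own assumptions.

```python
# Minimal sketch (assumptions ours): initializing a weight matrix as a (scaled) slice
# of a uniformly random orthogonal matrix obtained via QR decomposition.
import numpy as np

def orthogonal_init(fan_out, fan_in, gain=1.0, rng=None):
    rng = rng or np.random.default_rng()
    n = max(fan_out, fan_in)
    A = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))         # fix column signs so Q is Haar-distributed
    return gain * Q[:fan_out, :fan_in]  # slice to the desired shape

W = orthogonal_init(100, 100)
print(np.allclose(W.T @ W, np.eye(100), atol=1e-8))   # columns are orthonormal
```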
Summary In this paper, we demonstrate that contrary to popular belief, many MLP architectures composed of popular layer types exhibit exploding gradients, and those that do not exhibit collapsing domains (section 3). This tradeoff is caused by the discrepancy between absolute determinants and qm norms of layer-wise Jacobians (section 5). Both sides of this tradeoff cause pathologies. Exploding gradients, when defined by the GSC (section 2), cause low effective depth (section 4). Collapsing domains cause pseudo-linearity and can also cause low effective depth (section 6). However, both pathologies are caused to a surprisingly large degree by untrainable, and thus potentially unnecessary non-orthogonality contained in the initial functions. Making the initial functions more orthogonal via e.g. skip connections leads to improved outcomes (section 7).

• Train from an orthogonal initial state, i.e. initialize the network such that it is a series of orthogonal linear transformations. This can greatly reduce the growth of the GSC and domain collapse not just in the initial state, but also as training progresses. It can prevent the forward activations from having to pass through unnecessary non-orthogonal transformations. Even if a perfectly orthogonal initial state is not achievable, an architecture that approximates this such as ResNet can still confer significant benefit.

• When not training from an orthogonal initial state, avoid low effective depth. A low effective depth signifies that the network is composed of an ensemble of networks significantly shallower than the full network. If the initial functions are not orthogonal, the values computed by these ensemble members have to pass through what may be unnecessary and harmful non-orthogonal transformations. Low effective depth may be caused by, for example, exploding gradients or a collapsing domain.

• Avoid pseudo-linearity. For the representational capacity of a network to grow with depth, linear layers must be separated by nonlinearities. If those nonlinearities can be approximated by linear functions, they are ineffective. Pseudo-linearity can be caused by, for example, a collapsing domain.

• Keep in mind that skip connections help in general, but other techniques do not. Diluting a nonlinear function with an uncorrelated linear function can greatly help with the pathologies described above. Techniques such as normalization layers, careful initialization of weights or SeLU nonlinearities can prevent the explosion or vanishing of forward activations. Adam, RMSprop or vSGD can improve performance even if forward activations explode or vanish. While those are important functionalities, these techniques in general neither help address gradient explosion relative to forward activations as indicated by the GSC nor collapsing domains.

• As the GSC grows, adjust the step size. If it turns out that some amount of growth of the GSC is unavoidable or desirable, weights in lower layers could benefit from experiencing a lower relative change during each update. Optimization algorithms such as RMSprop or Adam may partially address this.

• Control dilution level to control network properties. Skip connections, normalization layers and scaling constants can be placed in a network to trade off gradient growth and representational capacity. Theorem 3 can be used for a static estimate of the amount of gradient reduction achieved. Similarly, proposition 7 can be used for a static estimate of the overall dilution of the network.

• Great compositional depth may not be optimal. Networks with more than 1000 layers have recently been trained. Prior work gave a formalism for training arbitrarily deep networks. However, ever larger amounts of dilution are required to prevent gradient explosion. This may ultimately lead to an effective depth much lower than the compositional depth and individual layers that have a very small impact on learning outcomes, because the functions they represent are very close to linear functions. If there is a fixed parameter budget, it may be better spent on width than extreme depth.
• Exploding gradients matter. They are not just a numerical quirk to be overcome by rescaling but are indicative of an inherently difficult optimization problem that cannot be solved by a simple modification to a stock algorithm.

• Use GSC as a benchmark for gradient explosion. For the first time, we established a rigorous link between a metric for exploding gradients and hardness of training. The GSC is also robust to network rescaling, layer width and individual layers.

• Any neural network is a residual network. The residual trick allows the application of ResNet-specific tools such as the popular theory of effective depth to arbitrary networks.

• Step size matters when studying the behavior of networks. We found that using different step sizes for different layers had a profound impact on the training success of various architectures. Many studies that investigate fundamental properties of deep networks either do not consider layer-wise step sizes (e.g. Schoenholz et al.) or do not even consider different global step sizes. This can lead to inaccurate results. We provide continued discussion in section B.

Table 1: Key metrics for architectures in their randomly initialized state evaluated on Gaussian noise. In the 'Normalization' column, 'layer' refers to layer normalization, 'batch' refers to batch normalization, 'LOlayer' refers to length-only layer normalization and 'none' refers to an absence of a normalization layer. In the 'Matrix type' column, 'Gaussian' refers to matrices where each entry is drawn from an independent Gaussian distribution with mean zero and a standard deviation that is constant across all entries. 'orthogonal' refers to a uniformly random orthogonal matrix and 'looks-linear' refers to the initialization scheme proposed by BID7 and expounded in section H. In the 'Skip type' column, 'identity' refers to identity skip connections and 'Gaussian' refers to skip connections that multiply the incoming value with a matrix where each entry is drawn from an independent Gaussian distribution with mean zero and a standard deviation that is constant across all entries. 'none' refers to an absence of skip connections.

Table 2: Training classification error for architectures trained on CIFAR10. In the 'Normalization' column, 'layer' refers to layer normalization, 'batch' refers to batch normalization and 'none' refers to an absence of a normalization layer. In the 'Matrix type' column, 'Gaussian' refers to matrices where each entry is drawn from an independent Gaussian distribution with mean zero and a standard deviation that is constant across all entries. 'looks-linear' refers to the looks-linear initialization scheme proposed by BID7 and expounded in section H. In the 'Skip type' column, 'identity' refers to identity skip connections and 'none' refers to an absence of skip connections. In the two rightmost columns, we show the training classification error achieved when using a single step size and when using a custom step size for each layer. Whichever error value is lower is shown in bold. For further methodological details, see section I. For a detailed breakdown of these results, see figures 2, 4, 5 and 6.
This is very similar to saying that the gradient is only informative in a smaller and smaller region around the point at which it is taken. This is precisely what happens when gradients explode and also, as we argue in section 6, when the domain collapses. Therefore, the exploding gradient problem and collapsing domain problem can be viewed as a further specification of the shattering gradient problem rather than as a counter-theory or independent phenomenon. We extend the work of BID7 in several important ways. First, they claim that the exploding gradient problem "has been largely overcome". We show that this is not true, especially in the context of very deep batch-ReLU MLPs, which are central to their paper. Second, by using effective depth we make a rigorous argument as to why exploding gradients cause hardness of training. While BID7 point out that shattering gradients interfere with theoretical guarantees that exist for various optimization algorithms, they do not provide a definitive argument as to why shattering gradients are in fact a problem. Third, our analysis extends beyond ReLU networks.

We also build on earlier work showing that both trajectories and small perturbations, when propagated forward, can increase exponentially in size. However, that work does not distinguish two important cases: (i) an explosion that is simply due to an increase in the scale of forward activations and (ii) an explosion that is due to an increase in the gradient relative to forward activations. We are careful to make this distinction and focus only on case (ii). Since this is arguably the more interesting case, we believe the insights generated in our paper are more robust.

Another line of work investigated a further important pathology of very deep networks: the divergence of singular values in multi-layer Jacobians. As layer-wise Jacobians are multiplied, the variances of their singular values compound. This leads to the direction of the gradient being determined by the dominant eigenvectors of the multi-layer Jacobian rather than the label, which slows down training considerably.

In the paper that introduced batch normalization, the technique was motivated with the argument that changes to the distribution of intermediate representations, which the authors term 'covariate shift', are pathological and need to be combated. This argument was then picked up by e.g. BID12 to motivate similar normalization schemes. We are not aware of any rigorous definition of the 'covariate shift' concept nor do we understand why it is undesirable. After all, isn't the very point of training deep networks to have each layer change the function it computes, to which other layers co-adapt, to which then other layers co-adapt and so on? Having each layer fine-tune its weights in response to shifts in other layers seems to be the very mechanism by which deep networks achieve high accuracy.

A classical notion of trainability in optimization theory is the conditioning of the Hessian. This can also deteriorate with depth. Prior work introduced an architecture that combats this pathology in an effective and computationally tractable way via iterative numerical methods and matrix decomposition. Matrix decomposition has also been used by e.g. BID4 and BID1 to maintain orthogonality of recurrent weight matrices. Maybe such techniques could also be used to reduce the divergence of singular values of the layer-wise Jacobian during training.
B.1.1 MEAN FIELD THEORY -EXPLODING GRADIENTS / COLLAPSING DOMAIN VS ORDER / CHAOS Our work bears similarity to a recent line of research studying deep networks using mean field theory (; ; ; BID3 . Those papers use infinitely wide networks to statically analyze the expected behavior of forward activations and gradients in the initialized state. They identify two distinct regimes, order and chaos, based on whether an infinitesimal perturbation shrinks or grows in expectation respectively as it is propagated forward. This corresponds to the expected qm norm of the layer-wise Jacobian being smaller or larger than 1 respectively. They show that in the chaotic regime, gradient vector length explodes whereas in the ordered regime, gradient vector length vanishes. Further, they show that for tanh MLPs the correlation between forward activation vectors corresponding to two different data inputs converges to 1 ('unit limit correlation') in the ordered regime as activations are propagated forward and to some value less than 1 in the chaotic regime. Specifically, in a tanh MLP without biases, in the chaotic regime, the correlation converges to 0 ('zero limit correlation'). They show how to use mean field theory as a powerful tool for the static analysis of individual network architectures. As in mean field theory, some of our analysis relies on the expected behavior of networks in their randomly initialized state (theorem 2). Further, it is clear that the order / chaos dichotomy bears similarity to the exploding gradient problem / collapsing domain problem dichotomy as presented in this paper. However, there are also important differences. One difference is that we argue that the GSC is a better metric for determining the presence of pathological exploding or vanishing gradients than gradient vector length and thus more meaningful than order / chaos. Using the GSC, we obtain very different regions of explosion, vanishing and stability for popular architectures compared to gradient vector length. For a tanh MLP with no biases, using gradient vector length, vanishing is achieved for σ w < 1, stability for σ w = 1 and explosion for σ w > 1. (σ w denotes the standard deviation of weight matrix entries times the square root of the right dimsion of the weight matrix, as defined in .) For a tanh MLP with no biases, using the GSC, vanishing is impossible, stability is achieved for σ w ≤ 1 and explosion for σ w > 1. For a ReLU MLP with no biases, using gradient vector length, vanishing is achieved for σ w < √ 2, stability for σ w = √ 2 and explosion for σ w > √ 2. For a ReLU MLP with no biases, using the GSC, stability is inevitable. The advantage of considering GSC can be seen in the case of the ReLU network. For tanh, showed that order corresponds to an exponential convergence towards a unit limit correlation and chaos corresponds to an exponential convergence towards a zero limit correlation. For a ReLU MLP with no biases and σ w > √ 2, infinitesimal noise grows (chaos), yet the correlation converges sub-exponentially to zero, which is a behavior we would expect from the edge of chaos. As we saw above, using the GSC to define order / chaos that is precisely where we are: the edge of chaos. A second difference is that the concepts of unit limit correlation and the collapsing domain problem are not the same. In fact, the former can be seen as a special case of the latter. In a tanh MLP with no bias and σ w slightly larger than 1, correlation converges to 0 and eventually, gradients explode. 
Yet the domain can still collapse dramatically in the short term as shown in FIG3 to cause pseudolinearity. In a tanh MLP with no bias and σ w very large, again, correlation converges to 0 and gradients explode. However, the tanh layer maps all points close to the corners of the hypercube, which corresponds to domain collapse. We do not use the assumption of infinite width in our analysis. The only possible exception is that the SSD assumption in proposition 7 can be viewed as implying infinite width. conjectures that the edge of chaos is necessary for training very deep networks, our paper provides somewhat contrary evidence. Our two best performing vanilla architectures, SeLU and layer-tanh, are both inside the chaotic regime whereas ReLU, layer-ReLU and tanh, which are all on the edge of chaos, exhibit a higher training classification error. Clearly, chaotic architectures avoid pseudo-linearity. The difference between our experiments and those in Schoenholz et al. FORMULA0 is that we allowed the step size to vary between layers. This had a large impact, as can be seen in table 2. We believe that our underscore the importance of choosing appropriate step sizes when comparing the behavior of different neural architectures or training algorithms in general. In section 4, we present a rigorous argument for the harmful nature of exploding gradients, and thus of chaos as defined by the GSC, at high depth. No comparable argument exists in mean field literature. obtained low accuracies for networks exhibiting unit limit correlation, it is not clear a priori that this effect is harmful for accuracy. After all, correlation information is a rather small part of the information present in the data, so the remaining information might be sufficient for learning. As a simple example, consider k-means. Performing k-meafns on an arbitrary dataset yields the same as first adding a large constant to the data and then performing k-means, even though the addition can easily destroy correlation information. In contrast, in section 6, we show how collapsing domains can directly harm expressivity and trainability. shows that pathologies such as gradient explosion that arise in vanilla networks are reduced in specific ResNet architectures. We extend this finding to general ResNet architectures. BID3 proposes to combat gradient growth by downscaling the weights in the residual block of a ResNet. This corresponds to increased dilution, which indeed reduces gradient growth as shown in section 7. However, we also show in proposition 7 that the reduction achievable in this way may without suffering catastrophic pseudo-linearity be limited. Anonymous (2018d) also proposes to combat the exploding gradient problem by changing the width of intermediate layers. Our analysis in section 4.4 suggests that this is not effective in reducing the growth of the GSC. BID3 concludes that changing the width combats the exploding gradient problem because they implicitly assume that the pathology of exploding gradients is determined by the scale of individual components of the gradient vector rather than the length of the entire vector or the GSC. They do not justify this assumption. We propose the GSC as a standard for assessing pathological exploding gradients to avoid such ambiguity. BID0 proposed ResNet architectures inspired by dynamical systems and numerical methods for ordinary differential equations. The central claim is that these architectures are stable at arbitrary depth, i.e. 
both forward activations and gradients (and hence GSC) are bounded as depth goes to infinity. They propose four practical strategies for building and training ResNets: (a) ensuring that residual and skip functions compute vectors orthogonal to each other by using e.g. skew-symmetric weight matrices, (b) ensuring that the Jacobian of the skip function has eigenvalues with negative real part by using e.g. weight matrices factorized as $-C^TC$, (c) scaling each residual function by 1/B where B is the number of residual blocks in the network and (d) regularizing weights in successive blocks to be similar via a fusion penalty.

[Table: architecture vs. GSC(L, 0) (base 10 log) and dilution-corrected GSC(L, 0) (base 10 log) for the initializations studied below.]

We study four initializations: (i) Gaussian initialization, (ii) skew-symmetric initialization, (iii) initialization as $-C^TC$ where C is Gaussian initialized and (iv) Gaussian initialization where weight matrices in successive blocks have correlation 0.5. Initializations (ii), (iii) and (iv) mimic strategies (a), (b) and (d) respectively. To enable the comparison of the four initialization styles, we normalize each weight matrix to have a unit qm norm. We study all four initializations for both batch-ReLU and layer-tanh. We find that initialization (ii) reduces gradient growth to a similar degree as initialization (i). This is expected given theorem 3. One of the key assumptions is that skip and residual function be orthogonal in expectation. While initialization (i) achieves this, under (ii), the two functions are orthogonal not just in expectation, but with probability 1. Initialization (iii) has gradients that grow much faster than initialization (i). On the one hand, this is surprising, as eigenvalues with negative real parts in the residual Jacobian are supposed to slow gradient growth. On the other hand, it is not surprising because introducing correlation between the residual and skip path breaks the conditions of theorem 3. Initialization (iv) performs comparably to initialization (i) in reducing gradient growth, but requires a larger amount of dilution to achieve this. Again, introducing correlation between successive blocks and thus between skip and residual function breaks the conditions of theorem 3 and weakens the power of dilution. While we did not investigate the exact architectures proposed in BID10, our results show that more theoretical and empirical evaluation is necessary to determine whether architectures based on (a), (b) and (d) are indeed capable of increasing stability. Of course, those architectures might still confer benefits in terms of e.g. inductive bias or regularization. Finally, strategy (c), the scaling of residual and/or skip functions with constants, is a technique already widely used in regular ResNets. In fact, our study suggests that in order to bound the GSC at arbitrary depth in a regular ResNet, it is sufficient to downscale each residual function by only a modest amount.

We have not experienced vanishing gradients as defined by the GSC in our experiments. Our analysis suggests that strong domain collapse is necessary to not only overcome the gradient growth implied by theorem 2, but reverse it. We conjecture that such domain collapse could actually occur in e.g. ReLU and tanh architectures if a non-zero additive bias was introduced, though this goes beyond the scope of this paper.

Exploding gradients and their counterpart, vanishing gradients, have been studied more extensively in the context of RNNs (e.g. BID8). It is important to note that the problem as it arises in RNNs is similar but also different from the exploding gradient problem in feedforward networks.
The goal in RNNs is often to absorb information early on and store that information through many time steps and sometimes indefinitely. In the classical RNN architecture, signals acquired early would be subjected to a non-orthogonal transformation at every time step which leads to all the negative consequences described in this paper. LSTMs and GRUs BID11, which are the most popular solutions to exploding / vanishing gradients in RNNs, are capable of simply leaving each neuron that is considered part of the latent state completely unmodified from time step to time step unless new information is received that is pertinent to that specific neuron. This solution does not apply in feedforward networks, because it is the very goal of each layer to modify the signal productively. Hence, managing exploding gradients in feedforward networks is arguably more difficult. Nevertheless, there is similarity between LSTM and the orthogonal initial state because both eliminate non-orthogonality "as much as possible". LSTM can eliminate non-orthogonality completely from time step to time step whereas in the orthogonal initial state, non-orthogonality is eliminated only from the initial function. Again, viewing feedforward networks as ensembles of shallower networks, orthogonal initial functions ensure that information extracted from each ensemble member does not have to pass through non-orthogonal transformations needlessly. This is precisely what LSTM attempts to achieve. Biases, convolutional and recurrent layers In this paper, we focus our analysis on MLPs without trainable bias and variance parameters. Theorem 1, in its formulation, applies only to such MLPs. Theorems 2, 3 and proposition 7 use assumptions that are potentially harder to achieve in non-MLP architectures. Our experimental evaluation is limited to MLPs. We think that very similar to those presented in this paper are acheivable for other types of neural networks, such as those containing trainable biases, convolutional layers or recurrent layers. The fundamental behavior of those architectures should be the same, though additional nuance and heavier mathematical bookkeeping might come into play. Understanding collapsing domains It is difficult to assess or measure the degree to which the domain collapses in a given network. There is no single correct metric to measure this effect and depending on the metric chosen, the set of networks exhibiting collapsing domains may look very different. So far, we discussed pre-activation standard deviation (section 6), pre-activation sign diversity (section 6) and activation correlation (section B.1.1). The volume of the domain and the entropy of the distribution of activations may also be of interest. Not all domains collapse in the same way. In the tanh architecture we studied in this paper, the domain collapses onto the origin. In linear MLPs, the domain collapses onto the line through the dominant eigenvector of the product of weight matrices, but never collapses onto a single point. In ReLU, the domain collapses onto a ray from the origin. In tanh with very large weights, activation vectors are mapped approximately onto the corners of the hypercube by the tanh layer. What gradient scale is best? GSC(1, L) indicates the relative responsiveness of the prediction layer with respect to changes in the input layer. Of course, the goal in deep learning, at least within a prediction framework, is to model some ground truth function t that maps data inputs to true labels. 
That function has itself a GSC at each input location x that measures the relative responsiveness of t(x) to changes in x. If the network is to perfectly represent the ground truth function, the GSCs would also have to match up. If, on the other hand, the GSC of the network differs significantly from that of t, the network is not fitting t well. This suggests that in fact, the "best" value of the GSC is one that matches that of the ground truth. If the GSC of the network is too low, we may experience underfitting. If the GSC of the network is too high, we may experience overfitting. How to achieve the "right" gradient? To model the ground truth function, we may not just want to consider the overall magnitude of the GSC across the dataset, but to enable the network to have gradients of different magnitudes from one data input to the next; or to learn highly structured gradients. For example, given an image of a dog standing in a meadow, we might desire a high gradient with respect to pixels signifying e.g. facial features of the dog but a low gradient with respect to pixels that make up the meadow, and a uniformly low gradient given an image of a meadow. Such gradients would be very valuable not just in modelling real world functions more accurately and improving generalization, but in making the output of neural networks more explainable and avoiding susceptibility to attacks with adversarial inputs. Understanding representational capacity and pseudo-linearity In section 7, we explained how dilution reduces gradient growth but may harm representation capacity. While removing untrainable non-orthogonality from the initial functions may not be harmful, to achieve large levels of dilution and thus large amounts of GSC reduction, many ResNet architectures also suppress the size of the residual function relative to the initial function. This may happen naturally when the size of the skip path grows via repeated addition as we go deeper, or by deliberately scaling down residual blocks . Clearly, if a neural network can only represent functions that are very close to linear functions, it may not be able to model the ground truth function well. However, there exist no mechanisms to determine how much dilution is harmful for a given ground truth function or dataset. Dilution is not only present in ResNet and special constructs such as looks-linear initialized ReLU networks, but even in vanilla, Gaussian initialized MLPs. For example, a SeLU nonlinearity can be more easily approximated by a linear function in terms of mean square error over a unit Gaussian input than a ReLU nonlinearity. We suspect that this is related to gradients in a SeLU MLP exploding more slowly than in a batch-ReLU MLP. Assessing the total amount of "linearity" present in a network is an open question. Therefore, we also cannot make blanket statements such as "SeLU is superior to batch-ReLU because gradients explode more slowly", because a batch-ReLU MLP with fewer layers might in some sense have as much representational power as a SeLU MLP with more layers. Finally, the impact of dilution on the representational power conferred by depth is an open question. How far does the orthogonal initial state take us? An orthogonal initial state reduces gradients via dilution, which allows for relatively larger updates, which enables increased growth of residual functions, which allows for greater effective depth. 
However, as residual functions grow, dilution decreases, so the gradient increases, so updates must shrink, so the growth of residual functions slows, so the growth of effective depth slows. In other words, for the network to become deeper, it needs to be shallow. Therefore, while training from an orthogonal initial state can increase effective depth, we expect this effect to be limited. Additional techniques could be required to learn functions which require a compositional representation beyond this limit. • x and y are generally used to refer to the components of a datapoint. Then, we have (x, y) ∈ D.• X refers to a vector of dimension d, i.e. the same dimension as the x component of datapoints. Similarly, Y refers to an element of the domain of possible labels. We call X a'data input' and Y a'label input'.• F l refers to a vector of dimension d l, i.e. the same dimension as f l.• We write f l (θ, x) as a short form of x) )..)). Sometimes, we omit x and / or θ. In that case, x and / or θ remains implicit. f l (θ, X) is an analogous short form. DISPLAYFORM0 • We write DISPLAYFORM1. Sometimes, we omit f k and / or θ. In that case, f k and / or θ remain implicit. f l (θ, F k) is an analogous short form.• We use f L+1, i L+1 and F L+1 interchangeably with x or X.• We say a random vector is'radially symmetric' if its length is independent of its orientation and its orientation is uniformly distributed.• We say a random matrix is'Gaussian initialized' if its entries are independent Gaussian random variables with mean zero and the standard deviation of all entries is the same.• We say an m * n random matrix is'orthogonally initialized' if it is a fixed multiple of an m * n submatrix of a max(m, n) * max(m, n) uniformly random orthogonal matrix.• We use parentheses to denote vector and matrix elements, i.e. A is the fourth element in the third row of the matrix A.• Throughout sections E and F, we assume implicitly that the GSC is defined and thus that neural networks are differentiable. All can be trivially extended to cover networks that are almost surely differentiable and directionally differentiable everywhere, which includes SeLU and ReLU networks.• All theoretical apply to arbitrary networks, not just MLPs, unless otherwise stated. However, some assumptions arising in proposition 7 and theorems 2 and 3 may be less easy to achieve in general architectures. We focus our discussion of these assumptions exclusively on the MLPs within the scope of this paper as outlined at the end of section 2. Let a'gradient-based algorithm' for training a mutable parameter vector θ from an initial value θfor a network f be defined as a black box that is able to query the gradient DISPLAYFORM0 at arbitrary query points (X, Y) but only at the current value of the mutable parameter vector θ. It is able to generate updates ∆θ which are added to the mutable parameter vector θ. Let the sequence of updates be denoted as ∆θ, ∆θ,... We define the successive states of θ recursively as θ (t) = θ (t−1) +∆θ (t). For simplicity, assume the algorithm is deterministic. In a residual network defined according to equation 2, we can write the gradient with respect to a parameter sub-vector as DISPLAYFORM1. Multiplying this out, we obtain 2 l−1 terms. We call a term'λ-residual' if it contains λ or more Jacobians of residual functions, as opposed to Jacobians of initial functions. Let res λ l (f, θ, X, Y) be the sum of all λ-residual terms in DISPLAYFORM2 Now consider two scenarios. 
In scenario, when the algorithm queries the gradient, it receives {DISPLAYFORM3, .., DISPLAYFORM4 e. the "regular" gradient. In scenario, it receives DISPLAYFORM5 e. a version of the gradient where all λ-residual terms are removed. Let the parameter vector attain states θ, θ,.. in scenario and θ (1,λ), θ (2,λ),.. in scenario. Then we say the'λ-contribution' at time t is θ (t) − θ (t,λ). Finally, we say the'effective depth at time t with threshold h' is the largest λ such that there exists an l with ||θ DISPLAYFORM6 There is no objectively correct value for the threshold h. In practice, we find that the λ-contribution decreases quickly when λ is increased beyond a certain point. Hence, the exact value of h is not important when comparing different networks by effective depth. The impact that the shift θ DISPLAYFORM7 has on the output of the network is influenced by the scale of θ (t) l as well as GSC(l, 0). If those values vary enormously between layers, it may be advisable to set different thresholds for different layers. Unfortunately, computing the effective depth measure is intractable as it would require computing exponentially many gradient terms. In this section, we explain how we estimate effective depth in our experiments. In this paper, we train networks only by stochastic gradient descent with either a single step size for all layers or a custom step size for each layer. Our algorithm for computing effective depth assumes this training algorithm. Vanilla networks Assume that the network is expressed as a residual network as in equation 2. Let B be the batch size, let c (t) l be the step size used at layer l for the t'th update and let B) )) be the batch of query points used to compute the t'th update. Then SGD computes DISPLAYFORM0 DISPLAYFORM1 For any update t and query point b, we estimate its λ-contribution at layer l as follows. For unparametrized layers, ||r k || op is set to zero. For linear layers, it is the operator norm of the residual weight matrix. The final estimate of the length of the λ-contribution at layer l for the entire training period is then simply the sum of the lengths of the estimated λ-contributions over all time points and query points. The core assumption here is that applying the Jacobian of the initial function of a given layer will increase the lengths of all terms approximately equally, no matter how many residual Jacobians they contain. In other words, we assume that in λ-residual terms, the large singular values of layer-wise Jacobians do not compound disproportionately compared to other terms. This is similar to the core assumption in theorem 1 in section F.1.We conservatively bound the impact of the Jacobian of the initial function with the impact of the Jacobian of the entire layer, i.e. DISPLAYFORM2 We use ||r k || op as a conservative estimate on how a residual Jacobian will increase the length of a term. We use the sum of the lengths of all λ-residual terms in a batch as a conservative bound on the length of the λ-contribution of the batch. In essence, we assume that all λ-residual terms have the same orientation. Finally, we use the sum of the lengths of the λ-contributions within each update as an estimate of the length of the total λ-contribution of the entire training period. On the one hand, this is conservative as we implicitly assume that the λ-contributions of each batch have the same orientation. 
On the other hand, we ignore indirect effects that λ-contributions in early batches have on the trajectory of the parameter value and hence on λ-contributions of later batches. Since we are ultimately interested in effective depth, we can ignore these second-order effects as they are negligible when the total λ-contribution is close to a small threshold h. Overall, we expect that our estimate of the effective depth (e.g. FIG5) is larger than its actual value. This is bolstered by the robustness of some of our trained networks to Taylor expansion (see FIG5).ResNet For ResNet architectures, we need to tweak our estimate of effective depth to take into account skip connections. Below, we detail how the variable arr is modified as it crosses a skip connection / residual block. We write f n (f m) = s n (f m) + ρ n (f m), where f n is the layer at which the skip connection terminates, f m is the layer at which the skip connection begins, s n is the function computed by the skip connection and ρ n (f m) = ρ n (f n+1 (..f m−1 (f m)..)) is the function computed by the residual block. We write f k = i k + r k for n + 1 ≤ k ≤ m − 1 and ρ n = i n + r n, i.e. we break down each layer in the residual block into an initial function and a residual function. In line 11, the combined effect of the skip connection and the initial functions of the residual block is approximated by the effect of the entire block, i.e. DISPLAYFORM3. In the same line, we must subtract the impact of the initial functions accumulated while passing through the residual block, i.e. DISPLAYFORM4. The impact of the residual functions in the block is, correctly, unaffected by the skip connection and bounded by the operator norm, as before. The effective depth measure has several limitations. One can train a linear MLP to have effective depth much larger than 1, but the will still be equivalent to a depth 1 network. Consider the following training algorithm: first randomly re-sample the weights, then apply gradient descent. Clearly, this algorithm is equivalent to just running gradient descent in any meaningful sense. The re-sampling step nonetheless blows up the residual functions so as to significantly increase effective depth. The effective depth measure is very susceptible to the initial step size. In our experiments, we found that starting off with unnecessarily large step sizes, even if those step sizes were later reduced, lead to worse outcomes. However, because of the inflating impact on the residual function, the effective depth would be much higher nonetheless. Effective depth may change depending on how layers are defined. In a ReLU MLP, for example, instead of considering a linear transformation and the following ReLU operation as different layers, we may define them to be part of the same layer. While the function computed by the network and the course of gradient-based training do not depend on such redefinition, effective depth can be susceptible to such changes. E.1 PROPOSITION 1 Proposition 1. 
Given:• a neural network f of nominal depth L• an initial parameter value θ• a mutable parameter value θ that can take values in some closed, bounded domain Θ• a dataset D of datapoints (x, y)• a closed, bounded domain D of possible query points (X, Y)• a function ||.|| from matrices to the reals that has c||.|| = ||c.|| and ||.|| ≥ 0• some deterministic algorithm that is able to query gradients of f at the current parameter value and at query points in D and that is able to apply updates ∆θ to the parameter value• constant r Assume:• Running the algorithm on f with θ initialized to θ for a certain number of updates T causes θ to attain a valueθ at which f attains some error value E final on D.• At every triplet (θ, X, Y) ∈ Θ × D, we have ||J DISPLAYFORM0 Then we can specify some other neural network f and some other initial parameter value θ such that the following claims hold:1. f has nominal depth L and the same compositional depth as f.2. The algorithm can be used to compute T updates by querying gradients of f at the current parameter value and at query points in D which cause θ to attain a valueθ where f attains error E final on D and makes the same predictions as f (θ) on D. Proof. Since Θ and D are closed and bounded, so is Θ × D. Therefore for all 0 ≤ l ≤ k ≤ L, both ||J l k || and ||T l k || attain their infimum on that domain if it exists. ||.|| is non-negative, so the infimum exists. ||.|| is non-zero on the domain, so the infimum, and therefore the minimum, is positive. Since f has finite depth, there is an r such that for all tuplets (θ, X, Y, k, l), we have ||J l k || ≥ r k−l and DISPLAYFORM0 r. Now, we define f via its layer functions. DISPLAYFORM1 f and f clearly have the same nominal and compositional depth, so claim holds. Given any vector v with L sub-vectors, define the transformation DISPLAYFORM2 We use the algorithm to train f as follows. Whenever the algorithm queries some gradient value df dθ, we instead submit to it the value R −1 (df dθ). Whenever the algorithm wants to apply an update ∆θ to the parameter, we instead apply R −1 (∆θ). Let S (t), 0 ≤ t ≤ T be the state of the system after applying t updates to θ under this training procedure. Let S (t), 0 ≤ t ≤ T be the state of the system after applying t updates to θ when the algorithm is run on f. Then the following invariances hold. DISPLAYFORM3, where θ (t) is the value of θ under S (t) and θ (t) is the value of θ under S (t).B f makes the same predictions and attains the same error on D under S (t) as f under S (t).C Any state the algorithm maintains is equal under both S (t) and S (t).We will show these by induction. At time t = 0, we have θ = R −1 (θ ) as chosen, so (A) holds. It is easy to check that (B) follows from (A). Since the algorithm has thus far not received any inputs, (C) also holds. Now for the induction step. Assuming that θ (t) = R −1 (θ (t) ), it is easy to check that DISPLAYFORM4 ). Therefore, whenever the algorithm queries a gradient of f, it will re- DISPLAYFORM5. Therefore, the algorithm receives the same inputs under both S (t) and S (t). Since the internal state of the algorithm is also the same, and the algorithm is deterministic, the update returned by the algorithm is also the same and so is the internal state after the update is returned, which completes the induction step for (C). Because the algorithm returns the same update in both cases, after the prescribed post-processing of the update under f, we have ∆θ DISPLAYFORM6 ). 
This completes the induction step for (A) and again, (B) follows easily from (A).(B) implies directly that claim holds. Finally, for any tuplet (θ, X, Y, k, l), we have ||T DISPLAYFORM7 Therefore, claim also holds, which completes the proof. Notes:• The condition that the Jacobians of f always have non-zero norms may be unrealistic. For practical purposes, it should be enough to have Jacobians that mostly have non-zero norms. This leads to a network f that has exploding Jacobians wherever f has Jacobians of size above some threshold, where that threshold can be arbitrarily chosen.• Claim of the proposition does not include the case (k, l) = and it does not include Jacobians with respect to the input X. These Jacobians have to be the same between f and f if we require f to have the same error and predictions as f. However, if we are ok with multiplicatively scaled errors and predictions, claim can be extended to cover those two cases. Scaled training errors and predictions are generally not a problem in e.g. classification.• Note that not only does the algorithm achieve the same predictions in the same number of updates for both f and f, but the computation conducted by the algorithm is also identical, so f is as "easy to train" as f no matter how we choose to quantify this as long as we know to apply the scaling transformation.• There are no constraints on the explosion rate r. If we can successfully train a network with some explosion rate, we can successfully train an equivalent network with an arbitrary explosion rate.• f is very similar to f, so this proposition can be used to construct trainable networks with exploding Jacobians of any shape and depth as long as there exists some trainable network of that shape and depth.• The proposition can be easily be extended to non-deterministic algorithms by using distributions and expectations.• The proposition can be easily extended to use directional derivatives instead of total derivatives to cover e.g. ReLU and SeLU nonlinearities. Proposition 2. Let U be the uniform distribution over the hypersphere. Then GSC(k, l) measures the quadratic expectation Q of the relative size of the change in the value of f l in response to a change in f k that is a small multiple of a random variable drawn from U.Equivalently, GSC(k, l) = lim →0 Q u∼U DISPLAYFORM8 Proof. We use LΣR T to denote the singular value decomposition and s i to denote singular values. DISPLAYFORM9 Proposition 3. Let U be the uniform distribution over the hypersphere. Assume f k is a fullyconnected linear layer without trainable bias parameters and θ k contains the entries of the weight matrix. Then GSC(k, l) DISPLAYFORM10 measures the quadratic expectation Q of the relative size of the change in the value of f l in response to a change in θ k that is a small multiple of a random variable drawn from U.Equivalently, GSC(k, l) DISPLAYFORM11 Further, if θ k is random and• all entries of θ k have the same quadratic expectation• all products of two different entries of θ k have an expectation of 0• the orientation of θ k is independent of its length DISPLAYFORM12 Proof. Throughout this derivation, we will use θ k to refer to both the parameter sub-vector and the weight matrix. Similarly, we will use u to refer to both a perturbation of the parameter sub-vector and of the weight matrix. We use LΣR T to denote the singular value decomposition and s i to denote singular values. DISPLAYFORM13 Further, assume that θ k is random and fulfills the conditions stated. 
Under those conditions, θ k is the product of a random scalar length variable and an independent random vector orientation variable θ k of unit length. Then for all DISPLAYFORM14. Since all entries of θ k have the same quadratic expectation, all entries in θ k have the same quadratic expectation. Further, DISPLAYFORM15, so the expectation of the product of two different entries of θ k is 0.Then, we have: DISPLAYFORM16 The conditions stated in the proposition for the random parameter sub-vector θ k are fulfilled, for example, if the corresponding weight matrix is either Gaussian initialized or orthogonally initialized. Therefore, the most popular initialization strategies for weight matrices are covered by this proposition. Proposition 4. Given: DISPLAYFORM0 • constants c 2,.., c L and γ 1,.., γ L• a network f of nominal depth L defined via its layer functions as follows. DISPLAYFORM1 Proof. Let c 0 = c 1 = 1. Then we have ||f l || 2 = c l ||f l || 2 for 0 ≤ l ≤ L and we have DISPLAYFORM2 Here, we consider general multiplicative rescalings provided they do not change the predictions and error values of the network. To ensure this, each layer function must compensate for the factor introduced by the previous layer as well as for the rescaling of the parameter. Not all network transformations that are used in practice to control the scale of forward activations fall under this proposition. Changing the scale of weights in a tanh or SeLU network or adding normalization layers is not covered. These changes can have a drastic impact on the high-level properties of the network, as shown throughout the paper. On the other hand, changing the scale of weights in a ReLU network is covered by the proposition, as long as the error layer f 0 also compensates for this rescaling. Also, changing the scale of weights in any architecture where linear layers are followed by a normalization layer is covered by the proposition. Proposition 5. Assuming the approximate decomposability of the norm of the product of Jaco- DISPLAYFORM0 Proof. DISPLAYFORM1 Proposition 6. Any endomorphism on the hypersphere composed of (i) a strictly monotonic, continuous nonlinearity σ that has σ = 0, (ii) multiplication with a full-rank matrix and (iii) length-only layer normalization is bijective. Proof. We will prove this by showing that the inverse image of any point under such an endomorphism is a single point. Take any point on the hypersphere. The inverse image under length-only layer normalization is a ray from the origin not including the origin. The inverse image of this ray under multiplication with a full-rank matrix is also a ray from the origin not including the origin. What remains to be shown is that the inverse image of this ray under the nonlinearity layer, when intersected with the hypersphere, yields a single point. We will show this via a series of claims. Let the dimension of the hypersphere be d and its radius be r. Claim 1: If a point on this ray has an inverse image, that inverse image is a single point. Let this point on the ray be x. Assume its inverse image contains two points y and z. Then for 1 ≤ i ≤ d, σ(y(i)) = x(i) and σ(z(i)) = x(i) and so σ(y(i)) = σ(z(i)). But y = z, so there exists an i such that y(i) = z(i). So there exist two different values y(i) and z(i) at which σ returns the same . But σ is strictly monotonic. Contradiction. 
Claim 2: If two points x 1 and x 2 on the ray have inverse images y 1 and y 2 and x 1 is closer to the origin than x 2, then for 1 ≤ i ≤ d, we have |y 1 (i)| ≤ |y 2 (i)|.For 1 ≤ i ≤ d, σ attains x 2 (i) at y 2 (i) and 0 at 0. Since σ is strictly monotonic, continuous and 0 ≤ |x 1 (i)| ≤ |x 2 (i)| and x 1 (i) and x 2 (i) have the same sign, σ attains x 1 (i) at a point between 0 and y 2 (i). Hence, |y 1 (i)| ≤ |y 2 (i)| as required. Claim 3: The function f that assigns to each point on the ray that has an inverse image the length of that inverse image is strictly increasing in the direction away from the origin. Take any two points on the ray x 1 and x 2 that have inverse images y 1 and y 2 where x 1 is closer to the origin. By the previous claim, for 1 ≤ i ≤ d, we have |y 1 (i)| ≤ |y 2 (i)| and therefore ||y 1 || 2 ≤ ||y 2 || 2 and therefore f (x 1) ≤ f (x 2). Assume f (x 1) = f (x 2). Then we must have DISPLAYFORM2 Since σ is strictly monotonic and σ = 0, σ either preserves the sign of all inputs or reverses the sign of all inputs. Since x 1 (i) and x 2 (i) have the same sign, so do y 1 (i) and y 2 (i). So y 1 (i) = y 2 (i), so y 1 = y 2, so the forward images of y 1 and y 2 are the same, so DISPLAYFORM3 Claim 4: The function f that assigns to each point on the ray that has an inverse image the length of that inverse image is continuous. Since f is only defined on a 1-dimensional space, it is enough to show left-continuity and rightcontinuity. Part 1: left-continuity. Take a sequence of points on the ray with inverse images that approach some point x lim on the ray from the left and assume that x lim also has an inverse image y lim. Then we need to show that the length of y lim is the limit of the lengths of the inverse images of the sequence. It is enough to show this for the monotonic re-ordering of the sequence. Let that monotonic re-ordering be x n. Then we have x n → x lim and ||x n || increases. By claim 2, for 1 ≤ i ≤ d, |y n (i)| is an increasing sequence. This means that |y n (i)| either converges or it is an unbounded sequence. If the latter is true, then it will exceed |y lim (i)|. But since x lim is at least as large as all x n, again by claim 2 we must have |y lim (i)| ≥ |y n (i)|. Contradiction. So |y n (i)| converges. Since σ is strictly monotonic and σ, σ either preserves the sign of all values or it reverses the sign of all values. Since for each 1 ≤ i ≤ d, the x n (i) all have the same sign because the x n are on a ray, the y n (i) all have the same sign and so since |y n (i)| converges, y n (i) converges. Let its limit be y lim (i). Because σ is continuous, its value at y lim (i) is the limit of σ(y n (i)). But that is x lim (i). So if y lim is the vector made up of the y lim (i), it is the inverse image of x lim. i.e. y lim = y lim. Since ||.|| 2 is also a continuous function, ||y n || 2 → ||y lim || 2 and so ||y n || 2 → ||y lim || 2 and so f (x n) → f (x) as required. Part 2: right-continuity. This case is analogous. We have a decreasing sequence x n and so decreasing sequences |y n (i)| and so convergent sequences y n (i) with a limit y lim that is equal to y lim and so f (x n) → f (x) as required. Claim 5: The co-domain of the function f that assigns to each point on the ray that has an inverse image the length of that inverse image is the positive reals. We argue by contradiction. Assume the co-domain of f is not the positive reals. Then the set S of positive reals not attained by f is non-empty. Let s be the infimum of S. Case 2: s > 0 and s ∈ S. 
Then there exists a sequence x n of points on the ray such that f (x n) → s and f (x n) is strictly increasing. By claim 3, ||x n || 2 is strictly increasing. Let the inverse images of the x n be y n. By claim 2, |y n (i)| is increasing for 1 ≤ i ≤ d. Since |y n (i)| ≤ ||y n || 2 < s, |y n (i)| is bounded from above, so it converges. As σ is strictly monotonic and σ = 0, σ either preserves the sign or reverses it for all inputs. So since the x n (i) all have the same sign, so do the y n (i). So y n (i) converges. Let this limit be y lim (i). Since for 1 ≤ i ≤ d, y n (i) → y lim (i), we have y n → y lim where y lim is the vector composed of the y lim (i). Since σ is continuous, the forward image of y lim is the limit of forward images of the y n. Since the forward images of the y n lie on the ray, so does their limit. Hence, the forward image of y lim lies on the ray. Call it x lim. Since length is also continuous, s = lim n→inf ||y n || 2 = ||y lim || 2. So f (x lim) = s so s ∈ S. Contradiction. Case 3: s > 0 and s ∈ S. Then there is a point x on the ray for which f (x) = s. Let its inverse image be y. Let I be the set of indeces i with 1 ≤ i ≤ d and x(i) = 0. Since σ has σ = 0 and it is strictly monotonic, y(i) = 0. For i ∈ I, let σ max (i) = σ(2y(i)). Since σ has σ = 0 and it is strictly monotonic, we have |σ(2y(i))| > |σ(y(i))| > 0 and so |σ max (i)| > |x(i)| > 0 and also σ max (i) and x(i) have the same sign. Let C = min i∈I σmax(i)x(i). Take some vector y that can vary. Since σ is continuous, as y (i) varies between y(i) and 2y(i), it attains all values between x(i) and σ max (i). So for all 1 ≤ c ≤ C, we can set y (i) to some value y c (i) such that σ(y c (i)) = cx(i). So the vector y c that has the aforementioned y c (i) components for i ∈ I and has zero components for i ∈ I is the inverse image of cx. So f is defined on cx for 1 ≤ c ≤ C. Let s:= f (Cx). By claim 3, f is strictly increasing so s > s. By claim 4, f is continuous. So between x and Cx, f takes all values between and including s and s. Also, by the original definition of s, f attains all positive real values less than s. So f attains all positive real values less than s. But s was defined to be the infimum of positive real values that f does not attain. Contradiction. Claim 6: The inverse image of the ray intersects the hypersphere in a single point. By claim 5, there is a point on the ray that has an inverse image of length r. By claim 3, there is exactly one such point. Therefore, the inverse image of the ray contains exactly one point of length r, so it intersects the hypersphere in exactly one point, as required. The proposition also applies if each neuron in the nonlinearity layer uses a different nonlinearity σ i as long as it fulfills the stated conditions. We say a random function f b is'k-diluted in expectation' with respect to random vector v if there exists a random matrix S b and a random function DISPLAYFORM0 We say a random function ρ(v) is'scale-symmetric decomposable' (SSD) if it can be written as u ρ ρ (v ||v||2)||v|| 2, where ρ is a random scalar function and u ρ is uniformly distributed on the hypersphere and independent of both ρ and v. We say a random matrix S is'scale-symmetric decomposable' (SSD) if Sv, when viewed as a function of the vector v is SSD. Proposition 7. Let u be a uniformly distributed unit length vector. 
Given random functions f b, 1 ≤ b ≤ B, which are k b -diluted in expectation with respect to u, a matrix S b that is either SSD or a multiple of the identity and an SSD random function ρ b:= f b − S b where all the S b and ρ b are independent, DISPLAYFORM1 2 -diluted in expectation with respect to u. Proof. Let U be the uniform distribution over unit length vectors and u ∼ U. We will procede by induction over B, where the induction hypothesis includes the following claims. DISPLAYFORM2 Let's start by looking at the case B = 1. Claim follows directly from the conditions of the proposition. We have: DISPLAYFORM3 This is claim. For any u, ρ 1 (u) is radially symmetric because ρ 1 is SSD. If S 1 is SSD, S 1 u is also radially symmetric for arbitrary u. If S 1 is a multiple of the identity, S 1 u is radially symmetric because u is radially symmetric. In either case, S 1 u is radially symmetric. Because the orientation of ρ 1 (u) is governed only by u ρ1 which is independent of both u and S 1, the orientations of S 1 u and ρ 1 (u) are independent. But the sum of two radially symmetric random variables with independent orientations is itself radially symmetric, so f 1 (u) is also radially symmetric. This yields claim. Now for the induction step. Set B to some value and also define k DISPLAYFORM4 DISPLAYFORM5 If S 1 is a multiple of the identity, by the definition of c 1, it is c 1 times the identity. Therefore DISPLAYFORM6 And analogously we have DISPLAYFORM7 • and f •, we obtain claim. Claim, when substituting in k •, S • and f • becomes DISPLAYFORM8 which is true, so we have claim.Consider S 1 S 2..S B u = S 1 S • u. We know S • u is radially symmetric by the induction hypothesis, so if S 1 is a multiple of the identity, so is S 1 S • u. If S 1 is SSD, then S 1 S • u is radially symmetric for any value of S • u. In either case, S 1 S • u is radially symmetric. DISPLAYFORM9 We know f • u is radially symmetric by the induction hypothesis. We also have DISPLAYFORM10 is radially symmetric with an orientation independent of that of S 1 f • u because it is governed only by u ρ1. The sum of two radially symmetric random variables with independent orientation is itself radially symmetric, so f 1 f • u is radially symmetric. DISPLAYFORM11 is radially symmetric by the induction hypothesis so as before, S 1 ρ • (u) is radially symmetric. And again, DISPLAYFORM12 ) is radially symmetric with independent orientation, so the sum f 1 f 2..f B u − S 1 S 2..S B u is radially symmetric. So we also have claim. This completes the proof. A Gaussian initialized matrix and an orthogonally initialized matrix are both SSD. Therefore the condition that the S b are either the identity or SSD is fulfilled for all popular skip connection types. Unfortunately, many ResNets do not quite fulfill the SSD condition on ρ b, but they come close. If the last operation of each ρ b is multiplication with an SSD matrix, a popular choice, the orientation of ρ b is indeed governed by an independent, uniform unit length vector u ρ as required. However, many choices of ρ b do not preserve the length of the incoming vector ||v|| 2 as in an SSD function. However, because the inputs to each ρ b are random and high-dimensional, we do not expect their lengths to vary much, especially if the original inputs to the network have themselves been normalized. So the loss of the information of the length of the incoming vector should not cause the overall behavior to change significantly. 
A block function ρ b that is SSD, for example, is any such function that is composed of only linear and ReLU layers, where the final layer is Gaussian or orthogonally initialized. Finally, note that this proposition applies only in expectation over randomly initialized matrices. As long as those matrices are high-dimensional, we expect it to apply approximately to specific realizations of those matrices as well. See section D for the formal definition of effective depth and related concepts. Consider some MLP f with nominal depth L and layers f l, 1 ≤ l ≤ L. Let its compositional depth be N and its linear layers be f ln, 1 ≤ n ≤ N where l 1 < l 2 <.. < l N. Let each linear layer be the sum of an unparametrized initial function i ln and a parametrized residual function r ln (θ ln). i ln represents multiplication with the initial weight matrix and is used interchangeably to denote that initial weight matrix. r ln (θ ln) represents multiplication with the residual weight matrix and is used entries of the residual weight matrix. Let an N -trace φ N be a subset of {1, .., N}. Let Φ N be the set of all possible N -traces and let Φ λ N be the set of all N -traces of size λ or more. We define the'gradient term' G(φ N, f, θ, X, Y): DISPLAYFORM0 if layer k is not a linear layer, J k = r ln (θ ln) if layer k corresponds to linear layer l n and n ∈ φ N, and J k = i ln if layer k corresponds to linear layer l n and n ∈ φ N.Let res DISPLAYFORM1 Theorem 1. Consider an MLP f as defined above with all parameter sub-vectors initialized to θ ln = 0. Taken some set of possible query points D. Let each of the parameter sub-vectors be updated with a sequence of updates ∆θ DISPLAYFORM2 Let K(λ, n) be the number of ways to choose λ distinct positive integers such that their sum is n. DISPLAYFORM3, the largest number that can be chosen is n − DISPLAYFORM4 alg represents the gradient-based algorithm and the quantity that is ultimately bounded by h is the first-order approximation of the relative λ-contribution at layer l N until time T. To obtain that the network has effective depth Λ, all we need is to set h to a small value. In that case, the first-order approximation is sufficient. Now, we analyze the four conditions in turn. Condition states that the algorithm computes the update. For convenience, we write the algorithm as a deterministic function of the gradient of the layer for which the update is computed. The proof can be trivially extended to algorithms that use the gradients of other layers, past gradients and as well randomness if we add the same dependencies to condition. Also for convenience, we assume a batch size of 1. We can apply the to larger batch sizes, for example, by having alg use past gradients and setting the majority of updates to zero. Condition reflects the argument from section 4.3 that the area around the current parameter value in which the gradient is reflective of the function is bounded by a hypersphere of relative radius 1 GSC(ln,0), and the assumption that gradients explode, i.e. GSC(l n, 0) ≥ cr ln. Note that for convenience, we divide the size of the update ||∆θ DISPLAYFORM5 ln || 2 by the weight matrix in the initialized state ||i ln || 2 instead of ||θ (t−1) ln || 2. This is realistic given the general observation that the largest useful update size decreases in practice when training a deep network. Therefore, we can bound all updates by the largest useful update size in the initialized state. The strongest condition is. 
It can be understood as making two distinct assertions. Firstly, ignoring the alg function, it bounds the length of the sum of the Λ-residual terms. In essence, it requires that on average, the size of these terms is "what one would expect" given the L 2 norm of the initial and residual weight matrices up to some constant c. In other words, we assume that in λ-residual terms, the large singular values of layer-wise Jacobians do not compound disproportionately compared to the full gradient. The bound is however also very conservative in the sense that it implicitly assumes that all λ-residual terms have the same orientation. Secondly, it asserts that alg is "relatively Lipschitz" over the gradient. This is fulfilled e.g. for SGD and SGD with custom layer-wise step sizes as used in our experiments. It is fulfilled by SGD with momentum as long as size of the momentum term is bounded below. In theory, it is not fulfilled by RMSprop or Adam as gradients on individual weight matrix entries can be "scaled up" arbitrarily via the denominator. In practice, the regularization term used in the denominator prevents this, although this is rarely necessary. Finally, condition states that the training time is limited. Importantly, the bound on T is exponential in Λ and independent of both L and N. Note that we did not attempt to make the bound tight. As it stands, unfortunately, the bound too loose to have much practical value. It would indicate that networks can be trained to far greater depth than is possible in practice. The limitation of effective depth in practice is studied in section 4.4. Theorem 2. Consider a neural network f with random parameter θ composed of layer functions f l that are surjective endomorphisms on the d-dimensional hypersphere. Let J l:= J l l+1. Let S be the hypersphere and let S be the uniform distribution on it. Let F l, 1 ≤ l ≤ L, be random vectors independent of θ where F l ∼ S. Assume:1. The θ l are independent of each other.2. Each Jacobian J l (θ l, F l+1) has d − 1 nonzero singular values which, as F l+1 and θ l vary, are independent and drawn from some distribution P l.3. There exist some > 0 and δ > 0 such that DISPLAYFORM0 4. f l (θ l, F l+1) is a uniformly distributed vector on the hypersphere.5. For any unit length vector u, J l (θ l, F l+1)u can be written as l (θ l, F l+1, u)u l. Here, l is random scalar independent of both u l and f l (θ l, F l+1). u l, conditioned on f l (θ l, F l+1), is uniformly distributed in the space of unit length vectors orthogonal to f l (θ l, F l+1).Let X ∼ S be a random vector independent of θ. Then setting r(δ,):= √ 1 + δ 2 we have DISPLAYFORM1 Proof. The hypersphere is a d − 1-dimensional subspace in R d. Hence, any endomorphism f l on that subspace will have Jacobians with at least one zero singular value with right eigenvector equal to the normal of the subspace at the input and left eigenvector equal to the normal of the subspace at the output. Since the subspace is the unit hypersphere, the normal at the input is the input and the normal at the output is the output. By assumption, the Jacobian has no other zero singular values. Let the singular values of the Jacobian be s 1,.., s d−1, s d. WLOG we set s d = 0.Throughout this proof, we use det(J l) to denote the product of singular values of the Jacobian excluding the zero singular value, i.e. det(J l) = d−1 i=1 s i. We use ||J l || qm to denote the quadratic mean of the singular values excluding the zero singular value, i.e. 
DISPLAYFORM2 As f l is surjective, for fixed θ l, we have by integration by substitution DISPLAYFORM3 But we also have DISPLAYFORM4 i=1 |s i | ≥ 1. But the nonzero singular values are assumed to be independent by condition. So DISPLAYFORM5 Similarly, we have DISPLAYFORM6 The last identity uses condition. Let θ l k:= (θ l, θ l+1, .., θ k−1). Now, we will prove the following claim by induction: DISPLAYFORM7 L, as required. Now for the induction step. Assume (A) is true for some l. Then by the induction hypothesis, DISPLAYFORM8. This completes the induction step. Analogously, we have claim (A2): DISPLAYFORM9 The last line comes from the induction hypothesis. Now, let's look at the second term Q θ l,f l+1,u l+1 ||J l l+1 (θ l, f l+1)u l+1 || 2. u l+1 if uniform among unit length vectors orthogonal to f l+1. But this leads to u l+1 being orthogonal to the normal of the hypersphere at f l+1 and thus orthogonal to the right null space of J l l+1. Since u l+1 is also independent of θ l, we have DISPLAYFORM10 Putting those together, we obtain DISPLAYFORM11 This is the desired claim. Let's look at the conditions. Condition is standard for randomly initialized weight matrices. Conditions and are both fulfilled if the last two operations of each layer function are multiplication with a weight matrix and length-only layer normalization and that weight matrix is Gaussian or orthogonally initialized. If the weight matrix is orthogonally initialized, this is easy to see, because the linear transformation and the normalization operation commute. If we exchange those two operations then the last operation applied in both f l (θ l, F l+1) and J l (θ l, F l+1)u is the orthogonal transformation, which decouples the orientations of both terms from the length of J l (θ l, F l+1)u as well as decoupling the orientations of the terms from each other up to preserving their angle. Finally, note that J l (θ l, F l+1)u always lies in the left-null space of J l (θ l, F l+1). But that space is orthogonal to f l (θ l, F l+1), and hence the two terms are orthogonal as required. If the weight matrix is Gaussian initialized, note that the product of a Gaussian initialized matrix and an orthogonally initialized matrix is Gaussian initialized. Hence, we can insert an additional orthogonally initialized matrix and then proceed with the previous argument to show that conditions and are fulfilled. After applying a linear transformation with one of the two initializations, conditions and hold except for the length of f l is not 1. Hence, even if length-only layer normalization is not used as part of the endomorphism, we expect and to hold approximately in practice. As far as we can tell, conditions and are not fulfilled in practice. They are both used to derive from unit determinants a greater than unit qm norm. As long this implications holds for practical layer functions, and are not necessary. Theorem 3. Let g and u be random vectors. Consider a function f that is k-diluted with respect to u, a matrix S and a function ρ. Let R(v) be the Jacobian of ρ at input v. Let r:= DISPLAYFORM0 Q||ρ(u)||2Q||g||2.Assume that E(Su).(ρ(u)) = 0 and that E(gR(u)).(gS) = 0. Also assume that DISPLAYFORM1 Proof. We have DISPLAYFORM2 u represents the incoming activation vector of some residual block and g the incoming gradient. r represents a type of expectation over the ratio DISPLAYFORM3 ||ρ(u)||2||g||2, ignoring the skip connection. Therefore r can be viewed as the growth of the GSC. 
Similarly, DISPLAYFORM4 Q||ρ(u)+Su||2Q||g||2 represents the growth of the GSC after the skip connection has been added. The key assumptions are E(Su).(ρ(u)) = 0 and E(gR(u)).(gS) = 0. In plain language, we assume that the function computed by the skip connection is uncorrelated to the function computed by the residual block and that the same is true for the gradient flowing through them. For the forward direction, this is true if either the skip connection is Gaussian / orthogonally initialized or the last layer of the residual block is linear and Gaussian / orthogonally initialized and if the randomness of the initialization is absorbed into the expectation. Unfortunately, for the backward direction, such a statement cannot be made because the gradient has a complex dependence both on S and R. However, the assumption that this dependence between the forward and backward direction is immaterial has proven realistic in mean field theory based studies (see section B.1.1). Under this assumption, as in the forward direction, we require that either the skip connection is Gaussian / orthogonally initialized or the first layer of the residual block is linear and Gaussian / orthogonally initialized. Note that even if both assumptions are only fulfilled approximately, this is not catastrophic to the theorem. The other assumption is Q||gS||2Q||u||2 Q||Su||2Q||g||2 = 1. This is true if S is an orthogonal matrix and so specifically if S is the identity matrix. If S is Gaussian / orthogonally initialized, this is true if the randomness of the initialization is absorbed into the Q terms. Again, if this assumption only holds approximately, it is not catastrophic to the theorem. An implicit assumption made is that the distribution of the incoming gradient g is unaffected by the addition of the skip connection, which is of course not quite true in practice. The addition of the skip connection also has an indirect effect on the distribution and scale of the gradient as it flows further towards the input layer. The experiments in figure 3 bear out the theory discussed here. We define the first-order Taylor approximation T l of the bottom layers up to layer l recursively. Write i l (x) as the short form of i l (i l+1 (..i L (x)..)). Then DISPLAYFORM0 The maximum number of parametrized residual functions composed in T l is 2. Otherwise, only addition and composition with fixed functions is used. Hence, the compositional depth of T l is min(L − l, 2). Hence, the network f Taylor(l):= f 0 (y, f 1 (..f l−1 (T l (X))..)) has compositional depth max(l + 1, L).For ResNet architectures, as in section D.2, we divide each layer in the residual block into its initial and residual function. Then the definition of the Taylor expansion remains as above, except a term s l (T m (θ, X)) is added at each layer l where a skip connection, represented by skip function s l, terminates. T m is the Taylor expansion at the layer where the skip connection begins. The looks-linear initialization ('LLI') of ReLU MLPs achieves an approximate orthogonal initial state. Consider a ReLU MLP with some number of linear layers and a ReLU layer between each• kl error layer: f 0 (f 1, y) = ln f 1 (y) where y is an integer class label and f 1 has one entry per class. Note that normalization layers (batch normalization, layer normalization or length-only layer normalization) do not use trainable bias and variance parameters. 
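The looks-linear initialization mentioned above admits a compact sketch. The mirrored block construction below is the standard looks-linear recipe as we understand it; the exact variant used in the experiments may differ in details such as the choice of the inner matrix A, and the function names are ours.

```python
import numpy as np

def random_orthogonal(d, rng):
    # Uniformly random orthogonal matrix via QR of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

def looks_linear_block(d, rng):
    """Hidden-layer weight matrix of shape (2d, 2d) in mirrored form [[A, -A], [-A, A]].
    If the incoming pre-activations arrive as mirrored pairs (z, -z), then ReLU
    followed by this matrix maps z to A z (again as a mirrored pair), so the
    layer behaves like a linear, orthogonal map at initialization."""
    a = random_orthogonal(d, rng)
    top = np.concatenate([a, -a], axis=1)
    return np.concatenate([top, -top], axis=0)

def looks_linear_input(d_in, d, rng):
    # First linear layer: map the raw input x to the mirrored pair (Ax, -Ax).
    a = rng.standard_normal((d, d_in)) / np.sqrt(d_in)
    return np.concatenate([a, -a], axis=0)
```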
A network of compositional depth N contains N linear layers and N − 1 nonlinearity layers (ReLU, tanh or SeLU) inserted between those linear layers. If the network uses normalization layers, one normalization layer is inserted after each linear layer. For Gaussian noise experiments, the error layer is the dot product error layer. For CIFAR10 experiments, a softmax layer is inserted above the last linear or normalization layer and the error layer is a kl error layer. For Gaussian noise experiments, data inputs as well as predictions and labels have dimension 100. We used a compositional depth of 50. We generally used a uniform width of 100 throughout the network. However, we also ran experiments where the width of all layers from the first linear layer to the layer before the last linear layer had width 200. We also ran experiments where linear layers alternated in width between 200 and 100. For CIFAR10 experiments, data inputs have dimension 3072 and predictions have dimension 10. We use a compositional depth of 51. The first linear layer transforms the width to 100 and the last linear layer transformed the width to 10.The following initialization schemes for the weight matrices are used.• Gaussian: Each entry of the weight matrix is drawn as an independent Gaussian with mean 0. The variance of this Gaussian is one over the dimension of the incoming vector except when the weight matrix follows a ReLU layer. In that case, the variance of the Gaussian is two over the dimension of the incoming vector.• orthogonal: The weight matrix is a uniformly random orthogonal matrix. Note that this initialization scheme is only used for square matrices.• looks-linear: See section H.ResNet In all cases, the first layer is a linear layer. After that, there are 25 skip connections. Each skip connection bypasses a block of 6 layers: a normalization layer, a nonlinearity layer, a linear layer, another normalization layer, another nonlinearity layer, and another linear layer. Above the last skip connection, a final normalization layer is inserted, followed by softmax (CIFAR10 only) and then the error layer. For Gaussian noise experiments, we use a constant width of 100. For CIFAR10, the first linear layer transforms the width from 3072 to 100, and the last skip connection as well as the last linear linear in the last residual block transform the width from 100 to 10.Skip connections are identity skip connections, except the last skip connection in CIFAR10 experiments that is responsible for reducing the width. There, the skip connection multiplies its incoming value by a fixed 10 * 100 submatrix of a 100 * 100 orthogonal matrix. For Gaussian noise experiments, we also conducted some experiments where skip connections used random matrices where each entry is drawn from an independent Gaussian with mean 0 and the variance being one over the dimension of the incoming vector. For Gaussian noise experiments, both inputs and labels are 100-dimensional vectors were each entry is drawn from an independent Gaussian with mean 0 and variance 1 100. We normalized the input vectors to have length 10. We drew 100 independent datasets of size 10.000.For each dataset and each architecture we studied (see table 1 for the full list), we computed both the forward activations and the gradient for each datapoint. For architectures with batch normalization, all 10.000 datapoints were considered part of a single batch. Note that no training was conducted. 
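For reference, the synthetic data described above can be generated as follows; the function name and the seed handling are ours.

```python
import numpy as np

def make_gaussian_noise_dataset(n=10_000, d=100, input_length=10.0, seed=0):
    """One synthetic dataset as described above: Gaussian inputs renormalized to a
    fixed length, and Gaussian labels.  100 such datasets are drawn independently."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(1.0 / d), size=(n, d))
    x *= input_length / np.linalg.norm(x, axis=1, keepdims=True)
    y = rng.normal(0.0, np.sqrt(1.0 / d), size=(n, d))
    return x, y

datasets = [make_gaussian_noise_dataset(seed=s) for s in range(100)]
```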
We then computed the following metrics:• Expected GSC: At each layer l, we computed DISPLAYFORM0. Note that J 0 l is simply the "regular gradient" of the network.• Pre-activation standard deviation: For each nonlinearity layer l, we computed the standard deviation of the activations of each neuron in f l+1 over the 10.000 datapoints, i.e. DISPLAYFORM1 We then computed the quadratic mean of those standard deviations as a summary statistic for the layer.• Pre-activation sign diversity: For each nonlinearity layer l, at each neuron in f l+1, we computed min(pos, 1 − pos), where pos is the fraction of activations that were positive across the 10.000 datapoints. We then computed the mean of those values across the layer as a summary statistic. Finally, we obtained a summary statistic for each layer and architecture by averaging the over the 100 datasets. Results are shown in table 1, FIG3. For CIFAR10 experiments, we preprocessed each feature to have zero mean and unit variance. We used the training set of 50.000 datapoints and disregarded the test set. We used batches of size 1.000 except for the vanilla batch-ReLU architecture with Gaussian initialization, for which we used a batch size of 50.000. (See section 4.5 for the explanation.)We trained each architecture we studied (see table 2 for the full list) with SGD in two ways. First, with a single step size for all layers. Second, with a custom step size for each layer. Single step size We perform a grid search over the following starting step sizes: {1e5, 3e4, 1e4, 3e3, .., 1e − 4, 3e − 5, 1e − 5}. For each of those 21 starting step sizes, we train the network until the end-of-epoch training classification error has not decreased for 5 consecutive epochs. Once that point is reached, the step size is divided by 3 and training continues. Once the end-of-epoch training classification error has again not decreased for 5 epochs, the step size is divided by 3 again. This process is repeated until training terminates. Termination occurs either after 500 epochs or after the step size is divided 11 times, whichever comes first. The starting step size that obtains the lowest final training classification error is selected as the representative step size for which are presented in the paper. In this scenario, we use a different starting step size for each layer. After those step sizes are computed, smoothed and scaled as described in section I.4, we train the pre-trained network with those step sizes. As before, periodically, we divide all step sizes jointly by 3. As before, training is terminated after 11 divisions or when 500 epochs are reached, whichever comes first. We compute the following metrics:• Largest relative update size for each layer induced by the estimated optimal step size during the epoch where that optimal step size was estimated. See section I.4 for details.• Effective depth throughout training: see section D.2 for details. λ-contributions are accumulated from batch to batch.• Training classification error at the end of each epoch.• Training classification error when compositional depth is reduced via Taylor expansion after training: see section G for details.• GSC, pre-activation standard deviation and pre-activation sign diversity: for details, see the end of section I.2. Note that the expectations over the dataset were computed by maintaining exponential running averages across batches.• Operator norms of residual weight matrices after training. See table 2 and figures 2, 4, 5 and 6 for . 
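The layer-wise summary statistics listed above reduce to a few lines once the activations and Jacobians have been collected per datapoint. The sketch below reflects our reading of those definitions; in particular, the GSC is computed as the quadratic mean of the singular values of the Jacobian between the two layers, rescaled by the ratio of layer norms, as characterized by Proposition 2. Averaging over datapoints (or maintaining exponential running averages across batches) is left to the caller.

```python
import numpy as np

def gsc(jac, f_from, f_to):
    """GSC between two layers for one datapoint, following Proposition 2 as we read it.
    `jac` is the Jacobian of the downstream value `f_to` with respect to the upstream
    value `f_from`; for GSC(l, 0) this is the regular gradient of the error layer
    with respect to layer l."""
    qm = np.linalg.norm(jac) / np.sqrt(jac.shape[1])   # quadratic mean of singular values
    return qm * np.linalg.norm(f_from) / np.linalg.norm(f_to)

def preactivation_stats(preacts):
    """Summary statistics of one nonlinearity layer over a dataset.
    `preacts` has shape (num_datapoints, width).  Returns the quadratic mean of the
    per-neuron standard deviations and the mean sign diversity."""
    per_neuron_std = preacts.std(axis=0)
    qm_std = np.sqrt(np.mean(per_neuron_std ** 2))
    pos = (preacts > 0).mean(axis=0)                   # fraction of positive pre-activations
    sign_diversity = np.minimum(pos, 1.0 - pos).mean()
    return qm_std, sign_diversity
```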
We estimated the optimal step size for each linear layer under SGD for our CIFAR10 experiments. This turned out to be more difficult than expected. In the following, we describe the algorithm we used. It has five stages. Pre-training We started by pre-training the network. We selected a set of linear layers in the network that we suspected would require similar step sizes. In exploding architectures (vanilla batch-ReLU with Gaussian initialization, vanilla layer-tanh, vanilla batch-tanh, SeLU), we chose the second highest linear layer through the sixth highest linear layer for pretraining, i.e. 5 linear layers in total. We expected these layers to require a similar step size because they are close to the output and the weight matrices have the same dimensionality. For vanilla ReLU, vanilla layerReLU, vanilla tanh and looks-linear initialization, we chose the second lowest linear layer through the second highest linear layer (i.e. 49 linear layers in total) because the weight matrices have the same dimensionality. Finally, for ResNet, we chose the second lowest through the third highest linear layer (i.e. 48 linear layers in total), because the blocks those layers are in have the same dimensionality. We then trained those layers with a step size that did not cause a single relative update size of more than 0.01 (exploding architectures) or 0.001 (other architectures) for any of the pre-trained layers or any batch. We chose small step sizes for pre-training to ensure that pre-training would not impact effective depth. We pre-trained until the training classification error reached 85%, but at least for one epoch and at most for 10 epochs. The exact pre-training step size was chosen via grid search over a grid with multiplicative spacing of 3. The step size chosen was based on which step size reached the 85% threshold the fastest. Ties were broken by which step size achieved the lowest error. In the selection phase, we train each linear layer one after the other for one epoch while freezing the other layers. After each layer is trained, the change to the parameter caused by that epoch of training is undone before the next layer is trained. For each layer, we chose a step size via grid search over a grid with multiplicative spacing 1.5. The step size that achieved the lowest training classification error after the epoch was selected. Only step sizes that did not cause relative update sizes of 0.1 or higher were considered, to prevent weight instability. Now we can explain the need for pre-training. Without pre-training, the selection phase yields very noisy and seemingly random outcomes for many architectures. This is because it was often best to use a large step size to jump from one random point in parameter space to the next, hoping to hit a configuration at the end of the epoch where the error was, say, 88%. Since we used a tight spacing of step sizes, for most layers, there was at least one excessively large step size that achieved this spurious "success". Since we only trained a single layer out of 51 for a single epoch, the error of the "correct" step size after pre-training often did not reach, say, 88%. When we trained the network for 500 epochs with those noisy estimates, we obtained very high end-of-training errors. Pre-training ensures that training with an excessively high step size causes the error to exceed 85% again. Therefore, those step sizes are punished and step sizes that ultimately lead to a much better end-of-training error are selected. 
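The selection phase lends itself to a short sketch. Here `train_one_layer_for_one_epoch` is a hypothetical callback (it is not defined in the text) that trains only the given layer for one epoch from the pre-trained snapshot, rewinds the change, and reports the end-of-epoch training error together with the largest relative update size observed; the grid bounds are illustrative.

```python
def select_layer_step_size(train_one_layer_for_one_epoch, start=1e-7, factor=1.5,
                           n_candidates=80, max_rel_update=0.1, clip_margin=None):
    """Grid search over step sizes with multiplicative spacing `factor`.
    Returns the step size with the lowest end-of-epoch training error, skipping
    step sizes that cause relative updates of `max_rel_update` or more.  With
    `clip_margin` set (e.g. 0.001 for 0.1%), the search stops as soon as a
    candidate is clearly worse than the current best ("clipping")."""
    best_err, best_step = float("inf"), None
    step = start
    for _ in range(n_candidates):
        err, rel_update = train_one_layer_for_one_epoch(step)
        if rel_update >= max_rel_update:
            break                      # larger steps would be even more unstable
        if err < best_err:
            best_err, best_step = err, step
        elif clip_margin is not None and err >= best_err + clip_margin:
            break
        step *= factor
    return best_step
```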
Clipping Even though pre-training was used, for some architectures, it was still beneficial to add the following restriction: as we consider larger and larger step sizes during grid search, as soon as we find a step size for which the error is at least 0.1% higher than for the current best step size, the search is terminated. Clipping is capable of further eliminating outliers and was used if and only it improved the end-of-training error. It was used for vanilla tanh, ResNet layer-tanh and looks-linear layer-ReLU.For each linear layer, the largest relative update size induced by the step size obtained for that layer after the clipping phase (or after the selection phase if clipping was not used) during the epoch of training conducted in the selection phase is shown in the in figures 2A, 4A, 5A and 6A.Smoothing In this stage, we built a mini-regression dataset of (X, Y) points as follows. For each X from 1 to 51, we include the point (X, Y) where Y is the largest relative update size the step size selected for linear layer X after clipping induced during the epoch of training in the selection phase. We then fit a line via least-squares regression on that dataset in log scale. For each X, we thus obtain a smoothed value Y. The ratio Y Y was multiplied to the step size obtained for each layer at the end of the clipping phase. We added this phase because we found that the end-of-training error could still be significantly improved by reducing noise among the layer-wise step sizes in this way. Scaling Finally, we jointly scale all layer-wise step sizes with a single constant. That value is chosen as in the selection phase by trying a small constant, training for one epoch, rewinding that epoch, multiplying that constant by 1.5, rinse, repeat. Again, that process was terminated once any layer experiences an update of relative size at least 0.1. This stage is necessary because the size of the update on the entire parameter vector when all layers are trained jointly is ≈ √ 51 times larger than when only single layers are trained as in the selection phase. Hence, a scaling constant less than 1 is usually needed to compensate. Again, some architectures benefited from using clipping, where we terminated the scaling constant search as soon as one exhibited an error more than 0.1% above the current best scaling constant. Vanilla tanh, vanilla layer-tanh, ResNet layer-tanh and looks-linear layer-ReLU used this clipping. Formally, for each architecture, we trained three networks to completion. One using no clipping, one using only clipping during the scaling phase, and using the clipping phase as well as clipping during the scaling phase. Whichever of these three networks had the lowest end-of-training error was selected for presentation in the paper. To compare, for single step size training, we compared 21 end-of-training error values. | We show that in contras to popular wisdom, the exploding gradient problem has not been solved and that it limits the depth to which MLPs can be effectively trained. We show why gradients explode and how ResNet handles them. | 1,064 | scitldr |
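The smoothing stage described above amounts to a least-squares line fit in log scale over the points (layer index, largest relative update size) and a per-layer rescaling of the step size by the ratio of the fitted value Y' to the observed value Y. A small NumPy sketch of that stage (the variable names are ours):

import numpy as np

def smooth_step_sizes(step_sizes, rel_update_sizes):
    # Fit a line to log(rel_update_size) vs. layer index and rescale each
    # layer's step size by fitted/observed so the layer-wise profile is smooth.
    x = np.arange(1, len(rel_update_sizes) + 1)          # layer index, e.g. 1..51
    log_y = np.log(np.asarray(rel_update_sizes))
    slope, intercept = np.polyfit(x, log_y, deg=1)        # least squares in log scale
    fitted = np.exp(slope * x + intercept)                # smoothed values Y'
    ratio = fitted / np.asarray(rel_update_sizes)         # Y'/Y
    return np.asarray(step_sizes) * ratio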
In this paper, we are interested in two seemingly different concepts: \textit{adversarial training} and \textit{generative adversarial networks (GANs)}, and in particular in how these techniques can be used to improve each other. To this end, we analyze the limitations of adversarial training as a defense method, starting by questioning how well the robustness of a model generalizes. We then improve this generalizability via data augmentation with ``fake'' images sampled from a generative adversarial network. After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free. We give an intuitive explanation of this phenomenon and leave the theoretical analysis for future work. Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker in a single network. After end-to-end training and fine-tuning, our method simultaneously improves the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively. On the classifier side, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in (Madry \textit{et al.}, 2017), while our generator achieves performance competitive with SN-GAN.
Notations Throughout this paper, we denote the (image, label) pair as (x i, y i), i is the index of data point; The classifier parameterized by weights w is f (x; w), this function includes the final Softmax layer so the output is probabilities. We also define D(x) and G(z) as the discriminator and generator networks respectively. The adversarial example x adv is crafted by perturbing the original input, i.e. x adv = x + δ, where δ ≤ δ max. For convenience, we consider ∞ -norm in our experiments. The real and fake images are denoted as x real/fake, readers should differentiate the "fake" images with "adversarial" images 1. The training set is denoted as P real, this is the empirical distribution. Given the training set P real, we define empirical loss function DISPLAYFORM0 Generative adversarial network. This is a kind of algorithm that learns to model distribution either with or without supervision BID10, which is often considered as a hard task especially for high dimensional data (images, texts, audios, etc.). In recent years, GANs keep to be intensively studied, toghther with other competitive generative models such as variational autoencoder or VAE, which learns the latent representation of data via prior knowledge BID20, and auto-regressive model that models the conditional distribution given previous states (e.g. PixelCNN (van den) ). One advantage of GANs over other methods is that they are able to generate high quality images directly from certain distributions, whereas the other methods are either slow in generation, or yield blurry images. A GAN has two competing networks with different objectives: in the training phase, the generator G(z) and the discriminator D(x) are evolved in a minimax game, which can be denoted as a unified loss: min DISPLAYFORM0 Unlike traditional machine learning problems where we typically minimize the loss, is hard to optimize and that is the focus of recent literature. Among them, a guideline for the architectures of G and D is summarized in BID32. Other training techniques, including feature matching (similar to MMD-GAN BID23 BID1) and mini-batch discrimination are proposed in BID12 to improve the stability and quality of networks. For high resolution and photo-realistic image generation, currently the standard way is to first learn to generate low resolution images as the intermediate products, and then learn to refine them progressively BID8 BID19, this turns out to be more stable than directly generate high resolution images through a gigantic network. To reach the equilibrium efficiently, alternative loss metrics BID0 BID4 BID13 BID38 are applied and proven to be effective. Among them, BID0 theoretically explains why training the DCGAN is highly unstable -since the image manifold is highly concentrated towards a low dimensional manifold, and if two distributions P real and P fake are supported on two low dimensional manifolds that do not perfectly align, then there exists an "optimal discriminator D(x)" that tells apart two distributions with probability one. Moreover, under that situation, the gradient of discriminator ∇D(x) closes to zero and thus the training process is halted. Closely following that theorem, proposes to use Wasserstein-1 distance to measure the distance between real and fake data distribution. The ing network, namely "Wasserstein-GAN", largely improves the stability of GAN training. 
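For reference, the minimax game described earlier in this section is optimized by alternating gradient updates on D and G. The PyTorch sketch below uses the common non-saturating variant of the generator loss; it assumes D returns a single real/fake logit of shape (batch, 1) and that the optimizers are already constructed.

import torch
import torch.nn.functional as F

def gan_training_step(D, G, real_images, opt_D, opt_G, z_dim=128):
    device = real_images.device
    batch = real_images.size(0)
    ones = torch.ones(batch, 1, device=device)
    zeros = torch.zeros(batch, 1, device=device)

    # Discriminator step: push D(x_real) towards "real" and D(G(z)) towards "fake".
    z = torch.randn(batch, z_dim, device=device)
    fake = G(z).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(real_images), ones)
              + F.binary_cross_entropy_with_logits(D(fake), zeros))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step (non-saturating loss): push D(G(z)) towards "real".
    z = torch.randn(batch, z_dim, device=device)
    g_loss = F.binary_cross_entropy_with_logits(D(G(z)), ones)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()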
Another noteworthy work inspired by WGAN/WGAN-GP is spectral normalization, the idea is to estimate the operator norm σ max (W) of weights W inside layers (convolution, linear, etc.), and then normalize these weights to have 1-operator norm through dividing weight tensors by operator norm: W = W/σ max (W). Because ReLU non-linearity is already 1-Lipschitz, if we stack these layers together the network as a whole would still be 1-Lipschitz, that is exactly the prerequisite to apply Kantorovich-Rubinstein duality to estimate Wasserstein distance. Despite the success of aforementioned works, we want to address one missing part of these models: to the best of our knowledge, none of them consider the robustness of discrimination network D(x). This overlooked aspect can be problematic especially for high resolution images and large networks, this will be one of the central points of this paper. Adversarial attacks and defenses: Apart from GAN, another key ingredient of our method is adversarial examples, originated in BID36 and further studied in BID11. They found that machine learning models can be easily "fooled" by slightly modified images if we design a tiny perturbation according to some "attack" algorithms. In this paper we apply a simple yet efficient algorithm, namely PGD-attack BID25, to generate adversarial examples. Given an example x with ground truth label y, PGD computes adversarial perturbation δ by solving the following optimization with Projected Gradient Descent: DISPLAYFORM1 where f (·; w) is the network parameterized by weights w, (·, ·) is the loss function and for convenience we choose · to be the ∞ -norm in accordance with BID25, but note that other norms are also applicable. Intuitively, the idea of FORMULA2 is to find the point x adv:= x + δ within an ∞ -ball such that the loss value of x adv is maximized, so that point is most likely to be an adversarial example. In fact, most optimization based attacking algorithms (e.g. FGSM BID11, C&W BID6) shares the same idea as PGD attack. Opposite to the adversarial attacks, the adversarial defenses are techniques that make models resistant to adversarial examples. It is worth noting that defense is a much harder task compared with attacks, especially for high dimensional data combined with complex models. Despite that huge amount of defense methods are proposed BID31 BID25 BID5 BID24 BID14 BID9 BID40 BID33, many of them rely on gradient masking or obfuscation, which provide an "illusion" of safety. They claimed that the most effective defense algorithm is adversarial training BID25, formulated as DISPLAYFORM2 where (x, y) ∼ P real is the (image, label) joint distribution of real data, f (x; w) is the network parameterized by w, f (x; w), y is the loss function of network (such as the cross-entropy loss). We remark that the data distribution P real is often not available in practice, which will be replaced by the empirical distribution.3 PROPOSED APPROACH In Sec. 2 we listed some of the published works on adversarial defense, and pointed out that adversarial training is the most effective method to date. However, until now this method has only been tested on small dataset like MNIST and CIFAR10 and it is an open problem as to whether it scales to large dataset such as ImageNet. To our knowledge, there are two significant drawbacks of this method that restrict its application. 
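Before discussing these drawbacks, the PGD optimization above can be made concrete. The following is a standard l_inf PGD sketch in PyTorch; the step size, iteration count, and the [-1, 1] pixel range are illustrative choices rather than the paper's exact settings. With 10 inner iterations, every training batch pays roughly 10 extra forward/backward passes, which is exactly the overhead discussed next.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03125, alpha=0.01, steps=10):
    # l_inf PGD: maximize the loss within an eps-ball around x, projecting back
    # onto the ball (and onto the valid pixel range) after every step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(-1.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()              # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)         # project onto the eps-ball
            x_adv = x_adv.clamp(-1.0, 1.0)                   # keep valid pixel range
    return x_adv.detach()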
First and most obviously, the overhead of finding adversarial examples in each iteration is roughly 10x that of standard training (this can be inferred from the number of inner iterations in each PGD attack). The second drawback, which we examine in detail below, is that the robustness obtained on the training set does not transfer well to unseen data, as illustrated in Figure 1.

Figure 1: Left: accuracy under PGD attack of a model obtained by adversarial training on CIFAR-10, with δ_max = 0.03125 in the defense objective. The horizontal axis is the attack strength δ, which plays the same role as δ_max in the attack objective; note that δ_max has different meanings in the attack and in the defense (one bounds the attacker, the other specifies the defender). Notice the increasing gap between training and test accuracy even when δ < 0.03125. Right: the local Lipschitz value (LLV), measured by the gradient norm ‖∂ℓ(f(x_i; w), y_i)/∂x_i‖_2, for data pairs (x_i, y_i) drawn from the training and test sets respectively. During training, the LLV on the training set stabilizes at a low level, whereas the LLV on the test set keeps growing.

Adversarial training can be viewed as implicitly restricting the local Lipschitz value (LLV) of the loss around the training data, in the same spirit as methods that regularize the input gradient explicitly. In essence, restricting the LLV can be formulated as a composite loss minimization problem:

min_w E_{(x,y)∼P_real} [ ℓ(f(x; w), y) + λ · ‖∇_x ℓ(f(x; w), y)‖ ].

Notice that this can be regarded as a "one-step approximation" of the adversarial training objective. In practice we need to replace the expectation over P_real by the empirical distribution over finite data,

min_w (1/N) Σ_{i=1}^{N} [ ℓ(f(x_i; w), y_i) + λ · ‖∇_{x_i} ℓ(f(x_i; w), y_i)‖ ],

where {(x_i, y_i)}_{i=1}^{N} are the feature-label pairs constituting the training set. Ideally, if we have enough data and the model size is moderate, the empirical objective still converges to its population counterpart. However, in practice, when taking adversarial examples into account, we have one more problem to worry about: does a small LLV on the training set imply a small LLV on the test set? The enlarged accuracy gap shown in Figure 1 (left) suggests a negative answer. To verify this phenomenon directly, we calculate the LLV on images sampled from the training and test sets respectively (Figure 1, right); we observe that, in parallel with the accuracy gap, the LLV gap between the training and test sets is equally significant. Thus we conclude that although adversarial training controls the LLV around the training set effectively, this property does not generalize to the test set. Note that our empirical finding does not contradict the certified robustness of adversarial training obtained from generalization theory (e.g. BID34), which only covers the weak-attack regime. The generalization gap could be reduced if we had a better understanding of P_real instead of approximating it by the training set. This leads to our first motivation: can we use a GAN to learn P_real and plug it into the adversarial training algorithm to improve robustness on the test set? We give a possible solution in Sec. 3.3.

GANs are notoriously hard to train. To our knowledge, there are two major symptoms of a failed training run: gradient vanishing and mode collapse. A theoretical explanation of the gradient vanishing problem is given in BID0, under the assumption that images lie on a low-dimensional manifold. Following this idea, BID12 propose to use the 1-Wasserstein distance in place of the KL-divergence. The central requirement of WGAN and improved WGAN is that the set of discriminators {D(x; w) | w ∈ R^d} coincides with the set of all functions that are 1-Lipschitz with respect to the input x. Practically, we can either clip the discriminator weights w or add a gradient norm regularizer BID12. Recently, another regularization technique called spectral normalization was proposed to enforce a 1-Lipschitz discriminator; for the first time, a GAN learns to generate high quality images from the full ImageNet data with only one generator-discriminator pair. In contrast, AC-GAN BID30 (the supervised version of DCGAN) divides the 1000 classes into 100 groups so that each network pair only learns 10 classes.
Despite the success along this line of research, we wonder if a weaker assumption to the discriminator is possible. Concretely, instead of a strict one-Lipschitz function, we require a small local Lipschitz value on image manifold. Indeed, we find a connection between robustness of discriminator and the learning efficiency of generator, as illustrated in Fig. 2. Fake images Figure 2: Comparing robust and non-robust discriminators, for simplicity, we put them together into one graph. Conceptually, the non-robust discriminator tends to make all images close to the decision boundary, so even a tiny distortion δ can make a fake image x 0 to be classified as a real image x adv = x 0 + δ. In contrast, such δ is expected to be much larger for robust discriminators. As one can see in Fig. 2, if a discriminator D(x) has small LLV (or |D (x)|), then we know DISPLAYFORM0 for a "reasonably" large δ. In other words, for robust discriminator, the perturbed fake image x adv = x 0 + δ is unlikely to be mistakenly classified as real image, unless δ is large. Compared with adversarial attacks, the attacker is now a generator G(z; w) parameterized by w ∈ R d instead of the gradient ascend algorithm. For making x 0 "looks like" a real image (x adv), we must update generator G(z; w) to G(z; w) and by assuming the Lipschitz continuity of G, DISPLAYFORM1 This indicates the movement of generator weights w − w is lower bounded by the distance of a fake image x 0 to the decision boundary, specifically we have w − w ≥ δ /L G. Furthermore, recall that a robust discriminator D(x) implies a larger δ, putting them together we know that improving the robustness of discriminator will lead to larger updates of the generator. In Sec. 4 we experimentally show that adversarial training not only speeds up the convergence to the equilibrium, but also obtains an excellent generator. But we leave the rigorous analysis for future works. Motivated by Sec. 3.1 and 3.2, we propose a system that combines generator, discriminator, and adversarial attacker into a single network. Our system consists of two stages, the first stage is an end-to-end GAN training: the generator feeds fake images to the discriminator; meanwhile real images sampled from training set are processed by PGD attacking algorithm before sending to the discriminator. After that the discriminator is learned to minimize both discrimination loss and classification loss (introduced below). In the next stage, the discriminator is refined by combining the fake and real images. The network structure is illustrated in Fig. 3. In what follows, we give more details about each component: Discriminator: The discriminator could have the standard architecture like AC-GAN. In each iteration, it discriminates real and fake images. When the ground truth labels are available, it also predicts the classes. In this paper, we only consider the label-conditioning GANs BID27 BID30, whose architectural differences are briefly overviewed in FIG3. Among them we simply choose AC-GAN, despite that SN-GAN (a combination of spectral normalization and projection discriminator) performs much better in their paper. The reason we choose the AC-GAN is that Step 1. Co-trainingStep 2. Fine-tuning Figure 3: Illustration of the training process. Step-1 is the standard GAN training, i.e. alternatively updating the G and D networks. The only difference is that whenever feeding the real images to the D network, we first run 5 steps of PGD attack, so the discriminator is trained with adversarial examples. 
Step-2 is a refining technique, aiming at improving prediction accuracy on the test set. SN-GAN discriminator relies on the ground truth labels and their adversarial loss is not designed to encourage high classification accuracy. But surprisingly, even though AC-GAN is beaten by SN-GAN by a large margin, after inserting the adversarial training module, the performance of AC-GAN matches or even surpasses the SN-GAN, due to the reason discussed in Sec. 3.2. We also changed the loss objective of AC-GAN. Recall that the original loss in BID30 defined by discrimination likelihood L S and classification likelihood L C: DISPLAYFORM0 where X real/fake are any real/fake images, S is the discriminator output, C is the classifier output. Based on, the goal of discriminator is to maximize L S + L C while generator aims at maximizing L C − L S. According to this definition, both G and D are optimized to increase L C: even if G(z) produces unrecognizable images, D(x) has to struggle to classify them (with high loss), in such case the corresponding gradient term ∇L C can contribute uninformative direction to the discriminator. To resolve this issue, we split L C as follows, DISPLAYFORM1 then discriminator maximizes L S + L C1 and generator maximizes L C2 − L S. The new objective functions ensure that discriminator only focuses on classifying the real images and discriminating real/fake images. Generator: Similar to the traditional GAN training, the generator is updated on a regular basis to mimic the distribution of real data. This is the key ingredient to improve the robustness of discriminators: as shown in Sec. 3.1, adversarial training performs well on training set but is vulnerable on test set. Intuitively, this is because during adversarial training, the network only "sees" adversarial examples residing in the δ max -ball of all training samples, whereas the rest images in the data manifold are undefended. Data augmentation is a natural way to resolve this issue, but traditional techniques BID21 BID15 BID37 BID41 BID17 rely largely on combinations of geometric transforms to the training images, in our case the support of the probability density function is still very small. Instead, our system uses images sampled from generator to provide a continuously supported p.d.f. for the adversarial training. Unlike traditional augmentation methods, if the equilibrium in is reached, then we can show that one desirable solution of would be P fake (z) dist.= P real, and therefore the robust classifier can be trained on the learned distribution. Fine-tuning the classifier: This step aims at improving the classification accuracy, based on the auxiliary classifier in the pretrained discriminator. This is crucial because in the GAN training stage (step 1 in Fig. 3), the discriminator is not trained to minimize the classification error, but a weighted loss of both discrimination and classification. But in step 2, we want to focus on the robust classification task DISPLAYFORM2 where x adv = arg min DISPLAYFORM3 Here the function f (x; w) is just the classifier part of network D(x), recall that we are dealing with conditional GAN. As we can see, throughout the fine-tuning stage, we force the discriminator to focus on the classification task rather than the discrimination task. It turns out that the fine-tuning step boosts the accuracy by a large margin. Adversarial attacker is omitted in Fig. 3 due to width limit. We experiment on both CIFAR10 and a subset of ImageNet data. 
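For clarity, before detailing the experimental setup, the split classification loss introduced above can be sketched in code. We assume, purely for illustration, that the discriminator returns a pair (real/fake logit, class logits); the discriminator objective combines L_S with L_C1 computed on real images only, while the generator objective combines the adversarial term with L_C2 computed on its own fake images.

import torch
import torch.nn.functional as F

def discriminator_loss(D, x_real, y_real, x_fake):
    # Maximize L_S + L_C1: discriminate real vs. fake and classify *real* images
    # only, so unrecognizable fake samples no longer pollute the classifier gradient.
    s_real, c_real = D(x_real)
    s_fake, _ = D(x_fake.detach())
    l_s = (F.binary_cross_entropy_with_logits(s_real, torch.ones_like(s_real))
           + F.binary_cross_entropy_with_logits(s_fake, torch.zeros_like(s_fake)))
    l_c1 = F.cross_entropy(c_real, y_real)
    return l_s + l_c1

def generator_loss(D, x_fake, y_fake):
    # Maximize L_C2 - L_S: fool the discriminator while making the fake images
    # classifiable as their conditioning labels.
    s_fake, c_fake = D(x_fake)
    l_adv = F.binary_cross_entropy_with_logits(s_fake, torch.ones_like(s_fake))
    l_c2 = F.cross_entropy(c_fake, y_fake)
    return l_adv + l_c2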
Specifically, we extract the classes y_i with y_i ∈ np.arange(151, 294) from the original ImageNet data: recall that there are 1000 classes in total in ImageNet, and we sample 294 − 151 = 143 of them. We choose these datasets because 1) the current state-of-the-art GAN, SN-GAN, also worked on these datasets, and 2) the current state-of-the-art adversarial training method BID25 only scales to the CIFAR dataset. For a fair comparison, we copy all the network architectures for generators and discriminators from SN-GAN; other important factors, such as the learning rate, the optimization algorithm, and the number of discriminator updates in each cycle, are also kept the same. The only modification is that we discard the feature projection layer and apply the auxiliary classifier (see FIG3). Please refer to the appendix or the source code for more implementation details.

In what follows, we first check whether fine-tuning helps improve test set accuracy. To this end, we design an experiment that compares two sets of models: in the first set, we directly extract the auxiliary classifiers from the discriminators to classify images; in the second set, we apply the fine-tuning strategy to the pretrained model as illustrated in Fig. 3. The results, shown in FIG4, support our argument that fine-tuning is indeed useful for better prediction accuracy.

Robustness of discriminator: comparing robustness with and without data augmentation. In this experiment, we compare the robustness of discriminator networks with and without the data augmentation technique discussed in Sec. 3.3. Robustness is measured by prediction accuracy under adversarial attack. For networks without data augmentation, this setting is equivalent to the state-of-the-art algorithm of Madry et al. BID25. As the attacking algorithm, we choose the widely used ℓ∞ PGD attack BID25, but other gradient-based attacks are expected to yield the same results. We set the ℓ∞ perturbation to δ_max ∈ np.arange(0, 0.1, 0.01) as defined earlier. Another minor detail is that we scale the images to [−1, 1] rather than the usual [0, 1]; this is because generators always have a tanh output layer, so we adapt the input range accordingly. The results are shown in Table 1 (accuracy of our model under the ℓ∞ PGD attack; the value in parentheses is the improvement over the standard adversarial training defense BID25), showing that our method improves the robustness of the state-of-the-art defensive algorithm.

Effect of the split classification loss. Here we show the effect of the split classification loss described in Sec. 3.3; recall that if we apply the original joint loss instead, the resulting model is the standard AC-GAN. It is known that AC-GAN can easily lose modes in practice, i.e. the generator simply ignores the noise input z and produces fixed images according to the label y. This defect has been observed in many previous works BID16 BID26 BID18. In this ablation experiment, we compare the images generated under the two loss functions; the result is shown in FIG5.

Quality of generator and convergence speed. In the last experiment, we compare the quality of generators trained on three datasets: CIFAR10, ImageNet subset (64px), and ImageNet subset (128px). Our baseline model is SN-GAN, since, as far as we know, SN-GAN is the best GAN model capable of learning hundreds of classes. Note that SN-GAN can also learn the conditional distribution of the entire ImageNet data (1000 classes); unfortunately, we are not able to match this experiment due to time and hardware limits.
To show that the adversarial training technique indeed accelerates convergence, we also trained a variant without adversarial training; this is essentially an AC-GAN, except that the improved loss function discussed in Sec. 3.3 is applied to the discriminator D(x). The results are exhibited in FIG6, which compares the inception scores of our model and SN-GAN: adversarial training improves the performance of the GAN, and our generator achieves a better inception score than SN-GAN. Our method learns a high quality generator in a short time; specifically, on both datasets, AC-GAN with adversarial training surpasses SN-GAN in just 25 epochs (64px) or 50 epochs (128px). Another observation is that, with adversarial training, convergence is greatly accelerated. A further finding is that the new split classification loss yields better results than the original AC-GAN loss.

Finally, we check whether adversarial training with fake data augmentation really shrinks the generalization gap. To this end, we draw the same figure as Figure 1, except that the classification model is now the discriminator after the fine-tuning step (shown in Fig. 3). We compare the accuracy gap in FIG7. The model trained with the adversarial real+fake augmentation strategy works extremely well: it improves test accuracy under PGD attack, so the generalization gap between the training and test sets does not grow nearly as much.

In this paper, we draw a connection between adversarial training BID25 and generative adversarial networks BID10. Our primary goal is to improve the generalization ability of adversarial training, and this is achieved by data augmentation with an unlimited supply of fake images. Independently, we observe an improvement in both the robustness and the convergence speed of GAN training. While the theoretical principle behind this is still unclear to us, we have given an intuitive explanation. As a minor contribution, we also propose an improved loss function for AC-GAN, which yields better image quality.
We present a method to train self-binarizing neural networks, that is, networks that evolve their weights and activations during training to become binary. To obtain similar binary networks, existing methods rely on the sign activation function. This function, however, has no gradients for non-zero values, which makes standard backpropagation impossible. To circumvent the difficulty of training a network relying on the sign activation function, these methods alternate between floating-point and binary representations of the network during training, which is sub-optimal and inefficient. We approach the binarization task by training on a unique representation involving a smooth activation function, which is iteratively sharpened during training until it becomes a binary representation equivalent to the sign activation function. Additionally, we introduce a new technique to perform binary batch normalization that simplifies the conventional batch normalization by transforming it into a simple comparison operation. This is unlike existing methods, which are forced to the retain the conventional floating-point-based batch normalization. Our binary networks, apart from displaying advantages of lower memory and computation as compared to conventional floating-point and binary networks, also show higher classification accuracy than existing state-of-the-art methods on multiple benchmark datasets. Deep learning has brought about remarkable advancements to the state-of-the-art in several fields including computer vision and natural language processing. In particular, convolutional neural networks (CNN's) have shown state-of-the-art performance in several tasks such as object recognition with AlexNet BID19, VGG BID29, ResNet and detection with R-CNN BID10 BID9 BID26. However, to achieve real-time performance these networks are dependent on specialized hardware like GPU's because they are computation and memory demanding. For example, AlexNet takes up 250Mb for its 60M parameters while VGG requires 528Mb for its 138M parameters. While the performance of deep networks has been gradually improving over the last few years, their computational speed has been steadily decreasing BID32. Notwithstanding this, interest has grown significantly in the deployment of CNN's in virtual reality headsets (Oculus, GearVR), augmented reality gear (HoloLens, Epson Moverio), and other wearable, mobile, and embedded devices. While such devices typically have very restricted power and memory capacites, they demand low latency and real-time performance to be able to provide a good user experience. Not surprisingly, there is considerable interest in making deep learning models computationally efficient to better suit such devices BID24 BID4 BID40.Several methods of compression, quantization, and dimensionality reduction have been introduced to lower memory and computation requirements. These methods produce near state-of-the-art , either with fewer parameters or with lower precision parameters, which is possible thanks to the redundancies in deep networks BID3.In this paper we focus on the solution involving binarization of weights and activations, which is the most extreme form of quantization. Binarized neural networks achieve high memory and computational efficiency while keeping performance comparable to their floating point counterparts. BID5 have shown that binary networks allow the replacement of multiplication and additions by simple bit-wise operations, which are both time and power efficient. 
The challenge in training a binary neural network is to convert all its parameters from the continuous domain to a binary representation, typically done using the sign activation function. However, since the gradient of sign is zero for all nonzero inputs, it makes standard back-propagation impossible. Existing state-of-the-art methods for network binarization BID5 BID25 alternate between a binarized forward pass and a floating point backward pass to circumvent this problem. In their case, the gradients for the sign activation are approximated during the backward pass, thus introducing inaccuracies in training. Furthermore, batch normalization BID16 is necessary in binary networks to avoid exploding feature map values due to the large scale of the weights. However, during inference, using batch normalization introduces intermediary floating point representations. This means, despite binarizing weights and activations, these networks can not be used on chips that do not support floating-point computations. In our method, the scaled hyperbolic tangent function tanh is used to bound the values in the range [−1, 1]. The network starts with floating point values for weights and activations, and progressively evolves into a binary network as the scaling factor is increased. Firstly, this means that we do not have to toggle between the binary and floating point weight representations. Secondly, we have a continuously differentiable function that allows backpropagation passes. As another important contribution, we reduce the standard batch normalization operation during the inference stage to a simple comparison. This modification is not only very efficient and can be accomplished using fixedpoint operations, it is also an order of magnitude faster than the floating-point counterpart. More clearly, while existing binarization methods perform, at each layer, the steps of binary convolutions, floating-point batch normalization, and sign activation, we only need to perform binary convolutions followed by our comparison-based batch normalization, which serves as the sign activation at the same time (see Fig. 1). convolutions, batch norm, sign activation convolutions, combined comparison batch norm and sign activation Figure 1: Schematic comparison of a typical binary neural network generated by existing methods (top half) with that of our method (bottom half). Existing methods are unable to avoid the 32-bit floating point batch norm, while we simplify it to a comparison operation, which simultaneously serves as the activation layer. We validate the performance of our self-binarizing networks by comparing them to those of existing binarization methods. We choose the standard bechmarks of CIFAR-10, CIFAR-100 BID18 ) as well as ImageNet BID27 and popular network architectures such as VGG and AlexNet. We demonstrate higher accuracies despite using less memory and fewer computations. To the best of our knowledge, our proposed networks are the only ones that are free of any floating point computations and can therefore be deployed on low-precision integrated chips or micro-controllers. In what follows, in Sec. 2 we describe previous work related to reducing the network complexity and where we are placed among them. In Sec. 3 we explain how the scaled tanh function can be used for progressive binarization. In Sec. 4 we explain our binarization method for weights and activations and explain how to simplify batch normalization at inference time. In Sec. 
5 we compare our technique to existing state-of-the-art binary networks on standard benchmarks and demonstrate the performance gains from our proposed batch normalization simplification. Sec. 6 concludes the paper. Making deep networks memory and computation efficient has been approached in various ways. In this section we cover some of the relevant literature and explain how our work is positioned with respect to the state-of-the-art. The interested reader may refer to BID4, BID24, and for a wider coverage. Since most of the computation in deep networks is due to convolutions, it is logical to focus on reducing the computational burden due to them. Howard et al. FORMULA0; BID34 BID8 employ variations of convolutional layers by taking advantage of the separability of the kernels either by convolving with a group of input channels or by splitting the kernels across the dimensions. In addition to depth-wise convolutions, MobileNetV2 BID28 uses inverted residual structures to learn how to combine the inputs in a residual block. In general, these methods try to design a model that computes convolutions in an efficient manner. This is different from this work because it focuses on redesigning the structure of the network, while ours tries to reduce the memory requirements by using lower precision parameters. However, our method can be applied to these networks as well. Reducing the number of weights likewise reduces the computational and memory burden. Due to the redundancy in deep neural networks BID3, there are some weights that contribute more to the output than others. The process of removing the less contributing weights is called pruning. In BID20, the contribution of weights is measured by the effect on the training error when this parameter is zeroed. In Deep Compression BID11, the weights with lowest absolute value are pruned and the remaining are quantized and compressed using Huffman coding. In other work, Fisher Information BID31 ) is used to measure the amount of information the output carries about each parameter, which allows pruning. While these approaches often operate on already trained networks and fine-tune them after compression, our method trains a network to a binary state from scratch without removing any parameters or feature maps. This allows us to retain the original structure of the network, while still leaving the potential for further compression after or during binarization using pruning techniques. Another way to reduce memory consumption and potentially improve computational efficiency is the quantization of weights. Quantized neural networks BID36 occupy less memory while retaining similar performance as their full precision counterparts. DoReFaNet BID36 proposes to train low bitwidth networks with low bitwidth gradients using stochastic quantization. Similarly, devise a weight partitioning technique to incrementally quantize the network at each step. The degree of quantization can vary between techniques. BID7, BID37, and BID22 quantize weights to three levels, i.e., two bits only. These quantizations, while severe, still allow for accurate inference. However, to improve computational efficency, specialized hardware is needed to take advantage of the underlying ternary operations. An extreme form of such quantization is binarization, which requires only one bit to represent. Expectation BackPropagation (EBP) paved the way for training neural networks for precision limitedhardware BID30. 
BinaryConnect BID5 extended the idea to train neural networks with binary weights. The authors later propose BinaryNet to binarize activations as well. Similarly, XNORNet BID25 extends BinaryNet by adding a scaling factor to the parameters of every layer while keeping the weights and activations binary. ABCNet approximates full precision parameters as a linear combination of binary weights and uses multiple binary activations to compensate for the information loss arising from quantization. BID13 use Hessian approximations to minimize loss with respect to the binary weights during training. The focus of our work is the binarization of weights and activations of a network. In previous binarization methods, the binarization process is non-differentiable leading to approximations during the training that can affect the final accuracy. In contrast, we use a differentiable function to pro-gressively self-binarize the network and improve its accuracy. Additionally, we differ from these techniques as we introduce a comparison-based binary batch normalization that eliminates all floating point operations at inference time. In typical binary network training, during the forward pass, the floating-point weights and activations are quantized to binary values {−1, 1}, through a piece-wise constant function, most commonly the sign function: DISPLAYFORM0 This non-linear activation leads to strong artifacts during the forward pass, and does not generate gradients for backpropagation. The derivatives of the binarized weights are therefore approximately computed using a Straight Through Estimator (STE) BID1. STE creates non-zero derivative approximations for functions that either have a zero derivative everywhere or are nondifferentiable. Typically, the derivative of sign is estimated by using the following STE: DISPLAYFORM1 In the backward propagation step, the gradients are computed on the binarized weights using the STE and the corresponding floating-point representations are updated. Since both forward and backward functions are different, the training is ill-defined. The lack of an accurate derivative for the weights and activations creates a mismatch between the quantized and floating-point values and influences learning accuracy. This kind of problems has been studied previously, and continuation methods have been proposed to simplify its solution BID0. To do so, the original complex and non-smooth optimization problem is transformed by smoothing the original function and then gradually decreasing the smoothness during training, building a sequence of sub-optimization problems converging to the original one. For example, BID2 apply these methods on the last layer of a neural network to predict hashes from images. Following this philosophy, we introduce a much simpler and efficient training method that allows the network to self-binarize. We pass all our weights and activations through the hyperbolic tangent function tanh whose slope can be varied using a scale parameter ν > 0. As seen in FIG0, when ν is large enough the tanh(νx) converges to the sign(x) function: DISPLAYFORM2 Throughout the training process, the weights and activations use floating-point values. Starting from a value of ν = 1, as the scale factor is progressively incremented, the weights and activations are forced to attain binary values {−1, 1}. During the training and while ν is still small, the derivative of tanh exists for every value of x and is expressed as DISPLAYFORM3 where sech is the hyperbolic secant. 
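For contrast, the sign quantizer with the straight-through estimator relied on by the existing methods described above can be written as a custom autograd function. This is a generic sketch (the clipped identity is the most common STE choice), shown here to highlight the mismatch between the forward and backward passes that the scaled tanh avoids.

import torch

class SignSTE(torch.autograd.Function):
    # Forward pass: binarize with sign. Backward pass: straight-through estimator.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # d sign(x)/dx is approximated by 1 on |x| <= 1 and 0 elsewhere.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)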
Using the scaled tanh, we can build a continuously differentiable network which progressively approaches a binary state, leading to a more principled approach to obtain a binarized network. In this section, we formally describe our self-binarizing approach. We first explain how weights and activations are binarized. Then, we propose a more efficient, comparison-based batch-normalization method that is more suitable when working in binary settings. As stated above, we cannot use binary weights at training time as it would make gradient computation infeasible. Instead, we use a set of constrained floating-point weights W at each layer. Unlike traditional networks, these weights are not learnable parameters of our model, but depend on learnable parameters. For each layer of the network, we define a set of learnable, unconstrained parameters P and use the scaled tanh to compute the weights as DISPLAYFORM0 where ν e is the scale factor at epoch e, taken from a sequence 1 = ν 0 < ν 1 <... < ν M → ∞ of increasingly larger values. During training, parameters P are updated to minimize a loss function using a standard gradient-based optimization procedure, such as stochastic gradient descent. The scaled tanh transforms the parameters P to obtain weights W that are bounded in the range [−1, 1] and become closer to binary values as the training proceeds. At the end of the training, weights W are very close to exact binary values. At this point, we obtain the final binary weights B by taking the sign of the parameters, DISPLAYFORM1 At inference time, we drop all learnable parameters P and constrained weights W, and use only binary weights B. Just as for weights, we cannot use binary activations either at training time. We follow the idea as above to address activation self-binarization as well. During training, we use the scaled tanh as the activation function of the network. For a given layer, the activation function transforms the output O of the layer to lead to the activation DISPLAYFORM0 The activations are constrained to lie within the range [−1, 1], and eventually become binary at the end of the training procedure. At inference time we make the network completely binary by substituting the scaled tanh by the sign operator as the binary activation function. Batch Normalization (BN) introduced by BID16 accelerates the training of a general deep network. During training, the BN layers compute the running mean µ r and standard deviation σ r of the feature maps that pass through them, as well as two parameters, β and γ, that define an affine transformation. Later, at inference time, BN layers normalize and transform the input I to obtain an output O as given by DISPLAYFORM0 For binary networks BID5 BID25 in particular, BN becomes essential in order to avoid exploding activation values. However, using BN brings in the limitation of relying on floating-point computations. Apart from affecting computation and memory requirements, the floating-point BN effectively eliminates the possibility of using the network on low-precision hardware. A useful observation of BN is that, in a networks with binary activations, the output of the BN layers is always fed into a sign function. This means only the sign of the normalized output is kept, while its magnitude is not relevant for the subsequent layers. We can leverage this property to simplify the BN operation. 
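Before moving on to batch normalization at inference time, the parameterization just described can be summarized in a short sketch. This is our illustration with a linear layer (the convolutional case is analogous), not the authors' implementation: the unconstrained parameters P are learned, the constrained weights W = tanh(ν_e P) are used in the forward pass, and the binary weights B = sign(P) are extracted once training ends.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfBinarizingLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Unconstrained learnable parameters P; the constrained weights W depend on them.
        self.P = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.nu = 1.0                           # sharpness of the scaled tanh

    def forward(self, x):
        W = torch.tanh(self.nu * self.P)        # constrained weights in [-1, 1]
        out = F.linear(x, W)
        return torch.tanh(self.nu * out)        # scaled-tanh activation

    def binary_weights(self):
        return torch.sign(self.P)               # B, used at inference time

def nu_schedule(epoch, total_epochs, nu_start=1.0, nu_end=1000.0):
    # Exponential schedule for nu, increased from 1 to 1000 over training (see Sec. 5).
    t = epoch / max(1, total_epochs - 1)
    return nu_start * (nu_end / nu_start) ** t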
The sign of the output O of a BN layer can be reformulated as a comparison:

sign(O) = sign(γ) if I > T, and sign(O) = −sign(γ) otherwise, with T = μ_r − (β · σ_r) / γ.

While T is a floating-point value, in practice we represent it as a fixed-point 8-bit integer. This sacrifices some numerical precision, but we observed no negative effect on the performance of our models. We refer to our simplified batch normalization as Binary Batch Normalization (BinaryBN). Note that the derivation above does not hold when γ = 0; this is handled as a special case that simply evaluates β > 0. It must be emphasized that BinaryBN is not an approximate method: it computes the exact sign of the output of the standard BN.

During training we use conventional BN layers. At inference time, we replace them with BinaryBN without any loss in prediction accuracy. Our trained models can thus bypass the floating-point batch normalization with an efficient alternative that requires mere comparison operations.

In this section, we first compare our self-binarizing networks with other known techniques. Later on, we discuss and demonstrate the efficiency gained by using our proposed BinaryBN instead of the typical BN layer. We compare our self-binarizing networks with other state-of-the-art methods that use binary networks. BID5 present BinaryConnect (BC), a method that binarizes only the weights. Binary Neural Networks (BNN) improve BC by also binarizing the activations. Similarly, BID25 present two variants of their method: Binary Weight Networks (BWN), which binarize only the weights, and XNOR-Net (XNOR), which binarizes both weights and activations. For a fair comparison and to demonstrate the generality of our method, we use the original implementations of BC, BNN, BWN, and XNOR, and apply our self-binarizing technique to them. Specifically, we substitute the weights of the original implementations with a pair of parameters P and constrained weights W as described in Sec. 4, and we substitute the activation functions with the scaled tanh. At inference time we use the binary weights B = sign(P) and the sign operator as the activation function. Additionally, we replace the batch normalization layers with our BinaryBN layer in the cases where the activations are binarized.

We evaluate the methods on three common benchmark datasets: CIFAR-10, CIFAR-100 BID18, and ILSVRC12 ImageNet BID27. For CIFAR-10 and CIFAR-100, we use a VGG-16-like network with data augmentation as proposed in BID21: 4 pixels are padded on each side, and a 32x32 patch is randomly cropped from the padded image or its horizontal flip. During testing, only a single view of the original 32x32 image is evaluated. The model is trained with a mini-batch size of 256 for 100 epochs. The ILSVRC12 ImageNet dataset is used to train an AlexNet-like network without dropout or local response normalization layers. We use a standard data augmentation strategy, and at inference time only the center crop of each validation image is used. The model is trained with a mini-batch size of 64 for a total of 50 epochs. In all our experiments we increase ν from 1 to 1000 during training in an exponential manner; the final ν = 1000 is large enough to make weights and activations almost binary in practice. We optimize all models using Adam BID17 with an exponentially decaying learning rate starting at 10^−3. TAB0 shows the results of the experiments. Our self-binarizing approach achieves the highest accuracies on CIFAR-10 and ImageNet.
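For completeness, the comparison-based batch normalization derived above can be sketched as follows. Per-channel parameters are assumed to broadcast against the input (e.g., shape (1, C, 1, 1)), and the γ = 0 channels, which reduce to the constant sign(β), are omitted for brevity.

import torch

def binary_batchnorm_sign(I, mu_r, sigma_r, gamma, beta):
    # sign(BN(I)) as a single comparison per activation: no floating-point
    # normalization is needed at inference time.
    T = mu_r - beta * sigma_r / gamma        # per-channel threshold, storable as int8
    s = torch.sign(gamma)                    # the comparison flips when gamma < 0
    return torch.where(I > T, s, -s)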
Our method is only slightly outperformed by BC for CIFAR-100, but still gives better Top-5 accuracy for the same model. For both weights and activation binarization, our method obtains the best across all datasets and architectures. What is remarkable is that the improved performance comes despite eliminating floating-point computations and using drastically fewer bits than the previous methods, as can be seen in the columns B w, B a and B BN of TAB0. For CIFAR-10 and CIFAR-100, BWN outperforms the full precision models likely because the binarization serves as a regularizer BID5. With all other computations being common between our self-binarizing networks and other networks with binary weights and activations, any difference in computational efficiency has to arise from the use of a different batch normalization scheme. We therefore compare our proposed BinaryBN layer to the conventional BN layer as well as to the Shift-based Batch Normalization (SBN) proposed by. SBN proposes to round both γ and σ r to their nearest power of 2 and replace multiplications and divisions by left and right shifts, respectively. SBN follows the same formula as BN, and is used both at training and inference time, so that the network is trained with the rounded parameters. Table. 2 summarizes the requirements of memory and computational time of these three types of batch normalization layers. We assume the standard case where a binary convolution is followed by a batch normalization layer and then a binary activation function. For storing BN parameters, we need four 32-bit vectors of length c, amounting to 32×4c = 128c bits, c being the number of channels in this layer. For SBN, we need two 32-bit vectors and two 8-bit vectors of length c, ing in 80c bits. For BinaryBN, we need an 8-bit vector of size c to store the T value of each channel, and a binary vector for the sign of γ, totaling 9c bits. We experimentally assessed the time and memory requirements of the three batch normalization techniques. We run batches of increasing sizes through BN, SBN and BinaryBN layers with randomly generated values for µ r, σ r, β, and γ, and measure time and memory consumption. FIG1 shows the . Overall, BinaryBN is nearly one order of magnitude less memory consuming and faster than BN and SBN. We present a novel method to binarize a deep network that is principled, simple, and in binarization of weights and activations. Instead of relying on the sign function, we use the tanh function with a controllable slope. This simplifies the training process without breaking the flow of derivatives in the back-propagation phase as compared to that of existing methods that have to toggle between floating-point and binary representations. In addition to this, we replace the conventional batch normalization, which forces existing binarization methods to use floating point computations, by a simpler comparison operation that is directly adapted to networks with binary activations. Our simplified batch normalization is not only computationally trivial, it is also extremely memoryefficient. Our trained binary networks outperform those of existing binarization schemes on standard benchmarks despite using lesser memory and computation. | A method to binarize both weights and activations of a deep neural network that is efficient in computation and memory usage and performs better than the state-of-the-art. | 1,066 | scitldr |
In many applications, the training data for a machine learning task is partitioned across multiple nodes, and aggregating this data may be infeasible due to storage, communication, or privacy constraints. In this work, we present Good-Enough Model Spaces (GEMS), a novel framework for learning a global satisficing (i.e. "good-enough") model within a few communication rounds by carefully combining the space of local nodes' satisficing models. In experiments on benchmark and medical datasets, our approach outperforms other baseline aggregation techniques such as ensembling or model averaging, and performs comparably to the ideal non-distributed models. There has been significant work in designing distributed optimization methods in response to challenges arising from a wide range of large-scale learning applications. These methods typically aim to train a global model by performing numerous communication rounds between distributed nodes. However, most approaches treat communication reduction as an objective, not a constraint, and seek to minimize the number of communication rounds while maintaining model performance. Less explored is the inverse setting-where our communication budget is fixed and we aim to maximize accuracy while restricting communication to only a few rounds. These few-shot model aggregation methods are ideal when any of the following conditions holds:• Limited network infrastructure: Distributed optimization methods typically require a connected network to support the collection of numerous learning updates. Such a network can be difficult to set up and maintain, especially in settings where devices may represent different organizational entities (e.g., a network of different hospitals).• Privacy and data ephemerality: Privacy policies or regulations like GDPR may require nodes to periodically delete the raw local data. Few-shot methods enable learning an aggregate model in ephemeral settings, where a node may lose access to its raw data. Additionally, as fewer messages are sent between nodes, these methods have the potential to offer increased privacy benefits.• Extreme asynchronicity: Even in settings where privacy is not a concern, messages from distributed nodes may be unevenly spaced and sporadically communicated over days, weeks, or even months (e.g., in the case of remote sensor networks or satellites). Few-shot methods drastically limit communication and thus reduce the wall-clock time required to learn an aggregate model. Throughout this paper, we reference a simple motivating example. Consider two hospitals, A and B, which each maintain private (unshareable) patient data pertinent to some disease. As A and B are geographically distant, the patients they serve sometimes exhibit different symptoms. Without sharing the raw training data, A and B would like to jointly learn a single model capable of generalizing to a wide range of patients. The prevalent learning paradigm in this settingdistributed or federated optimization-dictates that A and B share iterative model updates (e.g., gradient information) over a network. From a meta-learning or multitask perspective, we can view each hospital (node) as a separate learning task, where our goal is to learn a single aggregate model which performs well on each task. However, these schemes often make similar assumptions on aggregating data and learning updates from different tasks. 
As a promising alternative, we present good-enough model spaces (GEMS), a framework for learning an aggregate model over distributed nodes within a small number of communication rounds. Intuitively, the key idea in GEMS is to take advantage of the fact that many possible hypotheses may yield'good enough' performance for a learning task on local data, and that considering the intersection between these sets can allow us to compute a global model quickly and easily. Our proposed approach has several advantages. First, it is simple and interpretable in that each node only communicates its locally optimal model and a small amount of metadata corresponding to local performance. Second, each node's message scales linearly in the local model size. Finally, GEMS is modular, allowing the operator to tradeoff the aggregate model's size against its performance via a hyperparameter.We make the following contributions in this work. First, we present a general formulation of the GEMS framework. Second, we offer a method for calculating the good-enough space on each node as a R d ball. We empirically validate GEMS on both standard benchmarks (MNIST and CIFAR-10) as well as a domain-specific health dataset. We consider learning convex classifiers and neural networks in standard distributed setups as well as scenarios in which some small global held-out data may be used for fine-tuning. We find that on average, GEMS increases the accuracy of local baselines by 10.1 points and comes within 43% of the (unachievable) global ideal. With fine-tuning, GEMS increases the accuracy of local baselines by 41.3 points and comes within 86% of the global ideal. Distributed Learning. Current distributed and federated learning approaches typically rely on iterative optimization techniques to learn a global model, continually communicating updates between nodes until convergence is reached. To improve the overall runtime, a key goal in most distributed learning methods is to minimize communication for some fixed model performance; to this end, numerous methods have been proposed for communication-efficient and asynchronous distributed optimization (e.g., ; ; ; ; ; Richtárik & Takáč, 2016; ;). In this work, our goal is instead to maximize performance for a fixed communication budget (e.g., only one or possibly a few rounds of communication).One-shot/Few-shot Methods. While simple one-shot distributed communication schemes, such as model averaging, have been explored in convex settings (; ; ; ;), guarantees typically rely on data being partitioned in an IID manner and over a small number of nodes relative to the total number of samples. Averaging can also perform arbitrarily poorly in non-convex settings, particularly when the local models converge to differing local optima . Other one-shot schemes leverage ensemble methods, where an ensemble is constructed from models trained on distinct partitions of the data (; ;). While these ensembles can often yield good performance in terms of accuracy, a concern is that the ing ensemble size can become quite large. In Section 4, we compare against these one-shot baselines empirically, and find in that GEMs can outperform both simple averaging and ensembles methods while requiring significantly fewer parameters. Meta-learning and transfer learning. The goals of metalearning and transfer learning are seemingly related, as these works aim to share knowledge from one learning process onto others. 
However, in the case of transfer learning, methods are typically concerned with one-way transfer-i.e., optimizing the performance of a single target model, not jointly aggregate knowledge between multiple models. In meta-learning, such joint optimization is performed, but similar to traditional distributed optimization methods, it is assumed that these models can be updated in an iterative fashion, with potentially numerous rounds of communication being performed throughout the training process. Version Spaces. In developing GEMS, we draw inspiration from work in version space learning, an approach for characterizing the set of logical hypotheses consistent with available data . Similar to , we observe that if each node communicates its version space to the central server, the server can return a consistent hypothesis in the intersection of all node version spaces. However, assume that the hypotheses of interest are consistent with the observed data-i.e., they perfectly predict the correct outcomes. Our approach significantly generalizes to explore imperfect, noisy hypotheses spaces as more commonly observed in practice. As in traditional distributed learning, we assume a training set DISPLAYFORM0..} as the subset of training examples belonging to node k, such that DISPLAYFORM1 We assume that a single node (e.g., a central server) can aggregate updates communicated in the network. Fixing a function class H, our goal is to learn an aggregate model h G ∈ H that approximates the performance of the optimal model h * ∈ H over S while limiting communication to one (or possibly a few) rounds of communication. In developing a method for model aggregation, our intuition is that the aggregate model should be at least good-enough over each node's local data, i.e., it should achieve some minimum performance for the task at hand. Thus, we can compute h G by having each node compute and communicate a set of locally good-enough models to a central server, which learns h G from the intersection of these sets. DISPLAYFORM2 denote a model evaluation function, which determines whether a given model h is good-enough over a sample of data points {(x i, y i)} d ⊆ S. In this work, define "good-enough" in terms of the accuracy of h and a threshold: DISPLAYFORM3 Using these model evaluation functions, we formalize the proposed approach for model aggregation, GEMS, in Algorithm 1. In GEMS, each node k = 1,..., K computes the set of models DISPLAYFORM4 and sends it to the central node. After collecting H 1,...H K, the central node selects h G from the intersection of the sets, ∩ i H i. When granted access to a small sample of public data, the server can additionally use this auxiliary data further fine-tune the selected h ∈ ∩ i H i, an approach we discuss further below. FIG3 visualizes this approach for a model class with only two weights (w 1 and w 2) and two learners ("red" and "blue"). The'good-enough' model space, H k, for each learner is a set of regions over the weight space (the blue regions correspond to one learner and the red regions correspond to second learner). The final aggregate model, h G, is selected from the area in which the spaces intersect. For a fixed hypothesis class H, applying Algorithm 1 requires two components: (i) a mechanism for computing H k over every node, and (ii) a mechanism for identifying the aggregate model, h G ∈ ∩ k H k. In this work, we present methods for two types of models: convex models and simple neural networks. 
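In code, the model-evaluation function Q of Eq. 1 amounts to a thresholded accuracy check. The sketch below is illustrative only; the +1/-1 return convention mirrors the neuron-level variant used later in the paper, and the toy classifier is an assumption standing in for a node's locally trained model.

```python
# Sketch of the "good-enough" test Q from Eq. 1 (my notation, not the authors' code).
import numpy as np

def good_enough(model, X, y, epsilon):
    """+1 if the model's accuracy on the node's sample reaches epsilon, else -1."""
    accuracy = float(np.mean(model.predict(X) == y))
    return 1 if accuracy >= epsilon else -1

class MajorityClassifier:
    """Toy stand-in for a node's locally trained model: always predicts class 0."""
    def predict(self, X):
        return np.zeros(len(X), dtype=int)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
print(good_enough(MajorityClassifier(), X, y, epsilon=0.4))   # roughly 50% accuracy, so +1
```

Each node k then reports a compact description of H_k = {h : Q(h, S_k) = 1} rather than raw data or individual gradients, which is what makes the single-round aggregation in Algorithm 1 feasible.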
For convex models, we find that H k can be approximated as R d -ball in the parameter space, requiring only a single round of communication between nodes to learn h G. For neural networks, we apply Algorithm 1 to each layer in a step-wise fashion, compute H k as a set of independent R d -balls corresponding to every neuron in the layer, and identify intersections between different neurons. This requires one round of communication per layer (a few rounds for the entire network).We can compute these R d balls by fixing the center at the optimal local model on a device. The radius for the ball is computed via binary search: at each iteration, the node samples a candidate hypothesis h and evaluates Q(h, S k). The goal is to identify that largest radius such that all models located in the R d ball are good-enough. Algorithm 2 presents a simple method for constructing H k. More details can be found in Appendix A (convex setting) and Appendix B (neural network setting). DISPLAYFORM5 Node k computes good-enough model space, H k, according to 4: end for DISPLAYFORM6 else 10:Set R upper = R end if 12: end while 13: Return H k Fine-tuning. In many contexts, a small sample of public data S public may be available to the central server. This could correspond to a public research dataset, or devices which have waived their privacy right. The coordinating server can fine-tune H G on S public by updating the weights for a small number of epochs. We find that fine-tuning is particularly useful for improving the quality of the GEMS aggregate model, H G, compared to other baselines. We now present the evaluation for GEMS on three datasets: MNIST , CIFAR-10 , and HAM10000 , a medical imaging dataset. HAM10000 (HAM) consists of images of skin lesions, and our model is tasked with distinguishing between 7 types of lesions. Full details can be found in Appendix C.1. We focus on the performance of GEMS for neural networks, and discuss for convex models in Appendix A. We partitioned data by label, such that all train/validation images corresponding to a particular label would be assigned to the same node. We consider three baselines: 1) global, a model trained on data aggregated across all nodes, 2) local, the average performance of models trained locally on each node, and 3) naive average, a parameter-wise average of all local models. All are reported on the aggregated test set consisting of all test data across all nodes. Fine-tuning consists of updating the last layer's weights of the GEMS model for 5 epochs over a random sample of 1000 images from the aggregated validation data. We report the average accuracy (and standard deviation) of all over 5 trials. Neural network performance. We evaluated the neural network variant of GEMS on simple two layer feedforward neural networks TAB0 ). The precise network configuration and training details are outlined in Appendix C.4. In the majority of cases, the untuned GEMS model outperforms the local/average baselines. Moreover, fine-tuning has a significant impact, and tuned GEMS model 1) significantly outperforms every baseline, and 2) does not degrade as K increases. In Appendix F, we demonstrate that GEMS is more parameter efficient than ensemble baselines, delivering better accuracy with fewer parameters. Fine-tuning. The in TAB0 suggest that fine-tuning on a holdout set of samples S public has a significant effect on the GEMS model. We evaluate the effect of fine-tuning as the number of public data samples (the size of the tuning set) changes. 
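Before turning to those fine-tuning curves, the radius search sketched in Algorithm 2 can be written out explicitly. The sketch below is one plausible reading of it: the sphere-sampling scheme, the number of candidate models per radius, and the linear-classifier setting are assumptions, while R_max and delta play the roles of the search range and stopping tolerance of the binary search.

```python
# One plausible reading of Algorithm 2 (my own sampling scheme and toy linear model).
import numpy as np

def accuracy(w, X, y):
    """Accuracy of the linear classifier sign(X @ w) against labels in {-1, +1}."""
    return float(np.mean(np.sign(X @ w) == y))

def good_enough_radius(center, X, y, epsilon, R_max=10.0, delta=1e-2, n_samples=20, seed=0):
    """Binary-search the largest radius R such that models sampled on the sphere of
    radius R around `center` all keep accuracy >= epsilon on the node's local data."""
    rng = np.random.default_rng(seed)
    R_lower, R_upper = 0.0, R_max
    while R_upper - R_lower > delta:
        R = 0.5 * (R_lower + R_upper)
        all_good = True
        for _ in range(n_samples):
            u = rng.normal(size=center.shape)
            u /= np.linalg.norm(u)
            if accuracy(center + R * u, X, y) < epsilon:
                all_good = False
                break
        if all_good:
            R_lower = R        # every sampled model passed: try a larger ball
        else:
            R_upper = R        # a sampled model failed: shrink the ball
    return R_lower

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_star = rng.normal(size=5)                 # stands in for the locally optimal model
y = np.sign(X @ w_star)
print(good_enough_radius(w_star, X, y, epsilon=0.9))
```

The returned (center, radius) pair is essentially the entire message a node has to send, which is where the claim that each node's message scales linearly in the local model size comes from.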
For neural networks FIG1 ), finetuned GEMS consistently outperforms 1) the finetuned baselines, and 2) a'raw' model trained directly on S public. This suggest that the GEMS model is learning weights that are more amenable to fine-tuning, and are perhaps capturing better representations for the overall task. Though this advantage diminishes as the tuning sample size increases, the advantage of GEMS is especially pronounced for smaller samples, and achieves remarkable improvements with just 100 images. Intersection Analysis. In certain cases, GEMS may not find an intersection between different nodes. This occurs when the task is too complex for the model, or is set too high. In practice, we notice that finding an intersection requires us to be conservative (e.g low values) when setting for each node. We explain this by our choice to represent H k as an R d ball. Though R d balls are easy to compute and intersect, they're fairly coarse approximations of the actual good-enough model space. To illustrate node behavior at different settings of, we defer the reader to experiments performed in Appendix G. In summary, we introduce GEMS, a framework for learning an aggregated model across different nodes within a few rounds of communication. We validate one approach for constructing good-enough model spaces (as R d balls) on three datasets for both convex classifiers and simple feedforward networks. Despite the simplicity of the proposed approach, we find that it outperforms a wide range of baselines for effective model aggregation TAB0 We provide a more detailed explanation of the GEMS algorithm for convex settings. Consider the class of linear separators f w (·) parameterized by a weight vector w ∈ R d. For each node k, we compute H k as R d -ball in the parameter space, represented as a tuple (c k ∈ R d, r k ∈ R) corresponding to the center and radius. Formally, H k = {w ∈ R d |||c k − w|| 2 ≤ r k}. Fixing as our minimum acceptable performance, we want to compute H k such that ∀w ∈ H k, Q(w, S k) = 1. Intuitively, every model contained within the d-ball should have an accuracy greater than or equal to. DISPLAYFORM0 over node data S k, is fixed hyperparameter, and Q(·) is a minimum accuracy threshold defined according to Eq. 1. R max and ∆ define the scope and stopping criteria for the binary search. Intersection: Given K nodes with individual DISPLAYFORM1., K}. We pick a point in this intersection by solving: DISPLAYFORM2 which takes a minimum value of 0 when w ∈ ∩ i H i. This w can be improved by fine-tuning on a limited sample of'public data'. We provide a more detailed explanation of the GEMS algorithm applied to neural networks. First, we observe that the final layer of an MLP is a linear model. Hence, we can apply the method above with no modification. However, the input to this final layer is a set of stacked, non-linear transformations which extract feature from the data. For these layers, the approach presented above faces two challenges:1. Node specific features: When the distribution of data is non-i.i.d across nodes, different nodes may learn different feature extractors in lower layers.2. Model Isomorphisms: MLPs are extremely sensitive to the weight initialization. Two models trained on the same set of samples (with different initializations) may have equivalent behavior despite learning weights. In particular, reordering a model's hidden neurons (within the same layer) does not alter the model's predictions, but corresponds to a different weight vector w. 
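Before handling these hidden-layer complications, it is worth spelling out the convex intersection step from Appendix A above. The objective used below is an assumption: a hinge-squared penalty that, like the quantity in Eq. 2, attains its minimum value of 0 exactly when w lies inside every node's ball, and can be minimized with plain gradient descent.

```python
# Sketch of the server-side intersection step for convex models (the specific
# penalty is an assumption standing in for Eq. 2; it is 0 iff w is in every ball).
import numpy as np

def intersection_penalty(w, centers, radii):
    dists = np.linalg.norm(w - centers, axis=1)
    return float(np.sum(np.maximum(0.0, dists - radii) ** 2))

def find_intersection_point(centers, radii, lr=0.1, steps=500):
    w = centers.mean(axis=0)                       # initialize at the centroid
    for _ in range(steps):
        grad = np.zeros_like(w)
        for c, r in zip(centers, radii):
            d = np.linalg.norm(w - c)
            if d > r:                              # only violated balls contribute
                grad += 2.0 * (d - r) * (w - c) / d
        w = w - lr * grad
    return w

centers = np.array([[0.0, 0.0], [1.5, 0.0]])       # two nodes' ball centers
radii = np.array([1.0, 1.0])
w_G = find_intersection_point(centers, radii)
print(w_G, intersection_penalty(w_G, centers, radii))   # penalty ~ 0: w_G lies in both balls
```

When the balls do not overlap, which can happen if epsilon is set too aggressively, the penalty stays positive; this is exactly the no-intersection failure mode discussed in the intersection analysis above.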
In order to construct H k for hidden layers, we modify the approach presented in Appendix A, applying it to individual hidden units. Formally, let the ordered set DISPLAYFORM0 Broadly, Q neuron returns 1 if the output of f w over z j−1 is within of z j l, and −1 otherwise. We can now apply Algorithm 2 to each neuron. Formally:1. Each node k learns a locally optimal model m k, with optimal neuron weights w j l *, over all j, l.2. Fix hidden layer j = 1. Apply Algorithm 2 to each DISPLAYFORM1, with Q(·) according to Eq 3 and predefined hyperparameter j. Denote the R d ball constructed for neuron l as H k j,l. to the central server which constructs the aggregate hidden layer f Gj,· such ∀i, k, ∃i: f Gj,i ∈ H k j,i. This is achieved by greedily applying Eq 2 to tuples in the cartesian product H 1 j,· ×... × H K j,·. Neurons for which no intersection exists are included in f Gj,·, thus trivially ensuring the condition above.4. The server sends h Gj,· to each node, which insert h Gj,· at layer j in their local models and retrain the layers above j.5. Increment j, and return to if any hidden layers remain. Step FORMULA10 is expensive for large L and K as |H DISPLAYFORM0 A simplifying assumption is that if H k j,i and H k j,l are'far', then the likelihood of intersection is low. Operationalizing this, we can perform k-means clustering over all neurons. In step, we now only look for intersections between tuples of neurons in the same cluster. Neurons for which no intersection exists are included in f Gj,·. For notational clarity, we denote the number of clusters with which k-means is run as m, in order to distinguish it from device index k. We describe preprocessing/featurization steps for our empirical . MNIST. We used the standard MNIST dataset. CIFAR-10. We featurize CIFAR-10 (train, test, and validation sets) using a pretrained ImageNet VGG-16 model from Keras. All models are learned on these featurized images. HAM10000. The HAM dataset consists of 10015 images of skin lesions. Lesions are classified as one of seven potential types: actinic keratoses and intraepithelial carcinoma (akiec), basal cell carcinoma (bcc), benign keratosis (bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv), and vascular lesions (vasc). As FIG4 shows, the original original dataset is highly skewed, with almost 66% of images belonging to one class. In order to balance the dataset, we augment each class by performing a series of random transformations (rotations, width shifts, height shifts, vertical flips, and horizontal flips) via Keras . We sample 2000 images from each class. We initially experimented with extracting ImageNet features (similar to our proceedure for CIFAR-10). However, training a model on these extractions ed in poor performance. We constructed our own feature extractor, by training a simple convolutional network on 66% of the data, and trimming the final 2 dense layers. This network contained 3 convolutional layers (32, 64, 128 filters with 3 × 3 kernels) interspersed with 2 × 2 MaxPool layers, and followed by a single hidden layer with 512 neurons. Given K nodes, we partitioned each dataset in order to ensure that all images corresponding to the same class belonged to the same node. TAB3 provides an explicit breakdown of the label partitions for each of the three datasets, across the different values of K we experimented with. We divided each dataset into train, validation, and test splits. 
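Returning to the neuron-matching shortcut described above, where k-means clustering over pooled neurons restricts which intersections are attempted, a minimal sketch could look as follows. The pooled-center representation, the pairwise intersection test, and the scikit-learn call are assumptions, with m the number of clusters as in the notation above.

```python
# Sketch of the k-means shortcut for hidden-neuron matching (my own representation).
import numpy as np
from sklearn.cluster import KMeans

def balls_intersect(c1, r1, c2, r2):
    """Two R^d balls intersect iff the distance between their centers is at most r1 + r2."""
    return np.linalg.norm(c1 - c2) <= r1 + r2

def candidate_neuron_matches(centers, radii, node_ids, m=4, seed=0):
    """centers: (n_neurons, d) per-neuron ball centers pooled over all nodes.
    Returns pairs of neurons from different nodes whose balls intersect, but only
    checks pairs that fall into the same k-means cluster."""
    labels = KMeans(n_clusters=m, n_init=10, random_state=seed).fit_predict(centers)
    matches = []
    for cluster in range(m):
        idx = np.flatnonzero(labels == cluster)
        for i, a in enumerate(idx):
            for b in idx[i + 1:]:
                if node_ids[a] != node_ids[b] and balls_intersect(
                        centers[a], radii[a], centers[b], radii[b]):
                    matches.append((int(a), int(b)))
    return matches

rng = np.random.default_rng(0)
centers = rng.normal(size=(20, 8))                  # 10 neurons from each of 2 nodes (toy)
radii = np.full(20, 0.5)
node_ids = np.array([0] * 10 + [1] * 10)
print(candidate_neuron_matches(centers, radii, node_ids))
```

Neurons whose balls intersect no counterpart are simply carried over into the aggregate layer, as described above.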
All training occurs exclusively on the train split and all are reported for performance on the test split. We use the validation split to construct each node's Our convex model consists of a simple logistic regression classifier. We train with Adam, a learning rate of 0.001, and a batch size of 32. We terminate training when training accuracy converges. Our non-convex model consists of a simple two layer feedforward neural network. For MNIST and HAM, we fix the hidden layer size to 50 neurons. For CIFAR-10, we fix the hidden layer size to 100 neurons. We apply dropout with a rate of 0.5 to the hidden layer. We train with Adam, a learning rate of 0.001, and a batch size of 32. We terminate training when training accuracy converges. We evaluate the convex variant of GEMS on logistic classifiers. The for all three datasets for a varying number nodes K is presented in TAB6. Fine-tuning consists of updating the weights of the GEMS model for 5 epochs over a random sample of 1000 images from the aggregated validation data. Training details are provided in Appendix C.3In a convex setting, we find that GEMS frequently defaults to a weighted average of the parameters. Hence, the GEMS closely mirror naive averaging. As the number of agents increases, both untuned GEMS and the baselines significantly decrease in performance. However, tuned GEMS remains relatively consistent, and outperforms all other baselines. We use = 0.70 for MNIST, = 0.40 for HAM, and = 0.20 for CIFAR-10. E. Neural Network Results Table 4 presents the neural network for MNIST. We use = 0.7 for the final layer, and let j denote the deviation allowed for the hidden neurons (as defined in Eq 3). For neural networks, GEMS provides a modular framework to tradeoff between the model size and performance, via hyperparameters m (the number of clusters created when identifying intersections) and j (the maximum output deviation allowed for hidden neurons). Intuitively both parameters control the number of hidden neurons in the aggregate model h G. Table 7 compares adjustments for j and m on CIFAR-10 for 5 nodes against an ensemble of local device models. We observe that the GEMS performance correlates with the number of hidden neurons, and that GEMS outperforms the ensemble method at all settings (despite having fewer parameters). For ease of clarity, we describe the model size in terms of the number of hidden neurons. For ensembles, we sum the hidden neurons across all ensemble members. All are averaged over 5 trials, with standard deviations reported. We notice that in order for GEMs to find an intersection, we have to set conservatively. We illustrate this phenomenon in FIG8. We consider the convex MNIST case (K = 2), and do a grid search over different values of for each node. We plot whether an intersection was identified, and the ing accuracy at that setting TAB0 Model Aggregation via Good-Enough Model Spaces TAB0 Model Aggregation via Good-Enough Model Spaces | We present Good-Enough Model Spaces (GEMS), a framework for learning an aggregate model over distributed nodes within a small number of communication rounds. | 1,067 | scitldr |
We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality. The field of representation learning was initially driven by supervised approaches, with impressive using large labelled datasets. Unsupervised generative modeling, in contrast, used to be a domain governed by probabilistic approaches focusing on low-dimensional data. Recent years have seen a convergence of those two approaches. In the new field that formed at the intersection, variational auto-encoders (VAEs) BID16 constitute one well-established approach, theoretically elegant yet with the drawback that they tend to generate blurry samples when applied to natural images. In contrast, generative adversarial networks (GANs) BID9 turned out to be more impressive in terms of the visual quality of images sampled from the model, but come without an encoder, have been reported harder to train, and suffer from the "mode collapse" problem where the ing model is unable to capture all the variability in the true data distribution. There has been a flurry of activity in assaying numerous configurations of GANs as well as combinations of VAEs and GANs. A unifying framework combining the best of GANs and VAEs in a principled way is yet to be discovered. This work builds up on the theoretical analysis presented in BID3. Following; BID3, we approach generative modeling from the optimal transport (OT) point of view. The OT cost is a way to measure a distance between probability distributions and provides a much weaker topology than many others, including f -divergences associated with the original GAN algorithms BID25. This is particularly important in applications, where data is usually supported on low dimensional manifolds in the input space X. As a , stronger notions of distances (such as f -divergences, which capture the density ratio between distributions) often max out, providing no useful gradients for training. In contrast, OT was claimed to have a nicer behaviour BID11 although it requires, in its GAN-like implementation, the addition of a constraint or a regularization term into the objective.: Both VAE and WAE minimize two terms: the reconstruction cost and the regularizer penalizing discrepancy between P Z and distribution induced by the encoder Q. VAE forces Q(Z|X = x) to match P Z for all the different input examples x drawn from P X. This is illustrated on picture (a), where every single red ball is forced to match P Z depicted as the white shape. Red balls start intersecting, which leads to problems with reconstruction. In contrast, WAE forces the continuous mixture Q Z:= Q(Z|X)dP X to match P Z, as depicted with the green ball in picture (b). As a latent codes of different examples get a chance to stay far away from each other, promoting a better reconstruction. 
In this work we aim at minimizing OT W c (P X, P G) between the true (but unknown) data distribution P X and a latent variable model P G specified by the prior distribution P Z of latent codes Z ∈ Z and the generative model P G (X|Z) of the data points X ∈ X given Z. Our main contributions are listed below (cf. also FIG0):• A new family of regularized auto-encoders (Algorithms 1, 2 and Eq. 4), which we call Wasserstein Auto-Encoders (WAE), that minimize the optimal transport W c (P X, P G) for any cost function c. Similarly to VAE, the objective of WAE is composed of two terms: the c-reconstruction cost and a regularizer D Z (P Z, Q Z) penalizing a discrepancy between two distributions in Z: P Z and a distribution of encoded data points, i.e. DISPLAYFORM0 When c is the squared cost and D Z is the GAN objective, WAE coincides with adversarial auto-encoders of BID23.• Empirical evaluation of WAE on MNIST and CelebA datasets with squared cost c(x, y) = x − y 2 2. Our experiments show that WAE keeps the good properties of VAEs (stable training, encoder-decoder architecture, and a nice latent manifold structure) while generating samples of better quality, approaching those of GANs.• We propose and examine two different regularizers D Z (P Z, Q Z). One is based on GANs and adversarial training in the latent space Z. The other uses the maximum mean discrepancy, which is known to perform well when matching high-dimensional standard normal distributions P Z BID10. Importantly, the second option leads to a fully adversary-free min-min optimization problem.• Finally, the theoretical considerations presented in BID3 and used here to derive the WAE objective might be interesting in their own right. In particular, Theorem 1 shows that in the case of generative models, the primal form of W c (P X, P G) is equivalent to a problem involving the optimization of a probabilistic encoder Q(Z|X).The paper is structured as follows. In Section 2 we review a novel auto-encoder formulation for OT between P X and the latent variable model P G derived in BID3. Relaxing the ing constrained optimization problem we arrive at an objective of Wasserstein auto-encoders. We propose two different regularizers, leading to WAE-GAN and WAE-MMD algorithms. Section 3 discusses the related work. We present the experimental in Section 4 and conclude by pointing out some promising directions for future work. Our new method minimizes the optimal transport cost W c (P X, P G) based on the novel auto-encoder formulation (see Theorem 1 below). In the ing optimization problem the decoder tries to accurately reconstruct the encoded training examples as measured by the cost function c. The encoder tries to simultaneously achieve two conflicting goals: it tries to match the encoded distribution of training examples Q Z:= E P X [Q(Z|X)] to the prior P Z as measured by any specified divergence D Z (Q Z, P Z), while making sure that the latent codes provided to the decoder are informative enough to reconstruct the encoded training examples. This is schematically depicted on FIG0. We use calligraphic letters (i.e. X) for sets, capital letters (i.e. X) for random variables, and lower case letters (i.e. x) for their values. We denote probability distributions with capital letters (i.e. P (X)) and corresponding densities with lower case letters (i.e. p(x)). In this work we will consider several measures of discrepancy between probability distributions P X and P G. 
The class of f -divergences BID21 ) is defined by DISPLAYFORM0 A rich class of divergences between probability distributions is induced by the optimal transport (OT) problem . Kantorovich's formulation of the problem is given by DISPLAYFORM0 where c(x, y): X × X → R + is any measurable cost function and P(X ∼ P X, Y ∼ P G) is a set of all joint distributions of (X, Y) with marginals P X and P G respectively. A particularly interesting case is when (X, d) is a metric space and c(x, y) = d p (x, y) for p ≥ 1. In this case W p, the p-th root of W c, is called the p-Wasserstein distance. When c(x, y) = d(x, y) the following Kantorovich-Rubinstein duality holds 1: DISPLAYFORM1 where F L is the class of all bounded 1-Lipschitz functions on (X, d). One way to look at modern generative models like VAEs and GANs is to postulate that they are trying to minimize certain discrepancy measures between the data distribution P X and the model P G. Unfortunately, most of the standard divergences known in the literature, including those listed above, are hard or even impossible to compute, especially when P X is unknown and P G is parametrized by deep neural networks. Previous research provides several tricks to address this issue. In case of minimizing the KL-divergence D KL (P X, P G), or equivalently maximizing the marginal log-likelihood E P X [log p G (X)], the famous variational lower bound provides a theoretically grounded framework successfully employed by VAEs BID16 BID24. More generally, if the goal is to minimize the f -divergence D f (P X, P G) (with one example being D KL), one can resort to its dual formulation and make use of f -GANs and the adversarial training BID25. Finally, OT cost W c (P X, P G) is yet another option, which can be, thanks to the celebrated Kantorovich-Rubinstein duality, expressed as an adversarial objective as implemented by the Wasserstein-GAN. We include an extended review of all these methods in Supplementary A.In this work we will focus on latent variable models P G defined by a two-step procedure, where first a code Z is sampled from a fixed distribution P Z on a latent space Z and then Z is mapped to the image X ∈ X = R d with a (possibly random) transformation. This in a density of the form DISPLAYFORM0 assuming all involved densities are properly defined. For simplicity we will focus on non-random decoders, i.e. generative models P G (X|Z) deterministically mapping Z to X = G(Z) for a given map G: Z → X. Similar for random decoders can be found in Supplementary B.1.It turns out that under this model, the OT cost takes a simpler form as the transportation plan factors through the map G: instead of finding a coupling Γ in between two random variables living in the X space, one distributed according to P X and the other one according to P G, it is sufficient to find a conditional distribution DISPLAYFORM1 is identical to the prior distribution P Z. This is the content of the theorem below proved in BID3. To make this paper self contained we repeat the proof in Supplementary B.Theorem 1 For P G as defined above with deterministic P G (X|Z) and any function G: DISPLAYFORM2 where Q Z is the marginal distribution of Z when X ∼ P X and Z ∼ Q(Z|X).This allows us to optimize over random encoders Q(Z|X) instead of optimizing over all couplings between X and Y. Of course, both problems are still constrained. In order to implement a numerical solution we relax the constraints on Q Z by adding a penalty to the objective. 
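In plain notation, the two central quantities of this section are the Kantorovich primal problem and the factored form established by Theorem 1. The displays below follow the standard statements in the cited optimal-transport and WAE literature and should be read as reconstructions rather than verbatim quotes:

W_c(P_X, P_G) = inf_{Gamma in P(X ~ P_X, Y ~ P_G)}  E_{(X,Y) ~ Gamma} [ c(X, Y) ],

and, for a deterministic decoder Y = G(Z),

W_c(P_X, P_G) = inf_{Q(Z|X) : Q_Z = P_Z}  E_{X ~ P_X}  E_{Z ~ Q(Z|X)} [ c(X, G(Z)) ].

Relaxing the hard constraint Q_Z = P_Z into an additive penalty D_Z(Q_Z, P_Z) is what yields the objective stated next.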
This finally leads us to the WAE objective: DISPLAYFORM3 where Q is any nonparametric set of probabilistic encoders, D Z is an arbitrary divergence between Q Z and P Z, and λ > 0 is a hyperparameter. Similarly to VAE, we propose to use deep neural networks to parametrize both encoders Q and decoders G. Note that as opposed to VAEs, the WAE formulation allows for non-random encoders deterministically mapping inputs to their latent codes. We propose two different penalties D Z (Q Z, P Z): DISPLAYFORM4 and use the adversarial training to estimate it. Specifically, we introduce an adversary (discriminator) in the latent space Z trying to separate 2 "true" points sampled from P Z and "fake" ones sampled from Q Z BID9. This in the WAE-GAN described in Algorithm 1. Even though WAE-GAN falls back to the min-max problem, we move the adversary from the input (pixel) space X to the latent space Z. On top of that, P Z may have a nice shape with a single mode (for a Gaussian prior), in which case the task should be easier than matching an unknown, complex, and possibly multi-modal distributions as usually done in GANs. This is also a reason for our second penalty: DISPLAYFORM5 For a positive-definite reproducing kernel k: Z × Z → R the following expression is called the maximum mean discrepancy (MMD): DISPLAYFORM6 where H k is the RKHS of real-valued functions mapping Z to R. If k is characteristic then MMD k defines a metric and can be used as a divergence measure. We propose to use DISPLAYFORM7 Fortunately, MMD has an unbiased U-statistic estimator, which can be used in conjunction with stochastic gradient descent (SGD) methods. This in the WAE-MMD described in Algorithm 2. It is well known that the maximum mean discrepancy performs well when matching high-dimensional standard normal distributions BID10 ) so we expect this penalty to work especially well working with the Gaussian prior P Z. Require: Regularization coefficient λ > 0. Initialize the parameters of the encoder Q φ, decoder G θ, and latent discriminator Dγ. while (φ, θ) not converged do Sample {x1, . . ., xn} from the training set Sample {z1, . . ., zn} from the prior PZ Samplezi from Q φ (Z|xi) for i = 1,..., n Update Dγ by ascending: DISPLAYFORM0 Update Q φ and G θ by descending: DISPLAYFORM1 end while ALGORITHM 2 Wasserstein Auto-Encoder with MMD-based penalty (WAE-MMD).Require: Regularization coefficient λ > 0, characteristic positive-definite kernel k. Initialize the parameters of the encoder Q φ, decoder G θ, and latent discriminator Dγ. while (φ, θ) not converged do Sample {x1, . . ., xn} from the training set Sample {z1, . . ., zn} from the prior PZ Samplezi from Q φ (Z|xi) for i = 1,..., n Update Q φ and G θ by descending: DISPLAYFORM2 end whileWe point out once again that the encoders Q φ (Z|x) in Algorithms 1 and 2 can be non-random, i.e. deterministically mapping input points to the latent codes. In this case Q φ (Z|x) = δ µ φ (x) for a function µ φ: X → Z and in order to samplez i from Q φ (Z|x i) we just need to return µ φ (x i). Literature on auto-encoders Classical unregularized auto-encoders minimize only the reconstruction cost. This in different training points being encoded into non-overlapping zones chaotically scattered all across the Z space with "holes" in between where the decoder mapping P G (X|Z) has never been trained. 
Overall, the encoder Q(Z|X) trained in this way does not provide a useful representation and sampling from the latent space Z becomes hard BID1.Variational auto-encoders BID16 ) minimize a variational bound on the KLdivergence D KL (P X, P G) which is composed of the reconstruction cost plus the regularizer DISPLAYFORM0 The regularizer captures how distinct the image by the encoder of each training example is from the prior P Z, which is not guaranteeing that the overall encoded distribution E P X [Q(Z|X)] matches P Z like WAE does. Also, VAEs require non-degenerate (i.e. nondeterministic) Gaussian encoders and random decoders for which the term log p G (x|z) can be computed and differentiated with respect to the parameters. Later BID24 proposed a way to use VAE with non-Gaussian encoders. WAE minimizes the optimal transport W c (P X, P G) and allows both probabilistic and deterministic encoder-decoder pairs of any kind. The VAE regularizer can be also equivalently written BID13 as a sum of D KL (Q Z, P Z) and a mutual information I Q (X, Z) between the images X and latent codes Z jointly distributed according to P X × Q(Z|X). This observation provides another intuitive way to explain a difference between our algorithm and VAEs: WAEs simply drop the mutual information term I Q (X, Z) in the VAE regularizer. When used with c(x, y) = x − y 2 2 WAE-GAN is equivalent to adversarial auto-encoders (AAE) proposed by BID23. Theory of BID3 (and in particular Theorem 1) thus suggests that AAEs minimize the 2-Wasserstein distance between P X and P G. This provides the first theoretical justification for AAEs known to the authors. WAE generalizes AAE in two ways: first, it can use any cost function c in the input space X; second, it can use any discrepancy measure D Z in the latent space Z (for instance MMD), not necessarily the adversarial one of WAE-GAN.Finally, Zhao et al. (2017b) independently proposed a regularized auto-encoder objective similar to BID3 and our based on very different motivations and arguments. Following VAEs their objective (called InfoVAE) defines the reconstruction cost in the image space implicitly through the negative log likelihood term − log p G (x|z), which should be properly normalized for all z ∈ Z. In theory VAE and InfoVAE can both induce arbitrary cost functions, however in practice this may require an estimation of the normalizing constant (partition function) which can 3 be different for different values of z. WAEs specify the cost c(x, y) explicitly and don't constrain it in any way. Literature on address computing the OT cost in large scale using SGD and sampling. They approach this task either through the dual formulation, or via a regularized version of the primal. They do not discuss any implications for generative modeling. Our approach is based on the primal form of OT, we arrive at regularizers which are very different, and our main focus is on generative modeling. The WGAN minimizes the 1-Wasserstein distance W 1 (P X, P G) for generative modeling. The authors approach this task from the dual form. Their algorithm comes without an encoder and can not be readily applied to any other cost W c, because the neat form of the Kantorovich-Rubinstein duality holds only for W 1. WAE approaches the same problem from the primal form, can be applied for any cost function c, and comes naturally with an encoder. In order to compute the values or of OT we need to handle non-trivial constraints, either on the coupling distribution Γ or on the function f being considered. 
Various approaches have been proposed in the literature to circumvent this difficulty. For W 1 tried to implement the constraint in the dual formulation by clipping the weights of the neural network f. Later BID11 proposed to relax the same constraint by penalizing the objective of with a term λ · E (∇f (X) − 1) 2 which should not be greater than 1 if f ∈ F L. In a more general OT setting of W c BID5 proposed to penalize the objective of with the KLdivergence λ · D KL (Γ, P ⊗ Q) between the coupling distribution and the product of marginals. BID8 showed that this entropic regularization drops the constraints on functions in the dual formulation as opposed to. Finally, in the context of unbalanced optimal transport it has been proposed to relax the constraint in by regularizing the objective with BID4 BID20, where Γ X and Γ Y are marginals of Γ. In this paper we propose to relax OT in a way similar to the unbalanced optimal transport, i.e. by adding additional divergences to the objective. However, we show that in the particular context of generative modeling, only one extra divergence is necessary. DISPLAYFORM1 Literature on GANs Many of the GAN variations (including f -GAN and WGAN) come without an encoder. Often it may be desirable to reconstruct the latent codes and use the learned manifold, in which cases these models are not applicable. There have been many other approaches trying to blend the adversarial training of GANs with autoencoder architectures (a; BID6 BID29 BID2 . The approach proposed by BID29 is perhaps the most relevant to our work. The authors use the discrepancy between Q Z and the distribution E Z ∼P Z [Q Z|G(Z) ] of auto-encoded noise vectors as the objective for the max-min game between the encoder and decoder respectively. While the authors showed that the saddle points correspond to P X = P G, they admit that encoders and decoders trained in this way have no incentive to be reciprocal. As a workaround they propose to include an additional reconstruction term to the objective. WAE does not necessarily lead to a min-max game, uses a different penalty, and has a clear theoretical foundation. Several works used reproducing kernels in context of GANs. BID19; BID7 use MMD with a fixed kernel k to match P X and P G directly in the input space X. These methods have been criticised to require larger mini-batches during training: estimating MMD k (P X, P G) requires number of samples roughly proportional to the dimensionality of the input space X BID28 which is typically larger than 10 3. BID18 take a similar approach but further train k adversarially so as to arrive at a meaningful loss function. WAE-MMD uses MMD to match Q Z to the prior P Z in the latent space Z. Typically Z has no more than 100 dimensions and P Z is Gaussian, which allows us to use regular mini-batch sizes to accurately estimate MMD. Random samples In this section we empirically evaluate 4 the proposed WAE model. We would like to test if WAE can simultaneously achieve (i) accurate reconstructions of data points, (ii) reasonable geometry of the latent manifold, and (iii) random samples of good (visual) quality. Importantly, the model should generalize well: requirements (i) and (ii) should be met on both training and test data. We trained WAE-GAN and WAE-MMD (Algorithms 1 and 2) on two real-world datasets: MNIST consisting of 70k images and CelebA BID22 containing roughly 203k images. 
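Before the experimental details, a minimal sketch of the penalty at the heart of WAE-MMD may be useful: an unbiased U-statistic estimate of MMD_k between encoded codes and prior samples, here with the inverse multiquadratics kernel k(x, y) = C / (C + ||x - y||^2) and C = 2 d_z sigma_z^2 as described in the setup below. The NumPy implementation itself is an illustration, not the code behind the reported numbers.

```python
# Illustrative NumPy sketch of the WAE-MMD penalty (Algorithm 2), not the original code.
import numpy as np

def imq_kernel(a, b, C):
    """Inverse multiquadratics kernel C / (C + ||x - y||^2) between two sample sets."""
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return C / (C + sq_dists)

def mmd_penalty(z_encoded, z_prior, sigma_z=1.0):
    """Unbiased U-statistic estimate of MMD_k(Q_Z, P_Z) from two equal-size samples."""
    n, d_z = z_encoded.shape
    C = 2.0 * d_z * sigma_z ** 2
    k_qq = imq_kernel(z_encoded, z_encoded, C)
    k_pp = imq_kernel(z_prior, z_prior, C)
    k_qp = imq_kernel(z_encoded, z_prior, C)
    off_diag = 1.0 - np.eye(n)
    return (np.sum(k_qq * off_diag) + np.sum(k_pp * off_diag)) / (n * (n - 1)) \
           - 2.0 * np.sum(k_qp) / (n * n)

rng = np.random.default_rng(0)
z_prior = rng.normal(size=(128, 8))                    # P_Z = N(0, I), d_z = 8
z_matched = rng.normal(size=(128, 8))                  # a Q_Z that matches the prior
z_shifted = rng.normal(loc=2.0, size=(128, 8))         # a Q_Z that does not
print(mmd_penalty(z_matched, z_prior), mmd_penalty(z_shifted, z_prior))
```

During training this penalty is simply added, weighted by lambda, to the c-reconstruction term and minimized jointly over encoder and decoder; unlike WAE-GAN there is no inner adversary, so the optimization stays a plain minimization.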
Experimental setup In all reported experiments we used Euclidian latent spaces Z = R dz for various d z depending on the complexity of the dataset, isotropic Gaussian prior distributions P Z (Z) = N (Z; 0, σ 2 z · I d) over Z, and a squared cost function c(x, y) = x − y 2 2 for data points x, y ∈ X = R dx. We used deterministic encoder-decoder pairs, Adam BID15 ) with β 1 = 0.5, β 2 = 0.999, and convolutional deep neural network architectures for encoder mapping µ φ: X → Z and decoder mapping G θ: Z → X similar to the DCGAN ones reported by BID27 with batch normalization BID14. We tried various values of λ and noticed that λ = 10 seems to work good across all datasets we considered. Since we are using deterministic encoders, choosing d z larger than intrinsic dimensionality of the dataset would force the encoded distribution Q Z to live on a manifold in Z. This would make matching Q Z to P Z impossible if P Z is Gaussian and may lead to numerical instabilities. We use d z = 8 for MNIST and d z = 64 for CelebA which seems to work reasonably well. Random samples VAE WAE-MMD WAE-GAN Figure 3: VAE (left column), WAE-MMD (middle column), and WAE-GAN (right column) trained on CelebA dataset. In "test reconstructions" odd rows correspond to the real test points. We also report of VAEs. VAEs used the same latent spaces as discussed above and standard Gaussian priors P Z = N (0, I d). We used Gaussian encoders Q(Z|X) = N Z; µ φ (X), Σ(X) with mean µ φ and diagonal covariance Σ. For both MNIST and CelebA we used Bernoulli decoders parametrized by G θ. Functions µ φ, Σ, and G θ were parametrized by deep nets of the same architectures as used in WAE. In WAE-GAN we used discriminator D composed of several fully connected layers with ReLu. We tried WAE-MMD with the RBF kernel but observed that it fails to penalize the outliers of Q Z because of the quick tail decay. If the codesz = µ φ (x) for some of the training points x ∈ X end up far away from the support of P Z (which may happen in the early stages of training) the corresponding terms in the U-statistic k(z,z) = e − z−z 2 2 /σ 2 k will quickly approach zero and provide no gradient for those outliers. This could be avoided by choosing the kernel bandwidth σ 2 k in a data-dependent manner, however in this case per-minibatch U-statistic would not provide an unbiased estimate for the gradient. Instead, we used the inverse multiquadratics kernel k(x, y) = C/(C + x − y 2 2) which is also characteristic and has much heavier tails. In all experiments we used C = 2d z σ 2 z, which is the expected squared distance between two multivariate Gaussian vectors drawn from P Z. This significantly improved the performance compared to the RBF kernel (even the one with σ Random samples are generated by sampling P Z and decoding the ing noise vectors z into G θ (z). As expected, in our experiments we observed that for both WAE-GAN and WAE-MMD the quality of samples strongly depends on how accurately Q Z matches P Z. To see this, notice that during training the decoder function G θ is presented only with encoded versions µ φ (X) of the data points X ∼ P X. Indeed, the decoder is trained on samples from Q Z and thus there is no reason to expect good when feeding it with samples from P Z. In our experiments we noticed that even slight differences between Q Z and P Z may affect the quality of samples. In some cases WAE-GAN seems to lead to a better matching and generates better samples than WAE-MMD. 
However, due to adversarial training WAE-GAN is highly unstable, while WAE-MMD has a very stable training much like VAE. In order to quantitatively assess the quality of the generated images, we use the Fréchet Inception Distance introduced by BID12 and report the on CelebA in Table 1. These confirm that the sampled images from WAE are of better quality than from VAE, and WAE-GAN gets a slightly better score than WAE-MMD, which correlates with visual inspection of the images. Test reconstructions and interpolations. We take random points x from the held out test set and report their auto-encoded versions G θ (µ φ (x)). Next, pairs (x, y) of different data points are sampled randomly from the held out test set and encoded: z x = µ φ (x), z y = µ φ (y). We linearly interpolate between z x and z y with equally-sized steps in the latent space and show decoded images. Using the optimal transport cost, we have derived Wasserstein auto-encoders-a new family of algorithms for building generative models. We discussed their relations to other probabilistic modeling techniques. We conducted experiments using two particular implementations of the proposed method, showing that in comparison to VAEs, the images sampled from the trained WAE models are of better quality, without compromising the stability of training and the quality of reconstruction. Future work will include further exploration of the criteria for matching the encoded distribution Q Z to the prior distribution P Z, assaying the possibility of adversarially training the cost function c in the input space X, and a theoretical analysis of the dual formulations for WAE-GAN and WAE-MMD. Even though GANs and VAEs are quite different-both in terms of the conceptual frameworks and empirical performance-they share important features: (a) both can be trained by sampling from the model P G without knowing an analytical form of its density and (b) both can be scaled up with SGD. As a , it becomes possible to use highly flexible implicit models P G defined by a twostep procedure, where first a code Z is sampled from a fixed distribution P Z on a latent space Z and then Z is mapped to the image G(Z) ∈ X = R d with a (possibly random) transformation G: Z → X. This in latent variable models P G of the form.These models are indeed easy to sample and, provided G can be differentiated analytically with respect to its parameters, P G can be trained with SGD. The field is growing rapidly and numerous variations of VAEs and GANs are available in the literature. Next we introduce and compare several of them. The original generative adversarial network (GAN) BID9 approach minimizes DISPLAYFORM0 with respect to a deterministic decoder G: Z → X, where T is any non-parametric class of choice. It is known that D GAN (P X, P G) ≤ 2 · D JS (P X, P G) − log and the inequality turns into identity in the nonparametric limit, that is when the class T becomes rich enough to represent all functions mapping X to. Hence, GANs are minimizing a lower bound on the JS-divergence. However, GANs are not only linked to the JS-divergence: the f -GAN approach BID25 showed that a slight modification D f,GAN of the objective allows to lower bound any desired f -divergence in a similar way. In practice, both decoder G and discriminator T are trained in alternating SGD steps. Stopping criteria as well as adequate evaluation of the trained GAN models remain open questions. 
Recently, the authors of argued that the 1-Wasserstein distance W 1, which is known to induce a much weaker topology than D JS, may be better suited for generative modeling. When P X and P G are supported on largely disjoint low-dimensional manifolds (which may be the case in applications), D KL, D JS, and other strong distances between P X and P G max out and no longer provide useful gradients for P G. This "vanishing gradient" problem necessitates complicated scheduling between the G/T updates. In contrast, W 1 is still sensible in these cases and provides stable gradients. The Wasserstein GAN (WGAN) minimizes DISPLAYFORM1 where W is any subset of 1-Lipschitz functions on X. It follows from that D WGAN (P X, P G) ≤ W 1 (P X, P G) and thus WGAN is minimizing a lower bound on the 1-Wasserstein distance. Variational auto-encoders (VAE) BID16 utilize models P G of the form FORMULA4 and minimize DISPLAYFORM2 with respect to a random decoder mapping P G (X|Z). The conditional distribution P G (X|Z) is often parametrized by a deep net G and can have any form as long as its density p G (x|z) can be computed and differentiated with respect to the parameters of G. A typical choice is to use Gaussians P G (X|Z) = N (X; G(Z), σ 2 · I). If Q is the set of all conditional probability distributions Q(Z|X), the objective of VAE coincides with the negative marginal log-likelihood D VAE (P X, P G) = −E P X [log P G (X)]. However, in order to make the D KL term of tractable in closed form, the original implementation of VAE uses a standard normal P Z and restricts Q to a class of Gaussian distributions Q(Z|X) = N Z; µ(X), Σ(X) with mean µ and diagonal covariance Σ parametrized by deep nets. As a consequence, VAE is minimizing an upper bound on the negative log-likelihood or, equivalently, on the KL-divergence D KL (P X, P G).One possible way to reduce the gap between the true negative log-likelihood and the upper bound provided by D VAE is to enlarge the class Q. Adversarial variational Bayes (AVB) BID24 follows this argument by employing the idea of GANs. Given any point x ∈ X, a noise ∼ N, and any fixed transformation e: X × R → Z, a random variable e(x,) implicitly defines one particular conditional distribution Q e (Z|X = x). AVB allows Q to contain all such distributions for different choices of e, replaces the intractable term D KL Q e (Z|X), P Z in BID31 by the adversarial approximation D f,GAN corresponding to the KL-divergence, and proposes to minimize DISPLAYFORM3 The D KL term in may be viewed as a regularizer. Indeed, VAE reduces to the classical unregularized auto-encoder if this term is dropped, minimizing the reconstruction cost of the encoder-decoder pair Q(Z|X), P G (X|Z). This often in different training points being encoded into nonoverlapping zones chaotically scattered all across the Z space with "holes" in between where the decoder mapping P G (X|Z) has never been trained. Overall, the encoder Q(Z|X) trained in this way does not provide a useful representation and sampling from the latent space Z becomes hard BID1.Adversarial auto-encoders (AAE) BID23 replace the D KL term in with another regularizer: DISPLAYFORM4 where Q Z is the marginal distribution of Z when first X is sampled from P X and then Z is sampled from Q(Z|X), also known as the aggregated posterior BID23. Similarly to AVB, there is no clear link to log-likelihood, as D AAE ≤ D AVB. 
The authors of BID23 argue that matching Q Z to P Z in this way ensures that there are no "holes" left in the latent space Z and P G (X|Z) generates reasonable samples whenever Z ∼ P Z. They also report an equally good performance of different types of conditional distributions Q(Z|X), including Gaussians as used in VAEs, implicit models Q e as used in AVB, and deterministic encoder mappings, i.e. Q(Z|X) = δ µ(X) with µ: X → Z. We will consider certain sets of joint probability distributions of three random variables (X, Y, Z) ∈ X × X × Z. The reader may wish to think of X as true images, Y as images sampled from the model, and Z as latent codes. We denote by P G,Z (Y, Z) a joint distribution of a variable pair (Y, Z), where Z is first sampled from P Z and next Y from P G (Y |Z). Note that P G defined in and used throughout this work is the marginal distribution of Y when (Y, Z) ∼ P G,Z.In the optimal transport problem, we consider joint distributions Γ(X, Y) which are called couplings between values of X and Y. Because of the marginal constraint, we can write Γ(X, Y) = Γ(Y |X)P X (X) and we can consider Γ(Y |X) as a non-deterministic mapping from X to Y. Theorem 1. shows how to factor this mapping through Z, i.e., decompose it into an encoding distribution Q(Z|X) and the generating distribution P G (Y |Z).As in Section 2.2, P(X ∼ P X, Y ∼ P G) denotes the set of all joint distributions of (X, Y) with marginals P X, P G, and likewise for P(X ∼ P X, Z ∼ P Z). The set of all joint distributions of (X, Y, Z) such that X ∼ P X, (Y, Z) ∼ P G,Z, and (Y ⊥ ⊥ X)|Z will be denoted by P X,Y,Z. Finally, we denote by P X,Y and P X,Z the sets of marginals on (X, Y) and (X, Z) (respectively) induced by distributions in P X,Y,Z. Note that P(P X, P G), P X,Y,Z, and P X,Y depend on the choice of conditional distributions P G (Y |Z), while P X,Z does not. In fact, it is easy to check that P X,Z = P(X ∼ P X, Z ∼ P Z). From the definitions it is clear that P X,Y ⊆ P(P X, P G) and we immediately get the following upper bound: DISPLAYFORM0 If P G (Y |Z) are Dirac measures (i.e., Y = G(Z)), it turns out that P X,Y = P(P X, P G):adversary in WAE-GAN to α = 5 × 10 −4. After 30 epochs we decreased both by factor of 2, and after first 50 epochs further by factor of 5.Both encoder and decoder used fully convolutional architectures with 4x4 convolutional filters. Encoder architecture: DISPLAYFORM1 Decoder architecture: DISPLAYFORM2 Adversary architecture for WAE-GAN: DISPLAYFORM3 Here Conv k stands for a convolution with k filters, FSConv k for the fractional strided convolution with k filters (first two of them were doubling the resolution, the third one kept it constant), BN for the batch normalization, ReLU for the rectified linear units, and FC k for the fully connected layer mapping to R k. All the convolutions in the encoder used vertical and horizontal strides 2 and SAME padding. Finally, we used two heuristics. First, we always pretrained separately the encoder for several minibatch steps before the main training stage so that the sample mean and covariance of Q Z would try to match those of P Z. Second, while training we were adding a pixel-wise Gaussian noise truncated at 0.01 to all the images before feeding them to the encoder, which was meant to make the encoders random. We played with all possible ways of combining these two heuristics and noticed that together they in slightly (almost negligibly) better compared to using only one or none of them. 
Our VAE model used cross-entropy loss (Bernoulli decoder) and otherwise same architectures and hyperparameters as listed above. We pre-processed CelebA images by first taking a 140x140 center crops and then resizing to the 64x64 resolution. We used mini-batches of size 100 and trained the models for various number of epochs (up to 250). All reported WAE models were trained for 55 epochs and VAE for 68 epochs. For WAE-MMD we used λ = 100 and for WAE-GAN λ = 1. Both used σ 2 z = 2. For WAE-MMD the learning rate of Adam was initially set to α = 10 −3. For WAE-GAN the learning rate of Adam for the encoder-decoder pair was initially set to α = 3 × 10 −4 and for the adversary to 10 −3. All learning rates were decreased by factor of 2 after 30 epochs, further by factor of 5 after 50 first epochs, and finally additional factor of 10 after 100 first epochs. Both encoder and decoder used fully convolutional architectures with 5x5 convolutional filters. Encoder architecture:x ∈ R 64×64×3 → Conv 128 → BN → ReLU For WAE-GAN we used a heuristic proposed in Supplementary IV of BID24. Notice that the theoretically optimal discriminator would in D * (z) = log p Z (z) − log q Z (z), where p Z and q Z are densities of P Z and Q Z respectively. In our experiments we added the log prior log p Z (z) explicitly to the adversary output as we know it analytically. This should hopefully make it easier for the adversary to learn the remaining Q Z density term. Our VAE model used a cross-entropy reconstruction loss (Bernoulli decoder) and α = 10 −4 as the initial Adam learning rate and the same decay schedule as explained above. Otherwise all the architectures and hyperparameters were as explained above. | We propose a new auto-encoder based on the Wasserstein distance, which improves on the sampling properties of VAE. | 1,068 | scitldr |
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theory in DP, to establish a new connection between DP preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial objective function, based on the post-processing property in DP, to tighten the sensitivity of our model. An end-to-end theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks. The pervasiveness of machine learning exposes new vulnerabilities in software systems, in which deployed machine learning models can be used (a) to reveal sensitive information in private training data , and/or (b) to make the models misclassify, such as adversarial examples . Efforts to prevent such attacks typically seek one of three solutions: Models which preserve differential privacy (DP) , a rigorous formulation of privacy in probabilistic terms; Adversarial training algorithms, which augment training data to consist of benign examples and adversarial examples crafted during the training process, thereby empirically increasing the classification accuracy given adversarial examples ; and Provable robustness, in which the model classification given adversarial examples is theoretically guaranteed to be consistent, i.e., a small perturbation in the input does not change the predicted label . On the one hand, private models, trained with existing privacy-preserving mechanisms (; ; ; 2017b; a; ;), are unshielded under adversarial examples. On the other hand, robust models, trained with adversarial learning algorithms (with or without provable robustness to adversarial examples), do not offer privacy protections to the training data. That one-sided approach poses serious risks to machine learning-based systems; since adversaries can attack a deployed model by using both privacy inference attacks and adversarial examples. To be safe, a model must be i) private to protect the training data, and ii) robust to adversarial examples. Unfortunately, there has not yet been research on how to develop such a model, which thus remains a largely open challenge. Simply combining existing DP-preserving mechanisms and provable robustness conditions (; ;) cannot solve the problem, for many reasons. (a) Existing sensitivity bounds (; 2017b; a) and designs have not been developed to protect the training data in adversarial training. It is obvious that using adversarial examples crafted from the private training data to train our models introduces a previously unknown privacy risk, disclosing the participation of the benign examples . (b) There is an unrevealed interplay among DP preservation, adversarial learning, and robustness bounds. (c) Existing algorithms cannot be readily applied to address the trade-off among model utility, privacy loss, and robustness. Therefore, theoretically bounding the robustness of a model (which both protects the privacy and is robust against adversarial examples) is nontrivial. Motivated by this open problem, we propose to develop a novel differentially private adversarial learning (DPAL) mechanism to: 1) preserve DP of the training data, 2) be provably and practically robust to adversarial examples, and 3) retain high model utility. 
In our mech-anism, privacy-preserving noise is injected into inputs and hidden layers to achieve DP in learning private model parameters (Theorem 1). Then, we incorporate ensemble adversarial learning into our mechanism to improve the decision boundary under DP protections. To do this, we introduce a concept of DP adversarial examples crafted using benign examples in the private training data under DP guarantees (Eq. 9). To address the trade-off between model utility and privacy loss, we propose a new DP adversarial objective function to tighten the model's global sensitivity (Theorem 3); thus, we significantly reduce the amount of noise injected into our function, compared with existing works (; 2017b; a). In addition, ensemble DP adversarial examples with a dynamic perturbation size µ a are introduced into the training process to further improve the robustness of our mechanism under different attack algorithms. An end-to-end privacy analysis shows that, by slitting the private training data into disjoint and fixed batches across epochs, the privacy budget in our DPAL is not accumulated across training steps (Theorem 4). After preserving DP in learning model parameters, we establish a solid connection among privacy preservation, adversarial learning, and provable robustness. Noise injected into different layers is considered as a sequence of randomizing mechanisms, providing different levels of robustness. By leveraging the sequential composition theory in DP , we derive a novel generalized robustness bound, which essentially is a composition of these levels of robustness (Theorem 5 and Proposition 1). To our knowledge, our mechanism establishes the first connection between DP preservation and provable robustness against adversarial examples in adversarial learning. Rigorous experiments conducted on MNIST and CIFAR-10 datasets show that our mechanism notably enhances the robustness of DP deep neural networks, compared with existing mechanisms. In this section, we revisit adversarial learning, DP, and our problem definition. Let D be a database that contains N tuples, each of which contains data x ∈ [−1, 1] d and a ground-truth label y ∈ Z K, with K possible categorical outcomes. Each y is a one-hot vector of K categories y = {y 1, . . ., y K}. A single true class label y x ∈ y given x ∈ D is assigned to only one of the K categories. On input x and parameters θ, a model outputs class scores f: R d → R K that maps d-dimensional inputs x to a vector of scores f (x) = {f 1 (x),..., f K (x)} s.t. ∀k ∈ [1, K]: f k (x) ∈ and K k=1 f k (x) = 1. The class with the highest score value is selected as the predicted label for the data tuple, denoted as y(x) = max k∈K f k (x). A loss function L(f (x), y) presents the penalty for mismatching between the predicted values f (x) and original values y. For the sake of clarity, the notations and terminologies frequently used in this paper are summarized in Table 1 (Appendix A). Let us briefly revisit DP-preserving techniques in deep learning, starting with the definition of DP. Definition 1 (, δ)-DP . A randomized algorithm A fulfills (, δ)-DP, if for any two databases D and D differing at most one tuple, and for all O ⊆ Range(A), we have: A smaller enforces a stronger privacy guarantee. Here, controls the amount by which the distributions induced by D and D may differ, δ is a broken probability. DP also applies to general metrics ρ(D, D) ≤ 1, where ρ can be l p -norms . 
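Since all of the DP guarantees that follow are obtained by injecting Laplace noise calibrated to a sensitivity bound, it may help to have the generic Laplace mechanism in front of us as a reference point. The sketch below is not the paper's code; the sensitivity and budget values are purely illustrative.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng(0)):
    """Release `value` (scalar or array) under epsilon-DP by adding Laplace
    noise with scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

# Example: perturbing a batch-averaged coefficient vector with sensitivity 2
# under a budget of 0.5.
coeff = np.array([0.3, -0.1, 0.7])
private_coeff = laplace_mechanism(coeff, sensitivity=2.0, epsilon=0.5)
```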
DP-preserving algorithms in deep learning can be categorized into two lines: 1) introducing noise into gradients of parameters (; ; ; ; ;), 2) injecting noise into objective functions (; 2017b; a), and 3) injecting noise into labels . In Lemmas 2 and 4, we will show that our mechanism achieves better sensitivity bounds compared with existing works (; 2017b; a). Adversarial Learning. For some target model f and inputs (x, y x), the adversary's goal is to find an adversarial example x adv = x + α, where α is the perturbation introduced by the attacker, such that: x adv and x are close, and the model misclassifies x adv, i.e., y(x adv) = y(x). In this paper, we consider well-known l p∈{1,2,∞} -norm bounded attacks (b). Let l p (µ) = {α ∈ R d : α p ≤ µ} be the l p -norm ball of radius µ. One of the goals in adversarial learning is to minimize the risk over adversarial examples: θ * = arg min θ E (x,ytrue)∼D max α p ≤µ L f (x + α, θ), y x, where an attack is used to approximate solutions to the inner maximization problem, and the outer minimization problem corresponds to training the model f with parameters θ over these adversarial examples x adv = x + α. There are two basic adversarial example attacks. The first one is a single-step algorithm, in which only a single gradient computation is required. For instance, FGSM algorithm (b) finds adversarial examples by solving the inner maximization max α p ≤µ L f (x + α, θ), y x. The second one is an iterative algorithm, in which multiple gradients are computed and updated. For instance, in (a), FGSM is applied multiple times with T µ small steps, each of which has a size of µ/T µ. To improve the robustness of models, prior work focused on two directions: 1) Producing correct predictions on adversarial examples, while not compromising the accuracy on legitimate inputs (; ; ; b; a; ; ;); and 2) Detecting adversarial examples (; ; ; Abbasi & Gagné, 2017;). Among existing solutions, adversarial training appears to hold the greatest promise for learning robust models (Tramèr et al., 2017). One of the well-known algorithms was proposed in (b). At every training step, new adversarial examples are generated and injected into batches containing both benign and adversarial examples. The typical adversarial learning in (b) is presented in Alg. 2 (Appendix B). DP and Provable Robustness. Recently, some algorithms (; ; ; ;) have been proposed to derive provable robustness, in which each prediction is guaranteed to be consistent under the perturbation α, if a robustness condition is held. Given a benign example x, we focus on achieving a robustness condition to attacks of l p (µ)-norm, as follows: where k = y(x), indicating that a small perturbation α in the input does not change the predicted label y(x). To achieve the robustness condition in Eq. 2, Lecuyer et al. introduce an algorithm, called PixelDP. By considering an input x (e.g., images) as databases in DP parlance, and individual features (e.g., pixels) as tuples, PixelDP shows that randomizing the scoring function f (x) to enforce DP on a small number of pixels in an image guarantees robustness of predictions against adversarial examples. To randomize f (x), random noise σ r is injected into either input x or an arbitrary hidden layer, ing in the following (r, δ r)-PixelDP condition: Lemma 1 (r, δ r)-PixelDP . Given a randomized scoring function f (x) satisfying (r, δ r)-PixelDP w.r.t. a l p -norm metric, we have: is the expected value of f k (x), r is a predefined budget, δ r is a broken probability. 
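For concreteness, the single-step and iterative l∞ attacks referenced above can be sketched as follows. This is a generic PyTorch implementation of FGSM and its iterated variant (step size µ/Tµ with projection back into the l∞ ball), not the attack code used in the experiments; the model and loss are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, mu):
    # Single-step l_inf attack: x_adv = x + mu * sign(grad_x L(f(x), y)).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + mu * grad.sign()).detach()

def iterative_fgsm(model, x, y, mu, steps=10):
    # Apply FGSM `steps` times with step size mu / steps, projecting back into
    # the l_inf ball of radius mu around the original input.
    x_adv = x.clone().detach()
    step = mu / steps
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, step)
        x_adv = torch.max(torch.min(x_adv, x + mu), x - mu)
    return x_adv
```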
At the prediction time, a certified robustness check is implemented for each prediction. A generalized robustness condition is proposed as follows: whereÊ lb andÊ ub are the lower and upper bounds of the expected valueÊf (x) = 1 n n f (x) n, derived from the Monte Carlo estimation with an η-confidence, given n is the number of invocations of f (x) with independent draws in the noise σ r. Passing the check for a given input guarantees that no perturbation up to l p-norm can change the model's prediction. PixelDP does not preserve DP in learning private parameters θ to protect the training data. That is different from our goal. Our new DPAL mechanism is presented in Alg. 1. Our network (Figure 1) can be represented as: f (x) = g(a(x, θ 1), θ 2 ), where a(x, θ 1) is a feature representation learning model with x as an input, and g will take the output of a(x, θ 1) and return the class scores f (x). At a high level, DPAL has three key components: DP a(x, θ 1), which is to preserve DP in learning the feature representation model a(x, θ 1); DP Adversarial Learning, which focuses on preserving DP in adversarial learning, given DP a(x, θ 1); and Provable Robustness and Verified Inferring, which are to compute robustness bounds given an input at the inference time. In particular, given a deep neural network f with model parameters θ (Lines 2-3), the network is trained over T training steps. In each step, a batch of m perturbed training examples and a batch of m DP adversarial examples derived from D are used to train our network Take a batch Bi ∈ B where i = t%(N/m), Assign Bt ← Bi 6: Ensemble DP Adversarial Examples: Take a batch Bi+1 ∈ B, Assign B adv t ← ∅ 9: for l ∈ A do 10: Take the next batch Ba ⊂ Bi+1 with the size m/|A| 11: ∀xj ∈ Ba: Craft x adv j by using attack algorithm (θ2) with the noise Output: (1 + 1/γx + 1/γ + 2)-DP parameters θ = {θ1, θ2}, robust model with an r budget 13: Verified Inferring: (an input x, attack size µa) 14: Compute robustness size (κ + ϕ)max in Eq. 15 of x 15: if (κ + ϕ)max ≥ µa then 16: Return isRobust(x) = T rue, label k, (κ + ϕ)max 17: else 18: Return isRobust(x) = F alse, label k, (κ + ϕ)max 3.1 DP FEATURE REPRESENTATION LEARNING Our idea is to use auto-encoder to simultaneously learn DP parameters θ 1 and ensure that the output of a(x, θ 1) is DP. The reasons we choose an auto-encoder are: It is easier to train, given its small size; and It can be reused for different predictive models. A typical data reconstruction function (cross-entropy), given a batch B t at the training step t of the input x i, is as follows: where the transformation of x i is h i = θ T 1 x i, the hidden layer h 1 of a(x, θ 1) given the batch B t is denoted as h 1Bt = {θ T 1 x i} xi∈Bt, and x i = θ 1 h i is the reconstruction of x i. To preserve 1 -DP in learning θ 1 where 1 is a privacy budget, we first derive the 1st-order polynomial approximation of R Bt (θ 1) by applying Taylor Expansion , denoted as R Bt (θ 1). Then, Functional Mechanism is employed to inject noise into coefficients of the approximated function where, parameters θ 1j derived from the function optimization need to be 1 -DP. To achieve that, Laplace noise ) is injected into coefficients 1 2 − x ij h i, where ∆ R is the sensitivity of R Bt (θ 1), as follows: To ensure that the computation of x i does not access the original data, we further inject Laplace noise ) into x i. 
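The Monte Carlo estimate Êf(x) and the η-confidence bounds Ê_lb, Ê_ub that enter the robustness check can be sketched as follows. The use of a Hoeffding-style interval with a union bound over the K classes is one simple way to obtain simultaneous bounds and is an assumption of this sketch; the randomized scorer, the number of draws n, and the confidence level are placeholders, and scores are assumed to lie in [0, 1].

```python
import math
import numpy as np

def estimate_expected_scores(randomized_scorer, x, n=300, eta=0.95):
    """Average n invocations of the randomized scoring function f(x) and
    return Hoeffding lower/upper bounds that hold with probability >= eta."""
    draws = np.stack([randomized_scorer(x) for _ in range(n)])  # shape (n, K)
    e_hat = draws.mean(axis=0)
    k = draws.shape[1]
    delta = (1.0 - eta) / k                    # union bound over the K classes
    width = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    e_lb = np.clip(e_hat - width, 0.0, 1.0)
    e_ub = np.clip(e_hat + width, 0.0, 1.0)
    return e_hat, e_lb, e_ub
```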
This can be done as a preprocessing step for all the benign examples in D to construct a set of disjoint batches B of perturbed benign examples (Lines 2 and 5). The perturbed function now becomes: where, and x i = θ 1 h i. Let us denote β as the number of neurons in h 1, and h i is bounded in [−1, 1], the global sensitivity ∆ R is as follows: Lemma 2 The global sensitivity of R over any two neighboring batches, B t and B t, is as follows: All the proofs are in our Appendix. By setting ∆ R = d(β + 2), we show that the output of a(·), which is the perturbed affine transformation h 1Bt = {θ and θ 1 1,1 is the maximum 1-norm of θ 1 's columns (Operator norm, 2018). This is important to tighten the privacy budget consumption in computing the remaining hidden layers g(a(x, θ 1), θ 2 ). In fact, without using additional information from the original data, the computation of g(a(x, θ 1), θ 2 ) is also (1 /γ)-DP (the post-processing property of DP). Similarly, we observe that the perturbation of a batch Note that we do not use the post-processing property of DP to estimate the DP guarantee of h 1Bt based upon the DP guarantee of B t, since 1 /γ < 1 /γ x in practice. As a , the (1 /γ)-DP h 1Bt provides a more rigorous DP protection to the computation of g(·) and to the output layer. Lemma 3 The computation of the affine transformation h 1Bt is (1 /γ)-DP and the computation of the batch B t as the input layer is (1 /γ x)-DP. The following Theorem shows that optimizing R Bt (θ 1) is (1 /γ x + 1)-DP in learning θ 1 given an (1 /γ x)-DP B t batch. The optimization of R Bt (θ 1) preserves (1 /γ x + 1)-DP in learning θ 1. To integrate adversarial learning, we first draft DP adversarial examples x adv j using perturbed benign examples x j, with an ensemble of attack algorithms A and a random perturbation budget µ t ∈, at each step t (Lines 6-11). This will significantly enhances the robustness of our models under different types of adversarial examples with an unknown adversarial attack size µ. with y(x j) is the class prediction of f (x j) to avoid label leaking of the benign examples x j during the adversarial example crafting. Given a set of DP adversarial examples B adv t, training the auto-encoder with B adv t preserves (1 /γ x + 1)-DP. The proof of Theorem 2 is in Appendix H, Result 4. It can be extended to iterative attacks as where y(x . Second, we propose a novel DP adversarial objective function L Bt (θ 2), in which the loss function L for benign examples is combined with an additional loss function Υ for DP adversarial examples, to optimize the parameters θ 2. The objective function L Bt (θ 2) is defined as follows: where ξ is a hyper-parameter. For the sake of clarity, in Eq. 10, we denote y i and y j as the true class labels y xi and y xj of examples x i and x j. Note that x adv j and x j share the same label y xj. Now we are ready to preserve DP in objective functions L f (x i, θ 2), y i and Υ f (x adv j, θ 2), y j in order to achieve DP in learning θ 2. Since the objective functions use the true class labels y i and y j, we need to protect the labels at the output layer. Let us first present our approach to preserve DP in the objective function L for benign examples. Given h πi computed from the x i through the network with W π is the parameter at the last hidden layer h π. Cross-entropy function is approximated as Based on the post-processing property of DP ,. 
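The crafting step of Eq. 9 can be illustrated with the FGSM routines sketched earlier: adversarial examples are derived only from the already-perturbed benign examples, the label used is the model's own prediction y(x̄_j) to avoid label leaking, and the perturbation size µ_t is drawn at random each training step. In the text, each attack in the ensemble A is applied to its own slice of the batch; the sketch below simply draws one attack per batch for brevity, and the µ range is illustrative.

```python
import random
import torch

def craft_dp_adversarial_batch(model, x_bar, attacks, mu_range=(0.05, 0.5)):
    """x_bar: a batch of DP-perturbed benign examples (never the raw inputs).
    attacks: list of callables attack(model, x, y, mu) -> x_adv, e.g. the
    fgsm / iterative_fgsm sketches above. Returns one DP adversarial batch."""
    with torch.no_grad():
        y_pred = model(x_bar).argmax(dim=1)   # y(x_bar): predicted labels only
    mu_t = random.uniform(*mu_range)          # random perturbation size this step
    attack = random.choice(attacks)           # one attack drawn from the ensemble
    return attack(model, x_bar, y_pred, mu_t)
```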
As a , the optimization of the function L 1Bt θ 2 does not disclose any information from the training data, and 1/γ, given neighboring batches B t and B t. Thus, we only need to preserve 2 -DP in the function L 2Bt (θ 2), which accesses the ground-truth label y ik. Given coefficients h πi y ik, the sensitivity ∆ L2 of L 2Bt (θ 2) is computed as: Lemma 4 Let B t and B t be neighboring batches of benign examples, we have the following inequality: ∆ L2 ≤ 2|h π |, where |h π | is the number of hidden neurons in h π. The sensitivity of our objective function is notably smaller than the state-of-the-art bound (a), which is crucial to improve our model utility. The perturbed functions are as follows: We apply the same technique to preserve As the perturbed functions L and Υ are always optimized given two disjoint batches B t and B adv t, the privacy budget used to preserve DP in the adversarial objective function L Bt (θ 2) is (1 /γ + 2), following the parallel composition property of DP . The total budget to learn private parameters We have shown that our mechanism achieves DP at the batch level B t ∪B adv t given a specific training step t. By constructing disjoint and fixed batches from the training data D, we leverage both parallel composition and post-processing properties of DP to extend the to (1 + 1 /γ x + 1 /γ + 2)-DP in learning θ = {θ 1, θ 2} on D across T training steps. There are three key properties in our approach: It only reads perturbed inputs B t and perturbed coefficients h 1, which are DP across T training steps; Given N/m disjoint batches in each epoch, for any example x, x is included in one and only one batch, denoted B x ∈ B. As a , the DP guarantee to x in D is equivalent to the DP guarantee to x in B x; since the optimization using any other batches does not affect the DP guarantee of x; and All the batches are fixed across T training steps to prevent additional privacy leakage, caused by generating new and overlapping batches (which are considered overlapping datasets in the parlance of DP) in the typical training approach. Theorem 4 Algorithm 1 achieves (1 + 1 /γ x + 1 /γ + 2)-DP parameters θ = {θ 1, θ 2} on the private training data D across T training steps. Now, we establish the correlation between our mechanism and provable robustness. In the inference time, to derive the provable robustness condition against adversarial examples x+α, i.e., ∀α ∈ l p, PixelDP mechanism randomizes the scoring function f (x) by injecting robustness noise σ r into either input x or a hidden layer, i.e., x = x + Lap(, where ∆ x r and ∆ h r are the sensitivities of x and h, measuring how much x and h can be changed given the perturbation α ∈ l p in the input x. Monte Carlo estimation of the expected valuesÊf (x),Ê lb f k (x), and E ub f k (x) are used to derive the robustness condition in Eq. 4. On the other hand, in our mechanism, the privacy noise σ p includes Laplace noise injected into both input x, i.e., This helps us to avoid injecting the noise directly into the coefficients h πi y ik. The correlation between our DP preservation and provable robustness lies in the correlation between the privacy noise σ p and the robustness noise σ r. We can derive a robustness bound by projecting the privacy noise σ p on the scale of the robustness noise σ r. Given the input x, let κ =, in our mechanism we have that: By applying a group privacy size κ , the scoring function f (x) satisfies r -PixelDP given α ∈ l p (κ), or equivalently is κ r -PixelDP given α ∈ l p, δ r = 0. 
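Because the privacy accounting above relies on batches being disjoint and fixed across all epochs (each example appears in exactly one batch for the whole run, and training step t reuses batch B_i with i = t mod (N/m)), the batching step is worth spelling out. A minimal sketch, with dataset size, batch size, and seed as illustrative values:

```python
import numpy as np

def make_fixed_disjoint_batches(num_examples, batch_size, seed=0):
    """Partition example indices once, before training; the same partition is
    reused at every epoch so no example ever moves between batches."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(num_examples)
    return [order[i:i + batch_size] for i in range(0, num_examples, batch_size)]

batches = make_fixed_disjoint_batches(num_examples=60000, batch_size=2499)
# Training step t then reuses batches[t % len(batches)] for all T steps.
```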
By applying Lemma 1, we have ∀k, ∀α ∈ l p (κ): With that, we can achieve a robustness condition against l p (κ)-norm attacks, as follows: with the probability ≥ η x -confidence, derived from the Monte Carlo estimation ofÊf (x). Our mechanism also perturbs h (Eq. 7). Given ϕ = ). Therefore, the scoring function f (x) also satisfies r -PixelDP given the perturbation α ∈ l p (ϕ). In addition to the robustness to the l p (κ)-norm attacks, we achieve an additional robustness bound in Eq. 12 against l p (ϕ)-norm attacks. Similar to PixelDP, these robustness conditions can be achieved as randomization processes in the inference time. They can be considered as two independent and provable defensive mechanisms applied against two l p -norm attacks, i.e., l p (κ) and l p (ϕ). One challenging question here is: "What is the general robustness bound, given κ and ϕ?" Intuitively, our model is robust to attacks with α ∈ l p (κ + ϕ). We leverage the theory of sequential composition in DP to theoretically answer this question. Given S independent mechanisms M 1,..., M S, whose privacy guarantees are 1,..., S -DP with α ∈ l p. Each mechanism M s, which takes the input x and outputs the value of f (x) with the Laplace noise only injected to randomize the layer s (i.e., no randomization at any other layers), denoted as f s (x), is defined as: We aim to derive a generalized robustness of any composition scoring function f (M 1, . . ., M s |x) bounded in, defined as follows: Our setting follows the sequential composition in DP . Thus, we can prove that the expected value Ef (M 1, . . ., M S |x) is insensitive to small perturbations α ∈ l p in Lemma 5, and we derive our composition of robustness in Theorem 5, as follows: Lemma 5 Given S independent mechanisms M 1,..., M S, which are 1,..., S -DP w.r.t a l p -norm metric, then the expected output value of any sequential function f of them, i.e., f (M 1, . . ., M S |x) ∈, meets the following property: Theorem 5 (Composition of Robustness) Given S independent mechanisms M 1,..., M S. Given any sequential function f (M 1, . . ., M S |x), and letÊ lb andÊ ub are lower and upper bounds with an η-confidence, for the Monte Carlo estimation ofÊf then the predicted label k = arg max kÊ f k (M 1, . . ., M S |x), is robust to adversarial examples x + α, ∀α ∈ l p, with probability ≥ η, by satisfying:, which is the targeted robustness condition in Eq. 2. It is worth noting that there is no η s -confidence for each mechanism s, since we do not estimate the expected valueÊf s (x) independently. To apply the composition of robustness in our mechanism, the noise injections into the input x and its affine transformation h can be considered as two mechanisms M x and M h, sequentially applied as with independent draws in the noise χ 2, the noise χ 1 injected into x is fixed; and vice-versa. By applying group privacy with sizes κ and ϕ, the scoring functions f x (x) and f h (x), given M x and M h, are κ r -DP and ϕ r -DP given α ∈ l p. With Theorem 5, we have a generalized bound as follows: e., Eq. 14), then the predicted label k of our function f (M h, M x |x) is robust to perturbations α ∈ l p (κ + ϕ) with the probability ≥ η, by satisfying Our model is trained similarly to training typical deep neural networks. Parameters θ 1 and θ 2 are independently updated by applying gradient descent (Line 12). Regarding the inference time, we implement a verified inference procedure as a post-processing step (Lines 13-18). 
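The certification inequality of Theorem 5 and Proposition 1 did not survive extraction above. By analogy with the single-mechanism PixelDP check and the fact that the budgets of the S mechanisms compose additively, one plausible reading is sketched below; the exact constants and the choice of taking the predicted label as the argmax of the lower bounds are assumptions of this sketch, not a verbatim restatement.

```python
import math

def certified_by_composition(e_lb, e_ub, eps_list, delta_list):
    """PixelDP-style robustness check for a composition of S mechanisms.
    e_lb, e_ub: per-class lower/upper bounds on the expected scores;
    eps_list, delta_list: per-mechanism budgets. The form of the inequality
    is an assumption modeled on the single-mechanism check."""
    eps = sum(eps_list)
    delta = sum(delta_list)
    k = max(range(len(e_lb)), key=lambda i: e_lb[i])           # predicted label
    runner_up = max(e_ub[i] for i in range(len(e_ub)) if i != k)
    return e_lb[k] > math.exp(2 * eps) * runner_up + (1 + math.exp(eps)) * delta
```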
Our verified inference returns a robustness size guarantee for each example x, which is the maximal value of κ + ϕ, for which the robustness condition in Proposition 1 holds. Maximizing κ + ϕ is equivalent to maximizing the robustness epsilon r, which is the only parameter controlling the size of κ + ϕ; since, all the other hyper-parameters, i.e., ∆ R, m, 1, 2, θ 1, θ 2, ∆ x r, and ∆ h r are fixed given a well-trained model f (x): e., Eq. 14) The prediction on an example x is robust to attacks up to (κ + ϕ) max. The failure probability 1-η can be made arbitrarily small by increasing the number of invocations of f (x), with independent draws in the noise. Similar to , Hoeffding's inequality is applied to bound the approximation error inÊf k (x) and to search for the robustness bound (κ + ϕ) max. We use the following sensitivity bounds ∆ h r = β θ 1 ∞ where θ 1 ∞ is the maximum 1-norm of θ 1's rows, and ∆ x r = µd for l ∞ attacks. We also propose a new way to draw independent noise following the distribution of χ 1 + /ψ) for the transformation h, where χ 1 and χ 2 are the fixed noise used to train the network, and ψ is a parameter to control the distribution shifts between training and inferring. This new Monte Carlo Estimation of Ef (x) works better without affecting the DP bounds and the robustness (Appendix L). We have conducted an extensive experiment on the MNIST and CIFAR-10 datasets. We consider the class of l ∞ -bounded adversaries to see whether our mechanism could retain high model utility, while providing strong DP guarantees and protections against adversarial examples. Baseline Approaches. Our DPAL mechanism is evaluated in comparison with state-of-the-art mechanisms in: DP-preserving algorithms in deep learning, i.e., DP-SGD , AdLM (a); in Provable robustness, i.e., PixelDP ; and in DP-preserving algorithms with provable robustness, i.e., SecureSGD given heterogeneous noise , and SecureSGD-AGM given the Analytic Gaussian Mechanism (AGM) . To preserve DP, DP-SGD injects random noise into gradients of parameters, while AdLM is a Functional Mechanism-based approach. PixelDP is one of the state-ofthe-art mechanisms providing provable robustness using DP bounds. SecureSGD is a combination of PixelDP and DP-SGD with an advanced heterogeneous noise distribution; i.e., "more noise" is injected into "more vulnerable" latent features, to improve the robustness. The baseline models share the same design in our experiment. Four white-box attacks were used, including FGSM, I-FGSM, Momentum Iterative Method (MIM) , and MadryEtAl . is equivalent to an attack size 2µ a = 0.6 in our setting. The reason for using x ∈ [−1, 1] d is to achieve better model utility, while retaining the same global sensitivities to preserve DP, compared with x ∈ d. Our model configurations are in Appendix M and our approximation error bound analysis is presented in Appendix N. As in , we apply two accuracy metrics: |test| where |test| is the number of test cases, isCorrect(·) returns 1 if the model makes a correct prediction (else, returns 0), and isRobust(·) returns 1 if the robustness size is larger than a given attack size µ a (else, returns 0). Our task of validation focuses on shedding light into the interplay among model utility, privacy loss, and robustness bounds, by learning 1) the impact of the privacy budget t = (1 + 1 /γ x + 1 /γ + 2), and 2) the impact of attack sizes µ a. All statistical tests are 2-tail t-tests. All experimental Figures are in Appendix O. Results on the MNIST Dataset. 
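The two accuracy metrics are only partially legible above. Following the convention of Lecuyer et al., they can be read as test-set averages of isCorrect(·) and of isCorrect(·) combined with isRobust(·); the sketch below uses that reading, which should be treated as a reconstruction of the garbled formulas rather than their exact statement.

```python
def conventional_accuracy(predictions, labels):
    correct = [int(p == y) for p, y in zip(predictions, labels)]
    return sum(correct) / len(correct)

def certified_accuracy(predictions, labels, robustness_sizes, attack_size):
    # An example counts only if it is classified correctly AND its verified
    # robustness size (kappa + phi)_max is at least the attack size mu_a.
    hits = [int(p == y and r >= attack_size)
            for p, y, r in zip(predictions, labels, robustness_sizes)]
    return sum(hits) / len(hits)
```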
Figure 2 illustrates the conventional accuracy of each model as a function of the privacy budget t on the MNIST dataset under l ∞ (µ a)-norm attacks, with µ a = 0.2 (a pretty strong attack). It is clear that our DPAL outperforms AdLM, DP-SGD, SecureSGD, and SecureSGD-AGM, in all cases, with p < 1.32e − 4. On average, we register a 22.36% improvement over SecureSGD (p < 1.32e − 4), a 46.84% improvement over SecureSGD-AGM (p < 1.83e − 6), a 56.21% improvement over AdLM (p < 2.05e − 10), and a 77.26% improvement over DP-SGD (p < 5.20e − 14), given our DPAL mechanism. AdLM and DP-SGD achieve the worst conventional accuracies. There is no guarantee provided in AdLM and DP-SGD. Thus, the accuracy of the AdLM and DPSGD algorithms seem to show no effect against adversarial examples, when the privacy budget is varied. This is in contrast to our DPAL model, the SecureSGD model, and the SecureSGD-AGM model, whose accuracies are proportional to the privacy budget. When the privacy budget t = 0.2 (a tight DP protection), there are significant drops, in terms of conventional accuracy, given the baseline approaches. By contrast, our DPAL mechanism only shows a small degradation in the conventional accuracy (6.89%, from 89.59% to 82.7%), compared with a 37% drop in SecureSGD (from 78.64% to 41.64%), and a 32.89% drop in SecureSGD-AGM (from 44.1% to 11.2%) on average, when the privacy budget t goes from 2.0 to 0.2. At t = 0.2, our DPAL mechanism achieves 82.7%, compared with 11.2% and 41.64% correspondingly for SecureSGD-AGM and SecureSGD. This is an important , showing the ability to offer tight DP protections under adversarial example attacks in our model, compared with existing algorithms. • Figure 4 presents the conventional accuracy of each model as a function of the attack size µ a on the MNIST dataset, under a strong DP guarantee, t = 0.2. It is clear that our DPAL mechanism outperforms the baseline approaches in all cases. On average, our DPAL model improves 44.91% over SecureSGD (p < 7.43e − 31), a 61.13% over SecureSGD-AGM (p < 2.56e − 22), a 52.21% over AdLM (p < 2.81e − 23), and a 62.20% over DP-SGD (p < 2.57e − 22). More importantly, our DPAL model is resistant to different adversarial example algorithms with different attack sizes. When µ a ≥ 0.2, AdLM, DP-SGD, SecureSGD, and SecureSGD-AGM become defenseless. We further register significantly drops in terms of accuracy, when µ a is increased from 0.05 (a weak attack) to 0.6 (a strong attack), i.e., 19.87% on average given our DPAL, across all attacks, compared with 27.76% (AdLM), 29.79% (DP-SGD), 34.14% (SecureSGD-AGM), and 17.07% (SecureSGD). • Figure 6 demonstrates the certified accuracy as a function of µ a. The privacy budget is set to 1.0, offering a reasonable privacy protection. In PixelDP, the construction attack bound r is set to 0.1, which is a pretty reasonable defense. With (small perturbation) µ a ≤ 0.2, PixelDP achieves better certified accuracies under all attacks; since PixelDP does not preserve DP to protect the training data, compared with other models. Meanwhile, our DPAL model outperforms all the other models when µ a ≥ 0.3, indicating a stronger defense to more aggressive attacks. More importantly, our DPAL has a consistent certified accuracy to different attacks given different attack sizes, compared with baseline approaches. 
In fact, when µ a is increased from 0.05 to 0.6, our DPAL shows a small drop (11.88% on average, from 84.29%(µ a = 0.05) to 72.41%(µ a = 0.6)), compared with a huge drop of the PixelDP, i.e., from 94.19%(µ a = 0.05) to 9.08%(µ a = 0.6) on average under I-FGSM, MIM, and MadryEtAl attacks, and to 77.47%(µ a = 0.6) under FGSM attack. Similarly, we also register significant drops in terms of certified accuracy for SecureSGD (78.74%, from 86.74% to 7.99%) and SecureSGD-AGM (81.97%, from 87.23% to 5.26%) on average. This is promising. Our key observations are as follows. Incorporating ensemble adversarial learning into DP preservation, with tightened sensitivity bounds and a random perturbation size µ t ∈ at each training step, does enhance the consistency, robustness, and accuracy of our model against different attack algorithms with different levels of perturbations. Our DPAL model outperforms baseline algorithms, including both DP-preserving and non-private approaches, in terms of conventional accuracy and certified accuracy in most of the cases. It is clear that existing DP-preserving approaches have not been designed to withstand against adversarial examples. Results on the CIFAR-10 Dataset further strengthen our observations. In Figure 3, our DPAL clearly outperforms baseline models in all cases (p < 6.17e−9), especially when the privacy budget is small (t < 4), yielding strong privacy protections. On average conventional accuracy, our DPAL mechanism has an improvement of 10.42% over SecureSGD (p < 2.59e − 7), an improvement of 14.08% over SecureSGD-AGM (p < 5.03e − 9), an improvement of 29.22% over AdLM (p < 5.28e − 26), and a 14.62% improvement over DP-SGD (p < 4.31e − 9). When the privacy budget is increased from 2 to 10, the conventional accuracy of our DPAL model increases from 42.02% to 46.76%, showing a 4.74% improvement on average. However, the conventional accuracy of our model under adversarial example attacks is still low, i.e., 44.22% on average given the privacy budget at 2.0. This opens a long-term research avenue to achieve better robustness under strong privacy guarantees in adversarial learning. • The accuracy of our model is consistent given different attacks with different adversarial perturbations µ a under a rigorous DP protection (t = 2.0), compared with baseline approaches (Figure 5). In fact, when the attack size µ a increases from 0.05 to 0.5, the conventional accuracies of the baseline approaches are remarkably reduced, i.e., a drop of 25.26% on average given the most effective baseline approach, SecureSGD. Meanwhile, there is a much smaller degradation (4.79% on average) in terms of the conventional accuracy observed in our DPAL model. Our model also achieves better accuracies compared with baseline approaches in all cases (p < 8.2e − 10). Figure 7 further shows that our DPAL model is more accurate than baseline approaches (i.e., r is set to 0.1 in PixelDP) in terms of certified accuracy in all cases, with a tight privacy budget set to 2.0 (p < 2.04e − 18). We register an improvement of 21.01% in our DPAL model given the certified accuracy over SecureSGD model, which is the most effective baseline approach (p < 2.04e − 18). In this paper, we established a connection among DP preservation to protect the training data, adversarial learning, and provable robustness. A sequential composition robustness theory was introduced to generalize robustness given any sequential and bounded function of independent defensive mechanisms. 
An original DP-preserving mechanism was designed to address the trade-off among model utility, privacy loss, and robustness by tightening the global sensitivity bounds. A new Monte Carlo Estimation was proposed to improve and stabilize the estimation of the robustness bounds; thus improving the certified accuracy under adversarial example attacks. However, there are several limitations. First, the accuracy of our model under adversarial example attacks is still very low. Second, the mechanism scalability is dependent on the model structures. Third, further study is needed to address the threats from adversarial examples crafted by unseen attack algorithms. Fourth, in this study, our goal is to illustrate the difficulties in providing DP protections to the training data in adversarial learning with robustness bounds. The problem is more challenging when working with complex and large networks, such as ResNet , VGG16 , LSTM , and GAN (a). Fifth, there can be alternative approaches to draft and to use DP adversarial examples. Addressing these limitations needs significant efforts from both research and practice communities. A NOTATIONS AND TERMINOLOGIES Function/model f that maps inputs x to a vector of scores f (x) = {f1(x),..., fK (x)} yx ∈ y A single true class label of example x y(x) = max k∈K f k (x) Predicted label for the example x given the function f x adv = x + α Adversarial example where α is the perturbation lp(µ) = {α ∈ R d : α p ≤ µ} The lp-norm ball of attack radius µ (r, δr) Robustness budget r and broken probability δr The expected value of f k (x) E lb andÊ ub Lower and upper bounds of the expected valueÊf (x) = Feature representation learning model with x and parameters θ1 Bt A batch of benign examples xi Data reconstruction function given Bt in a(x, θ1) The values of all hidden neurons in the hidden layer h1 of a(x, θ1) given the batch Bt RB t (θ1) and R B t (θ1) Approximated and perturbed functions of RB t (θ1) xi and xi Perturbed and reconstructed inputs xi Sensitivity of the approximated function RB t (θ1) h1B Sensitivities of x and h, given the perturbation α ∈ lp Privacy budget to protect the training data D (κ + ϕ)max Robustness size guarantee given an input x at the inference time B PSEUDO-CODE OF ADVERSARIAL TRAINING (KURAKIN ET AL., 2016B) Given a loss function: where m 1 and m 2 correspondingly are the numbers of examples in B t and B adv t at each training step. Proof 1 Assume that B t and B t differ in the last tuple, x m (x m). Then, Proof 2 Regarding the computation of h 1Bt = {θ The sensitivity of a function h is defined as the maximum change in output, that can be generated by a change in the input . Therefore, the global sensitivity of h 1 can be computed as follows: following matrix norms (Operator norm, 2018): θ T 1 1,1 is the maximum 1-norm of θ 1's columns. By injecting Laplace noise Lap(, and χ 2 drawn as a Laplace noise [Lap( β, in our mechanism, the perturbed affine transformation h 1Bt is presented as: This in an ( 1 /γ)-DP affine transformation h 1Bt = {θ Similarly, the perturbed inputs where ∆ x is the sensitivity measuring the maximum change in the input layer that can be generated by a change in the batch B t and γ x = ∆ R m∆x . Following , ∆ x can be computed as follows: Consequently, Lemma 3 does hold. 
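The operator-norm quantity used in Proof 2, the maximum 1-norm over the columns of θ1 (i.e. ||θ1^T||_{1,1}), and the corresponding Laplace perturbation of the affine transformation can be sketched as follows. The exact noise scale in the text (a function of Δ_R, the batch size m, and ε_1) is not fully legible in this rendering, so it is left as a free parameter here rather than filled in.

```python
import numpy as np

def max_column_one_norm(theta1):
    # ||theta1^T||_{1,1}: the largest 1-norm among the columns of theta1.
    return np.abs(theta1).sum(axis=0).max()

def perturbed_affine(theta1, x_bar, noise_scale, rng=np.random.default_rng(0)):
    """Compute h_bar = theta1^T x_bar + Laplace noise for a batch of perturbed
    inputs x_bar (shape m x d, with theta1 of shape d x beta). `noise_scale`
    stands in for the paper's scale, which depends on Delta_R, m, epsilon_1."""
    h = x_bar @ theta1                       # batch of affine transformations
    return h + rng.laplace(0.0, noise_scale, size=h.shape)
```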
Proof 3 Given χ 1 drawn as a Laplace noise [Lap( d and χ 2 drawn as a Laplace noise β, the perturbation of the coefficient φ ∈ Φ = { 1 2 h i, x i}, denoted as φ, can be rewritten as follows:, we have that: Consequently, the computation of R Bt (θ 1) preserves 1 -DP in Alg. 1. In addition, the parameter optimization of R Bt (θ 1) only uses the perturbed data B t, which is (1 /γ x)-DP (Lemma 3), in the computations of h i, h i, x i, parameter gradients, and gradient descents at each step. These operations do not access the original dataset B t; therefore, they do not incur any additional information from the original data (the post-processing property in). As a , the total privacy budget to learn the perturbed optimal parameters θ 1 in Alg. 1 is (1 /γ x + 1)-DP. Proof 4 Assume that B t and B t differ in the last tuple, and x m (x m) be the last tuple in B t (B t), we have that Since y mk and y mk are one-hot encoding, we have that Proof 5 Let B t and B t be neighboring batches of benign examples, and χ 3 drawn as Laplace noise [Lap( |hπ|, the perturbations of the coefficients h πi y ik can be rewritten as: Since all the coefficients are perturbed, and given ∆ L2 = 2|h π |, we have that The computation of L 2Bt θ 2 preserves ( 1 /γ + 2)-differential privacy. The optimization of L 2Bt θ 2 does not access additional information from the original input x i ∈ B t. Consequently, the optimal perturbed parameters θ 2 derived from L 2Bt θ 2 are (1 /γ + 2)-DP. Proof 6 First, we optimize for a single draw of noise during training (Line 3) and all the batches of perturbed benign examples are disjoint and fixed across epochs. As a , the computation of x i is equivalent to a data preprocessing step with DP, which does not incur any additional privacy budget consumption over T training steps (the post-processing property of DP) (Result 1). That is different from repeatedly applying a DP mechanism on either the same or overlapping datasets causing the accumulation of the privacy budget. Now, we show that our algorithm achieves DP at the dataset level D. Let us consider the computation of the first hidden layer, given any two neighboring datasets D and D differing at most one tuple x e ∈ D and x e ∈ D., we have that By having disjoint and fixed batches, we have that: From Eqs. 19, 20, and Lemma 3, we have that Eqs. 20 and 21 As a , the computation of h 1D is (1 /γ)-DP given the data D, since the Eq. 22 does hold for any tuple x e ∈ D. That is consistent with the parallel composition property of DP, in which batches can be considered disjoint datasets given h 1B as a DP mechanism . This does hold across epochs, since batches B are disjoint and fixed among epochs. At each training step t ∈ [1, T], the computation of h 1Bt does not access the original data. It only reads the perturbed batch of inputs B t, which is (1 /γ x)-DP (Lemma 3). Following the post-processing property in DP , the computation of h 1Bt does not incur any additional information from the original data across T training steps. Similarly, we show that the optimization of the function R Bt (θ 1) is (1 /γ x + 1)-DP across T training steps. As in Theorem 1 and Proof 3, we have that, where B ∈ B. Given any two perturbed neighboring datasets D and D differing at most one tuple x e ∈ D and x e ∈ D: From Eqs. 20, 23, and Theorem 1, we have that Eqs. 23 and 24 As a , the optimization of R D (θ 1) is (1 /γ x + 1)-DP given the data D (which is 1 /γ x -DP (Lemma 3)), since the Eq. 25 does hold for any tuple x e ∈ D. 
This is consistent with the parallel composition property in DP , in which batches can be considered disjoint datasets and the optimization of the function on one batch does not affect the privacy guarantee in any other batch. In addition, ∀t ∈ [1, T], the optimization of R Bt (θ 1) does not use any additional information from the original data D. Consequently, the privacy budget is (1 /γ x + 1) across T training steps, following the post-processing property in DP Similarly, we can also prove that optimizing the data reconstruction function R B adv t (θ 1) given the DP adversarial examples crafted in Eqs. 8 and 9, i.e., x adv j, is also (1 /γ x + 1)-DP given t ∈ [1, T] on the training data D. First, DP adversarial examples x adv j are crafted from perturbed benign examples x j. As a , the computation of the batch B adv t of DP adversarial examples is 1) (1 /γ x)-DP (the post-processing property of DP ), and 2) does not access the original data ∀t ∈ [1, T]. In addition, the computation of h 1B adv t and the optimization of R B adv t (θ 1) correspondingly are 1 /γ-DP and 1 -DP. In fact, the data reconstruction function R B adv t is presented as follows: where h, and x adv j = θ 1 h adv j. The right summation component in Eq. 26 does not disclose any additional information, since the sign(·) function is computed from perturbed benign examples (the post-processing property in DP ). Meanwhile, the left summation component has the same form with R Bt (θ 1) in Eq. 7. Therefore, we can employ the Proof 3 in Theorem 1, by replacing the coefficients Φ = {In addition to the Result 4, by applying the same analysis in Result 3, we can further show that the optimization of R D adv (θ 1) is (1 /γ x + 1)-DP given the DP adversarial examples D adv crafted using the data D across T training steps, since batches used to created DP adversarial examples are disjoint and fixed across epochs. It is also straightforward to conduct the same analysis in Result 2, in order to prove that the computation of the first affine transformation h 1B given the batch of DP adversarial examples B Regarding the output layer, the Algorithm 1 preserves (1 /γ + 2)-DP in optimizing the adversarial objective function L Bt∪B adv t (θ 2) (Theorem 3). We apply the same technique to preserve (1 /γ + 2)-DP across T training steps given disjoint and fixed batches derived from the private training data D. In addition, as our objective functions R and L are always optimized given two disjoint batches B t and B adv t, the privacy budget used to preserve DP in these functions is (1 + 1 /γ + 2), following the parallel composition property in DP With the Results 1-6, all the computations and optimizations in the Algorithm 1 are DP following the post-processing property in DP , by working on perturbed inputs and perturbed coefficients. The crafting and utilizing processes of DP adversarial examples based on the perturbed benign examples do not disclose any additional information. The optimization of our DP adversarial objective function at the output layer is DP to protect the ground-truth labels. More importantly, the DP guarantee in learning given the whole dataset level D is equivalent to the DP guarantee in learning on disjoint and fixed batches across epochs. Consequently, Algorithm 1 preserves (1 + 1 /γ x + 1 /γ + 2)-DP in learning private parameters θ = {θ 1, θ 2} given the training data D across T training steps. Note that the 1 /γ x is counted for the perturbation on the benign examples. Theorem 4 does hold. 
Proof 7 Thanks to the sequential composition theory in DP , As a , we have The sequential composition of the expected output is as: Lemma 5 does hold. Proof 8 ∀α ∈ l p, from Lemma 5, with probability ≥ η, we have that In addition, we also have Using the hypothesis (Eq. 14) and the first inequality (Eq. 27), we have that Now, we apply the third inequality (Eq. 28), we have that The Theorem 5 does hold. Proof 9 ∀α ∈ l p, by applying Theorem 5, we havê Furthermore, by applying group privacy, we have that By applying Proof 8, it is straight to have with probability ≥ η. Proposition 1 does hold. Recall that the Monte Carlo estimation is applied to estimate the expected valueÊf (x) = 1 n n f (x) n, where n is the number of invocations of f (x) with independent draws in the noise, i.e., ) in our case. When 1 is small (indicating a strong privacy protection), it causes a notably large distribution shift between training and inference, given independent draws of the Laplace noise. In fact, let us denote a single draw in the noise as ) used to train the function f (x), the model converges to the point that the noise χ 1 and 2χ 2 need to be correspondingly added into x and h in order to make correct predictions. χ 1 can be approximated as Lap(χ 1,), where → 0. It is clear that independent draws of the noise Lap(χ 1,). These distribution shifts can also be large, when noise is large. We have experienced that these distribution shifts in having independent draws of noise to estimatê Ef (x) can notably degrade the inference accuracy of the scoring function, when privacy budget 1 is small ing in a large amount of noise injected to provide strong privacy guarantees. To address this, one solution is to increase the number of invocations of f (x), i.e., n, to a huge number per prediction. However, this is impractical in real-world scenarios. We propose a novel way to draw independent noise following the distribution of χ 1 + /ψ) for the affine transformation h, where ψ is a hyper-parameter to control the distribution shifts. This approach works well and does not affect the DP bounds and the provable robustness condition, since: Our mechanism achieves both DP and provable robustness in the training process; and It is clear thatÊf is the n-th draw of the noise. When n → ∞,Êf (x) will converge to 1 n n g a(x + χ 1, θ 1) + 2χ 2, θ 2, which aligns well with the convergence point of the scoring function f (x). Injecting χ 1 and 2χ 2 to x and h during the estimation ofÊf (x) yields better performance, without affecting the DP and the robustness bounds. The MNIST database consists of handwritten digits . Each example is a 28 × 28 size gray-level image. The CIFAR-10 dataset consists of color images belonging to 10 classes, i.e., airplanes, dogs, etc. The dataset is split into 50,000 training samples and 10,000 test samples . The experiments were conducted on a single GPU, i.e., NVIDIA GTX TITAN X, 12 GB with 3,072 CUDA cores. All the models share the same structure, consisting of 2 and 3 convolutional layers, respectively for MNIST and CIFAR-10 datasets. Both fully-connected and convolution layers can be applied in the representation learning model a(x, θ 1). Given convolution layer, the computation of each feature map needs to be DP; since each of them independently reads a local region of input neurons. Therefore, the sensitivity ∆ R can be considered the maximal sensitivity given any single feature map in the first affine transformation layer. 
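The modified Monte Carlo estimation described above keeps the fixed training noise (χ1 for the input, 2χ2 for the affine transformation) and adds a fresh, smaller Laplace draw whose scale is shrunk by the hyper-parameter ψ. A minimal sketch, with the base scales left as parameters since the exact expressions are partly garbled in this rendering:

```python
import numpy as np

rng = np.random.default_rng(0)

def inference_input_noise(chi1, base_scale, psi):
    """Noise added to the input at inference: the fixed training draw chi1
    plus a fresh Laplace draw with its scale divided by psi."""
    return chi1 + rng.laplace(0.0, base_scale / psi, size=chi1.shape)

def inference_hidden_noise(chi2, base_scale, psi):
    # Analogous draw for the affine transformation h; the text uses 2*chi2
    # as the fixed component there.
    return 2.0 * chi2 + rng.laplace(0.0, base_scale / psi, size=chi2.shape)
```

As ψ grows, the fresh component vanishes and the inference-time noise converges to the fixed draws seen during training, which is the stated motivation for this estimator.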
In addition, each hidden neuron can only be used to reconstruct a unit patch of input units. That in d (Lemma 2) being the size of the unit patch connected to each hidden neuron, e.g., d = 9 given a 3 × 3 unit patch, and β is the number of hidden neurons in a feature map. MNIST: We used two convolutional layers (32 and 64 features). Each hidden neuron connects with a 5x5 unit patch. A fully-connected layer has 256 units. The batch size m was set to 2,499, ξ = 1, ψ = 2. I-FGSM, MIM, and MadryEtAl were used to draft l ∞ (µ) adversarial examples in training, with T µ = 10. Learning rate t was set to 1e − 4. Given a predefined total privacy budget t, 2 is set to be 0.1, and 1 is computed as: 1 = t− 2 (1+1/γ+1/γx). This will guarantee that (1 + 1 /γ x + 1 /γ + 2) = t. ∆ R = (14 2 + 2) × 25 and ∆ L2 = 2 × 256. We used three convolutional layers (128, 128, and 256 features). Each hidden neuron connects with a 4x4 unit patch in the first layer, and a 5x5 unit patch in other layers. One fullyconnected layer has 256 neurons. The batch size m was set to 1,851, ξ = 1.5, ψ = 10, and T µ = 3. The ensemble of attacks A includes I-FGSM, MIM, and MadryEtAl. We use data augmentation, including random crop, random flip, and random contrast. Learning rate t was set to 5e − 2. In the CIFAR-10 dataset, 2 is set to (1 + r/3.0) and 1 = (1 + 2r/3.0)/(1 + 1/γ + 1/γ x), where r ≥ 0 is a ratio to control the total privacy budget t in our experiment. For instance, given r = 0, we have Computational Efficiency and Scalability. In terms of computation efficiency, our mechanism does not consume any extra computational resources to train the model, compared with existing DP-preserving algorithms in deep learning (; 2017b; a). The model invocations to approximate the robustness bounds can further be efficiently performed in a parallel process. Regarding the scalability, with remarkably tightened global sensitivities, the impact of the size of deep neural networks in terms of the number of hidden layers and hidden neurons is significantly remedied, since 1) ∆ R and ∆ L2 are small, 2) we do not need to inject any noise into the computation of the network g(·), and 3) we do not redraw the noise in each training step t. In addition, our mechanism is not restricted to the type of activation functions. That is similar to . As a , our mechanism has a great potential to be applied in larger deep neural networks using larger datasets. Extensively investigating this property requires further study from both research and practice communities. To compute how much error our polynomial approximation approaches (i.e., truncated Taylor expansions), R Bt (θ 1) (Eq. 6) and L Bt θ 2, incur, we directly apply Lemma 4 in , Lemma 3 in , and the well-known error bound in . Note that R Bt (θ 1) is the 1st-order Taylor series and L Bt θ 2 is the 2nd-order Taylor series. Let us closely follow (; ;) to adapt their into our scenario, as follows: Given the truncated function R Bt (θ 1) = xi∈Bt θ 1j h i r, the average error of the approximation is bounded as where θ 1 = arg min θ1 R Bt (θ 1), θ 1 = arg min θ1 R Bt (θ 1), L Bt (θ 2) is the original Taylor polynomial function of xi∈Bt L f (x i, θ 2), y i, θ 2 = arg min θ2 L Bt (θ 2), θ 2 = arg min θ2 L Bt (θ 2). Proof 10 Let U = max θ1 R Bt (θ 1) − R Bt (θ 1) and S = min θ1 R Bt (θ 1) − R Bt (θ 1). We have that U ≥ R Bt (θ 1) − R Bt (θ 1) and ∀θ * 1: S ≤ R Bt (θ * 1) − R Bt (θ * 1). 
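The MNIST budget allocation stated above (ε2 fixed at 0.1, ε1 derived from the total budget so that ε1(1 + 1/γ + 1/γx) + ε2 = εt) can be reproduced directly. Because some ε symbols were dropped in extraction, the formula below is a reading of that statement; γ and γx depend on Δ_R, the batch size, and the sensitivities of x and h, and are passed in here as illustrative precomputed values.

```python
def allocate_budget(eps_total, gamma, gamma_x, eps2=0.1):
    """Split a total budget eps_total into (eps1, eps2) so that
    eps1 * (1 + 1/gamma + 1/gamma_x) + eps2 == eps_total."""
    eps1 = (eps_total - eps2) / (1.0 + 1.0 / gamma + 1.0 / gamma_x)
    return eps1, eps2

# Example with illustrative gamma values.
eps1, eps2 = allocate_budget(eps_total=1.0, gamma=3.0, gamma_x=5.0)
assert abs(eps1 * (1 + 1 / 3.0 + 1 / 5.0) + eps2 - 1.0) < 1e-9
```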
Therefore, we have In addition, R Bt (θ 1) − R Bt (θ * 1) ≤ 0, it is straightforward to have: If U ≥ 0 and S ≤ 0 then we have: Eq. 34 holds for every θ * 1, including θ 1. Eq. 34 shows that the error incurred by truncating the Taylor series approximate function depends on the maximum and minimum values of R Bt (θ 1) − R Bt (θ 1). This is consistent with . To quantify the magnitude of the error, we rewrite R Bt (θ 1) − R Bt (θ 1) as: where g 1j (x i, θ 1j) = θ 1j h i and g 2j (x i, θ 1j) = θ 1j h i. By looking into the remainder of Taylor expansion for each j (i.e., following ), with z j ∈ [z lj − 1, z lj + 1], 1 |Bt| R Bt (θ 1j) − R Bt (θ 1j) must be in the interval. This can be applied to the case of our autoencoder, as follows: For the functions F 1j (z j) = x ij log(1 + e −zj) and F 2j (z j) = (1 − x ij) log(1 + e zj), we have F Consequently, Eq. 29 does hold. Similarly, by looking into the remainder of Taylor expansion for each label k, Eq. 30 can be proved straightforwardly. In fact, by using the 2nd-order Taylor series with K categories, we have that: | Preserving Differential Privacy in Adversarial Learning with Provable Robustness to Adversarial Examples | 1,069 | scitldr |
In high-dimensional reinforcement learning settings with sparse rewards, performing effective exploration to even obtain any reward signal is an open challenge. While model-based approaches hold promise of better exploration via planning, it is extremely difficult to learn a reliable enough Markov Decision Process (MDP) in high dimensions (e.g., over 10^100 states). In this paper, we propose learning an abstract MDP over a much smaller number of states (e.g., 10^5), which we can plan over for effective exploration. We assume we have an abstraction function that maps concrete states (e.g., raw pixels) to abstract states (e.g., agent position, ignoring other objects). In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP). Concurrently, we learn a worker policy to travel between abstract states; the worker deals with the messiness of concrete states and presents a clean abstraction to the manager. On three of the hardest games from the Arcade Learning Environment (Montezuma's, Pitfall!, and Private Eye), our approach outperforms the previous state-of-the-art by over a factor of 2 in each game. In Pitfall!, our approach is the first to achieve superhuman performance without demonstrations. Exploration is a key bottleneck in high-dimensional, sparse-reward reinforcement learning tasks. Random exploration (e.g., via epsilon-greedy) suffices when rewards are abundant BID24, but when rewards are sparse, it can be difficult for an agent starting out to even find any positive reward needed to bootstrap learning. For example, the infamously difficult game MON-TEZUMA'S REVENGE from the Arcade Learning Environment (ALE) BID6 contains over 10 100 states and requires the agent to go thousands of timesteps without receiving reward. Performing effective exploration in this setting is thus an open problem; without demonstrations, even state-of-the-art intrinsically-motivated RL agents BID5 BID44 achieve only about one-tenth the score of an expert human.In this paper, we investigate model-based reinforcement learning BID17 as a potential solution to the exploration problem. The hope is that with a model of the state transitions and rewards, one can perform planning under the model to obtain a more informed exploration strategy. However, as the model being learned is imperfect, errors in the model compound BID42 BID43 when planning over many time steps. Furthermore, even if a perfect model were known, in high-dimensional state spaces (e.g. over 10 100 states), planning-computing the optimal policy (e.g. via value iteration)-is intractable. As a , model-based RL has had limited success in high-dimensional settings BID42. To address this, some prior work has focused on learning more accurate models by using more expressive function approximators BID25, and learning local models BID21 BID50. Others have attempted to robustly use imperfect models by conditioning on, instead of directly following, model-based rollouts BID49, frequently replanning, and combining model-based with model-free approaches BID0 BID40. However, none of these techniques offer a fundamental solution. Instead of directly learning a model over the concrete state space, we propose an approach inspired by hierarchical reinforcement learning (HRL) BID41 BID46 Figure 1: (a) Illustration of the abstract MDP on MONTEZUMA'S REVENGE. We have superimposed a white grid on top of the original game. 
At any given time, the agent is in one of the grid cellseach grid cell is an abstract state. In this example, the agent starts at the top of a ladder (yellow dot). The worker then navigates transitions between abstract states (green arrows) to follow a plan made by the manager (red dots). (b) Circles represent abstract states. Shaded circles represent states within the known set. The manager navigates the agent to the fringe of the known set (s 3), then randomly explores with π d to discover new transitions near s 3 (dotted box). (c) The worker extends the abstract MDP by learning to navigate to the newly discovered abstract states (dotted arrows). 2017), and learn a model over a much smaller abstract state space. Specifically, we assume we have a state abstraction function BID22 BID36 BID9, which maps a highdimensional concrete state (e.g. all pixels on the screen) to a low-dimensional abstract state (e.g. the position of the agent). We then aim to learn an (abstract) Markov Decision Process (MDP) over this abstract state space as follows: A manager maintains an (abstract) MDP over a subset of all possible abstract states which we call the known set, which is grown over time. The crucial property we enforce is that this abstract MDP is highly accurate and near deterministic on the known set, so we can perform planning without suffering from compounding errors, and do it efficiently since we are working with a much smaller number of abstract states. Concurrently, we learn a worker policy that the manager uses to transition between abstract states. The worker policy has access to the concrete states; its goal is to hide the messy details of the real world from the manager (e.g., jumping over monsters) so that the manager has a much simpler planning problem (e.g., traversing between two locations). In our implementation, the worker keeps an inventory of skills (i.e., options BID41), each of which is driven by a deep neural network; the worker assigns an appropriate skill for each transition between abstract states. In this way, the worker does not "forget" BID20, and we ensure monotonic progress in learning the abstract MDP. This abstract MDP, which enables us to efficiently explore via planning, is a key difference between our work and previous HRL work (e.g., BID4 BID46), which also learn skills and operate on latent abstract state spaces but without forming an MDP.We evaluate our approach on three of the most challenging games from the ALE BID6: MONTEZUMA'S REVENGE, PITFALL!, and PRIVATE EYE. In all three domains, our approach achieves more than 2x the reward of prior non-demonstration state-of-the-art approaches. In PITFALL!, we are the first to achieve superhuman performance without demonstrations, surpassing the prior state-of-the-art by over 100x. Additionally, since our approach is model-based, we can generalize to new rewards without re-training, as long as the reward function is a function of the abstract states. When evaluated on a new reward function never seen during training, our approach achieves over 3x the reward of prior state-of-the-art methods explicitly trained on the new rewards. We assume the world is an unknown episodic finite-horizon MDP with (concrete) states x ∈ X and actions a ∈ A. We further assume we have a simple predefined state abstraction function mapping concrete states x to abstract states s = φ(x). 
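The worker's inventory of skills, one sub-policy per learned abstract transition, with each skill frozen once its transition is traversed reliably so that later learning cannot undo it, can be sketched as a small bookkeeping structure. The skill factory and the representation of transitions are placeholders, not the paper's implementation.

```python
class SkillInventory:
    """Worker-side bookkeeping: one skill (neural sub-policy) per abstract
    transition, frozen once reliable so progress is monotonic."""

    def __init__(self, make_skill):
        self.make_skill = make_skill      # factory returning a fresh sub-policy
        self.skills = {}                  # (s, s') -> skill
        self.frozen = set()               # transitions whose skills are fixed

    def skill_for(self, transition):
        if transition not in self.skills:
            self.skills[transition] = self.make_skill()
        return self.skills[transition]

    def freeze(self, transition):
        # Called when the manager marks go(s, s') as reliable.
        self.frozen.add(transition)

    def trainable(self, transition):
        return transition not in self.frozen
```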
In MONTEZUMA'S REVENGE, for instance, a concrete state contains the pixels on the screen, while the corresponding abstract state contains the agent's position and inventory (Figure 1). We assume that reward only depends on the abstract states: taking any action transitioning from concrete state x to x leads to reward R(φ(x), φ(x)).Model-based approaches promise better exploration via planning, but struggle in high-dimensional state spaces due to compounding errors and computational intractability. To avoid these problems, we propose to construct and operate on a low-dimensional representation of the world consisting of abstract states, which we call the abstract MDP (we refer to the original MDP, the world, as the concrete MDP). Then we plan in the abstract MDP. Concretely, the abstract MDP consists of:• The state space is a subset of all abstract states, which we call the known set S, consisting of abstract states that can be reliably reached from the initial abstract state via actions in the action set. Over time, we monotonically grow the known set, which initially only contains the starting abstract state.• The action set comprises of calling the worker policy π w (a|x, (s, s)) on transitions (s, s) from the current abstract state s to a nearby abstract state s. When called on a transition (s, s), the worker navigates from s to s by taking concrete actions a conditioned on the current concrete state x. The worker abstracts away the messiness of the underlying concrete MDP so that all other parts of the system can operate on the abstract MDP by calling the worker. We denote calling the worker on transition (s, s) as the action go(s, s).• The transition dynamics of calling action go(s, s) at abstract state s are defined by the original MDP: i.e., if the worker takes a concrete trajectory x 0, a 0, x 1, a 1, · · ·, x T, the ing abstract state is φ(x T). The rewards for transitioning from s to s are the rewards in the concrete MDP R(s, s), which only depend on the abstract states by assumption. The core idea behind our approach is to construct the abstract MDP (Section 3) by growing the action set (training the worker), which in turn grows the known set. At each point in time, the manager maintains the known set and (accurate, to avoid compounding errors) estimates of the reward and transition dynamics of the abstract MDP. With these dynamics estimates, the manager can solve the abstract MDP at all times (e.g., by value iteration), since the abstract MDP is small. As the abstract MDP grows, it captures more and more of the concrete MDP, enabling the manager to recover a better and better policy via planning. Ultimately, the known set of the abstract MDP contains all abstract states, enabling the manager to recover a high-reward policy. As the manager constructs the abstract MDP, the abstract MDP maintains two key properties:• The action set of the abstract MDP consists of only reliable actions, actions go(s, s) that transition from abstract state s to abstract state s with probability at least 1 − δ for some small δ to avoid compounding uncertainty. This enables the manager to reach any abstract state in the known set with high probability. To simplify notation, the manager estimates the success rate P (s, s) of action go(s, s) instead of the full dynamics P (•|go(s, s), s), treating the (vanishingly small) fraction of failures equally.• The action set and known set of the abstract MDP grow monotonically. 
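The abstract MDP just described is small enough to represent explicitly and to solve exactly. The following sketch is our own minimal illustration of how a manager could store the known set, the per-transition success-rate and reward estimates, and plan with value iteration over the go(s, s') actions; the class and method names (AbstractMDP, plan, add_reliable_transition) are assumptions for the example, not the authors' code, and failures (probability at most delta) are simply ignored for brevity.

from collections import defaultdict

class AbstractMDP:
    """Minimal sketch of the manager's abstract MDP (names are illustrative)."""

    def __init__(self, s0, gamma=0.99, delta=0.05):
        self.known = {s0}            # known set of abstract states
        self.P = {}                  # (s, s') -> estimated success rate of go(s, s')
        self.R = defaultdict(float)  # (s, s') -> one-sample reward estimate
        self.gamma = gamma
        self.delta = delta

    def add_reliable_transition(self, s, s_next, success_rate, reward):
        """Called once go(s, s') exceeds the reliability threshold 1 - delta."""
        if success_rate >= 1.0 - self.delta:
            self.P[(s, s_next)] = success_rate
            self.R[(s, s_next)] = reward
            self.known.add(s_next)   # the known set grows monotonically

    def plan(self, n_iters=200):
        """Value iteration over the (small) abstract MDP; returns greedy go-actions."""
        V = {s: 0.0 for s in self.known}
        for _ in range(n_iters):
            for s in self.known:
                qs = [self.P[(s1, sp)] * (self.R[(s1, sp)] + self.gamma * V[sp])
                      for (s1, sp) in self.P if s1 == s]
                V[s] = max(qs) if qs else 0.0
        policy = {}
        for s in self.known:
            succs = [sp for (s1, sp) in self.P if s1 == s]
            if succs:
                policy[s] = max(succs, key=lambda sp: self.P[(s, sp)] *
                                (self.R[(s, sp)] + self.gamma * V[sp]))
        return policy, V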
Since the action set only contains reliable transitions, a key danger is that learning new reliable transitions (adding new actions) causes the worker to forget already-learned transitions (removing actions), stalling progress. We opt for a non-parametric approach, where the worker learns a skill (a neural subpolicy) for each transition, reusing skills when possible. When a worker learns to reliably traverse a transition, it freezes the corresponding skill's parameters. The manager's goal is to fully construct the abstract MDP so that the known set contains all abstract states. Then, it can compute a high-reward policy on the concrete MDP via planning on the abstract MDP. To construct the abstract MDP, the manager adds new actions to the abstract MDP: training the worker to reliably traverse new transitions (driving the transition success rates toward 1). Concretely, the manager discovers new transitions, trains the worker on these transitions, and updates its dynamics estimates using Algorithm 1. On each episode, the manager either chooses to discover new transitions via randomized exploration (Section 3.1) or trains the worker. This is done by constructing a prioritized list of exploration goals, where each goal is either a transition for the worker to learn or an abstract state to explore for nearby transitions (Section 3.2). Algorithm 1 (manager episode): score all candidates and select the highest-priority candidate c; compute a plan (s_0, s_1), (s_1, s_2), ..., (s_{T-1}, s_T) with the model; for t = 1 to T, call the worker to navigate transition (s_{t-1}, s_t); then, if c is a transition (s, s') to learn, call LEARNWORKER(s, s'), and otherwise c is an abstract state s to explore, so call DISCOVERTRANSITIONS(s). Upon selecting the highest-priority goal (e.g., the transition (s, s')), the manager navigates to the relevant abstract state (e.g., s) by planning with its dynamics models (e.g., the plan go(s_0, s_1), go(s_1, s_2), ..., go(s_{T-1}, s_T = s)), executing the plan, and then calling the worker on, or randomly exploring from, the selected goal. Finally, the manager updates its dynamics models. It uses a sliding-window estimate over the past N_transition worker attempts at traversing transition (s, s') for the transition dynamics, and sets its reward estimate of a transition (s, s') to the reward accumulated by the first successful traversal of (s, s'). When a transition (s, s') becomes reliable (i.e., the estimated dynamics T(s, s') exceed the threshold 1 − δ), the manager adds go(s, s') to the action set of the abstract MDP and adds s' to the known set. For the manager to train the worker on new transitions, it must first discover new transitions. To discover new transitions, the manager navigates to an exploration candidate: an abstract state s. Then, it performs randomized exploration to discover new transitions (s, s') to nearby abstract states s'. As exploration candidates, the manager simply chooses the abstract states in the known set that have been explored fewer than N_visit times, i.e., n(s) < N_visit, where n(s) is the number of times the manager has explored abstract state s for nearby transitions. Effectively, the manager assumes that when n(s) ≥ N_visit, all nearby transitions (s, s') have already been discovered. Concretely, at an abstract state s, the manager finds nearby transitions by following a simple policy π_d(a_t | x_0:t, a_0:t−1) for T_d timesteps (Algorithm 3). The policy π_d outputs randomized concrete actions a_t conditioned on the past concrete states x_0:t and past concrete actions a_0:t−1, where φ(x_0) is the abstract state s at which it was initially invoked.
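To illustrate the committed-repeat discovery policy π_d, the sketch below samples a concrete action and a repeat count, executes the repeats, and records the abstract transitions and rewards seen along the way. The environment interface (env.step), the abstraction function phi, and the function name discover_transitions are our own placeholder assumptions rather than the authors' implementation.

import random

def discover_transitions(env, x0, phi, actions, T_d=50, T_repeat=20):
    """Sketch of pi_d: random actions with committed repeats, recording the
    abstract transitions (phi(x_t), r_t, phi(x_{t+1})) observed along the way."""
    observed = []          # (s, r, s') tuples to feed the reward/dynamics models
    x, t = x0, 0
    while t < T_d:
        a = random.choice(actions)               # uniform concrete action
        n_rep = random.randint(1, T_repeat)      # commit to an exploration direction
        for _ in range(n_rep):
            x_next, r, done = env.step(a)        # assumed environment interface
            observed.append((phi(x), r, phi(x_next)))
            x, t = x_next, t + 1
            if done or t >= T_d:
                return observed, x
    return observed, x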
During those T_d timesteps, the manager records the transitions and rewards it observes, (φ(x_0), r_0, φ(x_1)), ..., (φ(x_{T−1}), r_{T−1}, φ(x_T)), using the rewards to update its reward model and the transitions as candidates for the worker to learn. Additionally, if it ends in another exploration candidate (i.e., n(s) < N_visit) after exploring for T_d timesteps, it simply continues exploring for another T_d timesteps. The simplest possible choice for π_d is to uniformly sample a concrete action at each timestep. However, we found that this discovered new transitions poorly, because it would often perform useless action sequences (e.g., left, right, left, right). Instead, we use a simple method for π_d to commit to an exploration direction: at each timestep, π_d uniformly samples a concrete action and a number between 1 and T_repeat, and repeats the action the sampled number of times. Exploration goals. The manager selects an exploration goal from the set of all candidate exploration goals, consisting of exploration candidates for transition discovery and candidate transitions for the worker to learn. The exploration candidates are simply the abstract states in the known set with n(s) < N_visit. The candidate transitions are the transitions discovered by the manager. In addition, the worker imposes a heuristic on its transition-learning process in order to preserve the Markov property of the abstract MDP (Section 4), which sometimes makes it impossible for the worker to learn a transition. To avoid getting stuck, the manager also considers "long-distance" transitions as candidates for the worker: pairs (s, s') for which the manager did not directly transition from s to s', but did so indirectly through a sequence of intermediate abstract states. Letting d(s, s') be the length of the shortest such observed path, the manager considers all pairs (s, s') whose path length d(s, s') is sufficiently small. (The worker's learning procedure itself is summarized in Algorithm 2, LEARNWORKER: given a transition (s, s') to learn, called at concrete state x_0 with φ(x_0) = s, set the worker horizon, act for that many timesteps, observing x_t, computing the worker's intrinsic reward r_t = R_(s,s')(x_t), and updating the worker on (x_{t−1}, a_{t−1}, r_t, x_t); the worker's skill π_I(s,s') is frozen once the transition becomes reliable.) Priority scores of the exploration goals. The manager must choose exploration goals in some order. Our theoretical results (Section 5) hold for any priority function that eventually chooses all exploration goals. In our implementation, the manager prioritizes the easiest exploration goals (enabling the fastest growth of the action set) and the most useful goals (goals that either lead to more reward or enable the worker to learn new transitions). Concretely, the manager heuristically computes the easiness e(s, s') of learning a transition (s, s') as a score that increases with n_succ, decreases with n_fail (weighted by a constant λ_1), and decreases with the transition length d(s, s'), where n_succ is the number of times the worker has successfully traversed (s, s') and n_fail is the number of times the worker has failed to traverse (s, s'). Intuitively, both succeeding more while failing less, and shorter transitions requiring fewer timesteps, indicate easier-to-learn transitions. Similarly, for an abstract state s, the manager computes the easiness of discovering new neighboring transitions as e(s) = −n(s), since abstract states that have been explored less are more likely to have undiscovered transitions. The manager heuristically computes the usefulness of learning a transition (s, s') as u(s, s') = λ_2 I_new + R(s_0, s'), where I_new is an indicator that is 1 if there is an outgoing transition (s', s'') and no current candidate transitions end in s'.
R(s 0, s) is the reward achieved by navigating from the initial abstract state s 0 to s. If I new is 1, then learning (s, s) opens new candidate transitions for the worker to learn, indicating that learning (s, s) is useful. For an abstract state s, the manager computes the usefulness just as u(s) = λ 3 + R(s 0, s). The λ 3 constant accounts for how much more or less useful discovering new transitions is compared to learning new transitions. To prioritize exploration goals, the manager uniformly switches between two priority functions. The first priority function simply equals the easiness plus usefulness: e + u, and the second priority function is the same, but without the reward term in u, to avoid falling into a local reward maximum. The worker forms the action set of the abstract MDP by learning many subtasks of the form: navigate from abstract state s to s. It does this while maintaining the three properties: 1) the worker reliably (with high probability) traverses (s, s) for each action go(s, s) in the abstract MDP; 2) the action set grows monotonically, so learning new transitions never causes the worker's old transitions to become unreliable; and 3) the worker learns transitions in a way that preserves the Markov property. While it is possible to learn a single policy for all transition subtasks, it is tricky to satisfy 2), since learning new transitions can have deleterious effects on previously learned transitions. Instead, the worker maintains an inventory of skills (Section A.3), where each transition is learned by a single skill, sharing the same skill amongst many transitions when possible. The worker uses these skills to form the action set of the abstract MDP following Algorithm 2: When the manager calls the worker on a transition (s, s), the worker selects the appropriate skill from the skill inventory and begins an episode of the subtask of traversing s to s (Section 4.2). During the skill episode, the skill receives intrinsic rewards, and is declared to have successfully completed the subtask if it meets the worker's holding heuristic, which heuristically maintains 3) by ensuring the worker can always control the abstract state. If at the end of the skill episode, the success rate of the worker traversing (s, s) exceeds the reliability threshold 1 − δ, the action go(s, s) is added to the abstract MDP. The worker's skill inventory I indexes skills so that the skill at index I(s, s) reliably traverses transition (s, s). Each skill is a goal-conditioned subpolicy π I(s,s) (a|x, s), which produces concrete actions a conditioned on the current concrete state x and the goal abstract state s. When the worker traverses a transition (s, s), it calls on the corresponding skill until the transition is traversed: i.e., π w (a|x, (s, s)) = π I(s,s) (a|x, s).When learning a new transition (s, s), the worker first tries to reuse its already learned skills from the skill inventory. For each skill π i in the skill inventory, it measures the success rate of π i on the new transition (s, s) over N transition attempts. If the success rate exceeds the reliability threshold 1 − δ for any skill π i, it updates the skill repository to reuse the skill: I(s, s) ← π i. Otherwise, if no already learned skill can reliably traverse the new transition, the worker creates a new skill and trains it to navigate the transition by optimizing intrinsic rewards during skill episodes (Section 4.2). Given a transition (s, s), the worker's subtask is to navigate from abstract state s to abstract state s. 
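Before detailing these subtask episodes, the following sketch summarizes the skill-reuse procedure just described: try every existing skill on the new transition first, and only create a fresh skill if none is already reliable. The helpers evaluate_success_rate and make_new_skill are hypothetical names introduced for this example, and the default attempt count is a placeholder, not the paper's hyperparameter.

def assign_skill(inventory, transition, evaluate_success_rate,
                 make_new_skill, n_attempts=100, delta=0.05):
    """Sketch of skill reuse. `inventory` maps transitions (s, s') to skills;
    `n_attempts` plays the role of N_transition (value here is an assumption)."""
    for skill in inventory.values():
        rate = evaluate_success_rate(skill, transition, n_attempts)
        if rate >= 1.0 - delta:
            inventory[transition] = skill      # I(s, s') <- existing frozen skill
            return skill, False                # no further training needed
    skill = make_new_skill()                   # otherwise create and train a new skill
    inventory[transition] = skill
    return skill, True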
Each episode of this subtask consists of d(s, s) × H worker timesteps (longer transitions need more timesteps to traverse), where the reward at each timestep is R (s,s) (x t) = 1 if the skill has successfully reached the end of the transition (φ(x t) = s ) and 0 otherwise. These episodes additionally terminate if the main episode terminates or if the manager receives negative environment reward. When solving these subtasks to construct the action set of the abstract MDP, the worker must be careful not to violate the Markov property. In particular, the concrete state may contain some historydependent information lost due to the state abstraction function. For example, consider the task of jumping over a dangerous hole, consisting of three abstract states: s 1 (the cliff before the hole), s 2 (the air above the hole), and s 3 (the solid ground on the other side of the hole). The worker might incorrectly assume that it can reliably traverse from s 1 to s 2 by simply walking off the cliff. But adding this as a reliable transition to the abstract MDP causes a problem: there is now no way to successfully traverse from s 2 to s 3 due to missing history-dependent information in the abstract state (i.e., the way the worker navigated s 1 to s 2), violating the Markov property. On navigating a transition (s, s), the worker avoids this problem by navigating to s and then checking for history-dependent consequences with the holding heuristic. The worker assumes that if its navigation of (s, s) changed unobserved parts of the state, then those changes would eventually cause the abstract state to change (e.g., in the example, the worker would eventually hit the bottom and die). Consequently, if the worker can stay in s for many timesteps, then it did not significantly unobserved parts of the state. This corresponds to only declaring the episode as a success if the worker accumulates at least R hold reward (equivalent to being in s for R hold timesteps).Any RL algorithm can be used to represent and train the skills to perform this subtask. We choose to represent each skill as a Dueling DDQN (van ; BID47 . For faster training, the skills use self-imitation BID29 to more quickly learn from previous successful episodes, and count-based exploration similar to BID5 to more quickly initially discover skill reward. Since the skill inventory can contain many skills, we save parameters by occasionally using pixel-blind skills. Appendix A.3 fully describes our skill training and architecture. We are interested in the sample complexity BID16, the number of samples required to learn a policy that achieves reward close to the optimal policy, with high probability. Standard in the tabular setting (e.g., MBIE-EB BID38) guarantee learning a nearoptimal policy, but require a number of timesteps polynomial in the size of the state space, which is effectively vacuous in the deep RL setting, where state spaces can be exponentially large (e.g., > 10 100 states). In contrast, our approach is able to use time and space polynomial in the size of the abstract state space, which is exponentially smaller, by operating on the abstract MDP.Formally, assuming that our neural network policy class is rich enough to represent all necessary skills, with high probability, our approach can learn a near-optimal policy on a subclass of MDPs in time and space polynomial in the size of the abstract MDP (details in Appendix C). 
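These short-horizon subtasks are what keep both learning and the analysis tractable. As a concrete illustration, the sketch below runs one worker skill episode for a transition (s, s'), using the intrinsic reward and the holding heuristic described in Section 4.2; env.step, phi, and the skill interface are assumed placeholders rather than the authors' code.

def run_skill_episode(env, phi, x0, skill, s_goal, dist, H_worker=40, R_hold=10):
    """One worker subtask episode for transition (s, s').
    Intrinsic reward is 1 per step spent in the goal abstract state; the episode
    only counts as a success if the agent holds s' for at least R_hold steps."""
    horizon = dist * H_worker      # longer transitions get proportionally more time
    x, held = x0, 0
    for _ in range(horizon):
        a = skill.act(x, s_goal)
        x_next, r_env, done = env.step(a)
        r_int = 1.0 if phi(x_next) == s_goal else 0.0
        skill.update(x, a, r_int, x_next)
        held += r_int
        x = x_next
        if done or r_env < 0:      # main episode ended or negative environment reward
            break
    return held >= R_hold          # the holding heuristic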
The key intuition is that instead of learning a single task where the time horizon is the length of the game, our approach learns many subtasks where the time horizon is the number of steps required to navigate from one abstract state to another. This is critical, as many deep RL algorithms (e.g. -greedy) require a number of samples exponential in the time horizon to solve a task. Following BID2, we empirically evaluate our approach on three of the most challenging games from the ALE BID6: MONTEZUMA'S REVENGE, PITFALL!, and PRIVATE EYE. We do not evaluate on simpler games (e.g., Breakout), because they are already solved by prior state-of-the-art methods BID13 and do not require sophisticated exploration. We use the standard ALE setup (Appendix A) and end the episode when the agent loses a life. We report rewards from periodic evaluations every 4000 episodes, where the manager plans for optimal reward in the currently constructed abstract MDP. We average our approach over 4 seeds and report 1 standard deviation error bars in the training curves. Our experiments use the same set of hyperparameters (Appendix A.1) across all three games, where the hyperparameters were exclusively and minimally tuned on MONTEZUMA'S REVENGE.In all three games, the state abstraction function uses the RAM state, available through the ALE simulator, to extract the bucketed location of the agent and the agent's current inventory. Roughly, this distinguishes states where the agent is in different locations or has picked up different items, but doesn't distinguish states where other details differ (e.g. monster positions or obstacle configurations). Notably, the abstract state function does not specify what each part of the abstract state means, and the agent does not know the entire abstract state space beforehand. We describe the exact abstract states in Appendix A.2. Among the many deep RL approaches, in each game, we compare with the prior non-demonstration state-of-the-art approach, which use prior knowledge comparable to our RAM information:• In MONTEZUMA'S REVENGE, we compare with SmartHash BID44, a countbased exploration approach which estimates state visit counts with a hash-based density model and provides intrinsic reward to revisit states with low visit counts. Like our approach, SmartHash also uses RAM state information. It hashes each state to a hand-selected subset of the RAM state and maintains visit counts on the hashes. • In PITFALL!, we compare with SOORL BID18, a planning approach which requires prior knowledge to extract the objects on the screen. SOORL is the only prior nondemonstration approach to achieve positive reward in PITFALL!, but requires extensive engineering (much stronger than RAM state info) to identify and extract all objects. Once SOORL has access to the objects, it learns in very few frames since data from similar objects can be pooled in learning. Consequently, in our training curves, we report its final average performance over 100 runs, as well as its final best performance over 100 runs, • In PRIVATE EYE, we compare with another count-based exploration method, DQNPixelCNN, which uses a pixel-based density model to estimate state visitation counts. We compare with the reported in. 
DQNPixelCNN uses less prior knowledge than our approach, but we compare with it because it achieves the previous non-demonstration state-of-the-art .• In all three games, we compare with AbstractStateHash, which performs count-based exploration identical to SmartHash, but uses the same RAM information as our approach. Comparison between our worst performing seed with the best performing seed of the prior-state-of-the-art. Our worst performing seed outperforms the prior best on MONTEZUMA'S REVENGE and PITFALL! and performs comparably to the prior best on PRIVATE EYE. Our best performing seed achieves new peak rewards. FIG0 shows the main . AbstractStateHash matches the prior state-of-the-art on MON-TEZUMA'S REVENGE, but performs relatively poorly on PRIVATE EYE and PITFALL!. This suggests both that prior state-of-the-art methods do not effectively leverage the state abstraction function, and that the state abstraction function does not trivialize the learning problem. In MONTEZUMA'S REVENGE, after 2B training frames, our approach achieves a final average reward of 11020, more than doubling the average reward of SmartHash: 5001. Our approach achieves higher average reward than SmartHash at every point along the training curves and continues to learn even late into training, while SmartHash plateaus (Appendix B.3 presents more on the ability of our approach to continue to learn without plateauing).Our approach is the first non-demonstration approach to achieve superhuman performance on PIT-FALL!, achieving a final average reward of 9959.6 after 2B frames of training, compared to average human performance: 6464 BID33. In addition, our approach achieves more than double the reward of SOORL, which achieves a maximum reward of 4000 over 100 seeds and a mean reward of 80.52, and even significantly outperforms Ape-X DQfD BID33, which uses high-scoring expert demonstrations during training to achieve a final mean reward of 3997.5.In PRIVATE EYE, our approach achieves a mean reward of 35636.1, more than double the reward of DQN-PixelCNN, which achieves 15806.5. Our approach performs even better, approaching human performance, if we change a single hyperparameter (Appendix B.4).Stability. Recent work BID12 has drawn attention to the instability of deep RL . To highlight the stability of our , we compare our worst performing seed against the prior state-of-the-art's best performing seed in TAB3. Even our worst seed outperforms the mean performance of the prior state-of-the-art approaches. In addition, our worst seed is competitive with the highest previously reported rewards in each of the games, significantly outperforming the previous high in MONTEZUMA'S REVENGE and PITFALL!, and narrowly performing worse than the previous high in PRIVATE EYE. Even in PRIVATE EYE, while DQN-PixelCNN achieves 39000 reward on its best single episode across all seeds, none of its seed consistently achieves more than 15806.5 reward over many episodes. In contrast, our worst seed consistently obtains 36200 reward. Furthermore, our best seeds achieve new peak performances in each of the games. By using the abstract MDP, our approach can quickly generalize to new tasks in the same environment that were not seen during training. It does this by revisiting the transitions in the abstract MDP to update the rewards model with the newly observed reward. We study the ability of our approach to do this. 
After our approach completes training on the original reward function on MONTEZUMA'S REVENGE, we evaluate its performance on three new reward functions, allowing our approach to interact with the environment for an additional 1M frames to observe each new reward function. We compare with SmartHash, trained from scratch directly on the new reward functions for 1B frames. Even when evaluated on an unseen reward function, our approach achieves about 3x as much reward as SmartHash, which is directly trained on the new reward function. This suggests that our approach can generalize to new tasks with the same dynamics. We describe the details in Appendix B.2. We additionally evaluate the performance of our approach on the recommended BID23 form of ALE stochasticity (sticky actions 25% of the time) on PRIVATE EYE (selected because it requires the fewest frames for training). Figure 3 compares the performance of our method on the stochastic version of PRIVATE EYE with the performance of our method on the deterministic version of PRIVATE EYE. Performance degrades slightly on the stochastic version, because the worker's skills become harder to learn. However, both versions outperform the prior state-of-the-art DQNPixelCNN, and the worker is able to successfully abstract away stochasticity from the manager in the stochastic version of the game so that the abstract MDP remains near-deterministic. To see how easy it is to create a high-performing state abstraction function on new tasks, we study the robustness of our approach to state abstraction functions of varying degrees of coarseness on PRIVATE EYE. Our state abstraction function buckets the agent's (x, y) coordinates. We vary the coarseness of the abstraction function by varying the bucketing size: increasing the bucketing size in fewer, coarser abstract states. We report in Figure 4 on five different bucket sizes obtained by scaling the original bucket size by 1 2, 2 3, 1, 3 2, and 2. To adjust for the updated bucket sizes, we also scale the worker's skill episode horizon H worker by the same value. Our method outperforms the prior state-of-the-art approach DQN-PixelCNN across the entire range of bucket sizes, suggesting that our approach does not require a highly tuned state abstraction function. Exploration in tabular settings is well-understood via optimism in the face of uncertainty (OFU) BID7 BID39 ) and posterior sampling BID31 BID30. OFU methods BID7 BID37 ) achieve provably near-optimal policies in polynomial time by providing reward bonuses for exploring regions of uncertainty. Nevertheless, despite recent stronger optimality bounds BID8, these methods do not scale to the deep RL setting, where the state space is prohibitively large. BID5; BID44; similarly apply optimism reward bonuses in the case of high-dimensional state spaces, empirically improving exploration. However, these methods no longer guarantee optimality, and can suffer from insufficient exploration because they reactively seek infrequently visited states, whereas our manager proactively seeks new abstract states. While model-based RL succeeds in the tabular setting BID39 ) and in tasks with relatively small (e.g., < 100 dimensions) state spaces BID25, it has little success on tasks with exponentially large state spaces (e.g., pixel inputs). Prior work BID27 that learns models in these large state spaces suffers from compounding errors BID42, where longterm predictions become extremely inaccurate. 
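To make the coarseness knob concrete, here is a minimal sketch of a bucketed abstraction function of the kind used in these experiments. The RAM indices follow the MONTEZUMA'S REVENGE description in Appendix A.2, and reading the ALE RAM this way, as well as the function names, are assumptions made for the illustration rather than code from the paper.

def make_abstraction_fn(bucket=20, x_idx=42, y_idx=43, room_idx=3,
                        inv_idx=65, objects_idx=66):
    """Returns phi: RAM bytes -> abstract state tuple (MONTEZUMA'S REVENGE layout).
    `bucket` controls coarseness: a larger bucket gives fewer, coarser abstract states."""
    def phi(ram, inventory_history=0):
        return (ram[x_idx] // bucket,      # bucketed agent x-coordinate
                ram[y_idx] // bucket,      # bucketed agent y-coordinate
                ram[room_idx],             # room number
                ram[inv_idx],              # inventory
                ram[objects_idx],          # objects present in the room
                inventory_history)         # count of inventory changes (tracked externally)
    return phi

Scaling the bucket size (and the worker horizon H_worker by the same factor) reproduces the coarseness sweep described above.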
To make matters worse, while prior model-based works BID49 BID11 ) succeed on relatively dense reward tasks, model-based planning methods (e.g., value iteration) can be computationally intractable in longhorizon, sparse-reward tasks, even when a perfect model is known. To circumvent these problems, our work learns an abstract MDP consisting of an exponentially smaller abstract state space and learned skills. BID28 similarly learns a model over abstract states and skills, but uses manually engineered skills, whereas ours are learned. Our work relates to prior work on hierarchical reinforcement learning (HRL), which also operates on abstract states BID10 BID22 with learned skills or subgoals BID35 BID36 BID46 BID4. However, a key difference is that our work constructs the abstract MDP, enabling us to perform targeted exploration via planning and rely on the Markov property to avoid exponentially large state histories. In contrast, the abstract states and skills in these other works do not meet such useful structural properties, and consequently can be difficult to learn with. BID34 similarly constructs an abstract MDP like ours. However, due to critical design decisions, our approach outperforms theirs by nearly an order of magnitude. Whereas our approach monotonically grows the known set by saving worker parameters as transitions become reliable, BID34 uses the same worker parameters to simultaneously learn many transitions. This causes catastrophic forgetting, as training on a new transition causes the worker to fail a previously learned transition, and prevents growth of the abstract MDP.Only imitation learning methods BID2 BID33 outperform our method on the ALE's hardest exploration games. However, using demonstrations sidesteps the exploration problem our approach seeks to solve because following demonstrations leads to high reward. This work presents a framework for tackling long-horizon, sparse-reward, high-dimensional tasks by using abstraction to decrease the dimensionality of the state space and to address compounding model errors. Empirically, this framework performs well in hard exploration tasks, and theoretically guarantees near-optimality. However, this work has limitations as well. First, our approach relies on some prior knowledge in the state abstraction function, although we compare against state-ofthe-art methods using a similar amount of prior knowledge in our experiments. This information is readily available in the ALE, which exposes the RAM, and in many robotics tasks, which expose the underlying state (e.g., joint angles and object positions). Still, future work could attempt to automatically learn the state abstraction or extract the abstraction directly from the visible pixels. One potential method might be to start with a coarse represention, and iteratively refine the representation by splitting abstract states whenever reward is discovered. Another limitation of our work is that our simple theoretical guarantees require relatively strong assumptions. Fortunately, even when these assumptions are not satisfied, our approach can still perform well, as in our experiments. A , the pixel concrete states are downsampled and cropped to 84 by 84 and then are converted to grayscale. To capture velocity information, the worker receives as input the past four frames stacked together. Every action is repeated 4 times. In addition, MONTEZUMA'S REVENGE and PITFALL! are deterministic by default. 
As a , the manager deterministically navigates to the fringes of the known set by calling on the worker's deterministic, saved skills. To minimize wallclock training time, we save the states at the fringes of the known set and enable the worker to teleport to those states, instead of repeatedly re-simulating the entire trajectory. When the worker teleports, we count all the frames it would have had to simulate as part of the training frames. Importantly, this only affects wallclock time, and does not benefit or change the agent in any way. Notably, this does not apply to PRIVATE EYE, where the initial state is stochastically chosen from two similar possible states. A.1 HYPERPARAMETERS All of our hyperparameters are only tuned on MONTEZUMA'S REVENGE. Our skills are trained with the Adam optimizer BID19 with the default hyperparameters. TAB5 describes all hyperparameters and the values used during experiments (bolded), as well as other values that we tuned over (non-bolded). Most of our hyperparameters were selected once and never tuned. In MONTEZUMA'S REVENGE, each abstract state is a (bucketed agent x-coordinate, bucketed agent y-coordinate, agent room number, agent inventory, current room objects, agent inventory history) tuple. These are given by the RAM state at indices 42 (bucketed by 20), 43 (bucketed by 20), 3, 65, and 66 respectively. The agent inventory history is a counter of the number of times the current room objects change (the room objects change when the agent picks up an object).In PITFALL!, each abstract state is a (bucketed agent x-coordinate, bucketed agent y-coordinate, agent room number, items that the agent has picked up) tuple. These are given by the RAM state at indices 97 (bucketed by 20), 105 (bucketed by 20), 1, and 113 respectively. In PRIVATE EYE, each abstract state is a (bucketed agent x-coordinate, bucketed agent y-coordinate, agent room number, agent inventory, agent inventory history, tasks completed by the agent) tuple. These are given by the RAM state at indices 63 (bucketed by 40), 86 (bucketed by 20), 92, 60, 72, and 93 respectively. Architecture. Our skills are represented as Dueling DDQNs BID45 BID47, which produce the state-action value Q (s,s) (x, a) = A (s,s) (x, a) + V (s,s) (x), where A (s,s) (x, a) is the advantage and V (s,s) (x) is the state-value function. The skills recover a policy π K(s,s) (a|x, (s, s)) by greedily selecting the action with the highest Q-value at each concrete state x. The skill uses the standard architecture BID24 to represent A (s,s) (x, a) and V (s,s) (x) with a small modification to also condition on the transition (s, s). First, after applying the standard ALE pre-processing, the skill computes the pixel embedding e x of the pixel state x by applying three square convolutional layers with (filters, size, stride) equal to,, and respectively with rectifier non-linearities BID26, and applying a final rectified linear layer with output size 512. 
Next, the skill computes the transition embedding e (s,s) by concatenating [e r ; e dif f] and applying a final rectified linear layer with output size 64, where:• e r is computed as the cumulative reward received by the skill during the skill episode, represented as one-hot, and passed through a single rectified linear layer of output size 32.• e dif f is computed as s − s passed through a single rectified linear layer of output size 96.Finally, e x and e (s,s) are concatenated and passed through a final linear layer to obtain A (s,s) (x, a) and V (s,s) (x).To prevent the skill from changing rapidly as it begins to converge on the optimal policy, we keep a sliding window estimate of its success rate p success. At each timestep, with probability 1 − p success, we sample a batch of (x, a, r, x) tuples for transition (s, s) from the replay buffer and update the policy according the DDQN loss function: L = ||Q (s,s) (x, a) − target|| 2 2, where target = (r +Q target (x, arg max a ∈A Q (s,s) (x, a))). Additionally, since the rewards are intrinsically given, the optimal Q-value is known to be between 0 and R hold. We increase stability by clipping target between these values. Pixel blindness. In addition, some skills are easy to learn (e.g. move a few steps to the left) and don't require pixel inputs to learn at all. To prevent the skills from unnecessarily using millions of parameters for these easy skills, the worker first attempts to learn pixel-blind skills for simple transitions (s, s) with d(s, s) = 1 (i.e. (s, s) was directly observed by the manager). The pixelblind skills only compute e (s,s) and pass this through a final layer to compute the advantage and value functions (they do not compute or concatenate with e x). If the worker fails to learn a pixelblind skill, (e.g. if the skill actually requires pixel inputs, such as jumping over a monster) it will later try to learn a pixel-aware skill instead. Epsilon schedule. The skills use epsilon-greedy exploration, where at each timestep, with probability, a random action is selected instead of the one produced by the skill's policy BID48. Once a skill becomes frozen, is permanently set to 0.The number of episodes required to learn each skill is not known in advance, since some skills require many episodes to learn (e.g. traversing a difficult obstacle), while other skills learn in few episodes (e.g. moving a little to the left). Because of this, using an epsilon schedule that decays over a fixed number of episodes, which is typical for many RL algorithms, is insufficient. If epsilon is decayed over too many episodes, the simple skills waste valuable training time making exploratory actions, even though they've already learned near-optimal behavior. In contrast, if epsilon is decayed over too few episodes, the most difficult skills may never observe reward, and may consequently fail to learn. To address this, we draw motivation from the doubling trick in online learning BID1 to create an epsilon schedule, which accomodates skills requiring varying number of episodes to learn. Instead of choosing a fixed horizon, we decay epsilon over horizons of exponentially Figure 5: The saw-tooth epsilon schedule used by our skills increasing length, summarized in Figure 5. This enables skills that learn quickly to achieve low values of epsilon early on in training, while skills that learn slowly will later explore with high values of epsilon over many episodes. Count-based exploration. 
Our skills additionally use count-based exploration, similar to BID44; BID5, to learn more quickly. Each skill maintains a count visit(s) of the number of times it has visited each abstract state s. Then, the skill provides itself with an additional intrinsic reward that motivates it to visit novel states, decaying with visit(s) (proportional to 1/sqrt(visit(s))). Self-imitation. When learning to traverse difficult obstacles (e.g., jumping over a disappearing floor), the skill may observe a few successes long before successfully learning a policy that reliably traverses the obstacle. We use a variant of the self-imitation described in BID29 to decrease this time. Whenever a skill successfully traverses a transition, it adds the entire successful trajectory to a separate replay buffer and performs imitation learning on the successful trajectories. These successful trajectories are in fact optimal skill trajectories: because the skill episode uses undiscounted reward, all successful trajectories are equally optimal. To perform these updates, the skill periodically samples from this replay buffer and minimizes an imitation loss of the form L_imitation(θ) = L_1 + λ L_2, where θ is the skill's parameters, and L_1 and L_2 are defined as below:
• Let G_t = Σ_{i=t}^{T} r_i be the reward to-go at timestep t of a successful trajectory. We regress Q_(s,s')(x_t, a_t) onto the reward to-go of the successful trajectory, because G_t is actually the optimal Q-value on successful trajectories (all successful trajectories are equally optimal): i.e., L_1 = ||G_t − Q_(s,s')(x_t, a_t)||^2.
• We use a margin loss from prior work on learning from demonstrations, which pushes the Q-value of the demonstrated action above the Q-values of all other actions by a margin. Intuitively, L_2 encourages the skill to prefer the actions that led to successful trajectories over other actions.
We use λ = 0.5, which was chosen with no hyperparameter tuning.
B ADDITIONAL RESULTS
The worker learns skills that successfully apply to many similar transitions. Figure 6 depicts the number of different transitions each skill is used on in MONTEZUMA'S REVENGE, PITFALL!, and PRIVATE EYE. The simplest skills (e.g., move to the left) enjoy the highest number of reuses, while more esoteric skills (e.g., jump over a particular monster) are only useful in a few scenarios. Figure 7 provides an example of a skill in MONTEZUMA'S REVENGE with relatively high reuse. The arrows denote the movement of the agent when it executes the skill. The same skill that jumps over a monster in the first room (Figure 7(a)) can also climb up ladders. In Figure 7(b), the skill appears to know how to climb up all parts of the ladder except for the middle. This occurs because the spider occasionally blocks the middle of the ladder, and a different, specialized skill must be used to avoid the spider. However, the skill reuse is not perfect. For example, in Figure 7(a), the skill can climb up the top half of ladders, but a separate skill climbs the bottom half of the ladders. To evaluate the ability of our approach to generalize to new reward functions, we train our approach on the basic MONTEZUMA'S REVENGE reward function and then test it on three challenging new reward functions (illustrated in FIG4), not seen during training:
• Get key: the agent receives 1000 reward for picking up the key in room 14 (6 rooms away from the start). In addition, the agent receives -100 reward for picking up any other objects or opening any other doors. Figure 9: Training curves of SmartHash on alternate tasks in MONTEZUMA'S REVENGE, compared with the performance of our approach generalizing to the new task.
Our approach only receives 1M frames with the new task.• Kill spider: the agent receives 1000 reward for killing the spider in room 13 (5 rooms away from the start). To kill the spider, the agent must first pick up the sword in room 6 (3 rooms away from the start) and save the sword for the spider. The agent receives no other reward.• Enter room 8: the agent receives 1000 reward for entering room 8 (6 rooms away from the start). The agent receives no other reward. In all three tasks, the episode ends when the agent completes its goal and receives positive reward. Our approach trains on the basic MONTEZUMA'S REVENGE reward function for 2B frames, and then is allowed to observe the new reward functions for only 1M frames. We compare with SmartHash, which trains directly on the new reward functions for 1B frames. The are summarized in Figure 9. Even when evaluated on a reward function different from the reward function it was trained on, our approach achieves about 3x as much reward as SmartHash, which is trained directly on the new reward function. Averaged over all 3 tasks, our approach achieves an average reward of 716.7 out of an optimal reward of 1000, whereas SmartHash only achieves an average reward of 220, even when trained directly on the new reward function. These experiments suggest that after our approach is trained in an environment on one task, it can quickly and successfully adapt to new tasks with little additional training. Whereas many prior state-of-the-art approaches tend to plateau toward the end of training, our approach continues to make near-linear progress. Figure 10 graphs the number of transitions learned by the worker against the number of frames in training. In MONTEZUMA'S REVENGE and PIT-FALL! particularly, the rate the worker learns new transitions is nearly constant throughout training. Because of this, when we continued to train a single seed on PITFALL!, by 5B frames, it achieved a reward of 26000, and by 20B frames, it achieved a reward of 35000. By changing a single hyperparameter, our approach can perform even better on PRIVATE EYE, exceeding human performance on 2 of 4 seeds. Since nearby abstract states are particularly easy to discover in PRIVATE EYE, the manager needs not explore for new transitions as many times. Consequently, if we decrease the number of times the manager explores around each abstract state N visit from the 500 (used in the main experiments for all three games) to 10, performance improves. Figure 11 compares the performance with the decreased value of N visit with the original value of N visit reported in the main experiments. Decreasing N visit prevents the manager from wasting frames with unnecessary exploration and consequently enables the worker to learn more transitions in fewer total frames. With N visit set to 10, our approach achieves a final average performance of 60247 after 200M frames of training. Additionally, the top 2 of our 4 seeds achieve rewards of 75600 and 75400, exceeding average human performance: 69571 BID33. In general, a hierarchical policy over skills is not guaranteed to be near-optimal, because certain optimal trajectories may be impossible to follow using the skills. Because of this, hierarchical reinforcement learning literature typically focuses on hierarchical optimality BID10 optimality given the abstractions. However, under the following assumptions, our approach provably achieves a near-optimal policy on the original MDP in time polynomial in the size of the abstract MDP, with high probability. 
Proposition 1. Under the assumptions, for a given input η and, π t is at most suboptimal, To prove Proposition 1, we require the following three lemmas: DISPLAYFORM0 Lemma 1. By setting N transition to be O(log 1−(1−η) 1 |Φ| 2 /2 2(/|Φ|HV * (x0)) 2 ), with probability 1 − η, at each timestep t, π t is near-optimal on the current abstract MDP: i.e., V πt t (s) ≥ V * t (s)− for all abstract states s ∈ Φ.Lemma 2. If the known set is equal to the set of all abstract states (S = Φ) at timestep T, then for any policy π on the abstract MDP, π achieves the same expected reward on the abstract MDP as on the concrete MDP: i.e., V π T (φ(x 0)) = V π (x 0), where x 0 is the initial concrete state. In addition, the expected return of the optimal policy on the abstract MDP is equal to the expected return of the optimal policy on the concrete MDP: i.e., V * T (φ(x 0)) = V * (x 0) where x 0 is the initial state. Lemma 3. With probability 1 − η, the known set grows to cover all abstract states in O |Φ| 3 (|A| + Given these lemmas, we are ready to prove Proposition 1:Proof of Proposition 1. For simplicity, we ignore terms due to appropriately setting N transition and 1 − η from Lemma 1, but these terms are all polynomial in the size of the abstract MDP.By Lemma 3 the known set grows to cover all abstract states in T = O |Φ| 3 (|A| + log K|Φ|+log 1 η log 1 p) + d max × H worker timesteps. For all timesteps t ≥ T, by Lemma 1, π t is at most suboptimal on the abstract MDP. On all those timesteps, the known set is equal to all abstract states, so by Lemma 2, π t is at most suboptimal on the concrete MDP.Proofs. Now, we prove Lemma 1, Lemma 2, and Lemma 3.Proof of Lemma 1. LetP (s |o, s) denote the estimated transition dynamics 3 andR(s, s) denote the estimated reward model in the abstract MDP.For each reliable transition (s, s) (action in the abstract MDP), the manager estimatesP (s |o, s) from N transition samples of the worker. We bound the error in the model |P (s |o, s) − P (s |o, s)| with high probability by Hoeffding's inequality: DISPLAYFORM1 By the Assumption 2,R(s, s) = R(s, s) because all trajectories leading to s achieve some reward r and all trajectories leading to s achieve some reward r, so a single estimate R(s, s) = r − r is sufficient to accurately determineR.Because the model errors are bounded, and because the abstract MDP is Markov, we can apply the simulation lemma BID17, which states that if |P (s |o, s) − P (s |o, s)| ≤ α and |R(s, s)−R(s, s)| ≤ α, then the policy π optimizing the MDP formed byP andR is at most suboptimal: i.e., at each timestep t, V π t (s) ≥ V * t (s)− for all s ∈ S, where α is O (/|S|HV * (x 0)) 2, and H is the horizon of the abstract MDP. Since the total number of transitions is bounded by |S| 2, substituting for N transition gives the desired . | We automatically construct and explore a small abstract Markov Decision Process, enabling us to achieve state-of-the-art results on Montezuma's Revenge, Pitfall!, and Private Eye by a significant margin. | 1,070 | scitldr |
Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment. This is a critical shortcoming for applying RL to real-world problems where collecting data is expensive, and models must be tested offline before being deployed to interact with the environment -- e.g. systems that learn from human interaction. Thus, we develop a novel class of off-policy batch RL algorithms which use KL-control to penalize divergence from a pre-trained prior model of probable actions. This KL-constraint reduces extrapolation error, enabling effective offline learning, without exploration, from a fixed batch of data. We also use dropout-based uncertainty estimates to lower bound the target Q-values as a more efficient alternative to Double Q-Learning. This Way Off-Policy (WOP) algorithm is tested on both traditional RL tasks from OpenAI Gym, and on the problem of open-domain dialog generation; a challenging reinforcement learning problem with a 20,000 dimensional action space. WOP allows for the extraction of multiple different reward functions post-hoc from collected human interaction data, and can learn effectively from all of these. We test real-world generalization by deploying dialog models live to converse with humans in an open-domain setting, and demonstrate that WOP achieves significant improvements over state-of-the-art prior methods in batch deep RL. In order to scale deep reinforcement learning (RL) to safety-critical, real-world domains, two abilities are needed. First, since collecting real-world interaction data can be expensive and timeconsuming, algorithms must be able to learn from off-policy data no matter how it was generated, or how little correlation between the data distribution and the current policy. Second, it is often necessary to carefully test a policy before deploying it to the real world; for example, to ensure its behavior is safe and appropriate for humans. Thus, the algorithm must be able to learn offline first, from a static batch of data, without the ability to explore. This off-policy, batch reinforcement learning (BRL) setting represents a challenging RL problem. Most deep RL algorithms fail to learn from data that is not heavily correlated with the current policy (b). Even models based on off-policy algorithms like Q-learning fail to learn in the offline, batch setting, when the model is not able to explore. If the batch data is not sufficient to cover the state-action space, BRL models can suffer from extrapolation error, learning unrealistic value estimates of state-action pairs not contained in the batch (b). It can be impossible to correct for extrapolation error when there is a mismatch in the distribution of stateactions pairs in the batch data, and the distribution induced by the learned policy. For example, if the policy learns to select actions which are not contained in the batch, it cannot learn a reasonable value function for those actions. Figure 1 illustrates this concept, where the batch only covers a subset of possible policies. Extrapolation error is particularly problematic in high-dimensional state and action spaces (such as those inherent in language generation). We propose to resolve these issues by leveraging a pre-trained generative model of the state-action space, p(a|s), trained on known sequences of interaction data. While training with RL, we penalize divergence from this prior model with different forms of KL-control. 
This technique ensures that the RL model learns a policy that stays close the state-action distribution of the batch, combating Figure 1: In this example batch RL problem, the robot's goal is to travel the minimum distance around the black walls to get to the red flag. A trained behavior policy generated the batch data; the probability of each of the states appearing in the batch, p B (s), is in yellow (white locations are not contained in the batch). If the offline RL policy estimates the value of going up or left from the start position is high, it will have no way to refine this estimate using the batch data, or learn a good policy in this region of state space. The KL-constraint ensures that the RL policy will stay within the support of the batch data. However, the behavior policy is suboptimal, so using behavior cloning to directly imitate the batch data will in suboptimal return. Instead, the KL-constrained model can learn to find the optimal policy, which is within the support of the batch. extrapolation error. We also propose using dropout to obtain uncertainty estimates of the target Qvalues, and use this lower bound to alleviate overestimation bias. We benchmark against a discrete adaptation of Batch Constrained Q-learning (BCQ) (b), a recently proposed state-of-the-art BRL algorithm for continuous domains, and show that our Way Off-Policy algorithm achieves superior performance in both a traditional RL domain, as well as in a challenging, underexplored, real-world reinforcement learning problem: using implicitly expressed human reactions in chat to improve open-domain dialog systems. When a machine learning system interacts with humans, ideally we would like to learn about the humans' preferences in order to improve its performance. Yet having humans manually indicate their preferences through explicit means like pressing a button (e.g.) or submitting a feedback report, does not scale. Instead, we would like to be able to use humans' implicit reactions, such as the sentiment they express, or the length of the conversation, in order to improve the policy. However, applying off-policy batch RL to language generation is challenging because the number of potential combinations of words and sentences leads to a combinatorial explosion in the size of the state space. The action space -the set of frequent vocabulary words in the English language -is 20,000-dimensional. This compounds extrapolation error, making BRL even more difficult. However, when learning from human interactions in the wild, it is crucial to be able to learn offline and test the policy before deploying it, lest it learn inappropriate behaviors (e.g.). To support this work, we developed an interactive online platform that allows humans to chat with deep neural network dialog models running on a GPU; the BRL models trained for this study are available live at https://neural.chat/rl/. Through this platform we collected human responses to a set of over 40 different dialog models over the course of several months. Using our Way Off-Policy algorithm, we are able to effectively learn from this batch of data, in spite of the fact that it was generated with a vastly different set of model architectures, which were trained on different datasets. Further, we use the batch to learn from many different reward functions designed post-hoc to extract implicit human preferences, something that is only possible with effective off-policy BRL. 
In summary, the contributions of this paper are: • A novel algorithm, Way Off-Policy learning, which is the first to propose using KL-control from a pre-trained prior model as a way to reduce extrapolation error in batch RL. • Experiments showing the effectiveness of WOP above strong baselines based on prior work (e.g. Fujimoto et al. (2018b) ), on both traditional RL tasks and on the challenging problem of open-domain dialog generation. • A set of novel conversation rewards based on how human preferences are implicitly expressed in text. We are the first work to learn from implicit signals in conversation offline using batch RL. The approach we propose is based on KL-control, a branch of stochastic optimal control (SOC) where the Kullback-Leibler (KL) divergence from some distribution is used to regularize an RL policy (e.g. (; ; ;) ). Well-known examples include Trust Region Policy Optimization (TRPO) , and use conservative, KL-regularized policy updates to restrict the RL algorithm to stay close to its own prior policy (e.g. (; ; ;) ). KL-control can also be applied to entropy maximization (e.g. (; ;) ); for example, G-learning penalizes KLdivergence from a simple uniform distribution in order to cope with overestimation of Q-values .KL-control has also been used to improve transfer learning between maximum likelihood estimation (MLE) training on data, and training with RL . To the best of our knowledge, our work is the first to propose penalizing KL-divergence from a learned prior model of the state-action space as a way to improve offline batch RL. Other strategies to improve off-policy learning have been proposed, but differ from this work in key respects. Many focus on scenarios where the policy is able to explore and collect more data (e.g. ;); for example, when learning online from an outdated replay buffer (e.g.). In contrast, we learn entirely offline, from a fixed batch of data, without the ability to explore. Methods proposed for this setting have often not been used in conjunction with modern function approximation techniques (e.g.). Many other works focus on off-policy policy evaluation (rather than policy learning), for example using importance sampling or model estimation (e.g. ; ; ;). In the deep BRL setting, have proposed a correction to policy gradients, have proposed covariance-shift methods, and have proposed normalized feature representations. use maximum mean discrepancy to cope with extrapolation error in BRL, while use a Random Ensemble Mixture (REM) Q-network. Most similar to our work is Batch Constrained Q-learning (BCQ) (b), which tackles off-policy deep BRL in continuous action domains by training a generative model of the batch, p(a|s), sampling from this model, and selecting the best action based on a Q-estimate. Unlike our approach, this does not integrate information about the distribution p(a|s) directly into the policy, or allow the model to learn when to strategically deviate from the prior in order to obtain more reward. We propose using dropout to approximate model uncertainty of the target Q-network. The idea of using dropout to estimate uncertainty in neural networks was proposed by. Different forms of uncertainty estimates have been used in RL (e.g. ;); for example, Bayesian uncertainty estimates have been proposed as an alternative to double DQN . Improving dialog systems with RL has largely been restricted to task-oriented dialog systems, which have a limited number of task-specific actions (e.g.). 
These approaches may incorporate human input, usually through explicit, manual feedback (e.g.), but sometimes with more implicit signals, such as the user interrupting the system or starting over . Efforts to expand RL to the open-domain dialog setting, such as those of Li et al. (2016b;, are less numerous, and do not involve learning from human feedback. Even in the open-domain setting, authors may choose to use a highly restricted action space; for example, using RL to choose which scripted or MLE dialog model to invoke to answer a user's query (a). Since the posting of the preprint of this paper, have used explicit human feedback to improve the summarization and text continuation performance of a large-scale language model. Although they do not study dialog or the batch RL setting (instead learning online from a trained model of human feedback), they do make use of our proposal to penalize KL-divergence from a pre-trained language model, and find that this is important to achieving good performance. Although implicit signals such as sentiment and conversation length have been used in MLE systems, the idea of using such signals as a reward for RL is relatively unexplored. Shin and colleagues uses on-policy learning in conjunction with a usersentiment approximator to improve a seq2seq model , but are unable to learn directly from user feedback. To the best of our knowledge, we are the first to use batch RL to train open-domain dialog models on implicit cues gained from real human interactions. We employ typical RL notation in which s t represents the environment state at time t, the agent takes action a t according to its policy π(a t |s t), and receives reward r(s t, a t). The agent's goal is to maximize reward over an episode trajectory τ, with a discount factor of γ applied to future rewards. Q-learning learns an action-value estimate of the total expected discounted future reward,, through iterative updates based on the Bellman equation: In deep Q-learning , a Q-network approximates Q θπ (s t, a t) and drives the policy π. A second target Q-network approximates the expected reward from the next state, ). In batch RL, we are given a fixed batch of data B, and assume that no further interaction with the environment is possible. To train Q θπ, we sample (s t, a t, r t, s t+1) ∼ B, and update the weights of the Q-network to approximate Eq. 1. Because Q-learning is an off-policy algorithm, in principle it should be able to learn from data collected by any behavior policy. However, extrapolation error can occur if the BRL policy learns to favour a state-action pair (s, a) that is unlikely, or not contained, in the batch data. In this case, the estimate Q(s, π(s)) can be arbitrarily bad (b). Such errors can then accumulate through the Bellman backup operator . Experiments from Fujimoto et al. (2018b) show that extrapolation error can be highly detrimental to learning off-policy in BRL. These problems are compounded by the fact that algorithms based on the Bellman operator are inherently optimistic in the face of uncertainty. When value estimates for some region of the stateaction space are noisy (because too few experience samples have been used to refine them), the maximum operation in Eq. 1 will lead to an overestimation of expected future reward. 
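To make the batch training loop concrete, the following is a minimal PyTorch-style sketch of the update just described; the network objects, optimizer, and batch layout are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def batch_q_update(q_net, target_q_net, optimizer, batch, gamma=0.99):
    """One Bellman-backup step on a fixed batch B (no environment interaction).

    `batch` is assumed to be a dict of tensors: states s, integer actions a,
    rewards r, next states s_next, and float done flags.
    """
    s, a, r, s_next, done = (batch[k] for k in ("s", "a", "r", "s_next", "done"))

    # Q(s, a) for the actions actually stored in the batch.
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)

    # Bootstrapped target: r + gamma * max_a' Q_target(s', a').
    # The max over a' is where extrapolation error enters: a' may be an
    # action never taken from s' anywhere in the batch.
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_q_net(s_next).max(dim=1).values

    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```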
In a normal RL setting, this overestimation bias drives the model to explore areas of the state-action space for which the value estimates have the highest variance, thus enabling it to refine them; in essence, creating a built-in drive to explore. However, in a batch setting where exploration is not possible, the model is instead driven to value parts of the state-action space for which it has little to no data to learn a good policy (see Figure 1). The overestimation of Q-values in the BRL setting necessitates other methods for estimating the future reward via the Target Q-network. Clipped Double Q-learning (a) maintains two independent pairs of Q-networks, and takes the minimum of their estimates of future reward. This approach is computationally expensive and memory intensive. Instead, we leverage the fact that a network trained with dropout can be used to approximate a Bayesian uncertainty estimate of the output value . Given the target Q-network Q θ T, we compute Q(a t+1, s t+1) using a Monte Carlo (MC) estimate of the lower-bound of Q θ T (a t+1, s t+1) by running M stochastic forward passes of the network, each with a new dropout mask Using the minimum operator penalizes high variance estimates and leads the algorithm to be pessimistic in the face of uncertainty, rather than optimistic. Such a bias will push the model to favour actions that lead to states well covered by the batch data. Batch Constrained Q-learning (BCQ) (b) proposes to address the BRL problem by constraining the actions of the Q-network to be close to the data contained within the batch. This is accomplished by learning a generative model of the batch, G w = p(a|s), and sampling from this model during learning and inference. Because BCQ is designed for continuous action domains, it applies a learned perturbation model ξ(s, a; Φ) which is allowed to alter the action within the range [−Φ, Φ]. BCQ learns Q-estimates that incorporate the perturbation model, Q θ (s, a + ξ(s, a; Φ)). To act, n possible actions are sampled from the generative model,, perturbed, and the action with the maximum Q-value is selected, giving the BCQ policy: We propose an adaptation of BCQ to discrete action spaces (DBCQ) which does not use a continuous perturbation model. Since BCQ relies on Double Clipped Q-learning (a), here we use dropout-based uncertainty estimates as in Eq. 2. Thus the DBCQ policy is: Rather than simply sample from the prior, we would like the Q-learning algorithm to directly incorporate the prior into the policy. Thus, we use KL-control to penalize divergence between the learned prior p(a|s), and the Q-network policy π θ, while still maximizing reward. Given a trajectory of ac- be the policy of our Q-learning algorithm at the trajectory level. Similarly, let p(τ) = T t=1 p(a t |s t) be the prior distribution over the trajectory, and r(τ) be the return. We seek to maximize the following KL-regularized objective: Since, we can see that this is equivalent to maximizing the following expected value function of the policy π θ at the action level: The two terms introduced in Eq. 6 have clear motivations. The p(a|s) term rewards the model for choosing actions that have high probability under the prior, biasing the model to state-action pairs that are likely to be in the batch. The − log π(a|s) term is analogous to entropy regularization. 
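A minimal sketch of the dropout-based lower-bound target and of the discrete BCQ-style action selection described above; for brevity the candidate ranking uses the plain Q-network rather than the dropout estimate, and the prior interface (a categorical distribution returned by `prior(s)`) is an assumption.

```python
import torch

def mc_dropout_target(target_q_net, s_next, num_samples=5):
    """Pessimistic estimate of Q_target(s', .) via M stochastic forward passes.

    Dropout stays active (train mode) so each pass uses a fresh mask; the
    elementwise minimum over passes approximates a lower bound on the target.
    """
    target_q_net.train()  # keep dropout on
    with torch.no_grad():
        samples = torch.stack([target_q_net(s_next) for _ in range(num_samples)])
    return samples.min(dim=0).values  # [batch, num_actions]

def dbcq_action(q_net, prior, s, num_candidates=2):
    """Discrete BCQ-style choice: sample candidate actions from the learned
    prior p(a|s), then pick the candidate with the highest Q-value."""
    with torch.no_grad():
        candidates = prior(s).sample((num_candidates,))      # [n, batch]
        q_values = q_net(s)                                    # [batch, num_actions]
        cand_q = torch.stack([q_values.gather(1, c.unsqueeze(1)).squeeze(1)
                              for c in candidates])            # [n, batch]
        best = cand_q.argmax(dim=0)                            # index of best candidate
    return candidates.gather(0, best.unsqueeze(0)).squeeze(0)  # [batch]
```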
Maintaining diversity in the action space through entropy regularization is important for generative models like dialog systems, which are known to collapse to an uninteresting, small number of repeated samples (a). Re-stating Eq. 6 as an entropy-regularized Q-function, we obtain: One can derive a soft version of the entropy-regularized Q-function that uses a Boltzmann distribution to estimate future reward . We refer to it as a Ψ-function following previous work , which derived this function as a generalization of the Ψ-learning proposed by . The optimal Ψ-function and policy are: Because it avoids taking a hard max over noisy estimates, Ψ-learning leads to less overestimation of future reward . This improves learning through more stable temporal-difference (TD) updates. Thus, it may be especially useful in the BRL setting for reducing optimism in the face of uncertainty. The Way Off-Policy (WOP) algorithm combines Monte Carlo (MC) target estimation, Ψ-learning, and KL-control from a pre-trained prior. To demonstrate the effectiveness of these techniques, we conduct a series of experiments in traditional RL tasks using the OpenAI gym . Here we show for the CartPole-v0 environment; more are available in the Appendix. We first train an online Qlearning Behavior policy, and store all (s, a, r, s) experience samples into a replay buffer. We use this buffer to train a prior model of p(a|s) using a Variational Auto-encoder (VAE) (details in Appendix). This model is used as a part of both the DBCQ and WOP algorithms. We can use the prior for imitation learning, by sampling actions directly from p(a|s) to obtain Behavioral Cloning (BC). We benchmark all of these techniques against vanilla Q-learning on the batch data (Batch Q). We experiment with four different conditions which vary the quality of the Behavior policy and the replay buffer data: a) Full buffer: all experience samples experienced during online training are used for offline learning; b) Concurrent: the offline learning algorithms see a sliding window of experience samples in the same order that the online learner experienced them; c) Expert demonstrator: the buffer only contains experience generated by a fully trained online learner; and d) Noisy demonstrator: the online learner has a high probability of acting randomly (= 0.3) and is thus a bad model of the optimal policy. Figure 2 shows the . Across conditions, we see that WOP is able to outperform Batch Q, imitation learning (BC), DBCQ, and the original behavior policy. As expected, Imitation learning (BC) underperforms other techniques when the batch contains noisy or inexpert experience samples. However, when the batch contains only expert trajectories, Batch Q fails to learn, because the batch does not cover the full state-action space well, increasing extrapolation error (as illustrated in Figure 1). DBCQ matches or outperforms BC and Batch Q in all scenarios. However, because DBCQ acts by sampling from p(a|s) as learned by the BC model, its performance suffers when the batch data is noisy or imperfect. In contrast, WOP is able to learn to trade-off staying close to the prior and obtaining higher reward, and consistently outperforms all other algorithms in this environment. Here, we tackle the problem of training an open-domain dialog model from human feedback. We consider human interaction to represent the'environment'. The response of a human to the bot's utterance is used to compute a reward signal to train the model. 
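A simplified sketch of the Way Off-Policy update described above, combining a KL-shaped reward with a soft (log-sum-exp) backup in the style of Ψ-learning; the entropy term is absorbed into the soft backup, a single target network replaces the MC-dropout lower bound for brevity, and the reward weight c and all interfaces are assumptions.

```python
import torch
import torch.nn.functional as F

def wop_psi_loss(psi_net, target_psi_net, prior, batch, gamma=0.99, c=2.0):
    """Temporal-difference loss for a KL-controlled, entropy-regularized
    Q-function (Psi-function). `prior(s)` is the pre-trained model giving a
    distribution over actions, used for the log p(a|s) prior term."""
    s, a, r, s_next, done = (batch[k] for k in ("s", "a", "r", "s_next", "done"))

    psi_sa = psi_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        log_p = prior(s).log_prob(a)          # reward staying close to the prior
        shaped_r = c * r + log_p              # KL-control reward shaping
        # Soft backup: log-sum-exp instead of a hard max reduces overestimation.
        soft_v = torch.logsumexp(target_psi_net(s_next), dim=1)
        target = shaped_r + gamma * (1.0 - done) * soft_v

    return F.smooth_l1_loss(psi_sa, target)
```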
The state is the conversation history, composed of a series of conversation turns or utterances, u 1...t, where each utterance is composed of vocabulary tokens. The model attempts to construct a response utterance u π t+1 = [a 1, a 2, ..., a n] by iteratively choosing an action a i as the next token. Applying RL to dialog generation is challenging due to the large state-action space. The number of tokens in the vocabulary of our pre-trained model is 20,000, making the action space very high-dimensional; this further compounds the problem of extrapolation error. We trained over 40 dialog models with different architectures (e.g. Serban et al. (2017b) ), on different datasets, generating models that varied significantly in terms of the distribution of language they learned. We deployed these models to users via a web server that hosts neural network dialog models on GPU for fast, real-time inference: https://neural.chat. The code for the models and the server is available in open-source at <redacted>. Using the server, we collected a batch of human interaction data containing 14232 pairs of user input and agent response. Because learning language online from humans on the internet can in inappropriate behavior (see), learning offline using BRL is imperative. The batch data was used to train the RL models as described in Section 3. Here, we use a pre-trained language model to estimate p(a|s). We also initialize the weights of the Q-network and target Q-network are from the pre-trained model, to combat extrapolation error. The trained RL models were then re-deployed to the web. We recruited 90 Mechanical Turk workers to provide a total of 718 7-point Likert scale ratings of the bots' quality, fluency, diversity, contingency (relatedness), and empathy, after interacting with each bot for at least 3 turns. Participants also had the option to provide explicit feedback through upvoting or downvoting a particular utterance within the interface. We sum these manual votes to create an overall votes score. We note that using this platform to test our models "in the wild" with humans represents a more meaningful test of generalization than testing an RL model in the same limited (game) environment in which it was trained, since humans are not restricted in the text they can type as input to the model. We seek to improve a dialog model's ability to engage in natural conversation with a human by learning from the signals implicit in the way that the human responds. Rather than having the human manually label good performance -which we show in this work does not scale -the agent should recognize informative cues within the user's responses, like sentiment, and the amount of time they spend chatting. Essentially, we want to create an agent that is intrinsically motivated to produce positive reactions in its human conversation partner. To this end, we reward the model for: 1) eliciting positive sentiment, 2) eliciting longer conversations and more words typed (a sign of engagement), 3) eliciting laughter (in the form of typed 'ha's), 4) high semantic similarity between the human input and bot response, and 5) asking questions, since this is an important active listening skill . The total reward given to the agent is a combination of these, with details (and coefficients) in the Appendix. Note that the first 4 types of rewards depend on eliciting positive responses from a human user; we call these the implicit human reward. The 5th reward is easily exploitable by the agent itself. 
These rewards were designed and extracted post-hoc from the batch of human data, and thus learning from them is only possible with effective batch RL, since they had no effect on the policies used to generate the batch. Table 2: Purely reward-maximizing methods like Batch Q (left) diverge away from realistic language (saying phrases like "where did you say to me?") in order to trivially exploit the reward function by asking a question every turn, and using the maximum number of tokens in every sentence. In contrast, KL-control methods (right) output plausible language by staying close to the prior, but shift to using polite, cheerful language to maximize implicit human reward. [To compare models, we not only look at human users' ratings and votes, but also consider the automatic signals detectable from the text itself. This implicit human reward metric aggregates the measures listed in items 1-4 in Section 5.1, and measures the ability to elicit positive responses from a human. Table 1 shows the of the human evaluation, comparing WOP to ablations of itself, Batch Q, and DBCQ. MC Target Q estimation leads to modest improvements in votes and human reward, but does not improve ratings. Using Ψ-learning improves all three. However, the most notable difference in performance comes from KL-control. The KL-control models show substantial gains over the baseline models across both ratings and human reward. We perform a oneway analysis of variance (ANOVA) comparing the KL-control models to the Batch Q baselines and DBCQ on the total human rating score, and find that the KL-control models are significantly better, F (x) = 4.781, p <.05. This validates the hypothesis that KL-control with a strong, pre-trained prior can be used to improve batch RL. As shown in Figure 3, without KL-regularization, the baseline RL models diverge quickly and continuously from the prior, losing information about realistic sequences. This helps explain the poor performance of DBCQ in Table 1. The underlying Q-network in DBCQ does not directly integrate the prior. As Q-learning causes the model to diverge from the prior, the Q-estimates of language generated according to the prior become unrealistic, and Eq. 4 selects unrealistic actions. This in highly'diverse' (random) generated utterances. Although DBCQ performed well in simple domains in Section 4, it does not scale effectively to dialog. The pre-trained prior may be especially important in a generative domain like dialog, where the true reward function is unknown, and so purely maximizing a heuristic reward may lead to lower quality conversations. Table 2 shows examples of conversations with a Batch Q and KL-control model. Because the Batch Q model has no incentive to stay close to realistic language, it learns to exploit the reward by asking a question and outputting the maximum number of tokens every utterance. These sentences contain implausible phrases that do not represent realistic language (e.g. "where did you say to me?"). In contrast, the KL-control model uses fluent language, but shifts its distribution towards cheerful and polite speech, presumably because this is what led to positive human responses in the batch data. In fact, we noticed that all models trained with the implicit human rewards described in Section 5.1 learned to use more cheerful and supportive language. Therefore, we create post-hoc metrics to measure this effect (see the Appendix for details). Figure 4 shows how these metrics, as well as the implicit rewards, differ across models. 
Without KLcontrol, baseline methods like Batch Q exploit simple rewards like asking questions at the expense of realistic language, explaining their poor quality ratings. In contrast, KL-control models learn to rely more on realistic but polite, supportive, and cheerful dialog to elicit higher total human reward. Table 3 presents the of WOP models trained with only a single reward function, ordered from lowest to highest quality. Notably, extracting multiple different reward functions post-hoc from a batch of data and training on these independently is only possible with effective BRL. Investigating which rewards presented are most critical to achieving high-quality conversations with humans, we note that maximizing positive and minimizing negative sentiment in the user turns out to lead to the highest quality bot. This underscores the importance of affective signals as cues for good conversation. Bots trained on the manual upvotes and downvotes provided by users on the utterance level fail to achieve similarly high performance. Even though users were instructed to make use of the vote feature, the task is burdensome, and users did not vote frequently enough to provide a good training signal. This validates the hypothesis that implicit signals of human enjoyment (such as sentiment) are a more scalable way to learn from human preferences. This paper presents the Way Off-Policy (WOP) algorithm, which improves performance when learning off-policy without the possibility to explore -i.e. batch RL (BRL). We are the first to propose using KL-control from a strong prior model pre-trained on data as a way to avoid extrapolation and instability in BRL. Our on traditional RL tasks demonstrate that our WOP algorithm provides performance improvements over state-of-the-art BRL techniques, and the in dialog generation show that KL-control is critical to achieving good performance in this real-world, highdimensional setting. In a generative domain such as dialog, the true reward function is not known, and trivially exploiting the rewards can actually lead to worse performance. Thus, KL-control may be particularly necessary to ensure samples remain realistic and close to the data distribution. We propose several reward functions that could allow an open-domain dialog generation model to learn from rich cues implicit in human interaction, where learning from expressed sentiment was most promising. We find that maximizing implicit rewards leads to better performance than relying on explicit feedback. We hope that the techniques presented here can improve learning with RL from offline data, making it easier to apply RL to safety-critical settings such as human interaction. A APPENDIX The total reward used to train the bots is a combination of the rewards described below, in the following proportions: 0.15682657 * question + 0.13837638 * semantic coherence + 0.15313653 * laughter + 0.14206642 * sentiment transition + 0.14206642 * sentiment + 0.14760148 * words elicited + 0.1199262 * conversation length. To compute sentiment on short texts like conversation utterances, we leverage a state-of-the-art sentiment-detection model, which was trained on a massive amount of Twitter data to predict the emojis in tweets . Transfer learning from this model to other tasks showed that it was able to significantly outperform a series of sentiment, irony, and sarcasm benchmarks. This DeepMoji model outputs a probability distribution over 64 most-frequently used emojis as shown in Figure 5. 
After observing the performance of the model in detecting users' emotions in the domain of online chat, we define a set of weights over the emojis and calculate the weighted sum over an emotion embedding vector to derive a sentiment reward which is higher for positive sentiment and lower for negative sentiment. These weights are shown in Figure 5 (b). We also compute a sentiment-transition reward using the same score based on whether the peak positive sentiment occurred later in the conversation than the peak negative sentiment, reasoning that sentiment should improve over the course of the conversation. Based on prior work , we use the number of turns in the conversation as an indicator of the quality of the bot's performance. To distribute this reward over every utterance in the conversation, we take the total conversation length N, and compute the discounted reward for utterance n < N as γ N −n N. We also reward each utterance with the number of words in the user's response, which we refer to as the words elicited. Laughter has been shown to be very important to human affiliation and solidarity . Therefore, we detect the number of occurrences of the string'ha' in the user's response, and use this as a reward. Interestingly, we find that bots trained to maximize user laughter learn to be extremely supportive and cheerful compared to other bots (for definitions of supportive and cheerful, see Section ??). Language style matching has been shown to be a strong predictor of relationship initiation and stability . While it would be ideal if our chatbots could intelligently adapt their conversation style to a new user, in reality most baseline dialog models struggle to maintain topic coherence, even over a few utterances (for an analysis of this effect, see ). Therefore we reward semantic similarity between the user's input and the bot's response, to encourage the bot to stay on topic and produce reasonable answers. This score is computing by leveraging a state-of-the-art sentence embedding model , and penalizing distance in embedding space. Asking questions is an important listening skill, and is linked to conversation management, attentiveness, and responsiveness . Therefore, we give the bot a reward of 0.5 if the utterance contains a question word (how, what, where, why, when, who), and an additional 0.5 if it contains a question mark. After training the bots on these rewards, we noticed a shift in the distribution of their language towards more polite, cheerful, and supportive speech. Therefore, we designed post-hoc metrics to measure these qualities, which are based on counting whether a subset of phrases is present in an utterance. Politeness phrases: if I may; may I; please; thanks; no worries; if you don't mind; have a great day; I'm sorry. Supportive phrases: you're right; you are right; you're not alone; you are not alone; congrats; that's a good idea; that is a good idea; you'll be fine; you will be fine; you'll be okay; you will be okay; it will get better; sorry you're going through; sorry you are going through; if it makes you feel better; if it makes you feel any better; keep your head up; keep it up; I'm in a similar situation; I am in a similar situation; you'll get it; you will get it; happy for you; I'm in the same boat; I am in the same boat; if you feel like you need to vent. Cheerful phrases: nice to hear; happy; excited; really nice; glad; the best; great; good time; looking forward; beautiful. 
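A sketch of how some of the implicit rewards above can be computed from raw text; the mixture weights are those listed earlier, the sentiment and semantic-similarity components (which require the DeepMoji and sentence-embedding models) are omitted, and the discounting scheme for the conversation-length reward follows our reading of the description above.

```python
import re

# Mixture weights reported above (question, semantic coherence, laughter,
# sentiment transition, sentiment, words elicited, conversation length).
WEIGHTS = {
    "question": 0.15682657, "coherence": 0.13837638, "laughter": 0.15313653,
    "sent_transition": 0.14206642, "sentiment": 0.14206642,
    "words_elicited": 0.14760148, "conv_length": 0.1199262,
}

QUESTION_WORDS = ("how", "what", "where", "why", "when", "who")

def question_reward(bot_utterance: str) -> float:
    """0.5 for containing a question word, plus 0.5 for a question mark."""
    words = bot_utterance.lower().split()
    r = 0.5 if any(w in words for w in QUESTION_WORDS) else 0.0
    return r + (0.5 if "?" in bot_utterance else 0.0)

def laughter_reward(user_reply: str) -> float:
    """Count occurrences of 'ha' in the user's reply."""
    return float(len(re.findall("ha", user_reply.lower())))

def conversation_length_reward(n: int, total: int, gamma: float = 0.5) -> float:
    """Distribute the episode-level length reward over utterance n of N,
    discounted so that later utterances receive more credit (an assumption)."""
    return gamma ** (total - n) * total

def total_reward(parts: dict) -> float:
    """Weighted sum of the individual reward components already computed."""
    return sum(WEIGHTS[k] * parts.get(k, 0.0) for k in WEIGHTS)
```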
All Q-networks shared the same underlying architecture: three fully-connected layers of size, with ReLU activation between. The model of p(a|s) was learned with a Variational Autoencoder which attempted to reconstruct the next state given the current state, p(s |s), using a meansquared error loss. Both the encoder and decoder were made up of two linear layers with 750 neurons each. The latent dimension of the VAE was size 256. The next action, was predicted from the latent embedding z, meaning the model learned three functions: z = f e (s), s = f d (z), and a = f a (z) All models were trained with the Adam optimizer. For each experiment, we ran 50 trials of each model with a different random seed each time. The Behavior policy was trained for a total of 20,000 steps in the environment, so in the Full buffer condition offline agents saw 20,000 experience samples. The Behavior policy typically converged before 10,000 steps, so in the Expert demonstrator condition the offline agents received the last 10,000 experience samples from the trained agent. In the Concurrent condition, offline agents saw a moving window of 1000 samples, since the online learner only used the most recent 1000 samples in the buffer for learning. The learning rate was.001, γ =.99, and decayed linearly from 1.0 to.01 over 2000 steps. The KL-constraint was computed as D KL [q(τ)||p(τ)] = α log p(a|s) − β log π(a|s), where α = 0.5 and β = 0.1. DBCQ sampled n = 2 actions before selecting the best action based on the maximum Q-value; note that in this environment there are only 2 actions. The underlying architecture of the language models employed for this work is a Variational Hierarchical Recurrent Encoder Decoder (VHRED) (b), with additional knowledge distillation to improve the model's ability to track the sentiment and semantics of the conversation, as proposed by. The language models were originally trained on two datasets: movie dialogs and a dataset scraped from reddit.com/r/casual_conversation . The RL models were initialized with the weights of the best model trained on the Reddit dataset. RL models were trained for between 800 and 1000 batches of data, where the batch size was fixed at 32. Early stopping was used to determine the number of training iterations of the best checkpoint. All other hyperparameters were shared between RL models, and were as follows: discount γ = 0.5, weight placed on RL reward vs. KL-divergence term c = 2, number of Monte Carlo samples of the Target Q-network M = 5, target network update rate α =.005, learning rate r =.0001. We used a smooth L1 loss function to approximate the Q-values, and clipped gradients at a value of 1.0. The underlying parameters of the VHRED model were as follows: Context RNN hidden size = 1000, decoder hidden size = 1250, encoder hidden size = 1250, z embedding size = 600, gradient clip = 1.0, dropout d = 0.2. The maximum conversation length was fixed at 5 utterances (context from more than 5 utterances ago was discarded), and the maximum sentence length was 30 tokens. We also added layers to the Context RNN and regularized it to be able to predict the semantic content of the input utterance using a form of knowledge distillation from a state-ofthe-art sentence-embedding model . There were 2 additional feedforward semantic prediction prediction layers of size 128, which used ReLu activation. We ran additional experiments in another standard OpenAI gym environment, Acrobot-v1; Figure 6 shows the . 
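For concreteness, a sketch of the prior model p(a|s) described above, with a two-layer encoder and decoder of width 750, a latent of size 256, and an action head on the latent; the stochastic reparameterization of the VAE latent and the training losses are omitted, and any unstated sizes or activations are assumptions.

```python
import torch
import torch.nn as nn

class BatchPriorVAE(nn.Module):
    """Encode the state, decode the next state (MSE reconstruction), and
    predict the action from the latent: z = f_e(s), s' = f_d(z), a = f_a(z)."""
    def __init__(self, state_dim, num_actions, hidden=750, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent))      # z = f_e(s)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, state_dim))   # s' = f_d(z)
        self.action_head = nn.Linear(latent, num_actions)            # a = f_a(z)

    def forward(self, s):
        z = self.encoder(s)
        return self.decoder(z), self.action_head(z)  # (predicted s', action logits)

    def log_prob_action(self, s, a):
        """log p(a|s), the term used as the KL-control prior."""
        logits = self.action_head(self.encoder(s))
        return torch.log_softmax(logits, dim=-1).gather(1, a.unsqueeze(1)).squeeze(1)
```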
The experimental setup and hyperparameters remained the same as in the first experiment, except that we used a smaller VAE (the encoder and decoder had only one layer of size 256 each, and the latent dimension was 64), the weight on the prior was α = 0.5 and the entropy term was β = 0.1, and we used the Q-learning loss rather than Ψ-learning. Figure 7: Normalized reward scores obtained by models trained with respect to different rewards. We see that the bot trained to ask questions is easily able to exploit this reward, and similarly the bot trained to elicit positive sentiment does so successfully. For the rest of the bots, the relationship is less clear. For example, the bot trained to elicit laughter becomes the most supportive and cheerful, while the bot trained to elicit more words is very polite. Figure 7 shows the normalized reward scores obtained bots trained with respect to different rewards. We see that in testing the bots in the wild, with a new set of users, bots trained to optimize certain types of human response (e.g. words elicited) do not perform best on this task. We hypothesize this is because the relatively small size of batch date we were able to collect (≈ 14, 000 utterances) does not give the bots enough information about how to elicit long responses from users. In addition to the comparisons between RL models, we also benchmarked the RL methods against the MLE prior. While we found that the KL-control methods exceed the prior in obtaining high implicit reward signals from humans (e.g. eliciting positive sentiment, laughter, more words, etc), we found that this did not lead to them being rated significantly higher in quality by human judges. Even though the RL bots successfully learned to conduct conversation to optimize these implicit signals, we believe the fact that this did not lead to higher quality ratings highlights the idea that the reward functions we are optimizing do not fully cover what it means to have a high quality conversation with a human user. We note that these rewards are only a first step, and hope other researchers will be able to use the techniques proposed in this paper to learn from better measures of human enjoyment. To collect data from humans interacting with our bots, we built https://neural.chat, a platform for hosting deep neural network dialog models online on GPU for fast, real-time inference. Figure 8) shows an example of the interface, in which users are able to rate the bots after talking to them for at least three turns. The server was hosted on a Google Cloud Platform virtual instance with 64GB of RAM and a NVIDIA Tesla P100 graphics card. The backend was a Django program being served by NGINX and uWSGI. For simplicity, we opted to have the Django process import the chatbots into the same Python process as Django, rather than have the two connect to each other via other means such as sockets. This configuration decreased development time and increased reliability, but it would need to be revisited if the server needed to scale several orders of magnitude past what was required for this study. The current configuration was still able to support hundreds of simultaneous users and host more than 30 bots concurrently. The chatbots were kept in a separate project from the Django project and maintained separately from the server code. Each chatbot extended an abstract class that defined key methods for the Django program to use, and was registered to a globally accessible dictionary via a decorator. 
The Django project was provided the path to the Chatbots project in its PYTHONPATH, so it could import the dictionary in which all the chatbot objects had been registered and use that to dynamically determine which chatbots were available and to access them in its views. It is important to note that the chatbots used PyCUDA, and PyCUDA does not work in a multiprocessing environment. Because of this, uWSGI needed to be configured to only have one python process and to disable any attempt at multiprocessing. Furthermore, the chatbots required substantial startup times, so all chatbots are kept in memory at all times in the Django process. In order to keep all the chatbots in memory concurrently, we needed a very high amount of RAM on our server and opted for a 64GB virtual instance, and a GPU with 16GB RAM. This combination of CUDA to run the chatbots on the GPU with a high amount of RAM to keep all bots in memory at the same time ed in incredibly fast server response times, with effectively no increase in response time when using the bots in requests compared to requests that did not. For further information and instructions on server configuration, please read the server documentation available at ¡URL redacted for anonymity¿. We hope that this platform will allow others to host their own bots and evaluate them in an interactive setting. | We show that KL-control from a pre-trained prior can allow RL models to learn from a static batch of collected data, without the ability to explore online in the environment. | 1,071 | scitldr |
Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL). An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation. To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy. This modular approach enables integration of state-of-the-art algorithms for variational inference or RL. Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot. In diverse experimental domains with a single episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer. One salient feature of human intelligence is the ability to perform well in a single attempt at a new task instance, by recognizing critical characteristics of the instance and immediately executing appropriate behavior based on experience in similar instances. Artificial agents must do likewise in applications where success must be achieved in one attempt and failure is irreversible. This problem setting, single episode transfer, imposes a challenging constraint in which an agent experiences-and is evaluated on-only one episode of a test instance. As a motivating example, a key challenge in precision medicine is the uniqueness of each patient's response to therapeutics (; ;). Adaptive therapy is a promising approach that formulates a treatment strategy as a sequential decision-making problem (; ;). However, heterogeneity among instances may require explicitly accounting for factors that underlie individual patient dynamics. For example, in the case of adaptive therapy for sepsis , predicting patient response prior to treatment is not possible. However, differences in patient responses can be observed via blood measurements very early after the onset of treatment . As a first step to address single episode transfer in reinforcement learning (RL), we propose a general algorithm for near-optimal test-time performance in a family of environments where differences in dynamics can be ascertained early during an episode. Our key idea is to train an inference model and a probe that together achieve rapid inference of latent variables-which account for variation in a family of similar dynamical systems-using a small fraction (e.g., 5%) of the test episode, then deploy a universal policy conditioned on the estimated parameters for near-optimal control on the new instance. Our approach combines the advantages of robust transfer and adaptation-based transfer, as we learn a single universal policy that requires no further training during test, but which is adapted to the new environment by conditioning on an unsupervised estimation of new latent dynamics. In contrast to methods that quickly adapt or train policies via gradients during test but assume access to multiple test rollouts and/or dense rewards (; ;), we explicitly optimize for performance in one test episode without accessing the reward function at test time. 
Hence our method applies to real-world settings in which rewards during test are highly delayed or even completely inaccessible-e.g., a reward that depends on physiological factors that are accessible only in simulation and not from real patients. We also consider computation time a crucial factor for real-time application, whereas some existing approaches require considerable computation during test . Our algorithm builds on variational inference and RL as submodules, which ensures practical compatibility with existing RL workflows. Our main contribution is a simple general algorithm for single episode transfer in families of environments with varying dynamics, via rapid inference of latent variables and immediate execution of a universal policy. Our method attains significantly higher cumulative rewards, with orders of magnitude faster computation time during test, than the state-of-the-art model-based method , on benchmark high-dimensional domains whose dynamics are discontinuous and continuous in latent parameters. We also show superior performance over optimization-based meta-learning and favorable performance versus baselines for robust transfer. Our goal is to train a model that performs close to optimal within a single episode of a test instance with new unknown dynamics. We formalize the problem as a family (S, A, T, R, γ), where (S, A, R, γ) are the state space, action space, reward function, and discount of an episodic Markov decision process (MDP). Each instance of the family is a stationary MDP with transition function T z (s |s, a) ∈ T. When a set Z of physical parameters determines transition dynamics , each T z has a hidden parameter z ∈ Z that is sampled once from a distribution P Z and held constant for that instance. For more general stochastic systems whose modes of behavior are not easily attributed to physical parameters, Z is induced by a generative latent variable model that indirectly associates each T z to a latent variable z learned from observed trajectory data. We refer to "latent variable" for both cases, with the clear ontological difference understood. Depending on application, T z can be continuous or discontinuous in z. We strictly enforce the challenging constraint that latent variables are never observed, in contrast to methods that use known values during training , to ensure the framework applies to challenging cases without prior knowledge. This formulation captures a diverse set of important problems. Latent space Z has physical meaning in systems where T z is a continuous function of physical parameters (e.g., friction and stiffness) with unknown values. In contrast, a discrete set Z can induce qualitatively different dynamics, such as a 2D navigation task where z ∈ {0, 1} decides if the same action moves in either a cardinal direction or its opposite . Such drastic impact of latent variables may arise when a single drug is effective for some patients but causes serious side effects for others . Training phase. Our training approach is fully compatible with RL for episodic environments. We sample many instances, either via a simulator with controllable change of instances or using off-policy batch data in which demarcation of instances-but not values of latent variables-is known, and train for one or more episodes on each instance. 
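A minimal sketch of this setup: a family object that samples a hidden parameter once per instance and never exposes it to the agent; the environment constructor and sampler are hypothetical placeholders, not part of our benchmarks.

```python
import numpy as np

class HiddenParamFamily:
    """A family of MDPs sharing (S, A, R, gamma) but differing in a hidden
    parameter z, sampled once per instance and held fixed thereafter."""
    def __init__(self, make_env, sample_z):
        self.make_env = make_env   # z -> environment with transition T_z
        self.sample_z = sample_z   # draws z ~ P_Z

    def new_instance(self, seed=None):
        rng = np.random.default_rng(seed)
        z = self.sample_z(rng)     # held constant for every episode of this instance
        return self.make_env(z)    # the agent sees states and rewards, never z
```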
While we focus on the case with known change of instances, the rare case of unknown demarcation can be approached either by preprocessing steps such as clustering trajectory data or using a dynamic variant of our algorithm (Appendix C). Single test episode. In contrast to prior work that depend on the luxury of multiple experience rollouts for adaptation during test time (; ; ;), we introduce the strict constraint that the trained model has access to-and is evaluated on-only one episode of a new test instance. This reflects the need to perform near-optimally as soon as possible in critical applications such as precision medicine, where an episode for a new patient with new physiological dynamics is the entirety of hospitalization. We present Single Episode Policy Transfer (SEPT), a high-level algorithm for single episode transfer between MDPs with different dynamics. The following sections discuss specific design choices in SEPT, all of which are combined in synergy for near-optimal performance in a single test episode. Our best theories of natural and engineered systems involve physical constants and design parameters that enter into dynamical models. This physicalist viewpoint motivates a partition for transfer learning in families of MDPs: 1. learn a representation of latent variables with an inference model that rapidly encodes a vectorẑ of discriminative features for a new instance; 2. train a universal policy π(a|s, z) to perform near-optimally for dynamics corresponding to any latent variable in Z; 3. immediately deploy both the inference model and universal policy on a given test episode. To build on the generality of model-free RL, and for scalability to systems with complex dynamics, we do not expend computational effort to learn a model of T z (s |s, a), in contrast to model-based approaches . Instead, we leverage expressive variational inference models to represent latent variables and provide uncertainty quantification. In domains with ground truth hidden parameters, a latent variable encoding is the most succinct representation of differences in dynamics between instances. As the encodingẑ is held constant for all episodes of an instance, a universal policy π(a|s, z) can either adapt to all instances when Z is finite, or interpolate between instances when T z is continuous in z . Estimating a discriminative encoding for a new instance enables immediate deployment of π(a|s, z) on the single test episode, bypassing the need for further fine-tuning. This is critical for applications where further training complex models on a test instance is not permitted due to safety concerns. In contrast, methods that do not explicitly estimate a latent representation of varied dynamics must use precious experiences in the test episode to tune the trained policy . In the training phase, we generate an optimized of short trajectories, where each Tp ) is a sequence of early state-action pairs at the start of episodes of instance T i ∈ T (e.g. T p = 5). We train a variational auto-encoder, comprising an approximate posterior inference model q φ (z|τ) that produces a latent encodingẑ from τ and a parameterized generative model p ψ (τ |z). The dimension chosen forẑ may differ from the exact true dimension when it exists but is unknown; domain knowledge can aid the choice of dimensionality reduction. 
Because dynamics of a large variety of natural systems are determined by independent parameters (e.g., coefficient of contact friction and Reynolds number can vary independently), we consider a disentangled latent representation where latent units capture the effects of independent generative parameters. To this end, we bring β-VAE into the context of families of dynamical systems, choosing an isotropic unit Gaussian as the prior and imposing the constraint D KL (q φ (z|τ i) p(z)) <. The β-VAE is trained by maximizing the variational lower bound L(ψ, φ; τ i) for each τ i across D: This subsumes the VAE as a special case (β = 1), and we refer to both as VAE in the following. Since latent variables only serve to differentiate among trajectories that arise from different transition functions, the meaning of latent variables is not affected by isometries and hence the value ofẑ by itself need not have any simple relation to a physically meaningful z even when one exists. Only the partition of latent space is important for training a universal policy. Earlier methods for a family of similar dynamics relied on Bayesian neural network (BNN) approximations of the entire transition function s t+1 ∼T (BNN) z (s t, a t), which was either used to perform computationally expensive fictional rollouts during test time or used indirectly to further optimize a posterior over z . Our use of variational inference is more economical: the encoder q φ (z|τ) can be used immediately to infer latent variables during test, while the decoder p ψ (τ |z) plays a crucial role for optimized probing in our algorithm (see Section 3.3). In systems with ground truth hidden parameters, we desire two additional properties. The encoder should produce low-variance encodings, which we implement by minimizing the entropy of q φ (z|τ): under a diagonal Gaussian parameterization, where σ 2 d = Var(q φ (z|τ)) and dim(z) = D. We add −H(q φ (z|τ)) as a regularizer to equation 1. Second, we must capture the impact of z on higher-order dynamics. While previous work neglects the order of transitions (s t, a t, s t+1) in a trajectory , we note that a single transition may be compatible with multiple instances whose differences manifest only at higher orders. In general, partitioning the latent space requires taking the ordering of a temporally-extended trajectory into account. Therefore, we parameterize our encoder q φ (z|τ) using a bidirectional LSTM-as both temporal directions of (s t, a t) pairs are informative-and we use an LSTM decoder p ψ (τ |z) (architecture in Appendix E.2). In contrast to embedding trajectories from a single MDP for hierarchical learning , our purpose is to encode trajectories from different instances of transition dynamics for optimal control. We train a single universal policy π(a|s, z) and deploy the same policy during test (without further optimization), for two reasons: robustness against imperfection in latent variable representation and significant improvement in scalability. Earlier methods trained multiple optimal policies {π * on training instances with a set {z i} N i=1 of hidden parameters, then employed either behavioral cloning or off-policy Q-learning to train a final policy π(a|s, z) using a dataset {(s t,ẑ i ; a t ∼ π * i (a|s t))}. 
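A sketch of the trajectory encoder and the regularized β-VAE objective described above, with a bidirectional LSTM over concatenated (s_t, a_t) inputs; layer sizes and the exact parameterization are assumptions, not the configuration in Appendix E.2.

```python
import math
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """q_phi(z|tau): bidirectional LSTM over a short probe trajectory,
    outputting a diagonal Gaussian over the latent z."""
    def __init__(self, sa_dim, hidden=64, z_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(sa_dim, hidden, bidirectional=True, batch_first=True)
        self.mu = nn.Linear(2 * hidden, z_dim)
        self.logvar = nn.Linear(2 * hidden, z_dim)

    def forward(self, traj):                       # traj: [batch, T_p, sa_dim]
        _, (h_n, _) = self.lstm(traj)              # final hidden state of each direction
        summary = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.mu(summary), self.logvar(summary)

def sept_vae_loss(mu, logvar, recon_log_prob, beta=1.0, entropy_coef=0.0):
    """Negative of the beta-VAE lower bound minus the encoder-entropy
    regularizer (minimizing H(q) encourages low-variance encodings).
    `recon_log_prob` is log p_psi(tau|z) from the LSTM decoder."""
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    # Entropy of a diagonal Gaussian: 0.5 * sum_d log(2*pi*e*sigma_d^2).
    entropy = 0.5 * torch.sum(logvar + math.log(2 * math.pi * math.e), dim=-1)
    elbo = recon_log_prob - beta * kl
    return -(elbo - entropy_coef * entropy).mean()
```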
However, this supervised training scheme may not be robust : if π(a|s, z) were trained only using instance-specific optimal state-action pairs generated by π * i (a|s) and posterior samples ofẑ from an optimal inference model, it may not generalize well when faced with states and encodings that were not present during training. Moreover, it is computationally infeasible to train a collection {π -which is thrown away during test-when faced with a large set of training instances from a continuous set Z. Instead, we interleave training of the VAE and a single policy π(a|s, z), benefiting from considerable computation savings at training time, and higher robustness due to larger effective sample count. To execute near-optimal control within a single test episode, we first rapidly computeẑ using a short trajectory of initial experience. This is loosely analogous to the use of preliminary medical treatment to define subsequent prescriptions that better match a patient's unique physiological response. Our goal of rapid inference motivates two algorithmic design choices to optimize this initial phase. First, the trajectory τ used for inference by q φ (z|τ) must be optimized, in the sense of machine teaching , as certain trajectories are more suitable than others for inferring latent variables that underlie system dynamics. If specific degrees of freedom are impacted the most by latent variables, an agent should probe exactly those dimensions to produce an informative trajectory for inference. Conversely, methods that deploy a single universal policy without an initial probing phase can fail in adversarial cases, such as when the initial placeholderẑ used in π θ (a|s, ·) at the start of an instance causes failure to exercise dimensions of dynamics that are necessary for inference. Second, the VAE must be specifically trained on a dataset D of short trajectories consisting of initial steps of each training episode. We cannot expend a long trajectory for input to the encoder during test, to ensure enough remaining steps for control. Hence, single episode transfer motivates the machine teaching problem of learning to distinguish among dynamics: our algorithm must have learned both to generate and to use a short initial trajectory to estimate a representation of dynamics for control. Our key idea of optimized probing for accelerated latent variable inference is to train a dedicated probe policy π ϕ (a|s) to generate a dataset D of short trajectories at the beginning of all training episodes, such that the VAE's performance on D is optimized 2. Orthogonal to training a meta-policy for faster exploration during standard RL training , our probe and VAE are trained for the purpose of performing well on a new test MDP. For ease of exposition, we discuss the case with access to a simulator, but our method easily allows use of off-policy batch data. We start each training episode using π ϕ for a probe phase lasting T p steps, record the probe trajectory τ p into D, train the VAE using minibatches from D, then use τ p with the encoder to generateẑ for use by π θ (a|s,ẑ) to complete the remainder of the episode (Algorithm 1). At test time, SEPT only requires lines 5, 8, and 9 in Algorithm 1 (training step in 9 removed; see Algorithm 2). 
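The test-time procedure (lines 5, 8, and 9 of Algorithm 1) can be sketched as follows; the gym-style environment interface, the policy methods, and the encode_trajectory helper (which stacks the (s, a) pairs into a tensor) are assumptions.

```python
import torch

def run_test_episode(env, probe_policy, encoder, control_policy, probe_steps=5):
    """Single-episode deployment: probe briefly, infer z-hat once, then hand
    control to the universal policy. No rewards are needed for inference."""
    s, done, total_reward, probe_traj = env.reset(), False, 0.0, []

    # 1) Probe phase: a few steps with the dedicated probe policy.
    for _ in range(probe_steps):
        a = probe_policy.act(s)
        s_next, r, done, _ = env.step(a)
        probe_traj.append((s, a))
        total_reward += r
        s = s_next
        if done:
            break

    # 2) One-shot inference of the latent dynamics encoding (mean of q_phi).
    with torch.no_grad():
        z_hat, _ = encoder(encode_trajectory(probe_traj))

    # 3) Control phase: universal policy conditioned on z-hat, no fine-tuning.
    while not done:
        a = control_policy.act(s, z_hat)
        s, r, done, _ = env.step(a)
        total_reward += r
    return total_reward
```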
The reward function for π_ϕ is defined as the VAE objective, approximated by the variational lower bound: R_p(τ) := L(ψ, φ; τ). This gives the probe policy an incentive to probe in a way that helps the VAE's inference of latent variables that distinguish different dynamics (Figure 1).

Algorithm 1 (training phase)
1: procedure
2: Initialize encoder φ, decoder ψ, probe policy ϕ, control policy θ, and trajectory buffer D
3: for each instance T_z with transition function sampled from T do
4: for each episode on instance T_z do
5: Execute π_ϕ for T_p steps and store trajectory τ_p into D
6: Use the variational lower bound as the reward to train π_ϕ by descending the gradient in equation 3
7: Train the VAE using minibatches from D, by gradient ascent on the lower bound and descent on the encoder entropy
8: Estimate ẑ from τ_p using encoder q_φ(z|τ)
9: Execute π_θ(a|s, z) with ẑ for the remaining time steps and train it with a suitable RL algorithm
10: end for
11: end for
12: end procedure

Figure 1: π_ϕ learns to generate an optimal dataset for the VAE, whose performance is the reward for π_ϕ. The encoding ẑ produced by the VAE is given to the control policy π_θ.

We provide detailed justification as follows. First we state a result derived in Appendix A.

Proposition 1. Let p_ϕ(τ) denote the distribution of trajectories induced by π_ϕ. Then the gradient of the entropy H(p_ϕ(τ)) is given by

\nabla_\phi H(p_\phi(\tau)) = -\mathbb{E}_{\tau \sim p_\phi}\Big[ \log p_\phi(\tau) \sum_{t=0}^{T_p - 1} \nabla_\phi \log \pi_\phi(a_t|s_t) \Big].   (3)

Noting that dataset D follows distribution p_ϕ and that the VAE is exactly trained to maximize the log probability of D, we use L(ψ, φ; τ) as a tractable lower bound on log p_ϕ(τ). Crucially, to generate optimal probe trajectories for the VAE, we take a minimum-entropy viewpoint and descend the gradient. This is the opposite of a maximum-entropy viewpoint that encourages the policy to generate diverse trajectories, which would minimize log p_ϕ(τ) and produce an adversarial dataset for the VAE; hence, optimal probing is not equivalent to diverse exploration. The degenerate case of π_ϕ learning to "stay still" for minimum entropy is precluded by any source of environmental stochasticity: trajectories from different instances will still differ, so degenerate trajectories result in low VAE performance. Finally, we observe that equation 3 is the defining equation of a simple policy gradient algorithm for training π_ϕ, with log p_ϕ(τ) interpreted as the cumulative reward of a trajectory generated by π_ϕ. This completes our justification for defining the reward R_p(τ) := L(ψ, φ; τ). We also show empirically in ablation experiments that this reward is more effective than choices that encourage high perturbation of state dimensions or high entropy (Section 6).

The VAE objective function may not perfectly evaluate a probe trajectory generated by π_ϕ, because the objective value increases due to VAE training regardless of π_ϕ. To give a more stable reward signal to π_ϕ, we can use a second VAE whose parameters slowly track the main VAE according to ψ′ ← αψ′ + (1 − α)ψ for α ∈ [0, 1], and similarly for φ. While analogous to target networks in DQN, the difference is that our second VAE is used to compute the reward for π_ϕ.

Transfer learning in a family of MDPs with different dynamics manifests in various formulations. Analysis of ε-stationary MDPs and ε-MDPs provides theoretical grounding by showing that an RL algorithm that learns an optimal policy in an MDP can also learn a near-optimal policy for multiple transition functions (Kalmár et al., 1998). Imposing more structure, the hidden-parameter Markov decision process (HiP-MDP) formalism posits a space of hidden parameters that determine transition dynamics, and implements transfer by model-based policy training after inference of latent parameters.
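Returning to the probe update: a sketch of the policy-gradient step implied by equation 3 with R_p(τ) = L(ψ, φ; τ) as the trajectory return, together with the slowly-tracking second VAE; the policy interface and the value of α are assumptions.

```python
import torch

def probe_policy_gradient_loss(probe_policy, trajectory, elbo_value):
    """REINFORCE-style loss for the probe policy: the scalar variational lower
    bound of the probe trajectory acts as the trajectory return. Evaluating the
    bound with a slowly-updated copy of the VAE (below) stabilizes the reward."""
    log_probs = torch.stack([probe_policy.log_prob(s, a) for s, a in trajectory])
    # Descending this loss ascends E[ R_p(tau) * sum_t grad log pi(a_t|s_t) ].
    return -(elbo_value.detach() * log_probs.sum())

def polyak_update(target_vae, vae, alpha=0.995):
    """Track the main VAE slowly: psi' <- alpha * psi' + (1 - alpha) * psi."""
    with torch.no_grad():
        for p_t, p in zip(target_vae.parameters(), vae.parameters()):
            p_t.mul_(alpha).add_((1.0 - alpha) * p)
```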
Our work considers HiP-MDP as a widely applicable yet special case of a general viewpoint, in which the existence of hidden parameters is not assumed but rather is induced by a latent variable inference model. The key structural difference from POMDPs is that given fixed latent values, each instance from the family is an MDP with no hidden states; hence, unlike in POMDPs, tracking a history of observations provides no benefit. In contrast to multi-task learning , which uses the same tasks for training and test, and in contrast to parameterized-skill learning , where an agent learns from a collection of rewards with given task identities in one environment with fixed dynamics, our training and test MDPs have different dynamics and identities of instances are not given. Prior latent variable based methods for transfer in RL depend on a multitude of optimal policies during training , or learn a surrogate transition model for model predictive control with real-time posterior updates during test . Our variational model-free approach does not incur either of these high computational costs. We encode trajectories to infer latent representation of differing dynamics, in contrast to state encodings in . Rather than formulating variational inference in the space of optimal value functions , we implement transfer through variational inference in a latent space that underlies dynamics. Previous work for transfer across dynamics with hidden parameters employ model-based RL with Gaussian process and Bayesian neural network (BNN) models of the transition function , which require computationally expensive fictional rollouts to train a policy from scratch during test time and poses difficulties for real-time test deployment. DPT uses a fully-trained BNN to further optimize latent variable during a single test episode, but faces scalability issues as it needs one optimal policy per training instance . In contrast, our method does not need a transition function and can be deployed without optimization during test. Methods for robust transfer either require access to multiple rounds from the test MDP during training , or require the distribution over hidden variables to be known or controllable . While meta-learning (; ; ;) in principle can take one gradient step during a single test episode, prior empirical evaluation were not made with this constraint enforced, and adaptation during test is impossible in settings without dense rewards. We conducted experiments on three benchmark domains with diverse challenges to evaluate the performance, speed of reward attainment, and computational time of SEPT versus five baselines in the single test episode. We evaluated four ablation and variants of SEPT to investigate the necessity of all algorithmic design choices. For each method on each domain, we conducted 20 independent training runs. For each trained model, we evaluate on M independent test instances, all starting with the same model; adaptations during the single test episode, if done by any method, are not preserved across the independent test instances. This means we evaluate on a total of 20M independent test instances per method per domain. Hyperparameters were adjusted using a coarse coordinate search on validation performance. We used DDQN with prioritized replay as the base RL component of all methods for a fair evaluation of transfer performance; other RL algorithms can be readily substituted. Domains. We use the same continuous state discrete action HiP-MDPs proposed by for benchmarking. 
Each isolated instance from each domain is solvable by RL, but it is highly challenging, if not impossible, for naïve RL to perform optimally for all instances because significantly different dynamics require different optimal policies. In 2D navigation, dynamics are discontinuous in z ∈ {0, 1} as follows: location of barrier to goal region, flipped effect of actions (i.e., depending on z, the same action moves in either a cardinal direction or its opposite), and direction of a nonlinear wind. In Acrobot , the agent applies {+1, 0, −1} torques to swing a two-link pendulum above a certain height. Dynamics are determined by a vector z = (m 1, m 2, l 1, l 2) of masses and lengths, centered at 1.0. We use four unique instances in training and validation, constructed by sampling ∆z uniformly from {−0. extrapolation. In HIV, a patient's state dynamics is modeled by differential equations with high sensitivity to 12 hidden variables and separate steady-state regions of health, such that different patients require unique treatment policies . Four actions determine binary activation of two drugs. We have M = 10, 5, 5 for 2D navigation, Acrobot, and HIV, respectively. Baselines. First, we evaluated two simple baselines that establish approximate bounds on test performance of methods that train a single policy: as a lower bound, Avg trains a single policy π(a|s) on all instances sampled during training and runs directly on test instances; as an upper bound in the limit of perfect function approximation for methods that use latent variables as input, Oracle π(a|s, z) receives the true hidden parameter z during both training and test. Next we adapted existing methods, detailed in Appendix E.1, to single episode test evaluation: 1. we allow the model-based method BNN to fine-tune a pre-trained BNN and train a policy using BNN-generated fictional episodes every 10 steps during the test episode; 2. we adapted the adversarial part of EPOpt , which we term EPOpt-adv, by training a policy π(a|s) on instances with the lowest 10-percentile performance; 3. we evaluate MAML as an archetype of meta-learning methods that require dense rewards or multiple rollouts . We allow MAML to use a trajectory of the same length as SEPT's probe trajectory for one gradient step during test. We used the same architecture for the RL module of all methods (Appendix E.2). To our knowledge, these model-free baselines are evaluated on single-episode transfer for the first time in this work. To investigate the benefit of our optimized probing method for accelerated inference, we designed an ablation called SEPT-NP, in which trajectories generated by the control policy are used by the encoder for inference and stored into D to train the VAE. Second, we investigated an alternative reward function for the probe, labeled TotalVar and defined as R(τ):= 1/T p Tp−1 t=1 |s t+1,i − s t,i | for probe trajectory τ. In contrast to the minimum entropy viewpoint in Section 3.3, this reward encourages generation of trajectories that maximize total variation across all state space dimensions. Third, we tested the maximum entropy viewpoint on probe trajectory generation, labeled MaxEnt, by giving negative lowerbound as the probe reward: R p (τ):= −L(ψ, φ; τ). Last, we tested whether DynaSEPT, an extension that dynamically decides to probe or execute control (Appendix C), has any benefit for stationary dynamics. 2D navigation and Acrobot are solved upon attaining terminal reward of 1000 and 10, respectively. 
SEPT outperforms all baselines in 2D navigation and takes significantly fewer number of steps to solve (Figures 2a and 2c). While a single instance of 2D navigation is easy for RL, handling multiple instances is highly non-trivial. EPOpt-adv and Avg almost never solve the test instance-we set "steps to solve" to 50 for test episodes that were unsolved-because interpolating between instance-specific optimal policies in policy parameter space is not meaningful for any task instance. MAML did not perform well despite having the advantage of being provided with rewards at test time, unlike SEPT. The gradient adaptation step was likely ineffective because the rewards are sparse and delayed. BNN requires significantly more steps than SEPT, and it uses four orders of magnitude longer computation time (Table 4), due to training a policy from scratch during the test episode. Training times of all algorithms except BNN are in the same order of magnitude (Table 3). In Acrobot and HIV, where dynamics are continuous in latent variables, interpolation within policy space can produce meaningful policies, so all baselines are feasible in principle. SEPT is statistically significantly faster than BNN and Avg, is within error bars of MAML, while EPOpt-adv outperforms the rest by a small margin (Figures 2b and 2d). Figure 5 shows that SEPT is competitive in terms of percentage of solved instances. As the true values of latent variables for Acrobot test instances were interpolated and extrapolated from the training values, this shows that SEPT is robust to out-oftraining dynamics. BNN requires more steps due to simultaneously learning and executing control during the test episode. On HIV, SEPT reaches significantly higher cumulative rewards than all methods. Oracle is within the margin of error of Avg. This may be due to insufficient examples of the high-dimensional ground truth hidden parameters. Due to its long computational time, we run three seeds for BNN on HIV, shown in Figure 4b, and find it was unable to adapt within one test episode. Comparing directly to reported in DPT , SEPT solves 2D Navigation at least 33% (>10 steps) faster, and the cumulative reward of SEPT (mean and standard error) are above DPT's mean cumulative reward in Acrobot (Table 2). Together, these show that methods that explicitly distinguish different dynamics (e.g., SEPT and BNN) can significantly outperform methods that implicitly interpolate in policy parameter space (e.g., Avg and EPOpt-adv) in settings where z has large discontinuous effect on dynamics, such as 2D navigation. When dynamics are continuous in latent variables (e.g., Acrobot and HIV), interpolation-based methods fare better than BNN, which faces the difficulty of learning a model of the entire family of dynamics. SEPT worked the best in the first case and is robust to the second case because it explicitly distinguishes dynamics and does not require learning a full transition model. Moreover, SEPT does not require rewards at test time allowing it be useful on a broader class of problems than optimization-based meta-learning approaches like MAML. Appendix D contains training curves. Figures 2f, 2g and 2j show that the probe phase is necessary to solve 2D navigation quickly, while giving similar performance in Acrobot and significant improvement in HIV. 
SEPT significantly outperformed TotalVar in 2D navigation and HIV, while TotalVar gives slight improvement in Acrobot, showing that directly using VAE performance as the reward for probing in certain environments can be more effective than a reward that deliberately encourages perturbation of state dimensions. The clear advantage of SEPT over MaxEnt in 2D navigation and HIV supports our hypothesis in Section 3.3 that the variational lowerbound, rather than its negation in the maximum entropy viewpoint, should be used as the probe reward, while performance was not significantly differentiated in Acrobot. SEPT outperforms DynaSEPT on all problems where dynamics are stationary during each instance. On the other hand, DynaSEPT is the better choice in a non-stationary variant of 2D navigation where the dynamics "switch" abruptly at t = 10 (Figure 4c). Figure 3 shows that SEPT is robust to varying the probe length T p and dim(z). Even with certain suboptimal probe length and dim(z), it can outperform all baselines on 2D navigation in both steps-to-solve and final reward; it is within error bars of all baselines on Acrobot based on final cumulative reward; and final cumulative reward exceeds that of baselines in HIV. Increasing T p means foregoing valuable steps of the control policy and increasing difficulty of trajectory reconstruction for the VAE in high dimensional state spaces; T p is a hyper-parameter that should be validated for each application. Appendix D.5 shows the effect of β on latent variable encodings. We propose a general algorithm for single episode transfer among MDPs with different stationary dynamics, which is a challenging goal with real-world significance that deserves increased effort from the transfer learning and RL community. Our method, Single Episode Policy Transfer (SEPT), trains a probe policy and an inference model to discover a latent representation of dynamics using very few initial steps in a single test episode, such that a universal policy can execute optimal control without access to rewards at test time. Strong performance versus baselines in domains involving both continuous and discontinuous dependence of dynamics on latent variables show the promise of SEPT for problems where different dynamics can be distinguished via a short probing phase. The dedicated probing phase may be improved by other objectives, in addition to performance of the inference model, to mitigate the risk and opportunity cost of probing. An open challenge is single episode transfer in domains where differences in dynamics of different instances are not detectable early during an episode, or where latent variables are fixed but dynamics are nonstationary. Further research on dynamic probing and control, as sketched in DynaSEPT, is one path toward addressing this challenge. Our work is one step along a broader avenue of research on general transfer learning in RL equipped with the realistic constraint of a single episode for adaptation and evaluation. A DERIVATIONS Proposition 1. Let p ϕ (τ) denote the distribution of trajectories induced by π ϕ. Then the gradient of the entropy H(p ϕ (τ)) is given by Proof. Assuming regularity, the gradient of the entropy is For trajectory τ:= (s 0, a 0, s 1, . . ., s t) generated by the probe policy π ϕ: Since p(s 0) and p(s i+1 |s i, a i) do not depend on ϕ, we get Substituting this into the gradient of the entropy gives equation 3. 
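The displayed equations of Proposition 1 and its proof do not survive in this version of the text; the block below is a reconstruction of the standard score-function form of the argument, written from the surrounding definitions, and should be read as an assumed rendering of equation 3 rather than a verbatim copy.

```latex
% Assumed reconstruction of the Proposition 1 derivation.
\nabla_\varphi H(p_\varphi(\tau))
  = -\int \nabla_\varphi p_\varphi(\tau)\,\log p_\varphi(\tau)\,d\tau
    - \int \nabla_\varphi p_\varphi(\tau)\,d\tau
  = -\mathbb{E}_{\tau \sim p_\varphi}\!\left[\nabla_\varphi \log p_\varphi(\tau)\,\log p_\varphi(\tau)\right],
% since \int \nabla_\varphi p_\varphi(\tau)\,d\tau = \nabla_\varphi 1 = 0.
% For \tau = (s_0, a_0, s_1, \ldots, s_t) generated by \pi_\varphi:
\log p_\varphi(\tau) = \log p(s_0)
  + \sum_{i=0}^{t-1}\left[\log \pi_\varphi(a_i \mid s_i) + \log p(s_{i+1} \mid s_i, a_i)\right],
% and because p(s_0) and p(s_{i+1} \mid s_i, a_i) do not depend on \varphi:
\nabla_\varphi \log p_\varphi(\tau)
  = \sum_{i=0}^{t-1} \nabla_\varphi \log \pi_\varphi(a_i \mid s_i).
```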
Restore trained decoder ψ, encoder φ, probe policy ϕ, and control policy θ 3: Run probe policy π ϕ for T p time steps and record trajectory τ p Use τ p with decoder q φ (z|τ) to estimateẑ 5: Useẑ with control policy π θ (a|s, z) for the remaining duration of the test episode 6: end procedure C DYNASEPT In our problem formulation, it is not necessary to computeẑ at every step of the test episode, as each instance is a stationary MDP and change of instances is known. However, removing the common assumption of stationarity leads to time-dependent transition functions T z (s |s, a), which introduces problematic cases. For example, a length T p probing phase would fail if z leads to a switch in dynamics at time t > T p, such as when poorly understood drug-drug interactions lead to abrupt changes in dynamics during co-medication therapies . Here we describe an alternative general algorithm for non-stationary dynamics, which we call DynaSEPT. We train a single policy π θ (a|s, z, η) that dynamically decides whether to probe for better inference or act to maximize the MDP reward R env, based on a scalar-valued function η: R → representing the degree of uncertainty in posterior inference, which is updated at every time step. The total reward is R tot (τ):= ηR p (τ) + (1 − η)R env (τ f), where τ is a short sliding-window trajectory of length T p, and τ f is the final state of τ. The history-dependent term R p (τ) is equivalent to a delayed reward given for executing a sequence of probe actions. Following the same reasoning for SEPT, one choice for R p (τ) is L(φ, ψ; τ). Assuming the encoder outputs variance σ 2 i of each latent dimension, one choice for η is a normalized standard deviation over all dimensions of the latent variable, i.e., where σ i,max is a running max of σ i. Despite its novelty, we consider DynaSEPT only for rare nonstationary dynamics and merely as a baseline in the predominant case of stationary dynamics, where SEPT is our primary contribution. DynaSEPT does not have any clear advantage over SEPT when each instance T z is a stationary MDP. DynaSEPT requires η to start at 1.0, representing complete lack of knowledge about latent variables, and it still requires the choice of hyperparameter T p. Only after T p steps can it use the uncertainty of q φ (z|τ) to adapt η and continue to generate the sliding window trajectory to improveẑ. By this time, SEPT has already generated an optimized sequence using π ϕ for the encoder to estimateẑ. If a trajectory of length T p is sufficient for computing a good estimate of latent variables, then SEPT is expected to outperform DynaSEPT. Steps to solve Table 1 reports the number of steps in a test episode required to solve the MDP. Average and standard deviation were computed across all test instances and across all independently trained models. If an episode was not solved, the maximum allowed number of steps was used (50 for 2D navigation and 200 for Acrobot). Table 2 shows the mean and standard error of the cumulative reward over test episodes on Acrobot. The reported mean cumulative value for DPT in Acrobot is -27.7 . Step 0 Figure 6 shows training curves on all domains by all methods. None of the baselines, except for Oracle, converge in 2D navigation, because it is meaningless for Avg and EPOpt-adv to interpolate between optimal policies for each instance, and MAML cannot adapt due to lack of informative rewards for almost the entire test episode. Hence these baselines cannot work for a new unknown test episode, even in principle. 
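For concreteness, the single-test-episode procedure sketched in the algorithm above can be written roughly as follows. env, probe_policy, encoder, and control_policy are assumed interfaces (a gym-style step API is assumed); no reward signal or gradient update is required at test time.

```python
def sept_test_episode(env, probe_policy, encoder, control_policy, T_p, max_steps):
    """Single-episode deployment of SEPT (illustrative sketch)."""
    s = env.reset()
    trajectory, total_reward, done, t = [], 0.0, False, 0

    # 1) Probe phase: run pi_phi for T_p steps and record the trajectory.
    while t < T_p and not done:
        a = probe_policy.act(s)
        s_next, r, done, _ = env.step(a)
        trajectory.append((s, a))
        total_reward += r
        s, t = s_next, t + 1

    # 2) Inference: estimate z from the probe trajectory with q_phi(z | tau_p).
    z_hat = encoder.infer_mean(trajectory)

    # 3) Control phase: run the universal policy pi_theta(a | s, z_hat).
    while t < max_steps and not done:
        a = control_policy.act(s, z_hat)
        s, r, done, _ = env.step(a)
        total_reward += r
        t += 1
    return total_reward
```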
We allowed the same number of training episodes for HIV as in , and all baselines except MAML show learning progress. Figure 7: Two-dimensional encodings generated for four instances of Acrobot (represented by four ground-truth colors), for different values of β. We chose β = 1 for Acrobot. There is a tradeoff between reconstruction and disentanglement as β increases . Increasing β encourages greater similarity between the posterior and an isotropic Gaussian. Figure 7 gives evidence that this comes at a cost of lower quality of separation in latent space. E EXPERIMENTAL DETAILS For 2D navigation, Acrobot, and HIV, total number of training episodes allowed for all methods are 10k, 4k, and 2.5k, respectively. We switch instances once every 10, 8 and 5 episodes, respectively. There are 2, 8 and 5 unique training instances, and 2, 5, and 5 validation instances, respectively. For each independent training run, we tested on 10, 5, and 5 test instances, respectively. The simple baselines Average and Oracle can be immediately deployed in a single test episode after training. However, the other methods for transfer learning require modification to work in the setting of single episode test, as they were not designed specifically for this highly constrained setting. We detail the necessary modifications below. We also describe the ablation SEPT-NP in more detail. BNN. , a pre-trained BNN model was fine-tuned using the first test episode and then used to generate fictional episodes for training a policy from scratch. More episodes on the same test instance were allowed to help improve model accuracy of the BNN. In the single test episode setting, all fine-tuning and policy training must be conducted within the first test episode. We fine-tune the pre-trained BNN every 10 steps and allow the same total number of fictional episodes as reported in for policy training. We measured the cumulative reward attained by the policy-while it is undergoing training-during the single real test episode. EPOpt. EPOpt trains on the lowest -percentile rollouts from instances sampled from a source distribution, then adapts the source distribution using observations from the target instance . Since we do not allow observation from the test instance, we only implemented the adversarial part of EPOpt. To run EPOpt with off-policy DDQN, we generated 100 rollouts per iteration and stored the lowest 10-percentile into the replay buffer, then executed the same number of minibatch training steps as the number that a regular DDQN would have done during rollouts. MAML. While MAML uses many complete rollouts per gradient step , the single episode test constraint mandates that it can only use a partial episode for adaptation during test, and hence the same must be done during meta-training. For both training and test, we allow MAML to take one gradient step for adaptation using a trajectory of the same length as the probe trajectory of SEPT, starting from the initial state of the episode. We implemented a first-order approximation that computes the meta-gradient at the post-update parameters but omits second derivatives. This was reported to have nearly equal performance as the full version, due to the use of ReLU activations. SEPT-NP. π θ (a|s, z) begins with a zero-vector for z at the start of training. When it has produced a trajectory τ p of length T p, we store τ p into D for training the VAE, and use τ p with the VAE to estimate z for the episode. Later training episodes begin with the rolling mean of all z estimated so far. 
For test, we give the final rolling mean of z at the end of training as initial input to π_θ(a|s, z).

Encoder. For all experiments, the encoder q_φ(z|τ) is a bidirectional LSTM with 300 hidden units and tanh activation. Outputs are mean-pooled over time, then fully-connected to two linear output layers of width dim(z), interpreted as the mean and log-variance of a Gaussian over z.

Decoder. For all experiments, the decoder p_ψ(τ|z) is an LSTM with 256 hidden units and tanh activation. Given input [s_t, a_t, ẑ] at LSTM time step t, the output is fully-connected to two linear output layers of width |S| + |A|, and interpreted as the mean and log-variance of a Gaussian decoder for the next state-action pair (s_{t+1}, a_{t+1}).

Q network. For all experiments, the Q function is a fully-connected neural network with two hidden layers of width 256 and 512, ReLU activation, and a linear output layer of size |A|. For SEPT and Oracle, the input is the concatenation [s_t, z], where z is estimated in the case of SEPT and is the ground truth for the Oracle. For all other methods, the input is only the state s.

Probe policy network. For all experiments, π_ϕ(a|s) is a fully-connected neural network with 3 hidden layers, ReLU activation, 32 nodes in all layers, and a softmax in the output layer.

E.3 HYPERPARAMETERS The VAE learning rate was 1e-4 for all experiments. The size of the dataset D of probe trajectories was limited to 1000, with the earliest trajectories discarded. 10 minibatches from D were used for each VAE training step. We used β = 1 for the VAE. The probe policy learning rate was 1e-3 for all experiments. The DDQN minibatch size was 32, one training step was done for every 10 environment steps, ε_end = 0.15, the learning rate was 1e-3, the gradient clip was 2.5, γ = 0.99, and the target network update rate was 5e-3. Exploration decayed according to ε_{n+1} = c ε_n every episode, where c satisfies ε_end = c^N ε_start and N is the total number of episodes. Prioritized replay used the same parameters as in . | Single episode policy transfer in a family of environments with related dynamics, via optimized probing for rapid inference of latent variables and immediate execution of a universal policy. | 1,072 | scitldr
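A minimal PyTorch sketch of the trajectory encoder described in Appendix E.2 above (bidirectional LSTM with 300 hidden units, mean-pooling over time, and linear heads for the mean and log-variance of q_φ(z|τ)). Class and method names, and the packing of (state, action) pairs into a single input tensor, are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """q_phi(z | tau): encodes a probe trajectory into a Gaussian over z."""
    def __init__(self, step_dim, z_dim, hidden=300):
        super().__init__()
        self.lstm = nn.LSTM(step_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.mean_head = nn.Linear(2 * hidden, z_dim)
        self.logvar_head = nn.Linear(2 * hidden, z_dim)

    def forward(self, traj):
        # traj: (batch, T_p, step_dim), each step = concat(state, action)
        out, _ = self.lstm(traj)      # (batch, T_p, 2 * hidden)
        pooled = out.mean(dim=1)      # mean-pool over time steps
        return self.mean_head(pooled), self.logvar_head(pooled)

# Usage: mu, logvar = encoder(probe_batch); at test time z_hat can be taken as mu.
```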
Domain specific goal-oriented dialogue systems typically require modeling three types of inputs, viz., (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances and (iii) the current utterance for which the response needs to be generated. While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context. Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling and document dating, we propose a memory augmented GCN for goal-oriented dialogues. Our model exploits (i) the entity relation graph in a knowledge-base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. Further, we take cognizance of the fact that in certain situations, such as, when the conversation is in a code-mixed language, dependency parsers may not be available. We show that in such situations we could use the global word co-occurrence graph and use it to enrich the representations of utterances. We experiment with the modified DSTC2 dataset and its recently released code-mixed versions in four languages and show that our method outperforms existing state-of-the-art methods, using a wide range of evaluation metrics. Goal-oriented dialogue systems which can assist humans in various day-to-day activities have widespread applications in several domains such as e-commerce, entertainment, healthcare, etc. For example, such systems can help humans in scheduling medical appointments, reserving restaurants, booking tickets, etc.. From a modeling perspective, one clear advantage of dealing with domain specific goal-oriented dialogues is that the vocabulary is typically limited, the utterances largely follow a fixed set of templates and there is an associated domain knowledge which can be exploited. More specifically, there is some structure associated with the utterances as well as the knowledge base. More formally, the task here is to generate the next response given (i) the previous utterances in the conversation history (ii) the current user utterance (known as the query) and (iii) the entities and relationships in the associated knowledge base. Current state-of-the-art methods BID30 BID23 typically use variants of Recurrent Neural Network BID10 to encode the history and current utterance and an external memory network to store the entities in the knowledge base. The encodings of the utterances and memory elements are then suitably combined using an attention network and fed to the decoder to generate the response, one word at a time. However, these methods do not exploit the structure in the knowledge base as defined by entity-entity relations and the structure in the utterances as defined by a dependency parse. Such structural information can be exploited to improve the performance of the system as demonstrated by recent works on syntax-aware neural machine translation BID13 BID2 BID4, semantic role labeling and document dating BID35 which use GCNs BID8 BID9 BID19 to exploit sentence structure. In this work, we propose to use such graph structures for goal-oriented dialogues. In particular, we compute the dependency parse tree for each utterance in the conversation and use a GCN to capture the interactions between words. 
This allows us to capture interactions between distant words in the sentence as long as they are connected by a dependency relation. We also use GCNs to encode the entities of the KB where the entities are treated as nodes and the relations as edges of the graph. Once we have a richer structure aware representation for the utterances and the entities, we use a sequential attention mechanism to compute an aggregated context representation from the GCN node vectors of the query, history and entities. Further, we note that in certain situations, such as, when the conversation is in a code-mixed language or a language for which parsers are not available then it may not be possible to construct a dependency parse for the utterances. To overcome this, we construct a co-occurrence matrix from the entire corpus and use this matrix to impose a graph structure on the utterances. More specifically, we add an edge between two words in a sentence if they co-occur frequently in the corpus. Our experiments suggest that this simple strategy acts as a reasonable substitute for dependency parse trees. We perform experiments with the modified DSTC2 BID3 dataset which contains goal-oriented conversations for reserving restaurants. We also use its recently released code-mixed versions BID1 which contain code-mixed conversations in four different languages, viz., Hindi, Bengali, Gujarati and Tamil. We compare with recent state-of-the-art methods and show that on average the proposed model gives an improvement of 2.8 BLEU points and 2 ROUGE points. Our contributions can be summarized as follows: (i) We use GCNs to incorporate structural information for encoding query, history and KB entities in goal-oriented dialogues (ii) We use a sequential attention mechanism to obtain query aware and history aware context representations (iii) We leverage co-occurrence frequencies and PPMI (positive-pointwise mutual information) values to construct contextual graphs for code-mixed utterances and (iv) We show that the proposed model obtains state-of-the-art on the modified DSTC2 dataset and its recently released code-mixed versions. In this section we review the previous work in goal-oriented dialogue systems and describe the introduction of GCNs in NLP.Goal-Oriented Dialogue System: Initial goal-oriented dialogue systems BID39 BID37 were based on dialogue state tracking BID38 BID15 b) and included pipelined modules for natural language understanding, dialogue state tracking, policy management and natural language generation. used neural networks for these intermediate modules but still lacked absolute end-to-end trainability. Such pipelined modules were restricted by the fixed slot-structure assumptions on the dialogue state and required per-module based labelling. To mitigate this problem BID3 released a version of goal-oriented dialogue dataset that focuses on the development of end-to-end neural models. Such models need to reason over the associated KB triples and generate responses directly from the utterances without any additional annotations. For example, BID3 proposed a Memory Network BID34 based model to match the response candidates with the multi-hop attention weighted representation of the conversation history and the KB triples in memory. BID22 further added highway BID33 and residual connections BID14 to the memory network in order to regulate the access to the memory blocks. 
BID30 developed a variant of RNN cell which computes a refined representation of the query over multiple iterations before querying the memory. However, all these approaches retrieve the response from a set of candidate responses and such a candidate set is not easy to obtain in any new domain of interest. To account for this,; adapted RNN based encoder-decoder models to generate appropriate responses instead of retrieving them from a candidate set. introduced a key-value memory network based generative model which integrates the underlying KB with RNN based encode-attend-decode models. BID23 used memory networks on top of the RNN decoder to tightly integrate KB entities with the decoder to generate more infor-mative responses. However, as opposed to our work, all these works ignore the underlying structure of the entity-relation graph of the KB and the syntactic structure of the utterances. Recently, there has been an active interest in enriching existing encode-attenddecode models BID0 with structural information for various NLP tasks. Such structure is typically obtained from the constituency and/or dependency parse of sentences. The idea is to treat the output of a parser as a graph and use an appropriate network to capture the interactions between the nodes of this graph. For example, BID13 and BID4 showed that incorporating such syntactical structures as Tree-LSTMs in the encoder can improve the performance of Neural Machine Translation (NMT). BID29 use Graph-LSTMs to perform cross sentence n-ary relation extraction and show that their formulation is applicable to any graph structure and Tree-LSTMs can be thought of as a special case of it. In parallel, Graph Convolutional Networks (GCNs) BID9 BID8 BID19 and their variants BID20 have emerged as state-of-the-art methods for computing representations of entities in a knowledge graph. They provide a more flexible way of encoding such graph structures by capturing multi-hop relationships between nodes. This has led to their adoption for various NLP tasks such as neural machine translation BID25 BID2, semantic role labeling, document dating BID35 and question answering BID17 BID26.To the best of our knowledge ours is the first work that uses GCNs to incorporate dependency structural information and the entity-entity graph structure in a single end-to-end neural model for goaloriented dialogue. This is also the first work that incorporates contextual co-occurrence information for code-mixed utterances, for which no dependency structures are available. In this section we describe Graph Convolutional Networks (GCN) BID19 for undirected graphs and then describe their syntactic versions which work with directed labeled edges of dependency parse trees. Graph convolutional networks operate on a graph structure and compute representations for the nodes of the graph by looking at the neighbourhood of the node. k layers of GCNs can be stacked to account for neighbours which are k-hops away from the current node. Formally, let G = (V, E) be an undirected graph where V is the set of nodes (let |V| = n) and E is the set of edges. Let X ∈ R n×m be the input feature matrix with n nodes and each node x u (u ∈ V) is represented by an m-dimensional feature vector. The output of a 1-layer GCN is the hidden representation matrix H ∈ R n×d where each d-dimensional representation of a node captures the interactions with its 1-hop neighbour. 
Each row of this matrix can be computed as: DISPLAYFORM0 Here W ∈ R d×m is the model parameter matrix, b ∈ R d is the bias vector and ReLU is the rectified linear unit activation function. N (v) is the set of neighbours of node v and is assumed to also include the node v so that the previous representation of the node v is also considered while computing the new hidden representation. To capture interactions with nodes which are multiple hops away, multiple layers of GCNs can be stacked together. Specifically, the representation of node v after k th GCN layer can be formulated as: DISPLAYFORM1 where h k u is the representation of the u th node in the (k − 1) th GCN layer and h DISPLAYFORM2 In a directed labeled graph G = (V, E), each edge between nodes u and v is represented by a triple (u, v, L(u, v) ) where L(u, v) is the associated edge label. modified GCNs to operate over directed labeled graphs, such as the dependency parse tree of a sentence. For such a tree, in order to allow information to flow from head to dependents and vice-versa, they added inverse dependency edges from dependents to heads such as (v, u, L(u, v) ) to E and made the model parameters and biases label specific. In their formulation, u,v) which are label specific. Suppose there are L different labels, then this formulation will require L weights and biases per GCN layer ing in a large number of parameters. To avoid this, the authors use only three sets of weights and biases per GCN layer (as opposed to L) depending on the direction in which the information flows. More specifically, u,v), where dir(u, v) indicates whether information flows from u to v, v to u or u = v. In this work, we also make b DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 instead of having a separate bias per label. The final GCN formulation can thus be described as: DISPLAYFORM3 We first formally define the task of end-to-end goal-oriented dialogue generation. Each dialogue of t turns can be viewed as a succession of user utterances (U) and system responses (S) and can be represented as: (U 1, S 1, U 2, S 2, ..U t, S t). Along with these utterances, each dialogue is also accompanied by e KB triples which are relevant to that dialogue and can be represented as: DISPLAYFORM0 Each triple is of the form: (entity 1, relation, entity 2). These triples can be represented in the form of a graph G k = (V k, E k) where V is the set of all entities and each edge in E is of the form: (entity 1, entity 2, relation) where relation signifies the edge label. At any dialogue turn i, given the (i) dialogue history H = (U 1, S 1, U 2, ..S i−1), (ii) the current user utterance as the query Q = U i and (iii) the associated knowledge graph G k, the task is to generate the current response S i which leads to a completion of the goal. As mentioned earlier, we exploit the graph structure in KB and the syntactic structure in the utterances to generate appropriate responses. Towards this end we propose a model with the following components for encoding these three types of inputs. The query Q = U i is the i th (current) utterance in the dialogue and contains |Q| tokens. We denote the embedding of the i th token in the query as q i We first compute the contextual representations of these tokens by passing them through a bidirectional RNN: DISPLAYFORM0 Now, consider the dependency parse tree of the query sentence denoted by G Q = (V Q, E Q). We use a query specific GCN to operate on G Q, which takes DISPLAYFORM1 as the input to the 1 st GCN layer. 
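As a concrete illustration of the GCN updates above, a single hop of the direction-specific (syntactic) GCN can be sketched as follows. This is a dense NumPy sketch for exposition only; the per-label handling, sparse adjacency representation and batching of the actual model are omitted, and the variable names are assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_hop(H, edges, W, b):
    """One hop of a direction-specific GCN.
    H: (n, d) node representations from the previous hop (or input features).
    edges: iterable of (u, v, direction) meaning node u contributes to node v;
           direction is one of {'in', 'out', 'self'}, with self-loops included.
    W, b: dicts mapping a direction to a (d, d) weight matrix and (d,) bias."""
    new_H = np.zeros_like(H)
    for u, v, direction in edges:
        new_H[v] += W[direction] @ H[u] + b[direction]
    return relu(new_H)

# Stacking k such hops lets a node's representation depend on its k-hop neighbourhood.
```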
The node representation in the k th hop of the query specific GCN is computed as: DISPLAYFORM2 where u,v) DISPLAYFORM3 DISPLAYFORM4 Figure 1: Illustration of the GCN and RNN+GCN modules which are used as encoders in our model. The notations are specific to the dialogue history encoder but both the encoders are same for the query. The GCN encoder is same for the KB except the graph structure. The history H of the dialogue contains |H| tokens and we denote the embedding of the i th token in the history by p i Once again, we first compute the hidden representations of these embeddings using a bidirectional RNN: DISPLAYFORM0 We now compute a dependency parse tree for each sentence in the history and collectively represent all the trees as a single graph G H = (V H, E H). Note that this graph will only contain edges between words belonging to the same sentence and there will be no edges between words across sentences. We then use a history specific GCN to operate on G H which takes s t as the input to the 1 st layer. The node representation in the k th hop of the history specific GCN is computed as: DISPLAYFORM1 where V k dir (u,v) and o k dir (u,v) are edge direction specific history-GCN weights and biases in the k th hop and a 1 u = s u. Such an encoder with a single hop of GCN is illustrated in figure 1(b) and the encoder without the BiRNN is depicted in figure 1(a). As mentioned earlier, G K = (V K, E K) is the graph capturing the interactions between the entities in the knowledge graph associated with the dialogue. Let there be m such entities and we denote the embeddings of the node corresponding to the i th entity as e i We then operate a KB specific GCN on these entity representations to obtain refined representations which capture relations between entities. The node representation in the k th hop of the KB specific GCN is computed as: DISPLAYFORM0 where U k dir (u,v) and z k dir (u,v) are edge direction specific KB-GCN weights and biases in k th hop and r 1 u = e u. We also add inverse edges to E K similar to the case of syntactic GCNs in order to allow information flow in both the directions for an entity pair in the knowledge graph. Dialogue History KB Entities Query DISPLAYFORM0 We use an RNN decoder to generate the tokens of the response and let the hidden states of the decoder be denoted as: DISPLAYFORM0 where T is the total number of decoder timesteps. In order to obtain a single representation from the final layer (k = f) of the query-GCN node vectors, we use an attention mechanism as described below: DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Here v 1, W 1, W 2 are parameters. Further, at each decoder timestep, we obtain a query aware representation from the final layer of the history-GCN by computing an attention score for each node/token in the history based on the query context vector h Q t as shown below: DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 Here v 2, W 3, W 4 and W 5 are parameters. Finally, we obtain a query and history aware representation of the KB by computing an attention score over all the nodes in the final layer of KB-GCN using h Q t and h H t as shown below: DISPLAYFORM7 DISPLAYFORM8 DISPLAYFORM9 Here v 3, W 6, W 7, W 8 and W 9 are parameters. This sequential attention mechanism is illustrated in FIG0. For simplicity, we depict the GCN and RNN+GCN encoders as blocks. The internal structure of these blocks are shown in figure 1. 
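Since the displayed attention equations are not preserved here, the sketch below shows one plausible additive (Bahdanau-style) reading of the sequential attention chain, consistent with the parameter groupings mentioned above (v1 with W1–W2 for the query, v2 with W3–W5 for the history, v3 with W6–W9 for the KB). The exact scoring form and which vectors enter each score are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(keys, contexts, v, Ws):
    """Additive attention: score_i = v . tanh(Ws[0] keys_i + sum_j Ws[j+1] ctx_j)."""
    proj = keys @ Ws[0].T                      # (num_keys, attn_dim)
    for ctx, W in zip(contexts, Ws[1:]):
        proj = proj + ctx @ W.T                # broadcast each context over keys
    alpha = softmax(np.tanh(proj) @ v)         # attention weights over keys
    return alpha @ keys                        # weighted sum of key vectors

def sequential_attention(query_nodes, hist_nodes, kb_nodes, d_prev, p):
    h_q = attend(query_nodes, [d_prev], p['v1'], p['W_query'])      # query context
    h_h = attend(hist_nodes, [d_prev, h_q], p['v2'], p['W_hist'])   # query-aware history
    h_k = attend(kb_nodes, [d_prev, h_q, h_h], p['v3'], p['W_kb'])  # query+history-aware KB
    return h_q, h_h, h_k
```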
The decoder takes two inputs, viz., (i) the context which contains the history and the KB and (ii) the query which is the last/previous utterance in the dialogue. We use an aggregator which learns the overall attention to be given to the history and KB components. These attention scores: θ H t and θ K t are dependent on the respective context vectors and the previous decoder state d t−1. The final context vector is obtained as: DISPLAYFORM0 where [;] denotes the concatenation operator. At every timestep the decoder then computes a probability distribution over the vocabulary using the following equations: DISPLAYFORM1 where w t is the decoder input at time step t, V and b are parameters. P vocab gives us a probability distribution over the entire vocabulary and the loss for time step t is l t = − log P vocab (w * t), where w * t is the t th word in the ground truth response. The total loss is an average of the per-time step losses. For the dialogue history and query encoder, we used the dependency parse tree for capturing structural information in the encodings. However, if the conversations occur in a language for which no dependency parsers exist, for example: code-mixed languages like Hinglish (Hindi-English) BID1, then we need an alternate way of extracting a graph structure from the utterances. One simple solution which worked well in practice was to create a word co-occurrence matrix from the entire corpus where the context window is an entire sentence. Once we have such a co-occurrence matrix, for a given sentence we can connect an edge between two words if their co-occurrence frequency is above a threshold value. The co-occurrence matrix can either contain co-occurrence frequency counts or positive-pointwise mutual information (PPMI) values BID6 BID7 BID27. In this section we describe the datasets used in our experiments, the various hyperparameters that we considered and the models that we compared. The original DSTC2 dataset BID15 ) was based on the task of restaurant reservation and contains transcripts of real conversations between humans and bots. The utterances were labeled with the dialogue state annotations like the semantic intent representation, requested slots and the constraints on the slot values. We report our on the modified DSTC2 dataset of BID3 where such annotations are removed and only the raw utterance-response pairs are present with an associated set of KB triples for each dialogue. For our experiments with contextual graphs we reported our on the code-mixed versions of modified DSTC2, which was recently released by BID1 1. This dataset has been collected by code-mixing the utterances of the English version of modified DSTC2 in four languages viz. Hindi (Hi-DSTC2), Bengali (Be-DSTC2), Gujarati (Gu-DSTC2) and Tamil (Ta-DSTC2), via crowdsourcing. Statistics about this dataset and example dialogues are shown in Appendix A.Model per-resp. acc BLEU ROUGE Entity F1 1 2 L Rule-Based BID3 33.3 -----MEMNN BID3 41.1 -----QRN BID30 50.7 -----GMEMNN BID22 48.7 -----Seq2Seq-Attn BID0 46.0 57.3 67.2 56.0 64.9 67.1 Seq2Seq-Attn+Copy 47.3 55.4 ---71.6 HRED BID31 48.9 58.4 67.9 57.6 65.7 75.6 Mem2Seq BID23 45 Table 2: Comparison of RNN+GCN-SeA, GCN-SeA with other models on all code-mixed datasets We used the same train, test and validation splits as provided in the original versions of the datasets. We minimized the cross entropy loss using the Adam optimizer BID18 and tuned the initial learning rates in the range of 0.0006 to 0.001. 
For regularization we used an L2 norm of 0.001 in addition to a dropout BID32 of 0.1. We used randomly initialized word embeddings of size 300. The RNN and GCN hidden dimensions were also chosen to be 300. We use GRU BID5 ) cells for the RNNs. All parameters were initialized from a truncated normal distribution with a standard deviation of 0.1. We compare the performance of the following models.(i) RNN+GCN-SeA vs GCN-SeA: We use RNN+GCN-SeA to refer to the model described in section 4. Instead of using the hidden representations obtained from the bidirectional RNNs, we also experiment by providing the token embeddings directly to the GCNs i.e. c 1 u = q u in equation 6 and a 1 u = p u in equation 8. We refer to this model as GCN-SeA.(ii) Cross edges between the GCNs: In addition to the dependency and contextual edges, we add edges between words in the dialogue history/query and KB entities if a history/query word exactly matches the KB entity. Such edges create a single connected graph which is encoded using a single GCN encoder and then separated into different contexts to perform the sequential attention. This model is referred to as RNN+CROSS-GCN-SeA.(iii) Frequency vs PPMI Contextual Graph: We experiment with the raw frequency cooccurrence graph structure and the PPMI graph structure for the code-mixed datasets, as explained in section 4.6. We refer to these models as GCN-SeA+Freq and GCN-SeA+PPMI. In both these models, the GCN takes inputs from a bidirectional RNN.(iv) GCN-SeA+Random vs GCN-SeA+Structure: We experiment with the model where the graph is constructed by randomly connecting edges between two words in a context. We refer to this model as GCN-SeA+Random. We refer to the model which either uses dependency or contextual graph instead of random graphs as GCN-SeA+Structure. In this section we discuss the of our experiments as summarized in tables 1,2, and 3. We use BLEU BID28 and ROUGE BID21 metrics to evaluate the generation quality of responses. We also report the per-response accuracy which computes the percentage of responses in which the generated response exactly matches the ground truth response. In order to evaluate the model's capability of correctly injecting entities in the generated response, we report the entity F1 measure as defined in.Results on En-DSTC2: We compare our model with the previous works on the English version of modified DSTC2 in table 1. For most of the retrieval based models, the BLEU or ROUGE scores are not available as they select a candidate from a list of candidates as opposed to generating it. Our model outperforms all of the retrieval and generation based models. We obtain a gain of 0.7 in the per-response accuracy compared to the previous retrieval based state-of-the-art model of BID30, which is a very strong baseline for our generation based model. We call this a strong baseline because the candidate selection task of this model is easier than the response generation task of our model. We also obtain a gain of 2.8 BLEU points, 2 ROUGE points and 2.5 entity F1 points compared to current state-of-the-art generation based models. Results on code-mixed datasets and effect of using RNNs: The of our experiments on the code-mixed datasets are reported in table 2. Our model outperforms the baseline models on all the code-mixed languages. One common observation from the over all the languages (including En-DSTC2) is that RNN+GCN-SeA performs better than GCN-SeA. Similar observations were made by for the task of semantic role labeling. 
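For the code-mixed datasets, the contextual graphs discussed earlier can be built roughly as follows: count sentence-level co-occurrences over the corpus, optionally convert the counts to PPMI, and connect two words of an utterance when their score exceeds a threshold. The tokenization and the threshold value are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations
import math

def build_cooccurrence(corpus):
    """corpus: list of tokenized sentences; the context window is the whole sentence."""
    pair_counts, word_counts, num_sents = Counter(), Counter(), 0
    for sent in corpus:
        words = sorted(set(sent))
        word_counts.update(words)
        pair_counts.update(combinations(words, 2))
        num_sents += 1
    return pair_counts, word_counts, num_sents

def ppmi_scores(pair_counts, word_counts, num_sents):
    """Positive pointwise mutual information for every co-occurring word pair."""
    return {(a, b): max(math.log(c * num_sents / (word_counts[a] * word_counts[b])), 0.0)
            for (a, b), c in pair_counts.items()}

def contextual_edges(sentence, scores, threshold=0.5):
    """Edges for one utterance: connect word pairs whose score passes the threshold."""
    words = sorted(set(sentence))
    return [(a, b) for a, b in combinations(words, 2)
            if scores.get((a, b), 0.0) >= threshold]
```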
Effect of using Hops: As we increased the number of hops of GCNs, we observed a decrease in performance. One reason for such a drop could be that the average utterance length is very small (7.76 words). Thus, there isn't much scope for capturing distant neighbourhood information, and more hops can add noisy information. Please refer to Appendix B for detailed results on the effect of varying the number of hops.

Frequency vs PPMI graphs: We observed that PPMI based contextual graphs were slightly better than frequency based contextual graphs (see Appendix C). In particular, when using PPMI as opposed to the frequency based contextual graph, we observed a gain of 0.95 in per-response accuracy, 0.45 in BLEU, 0.64 in ROUGE and 1.22 in entity F1 score when averaged across all the code-mixed languages.

Effect of using Random Graphs: GCN-SeA+Random and GCN-SeA+Structure take the token embeddings directly instead of passing them through an RNN. This ensures that the difference in performance of the two models is not influenced by the RNN encodings. The results are shown in table 3, and we observe a drop in performance for GCN-SeA+Random across all the languages. This shows that any random graph does not contribute to the performance gain of GCN-SeA and that the dependency and contextual structures do play an important role.
Table 3: GCN-SeA with random graphs and frequency co-occurrence graphs on all DSTC2 datasets.

Ablations: We experiment with replacing the sequential attention by the Bahdanau attention BID0. We also experiment with various combinations of RNNs and GCNs as encoders. The results of these ablations are also reported.

We showed that structure aware representations are useful in goal-oriented dialogue and we obtain state-of-the-art performance on the modified DSTC2 dataset and its recently released code-mixed versions. We used GCNs to infuse structural information of dependency graphs and contextual graphs to enrich the representations of the dialogue context and KB. We also proposed a sequential attention mechanism for combining the representations of (i) the query (current utterance), (ii) the conversation history and (iii) the KB. Finally, we empirically showed that when dependency parsers are not available for certain languages, such as code-mixed languages, we can use word co-occurrence frequencies and PPMI values to extract a contextual graph and use such a graph with GCNs for improved performance.

south part of town. bot api call R cuisine south moderate api call R cuisine south moderate KB Triples: pizza hut cherry hinton R post code pizza hut cherry hinton post code pizza hut cherry hinton R cuisine italian pizza hut cherry hinton R location south pizza hut cherry hinton R phone pizza hut cherry hinton phone pizza hut cherry hinton R address pizza hut cherry hinton address pizza hut cherry hinton R price moderate pizza hut cherry hinton R rating 3 restaurant alimentum R post code restaurant alimentum post code restaurant alimentum R cuisine european restaurant alimentum R location south restaurant alimentum R phone restaurant alimentum phone restaurant alimentum R address restaurant alimentum address restaurant alimentum R price moderate restaurant alimentum R rating 10 user <SILENCE> <SILENCE> bot restaurant alimentum is a nice restaurant in the south of town serving modern european food. restaurant alimentum ek bohot hi accha restaurant hai south town me jo modern european food serve karta hai user can i get the address? kya mujhe address mil sakta hai bot Sure, restaurant alimentum is on restaurant alimentum address.
Bilkul, restaurant alimentum restaurant alimentum address par hai user and whats the phone number for that? aur iska phone number kya hai? bot The phone number of restaurant alimentum is restaurant alimentum phone.restaurant alimentum ka phone number hai restaurant alimentum phone user okay, thank you, good bye. theek, thank you, good bye. bot you are welcome. welcome, alvida. | We propose a Graph Convolutional Network based encoder-decoder model with sequential attention for goal-oriented dialogue systems. | 1,073 | scitldr |
Effectively capturing graph node sequences in the form of vector embeddings is critical to many applications. We achieve this by (i) first learning vector embeddings of single graph nodes and (ii) then composing them to compactly represent node sequences. Specifically, we propose SENSE-S (Semantically Enhanced Node Sequence Embedding - for Single nodes), a skip-gram based novel embedding mechanism, for single graph nodes that co-learns graph structure as well as their textual descriptions. We demonstrate that SENSE-S vectors increase the accuracy of multi-label classification tasks by up to 50% and link-prediction tasks by up to 78% under a variety of scenarios using real datasets. Based on SENSE-S, we next propose generic SENSE to compute composite vectors that represent a sequence of nodes, where preserving the node order is important. We prove that this approach is efficient in embedding node sequences, and our experiments on real data confirm its high accuracy in node order decoding. Accurately learning vector embeddings for a sequence of nodes in a graph is critical to many scenarios, e.g., a set of Web pages regarding one specific topic that are linked together. Such a task is challenging as: (i) the embeddings may have to capture graph structure along with any available textual descriptions of the nodes, and moreover, (ii) nodes of interest may be associated with a specific order. For instance, (i) for a set of Wikipedia pages w.r.t. a topic, there exists a recommended reading sequence; (ii) an application may consist of a set of services/functions, which must be executed in a particular order (workflow composability); (iii) in source routing, the sender of a packet on the Internet specifies the path that the packet takes through the network or (iv) the general representation of any path in a graph or a network, e.g., shortest path. Node sequence embedding, thus, requires us to (i) learn embeddings for each individual node of the graph and (ii) compose them together to represent their sequences. To learn the right representation of individual nodes and also their sequences, we need to understand how these nodes are correlated with each other both functionally and structurally. A lot of work has only gone into learning single node embeddings (i.e., where node sequence length is 1), as they are essential in feature representations for applications like multi-label classification or link prediction. For instance, algorithms in BID22, BID4, BID28 and others try to extract features purely from the underlying graph structure; algorithms in BID12, BID19 and others learn vector representations of documents sharing a common vocabulary set. However, many applications would potentially benefit from representations that are able to capture both textual descriptions and the underlying graph structure simultaneously. For example, classification of nodes in a network not only depends on their inter-connections (i.e., graph structure), but also nodes' intrinsic properties (i.e., their textual descriptions); for product recommendations, if the product is new, it may not have many edges since not many users have interacted with it; however, using the textual descriptions along with the graph structure allows for efficient bootstrapping of the recommendation service. For general case of sequence lengths greater than 1, despite the importance in applications like workflow composability described above, there is generally a lack of efficient solutions. 
Intuitively, we can concatenate or add all involved node vectors; however, such a mechanism either takes too much space or loses the sequence information; thus unable to represent node sequences properly. We aim to learn node sequence embeddings by first first addressing the single node embedding issue, as a special case of node sequence embedding, by considering both the textual descriptions and the graph structure. We seek to answer two questions: How should we combine these two objectives? What framework should we use for feature learning? Works that jointly address these two questions either investigate them under different problem settings BID1 BID32, under restricted learning models BID13, ignore the word context within the document BID16, do not co-learn text and graph patterns or only consider linear combinations of text and graph BID3; this is elaborated further in Section 2. In contrast, we propose a generic neural-network-based model called SENSE-S (Semantically Enhanced Node Sequence Embeddings -for Single nodes) for computing vector representations of nodes with additional semantic information in a graph. SENSE-S is built on the foundation of skip-gram models. However, SENSE-S is significantly different from classic skipgram models in the following aspects: (i) For each word φ in the textual description of node v in the given graph, neighboring words of φ within v's textual description and neighboring nodes of v within the graph are sampled at the same time.(ii) The text and graph inputs are both reflected in the output layer in the form of probabilities of co-occurrence (in graph or text). (iii) Moreover, this joint optimization problem offers an opportunity to leverage the synergy between the graph and text inputs to ensure faster convergence. We evaluate the generated vectors on (i) to show that our SENSE-S model improves multi-label classification accuracy by up to 50% and (ii) Physics Citation dataset BID14 to show that SENSE-S improves link prediction accuracy by up to 78% over the state-of-the-art. Next, we propose SENSE for general feature representation of a sequence of nodes. This problem is more challenging in that (i) besides the original objectives in SENSE-S, we now face another representation goal, i.e., sequence representation while preserving the node order; (ii) it is important to represent the sequence in a compact manner; and (iii) more importantly, given a sequence vector, we need to be able to decipher which functional nodes are involved and in what order. To this end, we develop efficient schemes to combine individual vectors into complex sequence vectors that address all of the above challenges. The key technique we use here is vector cyclic shifting, and we prove that the different shifted vectors are orthogonal with high probability. This sequence embedding method is also evaluated on the Wikispeedia and Physics Citation datasets, and the accuracy of decoding a node sequence is shown to be close to 100% when the vector dimension is large. We overview the most related works by categorizing them as follows:Learning vector representation from text: Vector representation of words (Schütze, 1993) has been a long standing research topic. It has received significant attention in the recent times due to the advances in deep neural networks BID0 BID18. In particular, these neural-network-based schemes outperform n-gram-based techniques BID10 BID9 significantly as they are able to learn the similarities between words. 
Furthermore, paragraph2vec BID12 extends the well-established word2vec BID19 to learn representations of chunks of text. Learning vector representation from graphs: Lot of research has gone into learning graph representations by translating the network/graph into a set of words or documents BID22 BID23 BID24 BID29 BID28 BID4. Generic models incorporating both edge weight and direction information for graph embeddings are proposed in BID39, BID28 and BID4. Specifically, node2vec BID4 advances the state-of-the-art in this area by designing flexible node sampling methodologies to allow feature vectors to exhibit different properties. Subgraph2vec BID20 extends these schemes to learn vector representations of subgraphs. BID26 proposes techniques to represent graph sequences under the assumption that each node is represented by a random binary vector. Learning graph representation with auxiliary information: Broadly speaking, our work falls into the category of node embedding in graphs with auxiliary information. BID5, BID33, BID7 and others address the case where nodes are associated with labels. BID21 studies graph embedding when node/edge attributes are continuous. BID1 investigates phrase ambiguity resolution via leveraging hyperlinks. However, all these works operate under information or network constraints. On the other hand, BID32, BID36 and explore embedding strategies in the context of knowledge graphs, where the main goal is to maintain the entity relationships specified by semantic edges. In contrast, we consider a simpler network setting where only nodes are associated with semantic information. EP BID3 and GraphSAGE BID6 learn embeddings for structured graph data. However, the textual similarities are only captured by linear combinations. Planetoid computes node embeddings under semi-supervised settings; metapath2vec BID2 learns embeddings for heterogeneous networks (node can be author or paper); and Graph-Neural-Network-based embeddings are explored in BID11 and BID17. However, these papers do not explicitly learn graph structure and text patterns simultaneously, and thus are complementary to SENSE. SNE BID16, SNEA, TADW BID34, HSCA BID37, AANE BID8, ANRL BID38 and PLANE BID13 are more related to our work. However, unlike SENSE, these do not consider the relative context of the words w.r.t. the document. Furthermore, the objective of PLANE is to maximize the likelihood that neighboring nodes have similar embeddings, which is not always the case in practice because neighboring nodes may be semantically different; more critically, it relies on strong assumptions of statistical distributions of words and edges in the network. In this regard, we propose a generic embedding scheme that jointly considers network topology as well as the nodes' semantic information. To embed a general node sequence, we first consider a special case where each node sequence contains only one node. Such single node embedding is referred to as SENSE-S, which jointly learns node representations along with textual descriptions in graphs. Let G = (V, E) denote a given directed or undirected graph, where V is the set of nodes and E the set of edges. Each node in V is associated with a text description. We aim to embed each node v in V into a feature vector that captures both graphical (neighboring node inter-connections) and textual (semantic meaning of v) properties. Specifically, let φ denote a word in the text description of node v. 
Suppose we obtain a set of neighboring nodes of v in graph G via a specific node sampling strategy, e.g., biased random walk BID4, and a set of neighboring words of φ in the text description of v by a sliding window over consecutive words BID19. We then define N G (v) as a probabilistic event of observing the set of neighboring nodes of v in G (under the chosen model) and N T (φ|v) as an event of observing the set of neighboring words of φ in the text description of v. DISPLAYFORM0 be the embedding function that maps each node v in V into a d-dimensional vector. Our goal is to find function f that maximizes: DISPLAYFORM1 Since events N G (v) and N T (φ|v) are independent, can be rewritten as: DISPLAYFORM2 where w v is the number of words in the description of v. Given a word in a node, jointly captures the node neighborhood in the graph and the word neighborhood in text. We build SENSE-S on the foundation of the skip-gram model BID19. In particular, BID4 and BID22 leverage this skip-gram model to learn vector representation of nodes in a graph by performing biased random walks on the graph and treating each walk as equivalent to a sentence, aiming to predict a neighboring node given the current node in a graph. On the other hand, BID12 extends skip-gram models to learn embeddings of various chunks of text, e.g., sentences, paragraphs or entire documents, via sampling the neighboring words over a sliding window within the text. Motivated by the effectiveness of these models, we build two SENSE-S models, SENSE-S (add) and SENSE-S (concat), as detailed below. Let w denote the number of words (some uninformative words are skipped) in text descriptions across all nodes in the given graph G, i.e., w is the size of the entire vocabulary, and n the number of nodes in G. Then our model is formulated as a neural network, as shown in Figure 1. In this model, each input is a two-tuple (φ, v), i.e., word φ is picked from the description of node v. Then (φ, v) is mapped to two one-hot vectors, w-dimensional word vector φ and n-dimensional vertex vector v, where only the entries corresponding to φ and v are set to 1 and others are set to 0. Then as in typical fully connected neural network architectures, in the first layer, φ T and v T are multiplied (implemented as look-up for efficiency) by two matrices M w×d and M n×d, respectively. The ing vectors are then added together and multiplied by another matrix M d×(w+n) in the second layer, i.e., a (w DISPLAYFORM0 is obtained. Finally, unlike typical skip-gram models where all entries in the output vectors of the second layer are processed by a softmax function, we decompose vector h into two sub-vectors, h 1 consisting of the first w entries and h 2 consisting of the rest. Then h 1 and h 2 are fed to separate softmax functions, yielding h 1 and h 2, respectively (see Figure 1). The reason for this decomposition operation is that we use h 1 to represent the probability vector of neighboring words of φ in the description of node v, and h 2 to represent the probability vector of neighboring nodes of v in the graph. Using this neural network architecture, we aim to learn the values of all entries in the matrices M w×d, M n×d and M d×(w+n) such that the entries (i.e., probabilities) in h 1 and h 2 corresponding to neighboring words of φ (within the text description of node v) and neighboring nodes of v (within the given graph) are maximized; see the objective in. 
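A minimal sketch of the SENSE-S (add) forward pass just described; the split of the output into separate word-neighbourhood and node-neighbourhood softmaxes is the key departure from a standard skip-gram model. Variable names and the dense matrix multiplication (rather than negative sampling or other softmax approximations) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sense_s_add_forward(word_idx, node_idx, M_word, M_node, M_out):
    """SENSE-S (add) forward pass (sketch).
    M_word: (w, d) word embedding matrix; M_node: (n, d) node embedding matrix
    (whose rows become the final node embeddings); M_out: (d, w + n) output matrix."""
    w = M_word.shape[0]
    hidden = M_word[word_idx] + M_node[node_idx]   # the concat variant stacks them instead
    h = hidden @ M_out                             # (w + n,)
    h1 = softmax(h[:w])                            # probabilities of neighbouring words of phi
    h2 = softmax(h[w:])                            # probabilities of neighbouring nodes of v
    return h1, h2
```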
We use matrix M n×d as our final semantically augmented node embeddings, i.e., each row in M n×d corresponds to a node embedding vector. Note that a unique property of SENSE-S is that it is a conjoined model where the textual and graphical inputs both contribute to the learning of node embeddings. Moreover, there is an add operation for combining these features, and thus this model is called SENSE-S (add). This is in contrast to the concatenation operation that is used in a different implementation of SENSE-S; see below. FIG3. Clearly, SENSE-S (concat) is quite similar to SENSE-S (add), except that (i) the ing vectors generated by the first layer are concatenated, and (ii) the matrix dimension in the second layer becomes (2d) × (w + n). In SENSE-S, our focus has been on computing semantically enhanced embeddings for individual nodes. In this section, we propose the general SENSE to represent any set of nodes following a specified order using the node vectors generated by SENSE-S, called node sequence embedding. Given the original graph G = (V, E), let S = v 1 → v 2 → · · · → v q be a node sequence constructed with v i ∈ V (i = 1, 2, . . ., q). Note that S may contain repeated nodes, e.g., some functions need to be executed more than once in one application. Intuitively, node sequence S can be represented by a d × q matrix with column i being the vector representation of node v i. However, such a representation is costly in space. Alternatively, representing S by the vector sum q i=1 v i (v i corresponds to v i) in missing the node order information. Hence, in this section, we seek to find a low-dimensional vector representation such that (i) node properties in the original network G and (ii) the node order in S are both preserved. In this section, all node vectors are unit vectors obtained by normalizing the node vectors generated by SENSE-S; this property is critical in our node sequence embedding method (see Section 4.2).Node Sequence Vector Construction: Given a node sequence S = v 1 → v 2 → · · · → v q following in order from node v 1 to node v q, let v i be the unit node vector of node v i. We first perform positional encoding, via cyclic shift function. Specifically, given vector v of dimension-d and non-negative integer m, we define v. Note that for repeated nodes in S, they are cyclically shifted by different positions depending on the specific order in the node sequence. In S, by imposing the restriction q d, we ensure that wraparounds do not occur while shifting the vector, because it may lead to ambiguity of node positions within a node sequence. Simple as this embedding approach may seem, we show that it exhibits the following advantages. First, the dimension of node sequence vectors remains the same as that of the original node vectors. Second, given a node sequence vector S, we are able to infer which nodes are involved in S and their exact positions in S as explained below. Node Sequence Vector Decoding: The method for determining which nodes are included (and in which order) in a given node sequence vector is referred to as node sequence vector decoding. The basic idea in node sequence vector decoding is rooted in Theorem 2 (Section 4.2), which implies that using cyclic shifting, we essentially enable a preferable property that v. By this property, we make the following claim, assuming that all node vectors are unit vectors. Claim 1. 
Given a node sequence vector S, node v, whose node vector is v, is at the k-th position of this node sequence if the inner product S · v (k−1) ≈ 1.Claim 1 provides an efficient way to decode a given node sequence. In particular, to determine whether (and where) a node resides in a node sequence, it only takes quadratic complexity O(d 2).Example: Suppose we have a node sequence S = 1 → 2 → 3 consisting of three nodes. By SENSE-S and our encoding method, we construct its node sequence vector as S = v1 +v2 +v3. Then each node can compute the inner product of its cyclic shifted vector with S. If the is approximately 1, then its position in this node sequence is uniquely determined. For instance, DISPLAYFORM0 2 ≈ 0 + 1 + 0 = 1 (see Theorem 2). Thus, given the encoded node sequence S, we know node 2 is at the second position. We now present our theoretical , to support Claim 1. Proofs are presented in the appendix. = 0 happens with high probability when the vector dimension is sufficiently large, although v a and v b are originally related. This section evaluates SENSE under varying scenarios, with multiple datasets and applications. We evaluate SENSE on the following datasets that contain graph information along with textual descriptions:: it contains both Wikipedia plain text articles (text descriptions) and hyper links between articles (graph structure). It is a directed graph with 4, 604 nodes (each is a Wikipedia article) and 119, 882 hyper links. Citation Network BID14: it contains both textual descriptions (title, authors and abstract of the paper) and graphical structure (citations). It is a directed graph with 27, 770 nodes (papers) and 352, 807 edges (citations). To train our SENSE-S model, we first define the loss function based on. Suppose the vocabulary size associated with each node v ∈ V is similar, i.e., we assume ∀v ∈ V, w v ≈ c (c is a constant). Then let DISPLAYFORM0 We define the loss function as DISPLAYFORM1 We then use stochastic gradient descent to minimize our loss function. Let η be the learning rate. At each iteration, we update the model parameters by adding a fraction of −∇L, i.e.,−η∇L = cη∇F G + η∇F T. Since the per-node vocabulary size is much larger than 1, the node neighborhood sampling via random walk is much less frequent than the textual neighborhood sampling. Therefore we want to inject more input data consisting of only the nodes' graphical neighborhood information. To do so, we adjust the model parameter update rule −η∇L as DISPLAYFORM2 where β 1 and β 2 are the equivalent learning rates for graph inputs and text inputs, respectively. We first evaluate SENSE-S, which is compared against the following baseline solutions. To compare with schemes that use graph information alone, we use node2vec since it has been shown to be flexible to capture different graph properties. To compare against schemes that use textual information alone, we use semantic vectors from paragraph2vec since it outperforms other schemes such as Recursive Neural Tensor Networks BID27 ) for tasks like Sentiment Analysis. As in SENSE-S, we also study two implementations of paragraph2vec, i.e., addition and concatenation operations at the hidden layer, referred to as paragraph2vec (add) and paragraph2vec (concat), respectively. 
For joint text/graph learning, we compare with the following: 1) Initialize with semantic vectors: We learn embeddings using node2vec, but rather than using random initialization, we initialze the vectors using paragraph2vec.2) Initialize with graphical vectors: Here, we learn final embeddings using paragraph2vec, but initialize them with node2vec, i.e., just reverse of the scheme above.3) Iterative Vectorization: The above approaches only leverage semantic or graphical vectors for initializations. Here, we try to capture both iteratively. Specifically, in one iteration, we compute node embedding via node2vec with the embeddings from the previous iteration as initializations; the corresponding are then fed to paragraph2vec as initializations to further compute node embeddings, after which we go to the next iteration. We repeat this process multiple times (5 times in our experiment) to get the final embeddings. Here, we simply concatenate the vectors obtained from paragraph2vec and node2vec and use them as our node embedding vectors. Test splits of data. Document length = first 500 characters (n2v: node2vec, p2v (add)/(cat): paragraph2vec (add)/(concat), n2v w/ p2v: node2vec with paragraph2vec (add) initialization, p2v w/ n2v: paragraph2vec (add) with node2vec initialization, IteraVec: Iterative Vectorization, cat(p2v,n2v): concatenation of p2v and n2v vectors).Experimental Setup: We first learn the vector representations and then use these vector representations for two different tasks: (i) Multi-label classification: Wikipedia pages are classified into different categories, such as history, science, people, etc. This ground truth information is included in the Wikispeedia dataset (which is not used while learning the vectors). There are 15 different top level categories, and our multi-label classification task tries to classify a page into one or more of these categories based on the vectors obtained from different algorithms. We train the OneVsRestClassifier (SVM, linear kernel) from scikit-learn for this task.(ii) Link prediction: Since no category information is available for the Citation Network, we evaluate for link prediction. In particular, 1% of existing citations are removed, after which vectors are learned on this network. We use these removed links as positive samples for link prediction. For negative samples, we randomly sample the same number of pairs which are not linked via a citation in the original network. To obtain the similarity features w.r.t. a pair of nodes, after experimenting with several alternatives, we chose the element-wise absolute difference and train SVM classifier (linear kernel) for link prediction. Parameter settings: (i) We perform κ random walks starting from each node in the graph (κ is 10 for Wikispeedia and 3 for Citation Network, since Citation Network is larger); (ii) each walk is of length 80 as recommended by BID4; (iii) we use sliding window of size 5 for neighboring word sampling; (iv) the default node vector dimension is 128; (v) multi-label classification error is the misclassification rate over the test set; (vi) link prediction error is the percentage of incorrect link predictions over all pairs of papers in the test set; (vii) learning rates β 1 and β 2 in are selected based on the validation set, and the error is reported on the test set. The error of multi-label classification is reported in FIG4, where the first 500 characters of each Wikipedia page are selected as its textual description. 
Third, among the schemes that utilize both textual and graphical information, SENSE-S (add) and SENSE-S (concat) consistently perform the best. This is because we train the network to co-learn the textual and graphical information to ensure that both objectives converge. This is in contrast with other schemes where the two objectives, due to the loose coupling, are not guaranteed to converge. Finally, SENSE-S (add) (in FIG4 (c)) outperforms node2vec by over 40%, paragraph2vec (add) by over 55% and the closest baseline scheme that leverages both text and graph by 30%. This confirms the benefit of co-learning features from textual as well as graphical information under the SENSE-S architecture. We also see similar trends using the first 1, 000 characters and omit the for brevity. The error of link prediction in the Citation Network is reported in FIG5. We fix train:valid:test to 60%:20%:20% (based on number of links) and use the first 500 characters as text description. We make several interesting observations. First, schemes that use graph information alone (node2vec) substantially outperform schemes that use text descriptions alone (paragraph2vec) for link prediction task. Intuitively, this is because the neighborhood information, which is important for link prediction task, is captured effectively by the node embeddings obtained from node2vec-like techniques. Second, even in cases where the difference in accuracy using the two different sources of information is large, SENSE-S (add) and (cat) are robust, and can effectively extract useful information from text descriptions to further reduce the error of link prediction that uses graph structure alone. Finally, this is significant because it demonstrates the effectiveness of SENSE-S in a variety of scenarios, including cases where the two sources of information may not be equally valuable. We now evaluate the accuracy of encoding/decoding by SENSE for node sequences. We evaluate on three experiments via constructing node sequences in the following different ways: Experiment 1: the node at every position in a sequence is chosen uniformly at random from 4, 604 Wikispeedia nodes. Note that as mentioned earlier, this node sequence may contain repeated nodes (which is allowed), and may or may not be a subgraph of the original graph. Experiment 2: the node sequences are constructed by performing random walks on the Wikispeedia graph. Experiment 3: the node sequences are constructed by performing random walks on the Physics Citation Network. Note that in both Experiments 2 and 3, adjacent nodes in the node sequences will have related vector embeddings. Next, for such constructed node sequences, their vector representations are computed by SENSE-S. Given these sequence vectors, we then decode the node at each position. We evaluate the decoding accuracy under different sequence lengths and vector dimensions, as reported in FIG6.From FIG6 (a), we make the following observations. First, when the node sequence length is small, all vector dimensions lead to almost 100% decoding accuracy. Second, as node sequence length increases, the decoding accuracy declines sharply especially in cases where the node vector dimensions are relatively small, i.e., 128 and 256. This is because, by Theorem 2, correlations between the involved node vectors cause inevitable errors. Such error accumulates when the length of the node sequence is large. 
Nevertheless, with sufficiently large node vector dimension, i.e., 1024, even long sequences can be decoded perfectly, and with 512, we can decode a workflow of length 10 with over 90% accuracy. Interestingly, FIG6 and (c) also show similar trends. This is significant, as it shows that even if node sequences are constructed from correlated node vectors, i.e., picked from same graph neighborhood, the decoding still achieves high accuracy. This is because, as shown in Theorem 2, after cyclic shifting, the ing vectors are orthogonal with high probability when the vector dimension is large (even if these vectors are originally related). Finally, in Figures 5 (a) and (b) the decoding algorithm needs to find the best match from among 4, 592 nodes (Wikispeedia network). In contrast, in FIG6 (c), it is much more challenging to find the match among 27, 770 nodes. Yet, we are able to decode with high accuracy with theoretical guarantees. We presented SENSE that learns semantically enriched vector representations of graph node sequences. To achieve this, we first developed SENSE-S that learns single node embeddings via a multi-task learning formulation that jointly learns the co-occurrence probabilities of nodes within a graph and words within a node-associated document. We evaluated SENSE-S against state-ofthe-art approaches that leverage both graph and text inputs and showed that SENSE-S improves multi-label classification accuracy in Wikispeedia dataset by up to 50% and link prediction over Physics Citation network by up to 78%. We then developed SENSE that is able to employ provable schemes for vector composition to represent node sequences using the same dimension as the individual node vectors from SENSE-S. We demonstrated that the individual nodes within the sequence can be inferred with a high accuracy (close to 100%) from such composite SENSE vectors. A LEMMA FOR THE PROOF OF THEOREM 2 Proof. Since both x and y are unit vectors, we have x · y = ||x||2||y||2 cos θ = cos θ, where θ is the angle between x and y. Since x and y are not correlated and both x and y are uniformly distributed across the sphere surface, θ is also uniformly distributed, and thus E[x · y] = 0.As x · y is purely determined by the angle θ between x and y, without loss of generality, we select y = | Node sequence embedding mechanism that captures both graph and text properties. | 1,074 | scitldr |
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures so that CNNs are non-robust to adversarial perturbations over textures, while traditional robust visual features like SIFT (scale-invariant feature transforms) are designed to be robust across a substantial range of affine distortion, addition of noise, etc with the mimic of human perception nature. This paper aims to leverage good properties of SIFT to renovate CNN architectures towards better accuracy and robustness. We borrow the scale-space extreme value idea from SIFT, and propose EVPNet (extreme value preserving network) which contains three novel components to model the extreme values: parametric differences of Gaussian (DoG) to extract extrema, truncated ReLU to suppress non-stable extrema and projected normalization layer (PNL) to mimic PCA-SIFT like feature normalization. Experiments demonstrate that EVPNets can achieve similar or better accuracy than conventional CNNs, while achieving much better robustness on a set of adversarial attacks (FGSM,PGD,etc) even without adversarial training. Convolutional neural networks (CNNs) evolve very fast ever since AlexNet makes a great breakthrough on ImageNet image classification challenge in 2012. Various network architectures have been proposed to further boost classification performance since then, including VGGNet , GoogleNet, ResNet , DenseNet and SENet, etc. Recently, people even introduce network architecture search to automatically learn better network architectures . However, state-of-the-art CNNs are challenged by their robustness, especially vulnerability to adversarial attacks based on small, human-imperceptible modifications of the input . thoroughly study the robustness of 18 well-known ImageNet models using multiple metrics, and reveals that adversarial examples are widely existent. Many methods are proposed to improve network robustness, which can be roughly categorized into three perspectives: modifying input or intermediate features by transformation , denoising ), generative models ; modifying training by changing loss functions (; ;, network distillation , or adversarial training designing robust network architectures; ) and possible combinations of these basic categories. For more details of current status, please refer to a recent survey . Although it is known that adversarial examples are widely existent, some fundamental questions are still far from being well studied like what causes it, and how the factor impacts the performance, etc. One of the interesting findings in is that model architecture is a more critical factor to network robustness than model size (e.g. number of layers). Some recent works start to explore much deeper nature. For instance, both show that CNNs are trained to be strongly biased towards textures so that CNNs do not distinguish objects contours from other local or even noise edges, thus perform poorly on shape dominating object instances. On the contrary, there are no statistical difference for human behaviors on both texture rich objects and global shape dominating objects in psychophysical trials. further analyze and show that deep convolutional features can be categorized into robust and non-robust features, while non-robust features may even account for good generalization. However, non-robust features are not expected to have good model interpretability. 
It is thus an interesting topic to disentangle robust and non-robust features with certain kinds of human priors in the network designing or training process. In fact, human priors have been extensively used in handcraft designed robust visual features like SIFT . SIFT detects scale-space extrema from input images, and selects stable extrema to build robust descriptors with refined location and orientation, which achieves great success for many matching and recognition based vision tasks before CNN being reborn in 2012 . The scale-space extrema are efficiently implemented by using a difference-of-Gaussian (DoG) function to search over all scales and image locations, while the DoG operator is believed to biologically mimic the neural processing in the retina of the eye . Unfortunately, there is (at least explicitly) no such scale-space extrema operations in all existing CNNs. Our motivation is to study the possibility of leveraging good properties of SIFT to renovate CNN networks architectures towards better accuracy and robustness. In this paper, we borrow the scale-space extrema idea from SIFT, and propose extreme value preserving networks (EVPNet) to separate robust features from non-robust ones, with three novel architecture components to model the extreme values: parametric DoG (pDoG) to extract extreme values in scale-space for deep networks, truncated ReLU (tReLU) to suppress noise or non-stable extrema and projected normalization layer (PNL) to mimic PCA-SIFT like feature normalization. pDoG and tReLU are combined into one block named EVPConv, which could be used to replace all k × k (k > 1) conv-layers in existing CNNs. We conduct comprehensive experiments and ablation studies to verify the effectiveness of each component and the proposed EVPNet. Figure 1 illustrates a comparison of responses for standard convolution + ReLU and EVPConv in ResNet-50 trained on ImageNet, and shows that the proposed EVPConv produces less noises and more responses around object boundary than standard convolution + ReLU, which demonstrates the capability of EVPConv to separate robust features from non-robust ones. Our major contribution are: • To the best of our knowledge, we are the first to explicitly separate robust features from non-robust ones in deep neural networks from an architecture design perspective. • We propose three novel network architecture components to model extreme values in deep networks, including parametric DoG, truncated ReLU, and projected normalization layer, and verify their effectiveness through comprehensive ablation studies. • We propose extreme value preserving networks (EVPNets) to combine those three novel components, which are demonstrated to be not only more accurate, but also more robust to a set of adversarial attacks (FGSM, PGD, etc) even for clean model without adversarial training. Robust visual features. Most traditional robust visual feature algorithms like SIFT and SURF are based on the scale-space theory , while there is a close link between scale-space theory and biological vision , since many scalespace operations show a high degree of similarity with receptive field profiles recorded from the mammalian retina and the first stages in the visual cortex. For instance, DoG computes the difference of two Gaussian blurred images and is believed to mimic the neural processing in the retina . 
SIFT is one such kind of typical robust visual features, which consists of 4 major stages: scale-space extrema detection with DoG operations; Keypoints localization by their stability; Orientation and scale assignment based on primary local gradient direction; Histogram based keypoint description. We borrow the scale-space extrema idea from SIFT, and propose three novel and robust architecture components to mimic key stages of SIFT. Robust Network Architectures. Many research efforts have been devoted to network robustness especially on defending against adversarial attacks as summarized in. However, there are very limited works that tackle this problem from a network architecture design perspective. A major category of methods focus on designing new layers to perform denoising operations on the input image or the intermediate feature maps. Most of them are shown effective on black-box attacks, while are still vulnerable to white-box attacks. Non-local denoising layer proposed in is shown to improve robustness to white-box attack to an extent with adversarial training . Peer sample information is introduced in with a graph convolution layer to improve network robustness. Biologically inspired protection introduces highly non-linear saturated activation layer to replace ReLU layer, and demonstrates good robustness to adversarial attacks, while similar higher-order principal is also used in. However, these methods still lack a systematic architecture design guidance, and many are not robust to iterative attack methods like PGD under clean model setting. In this work, inspired by robust visual feature SIFT, we are able to design a series of innovative architecture components systematically for improving both model accuracy and robustness. We should stress that extreme value theory is a different concept to scale-space extremes, which tries to model the extreme in data distribution, and is used to design an attack-independent metric to measure robustness of DNNs by exploring input data distribution. Difference-of-Gaussian. Given an input image I and Gaussian kernel G(x, y, σ) as below where σ denotes the variance. Also, difference of Gaussian (DoG) is defined as where ⊗ is the convolution operation, and I 1 = G(x, y, σ) ⊗ I 0. Scale-space DoG repeatedly convolves input images with the same Gaussian kernels, and produces difference-of-Gaussian images by subtracting adjacent image scales. Scale-space extrema (maxima and minima) are detected in DoG images by comparing a pixel to its 26 neighbors in 3×3 grids at current and two adjacent scales . Adversarial Attacks. We use h(·) to denote the softmax output of classification networks, and h c (·) to denote the prediction probability of class c. Then given a classifier h(x) = y, the goal of adversarial attack is to find x adv such that the output of classifier deviates from the true label y: Attack Method. The most simple adversarial attack method is Fast Gradient Sign Method (FGSM) , a single-step method which takes the sign of the gradient on the input as the direction of the perturbation. L(·, ·) denotes the loss function defined by cross entropy. Specifically, the formation is as follows: where x is the clean input, y is the label. is the norm bound (||x − x adv || ≤, i.e. -ball) of the adversarial perturbation. Projected gradient descent (PGD) iteratively applies FGSM with a small step size α i (a;) with formulation as below: where i is the iteration number, α = /T with T being the number of iterations.' 
Proj' is the function to project the image back to -ball every step. Some advanced and complex attacks are further introduced in DeepFool , CW , MI-FGSM. Adversarial Training aims to inject adversarial examples into training procedure so that the trained networks can learn to classify adversarial examples correctly. Specifically, adversarial training solves the following empirical risk minimization problem: where A(x) denotes the area around x bounded by L ∞ /L 2 norm, and H is the hypothesis space. In this work, we employ both FGSM and PGD to generate adversarial examples for adversarial training. Inspired by traditional robust visual feature SIFT, this paper aims to improve model accuracy and robustness by introducing three novel network architecture components to mimic some key components in SIFT: parametric DoG (pDoG), truncated ReLU (tReLU), and projected normalization layer (PNL). Combining pDoG and tReLU constructs the so-called extreme value preserving convolution (EVPConv) block as shown in Figure 2 (a), which can be used to replace all k × k (k > 1) conv-layers in existing CNNs. PNL is a new and robust layer plugged in to replace global average pooling (GAP) layer as shown in Figure 2 (c). A network with all the three components is named as extreme value preserving network (EVPNet). In the following, we will describe these three components in details separately, and elaborate on how they are used to construct the EVPConv block and EVPNet. Parametric DoG (pDoG) is a network component we design to mimic DoG operation. Recall DoG in Equation 2, it repeatedly convolves input images with the same Gaussian kernel in which kernel size σ is designable, and then computes the differences for adjacent Gaussian blurred images. For CNNs, we mimic DoG with two considerations. First, we replace the Gaussian kernel with a learnable convolutional filter. Specifically, we treat each channel of feature map separately as one image, and convolve it with a learnable k × k kernel to mimic Gaussian convolution. Note that the learnable convolution kernel is not required to be symmetric since some recent evidence shows that non-symmetric DoG may perform even better (; Winnemöller, 2011). Applying the procedure to all the feature-map channels is equal to a depth-wise (DW) convolution. Second, we enforce successive depth-wise convolutions in the same block with shared weights since traditional DoG operation uses the same Gaussian kernel. As CNNs produce full scale information at different stages with a series of convolution and downsampling layers, each pDoG block just focuses on producing extrema for current scale, while not requiring to produce full octave extrema like SIFT. The shared DW convolution introduces minimum parameter overhead, and avoid "extrema drift" in the pDoG space so that it may help finding accurate local extrema. Formally, given input feature map f 0, a minimum of two successive depth-wise convolution is applied as where DW is depth-wise convolution with w as the shared weights. pDoG is thus computed as It is worth noting that the successive minus operations make the minus sign not able to be absorbed into w for replacing minus into addition operation. To the best of our knowledge, this is the first time, minus component has been introduced into deep neural networks, which brings totally new element for architecture design/search. 
Following SIFT, we compute local extrema (maxima and minimal) across the pDoG images using maxout operations : Note we do not compute local extrema in 3 × 3 spatial grids as in SIFT since we do not require keypoint localization in CNNs. Finally, to keep the module compatible to existing networks, we need ensure the output feature map to be of the same size (number of channels and resolution). Therefore, a maxout operation is taken over to merge two feature maps and obtain the final output of this block: Truncated ReLU (tReLU). The pDoG block keeps all the local extrema in the DoG space, while many local extrema are unstable because of small noise and even contrast changes. SIFT adopts a local structure fitting procedure to reject those unstable local extrema. To realize similar effect, we propose truncated ReLU (tReLU) to suppress non-robust local extrema. The basic idea is to truncate small extrema which correspond to noise and non-stable extrema in the pDoG space. This can be implemented by modifying the commonly used ReLU function as where θ is a learnable truncated parameter. Note that this function is discontinued at x = θ and x = −θ. We make a small modification to obtain a continuous version for easy training as below Figure 2(b) plots the tReLU function. Different from the original ReLU, tReLU introduces a threshold parameter θ and keeps elements with higher magnitude. θ can be either a block-level parameter (each block has one global threshold) or a channel-level parameter (each channel holds a separate threshold). By default, we take θ as a block-level parameter. tReLU is combined with pDoG not only to suppress non-robust extrema, but also to simplify the operations. When combining Equation 8 and Equation 9 together, there is nested maxout operation which satisfies commutative law, so that we could rewrite z 0 and z 1 as where | · | is element-wise absolute operation. With tReLU to suppress non-robust features, we have Hence, in practice, we use Equation 13 instead of Equation 8 to compute z 0 and z 1. Note that tReLU does improve robustness and accuracy for pDoG feature maps, while providing no benefits when replacing ReLU in standard CNNs according to our experiments (see Table 1). Projected Normalization Layer (PNL). SIFT computes gradient orientation histogram followed by L2 normalization to obtain final feature representation. This process does not take gradient pixel relationship into account. PCA-SIFT handles this issue by projecting each local gradient patch into a pre-computed eigen-space using PCA. We borrow the idea from PCA-SIFT to build projected normalization layer (PNL) to replace global average pooling (GAP) based feature generation in existing CNNs. Suppose the feature-map data matrix before GAP is X ∈ R d×c, where d = w × h corresponds to feature map resolution, and c is the number of channels, we obtain column vectors {x i ∈ R c} d i=1 from X to represent the i-th pixel values from all channels. The PNL contains three steps: We add a 1 × 1 conv-layer, which can be viewed as a PCA with learnable projection matrix W ∈ R c×p. The output is u i = W T x i, where u i ∈ R p further forms a data matrix U ∈ R d×p. We compute L2 norm for row vectors To eliminate contrast or scale impact, we normalize v to obtainṽ = v/ v p, while · p means the p norm. the normalized vectorṽ is fed into classification layer for prediction purpose. It is interesting to note that PNL actually computes a second order pooling similar as . 
Suppose w j ∈ R c is the j-th row of W, v j in step-2 can be rewritten as where is an auto-correlation matrix. Figure 2 (c) illustrates the PNL layer. Theoretically, GAP produces a hyper-cube, while PNL produces a hyper-ball. This is beneficial for robustness since hyper-ball is more smooth, and a smoothed surface is proven more robust . Our experiments also verify this point (see Table 1). With these three novel components, we can derive a novel convolution block named EVPConv, and the corresponding networks EVPNet. In details, EVPConv starts from the pDoG component, and replaces Equation 8 with tReLU as in Equation 13. In SIFT, the contribution of each pixel is weighted by the gradient magnitude. This idea can be extended to calibrate contributions from each feature-map channel. Fortunately, Squeeze-and-Excitation (SE) module proposed in provides the desired capability. We thus insert the SE block after tReLU, and compute the output of EVPConv as: where concat(·) means concatenating z 0 and z 1 together for a unified and unbiased calibration, SE(·) is the SE module, s 0 and s 1 are the calibration corresponding to z 0 and z 1, and max denotes an element-wise maximum operation. Figure 2 (a) illustrates the overall structure of EVPConv. EVPConv can be plugged to replace any k × k (k >1) conv-layers in existing CNNs, while the PNL layer can be plugged to replace the GAP layer for feature abstraction. The network consisting of both EVPConv block and the PNL layer is named as EVPNet. The EVPConv block introduces very few additional parameters: w for shared depth-wise convolution, θ for tReLU and parameters for SE module. Note that we allow each EVPConv block having its own w and θ. EVPConv brings relatively fewer additional parameters, which is about 7∼20% (see Appendix A) (smaller models more relative increasing). It also increases theoretic computing cost 3∼10% for a bunch of parameterfree operations like DoG and maxout. However, the added computing cost is non-negligible in practice (2× slower according to our training experiments) due to more memory cost for additional copy of feature-maps. Near memory computing architecture may provide efficient support for this new computing paradigm. Experimental Setup. We evaluate the proposed network components and EVPNet on CIFAR10 and SVHN datasets. CIFAR-10 is a widely used image classification dataset containing 60, 000 images of size 32×32 with 50, 000 for training and 10,000 for testing. SVHN is a digit recognition dataset containing 73,257 training images, 26,032 test images, all with size 32×32. We introduce our novel components into the well-known and widely used ResNet, and compare to the basic model on both clean accuracy and adversarial robustness. As the EVPConv block contains a SE module, to make a fair comparison, we set SE-ResNet as our comparison target. In details, we replace the input conv-layer and the first 3 × 3 conv-layer in the residual block with EVPConv, and replace the GAP layer with the proposed PNL layer. Following , for CIFAR-10, all the networks are trained with SGD using momentum 0.9, 160 epochs in total. The initial learning rate is 0.1, divided by 10 at 80 and 120 epochs. For SVHN, we use the same network architecture as CIFAR-10. The models are trained for 80 epochs, with initial learning rate 0.1, divided by 10 at 40 and 60 epochs. For tReLU, the channel-level parameter θ is initialized by uniformly sampling from. In this work, we consider adversarial perturbations constrained under l ∞ norm. 
The allowed perturbation norm is 8 pixels . We evaluate non-targeted attack adversarial robustness in three settings: normal training, FGSM adversarial training and PGD adversarial training . During adversarial training, we use the predicted label to generate adversarial examples to prevent label leaking effect (b). To avoid gradient masking , we use R-FGSM for FGSM adversarial training, which basically starts from a random point in the ball. , during training, PGD attacks generate adversarial examples by 7 PGD iterations with 2-pixel step size starting from random points in the allowed ball. We report accuracy on both whitebox and blackbox attack. We evaluate a set of well-known whitebox attacks, including FGSM, PGD, DeepFool, CW. We use'PGD-N' to denote attack with N PGD iterations of step size 2 pixels by default. Specifically, we compare for PGD-10 and PGD-40. For blackbox attack, we choose VGG-16 as the source model which is found by to exhibit high adversarial transferability, and choose FGSM as the method to generate adversarial examples from VGG-16 as it is shown to lead to better transferability. Ablation Study. This part conducts a thorough ablation study to show the effectiveness of each novel architecture component and how they interact to provide strong adversarial robustness. We conduct experiments on CIFAR-10 with SE-ResNet-20, which contains one input conv-layer, three residual stage each with two bottleneck residual blocks, and GAP layer followed by a classification layer. We evaluate the accuracy and robustness for all the possible combinations of the proposed three components under the normal training setting. For PGD attack, we use two step sizes: 1 pixel as in and 2 pixels as in. Table 1 lists full evaluation of each tested model. Several observations can be made from the table: Solely adding pDoG or PNL layer leads to significant robustness improvement. pDoG even yields clean accuracy improvement, while PNL yields slightly clean accuracy drops. tReLU does not bring benefit for standard convolution, while yields notable improvement on both clean accuracy and adversarial accuracy, when combining with pDoG. That verifies our previous claim that tReLU is suitable to work for the DoG space. Combining all the three components together obtains the best adversarial robustness, while still achieve 1.2% clean accuracy improvement over the model without these three components. Based on these observations, we incorporate all the three components into the CNNs to obtain the so-called EVPNet for the following experiments if not explicitly specified. As 2-pixels PGD attack is much stronger than 1-pixel PGD attack, we use it as default in the following studies. Benchmark Results. We conduct extensive experiments on CIFAR-10 and SVHN to compare the proposed EVPNet with the source networks. The two sources networks are For fair comparison, we use the SE extended ResNet as our baseline. Table 2 lists comprehensive comparison on CIFAR-10. We list 7 different kinds of accuracies: clean model accuracy, whilebox attack accuracies by FGSM/PGD-10/PGD-40/DeepFool/CW, and blackbox attack accuracy with adversarial examples generated by FGSM on the VGG-16 model. We can see that under normal training case, EVPNet outperforms baseline by a large margin in terms of robustness with FGSM, PGD, DeepFool, and CW attacks. Even under the strongest PGD-40 white box attack, our EVPNet still has non-zero accuracy without any adversarial training. 
For those cases with adversarial training, our EVPNet consistently beats baseline networks with noticeable margin. training performs worse on PGD-10/PGD-40 attacks than FGSM adversarial training, and even much worse than normal training on this dataset. This may be due to the fact that SVHN is a shape/edge dominating digit recognition dataset, which may generate a lot of difficult adversarial samples with broken edges. And it also coincides with the finding by. Our EVPNet shows better robustness on this dataset without adversarial training than CIFAR-10, which may suggest that EVPNet is more robust on shape/edge dominating object instances. All these evidences prove that the proposed EVPNet is a robust network architecture. Analysis. We make some further analysis to compare EVPNet to baseline networks. First, we plot the test error at different PGD iterations for different evaluated networks under normal training case on both CIFAR-10 and SVHN datasets as shown in Figure 3. It can be seen that EVPNet consistently performs significantly better than the corresponding baseline networks under all PGD iterations. Some may concern that the accuracy of EVPNet on the strongest PGD-40 attack is not satisfied (∼ 10%). We argue that from three aspects: The adversarial is remarkable as it is by the clean model without using any other tricks like adversarial training, adversarial loss, etc. The proposed components also brings consistent clean accuracy improvement even on large-scale dataset (see Appendix A). More importantly, the methodology we developed may shed some light on future studies in network robustness and network architecture design/search. Second, we further investigate the error amplification effect as. Specifically, we feed both benign examples and adversarial examples into the evaluated networks, and compute the normalized L 2 distance for each res-block outputs as γ = x − x 2 / x 2, where x is the response vector of benign example, and x is the response vector for the adversarial example. We randomly sample 64 images from the test set to generate adversarial examples using PGD-40. The models evaluated are trained without adversarial training. Figure 4 illustrates the . As we can see, EVPNet has much lower average normalized distance than the baseline models almost on all the blocks. It is interesting to see that the baseline models have a big jump for the normalized distance at the end of the networks on all the 4 sub-figures. This urges the adversarial learning researchers to make further investigation on the robustness especially for latter layers around GAP. Nevertheless, this analysis demonstrates that EVPNet significantly reduces the error amplification effect. Third, we compare the differences on feature responses between regular convolution + ReLU and EVPConv. This comparison is made on large-scale and relative high resolution (224 × 224) ImageNet dataset for better illustration. We train ResNet-50 and EVPNet-50 on ImageNet, and visualize their prediction responses for the first corresponding convolution block in Figure 1. It clearly shows that ResNet-50 has more noise responses, while EVPNet-50 gives more responses on object boundary. This demonstrates the capability of EVPConv to separate robust features from non-robust ones. Full benchmark on ImageNet are also very promising, see Appendix A for more details. 
This paper mimics good properties of robust visual feature SIFT to renovate CNN architectures with some novel architecture components, and proposes the extreme value preserving networks (EVPNet). Experiments demonstrate that EVPNets can achieve similar or better accuracy over conventional CNNs, while achieving much better robustness to a set of adversarial attacks (FGSM, PGD, etc) even for clean model without any other tricks like adversarial training. top-1 accuracy to near zero, while the EVP-ResNet variants keep 6∼10% top-1 accuracy. The gap in FGSM attacks is even larger. This improvement is remarkable considering that it is by clean model without adversarial training. For the MobileNet case, we also observe notable accuracy and robustness improvement. Please refer to Table 4 for more details. In summary, our solid and attempts may inspire future new ways for robust network architecture design or even automatic search. In the main paper, we demonstrate the great robustness of the proposed components on ResNet with bottleneck residual blocks. Here, we extend the proposed components to other state-of-the-art network architectures, and choose Wide-ResNet (Zagoruyko & komodakis, 2017) as an example since it is mostly studied on other adversarial training works (; ;). Wide-ResNet (WRN) has two successive wide-channel 3 × 3 conv-layers in residual block instead of the three conv-layer bottleneck structure based residual block. We use WRN-22-8 as the baseline network with depth 22 and widening factor 8. It is obvious that WRN-22-8 has much better clean accuracy than ResNet-20 and ResNet-56 used in the main paper. For EVPNet, We replace input conv-layer and the first 3 × 3 conv-layer in each wide residual block with EVPConv, and replace the GAP layer with our PNL layer. Table 5 shows the comparison on normal training case. We can see that EVPNet achieves similar clean accuracy, while performing significantly better on adversarial attacks with FGSM/PGD-10/PGD-40. Note that under PGD-10 and PGD-40 attacks, the baseline model drops accuracy to near 0, while the EVPNet remains a much higher accuracy, considering that no adversarial training is utilized in this study. This demonstrates the strong robustness of the proposed EVPNet. | This paper aims to leverage good properties of robust visual features like SIFT to renovate CNN architectures towards better accuracy and robustness. | 1,075 | scitldr |
Compressed representations generalize better, which may be crucial when learning from limited or noisy labeled data. The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X; Z) − βI(Y; Z) employs a Lagrange multiplier β to tune this trade-off. However, there is little theoretical guidance for how to select β. There is also a lack of theoretical understanding about the relationship between β, the dataset, model capacity, and learnability. In this work, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective. We show how this can be avoided, by identifying a sharp phase transition between the unlearnable and the learnable which arises as β varies. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, providing theoretical guidance for selecting β. We further show that IB-learnability is determined by the largest confident, typical, and imbalanced subset of the training examples. We give a practical algorithm to estimate the minimum β for a given dataset. We test our theoretical results on synthetic datasets, MNIST, and CIFAR10 with noisy labels, and make the surprising observation that accuracy may be non-monotonic in β.
Compressed representations generalize better, which is likely to be particularly important when learning from limited or noisy labels, as otherwise we should expect our models to overfit to the noise. The Information Bottleneck (IB) objective function was introduced to learn a representation Z of observed variables (X, Y) that retains as little information about X as possible, but simultaneously captures as much information about Y as possible: min IB_β(X, Y; Z) = min [I(X; Z) − βI(Y; Z)]. Here I(X; Y) = ∫dx dy p(x, y) log [p(x, y)/(p(x)p(y))] is the mutual information. The hyperparameter β controls the trade-off between compression and prediction, in the same spirit as Rate-Distortion Theory, but with a learned representation function P(Z|X) that automatically captures some part of the "semantically meaningful" information, where the semantics are determined by the observed relationship between X and Y. The IB framework has been extended to and extensively studied in a variety of scenarios, including Gaussian variables (BID6), meta-Gaussians (Rey & Roth (2012)), continuous variables via variational methods (BID3; BID5; BID8), deterministic scenarios (Strouse & Schwab (2017a); BID12), geometric clustering (Strouse & Schwab (2017b)), and is used for learning invariant and disentangled representations in deep neural nets (BID0 a;b). However, a core issue remains: how should we select β? In the original work, the authors recommend sweeping β > 1, which can be prohibitively expensive in practice, but also leaves open interesting theoretical questions around the relationship between β, P(Z|X), and the observed data, P(X, Y). For example, under how much label noise will IB at a given β still be able to learn a useful representation? This work begins to answer some of those questions by characterizing the onset of learning.
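As a concrete reference point, the IB objective above can be evaluated exactly for a small discrete problem with a tabular encoder p(z|x). The sketch below is ours and is not part of the original formulation; the toy distribution, dimensions, and function names are illustrative assumptions.

```python
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in nats for a joint distribution given as a 2-D array p_ab[a, b]."""
    pa = p_ab.sum(axis=1, keepdims=True)
    pb = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0  # convention: 0 * log 0 = 0
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (pa @ pb)[mask])))

def ib_objective(p_xy, p_z_given_x, beta):
    """IB_beta[p(z|x)] = I(X;Z) - beta * I(Y;Z) for a tabular encoder p_z_given_x[x, z]."""
    px = p_xy.sum(axis=1)
    p_xz = px[:, None] * p_z_given_x   # p(x, z) = p(x) p(z|x)
    p_yz = p_xy.T @ p_z_given_x        # p(y, z) = sum_x p(x, y) p(z|x), using the Markov chain Z <- X <-> Y
    return mutual_information(p_xz) - beta * mutual_information(p_yz)

# Toy example: 4 inputs, 2 labels, a 2-state representation Z, and a random encoder.
rng = np.random.default_rng(0)
p_xy = rng.random((4, 2)); p_xy /= p_xy.sum()
enc = rng.random((4, 2)); enc /= enc.sum(axis=1, keepdims=True)
for beta in (0.5, 1.0, 2.0, 5.0):
    print(f"beta = {beta}: IB value = {ib_objective(p_xy, enc, beta):+.4f}")
```

With the objective written out in this concrete form, the contributions below can be read as statements about when its global minimum departs from the trivial encoder p(z|x) = p(z).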
Specifically:
• We show that improperly chosen β may result in a failure to learn: the trivial solution P(Z|X) = P(Z) becomes the global minimum of the IB objective, even for β ≫ 1.
• We introduce the concept of IB-Learnability, and show that when we vary β, the IB objective will undergo a phase transition from the inability to learn to the ability to learn.
• Using the second-order variation, we derive sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β.
• We show that IB-learnability is determined by the largest confident, typical, and imbalanced subset of the training examples, reveal its relationship with the slope of the Pareto frontier at the origin on the information plane I(Y; Z) vs. I(X; Z), and discuss its relation with model capacity.
We use our main results to demonstrate on synthetic datasets, MNIST, and CIFAR10 (BID13) under noisy labels that the theoretical prediction for IB-Learnability closely matches experiment. We present an algorithm for estimating the onset of IB-Learnability, and demonstrate that it does a good job of estimating the theoretical predictions and the empirical results. Finally, we observe discontinuities in the Pareto frontier of the information plane as β increases, and those discontinuities correspond to accuracy decreasing as β increases.
We are given instances of (x, y) ∈ X × Y drawn from a distribution with probability (density) P(X, Y), where unless otherwise stated, both X and Y can be discrete or continuous variables. (X, Y) is our training data, and may be characterized by different types of noise. We can learn a representation Z of X with conditional probability p(z|x), such that X, Y, Z obey the Markov chain Z ← X ↔ Y. The equation above gives the IB objective with Lagrange multiplier β, IB_β(X, Y; Z), which is a functional of p(z|x): IB_β(X, Y; Z) = IB_β[p(z|x)]. The IB learning task is to find a conditional probability p(z|x) that minimizes IB_β(X, Y; Z). The larger β, the more the objective favors making a good prediction for Y. Conversely, the smaller β, the more the objective favors learning a concise representation.
How can we select β such that the IB objective learns a useful representation? In practice, the selection of β is done empirically; the original IB work recommends "sweeping β". In this section, we provide theoretical guidance for choosing β by introducing the concept of IB-Learnability and providing a series of IB-learnable conditions.
Definition 1 (IB_β-Learnability). (X, Y) is IB_β-learnable if there exists a Z given by some p₁(z|x), such that IB_β(X, Y; Z) < IB_β(X, Y; Z_trivial), where p(z|x) = p(z) characterizes the trivial representation such that Z = Z_trivial is independent of X.
If (X, Y) is IB_β-learnable, then when IB_β(X, Y; Z) is globally minimized, it will not learn a trivial representation. If (X, Y) is not IB_β-learnable, then when IB_β(X, Y; Z) is globally minimized, it may learn a trivial representation.
Necessary condition for IB-Learnability. From Definition 1, we can see that IB_β-Learnability for any dataset (X, Y) requires β > 1. In fact, from the Markov chain Z ← X ↔ Y, we have I(Y; Z) ≤ I(X; Z) via the data-processing inequality. If β ≤ 1, then I(X; Z) − βI(Y; Z) ≥ I(X; Z) − I(Y; Z) ≥ 0, and since both mutual informations are non-negative, the minimum value 0 is attained by the trivial representation: min(I(X; Z) − βI(Y; Z)) = 0 = IB_β(X, Y; Z_trivial). Hence (X, Y) is not IB_β-learnable for β ≤ 1.
Theorem 1 characterizes the IB_β-Learnability range for β (see Appendix B for the proof):
Theorem 1. If (X, Y) is IB_{β1}-learnable, then for any β₂ > β₁, it is IB_{β2}-learnable.
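To make Definition 1 and the phase transition of Theorem 1 concrete, the following sketch (ours; the noisy-label toy distribution, the candidate-encoder families, and all names are illustrative assumptions) checks, for several values of β, whether any candidate encoder attains an IB value below the trivial value 0. A heuristic search of this kind can certify learnability when it finds a negative value, but failing to find one is not a proof of non-learnability.

```python
import numpy as np

def mi(p_ab):
    pa = p_ab.sum(1, keepdims=True); pb = p_ab.sum(0, keepdims=True)
    m = p_ab > 0
    return float(np.sum(p_ab[m] * np.log(p_ab[m] / (pa @ pb)[m])))

def ib_value(p_xy, enc, beta):            # enc[x, z] = p(z|x)
    px = p_xy.sum(1)
    return mi(px[:, None] * enc) - beta * mi(p_xy.T @ enc)

# Toy p(x, y): two clusters of x, binary y observed through class-conditional noise rho.
rho = 0.2
p_xy = np.array([[1 - rho, rho], [1 - rho, rho],
                 [rho, 1 - rho], [rho, 1 - rho]]) / 4.0

# Candidate encoders: interpolations between the trivial encoder and a deterministic
# "cluster id" encoder, plus random soft encoders.
trivial = np.full((4, 2), 0.5)
cluster = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
rng = np.random.default_rng(0)
candidates = [(1 - t) * trivial + t * cluster for t in np.linspace(0.01, 1.0, 100)]
for _ in range(1000):
    e = rng.random((4, 2))
    candidates.append(e / e.sum(1, keepdims=True))

for beta in (1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0):
    best = min(ib_value(p_xy, enc, beta) for enc in candidates)
    verdict = "beats the trivial value 0" if best < -1e-9 else "no candidate beats trivial"
    print(f"beta = {beta:3.1f}   min IB over candidates = {best:+.5f}   ({verdict})")
```

In this toy setting, the β at which the best value first turns negative is expected to move upward as the label noise ρ increases, which is the behavior analyzed quantitatively below.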
Based on Theorem 1, the range of β such that (X, Y) is IB_β-learnable has the form β ∈ (β₀, +∞). Thus, β₀ is the threshold of IB-Learnability. Furthermore, the trivial representation is a stationary solution for the IB objective:
Lemma 1.1. p(z|x) = p(z) is a stationary solution for IB_β(X, Y; Z).
The proof in Appendix E shows that the first-order variation δIB_β[p(z|x)] vanishes at the trivial representation. Lemma 1.1 yields our strategy for finding sufficient conditions for learnability: find conditions such that p(z|x) = p(z) is not a local minimum for the functional IB_β[p(z|x)]. By requiring that the second-order variation δ²IB_β[p(z|x)] < 0 at the trivial representation (Suff. Cond. 1, Appendix C), and constructing a special form of perturbation at the trivial representation (Suff. Cond. 2, Appendix F), we arrive at the key result of this paper (see Appendix G for the proof):
Theorem 2 (Confident Subset Suff. Cond.). A sufficient condition for (X, Y) to be IB_β-learnable is that X and Y are not independent, and DISPLAYFORM1 where Ω_x denotes the event that x ∈ Ω_x, with probability p(Ω_x). Moreover, (inf_{Ω_x ⊂ X} β₀(Ω_x))^{−1} gives a lower bound on the slope of the Pareto frontier at the origin of the information plane I(Y; Z) vs. I(X; Z).
Characteristics of dataset leading to low β₀. From the expression for β₀(Ω_x), we see that three characteristics of the subset Ω_x ⊂ X lead to low β₀: (1) confidence: p(y|Ω_x) is large; (2) typicality and size: the number of elements in Ω_x is large, or the elements in Ω_x are typical, leading to a large probability p(Ω_x); (3) imbalance: p(y) is small for the subset Ω_x, but large for its complement. In summary, β₀ will be determined by the largest confident, typical and imbalanced subset of examples, or an equilibrium of those characteristics.
Theorem 2 immediately leads to two important corollaries under special problem structures: classification with class-conditional noisy labels (BID4) and deterministic mappings.
Corollary 2.1. Suppose that the true class labels are y*, and the input space belonging to each y* has no overlap. We only observe the corrupted labels y with class-conditional noise p(y|x, y*) = p(y|y*) = p(y). Then a sufficient condition for IB_β-Learnability is: DISPLAYFORM2
Corollary 2.2. For classification problems, if Y is a deterministic function of X and not independent of X, then a necessary and sufficient condition for IB_β-Learnability is β > β₀ = 1.
Therefore, if we find that β₀ > 1 for a classification task, we may infer that Y is not a deterministic function of X, i.e. either some classes have overlap, or the labels are noisy. However, finite models may add effective class overlap if they have insufficient capacity for the learning task. This may translate into a higher observed β₀, even when learning deterministic functions. Proofs are provided in Appendix H.
Based on Theorem 2, for general classification tasks we suggest Algorithm 1 in Appendix J to empirically estimate an upper bound β̂₀ ≥ β₀. Here, we give the intuition behind the algorithm. First, we train a single maximum likelihood model on the dataset. That model provides estimates for all p(y|x) in the training set. Since learnability is defined with respect to the training data, it is correct to directly use the empirical probabilities p(x) and p(y) in the training data. Given p(x), p(y), and p(y|x), and the understanding that we are searching for a confident subset Ω_x, we can then perform an efficient targeted search of the exponential space of subsets of the training data. The algorithm returns the lowest estimate of β̂₀ found during that process. After estimating β̂₀, we can then use it for learning with IB, either directly, or as an anchor for a region where we can perform a much smaller sweep than we otherwise would have. This may be particularly important for very noisy datasets, where β₀ can be very large.
To test our theoretical results and Alg. 1, we perform experiments on synthetic datasets, MNIST, and CIFAR10. Additional experiment details are provided in Appendix K.
Synthetic datasets. We generate a set of synthetic datasets with varying class-conditional noise rates. FIG0 shows the results of sweeping β to find the empirical onset of learning, and compares that onset to the predicted onset given by Corollary 2.1. Clearly the estimate provides a tight upper bound in this simple setting. Also note that β₀ grows exponentially as the label noise increases, underscoring that improperly-chosen β may result in an inability to learn useful representations, and the importance of theoretically-guided β selection as opposed to sweeping β in general.
MNIST. We perform binary classification with digits 0 and 1, but again add class-conditional noise to the labels with varying noise rates ρ. To explore how the model capacity influences the onset of learning, for each dataset we train two sets of Variational Information Bottleneck (VIB) models differing only by the number of neurons in the hidden layers of the encoder: one with n = 128 neurons, the other with n = 512 neurons. Insufficient capacity will result in more uncertainty of Y given X from the point of view of the model, so we expect β₀,observed for the n = 128 model to be larger. FIG0 confirms this prediction. It also shows β₀,estimated and β₀,predicted, given by Algorithm 1 and Corollary 2.1, respectively. We see that Algorithm 1 does a good job estimating the onset of learning for the large-capacity model, and that the estimates line up well with the theoretical predictions.
CIFAR10 forgetting. For CIFAR10 (BID13), we study how forgetting varies with β. In other words, given a VIB model trained at some high β₂, if we anneal it down to some much lower β₁, what accuracy does the model converge to? We estimated β₀ = 1.0483 on a version of CIFAR10 with 20% label noise using Alg. 1. The lowest β with performance above chance was β = 1.048. See Appendix K.1 for experiment details. As can be seen in FIG0, there are large discontinuities in the Pareto frontier, even though we vary β in very small increments. Those discontinuities start at points on the Pareto frontier where many values of β yield essentially the same I(X; Z) and I(Y; Z), and end when β crosses apparent phase transitions that give large increases in both I(X; Z) and I(Y; Z) (marked with red arrows). FIG0 shows that the lowest value of β in each such region tends to have the highest accuracy. A primary empirical result of our work is the following: some datasets have non-monotonic performance in regions where multiple values of β cluster together. This surprising behavior is important to check for when training IB models. More thorough study is needed, but based on our initial results, we may expect that reducing β to the minimal value that achieves a particular point on the information plane yields better representations. The phenomenon of discontinuities is also observed in prediction error vs. information in the model parameters (BID0; BID2), and in I(c; X) vs. H(c) (c denotes clusters) in geometric clustering (Strouse & Schwab (2017b)).
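The VIB models referred to above are not specified in detail here. The following PyTorch sketch (ours, not the authors' implementation; the architecture, the standard-normal variational marginal, and all hyperparameters are assumptions) shows one way to write a VIB training objective that matches this paper's convention IB_β = I(X; Z) − βI(Y; Z), so that sweeping or annealing β in such a model corresponds directly to the experiments described.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    """Minimal variational IB model: stochastic encoder q(z|x) and classifier q(y|z)."""
    def __init__(self, in_dim, z_dim, n_classes, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * z_dim))
        self.classifier = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.classifier(z), mu, logvar

def vib_loss(logits, y, mu, logvar, beta):
    """Follows the convention IB_beta = I(X;Z) - beta * I(Y;Z): a variational upper
    bound on I(X;Z) (KL to a standard-normal marginal) minus beta times a variational
    lower bound on I(Y;Z) (up to the constant H(Y))."""
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
    log_lik = -F.cross_entropy(logits, y)   # E[log q(y|z)]
    return kl - beta * log_lik

# Usage sketch: model = VIB(in_dim=784, z_dim=32, n_classes=2)
#               logits, mu, lv = model(x); vib_loss(logits, y, mu, lv, beta).backward()
```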
Although these discontinuities (including ours) are observed via different axes, we conjecture that they may all have a shared root cause, which is an interesting topic for future research. In this paper, we have presented theoretical for predicting the onset of learning, and have shown that it is determined by the largest confident, typical and imbalanced subset of the examples. We gave a practical algorithm for predicting the transition, and showed that those predictions are accurate, even in cases of extreme label noise. We have also observed a surprising non-monotonic relationship between β and accuracy, and shown its relationship to discontinuities in the Pareto frontier of the information plane. We believe these will provide theoretical and practical guidance for choosing β in the IB framework for balancing prediction and compression. Our work also raises other questions, such as whether there are other phase transitions in learnability that might be identified. We hope to address some of those questions in future work. Mélanie Rey and Volker Roth. Meta-gaussian information bottleneck. In Advances in Neural Information Processing Systems, pp. 1916 Systems, pp. -1924 Systems, pp., 2012.Ohad Shamir, Sivan Sabato, and Naftali Tishby. Learning and generalization with the information bottleneck. The structure of the Appendix is as follows. In Appendix A, we provide preliminaries for the first-order and secondorder variations on functionals. Then we prove Theorem 1 in Appendix B. In Appendix C, we state and prove Sufficient Condition 1 for IB β -learnability. In Appendix D, we calculate the first and second variations of IB β [p(z|x)] at the trivial representation p(z|x) = p(z), which is used in proving the Sufficient Condition 2 IB β -learnability (Appendix F). After these preparations, we prove the key of this paper, Theorem 2, in Appendix G. Then two important corollaries 2.1, 2.2 are proved in Appendix H. We provide additional discussions and insights for Theorem 2 in Appendix I, and Algorithm 1 for estimation of an upper boundβ 0 ≥ β 0 in Appendix J. Finally in Appendix K, we provide details for the experiments. Let functional F [f (x)] be defined on some normed linear space R. Let us add a perturbative function h(x) to f (x), and now the functional F [f (x) + h(x)] can be expanded as DISPLAYFORM0 where ||h|| denotes the norm of h, DISPLAYFORM1 is a linear functional of h(x), and is called the first-order DISPLAYFORM2 is a quadratic functional of h(x), and is called the second- DISPLAYFORM3 B PROOF OF THEOREM 1Proof. At the trivial representation p(z|x) = p(z), we have I(X; Z) = 0, and I(Y ; Z) = 0 due to the Markov chain, so DISPLAYFORM4 C SUFFICIENT CONDITION 1 AND PROOFIn this section, we prove the Sufficient Condition 1 for IB β -learnability, which will lay the foundation for the Sufficient condition 2 (Appendix F) and the Confident Subset Sufficient condition (key of this paper, Theorem 2) that follow. Theorem 3 (Suff. Cond. 1). A sufficient condition for (X, Y) to be IB β -learnable is that there exists a perturbation function h(z|x) with 3 h(z|x)dz = 0, such that the second-order variation DISPLAYFORM5 Proof. To prove Theorem 3, we use the Theorem 1 of Chapter 5 of BID9 which gives a necessary condition for F [f (x)] to have a minimum at f 0 (x). Adapting to our notation, we have:Theorem 4 (Gelfand et al. FORMULA2). 
A necessary condition for the functional F [f (x)] to have a minimum at f (x) = f 0 (x) is that for f (x) = f 0 (x) and all admissible h(x), DISPLAYFORM6 DISPLAYFORM7, let us calculate the first and second-order variation of I(X; Z) and I(Y ; Z) w.r.t. p(z|x), respectively. Through this derivation, we use h(z|x) as a perturbative function, for ease of deciding different orders of variations. We will finally absorb into h(z|x). DISPLAYFORM8 We have DISPLAYFORM9 Expanding F 1 [p(z|x) + h(z|x)] to the second order of, we have DISPLAYFORM10 Collecting the first order terms of, we have DISPLAYFORM11 Collecting the second order terms of 2, we have DISPLAYFORM12 Now let us calculate the first and second-order variation of F 2 [p(z|x)] = I(Z; Y). We have DISPLAYFORM13 Using the Markov chain Z ← X ↔ Y, we have DISPLAYFORM14 Then expanding F 2 [p(z|x) + h(z|x)] to the second order of, we have DISPLAYFORM15 Collecting the first order terms of, we have DISPLAYFORM16 Collecting the second order terms of, we have DISPLAYFORM17 Finally, we have DISPLAYFORM18 DISPLAYFORM19 Absorb into h(z|x), we get rid of the factor and obtain the final expression in Lemma 4.1.E PROOF OF LEMMA 1.1Proof. Using Lemma 4.1, we have DISPLAYFORM20 Let p(z|x) = p(z) (the trivial representation), we have that log p(z|x) p(z) ≡ 0. Therefore, the two integrals are both 0. Hence, DISPLAYFORM21 F SUFFICIENT CONDITION 2 AND PROOF Theorem 5 (Suff. Cond. 2). A sufficient condition for (X, Y) to be IB β -learnable is X and Y are not independent, and β > inf DISPLAYFORM0 where the functional β 0 [h(x)] is given by DISPLAYFORM1 Moreover, we have that inf h(x) β[h(x)] −1 is a lower bound of the slope of the Pareto frontier in the information plane I(Y ; Z) vs. I(X; Z) at the origin. The proof is given in Appendix F, which also gives a construction for h(z|x) for Theorem 3 for any h(x) satisfying Theorem 5, and shows that the converse is also true: if there exists h(z|x) suth that the condition in Theorem 3 is true, then we can find h(x) satisfying the the condition in Theorem 5.The geometric meaning of (β 0 [h(x)]) −1 is as follows. It equals DISPLAYFORM2 under a perturbation function of the form h 1 (z|x) = h(x)h 2 (z) (satisfying h 2 (z)dz = 0 and δ 2 I(X;Z), which turns out to be equal to DISPLAYFORM3 under the class of perturbation functions h 1 (z|x) = h 2 (z)h(x), and provides a lower bound of sup h(z|x) DISPLAYFORM4, which is the slope of the Pareto frontier in the information plane I(Y ; Z) vs. I(X; Z) at the origin. Theorem 5 in essence states that as long as β −1 is lower than this lower bound of the slope of the Pareto frontier, (X; Y) is IB β -learnable. From Theorem 5, we see that it still has an infimum over an arbitrary function h(x), which is not easy to estimate. To get rid of h(x), we can use a specific functional form for h(x) in Eq. FORMULA26, and obtain a stronger sufficient condition for IB β -Learnability. But we want to choose h(x) as near to the infimum as possible. To do this, we note the following characteristics for the R.H.S of Eq.:• We can set h(x) to be nonzero if x ∈ Ω x for some region Ω x ⊂ X and 0 otherwise. Then we obtain the following sufficient condition: DISPLAYFORM5 • The numerator of the R.H.S. of Eq. attains its minimum when h(x) is a constant within Ω x. This can be proved using the Cauchy-Schwarz inequality: DISPLAYFORM6, and defining the inner product as u, v = u(x)v(x)dx. Therefore, the numerator of the R.H.S. of Eq. 
≥ − 1, and attains equality when DISPLAYFORM7 Based on these observations, we can let h(x) be a nonzero constant inside some region Ω x ⊂ X and 0 otherwise, and the infimum over an arbitrary function h(x) is simplified to infimum over Ω x ⊂ X, and we obtain the confident subset sufficient condition (Theorem 2) for IB β -Learnability, which is a key of this paper. Proof. Firstly, from the necessary condition of β > 1 in Section 2, we have that any sufficient condition for IB β -learnability should be able to deduce β > 1. Now using Theorem 3, a sufficient condition for (X, Y) to be IB β -learnable is that there exists h(z|x) with h(z|x)dx = 0 such that DISPLAYFORM0 At the trivial representation, p(z|x) = p(z) and hence p(x, z) = p(x)p(z). Due to the Markov chain Z ← X ↔ Y, we have p(y, z) = p(y)p(z). Substituting them into the δ 2 IB β [p(z|x)] in Lemma 4.1, the condition becomes: there exists h(z|x) with h(z|x)dz = 0, such that DISPLAYFORM1 Rearranging terms and simplifying, we have DISPLAYFORM2 where DISPLAYFORM3 Now we prove that the condition that ∃h(z|x) s.t. p(z) dz > 0, and let h 1 (z|x) = h(x)h 2 (z). Now we have DISPLAYFORM4 DISPLAYFORM5 In other words, the condition Eq. FORMULA35 is equivalent to requiring that there exists an h(x) such that G[h(x)] < 0. Hence, a sufficient condition for IB β -learnability is that there exists an h(x) such that DISPLAYFORM6 When h(x) = C = const in the entire input space X, Eq. becomes: DISPLAYFORM7 which cannot be true. Therefore, h(x) = const cannot satisfy Eq..Rearranging terms and simplifying, and note that dxh(x)p(x) 2 > 0 due to h(x) ≡ 0 = const, we have DISPLAYFORM8 For the R.H.S. of Eq., let us show that it is greater than 0. Using Cauchy-Schwarz inequality: u, u v, v ≥ u, v 2, and setting u(x) = h(x) p(x), v(x) = p(x), and defining the inner product as DISPLAYFORM9 It attains equality when DISPLAYFORM10 v(x) = h(x) is constant. Since h(x) cannot be constant, we have that the R.H.S. of Eq. is greater than 0.For the L.H.S. of Eq., due to the necessary condition that β > 0, if FORMULA42 cannot hold. Then the h(x) such that Eq. holds is for those that satisfies DISPLAYFORM11 DISPLAYFORM12 We see this constraint contains the requirement that h(x) ≡ const. Written in the form of expectations, we have DISPLAYFORM13 Since the square function is convex, using Jensen's inequality on the outer expectation on the L.H.S. of Eq., we have DISPLAYFORM14 The equality holds iff E x∼p(x|y) [h(x)] is constant w.r.t. y, i.e. Y is independent of X. Therefore, in order for Eq. FORMULA0 to hold, we require that Y is not independent of X.Using Jensen's inequality on the innter expectation on the L.H.S. of Eq., we have DISPLAYFORM15 The equality holds when h(x) is a constant. Since we require that h(x) is not a constant, we have that the equality cannot be reached. Under the constraint that Y is not independent of X, we can divide both sides of Eq. 8, and obtain the condition: there exists an h(x) such that DISPLAYFORM16 Written in the form of expectations, we have DISPLAYFORM17 We can absorb the constraint Eq. into the above formula, and get DISPLAYFORM18 where DISPLAYFORM19 which proves the condition of Theorem 5.Furthermore, from Eq. 
FORMULA0 we have DISPLAYFORM20 for h(x) ≡ const, which satisfies the necessary condition of β > 1 in Section 2.Proof of lower bound of slope of the Pareto frontier at the origin: Inflection point for general Z: If we do not assume that Z is at the origin of the information plane, but at some general stationary solution Z * with p(z|x), we define DISPLAYFORM21 DISPLAYFORM22 DISPLAYFORM23 It becomes a non-stable solution (non-minimum), and we will have other Z that achieves a better IB β (X, Y ; Z) than the current Z *.Multiple phase transitions To discuss multiple phase transitions, let us first obtain the β for stationary solution for the IB objective. At a stationary solution for IB β [p(z|x)], for valid perturbation h(z|x) satisfying dzh(z|x) = 0 for any x, we have δ IB β [p(z|x)] − dzdxλ(x)p(z|x) = 0 as a constraint optimization with λ(x) as Lagrangian multipliers. Using Eq., we have DISPLAYFORM24 Therefore, we have DISPLAYFORM25 The last equality is due to that the first equality is always true for any function h(z|x). So we can take out the dxdzh(z|x) factor. λ(x) is used for normalization of p(z|x). Eq. FORMULA0 is equivalent to the of the self-consistent equation in.Eq. FORMULA0 and Eq. provide us with an ideal tool to study multiple phase transitions. For each β, at the minimization of the IB objective, Eq. FORMULA0 is satisfied by some Z * that is at the Pareto frontier on the I(Y ; Z) vs. I(X; Z) plane. As we increase β, the inf h(x) β [h(x)] may remain stable for a wide range of β, until β is greater than inf h(x) β [h(x)], at which point we will have a phase transition where suddenly there is a better Z = Z * * that achieves much lower IB β (X, Y ; Z) value. For example, we can rewrite Eq. FORMULA0 as DISPLAYFORM26 whereλ DISPLAYFORM27 p(x). By substituting into Eq. FORMULA0, we may proceed and get useful . Proof. According to Theorem 5, a sufficient condition for (X, Y) to be IB β -learnable is that X and Y are not independent, and DISPLAYFORM0 We can assume a specific form of h(x), and obtain a (potentially stronger) sufficient condition. Specifically, we let DISPLAYFORM1 for certain Ω x ⊂ X. Substituting into Eq., we have that a sufficient condition for (X, Y) to be IB β -learnable is DISPLAYFORM2 where DISPLAYFORM3 The denominator of Eq. FORMULA0 is DISPLAYFORM4 Using the inequality x − 1 ≥ logx, we have DISPLAYFORM5 Both equalities hold iff p(y|Ω x) ≡ p(y), at which the denominator of Eq. FORMULA0 is equal to 0 and the expression inside the infimum diverge, which will not contribute to the infimum. Except this scenario, the denominator is greater than 0. Substituting into Eq. FORMULA0, we have that a sufficient condition for (X, Y) to be IB β -learnable is DISPLAYFORM6 Since Ω x is a subset of X, by the definition of h(x) in Eq. FORMULA0, h(x) is not a constant in the entire X. Hence the numerator of Eq. FORMULA0 is positive. Since its denominator is also positive, we can then neglect the "> 0", and obtain the condition in Theorem 2.Since the h(x) used in this theorem is a subset of the h(x) used in Theorem 5, the infimum for Eq. FORMULA2 is greater than or equal to the infimum in Eq.. Therefore, according to the second statement of Theorem 5, we have that the (inf Ωx β 0 (Ω x)) −1 is also a lower bound of the slope for the Pareto frontier of I(Y ; Z) vs. I(X; Z) curve. Now we prove that the condition Eq. FORMULA2 is invariant to invertible mappings of X. 
In fact, if X = g(X) is a uniquely invertible map (if X is continuous, g is additionally required to be continuous), let X = {g(x)|x ∈ Ω x }, and denote DISPLAYFORM7 Additionally we have X = g(X). Then DISPLAYFORM8 For dataset (X, Y) = (g(X), Y ), applying Theorem 2 we have that a sufficient condition for it to be IB β -learnable is DISPLAYFORM9 where the equality is due to Eq.. Comparing with the condition for IB β -learnability for (X, Y) (Eq. FORMULA2), we see that they are the same. Therefore, the condition given by Theorem 2 is invariant to invertible mapping of X. Proof. We use Theorem 2.Let Ω x contain all elements x whose true class is y * for some certain y *, and 0 otherwise. Then we obtain a (potentially stronger) sufficient condition. Since the probability p(y|y *, x) = p(y|y *) is classconditional, we have DISPLAYFORM10, we obtain a sufficient condition for IB β learnability. Proof. We again use Theorem 2. Since Y is a deterministic function of X, let Y = f (X). Since it is classification problem, Y contains at least one value y such that its probability p(y) > 0, we let Ω x contain only x such that f (x) = y. Substituting into Eq., we have DISPLAYFORM0 Therefore, the sufficient condition becomes β > 1.Furthermore, since a necessary condition for IB β -learnability is β > 1 (Section 2), we have that β > β 0 = 1 is a necessary and sufficient condition. Similarity to information measures. The denominator of Eq. is closely related to mutual information. Using the inequality x − 1 ≥ log(x) for x > 0, it becomes: DISPLAYFORM0 whereĨ(Ω x ; Y) is the mutual information "density" at Ω x ⊂ X. Of course, this quantity is also D KL [p(y|Ω x)||p(y)], so we know that the denominator of Eq. FORMULA2 is non-negative. Incidentally, E y∼p (y|Ωx) p (y|Ωx) p(y) − 1 is the density of "rational mutual information" BID15 DISPLAYFORM1 Similarly, the numerator is related to the self-information of Ω x: DISPLAYFORM2 so we can estimate the phase transition as: DISPLAYFORM3 Since Eq. uses upper bounds on both the numerator and the denominator, it does not give us a bound on β 0.Multiple phase transitions. Based on this characterization of Ω x, we can hypothesize datasets with multiple learnability phase transitions. Specifically, consider a region Ω x0 that is small but "typical", consists of all elements confidently predicted as y 0 by p(y|x), and where y 0 is the least common class. By construction, this Ω x0 will dominate the infimum in Eq., ing in a small value of β 0. However, the remaining X − Ω x0 effectively form a new dataset, X 1. At exactly β 0, we may have that the current encoder, p 0 (z|x), has no mutual information with the remaining classes in X 1; i.e., I(Y 1 ; Z 0) = 0. In this case, Definition 1 applies to p 0 (z|x) with respect to I(X 1 ; Z 1). We might expect to see that, at β 0, learning will plateau until we get to some β 1 > β 0 that defines the phase transition for X 1. Clearly this process could repeat many times, with each new dataset X i being distinctly more difficult to learn than X i−1. The end of Appendix F gives a more detailed analysis on multiple phase transitions. Estimating model capacity. The observation that a model can't distinguish between cluster overlap in the data and its own lack of capacity gives an interesting way to use IB-Learnability to measure the capacity of a set of models relative to the task they are being used to solve. Learnability and the Information Plane. 
Many of our can be interpreted in terms of the geometry of the Pareto frontier illustrated in FIG1, which describes the trade-off between increasing I(Y ; Z) and decreasing I(X; Z). At any point on this frontier that minimizes IB min β ≡ min I(X; Z) − βI(Y ; Z), the frontier will have slope β −1 if it is differentiable. If the frontier is also concave (has negative second derivative), then this slope β −1 will take its maximum β −1 0 at the origin, which implies IB β -Learnability for β > β 0, so that the threshold for IB β -Learnability is simply the inverse slope of the frontier at the origin. More generally, as long as the Pareto frontier is differentiable, the threshold for IB β -learnability is the inverse of its maximum slope. Indeed, Theorem 2 gives lower bounds of the slope of the Pareto frontier at the origin. This means that we lack IB β -learnability for β < β 0, which makes the origin the optimal point. If the frontier is convex, then we achieve optimality at the upper right endpoint if β > β 1, otherwise on the frontier at the location between the two endpoints where the frontier slope is β −1. Learnability and contraction coefficient If we regard the true mapping from X to Y as a channel with transition kernel P Y |X, we can define contraction coefficient η KL (P Y |X) = sup Q;P:0<DKL(P ||Q)<∞ BID16 ) as a measure of how much it keeps the two distributions P and Q intact (as opposed to being drawn nearer measured by KL-divergence) after pushing forward through the channel. By BID16 DISPLAYFORM0 DISPLAYFORM1. Theorem 5 hence also provides a lower bound for the contraction coefficient DISPLAYFORM2 Similarly for Theorem 2. In Alg. 1 we present a detailed algorithm for estimating β 0.Algorithm 1 Estimating the upper bound for β 0 for IB β -Learnability Require: Dataset D = {(x i, y i)}, i = 1, 2,...N. The number of classes is C. Require ε: tolerance for estimating β 0 1: Learn a maximum likelihood model p θ (y|x) using the dataset D. DISPLAYFORM0. 5: j * = arg max j (P y|x) ij 6: Sort the rows of P y|x in decreasing values of (P y|x) ij *. 7: Search i upper untilβ 0 = Getβ(P y|x, p y, Ω) is minimal with tolerance ε, where Ω = {1, 2, ...i upper}. 8: returnβ 0 Subroutine Getβ(P y|x, p y, Ω) s1: (N, C, n) ← (number of rows of P y|x, number of columns of P y|x, number of elements of Ω). DISPLAYFORM1 We use the Variational Information Bottleneck (VIB) objective by BID3. For the synthetic experiment, the latent Z has dimension of 2. The encoder is a neural net with 2 hidden layers, each of which has 128 neurons with ReLU activation. The last layer has linear activation and 4 output neurons, with the first two parameterizes the mean of a Gaussian and the last two parameterizes the log variance of the Gaussian. The decoder is a neural net with 1 hidden layers with 128 neurons and ReLU activation. Its last layer has linear activation and outputs the logit for the class labels. It uses a mixture of Gaussian prior with 500 components (for the experiment with class overlap, 256 components), each of which is a 2D Gaussian with learnable mean and log variance, and the weights for the components are also learnable. For the MNIST experiment, the architecture is mostly the same, except the following: for Z, we let it have dimension of 256. For the prior, we use standard Gaussian with diagonal covariance matrix. For all experiments, we use Adam BID11 ) optimizer with default parameters. We do not add any regularization. 
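Before the remaining training details below, the following is a minimal sketch of the VIB model and objective described above. It follows the MNIST variant (unit-Gaussian prior, closed-form KL); the mixture-of-Gaussians marginal used for the synthetic experiments would replace the closed-form rate with a sampled estimate. Layer sizes and the weighting convention (rate + β·CE, which matches IB β = I(X;Z) − βI(Y;Z) up to an additive constant) are our reading of the text, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    """Illustrative VIB model: 2-hidden-layer ReLU encoder that outputs the mean and
    log-variance of a diagonal Gaussian over Z, and an MLP classifier as the decoder."""
    def __init__(self, x_dim=784, z_dim=256, n_classes=2, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),   # first half: mean, second half: log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x, y, beta):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        logits = self.decoder(z)
        # Rate: closed-form KL(q(z|x) || N(0, I)), an upper bound on I(X; Z).
        rate = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
        # Cross-entropy: H(Y) - CE lower-bounds I(Y; Z).
        ce = F.cross_entropy(logits, y)
        # Minimizing rate + beta * ce matches IB_beta = I(X;Z) - beta*I(Y;Z) up to a constant.
        return rate + beta * ce, logits
```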
We use learning rate of 10 −4 and have a learning rate decay of 1 1+0.01×epoch. We train in total 2000 epochs with batch size of 500. All experiments has train-test split of 5:1, and we report the accuracy on the test set, w.r.t. the true labels. For estimation of β 0,exp in FIG0, in the accuracy vs. β i curve, we take the mean and standard deviation of the accuracy for the lowest 5 β i values, denoting as µ β, σ β. When β i is greater than µ β + 3σ β, we regard it as learning a non-trivial representation, and take the average of β i and β i−1 as the experimentally estimated onset of learning. We also inspect manually and confirm that it is consistent with human intuition. For the estimating β 0,estimated using Alg. 1, at step 7 we use the following discrete search algorithm. We gradually narrow down the range We trained a deterministic 28x10 wide resnet BID10 ), using the open source implementation from BID7. However, we extended the final 10 dimensional logits of that model through another 3 layer MLP classifier, in order to keep the inference network architecture identical between this model and the VIB models we describe below. During training, we dynamically added label noise according to the class confusion matrix in Tab. K.1. The mean label noise averaged across the 10 classes is 20%. After that model had converged, we used it to estimate β 0 with Alg. 1. Even with 20% label noise, β 0 was estimated to be 1.0483.We then trained 73 different VIB models using the same 28x10 wide resnet architecture for the encoder, parameterizing the mean of a 10-dimensional unit variance Gaussian. Samples from the encoder distribution were fed to the same 3 layer MLP classifier architecture used in the deterministic model. The marginal distributions were mixtures of 500 fully covariate 10-dimensional Gaussians, all parameters of which are trained. The VIB models had β ranging from 1.02 to 2.0 by steps of 0.02, plus an extra set ranging from 1.04 to 1.06 by steps of 0.001 to ensure we captured the empirical β 0 with high precision. However, this particular VIB architecture does not start learning until β > 2.5, so none of these models would train as described. 4 Instead, we started them all at β = 100, and annealed β down to the corresponding target over 10,000 training gradient steps. The models continued to train for another 200,000 gradient steps after that. In all cases, the models converged to essentially their final accuracy within 20,000 additional gradient steps after annealing was completed. They were stable over the remaining ∼ 180, 000 gradient steps. | Theory predicts the phase transition between unlearnable and learnable values of beta for the Information Bottleneck objective | 1,076 | scitldr |
We consider a new class of \emph{data poisoning} attacks on neural networks, in which the attacker takes control of a model by making small perturbations to a subset of its training data. We formulate the task of finding poisons as a bi-level optimization problem, which can be solved using methods borrowed from the meta-learning community. Unlike previous poisoning strategies, the meta-poisoning can poison networks that are trained from scratch using an initialization unknown to the attacker and transfer across hyperparameters. Further we show that our attacks are more versatile: they can cause misclassification of the target image into an arbitrarily chosen class. Our show above 50% attack success rate when poisoning just 3-10% of the training dataset. Deep neural networks are increasingly deployed in high stakes applications, including automated vehicles , medical diagnosis , and copyright detection . However, neural networks are vulnerable to a range of security vulnerabilities, that compromise the reliability of these systems. To date, most research on ML security has focused on evasion attacks , in which the attacker manipulates classifier inputs at test time. In contrast, data poisoning is an emerging threat model in which the attacker manipulates training data in an imperceptible way, with the goal of controlling the behavior of a classifier trained on that data (; ; ; ;). Unlike evasion attacks, data poisoning poses a threat in situations wherein the attacker may not have access to test-time data, but can place malicious data into the training set using insider access, by placing it on social media, or just by leaving malicious data online and waiting for it to be scraped by dataset collection bots. While poisoning attacks have potentially deleterious effects, their scope has been limited. Currently available targeted attacks rely on heuristics that predict how an image will impact a trained network. As a , current methods only work in the case of transfer learning, in which a pre-trained model is fine-tuned on a small dataset , or require the attacker to have control of both training and testing data (; a; b), a setting often referred to as backdoor attacks. Inspired by meta-learning techniques, we explore a new poisoning method called meta-poisoning. Rather than relying on heuristics or hand-crafted formulations, we propose to directly learn image perturbations that interfere with training dynamics to produce targeted behaviors. More specifically, meta-poisoning unrolls multiple steps of the SGD training pipeline into a deep computation graph, and then backpropagates through the training pipeline to create images with a desired effect on training. We demonstrate that meta-poisoning can manipulate the label of a test image with very high reliability using only "clean-label" attack images, and a poison budget of 10% or less of the training data. Furthermore, this can be done in the challenging context of from-scratch training (not transfer learning) of a victim network. Finally, the learning process for producing poisoned images can use augmentation strategies to treat factors like regularization constants and initialization as nuisance variables, creating poisons that work on a wide range of models and transfer across training environments. Classical poisoning attacks generally work by degrading overall model performance for simple linear classifiers, rather than eliciting specific test-time behaviors. 
The overfitting behavior and fluid class boundaries of neural networks enables more sophisticated "targeted" attacks that produce specific test-time behaviors chosen by the attacker. Several types of targeted poisoning attacks currently exist in the literature. Backdoor attacks , also called "Trojan" attacks (; a; b), insert a pattern or shape into training images that belong to the target class. The network learns to associate these patterns with the target class, and then mistakenly places images into the target class at test time when they contain that pattern. Unlike the pure poisoning attacks studied here, backdoor attacks assume a very strong threat model in which the attacker can manipulate both train and test inputs. Two recent publications on clean-label poisoning attacks have the same setting as in backdoor attacks except that they remove the need for a trigger: the target image may be completely unperturbed. In feature collision attacks, the attacker first chooses a target image whose class label she wishes to alter at test time. She then makes small perturbations to a training image, called the "source" image, so that its feature representation "collides" with the feature representation of the target image . During training, the network over-fits on the data, learning to classify the poison source image with its assigned label in the training set. Then, at test time, the target image has a very similar feature representation to the poison image, and is therefore assigned the same (but incorrect) label. Feature collision attacks have been extended to convex polytope attacks, which work in the black-box setting . However, feature collision attacks only work in the case of transfer learning, when a pre-trained model is fine-tuned on a small poisoned dataset. To date, no clean-label attacks have been demonstrated on networks trained from scratch. Such clean-label attacks have thus far depended on hand-crafted formulations and heuristics, rather than directly looking at the training pipeline. For example, feature collision and convex polytope attacks are based on the assumption that if the poison is "close enough" to the target in representation space, then the decision boundary will wrap around the poison, inadvertently including the target into the poison class. Backdoor attacks assume that the network will rely on a particular Trojan feature to assign class labels, but it is difficult to predict this behavior without conducting experiments involving the non-poisoned data samples, and strong knowledge of the training pipeline. Furthermore, since such attacks are bound to the limitations of the heuristic on which they're based, they are less versatile. For example, the target image in all the above cases can only be made to misclassify as the source class, rather than another class of the attacker's choosing. Meta-learning is a field that exploits training dynamics to create networks that learn quickly when presented with few-shot learning tasks (; ;). Meta-learning methods have an inner loop that uses an optimizer to train on new data, and an outer loop to back-propagate through the training pipeline and find optimal network parameters that adapt quickly to new tasks. While meta-learning has been used primarily for few-shot learning, other applications of it exist. We take a page from the meta-learning playbook and propose methods that craft poison images by backpropagating through the training pipeline. 
Unlike previous attacks, the proposed methods can poison networks trained from scratch (i.e. from random initialization), without manipulating test-time inputs. The poisons transfer across different training pipelines and, further, can be crafted to perform different types of attacks (such as misclassifying the target into a third class) simply by changing the crafting loss. We consider an attacker who wishes to force a target image x_t of her choice to be assigned label y_source by the victim model. She has the ability to perturb the training set by adding a perturbation matrix ∆ ∈ R^{N×M} with ‖∆‖_∞ ≤ ε, where ε is a small value, N is the total number of examples, and M is the dimensionality of the examples. We limit the attacker to being able to perturb only a small number of examples n, where n/N is typically < 10%. Outside of these n examples, the corresponding rows of ∆ are zero, meaning no perturbation is made. The optimal perturbation ∆* is then

∆* = argmin_{‖∆‖_∞ ≤ ε} L(x_t, y_source; θ*(∆)),    (1)

where L(x, y; θ) is a loss function used to measure how well a model with parameters θ assigns label y to image x, and θ*(∆) are the network parameters found by training on the perturbed training data X + ∆. Note that equation 1 is a bi-level optimization problem: the minimization for ∆ involves the parameters θ*(∆), which are themselves the minimizer of the training problem

θ*(∆) = argmin_θ L(X + ∆, Y; θ).    (2)

The formulation in equation 1 considers the training of a single network with a single initialization. To produce a transferable poison attack that is robust to changes in the initialization and training process, we average over M surrogate loss functions, randomly initialized and independently trained, leading to the final optimization objective

∆* = argmin_{‖∆‖_∞ ≤ ε} (1/M) Σ_{i=1}^{M} L(x_t, y_source; θ_i*(∆)),    (3)

where each θ_i*(∆) minimizes equation 2 for the i-th surrogate model. Finding the global minimum of this objective over many model instances is intractable using conventional bi-level methods. For example, each iteration of the direct gradient descent strategies previously used for poisoning simple SVM models would require minimizing equation 2 for all surrogate models, as well as computing the (intractable) inverse of the parameter Hessian matrix. Rather than directly solve the bi-level problem, we adopt methods from meta-learning. In meta-learning, one tries to find model parameters that yield the lowest possible loss after being updated by a few steps of SGD on a randomly chosen task. For meta-poisoning, we are interested in choosing a set of parameters (here, ∆) that will minimize the poisoning loss of equation 3 after applying an SGD update with a randomly chosen model. In both cases, this can be done by unrolling and back-propagating through a complete step of the training process. Intuitively, MetaPoison can be described by Figure 1. Instead of minimizing the full bi-level objective, we settle for a "first-order" approximation that only looks ahead a few steps in the training process. We run natural training, following the parameters θ toward low training error. After each SGD update, we ask: how can we update the poison perturbation ∆ so that, when added to the training images for a single SGD step, it causes the most severe impact? We answer this question by (i) selecting source images from the training set, (ii) computing a parameter update to θ with respect to the training loss, (iii) passing the target image x_t through the resulting model, and (iv) evaluating the target adversarial loss L(x_t, y_source) on the target image x_t to measure the discrepancy between the network output and the adversarially chosen label. Finally, we backpropagate through this entire process to find the gradient of L(x_t, y_source) with respect to the data perturbation ∆. This gradient is used to refine ∆ by applying a perturbation to the source image. This process is depicted in Figure 1. More specifically, when crafting poisons, we insert the poisons (sourced from the training set, although new examples could be added too) into the training set. Next we form a random mini-batch of data on each SGD update. If there exists at least one poison in the mini-batch, the computation graph is unrolled over one or more SGD steps ahead to compute a meta-gradient ∇_∆ L(x_t, y_source). The meta-gradient, which is with respect to the pixels of the poison images in that mini-batch, is applied as a perturbation to the image using the Adam optimizer. Afterward the perturbation is projected back onto the constraint ‖∆‖_∞ ≤ ε. To evaluate our crafted poisons, the poisoned images are inserted into a "victim" model with unknown initialization, mini-batching order, and training strategy, and the training then follows a new optimization trajectory. If the poisons succeed, this trajectory would lead to a minimum with both low training error and low target adversarial error, satisfying equation 3. This ensures that there are no unrealistic assumptions on the victim; in practice, the victim can apply standard training techniques with all of their inherent stochasticity (i.e., random initialization and stochastic minibatching). This approach is supplemented by repeating the gradient computation over the M surrogate models. Before we begin the poisoning process we pre-train these (independently initialized) surrogate models to varying epochs, so that every poison update sees models from numerous stages of training. Once a model reaches a sentinel number of epochs, it is reset to epoch 0 with a new random initialization. Akin to meta-learning, which samples new tasks with each meta-step, meta-poisoning "samples" new model parameters along training trajectories for each craft step. Algorithm 1 summarizes the details.

Algorithm 1 Crafting poisoned images via Meta-Learning
1: Input: the training set of images and labels (X_tr, Y_tr), target image x_t and intended target label y_source, threshold ε, a subset of n < N images to be poisoned, a budget of T training epochs, and M randomly initialized models.
2: Stagger the M given models, training the t-th model up to tM/T epochs.
3: Select n images from the training set to be poisoned, denoted by X_p.
4–8: For i = 1,..., k unrolling steps and for each of the M models, unroll the training step; average the target losses over all models and find ∇_{X_p} L_total by backpropagation.
9: Update X_p by a first-order optimizer (e.g. Adam or signGD) and project onto the ∆ constraint.
10: Update all models by taking a first-order step minimizing L_{θ_m}(X_tr, Y_tr).
11: Reset models that have reached T epochs to their initialization.
12: STOP if the maximal number of crafting steps is reached or X_p has converged.
13: Return X_p.

We test the developed algorithm by considering the task of poisoning a standard ResNet-20 trained on the CIFAR-10 dataset. Standard network implementation, training procedure, and data augmentation techniques are used. For our attacks we generally choose a random target image from the CIFAR-10 test set and choose the first n images from a random poison class as the source images. We use the typical cross-entropy loss as our objective function during poison crafting and make poison updates using the Adam optimizer. We project onto the constraint ‖∆‖_∞ < ε after each gradient update, with ε = 16/255. Examples of poisoned images can be seen in Figure 2.
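To make the unrolled crafting step concrete, the following is a minimal sketch of one crafting iteration with a single surrogate model and a one-step unroll (k = 1). It is our own illustration, not the authors' code: `model` is assumed to be a functional network that accepts an explicit list of parameter tensors (each with requires_grad=True), the poison update uses a plain signed gradient step rather than Adam, and only the ℓ∞ projection is shown.

```python
import torch
import torch.nn.functional as F

def craft_step(model, params, x_poison, y_poison, delta, x_target, y_source,
               lr_inner=0.1, lr_craft=0.01, eps=16 / 255):
    """One MetaPoison-style crafting step: unroll one SGD update on the poisoned
    mini-batch, evaluate the adversarial target loss under the unrolled weights,
    and backpropagate to the poison perturbation delta."""
    delta = delta.detach().requires_grad_(True)
    # Inner (training) loss on the perturbed poison batch.
    train_loss = F.cross_entropy(model(x_poison + delta, params), y_poison)
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    unrolled = [p - lr_inner * g for p, g in zip(params, grads)]  # one unrolled SGD step
    # Outer (adversarial) loss: does the unrolled model assign y_source to the target?
    target_loss = F.cross_entropy(model(x_target, unrolled), y_source)
    meta_grad, = torch.autograd.grad(target_loss, delta)
    # Update the perturbation (the paper uses Adam; a signed step is shown here)
    # and project back onto the infinity-norm ball of radius eps.
    with torch.no_grad():
        delta = (delta - lr_craft * meta_grad.sign()).clamp_(-eps, eps)
    return delta
```

A full implementation would additionally average the target loss over the M staggered surrogate models and clip the poisoned images to the valid pixel range.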
Figure 4 shows that inserting the poison into training successfully changes the label of the targeted image for nearly 100% of initializations, without affecting the validation accuracy of the other training images, making the attack imperceptible without knowledge of the target image or detection of the adversarial pattern. Figure 3 visualizes the progress of the poison during the crafting phase. The first-order nature of the crafting process is clearly visible in 3a. After unrolling for a few steps, we find that the unrolled state target loss L(x t, y source) is just slightly lower that of the normal (unaffected) state. Yet, during victim training, these small nudges from the poisons toward lower target loss have a cumulative effect over all training epochs and the target loss decreases significantly, leading to a reclassification of the target image at the end of victim training. Note that each point on the red curve in Figure 3a and dark blue curve in Figure 3b are the target loss and success rate at the end of training when trained from-scratch on the poisons generated at that particular craft step. It is further important to note that test accuracy of all other images is only marginally effected, making this attack nearly undetectable, as shown in Figure 3b. The effectiveness of the poison increases as more information on different network states and examples is accumulated and averaged during the crafting process. We now take a closer look at what happens when the victim trains their model on the poisoned dataset. One would typically expect that as training progresses, misclassification of the target example (i.e. to the source class or any class other than the target class) will decrease as the generalization ability of the model increases. We see in Figure 4 that the adversarial target loss decreases and attack success rate increases as a function of epoch. Each curve corresponds to a different randomly selected source-target pair and is the average of 8 independent training runs. Further, the test accuracy curve is unaffected. While we train only to 50 epochs here for runtime constraints, the at 100 epochs almost always the same, since after epoch 40, 60, and 80 in the standard training procedure, the learning rate decreases significantly. In figure 5a we investigate restrictions into the poison budget, that is how many images from the poison class can be accessed during training. For these experiments we use a smaller poison budget. While 10% corresponds to control over the full set of training images from one class, we see that this number can be reduced significantly. The effectiveness varies more between different poison-target pairings, than with the control over the image set. Plainly speaking, an "easier" choice for the poison class, is more important than the actual amount of control over it. Figure 4 and figure 3b show how small the effect of the poison on the remainder of the validation set really is. While other attack exist that make training difficult in general or disturb generalization accuracy for all classes, e.g. , our method behaves very differently. This is a consequence of the bi-level formulation of the poisoning problem, we minimize the target loss under minimizers of the victim loss. This leads to an almost surgical precision, where precisely only the target image is modified, leaving other images unaffected and making it hard to detect that an attack took place without knowledge of the target image. 
This is no small feat as the validation set, especially on CIFAR contains a substantial collection of similar images, compared to the target image. In the threat model considered so far in this investigation, the poisons were investigated on the same architecture and hyper-parameter settings as they were crafted on, so that the only source of difference between training model and victim model was in the initial network parameters. In more general settings, an attacker might be able to reasonably guess the architecture used by the defender, yet the exact hyper-parameters are often unknown. We check the transferability of poisons across hyper-parameters in Figure 5b. We find that although there is some variance across different target pairs, the crafted poisons are astonishingly robust against hyperparameter changes. For details, the baseline scenario involves batchsize of 125, learning rate of 0.1, optimizer of Momentum, and no regularization (no data augmentation and weight decay). Corresponding changes in the victim model include: batchsize and learning rate of 250 and 0.2, optimizer of SGD, and regularization which includes weight decay and data augmentation. Beyond the conventional poisoning attack discussed previously, MetaPoison can flexibly attack in other ways. In this section we detail two other attack strategies. Third-Party Attack: In this scenario, the attacker has access to just one class of images, e.g. car, and wants to transfer a target image of, e.g. deer to a "third-party" label, e.g. horse. If we assume the feature collision discussed in previous works such as to be a necessary mechanism for clean-label data poisoning, then this attack should be impossible. The features of car lie too far away from the features of horse to affect the classification of deer images. Yet the attack works nearly as well as the default attack, as shown detailed in 6c. Multi-target attack: Previous attacks focused on a single target image. However, an attacker might want to misclassify several target images with the same set of poisons. Interestingly this attack (within the same budget as the single target attack) is relatively difficult. Inter-class variability between different targets makes it difficult for the poisoned images to be effective in all situations and the reliability of the poisoning decreases. Loss Landscape Visualization: The loss landscape shown in the schematic 1 is not just a vague motivation, we can actually infer information about the actual loss landscape. To do so, we apply PCA and compute the two main components differentiating final clean and the target weight vectors. The principle components accounted for 96% of the variance. We then plot the clean trajectory used during crafting and the poisoned victim trajectory during validation, together with the directions of the gradients of the the unrolled iterations. Interestingly, this plot, shown in 6a shows that the agreement between schematics and experimental are quite close. Feature space visualization: Figure 6b visualizes the average distance in feature space for several pairs of poison and target. We find that, in contrast to feature collision attacks, e.g. , the distance between the target and poison is actually substantial for MetaPoison, suggesting that the approach works by a very different mechanism. Due to optimization-based nature of our approach, we do not need to find this mechanism heuristically or by modelling, we are able to generate it implicitely by directly approximating the data poisoning problem. 
We have extended learning-to-learn techniques to adversarial poison example generation, or learning-to-craft. We devised a novel fast algorithm by which to solve the bi-level optimization inherent to poisoning, where the inner training of the network on the perturbed dataset must be performed for every crafting step. Our , showing the first clean-label poisoning attack that works on networks trained from scratch, demonstrates the effectiveness of this method. Further our attacks are versatile, they have functionality such as the third-party attack which are not possible with previous methods. We hope that our work establishes a strong attack baseline for future work on clean-label data poisoning and also promote caution that these new methods of data poisoning are able to muster a strong attack on industrially-relevant architectures, that even transfers between training runs and hyperparameter choices. | Generate corrupted training images that are imperceptible yet change CNN behavior on a target during any new training. | 1,077 | scitldr |
We give a new algorithm for learning a two-layer neural network under a very general class of input distributions. Assuming there is a ground-truth two-layer network y = Aσ(Wx) + ξ, where A, W are weight matrices, ξ represents noise, and the number of neurons in the hidden layer is no larger than the input or output, our algorithm is guaranteed to recover the parameters A, W of the ground-truth network. The only requirement on the input x is that it is symmetric, which still allows highly complicated and structured input. Our algorithm is based on the method-of-moments framework and extends several results in tensor decompositions. We use spectral algorithms to avoid the complicated non-convex optimization in learning neural networks. Experiments show that our algorithm can robustly learn the ground-truth neural network with a small number of samples for many symmetric input distributions. Deep neural networks have been extremely successful in many tasks related to images, videos and reinforcement learning. However, the success of deep learning is still far from being understood in theory. In particular, learning a neural network is a complicated non-convex optimization problem, which is hard in the worst case. The question of whether we can efficiently learn a neural network still remains generally open, even when the data is drawn from a neural network. Despite a lot of recent effort, the class of neural networks that we know how to provably learn in polynomial time is still very limited, and many results require strong assumptions on the input distribution. In this paper we design a new algorithm that is capable of learning a two-layer neural network for a general class of input distributions. Following standard models for learning neural networks, we assume there is a ground-truth neural network. The input data (x, y) is generated by first sampling the input x from an input distribution D, then computing y according to the ground-truth network that is unknown to the learner. The learning algorithm will try to find a neural network f such that f(x) is as close to y as possible over the input distribution D. Learning a neural network is known to be a hard problem even in some simple settings, so we need to make assumptions on the network structure or the input distribution D, or both. Many prior works assume a simple input distribution (such as Gaussian) and try to learn more and more complex networks. However, the input distributions in real life are distributions of very complicated objects such as texts, images or videos. These inputs are highly structured, clearly not Gaussian, and do not even have a simple generative model. We consider a type of two-layer neural network, where the output y is generated as y = Aσ(Wx) + ξ. Here x ∈ R^d is the input, and W ∈ R^{k×d} and A ∈ R^{k×k} are the two weight matrices. The function σ is the standard ReLU activation σ(x) = max{x, 0}, applied entry-wise to the vector Wx, and ξ is a noise vector that has E[ξ] = 0 and is independent of x. Although the network only has two layers, learning similar networks is far from trivial: even when the input distribution is Gaussian, Ge et al. (2017b) showed that the standard optimization objective can have bad local optima, and gave a new, more complicated objective function that does not have bad local minima. For the input distribution D, our only requirement is that D is symmetric.
That is, for any x ∈ R^d, the probability of observing x ∼ D is the same as the probability of observing −x ∼ D. A symmetric distribution can still be very complicated and cannot be represented by a finite number of parameters. In practice, one can often think of the symmetry requirement as a "factor-2" approximation to an arbitrary input distribution: if we have arbitrary training samples, it is possible to augment the input data with their negations to make the input distribution symmetric, and it should take at most twice the effort to label both the original and augmented data. In many cases (such as images) the augmented data can be interpreted (for images it will just be negated colors), so reasonable labels can be obtained. When the input distribution is symmetric, we give the first algorithm that can learn a two-layer neural network. Our algorithm is based on the method-of-moments approach: first estimate some correlations between x and y, then use this information to recover the model parameters. More precisely, we have: Theorem 1 (informal). Suppose the data is generated according to the model y = Aσ(Wx) + ξ and the input distribution x ∼ D is symmetric. Given exact correlations between x and y of order at most 4, as long as A, W and the input distribution are not degenerate, there is an algorithm that runs in poly(d) time and outputs a network Â, Ŵ of the same size that is effectively the same as the ground-truth network: for any input x, Âσ(Ŵx) = Aσ(Wx). Of course, in practice we only have samples of (x, y) and cannot get the exact correlations. However, our algorithm is robust to perturbations, and in particular can work with polynomially many samples. Theorem 2 (informal). Suppose the data is generated according to the model y = Aσ(Wx) + ξ and the input distribution x ∼ D is symmetric. As long as the weight matrices A, W and the input distribution are not degenerate, there is an algorithm that uses poly(d, 1/ε) time and number of samples and outputs a network Â, Ŵ of the same size that computes an ε-approximation to the ground-truth network: for any input x, ‖Âσ(Ŵx) − Aσ(Wx)‖₂ ≤ ε. In fact, the algorithm recovers the original parameters A, W up to scaling and permutations. Here, when we say the weight matrices are not degenerate, we mean that the matrices A, W should be full rank, and in addition a certain distinguishing matrix that we define later in Section 2 is also full rank. We justify these assumptions using the smoothed analysis framework. In smoothed analysis, the input is not purely controlled by an adversary. Instead, the adversary can first generate an arbitrary instance (in our case, arbitrary weight matrices W, A and a symmetric input distribution D), and the parameters for this instance will be randomly perturbed to yield a perturbed instance. The algorithm only needs to work with high probability on the perturbed instance. This limits the power of the adversary and prevents it from creating highly degenerate cases (e.g. choosing the weight matrices to be of much lower rank than k). Roughly speaking, we show: Theorem 3 (informal). There is a simple way to perturb the input distribution, W and A such that, with high probability, the distance between the perturbed instance and the original instance is at most λ, and our algorithm outputs an ε-approximation to the perturbed network with poly(d, 1/λ, 1/ε) time and number of samples. In the rest of the paper, we will first review related works. Then in Section 2 we formally define the network and introduce some notations.
Our algorithm is given in Section 3. Finally in Section 4 we run experiments to show that the algorithm can indeed learn the two-layer network efficiently and robustly. The experiments show that our algorithm works robustly with reasonable number of samples for different (symmetric) input distributions and weight matrices. Due to space constraints, the proof for polynomial number of samples (Theorem 2) and smoothed analysis (Theorem 3) are deferred to the appendix. There are many works in learning neural networks, and they come in many different styles. Non-standard Networks Some works focus on networks that do not use standard activation functions. BID1 gave an algorithm that learns a network with discrete variables. and follow-up works learn neural networks with polynomial activation functions. used the rank-1 tensor decomposition for learning a non-overlapping convolutional neural network with differentiable and smooth activation and Gaussian input. ReLU network, Gaussian input When the input is Gaussian, Ge et al. (2017b) showed that for a two-layer neural network, although the standard objective does have bad local optimal solutions, one can construct a new objective whose local optima are all globally optimal. Several other works (; b; ; ; ; ;) extend this to different settings. General input with score functions A closely related work does not require the input distribution to be Gaussian, but still relies on knowing the score function of the input distribution (which in general cannot be estimated efficiently from samples). gave a way to design loss functions with desired properties for one-hidden-layer neural networks with general input distributions based on a new proposed local likelihood score function estimator. For general distributions (including symmetric ones) their estimator can still require number of samples that is exponential in dimension d (as in Assumption 1(d)). There are several lines of work that try to extend the learning to more general distributions. Du et al. (2017a) showed how to learn a single neuron or a single convolutional filter under some conditions for the input distribution.; Zhang et al. (2016; 2017);; used kernel methods to learn neural networks when the norm of the weights and input distributions are both bounded (and in general the running time and sample complexity in this line of work depend exponentially on the norms of weights/input). showed that gradient descent minimizes the training error in an over-parameterized two-layer neural network. They only consider training error while our also apply to testing error. The work that is most similar to our setting is , where they showed how to learn a single neuron (or a single convolutional filter) for any symmetric input distribution. Our two-layer neural network model is much more complicated. Our work uses method-of-moments, which has already been applied to learn many latent variable models (see BID0 and references there). The particular algorithm that we use is inspired by an over-complete tensor decomposition algorithm FOOBI . Our smoothed analysis are inspired by BID1 and , although our setting is more complicated and we need several new ideas. Published as a conference paper at ICLR 2019 In this section, we first describe the neural network model that we learn, and then introduce notations related to matrices and tensors. Finally we will define distinguishing matrix, which is a central object in our analysis. 
We consider two-layer neural networks with d-dimensional input, k hidden units and k-dimensional output, as shown in Figure 1. We assume that k ≤ d. The input of the neural network is denoted by x ∈ R d. Assume that the input x is i.i.d. drawn from a symmetric distribution D 3. Let the two weight matrices in the neural network be W ∈ R k×d and A ∈ R k×k. The output y ∈ R k is generated as follows: DISPLAYFORM0 where σ(·) is the element-wise ReLU function and ξ ∈ R k is zero-mean random noise, which is independent with input x. Let the value of hidden units be h ∈ R k, which is equal to σ(W x). Denote i-th row of matrix W as w i (i = 1, 2, ..., k). Also, let i-th column of matrix A be a i (i = 1, 2, ..., k). By property of ReLU activations, for any constant c > 0, scaling the i-th row of W by c while scaling the i-th column of A by 1/c does not change the function computed by the network. Therefore without loss of generality, we assume every row vector of W has unit norm. DISPLAYFORM1 Figure 1: Network model. We use [n] to denote the set {1, 2, · · ·, n}. For two random variables X and Y, we say X d = Y if they come from the same distribution. In the vector space R n, we use ·, · to denote the inner product of two vectors, and use · to denote the Euclidean norm. We use e i to denote the i-th standard basis vector. For a matrix A ∈ R m×n, let A [i,:] denote its i-th row vector, and let A [:,j] denote its j-th column vector. Let A's singular values be σ 1 (A) ≥ σ 2 (A) ≥ · · · ≥ σ min(m,n) (A), and denote the smallest singular value be σ min (A) = σ min(m,n) (A). The condition number of matrix A is defined as κ(A):= σ 1 (A)/σ min (A). We use I n to denote the identity matrix with dimension n × n. The spectral norm of a matrix is denoted as ·, and the Frobenius norm as · F.We represent a d-dimensional linear subspace S by a matrix S ∈ R n×d, whose columns form an orthonormal basis for subspace S. The projection matrix onto the subspace S is denoted by Proj S = SS, and the projection matrix onto the orthogonal subspace of S is denoted by Proj S ⊥ = I n − SS.For matrix A ∈ R m1×n1, C ∈ R m2×n2, let the Kronecker product of A and C be A ⊗ C ∈ R m1m2×n1n2, which is defined as (A ⊗ C) (i1,i2),(j2,j2) = A i1,i2 C j1,j2. For a vector x ∈ R d, the Kronecker product x ⊗ x has dimension d 2. We denote the p-fold Kronecker product of x as x ⊗p, which has dimension d p.We often need to convert between vectors and matrices. For a matrix A ∈ R m×n, let vec(A) ∈ R mn be the vector obtained by stacking all the columns of A. For a vector a ∈ R m 2, let mat(x) ∈ R m×m denote the inverse mapping such that vec(mat(a)) = a. A central object in our analysis is a large matrix whose columns are closely related to pairs of hidden variables. We call this the distinguishing matrix and define it below: Definition 1. Given a weight matrix W of the first layer, and the input distribution D, the distin- DISPLAYFORM0 is a matrix whose columns are indexed by ij where 1 ≤ i < j ≤ k, and DISPLAYFORM1. Another related concept is the augmented distinguishing matrix M, which is a d 2 × (k 2 + 1) matrix whose first k 2 columns are exactly the same as distinguishing matrix N, and the last column (indexed by 0) is defined as DISPLAYFORM2 For both matrices, when the input distribution is clear from context we use N or M and omit the superscript. The exact reason for these definitions will only be clear after we explain the algorithm in Section 3. 
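As a small aid to the notation above, the following is an illustrative sketch (our own, not part of the paper) of the vec/mat maps and the Kronecker power used to build the d²-dimensional columns of the distinguishing matrices N and M:

```python
import numpy as np

def vec(A):
    """Stack the columns of A into a single vector (column-major order)."""
    return A.reshape(-1, order="F")

def mat(a):
    """Inverse of vec for square matrices: vec(mat(a)) = a."""
    m = int(round(np.sqrt(a.size)))
    return a.reshape(m, m, order="F")

def kron_power(x, p=2):
    """p-fold Kronecker power of a vector, e.g. x ⊗ x for p = 2."""
    out = x
    for _ in range(p - 1):
        out = np.kron(out, x)
    return out
```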
Our algorithm will require that the matrices M and N defined above are robustly full rank, in the sense that σ_min(M) is lower bounded. Intuitively, every column N^D_{ij} looks at the expectation over samples for which the weights w_i, w_j see opposite signs (w_i^⊤x · w_j^⊤x ≤ 0, hence the name distinguishing matrix). Requiring M and N to be full rank prevents several degenerate cases. For example, if two hidden units are perfectly correlated and always share the same sign for every input, this is very unnatural, and requiring the distinguishing matrix to be full rank prevents such cases. Later in Section C we will also show that requiring a lower bound on σ_min(M) is not unreasonable: in the smoothed analysis setting where nature can make a small perturbation to the input distribution D, we show that for any input distribution D there exist simple perturbations D' that are arbitrarily close to D such that σ_min(M^{D'}) is lower bounded.

In this section, we describe our algorithm for learning the two-layer networks defined in Section 2.1. As a warm-up, we first consider a single-layer neural network and recover the known single-neuron result using the method of moments. This will also be used as a crucial step in our algorithm. Due to space constraints we only introduce the algorithm and proof ideas; the detailed proofs are deferred to Section A in the appendix. Throughout this section, when we use E[·] without further specification, the expectation is over the randomness x ∼ D and the noise ξ.

We first give a simple algorithm for learning a single-layer neural network. More precisely, suppose we are given samples (x_1, y_1), ..., (x_n, y_n) where x_i ∼ D comes from a symmetric distribution, and the output y_i is computed by

y_i = σ(w^⊤ x_i) + ξ_i.

Here the ξ_i's are i.i.d. noise terms that satisfy E[ξ_i] = 0; the noise ξ_i is also assumed to be independent of the input x_i. The goal is to learn the weight vector w. The idea of the algorithm is simple: we estimate the correlation between x and y and the covariance of x, and then recover the hidden vector w from these two estimates. The main challenge here is that y is not a linear function of x. A crucial observation from prior work allows us to deal with the non-linearity:

Lemma 1. Suppose x ∼ D comes from a symmetric distribution and y is computed as above; then E[yx] = (1/2) E[xx^⊤] w.

Importantly, the right hand side of Lemma 1 does not contain the ReLU function σ. This is true because if x comes from a symmetric distribution, averaging between x and −x gets rid of non-linearities like ReLU or leaky-ReLU. Later we will prove a more general version of this lemma (Lemma 6). Using this lemma, it is immediate to get a method-of-moments algorithm for learning w: we just need to estimate E[yx] and E[xx^⊤], and then we know w = 2 (E[xx^⊤])^{-1} E[yx]. This is summarized in Algorithm 1.

Algorithm 1 Learning Single-layer Neural Networks
Input: Samples (x_1, y_1), ..., (x_n, y_n) generated according to the single-layer model above.
Output: Estimate of the weight vector w.
1: Estimate Ê[yx] and Ê[xx^⊤] from the samples.
2: return w = 2 (Ê[xx^⊤])^{-1} Ê[yx].

In order to learn the weights of the network defined in Section 2.1, a crucial observation is that we have k outputs as well as k hidden units. This gives a possible way to reduce the two-layer problem to the single-layer problem. For simplicity, we consider the noiseless case in this section, where y = Aσ(Wx). Let u ∈ R^k be a vector and consider u^⊤y; it is clear that u^⊤y = (u^⊤A)σ(Wx). Let z_i be the normalized version of the i-th row of A^{-1}; then z_i has the property that z_i^⊤A = λ_i e_i^⊤, where λ_i > 0 is a constant and e_i is a basis vector. (A short code sketch of Algorithm 1 follows, before we continue with the reduction.)
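Algorithm 1 is short enough to state directly as code. The following NumPy sketch implements the estimator w = 2(Ê[xx^⊤])^{-1}Ê[yx] under the assumptions above (symmetric input, ReLU activation, zero-mean noise independent of x); it is an illustration rather than the authors' implementation:

```python
import numpy as np

def learn_single_layer(X, y):
    """Algorithm 1: method-of-moments estimate of w for y = sigma(w^T x) + xi.
    X has shape (n, d) with rows drawn from a symmetric distribution; y has shape (n,)."""
    n = X.shape[0]
    M2 = X.T @ X / n                       # empirical E[x x^T]
    m1 = X.T @ y / n                       # empirical E[y x]
    return 2.0 * np.linalg.solve(M2, m1)   # Lemma 1: E[yx] = (1/2) E[xx^T] w
```

With exact moments this recovers w exactly; with empirical moments the error is controlled by how well M2 and m1 are estimated, and the robustness analysis in Appendix B makes this quantitative. With this single-layer routine in hand, we return to the reduction from two layers to one.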
The key observation here is that if u = z i, then u A = λ i e i. As a , u y = λ i e i σ(W x) = σ(λ i w i x) is the output of a single-layer neural network with weight equal to λ i w i. If we know all the vectors {z 1, ..., z k}, the input/output pairs (x, z i y) correspond to single-layer networks with weight vectors {λ i w i}. We can then apply the algorithm in Section 3.1 (or the algorithm in) to learn the weight vectors. When u A = λ i e i, we say that u y is a pure neuron. Next we will design an algorithm that can find all vectors {z i}'s that generate pure neurons, and therefore reduce the problem of learning a two-layer network to learning a single-layer network. Pure Neuron Detector In order to find the vector u that generates a pure neuron, we will try to find some property that is true if and only if the output can be represented by a single neuron. Intuitively, using ideas similar to Lemma 1 we can get a property that holds for all pure neurons: DISPLAYFORM1 As before, the ReLU activation does not appear because of the symmetric input distribution. For y = u y, we can estimate all of these moments (E[ŷ 2], E[ŷx], E[xx]) using samples and check whether this condition is satisfied. However, the problem with this property is that even if z = u y is not pure, it may still satisfy the property. More precisely, ifŷ = k i=1 c i σ(w i x), then we have DISPLAYFORM2 Published as a conference paper at ICLR 2019The additional terms may accidentally cancel each other which leads to a false positive. To address this problem, we consider a higher order moment: DISPLAYFORM3 Here N ij's are columns of the distinguishing matrix defined in Definition 1.The important observation here is that there are DISPLAYFORM4 considering their symmetry) dimensional objects. When the distinguishing matrix is full rank, we know its columns N ij are linearly independent. In that case, if the sum of the extra terms is 0, then the coefficient in front of each N ij must also be 0. The coefficients are c i c j which will be non-zero if and only if both c i, c j are non-zero, therefore to make all the coefficients 0 at most one of {c i} can be non-zero. This is summarized in the following Corollary: DISPLAYFORM5. Suppose the distinguishing matrix is full rank, if f (u) = 0 for unit vector u, then u must be equal to one of ±z i.We will call the function f (u) a pure neuron detector, as u y is a pure neuron if and only if f (u) = 0. Therefore, to finish the algorithm we just need to find all solutions for f (u) = 0.Linearization The main obstacle in solving the system of equations f (u) = 0 is that every entry of f (u) is a quadratic function in u. The system of equations f (u) = 0 is therefore a system of quadratic equations. Solving a generic system of quadratic equations is NP-hard. However, in our case this can be solved by a technique that is very similar to the FOOBI algorithm for tensor decomposition . The key idea is to linearize the function by thinking of each degree 2 monomial u i u j as a separate variable. Now the number of variables is k 2 +k = k 2 +k and f is linear in this space. In other words, there exists a matrix DISPLAYFORM6 } are all in the nullspace of T. Later in Section A we will prove that the nullspace of T consists of exactly these vectors (and their combinations):Lemma 4. 
Let T be the unique R d 2 ×(k2+k) matrix that satisfies T vec * (uu) = f (u) (where f (u) is defined as in Corollary 1), suppose the distinguishing matrix is full rank, then the nullspace of T is exactly the span of {vec DISPLAYFORM7 Based on Lemma 4, we can just estimate the tensor T from the samples we are given, and its smallest singular directions would give us the span of {vec DISPLAYFORM8 Finding z i 's from span of z i z i 's In order to reduce the problem to a single-layer problem, the final step is to find z i 's from span of z i z i 's. This is also a step that has appeared in FOOBI and more generally other tensor decomposition algorithms, and can be solved by a simultaneous diagonalization. Let Z be the matrix whose rows are z i 's, which means Z = diag(λ)A −1. Let X = Z D X Z and Y = Z D Y Z be two random elements in the span of z i z i, where D X and D Y are two random diagonal matrices. Both matrices X and Y can be diagonalized by matrix Z. In this case, if we compute DISPLAYFORM9 That is, z i is an eigenvector of XY −1! The matrix XY −1 can have at most k eigenvectors and there are k z i's, therefore the z i's are the only eigenvectors of XY −1. Published as a conference paper at ICLR 2019Lemma 5. Given the span of z i z i's, let X, Y be two random matrices in this span, with probability 1 the z i's are the only eigenvectors of XY −1.Using this procedure we can find all the z i's (up to permutations and sign flip). Without loss of generality we assume z i A = λ i e i. The only remaining problem is that λ i might be negative. However, this is easily fixable by checking E[DISPLAYFORM0 has the same sign as λ i, and we can flip z i if E[z i y] is negative. We can now give the full algorithm, see Algorithm 2. The main steps of this algorithm is as explained in the previous section. Steps 2 -5 constructs the pure neuron detector and finds the span of vec * (z i z i) (as in Corollary 1); Steps 7 -9 performs simultaneous diagonalization to get all the z i's; Steps 11, 12 calls Algorithm 1 to solve the single-layer problem and outputs the correct . Algorithm 2 Learning Two-layer Neural Networks Input: Samples (x 1, y 1),..., (x n, y n) generated according to Equation Output: Weight matrices W and A. DISPLAYFORM0 where each entry is expressed as a degree-2 polynomial over u. {Reduce to 1-Layer Problem} 11: For each z i, let v i be the output of Algorithm 1 with input (x 1, z i y 1),..., (x n, z i y n). 12: Let Z be the matrix whose rows are z i's, V be the matrix whose rows are v i' s. 13: return V, Z −1. DISPLAYFORM1 We are now ready to state a formal version of Theorem 1:Theorem 4. Suppose A, W, E[xx] and the distinguishing matrix N are all full rank, and Algorithm 2 has access to the exact moments, then the network returned by the algorithm computes exactly the same function as the original neural network. It is easy to prove this theorem using the lemmas we have. Proof. By Corollary 1, we know that after Step 5 of Algorithm 2, the span of columns of S is exactly equal to the span of {vec * (z i z i)}. By Lemma 5, we know the eigenvectors of XY −1 at Step 8 are exactly the normalized version of rows of A −1. Without loss of generality, we will fix the permutation and assume z i A = λ i e i. In Step 9, we use the fact that E[DISPLAYFORM2 where E[σ(w i x)] is always positive because σ is the ReLU function. Therefore, after Step 9 we can assume all the λ i's are positive. 
Now the output z i y = λ i σ(w i x) = σ(λ i w i x) (again by property of ReLU function σ), by the design of Algorithm 1 we know v i = λ i w i. We also know that Z = diag(λ)A −1, therefore DISPLAYFORM3. These two scaling factors cancel each other, so the two networks compute the same function. Published as a conference paper at ICLR 2019Figure 2: Error in recovering W, A and outputs ("MSE") for different numbers of training samples and different dimensions of W and A. Each point is the of averaging across five trials, where on the left W and A are both drawn as random 10 × 10 orthonormal matrices and in the center as 32 × 32 orthonormal matrices. On the right, given 10, 000 training samples we plot the square root of the algorithm's error normalized by the dimension of W and A, which are again drawn as random orthonormal matrices. The input distribution is a spherical Gaussian. In this section, we provide experimental to validate the robustness of our algorithm for both Gaussian input distributions as well as more general symmetric distributions such as symmetric mixtures of Gaussians. There are two important ways in which our implementation differs from our description in Section 3.3. First, our description of the simultaneous diagonalization step in our algorithm is mostly for simplicity of both stating and proving the algorithm. In practice we find it is more robust to draw 10k random samples from the subspace spanned by the last k right-singular vectors of T and compute the CP decomposition of all the samples (reshaped as matrices and stacked together as a tensor) via alternating least squares . As alternating least squares can also be unstable we repeat this step 10 times and select the best one. Second, once we have recovered and fixed A we use gradient descent to learn W, which compared to Algorithm 1 does a better job of ensuring the overall error will not explode even if there is significant error in recovering A. Crucially, these modifications are not necessary when the number of samples is large enough. For example, given 10,000 input samples drawn from a spherical Gaussian and A and W drawn as random 10 × 10 orthogonal matrices, our implementation of the original formulation of the algorithm was still able to recover both A and W with an average error of approximately 0.15 and achieve close to zero mean square error across 10 random trials. First we show that our algorithm does not require a large number of samples when the matrices are not degenerate. In particular, we generate random orthonormal matrices A and W as the ground truth, and use our algorithm to learn the neural network. As illustrated by Figure 2, regardless of the size of W and A our algorithm is able to recover both weight matrices with minimal error so long as the number of samples is a few times of the number of parameters. To measure the error in recovering A and W, we first normalize the columns of A and rows of W for both our learned parameters and the ground truth, pair corresponding columns and rows together, and then compute the squared distance between learned and ground truth parameters. Note in the rightmost plot of Figure 2, in order to compare the performance between different dimensions, we further normalize the recovering error by the dimension of W and A. It shows that the squared root of normalized error remains stable as the dimension of A and W grows from 10 to 32. In Figure 2, we also show the overall mean square error-averaged over all output units-achieved by our learned parameters. 
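For reference, the un-modified pipeline that these experiments build on can be condensed into a short script. The sketch below is an illustration rather than the implementation used for the figures: it takes the pure-neuron detector f(u) as a given callable built from the empirical moments described in Section 3.2 (its exact expression is not restated here), fixes one concrete convention for vec*/mat*, and then follows the steps of Algorithm 2: linearize f, take the bottom singular vectors of the resulting matrix T (Lemma 4), extract the z_i's by simultaneous diagonalization (Lemma 5), fix signs, and solve one single-layer problem per output.

```python
import numpy as np
from itertools import combinations_with_replacement

def index_pairs(k):
    return list(combinations_with_replacement(range(k), 2))        # (i, j) with i <= j

def mat_star(v, k):
    """One concrete mat* convention: rebuild a symmetric k x k matrix from its i <= j entries."""
    M = np.zeros((k, k))
    for idx, (i, j) in enumerate(index_pairs(k)):
        M[i, j] = M[j, i] = v[idx]
    return M

def linearize(f, k):
    """Build T with T @ phi(u) = f(u), where phi(u) = (u_i u_j)_{i<=j}, by polarization:
    evaluating the quadratic map f on e_i and e_i + e_j determines every column of T."""
    E = np.eye(k)
    f_basis = [f(E[i]) for i in range(k)]
    cols = [f_basis[i] if i == j else f(E[i] + E[j]) - f_basis[i] - f_basis[j]
            for i, j in index_pairs(k)]
    return np.stack(cols, axis=1)                                   # shape (d^2, k(k+1)/2)

def learn_two_layer(X, Y, f, seed=0):
    """Sketch of Algorithm 2 for y = A sigma(W x): returns (V, A_hat) with rows v_i ~ lambda_i w_i.
    X: (n, d) symmetric inputs, Y: (n, k) outputs, f: pure-neuron detector u -> R^{d^2}."""
    n, k = Y.shape
    rng = np.random.default_rng(seed)
    # Null space of the linearized detector spans {vec*(z_i z_i^T)} (Lemma 4).
    T = linearize(f, k)
    S = np.linalg.svd(T)[2][-k:].T                                  # bottom-k right singular vectors
    # Simultaneous diagonalization: the z_i's are the eigenvectors of Xm Ym^{-1} (Lemma 5).
    Xm = mat_star(S @ rng.standard_normal(k), k)
    Ym = mat_star(S @ rng.standard_normal(k), k)
    Z = np.real(np.linalg.eig(Xm @ np.linalg.inv(Ym))[1]).T         # rows are candidate z_i's
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    Z *= np.sign(Z @ Y.mean(axis=0))[:, None]                       # flip signs so E[z_i^T y] > 0
    # Each (x, z_i^T y) is now a single-layer problem; apply the Algorithm 1 estimator.
    M2 = X.T @ X / n
    V = np.stack([2.0 * np.linalg.solve(M2, X.T @ (Y @ z) / n) for z in Z])
    return V, np.linalg.inv(Z)                                      # network x -> A_hat sigma(V x)
```

The modifications described above (drawing many random elements of the span and running alternating least squares, and refitting W by gradient descent once A is fixed) replace the last two steps of this sketch when robustness to limited samples matters.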
FIG0 demonstrates the robustness of our algorithm to label noise ξ for Gaussian and symmetric mixture of Gaussians input distributions. In this experiment, we fix the size of both A and W to be 10 × 10 and again generate both parameters as random orthonormal matrices. The overall mean square error achieved by our algorithm grows almost perfectly in step with the amount of label noise, Published as a conference paper at ICLR 2019: Error in recovering W, A and outputs ("MSE"), on the left for different levels of conditioning of W and on the right for A. Each point is the of averaging across five trials with 20,000 training samples, where for each trial one parameter is drawn as a random orthonormal matrix while the other as described in Section 4.3. The input distribution is a mixture of Gaussians with two components, one based at the all-ones vector and the other at its reflection.indicating that our algorithm recovers the globally optimal solution regardless of the choice of input distribution. We've already shown that our algorithm continues to perform well across a range of input distributions and even when A and W are high-dimensional. In all previous experiments however, we sampled A and W as random orthonormal matrices so as to control for their conditioning. In this experiment, we take the input distribution to be a random symmetric mixture of two Gaussians and vary the condition number of either A or W by sampling singular value decompositions U ΣV such that U and V are random orthonormal matrices and Σ ii = λ −i, where λ is chosen based on the desired condition number. FIG1 respectively demonstrate that the performance of our algorithm remains steady so long as A and W are reasonably well-conditioned before eventually fluctuating. Moreover, even with these fluctuations the algorithm still recovers A and W with sufficient accuracy to keep the overall mean square error low. Optimizing the parameters of a neural network is a difficult problem, especially since the objective function depends on the input distribution which is often unknown and can be very complicated. In this paper, we design a new algorithm using method-of-moments and spectral techniques to avoid the Published as a conference paper at ICLR 2019 complicated non-convex optimization for neural networks. Our algorithm can learn a network that is of similar complexity as the previous works, while allowing much more general input distributions. There are still many open problems. The current requires output to have the same (or higher) dimension than the hidden layer, and the hidden layer does not have a bias term. Removing these constraints are are immediate directions for future work. Besides the obvious ones of extending our to more general distributions and more complicated networks, we are also interested in the relations to optimization landscape for neural networks. In particular, our algorithm shows there is a way to find the global optimal network in polynomial time, does that imply anything about the optimization landscape of the standard objective functions for learning such a neural network, or does it imply there exists an alternative objective function that does not have any local minima? We hope this work can lead to new insights for optimizing a neural network. In this section, we first provide the missing proofs for the lemmas appeared in Section 3. Then we discuss how to handle the noise case (i.e. y = σ(W x) + ξ) and give the corresponding algorithm (Algorithm 3). 
At the end we also briefly discuss how to handle the case when the matrix A has more rows than columns (more outputs than hidden units).Again, throughout the section when we write E[·], the expectation is taken over the randomness of x ∼ D and noise ξ. Single-layer: To get rid of the non-linearities like ReLU, we use the property of the symmetric distribution (similar to ). Here we provide a more general version (Lemma 6) instead of proving the specific Lemma 1. Note that Lemma 1 is the special case when a = w and p = q = 1 (here ξ does not affect the since it has zero mean and is independent with x, thus DISPLAYFORM0 Lemma 6. Suppose input x comes from a symmetric distribution, for any vector a ∈ R d and any non-negative integers p and q satisfying that p + q is an even number, we have DISPLAYFORM1 where the expectation is taken over the input distribution. Proof. Since input x comes from a symmetric distribution, we know that E σ(a x) DISPLAYFORM2 There are two cases to consider: p and q are both even numbers or both odd numbers.1. For the case where p and q are even numbers, we have DISPLAYFORM3 2. For the other case where p and q are odd numbers, we have DISPLAYFORM4 14 Published as a conference paper at ICLR 2019Pure neuron detector: The first step in our algorithm is to construct a pure neuron detector based on Lemma 2 and Lemma 3. We will provide proofs for these two lemmas here. Proof of Lemma 2. This proof easily follows from Lemma 6. Setting a = w, p = 2 and q = 0 in Lemma 6, we have DISPLAYFORM5 Proof of Lemma 3. Here, we only prove the second equation, since the first equation is just a special case of the second equation. First, we rewriteŷ = k i=1 c i σ(w i x) = u y by letting u A = c. Then we transform these two terms in the LHS as follows. Let's look at DISPLAYFORM6 where the second equality holds due to Lemma 6. Now, let's look at the second term E (u y) DISPLAYFORM7 Published as a conference paper at ICLR 2019Now, we subtract by to obtain DISPLAYFORM0 where uses FORMULA34 of the following Lemma 7, and uses the definition of distinguishing matrix N (Definition 1). Lemma 7. Given input x coming from a symmetric distribution, for any vector a, b ∈ R d, we have DISPLAYFORM1 where the expectation is taken over the input distribution. Proof. Here we just prove the first identity, because the proof of the second one is almost identical. First, we rewrite DISPLAYFORM2 Thus, we only need to show that DISPLAYFORM3 When a xb x > 0, we know a x and b x are both positive or both negative. In either case, we DISPLAYFORM4 which finished our proof. Finding span: Now, we find the span of {vec DISPLAYFORM5 Published as a conference paper at ICLR 2019It is not hard to verify that u y is a pure neuron if and only if f (u) = 0. Note that f (u) = 0 is a system of quadratic equations. So we linearize it by increasing the dimension (i.e., consider u i u j as a single variable) similar to the FOOBI algorithm. Thus the number of variable is DISPLAYFORM6 Now, we prove the Lemma 4 which shows the null space of T is exactly the span of {vec * (z i z i)}.Proof of Lemma 4. We divide the proof to the following two cases:1. For any vector vec * (U) belongs to the null space of T, we have T vec * (U) = 0. Note that the RHS of equals to 0 if and only if A U A is a diagonal matrix since the distinguishing matrix N is full column rank and A U A is symmetric. Thus vec * (U) belongs to the span of {vec * (z i z i)} since U = Z DZ for some diagonal matrix D.2. 
For any vector vec * (U) belonging to the span of {vec DISPLAYFORM7 Note that A z i only has one non-zero entry due to the definition of z i, for any i ∈ [k]. Thus all coefficients in the RHS of are 0. We get T vec * (U) = 0.Finding z i's: Now, we prove the final Lemma 5 which finds all z i's from the span of {vec * (z i z i)} by using simultaneous diagonalization. Given all z i's, this two-layer network can be reduced to a single-layer one. Then one can use Algorithm 1 to recover the first layer parameters w i's. Proof of Lemma 5. As we discussed before this lemma, we have Proof. First, we know there exist diagonal matrices DISPLAYFORM8 DISPLAYFORM9 } are the k-least right singular vectors of T (see Line 5 of Algorithm 2). Then, let the vector d i ∈ R k be the diagonal elements of D i, for all i ∈ k. Let matrix Q ∈ R k×k be a matrix where its i-th column is DISPLAYFORM10, where ζ 1 and ζ 2 are two random k-dimensional standard Gaussian vectors (see Line 7 of Algorithm 2). DISPLAYFORM11 Thus, Q has full rank and none of its rows are zero vectors. Let i-th row of Q be q i. Let's consider D X first. In order for i-th diagonal element of D X to be zero, we need q i ζ 1 = 0. Since q i is not a zero vector, we know the solution space of q i ζ 1 = 0 is a lower-dimension manifold in R k, which has zero measure. Since finite union of zero-measure sets still has measure zero, the event that zero valued elements exist in the diagonal of D X or D Y happens with probability zero. If i-th and j-th diagonal elements of DISPLAYFORM12 −1. Again, we know the solution space is a lower-dimensional manifold in R 2k space, with measure zero. Since finite union of zero-measure sets still has measure zero, the event that duplicated diagonal elements exist in DISPLAYFORM13 Y happens with probability zero. A.2 NOISY CASE Now, we discuss how to handle the noisy case (i.e. y = σ(W x) + ξ). The corresponding algorithm is described in Algorithm 3. Note that the noise ξ only affects the first two steps, i.e., pure neuron detector (Lemma 3) and finding span of vec * (z i z i) (Lemma 4). It does not affect the last two steps, i.e., finding z i's from the span (Lemma 5) and learning the reduced single-layer network. Because Published as a conference paper at ICLR 2019 Algorithm 3 Learning Two-layer Neural Networks with Noise Input: Samples (x 1, y 1),..., (x n, y n) generated according to Equation Output: Weight matrices W and A. 1: {Finding span of vec DISPLAYFORM14 where each entry is expressed as a degree-2 polynomial over u. DISPLAYFORM15, where ζ 1 and ζ 2 are two independently sampled kdimensional standard Gaussian vectors. 8: Let z 1, ..., z k be eigenvectors of XY −1 .9: For each z i, use the second half of samples {( DISPLAYFORM16 {Reduce to 1-Layer Problem} 11: For each z i, let v i be the output of Algorithm 1 with input {(x j, z i y j)} n j=n/2+1. 12: Let Z be the matrix whose rows are z i's, V be the matrix whose rows are v i's. DISPLAYFORM17 Lemma 5 is independent of the model and Lemma 1 is linear wrt. noise ξ, which has zero mean and is independent of input x. Many of the steps in Algorithm 3 are designed with the robustness of the algorithm in mind. For example, in step 5 for the exact case we just need to compute the null space of T. However if we use the empirical moments the null space might be perturbed so that it has small singular values. The separation of the input samples into two halves is also to avoid correlations between the steps, and is not necessary if we have the exact moments. 
Modification for pure neuron detector: Recall that in the noiseless case, our pure neuron detector contains a term E[(u y) 2 (x ⊗ x)], which causes a noise square term in the noisy case. Here, we modify our pure neuron detector to cancel the extra noise square term. In the following lemma, we state our modified pure neuron detector in Equation 11, and give it a characterization. Lemma 9. Suppose y = Aσ(W x) + ξ, for any u ∈ R k, we have DISPLAYFORM18 where N ij's are columns of the distinguishing matrix (Definition 1), and DISPLAYFORM19 We defer the proof of this lemma to the end of this section. Recall that the augmented distinguishing matrix M consists of the distinguishing matrix N plus column E[x ⊗ x]. Now, we need to assume the augmented distinguishing matrix M is full rank. Modification for finding span: For Lemma 4, as we discussed above, here we assume the augmented distinguishing matrix M is full rank. The corresponding lemma is stated as follows (the proof is exactly the same as previous Lemma 4):Published as a conference paper at ICLR 2019 11 ), suppose the augmented distinguishing matrix is full rank, then the nullspace of T is exactly the span of {vec * (z i z i)}. DISPLAYFORM20 Similar to Theorem 4, we provide the following theorem for the noisy case. The proof is almost the same as Theorem 4 by using the noisy version lemmas (Lemmas 9 and 10).Theorem 5. Suppose E[xx], A, W and the augmented distinguishing matrix M are all full rank, and Algorithm 3 has access to the exact moments, then the network returned by the algorithm computes exactly the same function as the original neural network. Now, we only need to prove Lemma 9 to finish this noise case. Proof of Lemma 9. Similar to and FORMULA32, we deduce these three terms in RHS of one by one as follows. For the first term, it is exactly the same as since the expectation is linear wrt. ξ. Thus, we have DISPLAYFORM21 Now, let's look at the second term E (u y) 2 (x ⊗ x) which is slightly different from due to the noise ξ. Particularly, we add the third term to cancel this extra noise square term later. DISPLAYFORM22 where FORMULA0 uses.For the third term, we have DISPLAYFORM23 where the third equality holds due to Lemma 6 and Lemma 7, and uses the definition of m ij. Published as a conference paper at ICLR 2019Algorithm 4 Learning Two-layer Neural Networks with Non-square A Input: Samples (x 1, y 1),..., (x n, y n) generated according to Equation. Output: Weight matrices W ∈ R k×d and A ∈ l×k.1: Using half samples (i.e. {(x i, y i)} n/2 i=1 ) to estimate empirical momentsÊ[yx]. 2: Let P be a l × k matrix, which columns are left singular vectors ofÊ [yx]. 3: Run Algorithm 3 on samples {(x i, P y i)} n i=n/2. Let the output of Algorithm 3 be V, Z −1.4: return V, P Z −1.Finally, we combine these three terms as follows: DISPLAYFORM0 where FORMULA0 uses FORMULA34 (same as ). In this paper, for simplicity, we have assumed that the dimension of output equals the number of hidden units and thus A is a k × k square matrix. Actually, our algorithm can be easily extended to the case where the dimension of output is at least the number of hidden units. In this section, we give an algorithm for this general case, by reducing it to the case where A is square. The pseudo-code is given in Algorithm 4. Theorem 6. 
Suppose E[xx], W, A and the augmented distinguishing matrix M are all full rank, and Algorithm 4 has access to the exact moments, then the network returned by the algorithm computes exactly the same function as the original neural network. Proof. Let the ground truth parameters be A ∈ R l×k and W ∈ R k×d. The samples are generated by y = Aσ(W x)+ξ, where the noise ξ is independent with input x. We have DISPLAYFORM0 Since both W and E[xx] are full-rank, we know the column span of E[yx] are exactly the column span of A. Furthermore, we know the columns of P is a set of orthonormal basis for the column span of A.For a ground truth neural network with weight matrices W and P A, the generated sample will just be (x, P y). According to Theorem 5, we know for any input x, we have Z −1 σ(V x) = P Aσ(W x). Thus, we have DISPLAYFORM1 where the second equality holds since P P is just the projection matrix to the column span of A. In this section we will show that even if we do not have access to the exact moments, as long as the empirical moments are estimated with enough (polynomially many) samples, Algorithm 2 and Published as a conference paper at ICLR 2019Algorithm 3 can still learn the parameters robustly. We will focus on Algorithm 3 as it is more general, the for Algorithm 2 can be viewed as a corollary when the noise ξ = 0. Throughout this section, we will useV,Ẑ −1 to denote the of Algorithm 3 with empirical moments, and use V, Z −1 for the when the algorithm has access to exact moments, similarly for other intermediate . For the robustness of Algorithm 3, we prove the following theorem. Theorem 7. Assume that the norms of x, ξ, A are bounded by x ≤ Γ, ξ ≤ P 2, A ≤ P 1, the covariance matrix and the weight matrix are robustly full rank: DISPLAYFORM0 Further assume that the augmented distinguishing matrix has smallest singular values σ min (M) ≥ α. For any small enough, for any δ < 1, given poly Γ, P 1, P 2, d, 1/, 1/γ, 1/α, 1/β, 1/δ number of i.i.d. samples, let the output of Algorithm 3 beV,Ẑ −1, we know with probability at least 1 − δ, DISPLAYFORM1 for any input x. In order to prove the above Theorem, we need to show that each step of Algorithm 3 is robust. We can divide Algorithm 3 into three steps: finding the span of vec * (z i z i)'s; finding z i's from the span of vec * (z i z i)'s; recovering first layer using Algorithm 1. We will first state the key lemmas that prove every step is robust to noise, and finally combine them to show our main theorem. First, we show that with polynomial number of samples, we can approximate the span of vec * (z i z i)'s in arbitrary accuracy. LetT be the empirical estimate of T, which is the pure neuron detector matrix as defined in Algorithm 3. As shown in Lemma 10, the null space of T is exactly the span of vec * (z i z i)'s. We use standard matrix perturbation theory (see Section D.2) to show that the null space of T is robust to small perturbations. More precisely, in Lemma 11, we show that with polynomial number of samples, the span of k least singular vectors ofT is close to the null space of T.Lemma 11. Under the same assumptions as in Theorem 7, let S ∈ R (k2+k)×k be the matrix whose k columns are the k least right singular vectors of T. Similarly defineŜ ∈ R (k2+k)×k for empirical estimateT. Then for any ≤ γ/2, for any δ < 1, given O(DISPLAYFORM2) number of i.i.d. samples, we know with probability at least 1 − δ, DISPLAYFORM3 The proof of the above lemma is in Section B.1. 
Basically, we need to lowerbound the spectral gap (k 2 -th singular value of T) and to upperbound the Frobenius norm of T −T. Standard matrix perturbation bound shows that if the perturbation is much smaller than the spectral gap, then the null space is preserved. Next, we show that we can robustly find z i's from the span of vec * (z i z i)'s. Since this step of the algorithm is the same as the simultaneous diagonalization algorithm for tensor decompositions, we use the robustness of simultaneous diagonalization BID1 to show that we can find z i's robustly. The detailed proof is in Section B.2. DISPLAYFORM4, where ζ 1 and ζ 2 are two independent standard Gaussian vectors. Let z 1, · · · z k be the normalized row vectors of A −1. Let z 1,...,ẑ k be the eigenvectors ofXŶ −1 (after sign flip). For any δ > 0 and small enough, with DISPLAYFORM5, with probability at least 1 − δ over the randomness of ζ 1, ζ 2 and i.i.d. samples, there exists a permutation π(i) ∈ [k] such that DISPLAYFORM6 Finally, givenẑ i's, the problem reduces to a one-layer problem. We will first give an analysis for Algorithm 1 as a warm-up. When we call Algorithm 1 from Algorithm 3, the situation is slightly different. Note we reserve fresh samples for this step, so that the samples used by Algorithm 1 are Published as a conference paper at ICLR 2019 still independent with the estimateẑ i (learned using the other set of samples). However, sinceẑ i is not equal to z i, this introduces an additional error term (ẑ i − z i) y which is not independent of x and cannot be captured by ξ. We modify the proof for Algorithm 1 to show that the algorithm is still robust as long as ẑ i − z i is small enough. Lemma 13. Assume that x ≤ Γ, A ≤ P 1, ξ ≤ P 2 and σ min (E[xx]) ≥ γ. Suppose that for each 1 ≤ i ≤ k, ẑ i − z i ≤ τ. Then for any ≤ γ/2 and δ < 1, given O(DISPLAYFORM7) number of samples for Algorithm 1, we know with probability at least 1 − δ, DISPLAYFORM8 Combining the above three lemmas, we prove Theorem 7 in Section B.4. DISPLAYFORM9 We first prove that the step of finding the span of {vec * (z i z j)} is robust. The main idea is based on standard matrix perturbation bounds (see Section D.2). We first give a lowerbound on the k 2 -th singular value of T, giving a spectral gap between the smallest non-zero singular value and the null space. See the lemma below. The proof is given in Section B.1.1. Lemma 14. Suppose σ min (M) ≥ α, σ min (A) ≥ β, we know that matrix T has rank k 2 and the k 2 -th singular value of T is lower bounded by αβ 2.Then we show that with enough samples the estimateT is close enough to T, so Wedin's Theorem (Lemma 25) implies the subspace found is also close to the true nullspace of T. The proof is deferred to Section B.1.2. Lemma 15. Assume that x ≤ Γ, A ≤ P 1, ξ ≤ P 2 and σ min (E[xx]) ≥ γ > 0, then for any DISPLAYFORM10 ) number of i.i.d. samples, we know T − T F ≤, with probability at least 1 − δ. Finally we combine the above two lemmas and show that the span of the least k right singular vectors ofT is close to the null space of T.Lemma 11. Under the same assumptions as in Theorem 7, let S ∈ R (k2+k)×k be the matrix whose k columns are the k least right singular vectors of T. Similarly defineŜ ∈ R (k2+k)×k for empirical estimateT. Then for any ≤ γ/2, for any δ < 1, given O(DISPLAYFORM11) number of i.i.d. samples, we know with probability at least 1 − δ, DISPLAYFORM12 Proof. According to Lemma 15, given O(DISPLAYFORM13) number of i.i.d. 
samples, we know with probability at least 1 − δ, DISPLAYFORM14 According to Lemma 14, we know σ k2 (T) ≥ αβ 2. Then, due to Lemma 27, we have DISPLAYFORM15 Published as a conference paper at ICLR 2019 DISPLAYFORM0 Figure 5: Characterize T as the product of four matrices. In order to lowerbound the k 2 -th singular value of T, we first express T as the product of four simpler matrices, T = M BCF, as illustrated in Figure 5. The definitions of these four matrices DISPLAYFORM0 will be introduced later as we explain their effects. From Lemma 9, we know that DISPLAYFORM1 for any symmetric k × k matrix U. For convenience, we first use matrix F to transform vec * (U) to vec(U), which has k 2 dimensions. Matrix F is defined such that F vec * (U) = vec(U), for any k × k symmetric matrix U. Note that this is very easy as we just need to duplicate all the non-diagonal entries. Second, we hope to get the coefficients (A U A) ij's. Notice that DISPLAYFORM0 Since we only care about the elements of A U A at the ij-th position for 1 ≤ i < j ≤ k, we just pick corresponding rows of A ⊗ A to construct our matrix C, which has dimension k 2 × k 2.The first matrix M is the augmented distinguishing matrix (see Definition 1). In order to better understand the reason that we need matrix B, let's first re-write T vec * (U) in the following way: DISPLAYFORM1 Thus, T vec * (U) is just a linear combination of (M ij − m ij E[x ⊗ x])'s with coefficients equal to (A U A) ij' s. We have already expressed coefficients (A U A) ij's using CF vec * (U). Now, we just need to use matrix B to transform the augmented distinguishing matrix M to a d 2 × k 2 matrix, with each column equal to (M ij − m ij E[x ⊗ x]). In order to achieve this, the first k 2 rows of B is just the identity matrix I k2, and the last row of DISPLAYFORM2 With above characterization of T, we are ready to show that the k 2 -th singular value of T is lower bounded. Lemma 14. Suppose σ min (M) ≥ α, σ min (A) ≥ β, we know that matrix T has rank k 2 and the k 2 -th singular value of T is lower bounded by αβ 2.Proof. Since matrix C has dimension k 2 × k 2, it's clear that the rank of T is at most k 2. We first prove that the rank of T is exactly k 2.Since the first k 2 rows of B constitute the identity matrix I k2, we know B is a full-column rank matrix with rank equal to k 2. We also know that matrix M is a full column rank matrix with rank k 2 + 1. Thus, the product matrix M B is still a full-column rank matrix with rank k 2. If we can prove that the product matrix CF has full-row rank equal to k 2. It's clear that T = M BCF also has rank k 2. Next, we prove that CF has full-row rank. Published as a conference paper at ICLR 2019 Since σ min (A) ≥ β, we know A ⊗ A is full rank, and a subset of its rows C has full row rank. For the sake of contradiction, suppose that there exists non-zero vector a ∈ R k2, such that k2 l=1 a l (CF) [l,:] DISPLAYFORM0 Since C consists of a subset of rows of A ⊗ A, we know C [l,ij] = C [l,ji] for any l and any i < j. Thus, k2 l=1 a l (CF) [l,:] = 0 simply implies k2 l=1 a l C [l,:] = 0, which breaks the fact that C is full-row rank. Thus, the assumption is false and CF has full-row rank. Now, let's prove that the k 2 -th singular value of T is lower bounded. We first show that in the product characterization of T, the smallest singular value of each individual matrix is lower bounded. According to the assumption, we know the smallest singular value of M is lower bounded by α. 
Since the first k 2 rows of matrix B constitute a k 2 × k 2 identity matrix, we know DISPLAYFORM1 where u is any k 2 -dimensional vector. Since σ min (A) ≥ β, we know σ min (A ⊗ A) ≥ β 2. According to the construction of C, we know C consists a subset of rows of A ⊗ A. Denote the indices of the row not picked as S. We have DISPLAYFORM2 where u has dimension k 2 and v has dimension k 2.We lowerbound the smallest singular value of CF by showing that σ min (CF) ≥ σ min (C). For any unit vector u ∈ R k2, we know DISPLAYFORM3 ji for any i < j. We also know [u C] ij = [u C] ji for i < j. Thus, we know for any unit vector u, u CF ≥ u C, which implies σ min (CF) ≥ σ min (C).Finally, since in the beginning we have proved that matrix T has rank k 2, the k 2 -th singular value is exactly the smallest non-zero singular value of T. Denote the smallest non-zero singular of T as σ + min (T), we have DISPLAYFORM4 where the first inequality holds because both M and B has full column rank. In this section, we prove that given polynomial number of samples, T − T F is small with high probability. We do this by standard matrix concentration inequalities. Note that our requirements on the norm of x is just for convenience, and the same proof works as long as x has reasonable tail-behavior (e.g. sub-Gaussian). Lemma 15. Assume that x ≤ Γ, A ≤ P 1, ξ ≤ P 2 and σ min (E[xx]) ≥ γ > 0, then for any ≤ γ/2, for any 1 > δ > 0, given O(DISPLAYFORM0) number of i.i.d. samples, we know T − T F ≤, with probability at least 1 − δ. Proof. In order to get an upper bound for T − T F, we first show that T − T 2 is upper bounded. We know DISPLAYFORM0, according to the definition of T, we know DISPLAYFORM1 wheref (u) =T vec * (uu) and the fourth inequality uses the Cauchy-Schwarz inequality. Next, we only need to upper bound max u: u ≤2 f (u) −f (u). Recall that DISPLAYFORM2 Notice that DISPLAYFORM3 We first show that given polynomial number of samples, DISPLAYFORM4 is upper bounded with high probability. Since each row of W has unit norm, we have W ≤ √ k. Due to the assumption that x ≤ Γ, A ≤ P 1, ξ ≤ P 2, we have DISPLAYFORM5 According to Lemma 24, we know given O(DISPLAYFORM0 2) number of samples, DISPLAYFORM1 with probability at least 1 − δ. Similarly, we can show that given O(DISPLAYFORM2 2) number of samples, DISPLAYFORM3 with probability at least 1 − δ. Since xx ≤ Γ 2, we know that given O(DISPLAYFORM4 2) number of samples, DISPLAYFORM5 with probability at least 1 − δ. Suppose that ≤ γ/2 ≤ σ min (E[xx])/2, we knowÊ[xx] has full rank. According to Lemma 29, we have DISPLAYFORM6 with probability at least 1 − δ. By union bound, we know for any < γ/2, given O(DISPLAYFORM7 2) number of samples, with probability at least 1 − δ, we have DISPLAYFORM8 Then, we have DISPLAYFORM9 Thus, given O(DISPLAYFORM10) number of samples, we know DISPLAYFORM11 Published as a conference paper at ICLR 2019 with probability at least 1 − δ. 2, according to Lemma 24, we know given DISPLAYFORM0 with probability at least 1 − δ. Next, let's look at the third term DISPLAYFORM1 Again, using Lemma 24 and union bound, we know given O(DISPLAYFORM2) number of samples, we have DISPLAYFORM3 Thus, Define DISPLAYFORM4 Then, we have DISPLAYFORM5 Thus, we know that given O(DISPLAYFORM6) number of samples, we know DISPLAYFORM7 with probability at least 1 − δ. Now, let's bound the last term, DISPLAYFORM8 Similar as the first term, we can show that given O(DISPLAYFORM9) number of samples, we have DISPLAYFORM10 with probability at least 1 − δ. 
Now, we are ready to combine our bound for each of four terms. By union bound, we know given DISPLAYFORM11 ) number of samples, DISPLAYFORM12 Published as a conference paper at ICLR 2019hold with probability at least 1 − δ. Thus, we know DISPLAYFORM13 with probability at least 1 − δ. where the second inequality holds since DISPLAYFORM0 Thus, we know given O(DISPLAYFORM1) number of samples, DISPLAYFORM2 with probability at least 1 − δ. Thus, given O(DISPLAYFORM3) number of samples, T − T F ≤ with probability at least 1 − δ. In this section, we will show that the simultaneous diagonalization step in our algorithm is robust. Let S andŜ be two (k 2 + k) by k matrices, whose columns consist of the least k right singular vectors of T andT respectively. According to Lemma 11, we know with polynomial number of samples, the Frobenius norm of SS −ŜŜ ⊥ is well bounded. However, due to the rotation issue of subspace basis, we cannot conclude that S −Ŝ F is small. Only after appropriate alignment, the difference between S andŜ becomes small. Lemma 16. Let S andŜ be two (k 2 + k) by k matrices, whose columns consist of the least k right singular vectors of T andT respectively. If SS −ŜŜ F ≤, there exists an rotation matrix R ∈ R k×k satisfying RR = R R = I k, such that DISPLAYFORM0 Proof. Since S has orthonormal columns, we have σ k (SS) = 1. Then, according to Lemma 35, we know there exists rotation matrix R such that DISPLAYFORM1 Let the k columns of S be vec DISPLAYFORM2, where D i is a diagonal matrix. Let Q be a k × k matrix, whose i-th column consists of the diagonal elements of D i, such that Q ij equals the j-th diagonal element of D i. Let vec * (X) = SRζ 1, vec * (Y) = SRζ 2, where R is the rotation matrix in Lemma 16 and ζ 1, ζ 2 are two independent standard Gaussian vectors. Let D X = diag(QRζ 1) and D Y = diag(QRζ 2). It's not hard to check that DISPLAYFORM3 Y are well separated. Lemma 17. Assume that A ≤ P 1, σ min (A) ≥ β. Then for any δ > 0 we know with probability at least 1 − δ, we have sep(DISPLAYFORM4 Proof. We first show that matrix Q is well-conditioned. Since DISPLAYFORM0 . Let U be a k 2 ×k matrix whose columns consist of vec(U i)'s. Also defineQ as a k 2 × k matrix whose columns are vec(D i)'s. Note that matrixQ only has k non-zero rows, which are exactly matrix Q. With the above definition, we have DISPLAYFORM1 Notice that a subset of rows of U constitute matrix S, which is an orthonormal matrix. Thus, we have σ min (U) ≥ σ min (S) = 1. Since we assume σ min (A) ≥ β, we have DISPLAYFORM2 Thus, we have σ min (Q) ≥ β 2, which implies σ min (Q) ≥ β 2.We also know DISPLAYFORM3.Since S = 1, we know U ≤ √ 2. For the smallest singular value of A − ⊗ A −, we have DISPLAYFORM4 Thus, we have Q ≤ √ 2P DISPLAYFORM5 By properties of Gaussians, we know q ⊥ i,j, Rζ 1 is independent of q j, Rζ 1, so we can first fix q j, Rζ 1 and apply anti-concentration of Gaussians (see Lemma 38) to q ⊥ i,j, Rζ 1. As a we know with probability at least 1 − δ/k 2: DISPLAYFORM6 By union bound, we know with probability at least 1 − δ, DISPLAYFORM7 LetX =Ŝζ 1 andŶ =Ŝζ 2. Next, we prove that the eigenvectors ofXŶ −1 are close to the eigenvectors of XY −1.Published as a conference paper at ICLR 2019 DISPLAYFORM8, where ζ 1 and ζ 2 are two independent standard Gaussian vectors. Let z 1, · · · z k be the normalized row vectors of A −1. Let z 1,...,ẑ k be the eigenvectors ofXŶ −1 (after sign flip). 
For any δ > 0 and small enough, with DISPLAYFORM9, with probability at least 1 − δ over the randomness of ζ 1, ζ 2 and i.i.d. samples, there exists a permutation DISPLAYFORM10 Proof. Let z 1, · · ·, z k be the eigenvectors of XY −1 (before sign flip step). Similarly definê z 1, · · ·,ẑ k forXŶ −1. We first prove that the eigenvectors ofXŶ −1 are close to the eigenvectors of XY −1. DISPLAYFORM11 where DISPLAYFORM12 According to Lemma 34, we have DISPLAYFORM13. In order to bound the perturbation matrices F and G, we need to first bound E X, E Y and σ min (Y), σ min (Ŷ).As we know, E X =X − X = (Ŝ − SR)ζ 1. According to Lemma 16, we have Ŝ − SR ≤ Ŝ − SR F ≤ 2. We also know with probability at least 1 DISPLAYFORM14 Similarly, with probability at least 1 DISPLAYFORM15 Now, we lower bound the smallest singular value of X and Y. Since DISPLAYFORM16 Since D X is a diagonal matrix, its smallest singular value equals the smallest absolute value of its diagonal element. Recall each diagonal element of D X is q i, Rζ 1, which follows a Gaussian distribution whose standard deviation is at least q i ≥ β 2. By anti-concentration property of Gaussian (see Lemma 38), we know | q i, Rζ 2 | ≥ Ω(δβ 2 /k) for all i with probability 1 − δ/4. Thus, we have σ min (X) ≥ poly(1/d, 1/P 1, β, δ). Similarly we have the same for Y. DISPLAYFORM17 Thus, for small enough, we have F ≤ poly(d, P 1, 1/β,, 1/δ) and G ≤ poly(d, P 1, 1/β,, 1/δ). In order to apply Lemma 33, we also need to bound κ(A −) and DISPLAYFORM18 where the second inequality holds because σ min (Y) ≥ poly(1/d, 1/P 1, β, δ). Recall that X = mat * (SRζ 1). It's not hard to verify that with probability at least 1 − exp(−d Ω ), we have X ≤ poly(d). Thus, we know XY −1 ≤ poly(d, P 1, 1/β, 1/δ). Similarly, we can also prove that DISPLAYFORM19 Published as a conference paper at ICLR 2019According to Lemma 17, we know with probability at least 1 − δ/4, sep(DISPLAYFORM0 . Thus, by union bound, we know for small enough, with probability at least DISPLAYFORM1 DISPLAYFORM2 with probability at least 1 − δ. According to Lemma 5, the eigenvectors of XY −1 (after sign flip) are exactly the normalized rows of A −1 (up to permutation). Now the only issue is the sign ofẑ i. By the robustness of sign flip step (see Lemma 18), we know for small enough, with O(DISPLAYFORM3, with probability at least 1 − δ, the sign flip ofẑ i is consistent with the sign flip of z i .In the following lemma, we show that the sign flip step ofẑ i is robust. DISPLAYFORM4 . We know, for any δ < 1, with O( DISPLAYFORM5 Proof. We first show that E[ z i, y] is bounded away from zero. Let Z be a k × k matrix, whose rows are {z i}. Without loss of generality, assume that Z = diag(±λ)A −1. Since A −1 ≤ 1/β, we know λ i ≥ β, for each i. Thus, we have DISPLAYFORM6 Published as a conference paper at ICLR 2019Note that we reserve fresh samples for this step, thusẑ i is independent with samples inÊ[ẑ i, y]]. DISPLAYFORM7 ) number of samples, we have DISPLAYFORM8. Combined with the fact that DISPLAYFORM9, with O(DISPLAYFORM10 We will first show that Algorithm 1 is robust. DISPLAYFORM0) number of i.i.d. samples, we know with probability at least 1 − δ, ŵ − w ≤, whereŵ is the learned weight vector. Proof. We first show that given polynomial number of i. DISPLAYFORM1 is upper bounded with high probability, whereÊ[yx] is the empirical estimate of E [yx]. 
Due to the assumption that x ≤ Γ, w ≤ 1, ξ ≤ P 2, we have DISPLAYFORM2 According to Lemma 24, we know given O(DISPLAYFORM3 2) number of samples, with probability at least 1 − δ, we have DISPLAYFORM4 Since xx ≤ Γ 2, we know that given O(DISPLAYFORM5 ≤ with probability at least 1 − δ. Suppose that ≤ γ/2 ≤ σ min (E[xx])/2, we knowÊ[xx] has full rank. According to Lemma 29, we have DISPLAYFORM6 with probability at least 1 − δ. By union bound, we know for any ≤ γ/2, given O(DISPLAYFORM7 2) number of samples, with probability at least 1 − δ, we have DISPLAYFORM8. Published as a conference paper at ICLR 2019 Then, we have DISPLAYFORM0 Thus, given O(DISPLAYFORM1) number of samples, with probability at least 1 − δ, we have ŵ − w ≤. Now let's go back to the call to Algorithm 1 in Algorithm 3. Let z i's be the normalized rows of A −1, and letẑ i's be the eigenvectors ofXŶ −1 (with correct sign). From Lemma 12, we know {ẑ i} are close to {z i} with permutation. Without loss of generality, we assume the permutation here is just an identity mapping, which means z i −ẑ i is small for each i. For each z i, let v i be the output of Algorithm 1 given infinite number of inputs (x, z i y). For eacĥ z i, letv i be the output of Algorithm 1 given only finite number of samples (x,ẑ i y). In this section, we show that suppose z i −ẑ i is bounded, with polynomial number of samples, v i −v i is also bounded. The input for Algorithm 1 is (x,ẑ i y). We viewẑ i y as the summation of z i y and a noise term (ẑ i − z i) y. Here, the issue is that the noise term (ẑ i − z i) y is not independent with the sample (x,ẑ i y), which makes the robust analysis in Theorem 8 not applicable. On the other hand, since we reserve a separate set of samples for Algorithm 1, the estimateẑ i is independent with the samples (x, y)'s used by Algorithm 1. Thus, the samples (x,ẑ i y)'s here are still i.i.d., which enables us to use matrix concentration bounds to show the robustness here. Lemma 13. Assume that x ≤ Γ, A ≤ P 1, ξ ≤ P 2 and σ min (E[xx]) ≥ γ. Suppose that for each 1 ≤ i ≤ k, ẑ i − z i ≤ τ. Then for any ≤ γ/2 and δ < 1, given O(DISPLAYFORM2) number of samples for Algorithm 1, we know with probability at least 1 − δ, DISPLAYFORM3 [ẑ i yx]. Thus, in order to bound v i −v i, we only need to show DISPLAYFORM4 are both bounded. The first term can be bounded as follows. DISPLAYFORM5 We can use standard matrix concentration bounds to upper bound the second term. By similar analysis of Theorem 1, we know given O(DISPLAYFORM6) number of i.i.d. samples, with probability at least 1 − δ, DISPLAYFORM7 Published as a conference paper at ICLR 2019 Overall, we have DISPLAYFORM0 By union bound, we know given O(DISPLAYFORM1) number of i.i.d. samples, with probability at least 1 − δ, DISPLAYFORM2 for any 1 ≤ i ≤ k. Proof of Theorem 7. Combining Lemma 11, Lemma 12 and Lemma 13, we know given poly Γ, P 1, P 2, d, 1/, 1/γ, 1/α, 1/β, 1/δ number of i.i.d. samples, with probability at least 1 − δ, DISPLAYFORM0 Let V be a k × d matrix whose rows are v i's. Similarly define matrixV forv i' s. Since v i −v i ≤ for any i, we know every row vector of V −V has norm at most, which implies V −V ≤ √ k.Let Z be a k × k matrix whose rows are z i' s. Similarly define matrixẐ forẑ i' s. Again, we have Z −Ẑ ≤ √ k. In order to show Z −1 −Ẑ −1 is small using standard matrix perturbation bounds (Lemma 29), we need to lower bound σ min (Z). Notice that Z is just matrix A −1 with normalized row vectors. 
As we know, σ min (A −1) ≥ 1/P 1, and A −1 ≤ 1/β, which implies that every row vector of A −1 has norm at most 1/β. Let D z be the diagonal matrix whose i, i-th entry is the norm of i-th row of DISPLAYFORM1 Then, according to Lemma 29, as long as ≤ β 2P1, we have DISPLAYFORM2 In order to bound V, we can bound the norm of its row vectors. We have, DISPLAYFORM3. Now we can bound Z −1 σ(V x)−Ẑ −1 σ(V x) as follows. DISPLAYFORM4 where the first inequality holds since DISPLAYFORM5 Thus, we know given poly Γ, P 1, P 2, d, 1/, 1/γ, 1/α, 1/β, 1/δ number of i.i.d. samples, with probability at least 1 − δ, DISPLAYFORM6 where the first equality holds because Aσ(W x) = Z −1 σ(V x), as shown in Theorem 5. In smoothed analysis, it's clear that after adding small Gaussian perturbations, matrix A and W will become robustly full rank with reasonable probability (Lemma 36). In this section, we will focus on the tricky part, using smoothed analysis framework to show that it is natural to assume the distinguishing matrix is robustly full rank. We will consider two settings. In the first case, the input distribution is the Gaussian distribution N (0, I d), and the weights for the first layer matrix W is perturbed by a small Gaussian noise. In this case we show that the augmented distinguishing matrix M has smallest singular value σ min (M) that depends polynomially on the dimension and the amount of perturbation. This shows that for the Gaussian input distribution, σ min (M) is lower bounded as long as W is in general position. In the second case, we will fix a full rank weight matrix W, and consider an arbitrary symmetric input distribution D. There is no standard way of perturbing a symmetric distribution, we give a simple perturbation D that can be arbitrarily close to D, and prove that σ min (M D) is lowerbounded. Perturbing W for Gaussian Input We first consider the case when the input follows standard Gaussian distribution N (0, I d). The weight matrix W is perturbed to W where DISPLAYFORM0 Here E ∈ R k×d is a random matrix whose entries are i.i.d. standard Gaussians. We will use M to denote the perturbed version of the augmented distinguishing matrix M. Recall that the columns of M has the form: DISPLAYFORM1 where w i is the i-th row of W. Also, since M is the augmented distinguishing matrix it has a final column M 0 = vec(I d). We show that the smallest singular value of M is lower bounded with high probability. Theorem 9. Suppose that k ≤ d/5, and the input follows standard Gaussian distribution N (0, I d).Given any weight matrix W with w i ≤ τ for each row vector, let W be a perturbed version of W according to Equation and M be the perturbed augmented distinguishing matrix. With probability at least 1 − exp(−d Ω ), we have DISPLAYFORM2 We will prove this Theorem in Section C.1.Perturbing the Input Distribution Our algorithm works for a general symmetric input distribution D. However, we cannot hope to get a like Theorem 9 for every symmetric input distribution D. As a simple example, if D is just concentrated on 0, then we do not get any information about weights and the problem is highly degenerate. Therefore, we must specify a way to perturb the input distribution. We define a perturbation that is parametrized by a random Gaussian matrix Q and a parameter λ ∈. The random matrix Q is used to generate a Gaussian distribution D Q with a random covariance matrix. To sample a point in D Q, first sample n ∼ N (0, I d), and then output Qn. 
We show that given any input distribution, after applying (Q, λ)-perturbation with a random Gaussian matrix Q, the smallest singular value of the augmented distinguishing matrix M D is lower bounded. Recall that M D is defined as DISPLAYFORM3 as the first k 2 columns and has E x∼D [x ⊗ x] as the last column. Theorem 10. Given weight matrix W with w i ≤ τ for each row vector and symmetric input distribution D. Suppose that k ≤ d/7 and σ min (W) ≥ ρ, after applying (Q, λ)-perturbations to yield perturbed input distribution D, where Q is a d × d matrix whose entries are i.i.d. Gaussians, we have with probability at least 1 − exp(−dΩ FORMULA0) over the randomness of Q, DISPLAYFORM4 We will prove this later in Section C.3. In this section, we will prove Theorem 9, as restated below: Theorem 9. Suppose that k ≤ d/5, and the input follows standard Gaussian distribution N (0, I d).Given any weight matrix W with w i ≤ τ for each row vector, let W be a perturbed version of W according to Equation FORMULA0 and M be the perturbed augmented distinguishing matrix. With probability at least 1 − exp(−d Ω ), we have DISPLAYFORM0 To prove this theorem, recall the definition of M ij: DISPLAYFORM1 Since Gaussian distribution is highly symmetric, for every direction u that is orthogonal to both w i and w j, we have u mat(M ij)u be a constant. We can compute this constant as DISPLAYFORM2 This implies that if we consider mat(M ij) − m ij I d, it is going to be a matrix whose rows and columns are in span of w i and w j. In fact we can compute the matrix explicitly as the following lemma: Lemma 19. Suppose input x follows standard Gaussian distribution N (0, I d), and suppose weight matrix W has full-row rank, then for any 1 ≤ i < j ≤ k, we have DISPLAYFORM3 where 0 < φ ij < π is the angle between weight vectors w i and w j.Of course, the same lemma would be applicable to W, so we have an explicit formula for M ij. We will bound the smallest singular value using the idea of leave-one-out distance (as previously used in).Leave-one-out Distance Leave-one-out distance is a metric that is closely related to the smallest singular value but often much easier to estimate. Definition 2. For a matrix A ∈ R d×n (d ≥ n), the leave-one-out distance d(A) is defined to be the smallest distance between a column of A to the span of other columns. More precisely, let A i be the i-th column of A and S −i be the span of all the columns except for A i, then DISPLAYFORM4 Published as a conference paper at ICLR 2019 showed that one can lowerbound the smallest singular value of a matrix by its leave-one-out distance. Lemma 20 (DISPLAYFORM5 Therefore, to bound σ min ( M) we just need to lowerbound d(M). We use the ideas similar to BID1 and. Since every column of M (except for M 0) is random, we will try to show that even if we condition on all the other columns, because of the randomness in M ij, the distance between M ij to the span of other columns is large. However, there are several obstacles in this approach:1. The augmented distinguishing matrix M has a special column M 0 = vec(I d) that does not have any randomness.2. The closed form expression for M (as in Lemma 19) has complicated coefficients that are not linear in the vectors w i and w j.3. The columns of M ij are not independent with each other, so if we condition on all the other columns, M ij is no longer random. To address the first obstacle, we will prove a stronger version of Lemma 20 that allows a special column. Lemma 21. 
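Definition 2 is easy to compute directly, which also gives a practical way to check numerically whether a (distinguishing) matrix built from samples is robustly full rank. A small sketch of the computation, for illustration:

```python
import numpy as np

def leave_one_out_distance(A):
    """d(A): the smallest distance from a column of A to the span of the remaining columns."""
    d = np.inf
    for i in range(A.shape[1]):
        rest = np.delete(A, i, axis=1)
        Q, _ = np.linalg.qr(rest)                        # orthonormal basis of span(rest)
        residual = A[:, i] - Q @ (Q.T @ A[:, i])         # component orthogonal to that span
        d = min(d, np.linalg.norm(residual))
    return d
```

Lemma 20 below converts a lower bound on d(A) into a lower bound on σ_min(A), which is how the leave-one-out distance enters the proof of Theorem 9.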
Let A ∈ R d×(n+1) (d ≥ n + 1) be an arbitrary matrix whose columns are A 0, A 1,..., A n. For any i = 1, 2,..., n, let S −i be the subspace spanned by all the other columns (including A 0) except for A i, and let d (A):= min i=1,...,n (I d −Proj S−i)A i. Suppose the column A 0 has norm √ d and A 1,..., A n has norm at most C, then DISPLAYFORM6 This lemma shows that if we can bound the leave-one-out distance for all but one column, then the smallest singular value of the matrix is still lowerbounded as long as the columns do not have very different norms. We defer the proof to Section C.2.For the second obstacle, we show that these coefficients are lowerbounded with high probability. Therefore we can condition on the event that all the coefficients are large enough. Lemma 22. Given weight vectors w i and w j with norm w i, w j ≤ τ, let w i = w i + ρε i, w j = w j +ρε j where ε i, ε j are i.i.d. Gaussian random vectors. With probability at least 1−exp(−d Ω ), we know w i ≤ τ + 3ρ 2 d/2, w j ≤ τ + 3ρ 2 d/2 and DISPLAYFORM7 where φ ij is the angle between w i and w j. In particular, if W = W + ρE where E is an i.i.d. Gaussian random matrix, with probability at least 1−exp(−d Ω ), for all i, w i ≤ τ + 3ρ 2 d/2, and for all i < j, the coefficient φ ij /π in front of the term w i w j + w j w i is at least DISPLAYFORM8 This lemma intuitively says that after the perturbation w i and w j cannot be close to co-linear. We defer the detailed proof to Section C.2.For the final obstacle, we use ideas very similar to which decouples the randomness of the columns. Proof of Theorem 9. Let E 1 be the event that Lemma 22 does not hold. Event E 1 will be one of the bad events (but note that we do not condition on E 1 not happening, we use a union bound at the end).Published as a conference paper at ICLR 2019 DISPLAYFORM9 That is, the columns of M are DISPLAYFORM10 for i < j, where w i,L denotes the restriction of vector w i to the subset L. Note that the restriction of vec(I d) to the rows indexed by L 1 × L 2 is just an all zero vector. We will focus on a column M ij with i < j and try to prove it has a large distance to the span of all the other columns. Let V ij be the span of all other columns, which is equal to DISPLAYFORM11 It's clear that V ij is correlated with M ij, which is bad for the proof. To get around this problem, we follow the idea of and define the following subspace that contains V ij, DISPLAYFORM12 By definition V ij ⊂V ij, and thusV DISPLAYFORM13 Note that w i,L1 ⊗ w j,L2 is independent withV ij. Moreover, subspaceV ij has dimension at most DISPLAYFORM14. Then by Lemma 31, we know that with probability at least 1 DISPLAYFORM15 Let E 2 be the event that this inequality does not hold for some i, j. DISPLAYFORM16 }. Now we know when neither bad events E 1 or E 2 happens, for every pair i < j, DISPLAYFORM17 Currently, we have proved that for any i < j, the distance between column M ij and the span of other columns is at least inverse polynomial. To use Lemma 21 we just need to give a bound on the norms of these columns. By Lemma 22, we know when E 1 does not happen DISPLAYFORM18 where τ is the uniform upper bound of the norm of every row vector of W. Let τ = τ + DISPLAYFORM19 Published as a conference paper at ICLR 2019 Thus, we have DISPLAYFORM20 Thus, there exists C = poly(τ, d, ρ), such that M ij ≤ C for every i < j. Now applying Lemma 21 immediately gives the . 
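Since the leave-one-out distance of Definition 2 is the quantity driving the argument above, a short NumPy sketch may help make it concrete. The displayed constant in Lemma 20 is not reproduced in the text, so the check below only uses the standard form of the relation, d(A)/√n ≤ σ_min(A) ≤ d(A); the matrix sizes are arbitrary.

import numpy as np

def leave_one_out_distance(A):
    # Definition 2: smallest distance from a column A_i to the span of the other columns.
    d, n = A.shape
    dists = []
    for i in range(n):
        others = np.delete(A, i, axis=1)
        coef, *_ = np.linalg.lstsq(others, A[:, i], rcond=None)   # projection onto span(others)
        dists.append(np.linalg.norm(A[:, i] - others @ coef))
    return min(dists)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
loo = leave_one_out_distance(A)
smin = np.linalg.svd(A, compute_uv=False)[-1]
print(f"d(A) = {loo:.3f},  sigma_min(A) = {smin:.3f}")
print(f"d(A)/sqrt(n) = {loo / np.sqrt(A.shape[1]):.3f}  <=  sigma_min  <=  d(A)")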
C.2 PROOF OF AUXILIARY LEMMAS FOR SECTION C.1We will first prove the characterization for columns in the augmented distinguishing matrix. Proof of Lemma 19. For simplicity, we start by assuming that every weight vector w i has unit norm. At the end of the proof we will discuss how to incorporate the norms of w i, w j. Also throughout the proof we will abuse notation to use M ij as its matrix form mat(M ij).Let S ij be the subspace spanned by w i and w j. Let S ⊥ ij be the orthogonal subspace of S ij. Let {e DISPLAYFORM21} be a set of orthonormal basis for S ij such that e, w j > 0. We use matrix S ij ∈ R d×2 to represent subspace S ij, which matrix has e Let Proj Sij = S ij S ij, and Proj DISPLAYFORM22 which is equivalent to proving that Proj DISPLAYFORM23 DISPLAYFORM24 where the last equality holds since u ∈ S ⊥ ij is orthogonal to e. We also know that DISPLAYFORM25 where the third equality holds because u x is independent with w i x, w j x and v. Note since u is orthogonal with w i, w j, v, we know for standard Gaussian vector x, random variable u x is independent with w i x, w j x, v x. Published as a conference paper at ICLR 2019Since the column span and row span of Proj S ⊥ ij M ij both belong to the subspace S ⊥ ij, there must exist DISPLAYFORM26. We only need to show this matrix C must be m ij I d−2. In order to show this, we prove for any u, v ∈ S DISPLAYFORM27 where the fourth equality holds because u x, v x are independent with w i x, w j x. Thus, we know DISPLAYFORM28 Let's now compute the closed form for m ij. Recall that DISPLAYFORM29 Note, we only need to consider input x within subspace S ij, which subspace has dimension two. Using the polar representation of two-dimensional Gaussian random variables (r is the radius and θ is the angle), we have ), which means DISPLAYFORM30 DISPLAYFORM31 where c DISPLAYFORM32 and c DISPLAYFORM33 are four coefficients. Now, we only need to figure out the four coefficients of this linear combination. Similar as the computation for m ij, we use polar integration to show that, DISPLAYFORM34 where the first equality holds because e. Similarly, we can show that DISPLAYFORM35 Published as a conference paper at ICLR 2019Proof of Lemma 21. The smallest singular value of A can be defined as follows: DISPLAYFORM36 Au.Suppose u * ∈ argmin u: u =1 Au. Let u * i be the coordinate corresponding to the column A i, for 0 ≤ i ≤ n. We consider two cases here. If |u * 0 | ≥ 4nC 2 4nC 2 +d, then we have DISPLAYFORM37 where the third inequality uses Cauchy-Schwarz inequality. DISPLAYFORM38 Above all, we know that the smallest singular value of A is lower bounded as follows, DISPLAYFORM39 Next we give the bound on the angle between two perturbed vectors w i and w j.Proof of Lemma 22. According to the definition of ρ-perturbation, we know w i = w i + ρε i, w j = w j + ρε j, where ε i, ε j are i.i.d. standard Gaussian vectors. First, we show that with high probability, the projection of w i on the orthogonal subspace of w j is lower bounded. Denote the subspace spanned by w j as S wj, and denote the subspace spanned by {w j, w i} as S wj ∪wi. Thus, we have DISPLAYFORM40 where S ⊥ wj is the orthogonal subspace of S wj. Published as a conference paper at ICLR 2019 DISPLAYFORM0 matrix, whose columns constitute a set of orthonormal basis for the subspace S ⊥ wj ∪wi. Thus, it's not hard to check that Proj S ⊥ w j ∪w i DISPLAYFORM1, which is a chi-squared random variable with (d − 2) degrees of freedom. 
According to the tail bound for chi-squared random variable, we have DISPLAYFORM2, ∀t ∈.Let t = 1 2, we know that with probability at least 1 − 2 exp(DISPLAYFORM3 Thus, we have Proj S ⊥ w j DISPLAYFORM4 . Recall that DISPLAYFORM5 We also know DISPLAYFORM6 where the last equality holds since w i ≤ τ . Note ε i 2 is another chi-squared random variable with d degrees of freedom. Similar as above, we can show that with probability at least 1 − 2 exp( DISPLAYFORM7 By union bound, we know with probability at least 1 − 2 exp( DISPLAYFORM8 Combined with the fact that φ ij ≥ sin( φ ij) when φ ij ∈ [0, π], we know with probability at least 1 − 2 exp(DISPLAYFORM9 Given W = W + ρE, where E is an i.i.d. Gaussian matrix, by union bound, we know with probability at least 1 DISPLAYFORM10 Published as a conference paper at ICLR 2019 In this section, we show that starting from any well-conditioned weight matrix W, and any symmetric input distribution D, how to perturb the distribution locally to D so that the smallest singular value of M D is at least inverse polynomial. Recall the definition of (Q, λ)-perturbation: we mix the original distribution D with a distribution D Q which is just a Gaussian N (0, QQ). To create a sample x in D, with probability 1 − λ we draw a sample from D; otherwise we draw a standard Gaussian n ∼ N (0, I d) and let x = Qn. We will prove Theorem 10 which we restate below:Theorem 10. Given weight matrix W with w i ≤ τ for each row vector and symmetric input distribution D. Suppose that k ≤ d/7 and σ min (W) ≥ ρ, after applying (Q, λ)-perturbations to yield perturbed input distribution D, where Q is a d × d matrix whose entries are i.i.d. Gaussians, we have with probability at least 1 − exp(−d Ω ) over the randomness of Q, DISPLAYFORM0 To prove this, let us first take a look at the structure of augmented distinguishing matrix for these DISPLAYFORM1 DISPLAYFORM2 Our proof will go in two steps. First we will show that σ min (M D Q) is large. Then we will show that even mixing with M D will not significantly reduce the smallest singular value, so σ min (M D) is also large. In addition to the techniques that we developed in Section C.1, we need two ideas that we call noise domination and subspace decoupling to solve the new challenges here. Noise Domination First let us focus on σ min (M D Q). This instance has weight W and input distribution N (0, QQ). Let M W Q be the augmented distinguishing matrix for an instance with weight W Q and input distribution N (0, I d). Our first observation shows that M D Q and M W Q are closely related, and we only need to analyze the smallest singular value of M W Q. The problem now is very similar to what we did in Theorem 9, except that the weight W Q is not an i.i.d. Gaussian matrix. However, we will still be able to use Theorem 9 as a black-box because the amount of noise in W Q is in some sense dominating the noise in a standard Gaussian. More precisely, we use the following simple claim: Claim 1. Suppose property P holds for N (µ, I d) for any µ, and the property P is convex (in the sense that if P holds for two distributions it also holds for their mixture), then for any covariance matrix Σ I d, we know P also holds for N (µ, Σ).Intuitively the claim says that if the property holds for a Gaussian distribution with smaller variance regardless of the mean, then it will also hold for a Gaussian distribution with larger variance. The proof is quite simple:Proof. Let Σ = Σ − I d, by assumption we know Σ is still a positive semidefinite matrix. 
Let x ∼ N (µ, Σ), x ∼ N (µ, Σ) and δ ∼ N (0, I d), by property of Gaussians it is easy to see that That is, N (µ, Σ) is a mixture of N (x, I). Since property P is true for all N (x, I), it is also true for N (µ, Σ).With this claim we can immediately use the of Theorem 9 to show σ min (M D Q) is large. Published as a conference paper at ICLR 2019Subspace Decoupling Next we need to consider the mixture M D. The worry here is that although σ min (M D Q) is large, mixing with D might introduce some cancellations and make σ min (M D) much smaller. To prove that this cannot happen with high probability, the key observation is that in the first step, to prove σ min (M W Q) is large we have only used the property of W Q. If we letQ be the projection of Q to the orthogonal space of row span of W, thenQ is still a Gaussian random matrix even if we condition on the value of W Q! Therefore in the second step we will use the additional randomness inQ to show that the cancellation cannot happen. The idea of partitioning the randomness of Gaussian matrices has been widely used in analysis of approximate message passing algorithms. The actual proof is more involved and we will need to partition the Gaussian matrix Q into more parts in order to handle the special column in the augmented distinguishing matrix. Now we are ready to give the full proof of Theorem 10Proof of Theorem 10. Let us first recall the definition of augmented distinguishing matrix: M D is a d 2 by (k 2 + 1) matrix, where the first k 2 columns consist of M D ij:= E x∼D (w i x)(w j x)(x ⊗ x)1{w i xw j x ≤ 0}, and the last column is E x∼D [x ⊗ x]. According to the definition of (Q, λ)-perturbation, if we let D Q be N (0, QQ), then we have DISPLAYFORM3 In the first step, we will try to analyze M D Q. The first k 2 columns of this matrix M D Q can be written as: M D Q ij = E x∼D Q (w i x)(w j x)(x ⊗ x)1{w i xw j x ≤ 0} = E n∼N (0,I d) (w i Qn)(w j Qn)(Qn ⊗ Qn)1{w i Qnw j Qn ≤ 0} = Q ⊗ QE n∼N (0,I d) (w i Qn)(w j Qn)(n ⊗ n)1{w i Qnw j Qn ≤ 0} for any i < j, and the last column is DISPLAYFORM4 Except for the factor Q ⊗ Q, the remainder of these columns are exactly the same as the augmented distinguishing matrix of a network whose first layer weight matrix is W Q and input distribution is N (0, I d). We use M W Q to denote the augmented distinguishing matrix of such a network, then we have DISPLAYFORM5 Therefore we can first analyze the smallest singular value of M W Q. Let W = W Q. Note that Q is a Gaussian matrix, and W is fixed, so W Q is also a Gaussian random matrix except its entries are not i.i.d. More precisely, there are only correlations within columns of W Q, and for any column of W Q, the covariance matrix is W W. Since the smallest singular value of W is at least ρ, we know σ min (W W) ≥ ρ 2. Let the covariance matrix of W Q be Σ W Q ∈ R kd×kd, which has smallest singular value at least ρ 2. Therefore we know Σ W Q ρ 2 I kd. It's not hard to verify that with probability at least 1 − exp(−d Ω ), the norm of every row of W Q is upper bounded by poly(τ, d). By Claim 1, any convex property that holds for any N (0, ρ 2 I kd) perturbation must also hold for Σ W Q 4. Thus, we know with probability at least 1 − exp(−d Ω ), σ min (M W Q) ≥ poly(1/τ, 1/d, ρ).To prepare for the next step, we will rewrite M W Q as the product of two matrices. According to the closed form of M W Q ij in Lemma 19, we know each column of M W Q can be expressed as a linear combination of w i ⊗ w j's and vec(I d). 
Therefore: DISPLAYFORM6 Published as a conference paper at ICLR 2019Since R has full row rank, we know that the row span of Proj W ⊥ ⊗W ⊥ M D belongs to the row span of R. According to the definition of U, it's also clear that the column span of Proj W ⊥ ⊗W ⊥ M D belongs to the column span of U ⊗ U. Thus, there exists matrix C ∈ R DISPLAYFORM7 Thus, DISPLAYFORM8 ≥σ min (1 − λ)Proj W ⊥ ⊗W ⊥ M D + λU ⊗ U P 2 V W ⊗ P 2 V W, vec(P 1 P 1 + P 2 P 2) R =σ min λU ⊗ U 1 − λ λ C + P 2 V W ⊗ P 2 V W, vec(P 1 P 1 + P 2 P 2) R.Note that C only depends on U and R, U only depends on W, and R only depends on W Q. With W Q fixed, C is also fixed. Clearly, C is independent with P 1 and P 2. For convenience, denote H:= 1 − λ λ C + P 2 V W ⊗ P 2 V W, vec(P 1 P 1 + P 2 P 2). Now, let's prove that the smallest singular value of matrix H ∈ R DISPLAYFORM9 is lower bounded using leave-one-out distance. Let's first consider its submatrixĤ which consists of the first k 2 columns of H. Note that within random matrix P 2 V W, every row are independent with each other. Within each row, the covariance matrix is W W. Recall that W is a random matrix whose covariance Σ W Q ρ 2 I kd, we can again apply Claim 1 with the property proved in Lemma 37. As a , with probability at least 1 − exp(−d Ω ), DISPLAYFORM10 Thus the covariance matrix of each row of P 2 V W has smallest singular value at least γ:= poly(1/d, ρ).We can view P 2 V W as the summation of two independent Gaussian matrix, one of which has covariance matrix γI (d−k)k. For this matrix, we will do something very similar to Theorem 9 in order to lowerbound its smallest singular value. Claim 2. For a random matrix K ∈ R (d−k)×k that is equal to K o + E where E is a Gaussian random matrix whose entries have variance γ. If d ≥ 7k, for any subspace S C that is independent of K and has dimension at most k 2 + 1, the leave-one-out distance d(Proj S ⊥ C K ⊗ K) is at least poly(γ, 1/d).The proof idea is similar as Theorem 9, and we try to apply Lemma 31 to K ⊗ K. In the proof we should think of K:= P 2 V W, and denote i-th column of K as K i. We also think of the space S C as the column span of C.Since U is an orthonormal matrix, we know σ min (U ⊗ U) = 1. According to Eq. 19, we know with probability at least 1 − exp(−d Ω ), DISPLAYFORM11 ≥λσ min (U ⊗ U)σ min (H)σ min (R) ≥poly(1/τ, 1/d, ρ, λ)where the second inequality holds since all of U ⊗ U, H and R have full column rank. In this section, we collect some known on matrix perturbations and concentration bounds. Basically, we used matrix concentration bounds to do the robust analysis and used matrix perturbation bounds to do the smoothed analysis. We also proved several corollaries that are useful in our setting. Matrix concentration bounds tell us that with enough number of independent samples, the empirical mean of a random matrix can converge to the mean of this matrix. Lemma 23 (Matrix Bernstein; Theorem 1.6 in Tropp FORMULA0). Consider a finite sequence {Z k} of independent, random matrices with dimension d 1 × d 2. Assume that each random matrix satisfies E[Z k] = 0 and Z k ≤ R almost surely. Define DISPLAYFORM0 Then, for all t ≥ 0, DISPLAYFORM1 σ 2 + Rt/3.As a corollary, we have: Lemma 24. Consider a finite sequence {Z 1, Z 2 · · · Z m} of independent, random matrices with dimension d 1 × d 2. Assume that each random matrix satisfies Z k ≤ R, 1 ≤ k ≤ m. Then, for all t ≥ 0, Alignment of Subspace Basis. Due to the rotation issue, we cannot conclude that S −Ŝ is small even we know SS −ŜŜ is bounded. 
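The rotation issue mentioned above is easy to see numerically: two orthonormal bases of the same subspace have identical projectors SS^T = ŜŜ^T yet can be far apart entrywise, and an explicit alignment removes the ambiguity (the alignment lemma stated next makes this precise). The sketch below uses the SVD-based orthogonal Procrustes solution, which is one standard way to realize such an alignment; the sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
d, r = 20, 3
S, _ = np.linalg.qr(rng.standard_normal((d, r)))    # orthonormal basis of a subspace
R, _ = np.linalg.qr(rng.standard_normal((r, r)))    # a random rotation of that basis
S_hat = S @ R                                       # same subspace, different basis

print(np.linalg.norm(S @ S.T - S_hat @ S_hat.T))    # ~0: the projectors agree
print(np.linalg.norm(S - S_hat))                    # large: the raw bases disagree

# Align S_hat to S with the orthogonal Procrustes solution before comparing.
U, _, Vt = np.linalg.svd(S_hat.T @ S, full_matrices=False)
R_align = U @ Vt
print(np.linalg.norm(S_hat @ R_align - S))          # ~0 after alignment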
The following lemma shows that after appropriate alignment, S is indeed close to Ŝ. Lemma 35 (Lemma 6 in Ge et al. (2017a)). Given matrices S, Ŝ ∈ R d×r, the minimum of ||ŜR − S||_F over orthonormal matrices R is bounded in terms of ||SS^T − ŜŜ^T||_F (and the smallest singular value of S). The next lemma is a standard bound on the singular values of an i.i.d. Gaussian matrix. Let A ∈ R m×n and suppose that m ≥ n. Assume that the entries of A are independent standard Gaussian variables; then for every ε > 0, with probability at least 1 − (Cε)^(m−n+1) − e^(−C′n), where C, C′ are two absolute constants, we have: DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 However, in our setting, we are more interested in fixed matrices perturbed by Gaussian variables. The smallest singular value of these "perturbed rectangular matrices" can be bounded as follows. Lemma 37 (Lemma G.16 in ). Let A ∈ R m×n and suppose that m ≥ 3n. If all the entries of A are independently ρ-perturbed to yield Ã, then for any ε > 0, with probability at least 1 − (Cε)^(0.25m), for some absolute constant C, the smallest singular value of Ã is bounded below by: DISPLAYFORM5 | We give an algorithm for learning a two-layer neural network with symmetric input distribution. | 1,078 | scitldr |
Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by measuring the similarity of the teacher's emergent strategies to intuitive strategies in each domain and conducting human experiments to evaluate how effective the teacher's strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts. Human teachers give informative examples to help their students learn concepts faster and more accurately BID23 BID21 BID4. For example, suppose a teacher is trying to teach different types of animals to a student. To teach what a "dog" is they would not show the student only images of dalmatians. Instead, they would show different types of dogs, so the student generalizes the word "dog" to all types of dogs, rather than merely dalmatians. Teaching through examples can be seen as a form of communication between a teacher and a student. Recent work on learning emergent communication protocols in deep-learning based agents has been successful at solving a variety of tasks BID7 BID24 BID18 BID5 BID16. Unfortunately, the protocols learned by the agents are usually uninterpretable to humans, and thus at the moment have limited potential for communication with humans. We hypothesize that one reason the emergent protocols are uninterpretable is because the agents are typically optimized jointly. Consider how this would play out with a teacher network T that selects or generates examples to give to a student network S. If T and S are optimized jointly, then T and S essentially become an encoder and decoder that can learn any arbitrary encoding. T could encode "dog" through a picture of a giraffe and encode "siamese cat" through a picture of a hippo. The examples chosen by T, although effective at teaching S, are unintuitive since S does not learn in the way we expect. On the other hand, picking diverse dog images to communicate the concept of "dog" is an intuitive strategy because it is the effective way to teach given how we implicitly assume a student would interpret the examples. Thus, we believe that S having an interpretable learning strategy is key to the emergence of an interpretable teaching strategy. This raises the question of whether there is an alternative to jointly optimizing T and S, in which S maintains an interpretable learning strategy, and leads T to learn an interpretable teaching strategy. We would ideally like such an alternative to be domain-agnostic. Drawing on inspiration from the cognitive science work on rational pedagogy (see Section 2.1), we propose a simple change:1. Train S on random examples 2. Train T to pick examples for this fixed S Step 1, S learns an interpretable strategy that exploits a natural mapping between concepts and examples, which allows T to learn an interpretable teaching strategy in Step 2.1. Evaluating how similar T's strategy is to intuitive human-designed strategies (Section 4) 2. 
Evaluating how effective T's strategy is at teaching humans (Section 5)We find that, according to these metrics, T learns to give interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts. What does it mean to rationally teach and learn through examples? One suggestion is that a rational teacher chooses the examples that are most likely to make the student infer the correct concept. A rational student can then update their prior belief of the concept given the examples and the fact that the examples were chosen by a cooperative teacher. Shafto et al formalize this intuition in a recursive Bayesian model of human pedagogical reasoning BID21 BID22 BID23. In their model the probability a teacher selects an example e to teach a concept c is a soft maximization (with parameter α) over what the student's posterior probability of c will be. The student can then update their posterior accordingly. This leads to two recursive equations: DISPLAYFORM0 Note that in general there are many possible solutions to this set of dependent equations. A sufficient condition for a unique solution is an initial distribution for P teacher (e|c). Shafto et al suggest that a natural initial distribution for the teacher is a uniform distribution over examples consistent with the concept. They empirically show that the fixed point that from this initial distribution matches human teaching strategies. In our work, we initialize the teacher distribution in the way suggested by Shafto et al. We optimize in two steps: train the student on this initial distribution of examples optimize the teacher for this fixed student. This approach is analogous to doing one iteration of Equation 2 and then one iteration of Equation 1. We find that one iteration is sufficient for producing interpretable strategies. Teaching via examples can be seen as communication between a teacher to a student via examples. Much recent work has focused on learning emergent communication protocols in deep-learning based agents BID7 BID24. However, these emergent protocols tend to be uninterpretable. A number of techniques have been suggested to encourage interpretability, such as limiting symbol vocabulary size BID18, limiting memorization capabilities of the speaker, or introducing auxiliary tasks such as image labelling based on supervision data BID16.Despite these modifications, the protocols can still be difficult to interpret. Moreover, it is unclear how modifications like limiting vocabulary size apply when communication is in the form of examples because usually examples are already a fixed length (e.g coordinates in a plane) or constrained to be selected from a set of possible examples. So, there must be other reasons that humans come up with interpretable protocols in these settings, but neural networks do not. We suggest that one reason may be that these communication protocols are typically learned through joint optimization of all agents BID7 BID24 BID18 BID16, and evaluate how changing from a joint optimization to an iterative one can improve interpretability. Require: p(C): distribution over concepts DISPLAYFORM0 One problem studied in the machine teaching literature is finding a student-teacher pair such that the student can learn a set of concepts when given examples from the teacher BID12 BID3. However, it is difficult to formalize this problem in a way that avoids contrived solutions known as "coding tricks." 
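The recursive pedagogy model described above is simple enough to iterate directly on a toy concept space. Because the two displayed equations are not reproduced in the text, the sketch below uses the form implied by the description: the teacher's example distribution is a soft maximization (exponent α) of the student's posterior, the student's posterior combines the teacher's distribution with the concept prior, and the teacher is initialized uniformly over consistent examples. The concept/example matrix, the prior, and α are toy placeholders; the procedure used in this work corresponds to running just one student update followed by one teacher update rather than iterating to a fixed point.

import numpy as np

# Toy concept/example space: consistent[c, e] = 1 iff example e is consistent with concept c.
consistent = np.array([[1, 1, 0, 0],
                       [0, 1, 1, 0],
                       [0, 0, 1, 1]], dtype=float)
prior_c = np.full(3, 1.0 / 3.0)    # P(c)
alpha = 5.0                        # soft-maximization parameter

# Initialization: teacher is uniform over examples consistent with the concept.
P_teacher = consistent / consistent.sum(axis=1, keepdims=True)      # P(e | c)

for _ in range(10):
    # Student update: P(c | e) proportional to P_teacher(e | c) * P(c).
    joint = P_teacher * prior_c[:, None]
    P_student = joint / joint.sum(axis=0, keepdims=True)            # P(c | e)
    # Teacher update: P(e | c) proportional to P_student(c | e)^alpha on consistent examples.
    unnorm = (P_student ** alpha) * consistent
    P_teacher = unnorm / unnorm.sum(axis=1, keepdims=True)

print(np.round(P_teacher, 3))   # pedagogical example distribution for each concept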
Although the community has not agreed on a single definition of what a coding trick is, it refers to a solution in which the teacher and student simply "collude" on a pre-specified protocol for encoding the concept through examples. Many additional constraints to the problem have been proposed to try to rule out coding tricks. These additional constraints include requiring the student to be able to learn through any superset of the teacher's examples BID9, requiring the learned protocols to work for any ordering of the concepts or examples BID27, requiring the student to learn all concepts plus their images under primitive recursive operators BID19, and giving incompatible hypothesis spaces to the student and teacher BID1.The prior work has mainly been theoretically driven. The papers provide a definition for what it means to avoid collusion and then aim to find student-teacher pairs that provably satisfy the proposed definition. Our work takes a more experimental approach. We provide two criteria for interpretability and then empirically evaluate how modifying the optimization procedure affects these two criteria. We consider a set of possible concepts C and examples E. For example, C may be different animals like cats, dogs, parrots, etc and E may be images of those animals. The prior p(e|c) is a distribution over non-pedagogically selected examples of the concept. For example, if C is the set of all animals, then p(e|c) could be a uniform distribution over images of a given animal. A student S: E → C takes in a running sequence of K examples and at each step outputs a guessĉ for the concept the sequence of examples corresponds to. A teacher T: C × C → E takes in the target concept to teach and S's current guess of the concept and outputs the next example for the student at each step. When the set of examples is continuous T outputs the examples directly. When E is discrete we use the Gumbel-Softmax trick BID14 to have T generate a sample from E.The performance of both S and T is evaluated by a loss function L: C × C → R that takes in the true concept and S's output after K examples (although in some tasks we found it useful to sum the losses over all S's outputs). In our work, both S and T are modeled with deep recurrent neural networks parameterized by θ S and θ T, respectively. Recurrent memory allows the student and teacher to effectively operate over sequences of examples. T and S are illustrated graphically in FIG0.In the recent work on learning deep communication protocols, the standard way to optimize S and T would be to optimize them jointly, similar to the training procedure of an autoencoder (Algorithm 1). However, joint optimization allows S and T to form an arbitrary, uninterpretable encoding of Train student on random examples: DISPLAYFORM0 Train teacher best response to student: DISPLAYFORM1 the concept via examples. We compare joint optimization to an alternative approach we call a best response (BR) optimization (Algorithm 2), which iteratively trains S and T in two steps:1. Train S on concept examples e 1,... e K ∼ p(·|c) coming from prior example distribution. 2. Train T to select or generate examples for the fixed S from Step 1.The intuition behind separating the optimization into two steps is that if S learns an interpretable learning strategy in Step 1, then T will be forced to learn an interpretable teaching strategy in Step 2. 
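The two optimization schemes can be sketched end-to-end on a 1-D analogue of the rectangle game (an interval whose two endpoints play the role of opposite corners). This is a simplified illustration rather than the paper's setup: small feed-forward networks replace the recurrent S and T, T conditions only on the target concept and emits all K examples at once, and every name, size, and hyperparameter below is a placeholder. The two stages correspond to the best-response procedure (Algorithm 2); joint training (Algorithm 1) would instead update both networks on the Step-2 loss.

import torch
import torch.nn as nn

K = 5                                            # examples shown per concept (toy size)

def sample_concepts(batch):                      # toy concept prior p(C): an interval (lo, hi)
    lo = 5.0 * torch.rand(batch, 1)
    hi = lo + 1.0 + 4.0 * torch.rand(batch, 1)
    return torch.cat([lo, hi], dim=1)

def sample_examples(c, k):                       # p(e | c): points drawn uniformly inside the interval
    lo, hi = c[:, :1], c[:, 1:]
    return lo + (hi - lo) * torch.rand(c.size(0), k)

student = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, 2))
teacher = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, K))
loss_fn = nn.MSELoss()

# Step 1 (Algorithm 2): train S on non-pedagogical examples drawn from p(e | c).
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(3000):
    c = sample_concepts(64)
    loss = loss_fn(student(sample_examples(c, K)), c)
    opt_s.zero_grad(); loss.backward(); opt_s.step()

# Step 2 (Algorithm 2): freeze S, then train T to pick examples for this fixed S.
for p in student.parameters():
    p.requires_grad_(False)
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(3000):
    c = sample_concepts(64)
    loss = loss_fn(student(teacher(c)), c)       # gradient reaches T through the frozen S
    opt_t.zero_grad(); loss.backward(); opt_t.step()

# One would expect teacher(c) to move toward informative points such as the interval
# endpoints, mirroring the opposite-corner strategy in the rectangle game.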
1 The reason we expect S to learn an "interpretable" strategy in Step 1 is that it allows S to learn a strategy that exploits the natural mapping between concepts and examples. For example, suppose the concept space is the set of all rectangles and p(e|c) is a uniform distribution over points within a rectangle (the task in Section 4.1). In Step 1, S learns to only guess rectangles that contain all the given examples. Because S expects examples to be within the rectangle, then in Step 2, T learns to only give examples that are within the rectangle, without explicitly being constrained to do so. So, T learns to picks the most informative examples that are still within the rectangle, which are the corners of the rectangle. We created a variety of tasks for evaluation that capture a range of different types of concepts (rulebased, probabilistic, boolean, and hierarchical concepts). Below we give a brief description of the tasks and why we chose them. The rest of the section provides further details on the tasks and the first interpretability criteria, while the next section addresses the second interpretability criteria. Rule-based concepts. We first aimed to replicate a common task in the rational pedagogy literature in cognitive science, known as the rectangle game BID21. In the variant of the rectangle game that we consider, there is a rectangle that is known to the teacher but unknown to the student. The student's goal is to infer the boundary of the rectangle from examples of points within the rectangle. The intuitive strategy that human teachers tend to use is to pick opposite corners of the rectangle BID22 BID23. We find that T learns to match this strategy. Probabilistic concepts. It is often difficult to define naturally-occurring concepts via rules. For example, it is unclear how to define what a bird is via logical rules. Moreover, some examples of a concept can seem more prototypical than others (e.g sparrow vs peacock) BID20, and this is not captured by simply modeling the concept as a set of rules that must be satisfied. An alternative approach models concept learning as estimating the probability density of the concept BID0 BID2 BID8 BID10. BID23 investigate teaching and learning unimodal distributions. But often a concept (e.g lamp) can have multiple subtypes (e.g. desk lamp and floor lamp). So, we investigate how T teaches a bimodal distribution. The bimodal distribution is parameterized as a mixture of two Gaussian distributions and S's goal is to learn the location of the modes. T learns the intuitive strategy of giving examples at the two modes. Boolean concepts. An object can have many properties, but only a few of them may be relevant for deciding whether the object belongs to a concept or not. For example, a circle is a circle whether it has a radius of 5 centimeters or 100 meters. The purpose of this task is to see what strategy T learns to quickly teach S which properties are relevant to a concept. The possible examples we consider are images that vary based on four properties: size (small, medium, large), color (red, blue, green), shape (square vs circle), and border (solid vs none). Only one to three of these properties define a concept. For example, if the concept is red circles, then red circles of any size or border fit the concept. T learns the intuitive strategy of picking two examples whose only common properties are the ones required by the concept, allowing S to learn that the other properties are not relevant for membership in the concept. 
Hierarchical concepts. Human-defined concepts are often hierarchical, e.g. animal taxonomies. Humans are sensitive to taxonomical structure when learning how to generalize to a concept from an example BID26. The purpose of this task is to test how T learns to teach when the concepts form a hierarchical structure. We create hierarchical concepts by pruning subtrees from Imagenet. T's goal is to teach S nodes from any level in the hierarchy, but can only give images from leaf nodes. T learns the intuitive strategy of picking two examples whose lowest common ancestor is the concept node, allowing S to generalize to the correct level in the hierarchy. A concept (rectangle) is encoded as a length four vector c ∈ [−10, 10] 4 of the minimum x, minimum y, maximum x, and maximum y of the rectangle. p(e|c) is a uniform distribution over points in the rectangle. Examples are two-dimensional vectors that encode the x and y coordinate of a point. The loss between the true concept c and S's output c is L(c,ĉ) = ||c −ĉ|| 2 2 and is only calculated on S's last output. S is first trained against ten examples generated from p(e|c). Then T is trained to teach S in two examples. T generates examples continuously as a two-dimensional vector. Figure 2 shows an example of T's choices and S's guess of the concept after each example given. Under both BR and joint optimization S is able to infer the concept in two examples. However, in joint optimization it is not clear how T's examples relate to the ground-truth rectangle (black) or what policy the student (orange) has for inferring the rectangle. On the other hand, in the BR case T outputs points close to opposite corners of the rectangle, and S expands its estimate of the rectangle to fit the examples the teacher gives. Figure 4 measures the distance between the random, best response (teacher), and joint strategy to the intuitive strategy of giving corners averaged over concepts. Specifically, let e = (e 1, e 2) be the two examples given and S(c) be the set of tuples of opposite corners of c. The distance measures how close these two examples are to a pair of opposite corners and is defined as d(e, c) = min s∈S(c) ||e 1 − s 1 || 2 + ||e 2 − s 2 || 2. T's examples are much closer to opposite corners than either the random or joint strategy. A concept is encoded as two-dimensional vector c = (µ 1, µ 2) ∈ 2 where µ 1 and µ 2 are the locations of the two modes and µ 1 < µ 2. p(e|c) = 0.5N (µ 1, 1) + 0.5N (µ 2, 1) is a mixture of two Gaussians. The loss between the true concept c and S's outputĉ is L(c,ĉ) = ||c −ĉ|| 2 2. S is first trained against five examples generated from p(e|c). Then T is trained to teach S in two examples. T generates examples continuously as a one-dimensional vector. T learns the intuitive strategy of giving the two modes as the examples. Figure 5 measures the distance to the intuitive strategy by the distance, ||e − c|| 2, between the examples, e, and the true modes, c. Both e and c are sorted when calculating the distance. T learns to match the intuitive strategy better than the random or joint strategy. Examples are images of size 25 x 25 x 3. Concepts are ten-dimensional binary vectors where each dimension represents a possible value of a property (size, color, shape, border). The value of one in the vector indicates that the relevant property (e.g. color) must take on that value (e.g. red) in order to be considered a part of the concept. p(e|c) is a uniform distribution over positive examples of the concept. 
The loss between the true concept c and S's outputĉ is L(c,ĉ) = ||c −ĉ|| 2 2. S is first trained on five examples generated from p(e|c). In both BR and joint optimization, we trained S with a curriculum starting with concepts defined by three properties, then two, and then one. T is trained to teach S with two examples. In this experiment, T selects an example from a discrete set of all images. We use the Gumbel-Softmax estimator BID14 to select discrete examples from final layer of T in a differentiable manner. T learns the intuitive strategy of picking two examples whose only common properties are the ones required by the concept, so that S can rule out the auxiliary properties. For example, Figure 7 shows T's examples for the concept of red. T selects a large red square with no border and then a small red circle with a border. The only property the two shapes have in common is red, so the concept must only consist of red. Indeed, 87% of T's examples only have the required properties in common, compared to 36% of random examples, and 0% of jointly trained examples (Figure 8). We create a set of hierarchical concepts by pruning a subtree from Imagenet. Each node in the subtree is a concept and is encoded as a one-hot vector. We randomly select 10 images of each leaf node. The possible examples for a leaf node are any of its ten images. The possible examples for an interior node are images from any of its descendant leaves. For example, in the hierarchy of apes shown in Figure 9, the possible examples for the "lesser apes" concept are images of siamangs or gibbons. We use a pretrained ResNet-50 model BID11 to embed each image into a 2048 length vector. p(e|c) is a uniform distribution over the possible examples for the concept. L(c,ĉ) is the softmax cross entropy loss between the true concept c and S's outputĉ. S is first trained on five examples generated from p(e|c). T then learns to teach S with two examples. As in 4.3, the final layer of T uses the Gumbel-Softmax estimator to sample an example image. T learns the intuitive strategy of picking examples from two leaf nodes such that the lowest common ancestor (LCA) of the leaf nodes is the concept node. This strategy encodes the intuition that to teach someone the concept "dog" you wouldn't only show them images of dalmations. Instead you would show examples of different types of dogs, so they generalize to a higher level in the taxonomy. For example, to teach what an ape is T could select an image of an orangutan and a siamang because the lowest common ancestor of the two is the ape concept (Figure 9). FIG0 shows T's correspondence to the intuitive strategy on the interior nodes of three example subtrees of Imagenet: apes, parrots, and felines. These subtrees have 16, 19, and 57 possible concepts respectively. T learns to follow the LCA strategy 100% of the time, whereas the highest the jointly trained strategy ever gets is 20%. In the previous section, we evaluated interpretability by measuring how similar T's strategy was to a qualitatively intuitive strategy for each task. In this section, we revisit two of the tasks and provide an additional measure of interpretability by evaluating how effective T's strategy is at teaching humans. We ran experiments to see how well T could teach humans the bimodal distributions task from Section 4.2. 60 subjects were recruited on Amazon Mehcanical Turk. They were tested on the ten concepts with modes in E = {4, 8, 12, 16, 20}. 
30 subjects were shown two examples generated from p(e|c) for each concept and the other 30 subjects were shown two examples generated by T for each concept. The subjects were then given five test lines of lengths in E and asked to rate on a scale of one to five how likely they think the line is a part of the concept. For each concept there were two lines with very high probability of being in the concept and three lines with very low probability of being in the concept. A subject is said to have gotten the concept correct if they gave the high-probability lines a rating greater than three and the low-probability lines a rating less than or equal to three. The subjects given examples from the teacher had an average accuracy of 18%, compared to 8% with random examples. In addition, the teacher group had a much higher standard deviation than the random group, 19% compared to 6%. The maximum accuracy in the teacher group was 70%, but just 20% in the random group. The difference between groups was highly significant with p < 0.001, calculated using a likelihood-ratio test on an ordinary logit model as described in.Although the teacher group did better, neither group had a high mean accuracy. The task is difficult because a subject needs to get the entire distribution correct to be counted as a correct answer. But another possible reason for poor performance is people may have had the wrong hypothesis about the structure of concepts. It seems as though many subjects hypothesized that the structure of the concept space was unimodal, rather than bimodal, thus believing that lines with a length in between the two shown to them were very likely to be a part of the concept. An interesting open research question is how to ensure that the human has the correct model of the concept space. To evaluate human learning of boolean concepts (the task from Section 4.3), we sampled ten test concepts, five composed of one property and five composed of two properties. We recruited 80 subjects on Amazon Mechanical Turk and showed 40 of them two random positive examples of the ten concepts and the other 40 of them two examples chosen by the teacher. They were then asked to classify four new images as either a part of the concept or not. The four new images always had two positive examples and two negative examples for the concept. As shown in FIG0, the group that received examples from T performed better with a mean accuracy of 76%, compared to a mean accuracy of 71% for those that received random examples. This difference was highly significant with p < 0.001, calculated using the same procedure described in Section 5.1 from.What leads the protocols that humans learn to be so different from the protocols that deep learning models learn? One explanation is that humans have limitations that deep learning models do not. We investigated the impact of one limitation: humans cannot jointly optimize among themselves. We found that switching to an iterative optimization in which the student network is trained against examples coming from a non-pedagogical distribution and then the teacher network is trained against this fixed student leads to more interpretable teaching protocols. The intuition behind the approach is that leads the student to learn an interpretable learning strategy, which then constrains the teacher to learn an interpretable teaching strategy in.But this is just one of many possible limitations. 
For example, one reason we believe human students did not learn concepts as well as the student network (Section 5) is that humans had a different prior over concepts. In the probabilistic concepts task, humans seemed to believe that the lines came from a unimodal, rather than bimodal, distribution. In the boolean concepts task, humans tended to overemphasize color as a property. It is unrealistic to assume that a teacher and student have a perfectly matching prior over concepts or perfect models of each other. An important open question is which of these limitations are fundamental for the emergence of interpretable teaching protocols. While we carried out our experiments in the setting of teaching via examples, another direction for future work is investigating how an iterative optimization procedure works in more complex teaching settings (say teaching through demonstrations) and in communication tasks more broadly. Overall, we hope that our work presents a first step towards understanding the gap between the interpretability of machine agents and human agents. | We show that training a student and teacher network iteratively, rather than jointly, can produce emergent, interpretable teaching strategies. | 1,079 | scitldr |
Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm for solving large-scale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower-variance gradient estimate and faster convergence. Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning. The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs. In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem. Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t. We show that the learning dynamics of our regression model closely match those of neural networks on MNIST and CIFAR-10 for both underparameterized and overparameterized models. Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparameterized neural networks: SVRG outperforms SGD after a few epochs in this regime. However, SGD is shown to always outperform SVRG in the overparameterized regime. Many large-scale machine learning problems, especially in deep learning, are formulated as minimizing the sum of loss functions over millions of training examples. Computing the exact gradient over the entire training set is intractable for these problems. Instead of using full-batch gradients, the variants of stochastic gradient descent (SGD) evaluate noisy gradient estimates from small mini-batches of randomly sampled training points at each iteration. The mini-batch size is often independent of the training set size, which allows SGD to immediately adapt the model parameters before going through the entire training set. Despite its simplicity, SGD works very well, even on the non-convex, non-smooth problems encountered in deep learning. However, the optimization performance of the stochastic algorithm near local optima is significantly limited by the mini-batch sampling noise, controlled by the learning rate and the mini-batch size. The sampling variance and the slow convergence of SGD have been studied extensively in the past. To ensure convergence, machine learning practitioners have to either increase the mini-batch size or decrease the learning rate toward the end of training. (Figure: the minimum loss achieved on the real dataset MNIST with a logistic regression model; our theoretical prediction (a) matched the training dynamics on real datasets, demonstrating the trade-off between computational cost and convergence speed. Curves in red are SVRG and curves in blue are SGD; different markers refer to different per-iteration computational costs, i.e., the average number of backpropagations used per iteration.) Despite their strong theoretical guarantees, SVRG-like algorithms have seen limited success in training deep learning models. Traditional results from stochastic optimization focus on asymptotic analysis, but in practice most deep neural networks are only trained for hundreds of epochs due to the high computational cost.
To address the gap between the asymptotic benefit of SVRG and the practical computational budget of training deep learning models, we provide a non-asymptotic study on the SVRG algorithms under a noisy least squares regression model. Although optimizing least squares regression is a basic problem, it has been shown to characterize the learning dynamics of many realistic deep learning models . Recent works suggest that neural network learning behaves very differently in the underparameterized regime vs the overparameterized regime ), characterized by whether the learnt model can achieve zero expected loss. We account for both training regimes in the analysis by assuming a linear target function and noisy labels. In the presence of label noise, the loss is lower bounded by the label variance. In the absence of the noise, the linear predictor can fit each training example perfectly. We summarize the main contributions as follows: • We show the exact expected loss of SVRG and SGD along an optimization trajectory as a function of iterations and computational cost. • Our non-asymptotic analysis provides an insightful comparison of SGD and SVRG by considering their computational cost and learning rate schedule. We discuss the trade-offs between the total computational cost, i.e. the total number of back-propagations performed, and convergence performance. • We consider two different training regimes with and without label noise. Under noisy labels, the analysis suggests SGD only outperforms SVRG under a mild total computational cost. However, SGD always exhibits a faster convergence compared to SVRG when there is no label noise. • Numerical experiments validate our theoretical predictions on both MNIST and CIFAR-10 using various neural network architectures. In particular, we found the comparison of the convergence speed of SGD to that of SVRG in underparameterized neural networks closely matches with our noisy least squares model prediction. Whereas, the effect of overparameterization is captured by the regression model without label noise. Stochastic variance reduction methods consider minimizing a finite-sum of a collection of functions using SGD. In case we use SGD to minimize these objective functions, the stochasticity comes from the randomness in sampling a function in each optimization step. Due to the induced noise, SGD can only converge using decaying step sizes with sub-linear convergence rate. Methods such as SAG , SVRG , and SAGA , are able to recover linear convergence rate of full-batch gradient descent with the asymptotic cost comparable to SGD. SAG and SAGA achieve this improvement at the substantial cost of storing the most recent gradient of each individual function. In contrast, SVRG spends extra computation at snapshot intervals by evaluating the full-batch gradient. Theoretical such as show that under certain smoothness conditions, we can use larger step sizes with stochastic variance reduction methods than is allowed for SGD and hence achieve even faster convergence. In situations where we know the smoothness constant of functions, there are on the optimal mini-batch size and the optimal step size given the inner loop size . Applying variance reduction methods in deep learning has been studied recently . The authors conjectured the ineffectiveness is caused by various elements commonly used in deep learning such as data augmentation, batch normalization and dropout. Such elements can potentially decrease the smoothness and make the stored gradients become stale quickly. 
The proposed solution is to either remove these elements or update the gradients more frequently than is practical. Dynamics of SGD and quadratic models Our main analysis tool is very closely related to recent work studying the dynamics of gradient-based stochastic methods. derived the dynamics of stochastic gradient descent with momentum on a noisy quadratic model , showing the problem of short horizon bias. In , the authors showed the same noisy quadratic model captures many of the essential characteristic of realistic neural networks training. Their noisy quadratic model successfully predicts the effectiveness of momentum, preconditioning and learning rate choices in training ResNets and Transformers. However, these previous quadratic models assume a constant variance in the gradient that is independent of the current parameters and the loss function. It makes them inadequate for analyzing the stochastic variance reduction methods, as SVRG can trivially achieve zero variance under the constant gradient noise. Instead, we adopted a noisy least-squares regression formulation by considering both the mini-batch sampling noise and the label noise. There are also recent works that derived the risk of SGD, for least-squares regression models using the bias-variance decomposition of the risk ). We use a similar decomposition in our analysis. In contrast to the asymptotic analysis in these works, we compare SGD to SVRG along the optimization trajectory for any finite-time horizon under limited computation cost, not just the convergence points of those algorithms. Underparameterization vs overparameterization. Many of the state-of-the-art deep learning models are overparameterized deep neural networks with more parameters than the number of training examples. Even though these models are able to overfit to the data, when trained using SGD, they generalize well . As suggested in recent work, underparameterized and overparameterized regimes have different behaviours; ). Given the infinite width and a proper weight initialization, the learning dynamics of a neural network can be well-approximated by a linear model via the neural tangent kernel (NTK) . In NTK regime, neural networks are known to achieve global convergence by memorizing every training example. On the other hand, previous convergence for SVRG have been obtained in stochastic convex optimization problems that are similar to that of an underparameterized model . Our proposed noisy least-squares regression analysis captures both the underparameterization and overparameterization behavior by considering the presence or the absence of the label noise. We will primarily focus on comparing the minibatch version of two methods, SGD and SVRG . Denote L i as the loss on i th data point. The SGD update is written as, is the minibatch gradient, t is the training iteration, and α (t) is the learning rate. The SVRG algorithm is an inner-outer loop algorithm proposed to reduce the variance of the gradient caused by the minibatch sampling. In the outer loop, for every T steps, we evaluate a large batch gradientḡ = 1 N N i ∇ θ (mT) L i, where N b, and m is the outer loop index, and we store the parameters θ (mT). In the inner loop, the update rule of the parameters is given by, whereĝ Note that in our analysis, the reference point is chosen to be the last iterate of previous outer loop θ (mT), recommended as a practical implementation of the algorithm by the original SVRG paper. We now define the noisy least squares regression model . 
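Before specializing to the regression model defined next, the two update rules just described can be summarized in a short NumPy sketch on a generic finite-sum least-squares objective. Eq. (1) and Eq. (2) refer to the SGD and SVRG updates above, and the snapshot is the last iterate of the previous outer loop, as in the practical implementation mentioned here. The dataset, step size, batch sizes, and snapshot interval are all illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, d = 1024, 10
X = rng.standard_normal((N, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)     # toy finite-sum problem

def grad(theta, idx):                        # minibatch gradient of the squared loss on examples idx
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ theta - yb) / len(idx)

def sgd(theta, alpha=0.05, b=32, steps=2000):
    for _ in range(steps):
        idx = rng.choice(N, b, replace=False)
        theta = theta - alpha * grad(theta, idx)                   # Eq. (1)
    return theta

def svrg(theta, alpha=0.05, b=32, T=200, outer=10):
    for _ in range(outer):
        snapshot = theta.copy()               # reference point: last iterate of the previous outer loop
        g_bar = grad(snapshot, np.arange(N))  # large-batch gradient at the snapshot
        for _ in range(T):
            idx = rng.choice(N, b, replace=False)
            v = grad(theta, idx) - grad(snapshot, idx) + g_bar     # Eq. (2): control-variate gradient
            theta = theta - alpha * v
    return theta

theta0 = np.zeros(d)
print(np.mean((X @ sgd(theta0.copy()) - y) ** 2),
      np.mean((X @ svrg(theta0.copy()) - y) ** 2))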
In this setting, the input data is d-dimensional, and the output label is generated by a linear teacher model with additive noise, y. We assume WLOG θ * = 0. We also assume the data covariance matrix Σ is diagonal. This is an assumption adopted in many previous analysis and it is also a practical assumption as we often apply whitening to pre-process the training data. We would like to train a student model θ that minimizes the squared loss over the data distribution: At each iteration, the optimizer can query an arbitrary number of data points {x i, y i} i sampled from data distribution. The SGD method uses b data points to form a minibatch gradient: where SVRG on the other hand, queries for N data points every T steps to form a large batch gradient where X N and N are defined similarly. At each inner loop step, it further queries for another b data points, to form the update in Eq. 2. Lastly, note that the expected loss can be written as a function of the second moment of the iterate, Hence for the following analysis we mainly focus on deriving the dynamics of the second mo- When Σ is diagonal, the loss can further be reduced to Definition 1 (Formula for dynamics). We define the following functions and identities, The SGD update (Eq. 1) with the mini-batch gradient of of the noisy least squares model (Eq. 4) is, We substitute the update rule to derive the following dynamics for the second moment of the iterate: This dynamics equation can be understood intuitively as follows. The term 1 leads to an exponential shrinkage of the loss due to the gradient descent update. Since we are using a noisy gradient, the second term 2 represents the variance of stochastic gradient caused by the random input X b. The term 3 comes from the label noise. We show in the next theorem that when the second moment of the iterate approaches zero, 2 will also approach zero. However due to the presence of the label noise, the expected loss is lower bounded by 3. When Σ is diagonal, we further analyze and decompose C(M(θ)) as a function of m(θ) so as to derive the following dynamics and decay rate for SGD. Theorem 2 (SGD Dynamics and Decay Rate). Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0, Σ) with Σ diagonal and θ * = 0, we can express C(θ) as a function of m(θ): Then we derive following dynamics of expected second moment of θ: Under the update rule of SGD, R is the decay rate of the second moment of parameters between two iterations. And based on Theorem 2 the expected loss can be calculated by 3 A DILEMMA FOR SVRG By querying a large batch of datapoints X N every T steps, and a small minibatch X b at every step, the SVRG method forms the following update rule: To derive the dynamics of the second moment of the parameters following the SVRG update, we look at the dynamics of one round of inner loop updates, i.e., from θ (mT) to θ ((m+1)T ): Lemma 3. The dynamics of the second moment of the iterate following SVRG update rule is given by, The dynamics equation above is very illuminating as it explicitly manifests the weakness of SVRG. First notice that terms 1, 2, 3 reappear, contributed by the SGD update. The additional terms, 4 and 5, are due to the control variate. Observe that the variance reduction term 5 decays exponentially throughout the inner loop, with decay rate I − αΣ, i.e. P. We immediately notice that this is the same term that governs the decay rate of the term 1, and hence ing in a conflict between the two. 
Specifically, if we want to reduce the term 1 as fast as possible, we would prefer a small decay rate and a large learning rate, i.e. α → 1 λmax(Σ). But this will also make the boosts provided by the control variate diminish rapidly, leading to a poor variance reduction. The term 4 makes things even worse as it will maintain as a constant throughout the inner loop, contributing to an extra variance on top of the variance from standard SGD. On the other hand, if one chooses a small learning rate for the variance reduction to take effect, this inevitably will make the decay rate for term 1 smaller, ing in a slower convergence. Nevertheless, a good news for SVRG is that the label noise (term 3) is scaled by b N, which lets SVRG converge to a lower loss value than SGD -a strict advantage of SVRG compared to SGD. To summarize, the variance reduction from SVRG comes at a price of slower gradient descent shrinkage. In contrast, SVRG is able to converge to a lower loss value. This motivates the question, which algorithm to use given a certain computational cost? We hence performed a thorough investigation through numerical simulation as well as experiments on real datasets in Sec. 4. Similarly done for SGD, we decompose C(θ) as a function of m(θ) and derive the following decay rate for SVRG. Theorem 4 (SVRG Dynamics and Decay rate). Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0, Σ) with Σ diagonal and θ * = 0, the dynamics for SVRG in m(θ) is given by: In the absence of the label noise (i.e., σ y = 0), we observe that both SGD and SVRG enjoy linear convergence as a corollary of Theorem 2 and Theorem 4: Corollary 5. Without the label noise, the dynamics of the second moment following SGD is given by, and the dynamics of SVRG is given by, where λ is defined in Eq.. Note that similar have been shown in the past; ), where a general condition known as "interpolation regime" is used to show linear convergence of SGD. Specifically they assume that ∇L i (θ *) = 0 for all i, and our setting without label noise clearly also belongs to this regime. This setting also has practical implications, as one can treat training overparameterized neural networks as in interpolation regime. This motivates the investigation of the convergence rate of SGD and SVRG without label noise, and was also extensively studied in the experiments detailed as follows. In Sec. 3 we discussed a critical dilemma for SVRG that is facing a choice between effective variance reduction and faster gradient descent shrinkage. At the same time, it enjoys a strict advantage over SGD as it converges to a lower loss. We define the total computational cost as the total number of back-propagations performed. Similarly, per-iteration computational cost refers to the number of back-propagations performed per iteration. In this section, we study the question, which algorithm converges faster given certain total computational cost? We study this question for both the underparameterized and the overparameterized regimes. Our investigation consists of two parts. First, numerical simulations of the theoretical convergence rates (Sec. 4.1). Second, experiments on real datasets (Sec. 4.2). In both parts, we first fix the per-iteration computational cost. For SGD, the per-iteration computational budge is equal to the minibatch size. We picked three batch size {64, 128, 256}. Denote the batchsize of SGD as b, the equivalent batch size for SVRG is b = we also ran over a set of snapshot intervals T. 
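To make the cost comparison concrete, one plausible way to count back-propagations per iteration is sketched below: SGD performs b of them per step, while SVRG performs two minibatch gradient evaluations per inner step plus the amortized large-batch snapshot gradient. The equivalent SVRG batch size used in the experiments follows this kind of bookkeeping, but the specific matching shown here is an assumption made for illustration.

```python
def sgd_backprops_per_step(b):
    return b                                 # one minibatch gradient of size b

def svrg_backprops_per_step(b_inner, N, T):
    # two minibatch gradients per inner step (current iterate and snapshot),
    # plus the size-N snapshot gradient amortized over the T inner steps
    return 2 * b_inner + N / T

print(sgd_backprops_per_step(256))                   # 256
print(svrg_backprops_per_step(112, N=4096, T=128))   # 224 + 32 = 256 (illustrative numbers)
```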
After running over all sets of hyperparameters, we gather all training curves of all hyperparameters. We then summarize the performance for each algorithm by plotting the lower bound of all training curves, i.e. each point (l, t) on the curve showed the minimum loss l at time step t over all hyperparameters. We compared the two methods under different computational cost. Remarkably, we found in many cases phenomenon predicted by our theory matches with observations in practice. Our experiments suggested there is a trade-off between the computational cost and the convergence speed for underparameterized neural networks. SVRG outperformed SGD after a few epochs in this regime. Interestingly, in the case of overparameterized model, a setting that matches modern day neural networks training, SGD strictly dominated SVRG by showing a faster convergence throughout the entire training. We first performed numerical simulations of the dynamics derived in Theorem 2 for SGD and Theorem 4 for SVRG. We picked a data distribution, with data dimension d = 100, and the spectrum of Σ is given by an exponential decay schedule from 1 to 0.01. For both methods, we picked 50 learning rate from 1.5 to 0.01 using a exponential decay schedule. For SVRG, we further picked a set of snapshot intervals for each learning rate: {256, 128, 64}. We performed simulations in both underparameterized and overparameterized setting (namely with and without label noise), and plotted the lower bound curves over all hyperparameters at Figure 2. The x-axis represents the normalized total computational cost, denoting tbN −1, which is equivalent to the notion of an epoch in finite dataset setting. And the loss in Figure 2 does not contain bayes error (i.e. We have the following observations from our simulations. In the case with label noise, the plot demonstrated an explicit trade-off between computational cost and convergence speed. We observed a crossing point of between SGD and SVRG appear, indicating SGD achieved a faster convergence speed in the first phase of the training, but converged to a higher loss, for all per-iteration compute cost. Hence it shows that one can trade more compute cost for convergence speed by choosing SGD than SVRG, and vice versa. Interestingly, we found that the the per-iteration computational cost does not seem to affect the time crossing point takes place. For all these three costs, the crossing points in the plot are at around the same time: 5.5 epochs. In the case of no label noise, we observed both methods achieved linear convergence, while SGD achieved a much faster rate than SVRG, showing absolute dominance in this regime. In this section, we performed a similar investigation as in the last section, on two standard machine learning benchmark datasets: MNIST and CIFAR-10 . We present the from underparameterized setting first, followed by the overparameterized setting. We performed experiments with three batchsizes for SGD: {64, 128, 256}, and an equivalent batchsize for SVRG. For each batch size, we pick 8 learning rates varying from 0.3 to 0.001 following an exponential schedule. Additionally, we chose three snapshot intervals for every computational budget, searching over the best snapshot interval given the data. Hence for each per-iteration computational cost {64, 128, 256}, there are 24 groups of experiments for SVRG and 8 groups of experiments for SGD. for training on MNIST and CIFAR-10 with overparameterized models. 
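A minimal sketch of this simulation protocol is given below: a diagonal covariance with a spectrum decaying exponentially from 1 to 0.01 in d = 100 dimensions, 50 learning rates from 1.5 to 0.01 on an exponential schedule, and the per-step lower envelope of the loss over all hyperparameters. For brevity the sketch runs SGD directly on sampled data rather than iterating the closed-form dynamics; the initialization, step count and all other details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma_y, b, steps = 100, 1.0, 64, 2000
spectrum = np.exp(np.linspace(np.log(1.0), np.log(0.01), d))   # diag(Sigma): 1 -> 0.01
lrs = np.exp(np.linspace(np.log(1.5), np.log(0.01), 50))       # 50 learning rates

def expected_loss(theta):
    # E[(x^T theta - y)^2] / 2 with theta* = 0; the second term is the Bayes error,
    # which the reported curves exclude
    return 0.5 * (spectrum @ theta**2 + sigma_y**2)

curves = []
for lr in lrs:
    theta = np.ones(d)                                          # shared init (assumption)
    losses = []
    for _ in range(steps):
        x = rng.normal(size=(b, d)) * np.sqrt(spectrum)         # x ~ N(0, Sigma)
        y = rng.normal(size=b) * sigma_y                        # label noise only (theta* = 0)
        g = x.T @ (x @ theta - y) / b                           # minibatch gradient
        theta = theta - lr * g
        losses.append(float(expected_loss(theta)))
    curves.append(losses)

curves = np.array(curves)
curves[~np.isfinite(curves)] = np.inf        # diverged settings never win the envelope
lower_envelope = curves.min(axis=0)          # best loss at each step over all learning rates
```

The SVRG curves are produced in the same way, with the SVRG update in place of the SGD step and the snapshot interval added to the hyperparameter grid.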
In this setting we observed strict dominance of SGD over SVRG in convergence speed for all computational cost, matching our previous theoretical prediction. For MNIST, we trained two underparameterized model: 1. logistic regression 784 − 10 2. a underparameterized two layer MLP 784 − 10 − 10 where the hidden layer has 10 neurons. For CIFAR-10, we chose a underparameterized convolutional neural network model, which has only two 8-channel convolutional layers, and one 16-channel convolutional layer with one additional fully-connected layer. Filter size is 5. The lowest loss achieved over all hyperparameters for these models for each per-iteration computational cost are shown in Figure 3. From these experiments, we observe that on MNIST, the with underparameterized model were consistent with the dynamics simulation of noisy least squares regression model with label noise. First of all, SGD converged faster in the early phase, ing in a crossing point between SGD and SVRG. It showed a trade-offs between computational cost and convergence speed: before the crossing point, SGD converged faster than SVRG; after crossing point, SVRG attained a lower loss. In addition, in Fig 3a, all the crossing points of three costs matched at the same epoch (around 5), which was also consistent with the our findings with noisy least squares regression model. On CIFAR-10, SGD achieved slightly faster convergence in the early phase, but was surpassed by SVRG around 17 − 25 epochs, again showing a trade-off between compute and speed. Lastly, we compared SGD and SVRG on MNIST and CIFAR-10 using overparameterized models. For MNIST, we used a MLP with two hidden layers, each layer having 1024 neurons. For CIFAR-10, we chose a large convolutional network, which has one 64-channel convolutional layer, one 128-channel convolutional layer followed by one 3200 to 1000 fully connected layer and one 1000 to 10 fully connected layer. The lowest loss achieved over all hyperparameters for these models for each per-iteration computational cost are shown in Figure 4. For training on MNIST, both SGD and SVRG attained close to zero training loss. The were again consistent with our dynamics analysis on the noisy linear regression model without label noise. SGD has a strict advantage over SVRG, and achieved a much faster convergence rate than SVRG throughout the entire training. As for CIFAR-10, we stopped the training before either of the two got close to zero training loss due to lack of computing time. But we clearly see a trend of approaching to zero loss. Similarly, we also had the same observations as before, where SGD outperforms SVRG, confirms the limitation of variance reduction in the overparameterized regime. In this paper, we studied the convergence properties of SGD and SVRG in the underparameterized and overparameterized settings. We provided a non-asymptotic analysis of both algorithms. We then investigated the question about which algorithm to use under certain total computational cost. We performed numerical simulations of dynamics equations for both methods, as well as extensive experiments on the standard machine learning datasets, MNIST and CIFAR-10. Remarkably, we found in many cases phenomenon predicted by our theory matched with observations in practice. Our experiments suggested there is a trade-off between the computational cost and the convergence speed for underparameterized neural networks. SVRG outperformed SGD after the first few epochs in this regime. 
In the case of overparameterized model, a setting that matches with modern day neural networks training, SGD strictly dominated SVRG by showing a faster convergence for all computational cost. Lemma 6 (Gradient Covariance). Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0, Σ) with Σ diagonal and θ * = 0, we have Proof. In the following proof, we define the entry-wise p power on vector x as x •p. Under our assumption µ = 0, θ Eq. 12 is a from The Matrix Cookbook (See section 8.2.3 in). Then, for its main diagonal term, we have: Hence, for C M(θ (t) ), we have: which is the first of Theorem 2. Notice, this can be generalized to any square matrix A not only for E[θ (t) θ (t) ], i.e. for any square matrix A ∈ R d×d, with x ∼ N (0, Σ) and Σ diagonal, since we have For batch gradient where [N] b is the index set of X b. Theorem 2. Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0, Σ) with Σ diagonal and θ * = 0, we can express C(M(θ)) as a function of m(θ): Then we derive following dynamics of expected second moment of θ: Proof. Since, E[b] = 0, and b is independent with θ (t), X b, we have: and, Since X b is independent with θ (t), we have: Thus, For its diagonal term, we have: This formula can be written as: where D THE PROOF OF LEMMA 3 Lemma 3. The dynamics of the second moment of the iterate following SVRG update rule is given by, 4 variance due tog 5 Variance reduction from control variate Proof. For SVRG update rule Eq. 8, we have: Using the update rule of SVRG, we can get the outer product of parameters as: Likewise, since E[N] = 0 and N is independent with X b, X N and θ (t), we have the expectation of equation 46, equation 47 equal to 0. And same as SGD, we also have Then, we give a significant formula about the expectation of θ (mT +t) θ (mT), utilized to derive the expected term related to variance reduction amount. and N is independent with X N and θ (mT), the expectation of Eq. 53 is equal to 0. Therefore, which suggests the covariance betweenĝ (mT +t) andg (mT +t) is exponentially decayed. For every other term appearing in Eq. 41, we have the following . First, similar with SGD, we have the formula about gradient descent shrinkage as: Using Eq. 54, we have following for variance reduction term from control variate. We first take expectation over θ (mT +t) θ (mT) with Eq. 54 due to the independence among X b, X N and θ. For the forth term, which represents the variance ofg (mT +t), we consider the independence between X b and X N and get Thus, Under our definition, it can be expressed as: 4 variance due tog 5 Variance reduction from control variate E THE PROOF OF THEOREM 4 Theorem 4. Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0, Σ) with Σ diagonal and θ * = 0, the dynamics for SVRG in m(θ) is given by: Proof. Form lemma 3 and lemma 6, we can get: where Recursively expending the above formula from m(θ ((m+1)T ) ) to m(θ (mT) ), we can get the following : = R Rm(θ (mT +T −2) ) − QP T −2 m(θ (mT) ) + F m(θ (mT) ) + N −1 n In other word, Eq. 79 describe the dynamic of expected second moment of iterate between two nearby snapshots, m(θ ((m+1)T ) ) = λ(α, b, T, N, Σ)m(θ (mT) ) + (I − R T)(I − R) In our theoretical analysis (Section 3), we evaluate a large batch gradientḡ to control variance. That is because any number of data points can be directly sampled form the true distribution. 
But in order to compare the computational cost between SVRG and SGD, we set the number of data points used to calculateḡ as N, which is slightly different with the original SVRG's setup of full-batch gradient. Therefore, we evaluate the sensitivity of N to illustrate when N is beyond a threshold, it will cause little difference in convergence speed for SVRG. From figure 5a, we can tell N has little effect on the convergence speed of SVRG under the noisy least square model, but it determines the constant term of label noise in Eq. 9 which determines the level of final loss. Besides, we also compare large batch SGD to SVRG in Figure 5b | Non-asymptotic analysis of SGD and SVRG, showing the strength of each algorithm in convergence speed and computational cost, in both under-parametrized and over-parametrized settings. | 1,080 | scitldr |
Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance. Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear. In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial to NAT training. We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data. Furthermore, a strong correlation is observed between the capacity of an NAT model and the optimal complexity of the distilled data for the best translation quality. Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models. We achieve the state-of-the-art performance for the NAT-based models, and close the gap with the autoregressive baseline on WMT14 En-De benchmark. Traditional neural machine translation (NMT) systems (; ;) generate sequences in an autoregressive fashion; each target token is predicted step-by-step by conditioning on the previous generated tokens in a monotonic (e.g. left-to-right) order. While such autoregressive translation (AT) models have proven successful, the sequential dependence of decisions precludes taking full advantage of parallelism afforded by modern hardware (e.g. GPUs) at inference time. On the other hand, there is a recent trend of non-autoregressive translation (NAT) models (;, trading the model's capacity for decoding efficiency by making it possible predict the whole sequence or multi-token chunks of the sequence simultaneously. Such a non-autoregressive factorization assumes that the output tokens are independent from each other. However, this assumption obviously does not hold in reality and as a NAT models generally perform worse than standard AT models. One key ingredient to reducing the performance degradation of NAT models that is used in almost all existing works ( ; ; , inter alia) is creation of training data through knowledge distillation . More precisely, sequence-level knowledge distillation -a special variant of the original approach -is applied during NAT model training by replacing the target side of training samples with the outputs from a pre-trained AT model trained on the same corpus with a roughly equal number of parameters. It is usually assumed that knowledge distillation's reduction of the "modes" (alternative translations for an input) in the training data is the key reason why distillation benefits NAT training. However, this intuition has not been rigorously tested, leading to three important open questions: • Exactly how does distillation reduce the "modes", and how we could we measure this reduction quantitatively? Why does this reduction consistently improve NAT models? • What is the relationship between the NAT model (student) and the AT model (teacher)? Are different varieties of distilled data better for different NAT models? • Due to distillation, the performance of NAT models is largely bounded by the choice of AT teacher. Is there a way to further close the performance gap with standard AT models? 
In this paper, we aim to answer the three questions above, improving understanding of knowledge distillation through empirical analysis over a variety of AT and NAT models. Specifically, our contributions are as follows: • We first visualize explicitly on a synthetic dataset how modes are reduced by distillation (§3.1). Inspired by the synthetic experiments, we further propose metrics for measuring complexity and faithfulness for a given training set. Specifically, our metrics are the conditional entropy and KL-divergence of word translation based on an external alignment tool, and we show these are correlated with NAT model performance (§3.2). • We conduct a systematic analysis (§4) over four AT teacher models and six NAT student models with various architectures on the standard WMT14 English-German translation benchmark. These experiments find a strong correlation between the capacity of an NAT model and the optimal dataset complexity for the best translation quality. • Inspired by these observations, we propose approaches to further adjust the complexity of the distilled data in order to match the model's capacity (§5). We also show that we can achieve the state-of-the-art performance for NAT and largely match the performance of the AT model. In order to model the joint probability of the output sequence y, NMT models usually generate each output token conditioned on the previous generated ones p(y|x) = T t=1 p(y t |y <t, x), known as the autoregressive factorization. To generate a translation from this model, one could predict one token at a time from left to right and greedily take arg max over each output probability distribution. Besides greedy decoding, beam search is often employed to generate better translations by considering a fixed number of hypotheses. In this work, we study non-autoregressive translation (NAT), a special subset of NMT models with an additional restriction (zeroth-order Markov assumption) upon the output predictions or a subset thereof. The simplest formulation of an NAT model independently factors the conditional distribution: p(y|x) = T t=1 p(y t |x). Standard NAT models adopt a similar architecture as the Transformer and make non-autoregressive predictions for the entire sequence with one forward pass of the decoder. However, because multiple translations are possible for a single input sentence (the so-called multi-modality problem;), vanilla NAT models can fail to capture the dependencies between output tokens and tend to make egregious mistakes such as outputting tokens repeatedly. To improve the model's ability to handle multi-modality, recent works have incorporated approaches including relaxing the fully non-autoregressive restriction and adopting K decoding passes (instead of just one) to iteratively refine the generated outputs;;; ); using latent variables (; ;) or structured information such as syntax trees to capture translation variation; training NAT models with objectives other than maximum likelihood; ) which ameliorates the effects of multi-modality. However, to achieve competitive performance with the autoregressive model, almost all existing NAT models rely on training using data distilled from a pre-trained AT model instead of the real parallel training set, as described below. Knowledge distillation was originally proposed for training a weaker student classifier on the targets predicted from a stronger teacher model. 
A typical approach is using the label probabilities produced by the teacher as "soft targets" q i = exp(z i /τ) j exp(z j /τ) for training the student model, where q i and z i are the probability and the logit of class i respectively and τ is the temperature. Prior work has shown the effectiveness of adopting knowledge distillation in adversarial defense , neural network compression , and fast inference of speech synthesis . In the context of sequence generation, extend knowledge distillation to the sentence level using "hard targets" from a pretrained large teacher model to train a small sequence generation model. More precisely, the teacher distribution q(t|x) is approximated by its mode: q(t|x) ≈ 1{t = arg max t∈T q(t|x)} with the following objectives: where t ∈ T is the space of possible target sequences. This can also be seen as a special case of the standard distillation over the sentence space when the temperature τ approaches 0, which is equivalent to taking the arg max over all feasible translations. While the "hard target"ŷ is the most likely translation predicted by the teacher, in practice we use beam search as an approximation. As mentioned earlier, almost all the existing literature trains NAT models using sequence-level knowledge distillation from a pre-trained AT model to achieve competitive performance. Particularly, it is common to train the teacher model as a standard Transformer with a roughly equal number of trainable parameters as the desired NAT model on the real data. In this paper, we will first study how this knowledge distillation process affects the behavior of NAT models. In this section, we start from an introductory example to illustrate how NAT models fail to capture the multi-modality of data. Then we propose a metric to assess the multi-modality of a data set and use that to test our hypothesis about the mechanism of knowledge distillation for NAT models. Dataset. We start by investigating NAT's difficulties in modeling multi-modality in output data using a synthetic setup where we explicitly include multiple modes in the training data. More specifically, we utilize three language pairs -English-German (En-De), English-French (En-Fr), and English-Spanish (En-Es) -from the Europarl parallel corpus. 2 We extract sentences that have aligned sentences for all languages, and create a multi-reference En-De/Es/Fr corpus. In this case every English input sentence always corresponds to target sentences in three different languages, which forms three explicit output modes. Notably, this is similar to the one-to-many translation setting in but in our case we do not have an explicit signal (e.g. target language tag) to tell the NMT model which target language to translate to. Models. We train both the AT and NAT models on this concatenated data set, then compare the distributions of translations with each other. We use the standard Transformer(base) model as the AT model, and a simplified version of as the NAT model where the decoder's inputs are monotonically copied from the encoder embeddings and a length predictor is learned to predict the target sentence length. Both models are trained for 300, 000 steps using maximum likelihood. After training, we use both models to translate the English sentences in the validation and test sets. Visualization of AT Outputs. The synthetic setup enables us to better understand and visualize the modes in the outputs more easily. First, we visualize the outputs from the AT model. 
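(Before turning to the visualizations, a brief sketch of how the sequence-level distilled corpus used throughout this paper is constructed: every source sentence of the training set is decoded with the trained AT teacher under beam search, and the reference target is replaced by the teacher's 1-best hypothesis. The `teacher.beam_search` interface and other names below are illustrative assumptions rather than a specific toolkit API.)

```python
def build_distilled_corpus(teacher, src_sentences, beam_size=5):
    """Sequence-level knowledge distillation: approximate the teacher's mode
    y_hat = argmax_t q(t | x) with the 1-best beam-search hypothesis."""
    distilled = []
    for x in src_sentences:
        y_hat = teacher.beam_search(x, beam_size=beam_size)[0]   # assumed API: ranked hypotheses
        distilled.append((x, y_hat))
    return distilled

# the NAT student is then trained on `distilled` exactly as it would be on the real pairs
```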
For every translated sentence, we visualize the estimated probability distribution of language classes as a point in Fig. 1 (a). This probability is calculated as the average of the posterior probability of each token, and it is estimated based on the Bayes' law: Figure 1: Posterior distribution of language IDs for the outputs from different models. Each translation is represented as a point inside the simplex ∆ 2 = {(p de, p es, p fr)|p k ∈, p de +p es +p fr = 1} where p k is the estimated probability of being translated into language k ∈ (de, es, fr). We distinguish the language that has the largest probability with different colors. where l i denotes the language class i, and p(y t |l i) is the token frequency of y t in language l i. We assume p(l i) follows a uniform distribution. As shown in Fig. 1 (a), points of the AT outputs are clustered closely to each vertex of the simplex which indicates that the AT model prefers to generate the whole sequence in one language. This phenomenon verifies our assumption that decoding from the AT model (distillation) is essentially selecting "modes" over the real data. Visualization of NAT Outputs. We visualize outputs for the NAT model trained on the same data in Fig. 1 (b). In contrast to the AT , the NAT points are scattered broadly inside the simplex, indicating the NAT model fails to capture the mode of language types. Instead, it predicts tokens mixed with multiple languages, which corroborates our hypothesis about the NAT model. Next, we create two datasets that have fewer modes than the original dataset. First, we randomly select a single target sentence from one of the three languages for each source sentence. Second, we perform distillation, decoding from the AT model trained on the combined training set. As noted in the AT , distillation will also roughly be selecting a language mode, but we conjecture that this selection may be more systematic, selecting a particular language for a particular type of training sentence. As shown in Fig. 1 (c) (d), NAT models trained on both of these datasets are more likely to choose one mode (language) when generating translations, showing that training with reduced modes is essential for NAT model. Furthermore, points in Fig. 1 (d) are clearly clustered better than (c) indicating that modes selected by AT models are indeed likely more systematic and easy to capture than those generated by randomly assigning a language for each sentence. To better study why distillation is crucial for NAT models, in this section, we propose quantitative measures for analyzing the complexity and faithfulness of parallel data, two properties that we hypothesize are important for NAT training. Measure of Complexity. Inspired by the observations in the synthetic experiments, we propose to use a measure of translation uncertainty, specifically operationalized as conditional entropy, as the measurement of complexity C(d) for any given dataset d: Under review as a conference paper at ICLR 2020 where we use x and y to denote a word in the source and target vocabulary respectively. T x and T y denote the length of the source and target sentences. To make the computation tractable, we make two additional assumptions on the conditional distribution p(y|x): • Assumption 1: We assume the target tokens are independent given the source sentence. Then the conditional entropy of a sentence can be converted into the sum of entropy of target words conditioned on the source sentence x. 
• Assumption 2: We assume the distribution of p(y t |x) follows an alignment model 3 where y t is is generated from the word alignment distribution p(y t |Align(y t)). This makes it possible to simplify the conditional entropy to the sum of entropy of target words conditioned on the aligned source words. The corpus level complexity C(d) is then calculated by adding up the conditional entropy H(Y|X = x) of all sentences and averaging over all source tokens, denoted To illustrate that the proposed metric is a reasonable measure of complexity of a parallel corpus, in Tab. 1 we compute C(d) for the parallel data of different language pairs, the concatenated data set and the distilled data from the AT model described in §3.1. We observe that the conditional entropy of the distilled data is much smaller than that of the original concatenated data as well as the random-selection data mentioned above. Additionally, we find that the conditional entropy of En-Es and En-Fr are similar but that of En-De is relatively larger, which can also explain why the student NAT model prefers to predict the modes of Es or Fr more often than De as shown in Fig. 1(d). Measure of Faithfulness. C(d) reflects the level of multi-modality of a parallel corpus, and we have shown that a simpler data set is favorable to an NAT model. However, it is not fair to assess the data set only by its complexity, e.g. we can trivially construct a simple data set with no variations in the output, which obviously won't be useful for training. The other important measurement of the data set is its faithfulness to the real data distribution. To measure the faithfulness of a parallel corpus d, we use KL-divergence of the alignment distribution between the real parallel data set r and an altered parallel data set d, denoted F (d): In this section, we perform an extensive study over a variety of non-autoregressive (NAT) models trained from different autoregressive (AT) teacher models, to assess how knowledge distillation affects the performance of NAT models. Data. We use the data set commonly used by prior work as our evaluation benchmark: WMT14 English-German (En-De) 4. We use newstest2013 as the validation set for selecting the best model, and newstest2014 as the test set. We learn a byte-pair encoding vocabulary of 37,000 on the tokenized data. AT Models. We set up four Transformer models with different parameter sizes: Transformertiny/small/base/big denoted as tiny, small, base, big respectively. We build base and big models following settings described in , and reduce the model sizes for tiny, small to create weaker teacher models. Details of the model architectures can be found in Appendix A. All the models are trained using the Adam optimizer with the maximum number of steps set to 300, 000. After training, we use the ing AT models to decode the whole training set with beam size 5 and replace the real target sentences to create a new parallel corpus. NAT Models. We consider the following NAT models, from vanilla to state-of-the-art. All the models are using the Transformer as the basic backbone and are (re-)implemented based on Fairseq 5 except for FlowSeq. We briefly outline the methods and parameters here, and describe detailed settings in the Appendix A. • Vanilla NAT : Similarly to §3.1, we use a simplified version where the decoder's inputs are directly copied from the encoder without considering latent variables. 
• FlowSeq : FlowSeq adopts normalizing flows as the latent variables to model the mappings from source sentences to a latent space. • NAT with Iterative Refinement (iNAT, : iNAT extends the vanilla NAT by iteratively reading and refining the translation. The number of iterations is set to 10 for decoding. • Insertion Transformer : InsT adopts a similar architecture as iNAT while generating the sequence by parallel insertion operations. Here, we only consider InsT trained with uniform loss as described in the original paper. • MaskPredict : MaskT adopts a masked language model to progressively generate the sequence from an entirely masked input. The number of iterations is set to be 10. • Levenshtein Transformer : LevT uses similar architectures as in InsT and MaskT while generating based on both insertion and deletion operations. We experiment with a base and big LevT model (LevT and LevT-big in Tab. 2). We also summarize the parameter size, performance and relative decoding speed of the NAT models introduced in Tab Iters is number of passes used in decoding for output length n and hyperparameter k. Pass is relative time used for one pass of decoding. As mentioned earlier, we analyze each model by training from both the real and 4 distillation targets. We train the NAT models for the same number of steps as the AT models. For a fair comparison of the actual ability of each NAT-based model, we test all the models based on "greedy decoding" without any advanced search algorithms (e.g. length beam , noisy parallel decoding , or re-ranking from the teacher model ). Notably, the vanilla NAT and FlowSeq output translations with single forward pass, while the remaining models are based on the iterative refinement. We compare different dimensions of the data generated by the four AT models and the real data set in Fig. 3. First, Fig. 3 (a) shows that as the capacity of the AT model increases, the complexity C(d) of the distilled data increases, which indicates that the multi-modality increases as well. At the same time, we observe that F (d) defined in §3.2 also decreases, showing that the distilled data more faithfully represents the word-level translation distribution of the original data. Second, we plot the BLEU score of the distilled data w.r.t to the real data set in (b) and we observe that the BLEU score of the distilled data from a higher-capacity teacher model is higher, which is both intuitive and in agreement with the on KL divergence. We also investigate how the relative ordering of words in the source and target sentences is changed during distillation. We use the fuzzy reordering score proposed in. A larger fuzzy reordering score indicates the more monotonic alignments. As shown in Fig 3 (c), the distilled data has significantly less reordering compared to the real parallel sentences, and the distilled data from a weaker AT teacher is more monotonic than a stronger AT teacher. We also show a randomly sampled example in Fig. 2 where compared to the real translation, the AT distilled target is much more monotonically aligned to the source sentence. This has potential benefits in that these simpler reordering patterns may be easier to learn for NAT models, but also disadvantages in that it may prevent NAT models from learning complex reordering patterns. In §4.2, we have shown that decoding with an AT model reduces the conditional entropy of the parallel data set, which mitigates multi-modality in the output data. 
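For reference, both data-level quantities used in this analysis can be computed from word alignments (e.g. produced by an external aligner such as fast-align) with a few lines of code. The sketch below treats the corpus as a bag of aligned (source word, target word) pairs; the direction of the KL divergence, the smoothing constant, and the corpus-level weighting are assumptions that follow the definitions above only up to minor bookkeeping.

```python
import math
from collections import Counter, defaultdict

def cond_tables(aligned_pairs):
    """aligned_pairs: iterable of (src_word, tgt_word) tuples from a word-aligned
    parallel corpus. Returns per-source target counts and source marginals."""
    joint, marg = defaultdict(Counter), Counter()
    for x, y in aligned_pairs:
        joint[x][y] += 1
        marg[x] += 1
    return joint, marg

def complexity(aligned_pairs):
    """C(d): entropy of the aligned word-translation distribution p(y | x),
    averaged over source tokens."""
    joint, marg = cond_tables(aligned_pairs)
    n = sum(marg.values())
    return sum(-(c / n) * math.log(c / marg[x])
               for x, ys in joint.items() for c in ys.values())

def faithfulness(altered_pairs, real_pairs, eps=1e-8):
    """F(d): KL divergence between the word-translation distributions of the
    altered corpus d and the real corpus r, averaged over source tokens."""
    ja, ma = cond_tables(altered_pairs)
    jr, mr = cond_tables(real_pairs)
    n, kl = sum(ma.values()), 0.0
    for x, ys in ja.items():
        for y, c in ys.items():
            p = c / ma[x]                                   # p_d(y | x)
            q = (jr[x][y] / mr[x]) if mr[x] else eps        # p_r(y | x)
            kl += (ma[x] / n) * p * math.log(p / max(q, eps))
    return kl
```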
But does the decoding method of the AT model affect this change in the data set? We also investigate different decoding strategies when creating distilled data, using the base Transformer model as the teacher and the vanilla NAT model as the student. In Tab. 3, four decoding methods are presented: sampling, sampling within the top-10 candidates, beam search, and greedy decoding. With the same AT model, the performance of the NAT model differs widely depending on the decoding approach, where distillation with beam search in the best performance. We can see that beam search or greedy decoding can reduce the complexity of the real data the most while maintaining high faithfulness. In contrast, sampling based decoding methods less aggressively reduce the modes in the output sequence. This finding is in concert with , who demonstrate that because beam search approximately selects the most probable translation, it effectively reduces diversity in the output translations compared to sampling or the true distribution. We next examine the relationship between the NAT students and distilled training data from different AT models. In Fig. 4, we demonstrate for the NAT models listed in §4.1. We use the test set performance on real data as a simple metric to measure the capacity of the NAT model and arrange the subfigures in an increasing order of the performance (left-to-right, top-to-bottom Transformer LevT LevT-big Figure 4: The performance of NAT models of varying capacity trained on both the real and the distilled data from tiny, small, base and big AT models on WMT14-ENDE newstest 2014 test sets. smaller complexity as measured above in §4.2. The best performance of NAT models -from lower capacity ones to higher capacity ones -is achieved with distilled data of lower complexity to higher complexity, e.g. the vanilla NAT model performs best when using the distilled data from a small Transformer whereas the LevT achieves the best performance when training with the distilled data from a big Transformer. Third, and notably, by simply changing the distilled data set upon which the models are trained, we are able to significantly improve the state-of-the-art for models in a particular class. For example, FlowSeq increased to 22, by simply changing from the distilled data of Transformer(base) to Transformer(small). Finally, we find that by distilled from a big-AT model, LevT is able to close the gap with the Transformer (base) with a similar number of parameters. Both LevT and LevT-big achieve the state-of-the-art performance for NAT-based models. The previous section shows that the optimal complexity of the dataset is highly correlated with the capacity of the NAT model. In this section, we introduce three techniques that can be used to alter the distilled data to match the capacity of NAT model. Specifically, these techniques can be used to simplify the data further (BANs, MoE) for a lower-capacity student model or increase faithfulness of the data set (Interpolation) for a higher-capacity student model. Born-Again Networks. We apply Born-Again neworks (BANs) to create a simplified dataset for NAT models. BANs were originally proposed as a self-distillation technique that uses the output distribution of a trained model to train the original model. Starting from the real data, we repeatedly train new AT models with decoded sentences from the AT model at the previous iteration. 
This process is repeated for k times and yields k distilled data sets, upon which we perform NAT training and examine how the k born-again teachers affect the performance of NAT students. We conduct experiments using the vanilla NAT model (which achieved the best performance with distilled data from a small Transformer in §4.4) and the base Transformer as the AT model. As shown in Fig. 5, we can make the following observations: (i) The performance of the base AT model almost remains unchanged during the reborn iterations. (ii) The performance of the vanilla NAT model can be improved by 2 BLEU when using the distilled data from reborn iteration 6. (iii) As the reborn iterations continue, the complexity of the distilled data decreases and becomes constant eventually. Meanwhile, the quality of the distilled data compared to the real data decreases. Mixture-of-Experts. The mixture-of-expert model (MoE;) learns different experts for diverse machine translation, and different mixture components were shown to capture con- In Fig. 6, we observe that the performance of the best expert of MoE tends to decrease as the number of experts increases. However, the complexity (C(d)) and faithfulness (F (D)) of distilled data from different MoE models has a relatively large variance. Compared to using the distilled data from a plain base AT model, the performance of NAT model is improved by 1.21 BLEU when using the distilled data from the MoE model with the number of experts of 3 which produces the distilled data with least complexity. Table 4: Results w/ and w/o sequencelevel interpolation with LevT. Sequence-Level Interpolation. §4.4 shows stronger NAT models (e.g. MaskT, LevT) have the ability to learn from the dataset that is closer to the real data, and achieve better performance. We adopt the sequence-level interpolation proposed in as a natural way to create a better dataset. Different from distillation, interpolation picks the sentence with the highest sentence-level BLEU score w.r.t. the ground truth from K−best beam search hypotheses. In our experiments, we first run beam search using the base Transformer model with a beam size of 5 then select the sentences with the highest BLEU score from the top-3 candidates. Tab. 4 compares the performance of LevT trained with distilled data from the AT model with the standard distillation or interpolation. We observe that selection with BLEU score from the base AT model (base-inter) improves the performance of LevT ∼ 0.4 BLEU while the dataset complexity C(d) does not increase much. In this paper, we first systematically examine why knowledge distillation improves the performance of NAT models. We conducted extensive experiments with autoregressive teacher models of different capacity and a wide range of NAT models. Furthermore, we defined metrics that can quantitatively measure the complexity of a parallel data set. Empirically, we find that a higher-capacity NAT model requires a more complex distilled data to achieve better performance. Accordingly, we propose several techniques that can adjust the complexity of a data set to match the capacity of an NAT model for better performance. A EXPERIMENTAL DETAILS Model All the AT models are implemented based on the Transformer model using fairseq, and we basically follow the fairseq examples to train the transformers 6. Following the notation from , we list the basic parameters of all the AT model we used: Models Table 5: Basic hyper-parameters of architecture for AT models. 
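(As a concrete illustration of the sequence-level interpolation technique from Sec. 5: run beam search with the base AT model and, for each source sentence, keep the hypothesis among the top candidates with the highest sentence-level BLEU against the real reference. The helper below is a sketch; the `teacher.beam_search` interface is an assumed placeholder, and NLTK's `sentence_bleu` is used only as one convenient sentence-level BLEU implementation.)

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def interpolate_target(teacher, src_tokens, ref_tokens, beam_size=5, top_k=3):
    """Pick, from the top-k hypotheses of a size-5 beam, the one closest to the
    reference under sentence-level BLEU; it replaces the reference for training."""
    smooth = SmoothingFunction().method1
    hyps = teacher.beam_search(src_tokens, beam_size=beam_size)[:top_k]   # assumed API
    return max(hyps, key=lambda h: sentence_bleu([ref_tokens], h,
                                                 smoothing_function=smooth))
```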
Training For all experiments, we adopt the Adam optimizer using β 1 = 0.9, β 2 = 0.98, = 1e − 8. The learning rate is scheduled using inverse sqrt with a maximum learning rate 0.0005 and 4000 warmup steps. We set the label smoothing as 0.1. All the models are run on 8 GPUs for 300, 000 updates with an effective batch size of 32, 000 tokens. The best model is selected based on the validation loss except for FlowSeq which uses valid BLEU score. Decoding After training, we use beam-search with a fixed beam size 5 for all AT models to create the distilled dataset. No length penalty is used. Model Tab. 2 also lists all the NAT models we test in this work. In general, all the NAT models except FlowSeq and LevT-big adopts a similar architecture and hyper-parameters as the Transformerbase (see Tab. 5). LevT-big is a naive extension of the original LevT model with a comparable parameter setting as Transformer-big (Tab. 5). For FlowSeq, we use the base model (FlowSeq-base) described in . We re-implemented the vanilla NAT as a simplified version of where instead of modeling fertility as described in the original paper, we monotonically copy the encoder embeddings to the input of the decoder. All the models except InsT require the additional module to predict the length of the output sequence, or the number of placeholders to be inserted, which is implemented as a standard softmax classifier over the lengths of. For LevT, we also have a binary classifier to predict the deletion of the incorrect tokens. Training Similar to the AT models, all the NAT models are trained using the Adam optimizer with the same learning rate scheduler, in which the warmup steps are set to 10, 000. We train the FlowSeq model on 32 GPUs with a batch size as 2048 sentences, while all the other models are trained on 8 GPUs with an effective batch size of 64, 000 tokens. Note that, the batch sizes for training NAT is typically larger than the AT model to make the learning sufficient. There are also specialized training settings for each models: • iNAT: following the original paper, we train the iNAT model jointly with 4 iterations of refinement during training. For each iteration, the model has the 50% probability to learn as a denoising autoencoder, and the rest of the probability to learn from the model's own prediction. • InsT : in this work, we only consider training the Insertion Transformer (InsT) using the slot-loss based on the uniform loss function . That is, we assign equal probabilities to all the insertable tokens inside each slot. • MaskT : following the original paper, we train the model as a typical masked language model where the ratio of masked tokens is sampled from 0 ∼ 100%. • LevT : in this work, we only consider sequence generation tasks, which means the training of LevT is very similar to InsT. We use sentences with randomly deleted tokens to learn insertion, and learn deletion based on the model's own prediction. Decoding For a fair comparison over all the NAT models, we use greedy decoding for all the model without considering any advance decoding methods such as searching or re-ranking from a teacher model. For the vanilla NAT and FlowSeq, decoding is quite straight-forward and simply picking the arg max at every position. For iNAT and MaskT, we fix the decoding steps as 10. Both InsT and LevT decode in an adaptive number of iterations, and we set the maximum iterations for both models as 10. 
A special EOS penalty that penalizes generating too short sequences is tuned based on the validation set for both InsT and LevT. For all models, final are calculated as tokenized BLEU score. The detailed dataset split for WMT14 En-De is shown in Tab. 6. In Fig. 7, we also plot the histogram of the conditional entropy of each pair of sentences H(y|x) in the real parallel data and different distilled data sets from the big-AT, base-AT, small-AT and tiny-AT respectively. It shows that the distribution of the sentence-level conditional entropy differs widely. The mode of H(y|x) in the real data is the highest and follows by distilled data from the big-AT, base-AT, small-AT and tiny-AT. This observation aligns with the complexity value C(d) proposed in §3.2. WMT'14 En-De 4,500,966 3000 3003 37,009 In Figure 8, we also showed with different metrics together with BLEU scores considering BLEU score sometimes cannot capture the change of the system. We considered 5 additional metrics in our experiments: METEOR , RIBES , ChrF (Popović, 2015) TER , and BEER . Not surprisingly, we find that all the metrics are correlated with the original BLEU scores quite well showing the similar trend as discussed earlier. Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification, which provides a principled rule of finding the optimal classification decision using probability and losses that accompany such decisions. In the problem of structured prediction, let x denote the input sequence and y denote the output label sequence. Let H denote all the possible hypothesis functions from the input to the output space: H = {h : X → Y}. Let r(y|x) denote the conditional risk on the input x, which is the expected loss of predicting y based on the posterior probabilities:, where L(y, y) is the loss function that penalizes predicting the true target y as y. The classification task aims to find a hypothesis function h that minimizes the overall risk R given by This is known as the Bayes risk. To minimize the overall risk, obviously we need to minimize the conditional risk for each input x. The Bayesian decision rule states that the global minimum of R(h) is achieved when the classifier make predictions that minimize each conditional risk given x and this gives the Bayes optimal classifier: Let us consider two loss functions defined in Eq. 5. First is the sequence-level loss L seq (y, y) = 1 − I(y = y), then in this case the Bayes classifier is:, which is the most probable output label sequence given the input sequence x. Second let us consider the token-level loss L tok (y, y) = T t=1 1 − I(y t = y t), i.e the sum of zero-one loss at each time step. We have: This suggests that the Bayes classifier finds the most probable label at each time step given the input sequence. To study how training data affects the performance of a weaker classifier, we construct a Hidden Markov Model (HMM) by sampling the parameters of the transition and emission probabilities uniformly within (0, a] and (0, b] respectively. A higher value of a and b indicates an HMM model with higher uncertainty. We refer this HMM as the "true HMM" as our real data generator. Next we consider a weaker classifier that uses a low-dimension bidirectional-LSTM (Bi-LSTM) to encode the input sequence and individual softmax functions at each time step to predict labels independently, which is referred as the "Bi-LSTM" classifier. 
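(A minimal sketch of how such a synthetic "true HMM" generator could be instantiated is given below. The uniform sampling of transition and emission parameters in (0, a] and (0, b] follows the description above; the row normalization, the sizes, and every other detail are illustrative assumptions.)

```python
import numpy as np

def make_hmm(n_states=5, n_obs=8, a=1.0, b=1.0, seed=0):
    """Sample transition/emission weights uniformly in (0, a] / (0, b], then
    normalize rows so they form valid conditional distributions (an added
    assumption to obtain a proper HMM)."""
    rng = np.random.default_rng(seed)
    trans = rng.uniform(0, a, size=(n_states, n_states))
    emit = rng.uniform(0, b, size=(n_states, n_obs))
    trans /= trans.sum(axis=1, keepdims=True)
    emit /= emit.sum(axis=1, keepdims=True)
    init = np.full(n_states, 1.0 / n_states)
    return init, trans, emit

def sample_pair(init, trans, emit, length=20, seed=1):
    """Draw one (x, y) pair from the joint distribution: y is the hidden label
    sequence, x the observed input sequence emitted by the HMM (vary the seed
    to draw different samples)."""
    rng = np.random.default_rng(seed)
    x, y, state = [], [], rng.choice(len(init), p=init)
    for _ in range(length):
        y.append(state)
        x.append(rng.choice(emit.shape[1], p=emit[state]))
        state = rng.choice(len(trans), p=trans[state])
    return np.array(x), np.array(y)
```

From such a generator, D_seq is obtained by relabeling each x with the Viterbi path of the true HMM, and D_tok by taking the arg max of the per-step posteriors from the forward-backward algorithm, as described in the following.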
Obviously, the Bi-LSTM classifier is not able to model the dependencies between output labels embedded in the HMM, and it is equivalent to a simplified non-autoregressive generation model. We generate the real training data D real = {(x 1, y 1), · · ·, (x N, y N)} of size N by sampling from the joint probability of the true HMM. Similarly we sample N test data points as the test data and N valid data points as the validation data. We evaluate the classifier's token-level accuracy tacc and sequence-level accuracy sacc on the test data respectively, where tacc =. These two metrics correspond to the token-level loss L tok and sequence-level loss L seq on each data point of the test data. First, we use h * seq (x) to generate the distillation labels y from the true HMM, which corresponds to applying the Viterbi decoding to each x i in D real. The training data set D seq is created with (x i, y i). Next, we use h * tok (x) to generate the distillation labelsŷ and create the training data D tok of (x i,ŷ i). To generateŷ, we apply the forward-backward algorithm to each x i in D real and obtain P (y t i |x i). We take arg max over the label space L:ŷ t i = arg max We use these three training data (D real, D tok, D seq) to train the Bi-LSTM classifier respectively. We repeat the experiment for 50 times by constructing 50 HMM models with different random seeds as the data generator. We find that when evaluating with the token-level accuracy tacc, models trained with D tok yields the best performance (Bi-LSTM trained with D tok win 97.6% runs); when evaluating with the sequence-level accuracy sacc, models trained with D seq yields the best performance (Bi-LSTM trained with D seq win 98.5% runs). This is because the Bi-LSTM classifier has difficulty modeling the true data distribution defined by an HMM. On the other hand, it is easier for the Bi-LSTM classifier to model the distributions of D seq and D tok. Data sets D seq and D tok define deterministic conditional distributions over the input data, which are much simpler than the real data distribution. By definition, D tok is created by the optimal Bayes classifier h * tok (x), this means that the Bi-LSTM classifier trained with D tok can better capture the distribution of P (y t |x) = max ut P (u t |x), which can generalize better to the test data when evaluated with the token-level accuracy. Similarly, Bi-LSTM trained with D seq performs better on the test data with the sequence-level metric. This corroborates our observation in machine translation task that NAT has difficulty in modeling the real conditional distribution of true sentence pairs. However, when using the distilled data translated from a pretrained autoregressive model with beam-search decoding, it performs better on the test set when evaluated with the BLEU score metric. | We systematically examine why knowledge distillation is crucial to the training of non-autoregressive translation (NAT) models, and propose methods to further improve the distilled data to best match the capacity of an NAT model. | 1,081 | scitldr |
Owing to their ability to both effectively integrate information over long time horizons and scale to massive amounts of data, self-attention architectures have recently shown breakthrough success in natural language processing (NLP), achieving state-of-the-art in domains such as language modeling and machine translation. Harnessing the transformer's ability to process long time horizons of information could provide a similar performance boost in partially-observable reinforcement learning (RL) domains, but the large-scale transformers used in NLP have yet to be successfully applied to the RL setting. In this work we demonstrate that the standard transformer architecture is difficult to optimize, which was previously observed in the supervised learning setting but becomes especially pronounced with RL objectives. We propose architectural modifications that substantially improve the stability and learning speed of the original Transformer and XL variant. The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses LSTMs on challenging memory environments and achieves state-of-the-art on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture. We show that the GTrXL, trained using the same losses, has stability and performance that consistently matches or exceeds a competitive LSTM baseline, including on more reactive tasks where memory is less critical. GTrXL offers an easy-to-train, simple-to-implement but substantially more expressive architectural alternative to the standard multi-layer LSTM ubiquitously used for RL agents in partially-observable environments. It has been argued that self-attention architectures deal better with longer temporal horizons than recurrent neural networks (RNNs): by construction, they avoid compressing the whole past into a fixed-size hidden state and they do not suffer from vanishing or exploding gradients in the same way as RNNs. Recent work has empirically validated these claims, demonstrating that self-attention architectures can provide significant gains in performance over the more traditional recurrent architectures such as the;;. In particular, the Transformer architecture has had breakthrough success in a wide variety of domains: language modeling;, machine translation , summarization (Liu & Lapata), question answering (;, multi-task representation learning for NLP (; ;, and algorithmic tasks . The repeated success of the transformer architecture in domains where sequential information processing is critical to performance makes it an ideal candidate for partially observable RL problems, where episodes can extend to thousands of steps and the critical observations for any decision often span the entire episode. Yet, the RL literature is dominated by the use of LSTMs as the main mechanism for providing memory to the agent; ). Despite progress at designing more expressive memory architectures; that perform better than LSTMs in memory-based tasks and partially-observable environments, they have not seen widespread adoption in RL agents perhaps due to their complex implementation, with the LSTM being seen as the go-to solution for environments where memory is required. In contrast to these other memory architectures, the transformer is well-tested in many challenging domains and has seen several open-source implementations in a variety of deep learning frameworks 1. 
Motivated by the transformer's superior performance over LSTMs and the widespread availability of implementations, in this work we investigate the transformer architecture in the RL setting. In particular, we find that the canonical transformer is significantly difficult to optimize, often ing in performance comparable to a random policy. This difficulty in training transformers exists in the supervised case as well. Typically a complex learning rate schedule is required (e.g., linear warmup or cosine decay) in order to train (;, or specialized weight initialization schemes are used to improve performance . These measures do not seem to be sufficient for RL. , for example, transformers could not solve even simple bandit tasks and tabular Markov Decision Processes (MDPs), leading the authors to hypothesize that the transformer architecture was not suitable for processing sequential information. However in this work we succeed in stabilizing training with a reordering of the layer normalization coupled with the addition of a new gating mechanism to key points in the submodules of the transformer. Our novel gated architecture, the Gated Transformer-XL (GTrXL) (shown in Figure 1, Right), is able to learn much faster and more reliably and exhibit significantly better final performance than the canonical transformer. We further demonstrate that the GTrXL achieves state-ofthe-art when compared to the external memory architecture MERLIN on the multitask DMLab-30 suite . Additionally, we surpass LSTMs significantly on memory-based DMLab-30 levels while matching performance on the reactive set, as well as significantly outperforming LSTMs on memory-based continuous control and navigation environments. We perform extensive ablations on the GTrXL in challenging environments with both continuous actions and high-dimensional observations, testing the final performance of the various components as well as the GTrXL's robustness to seed and hyperparameter sensitivity compared to LSTMs and the canonical transformer. We demonstrate a consistent superior performance while matching the stability of LSTMs, providing evidence that the GTrXL architecture can function as a drop-in replacement to the LSTM networks ubiquitously used in RL. The transformer network consists of several stacked blocks that repeatedly apply self-attention to the input sequence. The transformer layer block itself has remained relatively constant since its original introduction (; ;). Each layer consists of two submodules: an attention operation followed by a position-wise multi-layer network (see Figure 1 (left)). The input to the transformer block is an embedding from the previous layer E (l−1) ∈ R T ×D, where T is the number of time steps, D is the hidden dimension, and l ∈ [0, L] is the layer index with L being the total number of layers. We assume E is an arbitrarily-obtained input embedding of dimension [T, D], e.g. a word embedding in the case of language modeling or an embedding of the per-timestep observations in an RL environment. Multi-Head Attention: The Multi-Head Attention (MHA) submodule computes in parallel H softattention operations for every time step. A residual connection (a) and layer normalization are then applied to the output (see Appendix C for more details): Multi-Layer Perceptron: The Multi-Layer Perceptron (MLP) submodule applies a 1 × 1 temporal convolutional network f (l) (i.e., kernel size 1, stride 1) over every step in the sequence, producing a new embedding tensor E (l) ∈ R T ×D. 
As in, the network output does not include an activation function. After the MLP, there is a residual update followed by layer normalization: Relative Position Encodings: The basic MHA operation does not take sequence order into account explicitly because it is permutation invariant. Positional encodings are a widely used solution in Canonical Transformer(-XL) block with multi-head attention and position-wise MLP submodules and the standard layer normalization placement with respect to the residual connection (a). Center: TrXL-I moves the layer normalization to the input stream of the submodules. Coupled with the residual connections, there is a gradient path that flows from output to input without any transformations. Right: The GTrXL block, which additionally adds a gating layer in place of the residual connection of the TrXL-I. domains like language where order is an important semantic cue, appearing in the original transformer architecture . To enable a much larger contextual horizon than would otherwise be possible, we use the relative position encodings and memory scheme used in. In this setting, there is an additional T -step memory tensor M (l) ∈ R T ×D, which is treated as constant during weight updates. The MHA submodule then becomes: where StopGrad is a stop-gradient function that prevents gradients flowing backwards during backpropagation. We refer to Appendix C for a more detailed description. While the transformer architecture has achieved breakthrough in modeling sequences for supervised learning tasks (; ;, a demonstration of the transformer as a useful RL memory has been notably absent. Previous work has highlighted training difficulties and poor performance . When transformers have not been used for temporal memory but instead as a mechanism for attention over the input space, they have had success-notably in the challenging multi-agent Starcraft 2 environment. Here, the transformer was applied solely across Starcraft units and not over time. Multiplicative interactions have been successful at stabilizing learning across a wide variety of architectures (; ; . Motivated by this, we propose the introduction of powerful gating mechanisms in place of the residual connections within the transformer block, coupled with changes to the order of layer normalization in the submodules. As will be empirically demonstrated, the "Identity Map Reordering" and gating mechanisms are critical for stabilizing learning and improving performance. Our first change is to place the layer normalization on only the input stream of the submodules, a modification described in several previous works (b; ;). The model using this Identity Map Reordering is termed TrXL-I in the following, and is depicted visually in Figure 1 (center). A key benefit to this reordering is that it now enables an identity map from the input of the transformer at the first layer to the output of the transformer after the last layer. This is in contrast to the canonical transformer, where there are a series of layer normalization operations that non-linearly transform the state encoding. Because the layer norm reordering causes a path where two linear layers are applied in sequence, we apply a ReLU activation to each sub-module output before the residual connection (see Appendix C for equations). The TrXL-I already exhibits a large improvement in stability and performance over TrXL (see Section 4.3.1). 
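To make the two block layouts concrete, the sketch below contrasts the canonical post-LayerNorm block with the TrXL-I ("Identity Map Reordering") block. It is a minimal single-segment PyTorch sketch: the TrXL memory and relative position encodings are omitted here (a separate sketch covering them appears with the Appendix C description further below), the position-wise MLP is written as per-timestep linear layers (equivalent to the 1x1 temporal convolution described above), and the hidden sizes and the use of nn.MultiheadAttention are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class CausalMHA(nn.Module):
    """Masked multi-head self-attention over a single segment (no TrXL memory)."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):                         # x: [B, T, D]
        T = x.size(1)
        future = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        out, _ = self.attn(x, x, x, attn_mask=future)   # True entries are masked out
        return out

class CanonicalBlock(nn.Module):
    """Post-LN transformer block: each submodule is followed by residual + LayerNorm."""
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.mha = CausalMHA(d_model, n_heads)
        # position-wise MLP: per-timestep linear layers == a 1x1 temporal convolution
        self.mlp = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        y = self.ln1(x + self.mha(x))             # attention submodule: residual, then LayerNorm
        return self.ln2(y + self.mlp(y))          # MLP submodule: residual, then LayerNorm

class TrXLIBlock(nn.Module):
    """Identity Map Reordering: LayerNorm only on the input stream of each submodule,
    with a ReLU on the submodule output before the residual, so an untransformed
    identity path runs from the block input to its output."""
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.mha = CausalMHA(d_model, n_heads)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        y = x + torch.relu(self.mha(self.ln1(x)))
        return y + torch.relu(self.mlp(self.ln2(y)))
```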
One hypothesis as to why the Identity Map Reordering improves is as follows: assuming that the submodules at initialization produce values that are in expectation near zero, the state encoding is passed un-transformed to the policy and value heads, enabling the agent to learn a Markovian policy at the start of training (i.e., the network is initialized such that π(·|s t, . . ., s 1) ≈ π(·|s t) and V π (s t |s t−1, . . ., s 1) ≈ V π (s t |s t−1)). In many environments, reactive behaviours need to be learned before memory-based ones can be effectively utilized, i.e., an agent needs to learn how to walk before it can learn how to remember where it has walked. We further improve performance and optimization stability by replacing the residual connections in Equations 4 and 2 with gating layers. We call the gated architecture with the identity map reordering the Gated Transformer(-XL) (GTrXL). The final GTrXL layer block is written below: where g is a gating layer function. A visualization of our final architecture is shown in Figure 1 (right), with the modifications from the canonical transformer highlighted in red. In our experiments we ablate a variety of gating layers with increasing expressivity: Input: The gated input connection has a sigmoid modulation on the input stream, similar to the short-cut-only gating from He et al. (2016b): The gated output connection has a sigmoid modulation on the output stream: g ) y Highway: The highway connection modulates both streams with a sigmoid: Sigmoid-Tanh: The sigmoid-tanh (SigTanh) gate (Van den) is similar to the Output gate but with an additional tanh activation on the output stream: g y) Gated-Recurrent-Unit-type gating: The Gated Recurrent Unit (GRU) is a recurrent network that performs similarly to an LSTM but has fewer parameters. We adapt its powerful gating mechanism as an untied activation function in depth: We have claimed that the Identity Map Reordering aids policy optimization because it initializes the agent close to a Markovian policy / value function. If this is indeed the cause of improved stability, we can explicitly initialize the various gating mechanisms to be close to the identity map. This is the purpose of the bias b g in the applicable gating layers. We later demonstrate in an ablation that initially setting b In this section, we provide experiments on a variety of challenging single and multi-task RL domains: DMLab-30 , Numpad and Memory Maze (see Fig. 8). Crucially we demonstrate that the proposed Gated Transformer-XL (GTrXL) not only shows substantial improvements over LSTMs on memory-based environments, but suffers no degradation of performance on reactive environments. The GTrXL also exceeds MERLIN , an external memory architecture which used a Differentiable Neural Computer coupled with auxiliary losses, surpassing its performance on both memory and reactive tasks. For all transformer architectures except when otherwise stated, we train relatively deep 12-layer networks with embedding size 256 and memory size 512. These networks are comparable to the state-of-the-art networks in use for small language modeling datasets (see enwik8 in). We chose to train deep networks in order to demonstrate that our do not necessarily sacrifice complexity for stability, i.e. we are not making transformers stable for RL simply by making them shallow. Our networks have receptive fields that can potentially span any episode in the environments tested, with an upper bound on the receptive field of 6144 (12 layers × 512 memory). 
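A sketch of the GRU-type gating and the resulting GTrXL block follows. The gate is the standard GRU update adapted as an untied function in depth; since the gating equations above are garbled in this copy, the exact parameterization, and the default value of the bias b_g that implements the gated identity initialization by pushing the update gate toward zero at the start of training, should be read as assumptions rather than the paper's verbatim formulation.

```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """GRU-style gating g(x, y) used in place of the residual connection x + y.
    x is the skip/input stream, y the (ReLU-activated) submodule output. A positive
    bias b_g pushes the update gate toward zero at initialization, so g(x, y) ~ x
    and each layer starts close to an identity map."""
    def __init__(self, d_model, gate_bias=2.0):   # gate_bias value is illustrative
        super().__init__()
        self.wr = nn.Linear(d_model, d_model, bias=False)
        self.ur = nn.Linear(d_model, d_model, bias=False)
        self.wz = nn.Linear(d_model, d_model, bias=False)
        self.uz = nn.Linear(d_model, d_model, bias=False)
        self.wg = nn.Linear(d_model, d_model, bias=False)
        self.ug = nn.Linear(d_model, d_model, bias=False)
        self.bg = nn.Parameter(torch.full((d_model,), float(gate_bias)))

    def forward(self, x, y):
        r = torch.sigmoid(self.wr(y) + self.ur(x))             # reset gate
        z = torch.sigmoid(self.wz(y) + self.uz(x) - self.bg)   # update gate, biased toward 0
        h = torch.tanh(self.wg(y) + self.ug(r * x))            # candidate activation
        return (1.0 - z) * x + z * h

class GTrXLBlock(nn.Module):
    """TrXL-I block with GRU gates replacing both residual connections (Figure 1, right)."""
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.gate1, self.gate2 = GRUGate(d_model), GRUGate(d_model)

    def forward(self, x):                                      # x: [B, T, D]
        T = x.size(1)
        future = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        a, _ = self.attn(self.ln1(x), self.ln1(x), self.ln1(x), attn_mask=future)
        y = self.gate1(x, torch.relu(a))                       # gated "residual" after attention
        return self.gate2(y, torch.relu(self.mlp(self.ln2(y))))
```

Stacking twelve such blocks with embedding size 256, plus the memory and relative position encodings omitted here, corresponds to the configuration described above.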
Future work will look at scaling transformers in RL even further, e.g. towards the 52-layer network in. See App. B for experimental details. For all experiments, we used V-MPO , an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) (a; b) that performs approximate policy iteration based on a learned state-value function V (s) instead of the state-action value function used in MPO. Rather than directly updating the parameters in the direction of the policy gradient, V-MPO uses the estimated advantages to first construct a target distribution for the policy update subject to a sample-based KL constraint, then calculates the gradient that partially moves the parameters toward that target, again subject to a KL constraint. V-MPO was shown to achieve state-of-the-art for LSTM-based agents on the multi-task DMLab-30 benchmark suite. We first present of the best performing GTrXL variant, the GRU-type gating, against a competitive LSTM baseline, demonstrating a substantial improvement on the multi-task DMLab-30 do- 75.2 ± 10.4 GTrXL (SigTanh) 101.0 ± 1.3 83.9 ± 0.7 Table 1: Final human-normalized return averaged across all 30 DMLab levels for baselines and GTrXL variants. We also include the 100-capped score where the per-level mean score is clipped at 100, providing a metric that is proportional to the percentage of levels that the agent is superhuman. We see that the GTrXL (GRU) surpasses LSTM by a significant gap and exceeds the performance of MERLIN trained for 100 billion environment steps. The GTrXL (Output) and the proposed reordered TrXL-I also surpass LSTM but perform slightly worse than MERLIN and are not as robust as GTrXL (GRU) (see Sec. 4.3.2). We sample 6-8 hyperparameters per model. We include standard error over runs. Figure 3: Numpad demonstrating that the GTrXL has much better memory scaling properties than LSTM. Left: As the Numpad environment's memory requirement increases (because of larger pad size), the GTrXL suffers much less than LSTM. However, because of the combinatorial nature of Numpad, the GTrXL eventually also starts dropping in performance at 4x4. We plot mean and standard error of the last 200 episodes after training each model for 0.15B, 1.0B and 2.0B environment steps for Numpad size 2, 3 and 4, respectively. Center, Right: Learning curves for the GTrXL on 2 × 2 and 4 × 4 Numpad. Even when the LSTM is trained for twice as long, the GTrXL still has a substantial improvement over it. We plot 5 hyperparameter settings per model for learning curves. main . DMLab-30 is a large-scale, multitask benchmark comprising 30 firstperson 3D environments with image observations and has been widely used as a benchmark for architectural and algorithmic improvements (; ;). The levels test a wide agent competencies such as language comprehension, navigation, handling of partial observability, memory, planning, and other forms of long horizon reasoning, with episodes lasting over 4000 environment steps. Figure 2 shows mean return over all levels as training progresses, where the return is human normalized as done in previous work (meaning a human has a per-level mean score of 100 and a random policy has a score of 0), while Table 1 has the final performance at 10 billion environment steps. The GTrXL has a significant gap over a 3-layer LSTM baseline trained using the same V-MPO algorithm. Furthermore, we included the final of a previously-published external memory architecture, MERLIN . 
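For reference, a small sketch of the human-normalized scoring used in Table 1 and Figure 2. The text fixes only the two reference points (a random policy at 0, a human at 100); the linear interpolation between per-level random and human reference scores is the usual convention and is assumed here, and the reference arrays are hypothetical inputs.

```python
import numpy as np

def human_normalized(agent_scores, random_scores, human_scores):
    """Per-level human-normalized return: a random policy maps to 0, a human to 100."""
    agent, rnd, hum = map(np.asarray, (agent_scores, random_scores, human_scores))
    return 100.0 * (agent - rnd) / (hum - rnd)

def dmlab30_summary(per_level_normalized):
    """Mean and 100-capped mean over the 30 levels, as reported in Table 1."""
    s = np.asarray(per_level_normalized)
    return s.mean(), np.minimum(s, 100.0).mean()
```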
Because MERLIN was trained for 100 billion environment steps with a different algorithm, IMPALA, and also involved an auxiliary loss critical for the memory component to function, the learning curves are not directly comparable and we only report the final performance of the architecture as a dotted line. Despite the differences, our demonstrate that the GTrXL can match the previous state-of-the-art on DMLab-30. An informative split between a set of memory-based levels and more reactive ones (listed in Appendix D) reveals that our model specifically has large improvements in environments where memory plays a critical role. Meanwhile, GTrXL also shows improvement over LSTMs on the set of reactive levels, as memory can still be effectively utilized in some of these levels. Figure 4: Learning curves for the gating mechanisms, along with MERLIN score at 100 billion frames as a reference point. We can see that the GRU performs as well as any other gating mechanism on the reactive set of tasks. On the memory environments, the GRU gating has a significant gain in learning speed and attains the highest final performance at the fastest rate. We plot both mean (bold) and the individual 6-8 hyperparameter samples per model (light). Figure 5: Sensitivity analysis of GTrXL variants versus TrXL and LSTM baselines. We sample 25 different hyperparameter sets and seeds and plot the ranked average return at 3 points during training (0.5B, 1.0B and 2.0B environment steps). Higher and flatter lines indicate more robust architectures. The same hyperparameter sampling distributions were used across models (see Appendix B). We plot human performance as a dotted line. We next demonstrate that the GTrXL scales better compared to an LSTM when an environment's temporal horizon is increased, using the "Numpad" continuous control task of which allows an easy combinatorial increase in the temporal horizon. In Numpad, a robotic agent is situated on a platform resembling the 3x3 number pad of a telephone (generalizable to N × N pads). The agent can interact with the pads by colliding with them, causing them to be activated (visualized in the environment state as the number pad glowing). The goal of the agent is to activate a specific sequence of up to N 2 numbers, but without knowing this sequence a priori. The only feedback the agent gets is by activating numbers: if the pad is the next one in the sequence, the agent gains a reward of +1, otherwise all activated pads are cleared and the agent must restart the sequence. Each correct number in the sequence only provides reward once, i.e. each subsequent activation of that number will no longer provide rewards. Therefore the agent must explicitly develop a search strategy to determine the correct pad sequence. Once the agent completes the full sequence, all pads are reset and the agent gets a chance to repeat the sequence again for more reward. This means higher reward directly translates into how well the pad sequence has been memorized. An image of the scenario is provided in Figure 3. There is the restriction that contiguous pads in the sequence must be contiguous in space, i.e. the next pad in the sequence can only be in the Moore neighborhood of the previous pad. Furthermore, no pad can be pressed twice in the sequence. We present two in this environment in Figure 3. The first measures the final performance of the trained models as a function of the pad size. 
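To make the Numpad reward structure explicit, below is a sketch of its reward/reset bookkeeping only; the physics, observations and episode handling are omitted, and the treatment of re-hitting an already-activated pad (no reward, no reset) is an interpretation of the description above rather than something the text states outright.

```python
class NumpadLogic:
    """Reward/reset bookkeeping for an N x N Numpad with a hidden target sequence.
    Hitting the next pad in the sequence gives +1 (each pad rewards once per traversal);
    hitting a wrong pad clears all activated pads; completing the full sequence resets
    the pads so the same rewards can be collected again within the episode."""

    def __init__(self, sequence):
        self.sequence = sequence        # e.g. [4, 5, 8, 7] on a 3x3 pad, contiguous in space
        self.progress = 0               # index of the next pad that must be hit

    def step(self, pad):
        if pad == self.sequence[self.progress]:
            self.progress += 1
            if self.progress == len(self.sequence):
                self.progress = 0       # full sequence completed: pads reset
            return 1.0
        if pad in self.sequence[:self.progress]:
            return 0.0                  # already-activated pad: no reward (assumed: no reset)
        self.progress = 0               # incorrect pad: all activated pads are cleared
        return 0.0
```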
We can see that LSTM performs badly on all 3 pad sizes, and performs worse as the pad size increases from 2 to 4. The GTrXL performs much better, and almost instantly solves the environment with its much more expressive memory. On the center Figure 6: Learning curves comparing a thinner GTrXL (GRU) with half the embedding dimension of the other presented gated variants and TrXL baselines. The Thin GTrXL (GRU) has fewer parameters than any other model presented but still matches the performance of the best performing counterpart, the GTrXL (Output), which has over 10 million more parameters. We plot both mean (bold) and 6-8 hyperparameter settings (light) per model. and right images, we provide learning curves for the 2 × 2 and 4 × 4 Numpad environments, and show that even when the LSTM is trained twice as long it does not reach GTrXL's performance. We demonstrated that the GRU-type-gated GTrXL can achieve state-of-the-art on DMLab-30, surpassing both a deep LSTM and an external memory architecture, and also that the GTrXL has a memory which scales better with the memory horizon of the environment. However, the question remains whether the expressive gating mechanisms of the GRU could be replaced by simpler alternatives. In this section, we perform extensive ablations on the gating variants described in Section 3.2, and show that the GTrXL (GRU) has improvements in learning speed, final performance and optimization stability over all other models, even when controlling for the number of parameters. We first report the performance of the gating variants in DMLab-30. Table 1 and Figure 4 show the final performance and training curves of the various gating types in both the memory / reactive split, respectively. The canonical TrXL completely fails to learn, while the TrXL-I improves over the LSTM. Of the gating varieties, the GTrXL (Output) can recover a large amount of the performance of the GTrXL (GRU), especially in the reactive set, but as shown in Sec. 4.3.2 is generally far less stable. The GTrXL (Input) performs worse than even the TrXL-I, reinforcing the identity map path hypothesis. Finally, the GTrXL (Highway) and GTrXL (SigTanh) are more sensitive to the hyperparameter settings compared to the alternatives, with some settings doing worse than TrXL-I. Beyond improved performance, we next demonstrate a significant reduction in hyperparameter and seed sensitivity for the GTrXL (GRU) compared to baselines and other GTrXL variants. We use the "Memory Maze" environment, a memory-based navigation task in which the agent must discover the location of an apple randomly placed in a maze of blocks. The agent receives a positive reward for collecting the apple and is then teleported to a random location in the maze, with the apple's position held fixed. The agent can make use of landmarks situated around the room to return as quickly as possible to the apple for subsequent rewards. Therefore, an effective mapping of the environment in more frequent returns to the apple and higher reward. We chose to perform the sensitivity ablation on Memory Maze because it requires the use of longrange memory to be effective and it includes both continuous and discrete action sets (details in Appendix A) which makes optimization more difficult. In Figure 5, we sample 25 independent V-MPO hyperparameter settings from a wide range of values and train the networks to 2 billion environment steps (see Appendix B). 
Then, at various points in training (0.5B, 1.0B and 2.0B), we rank all runs by their mean return and plot this ranking. Models with curves which are both higher and flatter are thus more robust to hyperparameters and random seeds. Our demonstrate that the GTrXL (GRU) can learn this challenging memory environment in much fewer environment steps than LSTM, and that GTrXL (GRU) beats the other gating variants in stability by a large margin, thereby offering a substantial reduction in necessary hyperparameter tuning. The values in Table 3 list what percentage of the 25 runs per model had losses that diverged to infinity. We can see that the only model reaching human performance in 2 billion environment steps is the GTrXL (GRU), with 10 runs having a mean score 8 and above. For the final gating ablation, we compare transformer variants while tracking their total parameter count to control for the increase in capacity caused by the introduction of additional parameters in the gating mechanisms. To demonstrate that the advantages of the GTrXL (GRU) are not solely due to an increase in parameter count, we halve the number of attention heads (which also effectively halves the embedding dimension due to the convention that the embedding size is the number of heads multiplied by the attention head dimension). The effect is a substantial reduction in parameter count, ing in less parameters than even the canonical TrXL. Fig. 6 and Tab. 2 compare the different models to the "Thin" GTrXL (GRU), with Tab. 2 listing the parameter counts. We include a parameter-matched LSTM model with 12 layers and 512 hidden size. The Thin GTrXL (GRU) surpasses every other model (within variance) except the GTrXL (GRU), even surpassing the next best-performing model, the GTrXL (Output), with over 10 million less parameters. All applicable gating variants in the previous sections were trained with the gated identity initialization. We observed in initial Memory Maze that the gated identity initialization significantly improved optimization stability and learning speed. Figure 7 compares an otherwise identical 4-layer GTrXL (GRU) trained with and without the gated identity initialization. Similarly to the previous sensitivity plots, we plot the ranked mean return of 10 runs at various times during training. As can be seen from Fig. 7, there is a significant gap caused by the bias initialization, suggesting that preconditioning the transformer to be close to Markovian in large learning speed gains. Gating has been shown to be effective to address the vanishing gradient problem and thus improve the learnability of recurrent models. LSTM networks rely on an input, forget and output gate that protect the update of the cell. GRU is another popular gated recurrent architecture that simplifies the LSTM cell, reducing the number of gates to two. Finding an optimal gating mechanism remains an active area of research, with other existing proposals (; ;), as well as works trying to discover optimal gating by neural architecture search More generally, gating and multiplicative interactions have a long history . Gating has been investigated previously for improving the representational power of feedforward and recurrent models (Van den ;), as well as learnability . Initialization has also played a crucial role in making deep models trainable (; ;). There has been a wide variety of work looking at improving memory in reinforcement learning agents. 
External memory approaches typically have a regular feedforward or recurrent policy interact with a memory database through read and write operations. Priors are induced through the design of the specific read/write operations, such as those resembling a digital computer (; or an environment map . An alternative non-parametric perspective to memory stores an entire replay buffer of the agent's past observations, which is made available for the agent to itself reason over either through fixed rules or an attention operation . Others have looked at improving performance of LSTM agents by extending the architecture with stacked hierarchical connections / multiple temporal scales and auxiliary losses ) or allowing an inner-loop update to the RNN weights . Other work has examined self-attention in the context of exploiting relational structure within the inputspace or within recurrent memories. In this paper we provided evidence that confirms previous observations in the literature that standard transformer models are unstable to train in the RL setting and often fail to learn completely . We presented a new architectural variant of the transformer model, the GTrXL, which has increased performance, more stable optimization, and greater robustness to initial seed and hyperparameters than the canonical architecture. The key contributions of the GTrXL are reordered layer normalization modules and a gating layer instead of the standard residual connection. We performed extensive ablation experiments testing the robustness, ease of optimization and final performance of the gating layer variations, as well as the effect of the reordered layer normalization. These empirically demonstrate that the GRU-type gating performs best across all metrics, exhibiting comparable robustness to hyperparameters and random seeds as an LSTM while still maintaining a performance improvement. Furthermore, the GTrXL (GRU) learns faster, more stably and achieves a higher final performance (even when controlled for parameters) than the other gating variants on the challenging multitask DMLab-30 benchmark suite. Having demonstrated substantial and consistent improvement in DMLab-30, Numpad and Memory Maze over the ubiquitous LSTM architectures currently in use, the GTrXL makes the case for wider adoption of transformers in RL. A core benefit of the transformer architecture is its ability to scale to very large and deep models, and to effectively utilize this additional capacity in larger datasets. In future work, we hope to test the limits of the GTrXL's ability to scale in the RL setting by providing it with a large and varied set of training environments. Numpad: Numpad has three actions, two of which move the sphere towards some direction in the x,y plane and the third allows the agent to jump in order to get over a pad faster. The observation consists of a variety of proprioceptive information (e.g. position, velocity, acceleration) as well as which pads in the sequence have been correctly activated (these will shut off if an incorrect pad is later hit), and the previous action and reward. Episodes last a fixed 500 steps and the agent can repeat the correct sequence any number of times to receive reward. Observations were processed using a simple 2-layer MLP with tanh activations to produce the transformer's input embedding. Ignoring the "jump" and "crouch" actions which we do not use, an action in the native DMLab action space consists of 5 integers whose meaning and allowed values are given in Table 4. 
Following previous work on DMLab , we used the reduced action set given in Table 5 with an action repeat of 4. Observations are 72 × 96 RGB images. Some levels require a language input, and for that all models use an additional 64-dimension LSTM to process the sentence. , the DMLab Arbitrary Visuomotor Mapping task was specifically used to highlight the MERLIN architecture's ability to utilize memory. In Figure 9 we show that, given a similarly reduced action set as used in , see Table 6, the GTrXL architecture can also reliably attain human-level performance on this task. Memory Maze: An action in the native Memory Maze action space consists of 8 continuous actions and a single discrete action whose meaning and allowed values are given in Table 7. Unlike for DMLab, we used a hybrid continuous-discrete distribution to directly output policies in the game's native action space. Observations are 72 × 96 RGB images. Table 6: Simplified action set for DMLab Arbitrary Visuomotor Mapping (AVM). This action set is the same as the one used for AVM in but with an additional no-op, which may also be replaced with the Fire action. Image Encoder: For DMLab-30 and Memory Maze, we used the same image encoder as in for multitask DMLab-30. The ResNet was adapted from and each of its layer blocks consists of a (3 × 3, stride 1) convolution, followed by (3 × 3, stride 2) max-pooling, followed by 2 3 × 3 residual blocks with ReLU non-linearities. For all experiments, beyond sampling independent random seeds, each run also has V-MPO hyperparameters sampled from a distribution (see Table 8). The sampled hyperparameters are kept fixed across all models for a specific experiment, meaning that if one of the α sampled is 0.002, then all models will have 1 run with α = 0.002 and so on for the rest of the samples. The exception is for the DMLab-30 LSTM, where a more constrained range was found to perform better in preliminary experiments. Each model had 8 seeds started, but not all runs ran to completion due to compute issues. These hyperparameter settings were dropped randomly and not due to poor environment performance. We report how many seeds ran to completion for all models. At least 6 seeds finished for every model tested. We list architecture details by section below. All LSTM models have residual skip connections in depth. All experiments in this work were carried out in an actor-learner framework that utilizes TF-Replicator for distributed training on TPUs in the 16-core Figure 10: The 25 hyperparameter settings sampled for the sensitivity ablation (Sec. 4.3.2). X-axis is in log scale and values are sampled from the corresponding ranges given in Table 8. MultiHeadAttention(E (l−1) ): where we used Einstein summation notation to denote the tensor multiplications, MaskedSoftmax is a causally-masked softmax to prevent addressing future information, Linear is a linear layer applied per time-step and we omit reshaping operations for simplicity. The basic MHA operation does not take sequence order into account explicitly because it is permutation invariant, so positional encodings are a widely used solution in domains like language where order is an important semantic cue, appearing in the original transformer architecture . To enable a much larger contextual horizon than would otherwise be possible, we use the relative position encodings and memory scheme described in. In this setting, there is an additional T -step memory tensor M (l) ∈ R T ×D, which is treated as constant during weight updates. 
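A sketch of the memory-augmented, causally-masked multi-head attention described in this appendix: keys and values are computed over the concatenation of the stop-gradient memory M(l) and the current embeddings E(l-1), and a query may attend to all memory slots but only to current-segment steps at or before its own position. The relative position encodings of the TrXL scheme are omitted for brevity, so this illustrates the masking and memory handling rather than a faithful TrXL attention layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryMHA(nn.Module):
    """Causally-masked multi-head attention over [StopGrad(memory); current embeddings]."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.dk = n_heads, d_model // n_heads
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, e, m):                                  # e: [B, T, D], m: [B, Tm, D]
        B, T, _ = e.shape
        kv = torch.cat([m.detach(), e], dim=1)                # memory is constant during weight updates
        S = kv.size(1)

        def heads(x, n):                                      # [B, n, D] -> [B, H, n, dk]
            return x.view(B, n, self.h, self.dk).transpose(1, 2)

        q, k, v = heads(self.q(e), T), heads(self.k(kv), S), heads(self.v(kv), S)
        scores = q @ k.transpose(-2, -1) / self.dk ** 0.5     # [B, H, T, S]

        # a query may attend to every memory slot, but only to current steps <= its own
        allowed = torch.ones(T, S, dtype=torch.bool, device=e.device)
        allowed[:, S - T:] = ~torch.triu(torch.ones(T, T, dtype=torch.bool, device=e.device), 1)
        scores = scores.masked_fill(~allowed, float("-inf"))  # MaskedSoftmax

        y = (F.softmax(scores, dim=-1) @ v).transpose(1, 2).reshape(B, T, self.h * self.dk)
        return self.out(y)
```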
Table 13: Partition of DMLab-30 levels into a memory-based and reactive split of levels. Model Median Human Normalized Score LSTM 136.6 ± 3.4 GTrXL 137.1 ± 5.0 Table 14: Final human-normalized median return across all 57 Atari levels for LSTM and GTrXL at 11.4 billion environment steps (equivalent to 200 million per individual game). Both models are 256 dimensions in width. We include standard error over runs. Figure 11: Median human-normalized returns as training progresses for both GTrXL and LSTM models. We run 8 hyperparameter settings per model. In this section, we run the GTrXL on the multitask Atari-57 benchmark. Although Atari-57 was not designed specifically to test an agent's memory capabilities, we include these here to demonstrate that we suffer no performance regression on a popular environment suite, providing further evidence that GTrXL can be used as an architectural replacement to the LSTM. The LSTM and GTrXL are matched in width at 256 dimensions. The GTrXL is 12 layers deep to show our model's learning stability even at large capacity. The LSTM architecture matches the one reported in. We train for 11.4 billion environment steps, equivalent to 200 million per environment. We run 8 hyperparameter settings per model. | We succeed in stabilizing transformers for training in the RL setting and demonstrate a large improvement over LSTMs on DMLab-30, matching an external memory architecture. | 1,082 | scitldr |
Knowledge distillation is an effective model compression technique in which a smaller model is trained to mimic a larger pretrained model. However in order to make these compact models suitable for real world deployment, not only do we need to reduce the performance gap but also we need to make them more robust to commonly occurring and adversarial perturbations. Noise permeates every level of the nervous system, from the perception of sensory signals to the generation of motor responses. We therefore believe that noise could be a crucial element in improving neural networks training and addressing the apparently contradictory goals of improving both the generalization and robustness of the model. Inspired by trial-to-trial variability in the brain that can from multiple noise sources, we introduce variability through noise at either the input level or the supervision signals. Our show that noise can improve both the generalization and robustness of the model. ”Fickle Teacher” which uses dropout in teacher model as a source of response variation leads to significant generalization improvement. ”Soft Randomization”, which matches the output distribution of the student model on the image with Gaussian noise to the output of the teacher on original image, improves the adversarial robustness manifolds compared to the student model trained with Gaussian noise. We further show the surprising effect of random label corruption on a model’s adversarial robustness. The study highlights the benefits of adding constructive noise in the knowledge distillation framework and hopes to inspire further work in the area. The design of Deep Neural Networks (DNNs) for efficient real world deployment involves careful consideration of following key elements: memory and computational requirements, performance, reliability and security. DNNs are often deployed in resource constrained devices or in applications with strict latency requirements such as self driving cars which leads to a necessity for developing compact models that generalizes well. Furthermore, since the environment in which the models are deployed are often constantly changing, it is important to consider their performance on both indistribution data as well as out-of-distribution data. Thereby ensuring the reliability of the models under distribution shift. Finally, the model needs to be robust to malicious attacks by adversaries . Many techniques have been proposed for achieving high performance in compressed model such as model quantization, model pruning, and knowledge distillation. In our study, we focus on knowledge distillation as an interactive learning method which is more similar to human learning. Knowledge Distillation involves training a smaller network (student) under the supervision of a larger pre-trained network (teacher). In the original formulation, proposed mimicking the softened softmax output of the teacher model which consistently improves the performance of the student model compared to the model trained without teacher assistance. However, despite the promising performance gain, there is still a significant performance gap between the student and the teacher model. Consequently an optimal method of capturing knowledge from the larger network and transferring it to a smaller model remains an open question. 
While reducing this generalization gap is important, in order to truly make these models suitable for real world deployment, it is also pertinent to incorporate methods into the knowledge distillation framework that improve the robustness of the student model to both commonly occurring and malicious perturbations. For our proposed methods, we derive inspiration from studies in neuroscience on how humans learn. A human infant is born with billions of neurons and throughout the course of its life, the connections between these neurons are constantly changing. This neuroplasticity is at the very core of learning . Much of the learning for a child happens not in isolation but rather through collaboration. A child learns by interacting with the environment and understanding it through their own experience as well as observations of others. Two learning theories are central to our approach: cognitive bias and trial-to-trial response variation. Human decision-making shows systematic simplifications and deviations from the tenets of rationality ('heuristics') that may lead to sub-optimal decisional outcomes ('cognitive biases') . These biases are strengthened through repeatedly rewarding a particular response to the same stimuli. Trial-to-trial response variation in the brain, i.e. variation in neural responses to the same stimuli, encodes valuable information about the stimuli . We hypothesize that introducing constructive noise in the student-teacher collaborative learning framework to mimic the trial-to-trial response variation in humans can act as a deterrent to cognitive bias which is manifested in the form of memorization and over-generalization in neural networks. When viewed from this perspective, noise can be a crucial element in improving learning and addressing the apparent contradictory goals of achieving accurate and robust models. In this work, we present a compelling case for the beneficial effects of introduction of noise in knowledge distillation. We provide a comprehensive study on the effects of noise on model generalization and robustness. Our contributions are as follows: • A comprehensive analysis on the effects of adding a diverse range of noise types in different aspects of the teacher-student collaborative learning framework. Our study aims to motivate further work in exploring how noise can improve both generalization and robustness of the student model. • A novel approach for transferring teacher model's uncertainty to a student using Dropout in teacher model as a source of trial-to-trial response variability which leads to significant generalization improvement. We call this method "Fickle Teacher". • A novel approach for using Gaussian noise in the knowledge distillation which improves the adversarial robustness of the student model by an order of magnitude while significantly limiting the drop in generalization. we refer to this method as "Soft Randomization". • Random label corruption as a strong deterrent to cognitive bias and demonstrating its surprising ability to significantly improve adversarial robustness with minimal reduction in generalization. Many experimental and computational methods have reported the presence of noise in the nervous system and how it affects the the function of system . Noise as a common regularization technique has been used for ages to improve generalization performance of overparameterized deep neural networks by adding it to the input data, the weights or the hidden units Steijvers & Grünwald, 1996;;; ). 
Many noise techniques have been shown to improve generalization such as Dropout and injection of noise to the gradient (; Neelakantan et al.). Many works show that noise is crucial for non-convex optimization;; ). A family of randomization techniques that inject noise in the model both during training and inference time are proven to be effective to the adversarial attacks (? ; ;). Randomized smoothing transforms any classifier into a new smooth classifier that has certifiable l 2 -norm robustness guarantees . Label smoothing improves the performance of deep neural networks across a range of tasks . However, Müller et al. reports that label smoothing impairs knowledge distillation. We believe the knowledge distillation framework with the addition of constructive noise might offer a promising direction towards the design goal mentioned earlier, i.e. achieving lightweight well generalizing models with improved robustness to both adversarial and naturally occurring perturbations. For our empirical analysis, we adopted CIFAR-10 because of its pervasiveness in both knowledge distillation and robustness literature. Furthermore, the size of the dataset allows for extensive experimentation. To study the effect of noise addition in the knowledge distillation framework, we use Hinton method which trains the student model by minimizing the Kullback-Leibler divergence between the smoother output probabilities of the student and teacher model. In all of our experiments we use α = 0.9 and τ = 4. We conducted our experiments on Wide Residual Networks (WRN) (b). Unless otherwise stated, we normalize the images between 0 and 1 and use standard training scheme as used in (a;) To evaluate the out of distribution generalization of our models, we used the ImageNet images from the CINIC dataset . For adversarial robustness evaluation, we use the Projected Gradient Descent (PGD) attack from and run for multiple step sizes. We report the worst robustness accuracy for 5 random initialization runs. Finally, we test the robustness of our models to commonly occurring corruptions and perturbations proposed by in CIFAR-C as a proxy for natural robustness. For details of the methods, please see appendex. In this section, we propose injecting different types of noise in the student-teacher learning framework of knowledge distillation and analyze their effect on the generalization and robustness of the model. Here, we add a signal-dependent noise to the output logits of the teacher model. For each sample, we add zero-mean Gaussian noise with variance that is proportional to the output logits in the given sample (z i).ẑ We study the effect for the noise range [0 − 0.5] at steps of 0.1. Figure 1 shows for noise levels up to 0.1, the random signal-dependent noise improves the generalization to CIFAR-10 test set compared to the Hinton method without noise while marginally reducing the out-of-distribution generalization to CINIC-ImageNet. Figure 1 and Figure 11 show a slight increase in the adversarial robustness and natural robustness of the models. Müller et al. reported that when the teacher model is trained with label smoothing, the knowledge distillation to the student model is impaired and the student model performs worse. On the contrary, for lower level of noise, our method improves the effectiveness of distillation process. Our method differs from their approach in that we train the teacher model without any noise and only when distilling knowledge to the student, we add noise to its softened logits. 
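A sketch of the noisy-logit distillation step described above. The signal-dependent noise follows one plausible reading of the garbled equation (per-logit noise scale proportional to that logit's magnitude, with the overall level swept over [0, 0.5]); the distillation loss uses the alpha = 0.9, tau = 4 setting stated above, and the tau-squared scaling of the KL term is the usual convention rather than something spelled out in this copy.

```python
import torch
import torch.nn.functional as F

def noisy_teacher_logits(teacher_logits, noise_level):
    """Signal-dependent noise on the teacher's logits: zero-mean Gaussian noise whose
    scale grows with each logit's magnitude, controlled by noise_level in [0, 0.5]."""
    return teacher_logits + torch.randn_like(teacher_logits) * noise_level * teacher_logits.abs()

def hinton_kd_loss(student_logits, teacher_logits, labels, alpha=0.9, tau=4.0):
    """Hinton-style distillation: KL between tau-softened distributions plus hard-label CE."""
    soft = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                    F.softmax(teacher_logits / tau, dim=1),
                    reduction="batchmean") * tau * tau        # tau^2 keeps the gradient scale comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# one distillation step with noisy supervision (teacher kept frozen):
# with torch.no_grad():
#     z_t = noisy_teacher_logits(teacher(images), noise_level=0.1)
# loss = hinton_kd_loss(student(images), z_t, labels)
```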
Inspired by trial-to-trial variability in the brain and its constructive role in learning, we propose using dropout in the teacher model as a source of variability in the supervision signal from the teacher. We train the teacher model with dropout and while training the student model, we keep the dropout active in the teacher model. As a , repeated representation of the same input image leads to different output prediction of teacher. Gal & Ghahramani used dropout to obtain principled uncertainty estimates from deep learning networks. Gurau et al. utilize knowledge distillation to better calibrate a student model with the same architecture as the teacher model by using the soft target distribution obtained by averaging the Monte Carlo samples. Our proposed method differs from their method in a number of ways. We use dropout as a source of uncertainty encoding noise for distilling knowledge to a compact student model. Also, instead of averaging Monte Carlo simulations, we used the logits returned by the teacher model with activate dropout and train the student for more epochs so that it can capture the uncertainty of the teacher directly. Figure 2: Encoding the uncertainty of teacher helps the student to (a)generalize better on both unseen data and out-of-distribution data, and (b) to ave higher generalization to PGD attack. Note that for higher dropout rate the performance of teacher drops. We compare the generalization and robustness of the proposed method for dropout in the range [0 − 0.5] at steps of 0.1. For training parameters, please see the appendex. Figure 12a show that training the student model with dropout using our scheme significantly improves both in-distribution and outof-distribution generalization over the Hinton method. Interestingly, even when the performance of the teacher model used to train the model is decreasing after drop rate 0.2, the student model performance still improves up to drop rate 0.4. For dropout rate upto 0.2, both PGD Robustness (Figure 12b) and natural robustness increases (Figure 6). This suggest that as per our hypothesis, adding trial-to-trial variability helps in distilling knowledge to the student model. Pinot et al. provided theoretical evidence for the relation between adversarial robustness and the intensity of random noise injection in the input image. They show that injection of noise drawn from the exponential family such as Gaussian or Laplace noise leads to guaranteed robustness to adversarial attack. However this improved robustness comes at the cost of generalization. We propose a novel method for adding Gaussian noise in the input image while distilling knowledge to the student model. Since the knowledge distillation framework provides an opportunity to combine multiple sources of information, we hypothesize that using the teacher model trained on clean images, to train the student model with random Gaussian noise can retain the adversarial robustness gain observed with randomized training and mitigate the loss in generalization. Our method involves minimizing the following loss function in the knowledge distillation framework. where S denotes the output of student, S τ and T τ denote the soften logits of student and teacher models by temperature τ, respectively. α and τ are the balancing factor and temperature parameters from the Hinton method. We trained the models with six Gaussian noise levels and observe a significant increase in adversarial robustness and a decrease in generalization. 
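Below is a sketch of one soft-randomization training step consistent with the description and loss above: the teacher sees the clean image, the student sees the image with additive Gaussian noise, and the student's softened outputs are matched to the teacher's. Whether the hard-label term also uses the noisy input, and the clamping back to the [0, 1] image range, are assumptions.

```python
import torch
import torch.nn.functional as F

def soft_randomization_step(student, teacher, images, labels, sigma, alpha=0.9, tau=4.0):
    """One training step: teacher on the clean image, student on the Gaussian-noised image."""
    noisy = (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)   # images are in [0, 1]
    with torch.no_grad():
        t_logits = teacher(images)                # teacher trained on clean data, no noise added
    s_logits = student(noisy)
    soft = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                    F.softmax(t_logits / tau, dim=1),
                    reduction="batchmean") * tau * tau
    hard = F.cross_entropy(s_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```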
However, our proposed method outperforms the compact model trained with Gaussian noise without teacher assistance in both generalization and robustness (Figures 3 and 4). Our method is able to increase adversarial robustness even at lower noise intensities: for σ = 0.05, our method achieves 33.85% compared to 3.53% for the student model trained alone. In addition, our method also improves robustness to common corruptions. Figure 5 shows that robustness to noise and blurring corruptions improves significantly as the Gaussian noise intensity increases. For weather corruptions, it improves robustness except for fog and frost. Finally, for digital corruptions, robustness improves except for contrast and saturation. We also observe that the effect changes with intensity; for example, for frost, robustness increases at lower noise levels and then decreases at higher intensities. Our method therefore allows the use of lower noise intensities to increase adversarial robustness while keeping the loss in generalization very low compared to other adversarial training methods.

Following the analogy with cognitive bias in humans, and relating it to memorization and over-generalization in deep neural networks, we propose a counterintuitive regularization technique based on label noise. For each sample in the training process, with probability p, we randomly change the one-hot encoded target label to an incorrect class. The intuition behind this method is that by randomly relabeling a fraction of the samples in each epoch, we encourage the model not to be overconfident in its predictions and discourage memorization. There have been a number of studies on improving the tolerance of DNNs to noisy labels (; ;). However, to the best of our knowledge, random label noise has not been explored as a source of constructive noise to improve the generalization of the model. We extensively study the effect of random label corruption over a range of p values and at multiple levels: teacher model alone, student model alone, and both student and teacher models. When the label corruption is only used during knowledge distillation to the student (Corrupted-S), both in-distribution and out-of-distribution generalization increase even for very high corruption levels. When the label corruption is used for training the teacher model and then used to train the student model with (Corrupted-TS) and without (Corrupted-T) label corruption, the generalization drops (Figure 7). In general, for high levels of label corruption, knowledge distillation outperforms the teacher model. Interestingly, random label corruption leads to a huge increase in adversarial robustness. Just by training with 5% random labels, the PGD-20 robustness of the teacher model increases from 0% to 10.89%. We see this increase in robustness for both Corrupted-T and Corrupted-TS.
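A sketch of the relabeling step: with probability p each target is replaced, independently in every epoch, by a uniformly drawn incorrect class. The uniform choice over the nine wrong CIFAR-10 classes is an assumption about how the incorrect class is picked.

```python
import torch

def corrupt_labels(labels, p, num_classes=10):
    """With probability p, replace each target with a uniformly drawn *incorrect* class.
    Applied to every epoch's targets, so different samples get relabeled each epoch."""
    flip = torch.rand(labels.shape, device=labels.device) < p
    offset = torch.randint(1, num_classes, labels.shape, device=labels.device)
    return torch.where(flip, (labels + offset) % num_classes, labels)
```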
Up to 40% random label corruption, the adversarial robustness increases and slightly decreases for 50%. We believe that this observed phenomenon warrants further study. Inspired by trial-to-trial variability in the brain, we introduce variability in the knowledge distillation framework through noise at either the input level or the supervision signals. For this purpose, we proposed novel ways of introducing noise at multiple levels and studied their effect on both generalization and robustness. Fickle teacher improves the both in-distribution and out of distribution generalization significantly while also slightly improving robustness to common and adversarial perturbations. Soft randomization improves the adversarial robustness of the student model trained alone with Gaussian noise by a huge margin for lower noise intensities while also reducing the drop in generalization. We also showed the surprising effect of random label corruption alone in increasing the adversarial robustness by an order of magnitude in addition to improving the generalization. Our strong empirical suggest that injecting noises which increase the trial-to-trial variability in the knowledge distillation framework is a promising direction towards training compact models with good generalization and robustness. A APPENDIX In this section we provide details for the methods relevant our study. Hinton et al. proposed to use the final softmax function with a raised temperature and use the smooth logits of the teacher model as soft targets for the student model. The method involves minimizing the Kullback-Leibler divergence between the smoother output probabilities: where L CE denotes cross-entropy loss, σ denotes softmax function, z S student output logit, z T teacher output logit, τ and α are the hyperparameters which denote temperature and balancing ratio, respectively. Neural networks tend to generalize well when the test data comes from the same distribution as the training data . However, models in the real world often have to deal with some form of domain shift which adversely affects the generalization performance of the models ((; ; ;). Therefore, test set performance alone is not the optimal metric for evaluation the generalization of the models in test environment. To measure the out-of-distribution performance, we used the ImageNet images from the CINIC dataset . CINIC contains 2100 images randomly selected for each of the CIFAR-10 categories from the ImageNet dataset. Hence the performance of models trained on CIFAR-10 on these 21000 images can be considered as a approximation for a model's out-of-distribution performance. A.3.1 ADVERSARIAL ROBUSTNESS Deep Neural Networks have been shown to be highly vulnerable to carefully crafted imperceptible perturbations designed to fool a neural networks by an adversary . This vulnerability poses a real threat to deep learning model's deployment in the real world . Robustness to these adversarial attacks has therefore gained a lot of traction in the research community and progress has been to better evaluate robustness to adversarial attacks (; ;) and defend our models against these attacks . To evaluate the adversarial robustness of models in this study, we use the Projected Gradient Descent (PGD) attack from. The PGD-N attack initializes the adversarial image with the original image with the addition of a random noise within some epsilon bound,. 
For each step it takes the loss with respect to the input image and moves in the direction of loss with the step size and then clips it within the epsilon bound and the range of valid image. where denote epsilon-bound, α step size and X original image. The projection operator,d (A) denotes element-wise clipping, with A i,j clipped to the range [X i,j −, X i,j +] and within valid data range. In all of our experiments, we use 5 random initializations and report the worst adversarial robustness. While robustness to adversarial attack is important from security perspective, it is an instance of worst case distribution shift. The model also needs to be robust to naturally occurring perturbations which it will encounter frequently in the test environment. Recent works have shown that Deep Neural Networks are also vulnerable to commonly occurring perturbations in the real world which are far from the adversarial examples manifold. curated a set of real-world, unmodified and naturally occurring examples that causes classifier accuracy to significantly degrade. measured model's robustness to the minute transformations found across video frames which they refer to as natural robustness and found state-of-the-art classifier to be brittle to these transformations. In their study, they found robustness to synthetic color distortions as a good proxy for natural robustness. In our study we use robustness to the common corruptions and perturbations proposed by in CIFAR-C as a proxy for natural robustness. A.3.3 TRADE OFF BETWEEN GENERALIZATION AND ADVERSARIAL ROBUSTNESS While making our model's robust to adversarial attacks, we need to be careful not to overemphasize robustness to norm bounded perturbation and rigorously test their effect on model's in-distribution and out-of-distribution generalization as well as robustness to naturally occurring perturbation and distribution shift. Recent study have highlighted the adverse affect of adversarially trained model on natural robustness. showed that even a semantics-preserving transformations on the input data distribution significantly degrades the performance of adversarial trained models but only slightly affects the performance of standard trained model. showed that adversarially trained models improve robustness to mid and high frequency perturbations but at the expense of low frequency perturbations which are more common in the real world. Furthermore, in the adversarial literature, a number of studies has shown an inherent trade-off between adversarial robustness and generalization;;. We would like to point out that these studies were conducted under adversarial setting and do not necessarily hold true for general robustness of the model. To exploit the uncertainty of the teacher model for a sample, we propose random swapping noise methods that select a sample with some probability p and then swap the softened softmax logits if the difference is below a threshold. We propose two variants of random swapping: 1. Swap Top 2: Swap the top two logits if the difference between them is below the threshold. 2. Swap All: Consider all consecutive pairs iteratively and swap them if the difference is below the threshold value. These methods improve the in-distribution generalization but adversely affects the out-ofdistribution generalization (Figure 9 . It does not have a pronounced affect on the robustness (Figures: 9b, 10). 
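For completeness, a sketch of the PGD-N attack and the worst-case evaluation described in A.3.1: random initialization inside the ε-ball, N signed-gradient steps of size α, projection back onto the ball and the valid [0, 1] range after every step, with robust accuracy taken as the worst case over 5 random restarts. Batching and device handling are omitted.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """PGD-N under an L-inf eps-bound with a random start and per-step projection."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

def worst_case_robust_accuracy(model, x, y, eps, alpha, steps, restarts=5):
    """Robust accuracy as reported here: the worst case over 5 random initializations."""
    still_correct = torch.ones_like(y, dtype=torch.bool)
    for _ in range(restarts):
        pred = model(pgd_attack(model, x, y, eps, alpha, steps)).argmax(dim=1)
        still_correct &= pred.eq(y)
    return still_correct.float().mean().item()
```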
A.5 TRAINING SCHEME FOR DISTILLATION WITH DROPOUT Because of the variability in the teacher model, the student model needs to be trained for more epochs in order for it to converge and effectively capture the uncertainty of the teacher model. We used the same initial learning rate of 0.1 and decay factor of 0.2 as in the standard training scheme. For dropout rates of 0.1 and 0.2, we train for 250 epochs and reduce the learning rate at epochs 75, 150 and 200. For dropout rate 0.3, we train for 300 epochs and reduce the learning rate at epochs 90, 180 and 240. Finally, for dropout rates of 0.4 and 0.5, due to the increased variability, we train for 350 epochs and reduce the learning rate at epochs 105, 210 and 280.

Figure 9: Noise on the supervision from the teacher by swapping all logits or the top 2 (a) improves the accuracy of the student on unseen data, but not the generalization to out-of-distribution data; (b) adversarial robustness. [Figure 10: per-corruption robustness across the CIFAR-C categories gaussian_noise, impulse_noise, shot_noise, speckle_noise, defocus_blur, gaussian_blur, glass_blur, motion_blur, zoom_blur, brightness, fog, frost, snow, spatter, contrast, elastic_transform, jpeg_compression, pixelate, saturate.]

| Inspired by trial-to-trial variability in the brain that can result from multiple noise sources, we introduce variability through noise in the knowledge distillation framework and studied their effect on generalization and robustness. | 1,083 | scitldr
We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity. We define a differentiable loss function equivalent to the expected normalized cuts. Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable. Our approach generalizes to unseen datasets across a wide variety of domains, including text, and image. Specifically, we achieve state-of-the-art on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%. Our generalization are superior (by up to 21.9%) to the recent top-performing clustering approach with the ability to generalize. Clustering unlabeled data is an important problem from both a scientific and practical perspective. As technology plays a larger role in daily life, the volume of available data has exploded. However, labeling this data remains very costly and often requires domain expertise. Therefore, unsupervised clustering methods are one of the few viable approaches to gain insight into the structure of these massive unlabeled datasets. One of the most popular clustering methods is spectral clustering (; ;), which first embeds the similarity of each pair of data points in the Laplacian's eigenspace and then uses k-means to generate clusters from it. Spectral clustering not only outperforms commonly used clustering methods, such as k-means , but also allows us to directly minimize the pairwise distance between data points and solve for the optimal node embeddings analytically. Moreover, it is shown that the eigenvector of the normalized Laplacian matrix can be used to find the approximate solution to the well known normalized cuts problem . In this work, we introduce CNC, a framework for Clustering by learning to optimize expected Normalized Cuts. We show that by directly minimizing a continuous relaxation of the normalized cuts problem, CNC enables end-to-end learning approach that outperforms top-performing clustering approaches. We demonstrate that our approach indeed can produce lower normalized cut values than the baseline methods such as SpectralNet, which consequently in better clustering accuracy. Let us motivate CNC through a simple example. In Figure 1, we want to cluster 6 images from CIFAR-10 dataset into two clusters. The affinity graph for these data points is shown in Figure 1 (a) (details of constructing such graph is discussed in Section 4.2). In this example, it is obvious that the optimal clustering is the of cutting the edge connecting the two triangles. Cutting this edge will in the optimal value for the normalized cuts objective. In CNC, we define a new differentiable loss function equivalent to the expected normalized cuts objective. We train a deep learning model to minimize the proposed loss in an unsupervised manner without the need for any labeled datasets. Our trained model directly returns the probabilities of belonging to each cluster (Figure 1(b) ). 
In this example, the optimal normalized cuts is 0.286 (Equation 1), and as we can see, the CNC loss also converges to this value (Figure 1(c) Optimal Normalized cuts #edge cuts = 1 per cluster volume = 2+2+3 = 7 1/7 + 1/7 = 0.286 Cluster 2 Cluster 2 Cluster 1 Figure 1: Motivational example: (a) affinity graph of 6 images from CIFAR-10, the objective is to cluster these images into two clusters. (b) CNC model is trained to minimize expected normalized cuts in an unsupervised manner without the need for any labeled data. For each data point, our model directly outputs the probabilities of it belonging to each of the clusters. (c) The CNC loss converges to the optimal normalized cuts value. In Algorithm 1 we show how we can scale this approach through a batch processing technique to large datasets. We compare the performance of CNC to several learning-based clustering approaches (SpectralNet, DEC , DCN , VaDE , DEPICT , IMSAT , and IIC ) on four datasets: MNIST, Reuters, CIFAR10, and CIFAR100. Our show up to 10.9% improvement over the baselines. Moreover, generalizing spectral embeddings to unseen data points, a task commonly referred to as out-of-sample-extension (OOSE), is a non-trivial task (; ;). Our confirm that CNC generalizes to unseen data. Our generalization are superior (by up to 21.9%) to SpectralNet, the recent top-performing clustering approach with the ability to generalize. Recent deep learning approaches to clustering attempt to embed the input data into a form that is amenable to clustering by k-means or Gaussian Mixture Models. focused on learning representations for clustering. To find the clustering-friendly latent representations and to better cluster the data, DCN proposed a joint dimensionality reduction (DR) and K-means clustering approach in which DR is accomplished via learning a deep neural network. DEC simultaneously learns cluster assignment and the underlying feature representation by iteratively updating a target distribution to sharpen cluster associations. Several other approaches rely on a variational autoencoder that utilizes a Gaussian mixture prior (; ; ; ;). These approaches are mainly based on data augmentation, where the network is trained to maximize the mutual information between inputs and predicted clusters, while regularizing the network so that the cluster assignment of the data points is consistent with the assignment of the augmented points. Different clustering objectives, such as self-balanced k-means and balanced min-cut, have also been exhaustively studied (; ;). One of the most effective techniques is spectral clustering, which first generates node embeddings in the eigenspace of the graph Laplacian, and then applies k-means clustering to these vectors (; ;). To address the fact that clusters with the lowest graph conductance tend to have few nodes , proposed regularized spectral clustering to encourage more balanced clusters. Generalizing clustering to unseen nodes and graphs is nontrivial (; ;). A recent work, SpectralNet, takes a deep learning approach to spectral clustering that generalizes to unseen data points. This approach first learns embeddings of the similarity of each pair of data points in Laplacian's eigenspace and then applies k-means to those embeddings to generate clusters. Unlike SpectralNet, we propose an end-to-end learning approach with a differentiable loss that directly minimizes the normalized cuts. 
We show that our approach indeed can produce lower normalized cut values than the baseline methods such as SpectralNet, which consequently in better clustering accuracy. Our evaluation show that CNC improves generalization accuracy on unseen data points by up to 21.9%. Since CNC objective is based on optimizing normalized cuts, in this section, we briefly overview the formal definition of this metric. Let G = (V, E, W) be a graph where V = {v i} and E = {e(v i, v j)|v i ∈ V, v j ∈ V } are the set of nodes and edges in the graph and w ij ∈ W is the edge weight of the e(v i, v j). Let n be the number of nodes. A graph G can be clustered into g disjoint sets S 1, S 2,... S g, where the union of the nodes in those sets are V (g k=1 S k = V), and each node belongs to only one set (g k=1 S k = ∅), by simply removing edges connecting those sets. For example, in Figure 1(a), by removing one edge two disjoint clusters are formed. Normalized cuts (Ncuts) which is defined based on the graph conductance, has been studied by , and the cost of a cut that forms disjoint sets S 1, S 2,... S g is computed as: WhereS k represents the complement of S k, i.e.,S k = i =k S i. cut(S k,S k) is called cut and is the total weight of the edges that are removed from G in order to form disjoint sets is the total edge weights (w ij), whose end points (v i, or v j) belong to S k. The cut and vol are: In running example (Figure 1), since the edge weights are one, cut(S 1,S 1) = cut(S 2,S 2) = 1, and Thus the Ncuts(S 1, S 2) = Finding the cluster assignments that minimizes the normalized cuts is NP-complete and an approximation to the this problem is based on the eigenvectors of the normalized graph Laplacian which has been studied in . CNC, on the other hand, is a neural network framework for learning to cluster in the absence of labeled examples by directly minimizing the continuous relaxation of the normalized cuts. As shown in Algorithm 1, end-to-end training of the CNC contains two steps, i.e, (i) data points embedding (line 3), and (ii) clustering (lines 4-9). In data points embedding, the goal is to learn embeddings that capture the affinity of the data points, while the clustering step uses those embeddings to learn the CNC model and outputs the cluster assignments. Next, we first focus on the clustering step and we introduce our new differentiable loss function to train CNC model. Later in Section 4.2, we discuss the details of the embedding step. In this section, we describe the clustering step in Algorithm 1 (lines 4-9). For each data point x i, the input to clustering step is embedding v i ∈ R d (detail in Section 4.2). The goal is to learn CNC model, which represents the assignment probabilities over g clusters. Clearly for n data points, it returns Y ∈ R n×g where Y ik represents the probability that v i belongs to cluster S k. The CNC model F θ is implemented using a neural network, where the parameter vector θ denotes the network weights. We propose a loss function based on output Y to calculate the expected normalized cuts. Thus CNC learns the F θ by minimizing this loss (Equation 7). Recall that cut(S k,S k) is the total weight of the edges that are removed from G in order to form disjoint sets S k andS k. In our setup, embeddings are the nodes in graph G, and neighbors of an embedding v i are based on the k-nearest neighbors. Let Y ik be the probability that node v i belongs to cluster S k. The probability that node v j does not belong to S k would be 1 − Y jk. 
Therefore, Compute affinity graph W ∈ R b×b over the M based on the k-nearest neighbors 7: Use M and W to train CNC model F θ: R d → R g that minimizes the expected normalized cuts (Equation 6) via backpropagation. For a data point with embedding v i the output y i = F θ (v i) represents the assignment probabilities over g clusters. 8: end while Inference, cluster assignments 9: For every data points x i whose embedding is v i return arg max of y i = F θ (v i) as its cluster assignment. Since the weight matrix W represents the edge weights adjacent nodes, we can rewrite Equation 3: The element-wise product with the weight matrix (W) ensures that only the adjacent nodes are considered. Moreover, the of W is an n × n matrix and reduce-sum is the sum over all of its elements. From Equation 2, vol(S k, V) is the total edge weights (w ij), whose end points (v i, or v j) belong to S k. Let D be a column vector of size n where D i is the total edge weights from node v i. We can update Equation 3 as follows to find the expected normalized cuts. The matrix representation is given in Equation 6, where Γ = Y D is a vector in R g, and g is the number of sets/clusters. is element-wise division and the of (Y Γ)(1 − Y) W is a n × n matrix where reduce-sum is the sum over all of its elements. CNC model F θ is implemented using a neural network, where the parameter θ denotes the network weights (y i = F θ (v i)). CNC is trained to optimize Equation 7 via backpropagation (Algorithm 1). arg min As you can see the affinity graph W is part of the CNC loss (Equation 7). Clearly, when the number of data points (n) is large, such calculation can be expensive. However, in our experimental , we show that for large dataset (e.g., Reuters contains 685,071 documents), it is possible to optimize the loss on randomly sampled minibatches of data. We also build the affinity graph over a given minibach using the embeddings and based on their k nearest-neighbor (Algorithm 1 (lines 5-6)). Specifically, in our implementation, CNC model F θ is a fully connected layer followed by gumble softmax, trained on randomly sampled minibatches of data to minimize Equation 6. In Section 5.7 through a sensitivity analysis we show that the minibatch size affects the accuracy of our model. When training is over, the final assignment of a data point with embedding v i to a cluster is the arg max of y i = F θ (v i) (Algorithm 1 (line 9)). In this section, we discuss the embedding step (line 3 in Algorithm 1). Different affinity measures, such as simple euclidean distance or nearest neighbor pairs combined with a Gaussian kernel, have been used in spectral clustering. Recently it is shown that unsupervised application of a Siamese network to determine the distances improves the quality of the clustering. In this work, we also use Siamese networks to learn embeddings that capture the affinities of the data points. Siamese network is trained to learn an adaptive nearest neighbor metric. It learns the affinities directly from euclidean proximity by "labeling" points x i, x j positive if x i − x j is small and negative otherwise. In other words, it generates embeddings such that adjacent nodes are closer in the embedding space and non-adjacent nodes are further. Such network is typically trained to minimize contrastive loss: We evaluate the performance of CNC in comparison to several deep learning-based clustering approaches on four real world datasets: MNIST, Reuters, CIFAR-10, and CIFAR-100. 
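To make the loss of Equation 6 concrete, the following is a minimal sketch of the expected normalized cuts computed from the soft assignments Y and a minibatch affinity matrix W. It assumes PyTorch; the clamp on the volume vector is an added numerical safeguard, not part of the original formulation.

```python
import torch

def expected_normalized_cuts(Y: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Y: (n, g) soft cluster assignments; W: (n, n) symmetric affinity matrix."""
    d = W.sum(dim=1)                       # D: total edge weight incident to each node
    vol = Y.t() @ d                        # Gamma: expected volume of each of the g clusters
    y_over_vol = Y / vol.clamp_min(1e-8)   # element-wise division by cluster volumes
    # Entry (i, j) of (Y / Gamma) @ (1 - Y)^T is the probability mass of i and j falling in
    # different clusters, weighted by 1/vol; multiplying element-wise by W keeps only
    # adjacent pairs, and the reduce-sum gives the expected normalized cuts.
    return ((y_over_vol @ (1.0 - Y).t()) * W).sum()

# Example over a random minibatch:
# Y = torch.softmax(torch.randn(8, 2), dim=1)
# W = torch.rand(8, 8); W = (W + W.t()) / 2
# loss = expected_normalized_cuts(Y, W)
```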
The details of the datasets are as follows: • MNIST is a collection of 70,000 28×28 gray-scale images of handwritten digits, divided into 60,000 training images and 10,000 test images. • The Reuters dataset is a collection of English news labeled by category. Like SpectralNet, DEC, and VaDE, we used the following categories: corporate/industrial, government/social, markets, and economics as labels and discarded all documents with multiple labels. Each article is represented by a tfidf vector using the 2000 most frequent words. The dataset contains 685,071 documents. We divided the data randomly to a 90%-10% split to evaluate the generalization ability of CNC. We also investigate the imapact of training data size on the generalization by considering following splits: 90%-10%, 70%-30%, 50%-50%, 20%-80%, and 10%-90%. • CIFAR-10 consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. • CIFAR-100 has 100 classes containing 600 images each with a 500/100 train/test split per class. In all runs we assume the number of clusters is given. In MNIST and CIFAR-10 number of clusters (g) is 10, g = 4 in Reuters, g = 100 in CIFAR-100. We compare CNC to SpectralNet, DEC , DCN , VaDE , DEPICT , IMSAT , and IIC . While focused on learning representations for clustering, other approaches (; ; ; ;) rely on a variational autoencoder that utilizes a Gaussian mixture prior. SpectralNet, takes a deep learning approach to spectral clustering that generalizes to unseen data points. Table 1 shows the reported for these six methods. Similar to, for MNIST and Reuters we use publicly available and pre-trained autoencoders 1. The autoencoder used to map the Reuters data to code space was trained based on a random subset of 10,000 samples from the full dataset. Similar to , for CIFAR-10 and CIFAR-100 we applied 50-layer pre-trained deep residual networks trained on ImageNet to extract features and used them for clustering. We use two commonly used measures, the unsupervised clustering accuracy (ACC), and the normalized mutual information (NMI) in to evaluate the accuracy of the clustering. Both ACC and NMI are in, with higher values indicating better correspondence the clusters and the true labels. Note that true labels never used neither in training, nor in test. Clustering Accuracy (ACC): For data points X = {x 1, . . . x n}, let l = (l 1, . . . l n) and c = (c 1, . . . c n) be the true labels and predicted clusters respectively. The ACC is defined as: where is the collection of all permutations of 1,... g. The optimal permutation π can be computed using the Kuhn-Munkres algorithm . Normalized Mutual Information (NMI): Let I(l; c) be the mutual information between l and c, and H be their entropy. The NMI is: For each dataset we trained a Siamese network to learn embeddings which represents the affinity of data points by only considering the k-nearest neighbors of each data. In Table 1, we compare clustering performance across four benchmark datasets. Since most of the clustering approaches do not generalize to unseen data points, all data has been used for the training (Later in Section 5.4, to evaluate the generalizability we use 90%-10% split for training and testing). While the improvement of CNC is marginal over MNIST, it performs better across other three datasets. Specifically, over CIFAR-10, CNC outperforms SpectralNet and IIC on ACC by 20.1% and 10.9% respectively. Moreover, the NMI is improved by 12.3%. 
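The two evaluation metrics defined above can be computed with standard tools. Below is a minimal sketch assuming NumPy, SciPy (for the Kuhn-Munkres/Hungarian assignment), and scikit-learn (for NMI); labels and predicted cluster ids are assumed to be integer arrays of equal length.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(labels: np.ndarray, clusters: np.ndarray) -> float:
    g = int(max(labels.max(), clusters.max())) + 1
    counts = np.zeros((g, g), dtype=np.int64)
    for l, c in zip(labels, clusters):
        counts[c, l] += 1                        # co-occurrence of cluster c and label l
    rows, cols = linear_sum_assignment(-counts)  # best one-to-one cluster-to-label mapping
    return counts[rows, cols].sum() / labels.size

def nmi(labels: np.ndarray, clusters: np.ndarray) -> float:
    return normalized_mutual_info_score(labels, clusters)
```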
The over Reuters, and CIFAR-100, show 0.021% and 11% improvement on ACC. The NMI is also 27% better over CIFAR-100. The fact that our CNC outperforms existing approaches in most datasets suggests the effectiveness of using our deep learning approach to optimize normalized cuts for clustering. We performed an ablation study to evaluate the impact of embeddings by omitting this step in Algorithm 1. We find that on both MNIST and Reuters datasets, adding the embedding step improves the performance, but CNC without embeddings still outperforms SpectralNet without embeddings. On MNIST, the ACC and NMI are 0.945 and 0.873, whereas with the embeddings, ACC and NMI increase to 0.972 and 0.924 (Table 1). Without embeddings, CNC outperforms SpectralNet (with ACC of 0.8 and NMI of 0.814). On Reuters, the ACC and NMI are 0.684 and 0.428, whereas with the embeddings, ACC and NMI increase to 0.824 and 0.583. Again, even without embeddings, CNC outperforms SpectralNet (with ACC of 0.605 and NMI of 0.401). ]. CNC is able to find better cuts than the SpectralNet We further evaluate the generalization ability of CNC by dividing the data randomly to a 90%-10% split and training on the training set and report the ACC and NMI on the test set (Table 2). Among seven methods in Table 1, only SpectralNet is able to generalize to unseen data points. CNC outperforms SpectralNet in most datasets by up to 21.9% on ACC and up to 10.7% on NMI. Note that simple arg max over the output of CNC retrieves the clustering assignments while SpectralNet relies on k-means to predict the final clusters. To evaluate the impact of normalized cuts for the clustering task, we calculate the numerical value of the Normalized cuts (Equation 1) over the clustering of the CNC and SpectralNet. Since such calculation over whole dataset is very expensive we only show this over the test set. Table 3 shows the numerical value of the Normalized cuts over the clustering of the CNC and SpectralNet. As one can see CNC is able to find better cuts than the SpectralNet. Moreover, we observe that for those datasets that the improvement of the CNC is marginal (MNIST and Reuters), the normalized cuts of CNC are also only slightly better than the SpectralNet, while for the CIFAR-10 and CIFAR-100 that the accuracy improved significantly the normalized cuts of CNC are also much smaller than SpectralNet. The higher accuracy (ACC in Table 2) and smaller normalized cuts (Table 3), verify that indeed CNC loss function is a good notion for clustering task. As you may see in generalization (Table 2), when we reduce the size of the training data to 90% the accuracy of CNC slightly changed in compare to training over the whole data (Table 1). Based on this observation, we next investigate how varying the size of the training dataset affects the generalization. In other words, how ACC and NMI of test data change when we vary the size of the training dataset. We ran experiment over Routers dataset by dividing the data randomly based on the following data splits: 90%-10%, 70%-30%, 50%-50%, 20%-80%, and 10%-90%. For example, in 10%-90%, we train CNC over 10% of the data and we report the ACC and NMI of CNC over the 90% test set. Figure 3 shows how the ACC and NMI of CNC over the test data change as the size of the training data is varied. For example, when the size of the training data is 90%, the ACC of CNC over the test data is 0.824. As we expected and shown in Figure 3 the ACC and NMI of CNC increased as the size of the training data is increased. 
Interestingly, we observed that with only 10% training data the ACC of CNC is 0.68 which is only 14% lower than the ACC with 90% training data. Similarly the NMI of CNC with 10% training data is only 18% lower than the NMI with 90% training data. Here are the details of the CNC model for each dataset. • MNIST: The Siamese network has 4 layers sized with ReLU (Embedding size d is 10). The clustering module has 2 layers sized with a final gumbel softmax layer. Batch sized is 256 and we only consider 3 nearest neighbors to find the embeddings and constructing the affinity graph for each batch. We use Adam with lr = 0.005 with decay 0.5. Temperature starts at 1.5 and the minimum is set to 0.5. • Reuters: The Siamese network has 3 layers sized with ReLU (Embedding size d is 128). The clustering module has 3 layers sized with tanh activation and a final gumbel softmax layer. Batch sized is 128 and we only consider 3 nearest neighbors to find the embeddings and constructing the affinity graph for each batch. We use Adam with lr = 1e-4 with decay 0.5. Temperature starts at 1.5 and the minimum is set to 1.0. • CIFAR-10: The Siamese network has 2 layers sized with ReLU (Embedding size d is 256). The clustering module has 2 layers sized with tanh activation and a final gumbel softmax layer. Batch sized is 256 and we only consider 2 nearest neighbors to find the embeddings and constructing the affinity graph for each batch. We use Adam with lr = 1e-4 with decay 0.1. Temperature starts at 2.5 and the minimum is set to 0.5. • CIFAR-100: The Siamese network has 2 layers sized with ReLU (Embedding size d is 256). The clustering module has 3 layers sized with tanh activation and a final gumbel softmax layer. Batch sized is 1024 and we only consider 3 nearest neighbors to find the embeddings and constructing the affinity graph for each batch. We use Adam with lr = 1e-3 with decay 0.5. Temperature starts at 1.5 and the minimum is set to 1.0. Hyper-parameter Sensitivity: We train the CNC on the Reuters dataset by fixing some hyperparameters and varying others. We noticed that CNC benefits from tuning the number of hidden layers (hl), learning rate (lr), batch size (b), and the number of nearest neighbors (k), but is not particularly sensitive to any of the other hyper-parameters, including decay rate, patience parameter (cadence of epochs where decay is applied), Gumbel-Softmax temperature or minimum temperature (Figure 4). More precisely, we varied decay rate over the range [0.1-1.0], patience from epochs, Gumbel-Softmax temperature from [1.0-2.0], and minimum temperature from [0.5-1.0]. When we fix hl=3, lr=5e-5, b=64, and k=3, the average accuracy is 0.803 ± 2e − 3. With hl=3, lr=5e-4, b=512, and k=10, the average accuracy is 0.811 ± 2e − 3. With hl=3, lr=5e-4, b=128, and k=3, the average accuracy is 0.821 ± 4e − 3. With hl=2, lr=1e-4, b=256, and k=3, the average accuracy is 0.766 ± 9e − 4. And finally with hl=4, lr=1e-5, b=512, and k=3, the average accuracy is 0.766 ± 7e − 3. As one can see, the accuracy varied from 0.766 to 0.821. We propose CNC (Clustering by learning to optimize Normalized Cuts), a framework for learning to cluster unlabeled examples. We define a differentiable loss function equivalent to the expected normalized cuts and use it to train CNC model that directly outputs final cluster assignments. CNC achieves state-of-the-art on popular unsupervised clustering benchmarks (MNIST, Reuters, CIFAR-10, and CIFAR-100 and outperforms the strongest baselines by up to 10.9%. 
CNC also enables generation, yielding up to 21.9% improvement over SpectralNet, the previous best-performing generalizable clustering approach. Table 4 : Generalization : CNC is trained on VGG and validated on MNIST-conv. During inference, the model is applied to unseen TensorFlow graphs: ResNet. Inception-v3, and AlexNet. The ground truth for AlexNet is Bal = 99%, Cut = 4.6%, for Inception-v3, is Bal = 99%, Cut = 3.7%, and for ResNet is Bal = 99% and Cut = 3.3%. GraphSAGE-on generalizes better than the other models. To show that CNC generalizes effectively on unseen graphs, we train CNC on a single TensorFlow graph, VGG, and validate on MNIST-conv. During inference, we test the trained model on unseen TensorFlow graphs: AlexNet, ResNet, and Inception-v3. We consider the best quality among hMETIS, KaFFPa, and KaFFPaE as the ground truth. The ground truth for AlexNet is Bal = 99%, Cut = 4.6%, for Inception-v3, is Bal = 99%, Cut = 3.7%, and for ResNet is Bal = 99% and Cut = 3.3%. Table 4 shows the of our experiments, and illustrates the importance of graph embeddings in generalization. The operation type (such as Add, Conv2d, and L2loss in TensorFlow) is used as the node feature as a one-hot. We leverage GCN and GraphSAGE to capture similarities across graphs. In GraphSAGE-on both node embedding and clustering modules are trained jointly, while in GCN and GraphSAGE-off, only the clustering module is trained. Table 4 shows that the GraphSAGE-on (last row) achieves the best performance and generalizes better than the other models. Note that this model is trained on a single graph, VGG with only 1325 nodes, and is tested on AlexNet, ResNet, and Inception-v3 with 798, 20586, and 27114 nodes respectively. On the other hand, the ground truth is the of running different partitioning algorithms on each graph individually. In this work, our goal is not to beat the existing graph partitioning algorithms which involve a lot of heuristics on a given graph. Our generalization show promise that rather than using heuristics, CNC is able to learn graph structure for generalizable graph partitioning. Model Architecture and Hyper-parameters: The details of the model with the best performance (GraphSAGE-on) are as follows: the input feature dimension is 1518. GraphSAGE has 5 layers sized 512 with shared pooling, and the graph clustering module has 3 layers sized 64 with a final softmax layer. We use ReLU, Xavier initialization , and Adam with lr = 7.5e-5. | We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. We define a differentiable loss function equivalent to the expected normalized cuts. | 1,084 | scitldr |
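As a usage note for the CNC model details listed in the row above, here is a minimal sketch of the clustering module: a small fully connected network over the Siamese embeddings followed by a Gumbel-Softmax layer whose temperature is annealed during training. It assumes PyTorch, and the layer sizes are placeholders rather than the exact values reported.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusteringHead(nn.Module):
    def __init__(self, embed_dim: int, hidden: int, num_clusters: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, num_clusters),
        )

    def forward(self, v: torch.Tensor, tau: float = 1.5) -> torch.Tensor:
        # Returns soft assignment probabilities Y, one row per data point.
        return F.gumbel_softmax(self.net(v), tau=tau, hard=False)

# Inference: the cluster of a point is the arg max of its assignment vector.
# y = head(v); cluster_id = y.argmax(dim=-1)
```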
We introduce the largest (among publicly available) dataset for Cyrillic Handwritten Text Recognition and the first dataset for Cyrillic Text in the Wild Recognition, and suggest a method for recognizing Cyrillic Handwritten Text and Text in the Wild. Based on this approach, we develop a system that can reduce the document processing time for one of the largest mathematical competitions in Ukraine by 12 days and the amount of used paper by 0.5 ton. Text is one of the main ways to transfer information, and it can take many forms. It can be handwritten or printed, in the form of business documents, notes, bills, historical documents, advertisements, logos, etc. Therefore, the method for its recognition should be flexible enough to work with different text styles and under different external conditions. Although the task of text recognition is well studied for the English language, for Cyrillic languages such studies are almost missing, the main reason being the lack of extensive publicly available datasets. To the best of our knowledge, the only public Cyrillic dataset consists only of individual letters, while others are unavailable. In our research, we focus on developing a single model for Handwritten Text Recognition and Text Recognition in the Wild, the latter as the extreme case of printed text. Currently, the amount of data provided by the existing Cyrillic datasets for text recognition is insufficient for training deep networks. That is why we have created a synthetic dataset and annotated datasets for Handwritten Text and Text in the Wild recognition. We approach the problem of generating a synthetic Cyrillic text recognition dataset by adapting the idea proposed by Jaderberg et al. to our Cyrillic setup. For the text sampling stage, a word is randomly sampled from the UberText Corpus with the requirement that it includes only Cyrillic letters. For the font rendering stage, a font is randomly selected from the subset of fonts from UKR-Fonts, Google Fonts, and Font Squirrel that includes all glyphs needed to render the sampled word. In total, the dataset consists of 881309 samples of 180994 different words (Figure 1). As mentioned above, at the moment there are no existing datasets for Text in the Wild recognition of Cyrillic words. We addressed this problem by annotating photos from the Internet depicting different cities all around Ukraine. The dataset consists of 505 samples in different orientations from 151 different photos and contains 454 different words in total (Figure 1). The extended version of this dataset contains not only the word rectangles but also their locations in the images (although this information is not used in this research). The handwritten dataset was collected by processing data from one of the largest mathematical competitions in Ukraine. The dataset is extracted from the forms that were filled in by children aged 7 to 18 from different parts of Ukraine and contains mainly their surnames. It consists of 82061 samples and 37007 words (Figure 1) and is divided into 3 parts (train 60%, validation 20%, and test 20%), with the same distribution of each word in each part of the dataset. We developed a single model that was pretrained on synthetic Cyrillic text and then fine-tuned on a mixture of synthetic Cyrillic text and handwritten Cyrillic text. The architecture consists of two Conv blocks with CReLU and ReLU followed by six Conv blocks with Instance Normalisation and Leaky ReLU. 
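The synthetic-sample generation described earlier in this row (sample a Cyrillic word, pick a font covering its glyphs, render an image) can be sketched as follows. This is not the authors' pipeline: it assumes Pillow, a placeholder font path and word list, and omits the glyph-coverage check and any visual augmentations.

```python
import random
from PIL import Image, ImageDraw, ImageFont

def render_word(word: str, font_paths, size=(256, 64)) -> Image.Image:
    font_path = random.choice(font_paths)          # assumed to contain all needed glyphs
    font = ImageFont.truetype(font_path, 40)
    img = Image.new("L", size, color=255)          # white grayscale canvas
    ImageDraw.Draw(img).text((8, 8), word, font=font, fill=0)
    return img

words = ["конкурс", "властивість"]                 # placeholder corpus samples
sample = render_word(random.choice(words), ["fonts/example.ttf"])  # placeholder font path
```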
The obtained word error rate and character error rate are reported in Table 1. The method was used to develop a new system for processing the responses of participants in one of the largest math competitions in Ukraine (International Mathematics Contest "Kangaroo"). The new system makes it possible to remove the part of the form that participants used for manual annotation of their surnames and to reduce the form size from B5 to A5. The competition is organized twice a year and attracts around 500,000 participants each time. The new system will reduce the processing (scanning) time by 12 days and the amount of used paper by 0.5 tons in each round. | We introduce several datasets for Cyrillic OCR and a method for its recognition | 1,085 | scitldr
Most deep learning for NLP represents each word with a single point or single-mode region in semantic space, while the existing multi-mode word embeddings cannot represent longer word sequences like phrases or sentences. We introduce a phrase representation (also applicable to sentences) where each phrase has a distinct set of multi-mode codebook embeddings to capture different semantic facets of the phrase's meaning. The codebook embeddings can be viewed as the cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We propose an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence (e.g., a phrase or a sentence) during test time. We find that the per-phrase/sentence codebook embeddings not only provide a more interpretable semantic representation but also outperform strong baselines (by a large margin in some tasks) on benchmark datasets for unsupervised phrase similarity, sentence similarity, hypernym detection, and extractive summarization. Many widely-applicable NLP models learn a representation from only co-occurrence statistics in the raw text without any supervision. Examples include word embedding like word2vec or GloVe , sentence embeddings like skip-thoughts , and contextualized word embedding like ELMo and BERT . Most of these models use a single embedding to represent one sentence or one phrase and can only provide symmetric similarity measurement when no annotation is available. However, a word or phrase might have multiple senses, and a sentence can involve multiple topics, which are hard to analyze based on a single embedding without supervision. To address the issue, word sense induction methods and recent multi-mode word embeddings (; ;) represent each target word as multiple points or regions in a distributional semantic space by (explicitly or implicitly) clustering all the words appearing beside the target word. In Figure 1, the multi-mode representation of real property is illustrated as an example. Real property can be observed in legal documents where it usually means a real estate, while a real property can also mean a true characteristic in philosophic discussions. The previous approaches discover those senses by clustering observed neighboring words (e.g., company and tax). In contrast with topic modeling like LDA , the approaches need to solve a distinct clustering problem for every target word while topic modeling finds a single set of clusters by clustering all the words in the corpus. Extending these multi-mode representations to arbitrary sequences like phrases or sentences is difficult due to two efficiency challenges. First, there are usually many more unique phrases and sentences in a corpus than there are words, while the number of parameters for clustering-based approaches is O(|V | × |K| × |E|), where |V | is number of unique sequences, |K| is number of modes/clusters, and |E| is the number of embedding dimensions. Estimating and storing such a large number of parameters take time and space. More important, many unique sequences imply much fewer co-occurring words to be clustered for each sequence, especially for long sequences Figure 1: The target phrase real property is represented by four clustering centers. The previous work discovers the four modes by finding clustering centers which well compress the embedding of observed co-occurring words. 
Instead, our compositional model learns to predict the embeddings of cluster centers from the sequence of words in the target phrase so as to reconstruct the (unseen) co-occurring distribution well. like sentences, so an effective model needs to overcome this sample efficient challenge (i.e., sparseness in the co-occurring statistics). However, clustering approaches often have too many parameters to learn the compositional meaning of each sequence without overfitting. Nevertheless, the sentences (or phrases) sharing multiple words tend to have similar cluster centers, so we should be able to compress many redundant parameters in these local clustering problems to circumvent the challenges. In this work, we adopt a neural encoder and decoder to achieve the goal. As shown in Figure 1, instead of clustering co-occurring words beside a target sequence at test time as in previous approaches, we learn a mapping between the target sequence (i.e., phrases or sentences) and the corresponding cluster centers during training so that we can directly predict those cluster centers using a single forward pass of the neural network for an arbitrary unseen input sequences during testing. To allow the neural network to generate the cluster centers in an arbitrary order, we use a nonnegative and sparse coefficient matrix to dynamically match the sequence of predicted cluster centers and the observed set of co-occurring word embeddings during training. After the coefficient matrix is estimated for each input sequence, the gradients are back-propagated to cluster centers (i.e., codebook embeddings) and weights of decoder and encoder, which allows us to train the whole model jointly and end-to-end. In experiments, we show that the proposed model captures the compositional meanings of words in unsupervised phrase similarity tasks much better than averaging their (contextualized) word embeddings, strong baselines that are widely used in practice. In addition to similarity, our model can also measure asymmetric relations like hypernymy without any supervision. Furthermore, the multimode representation is shown to outperform the single-mode alternatives in sentence representation, especially as demonstrated in our extractive summarization experiment. In this section, we first formalize our training setup in Section 2.1. Next, our objective function and the architecture of our prediction mode will be described in Section 2.2 and Section 2.3, respectively. The approach is summarized in Figure 2 using an example sentence. Figure 2: Our model for sentence representation. We represent each sentence as multiple codebook embeddings (i.e., clustering centers) predicted by our sequence to embeddings model. Our loss encourages the model to generate codebook embeddings whose linear combination can well reconstruct the embeddings of co-occurring words (e.g., Music), while not able to reconstruct the negatively sampled words (i.e., the co-occurring words from other sentences) to avoid predicting common topics which co-occur with every sentence (e.g., is in this example). We express tth sequence of words in the corpus as I t = w xt...w yt <eos>, where x t and y t are the start and end position of the target sequence, respectively, and <eos> is the end of sequence symbol. We assume neighboring words beside each target phrase or sentence are related to some aspects of the sequence, so given I t as input, our training signal is to reconstruct a set of neighboring words, is a fixed window size. 
For sentence representation, N t is the set of all words in the previous and the next sentence. Notice that the training signal for sentences and phrases are different, which means we need to train one model for phrase and one model for sentence if both representations are desired. Since there are not many co-occurring words for a long sequence (none are observed for unseen testing sequences), our goal is to cluster the set of words which could "possibly" occur beside the sequence instead of the actual occurring words in the training corpus (e.g., the hidden co-occurring distribution instead of green and underlined words in Figure 2). Although most of the possibly co-occurring words are not observed in the corpus, we can still learn to predict them by observing co-occurring words from similar sequences. To focus on the semantics rather than syntax, we view the co-occurring words as a set rather than a sequence as in skip-thoughts . Notice that our model considers the word order information in the input sequence I t, but ignores the order of the co-occurring words N t. In this work, we model the distribution of co-occurring words in a pre-trained word embedding space. The embeddings of co-occurring words N t are arranged into a matrix W (N t) = [w t j] j=1...|Nt| with size |E| × |N t |, where |E| is the dimension of pre-trained word embedding, and each of its column w t j is a normalized word embedding whose 2-norm is 1. The normalization makes the cosine distance between two words become half of their Euclidean distance. Similarly, we denote the predicted cluster centers c t k of the input sequence I t as a |E| × K matrix where F is our neural network model and K is the number of clusters. We fix the number of clusters K in this work to simplify the design of our prediction model and how it is applied to downstream tasks. The effect of different cluster numbers will be discussed in the experimental section. The reconstruction loss of k-means clustering in the word embedding space can be written as, where M k,j = 1 if the jth word belongs to the k cluster and 0 otherwise. That is, M is a permutation matrix which matches the cluster centers and co-occurring words and allow the neural network to generate the centers in an arbitrary order. Non-negative sparse coding (NNSC) relaxes the constraints by allowing the coefficient M k,j to be a positive value but encouraging it to be 0. In this work, we adopt the relaxation because for all neural architectures (including transformers and LSTMs) we tried, the models using a NNSC loss learn to generates diverse K cluster centers while the predicted cluster centers using the kmeans loss collapse to much fewer modes which cannot well capture the conditional co-occurrence distribution. We hypothesize that it is because a NNSC loss is smoother and easier to be optimized for a neural network, while finding the nearest cluster center in the kmeans loss cannot stably encourage predicted clusters to play different roles for reconstructing the embeddings of observed co-occurring words. Using NNSC, we define our reconstruction error as where λ is a hyper-parameter controlling the sparsity of M. We force the coefficient value M k,j ≤ 1 to avoid the neural network learning to predict centers with small magnitudes which makes the optimal values of M k,j large and unstable. Having multiple outputs and estimating the permutation between the prediction and ground truth words is often computationally expensive . 
However, the proposed loss is efficient because we minimize the L2 distance in a pre-trained embedding space as in rather than using softmax, and M Ot can be efficiently estimated on the fly using convex optimization (we use RMSprop in our implementation). After M Ot is estimated, we can treat it as a constant and back-propagate the gradients to our neural network to achieve end-to-end training. To prevent the neural network from predicting the same global topics regardless of the input, our loss function for tth sequence is defined as where N rt is a set of co-occurring words of a randomly sampled sequence I rt. In our experiment, we use SGD to solve F = arg min F t L t (F). Our method could be viewed as a generalization of Word2Vec that can encode the compositional meaning of the words and decode multiple embeddings. Our neural network architecture is similar to transformation-based sequence to sequence (seq2seq) model . We use the same encoder T E(I t), which transforms the input sequence into a contextualized embeddings where the goal of the encoder is to map the sentences which are likely to have similar co-occuring word distribution closer together. Different from the typical seq2seq model , our decoder does not need to make discrete decisions because our outputs are a sequence of embeddings instead of words. This allows us to predict all the codebook embeddings in a single forward pass while still well capturing the dependency between output without the need of auto-regressive decoding. Similar to BERT, we treat the embedding of <eos> as the sentence representation. To make different codebook embeddings capture different aspects, we pass the embeddings of <eos> to different linear layers L k before becoming the input of the decoder T D. Specifically, the codebook embeddings We find that removing the attention on the e xt...e yt, contextualized word embeddings from the encoder, significantly increases our validation loss for sentence representation because there are often too many facets to be compressed into a single embedding. On the other hand, the attention does not change the performance of phrase representation too much, and we remove the attention connection between encoder and decoder (i.e., encoder and decoder have the same architecture) when evaluating our phrase representation. Notice that the framework is flexible. We can replace the encoder and decoder with other architectures. Besides transformers, we also try (bi-)LSTMs in our experiments. This flexibility also allows us to incorporate other input features (e.g., the author who writes the sentence). We first visualize the cluster centers predicted by our model in Table 1 (like we visualize the meaning of the red cluster center in Figure 2 using the word song or Music). The centers summarize the target sequence well and more codebook embeddings capture more semantic facets of a phrase or a sentence. Due to the difficulty of evaluating the topics conditioned on the input sequence using the classic metrics for global topic modeling, we show that the codebook embeddings can be used to improve the performances of various unsupervised semantic tasks, which indirectly measures the quality of the generated topics. We use the cased version (840B) of pre-trained GloVe embedding for sentence representation and use the uncased version (42B) for phrase representation. 2 Our model is trained on Wikipedia 2016 while the stop words are removed from the set of co-occurring words. 
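A minimal sketch of the NNSC reconstruction error described above: the coefficient matrix M is fit on the fly with a few projected RMSprop steps (kept non-negative and at most 1), then treated as a constant so that gradients flow only into the predicted codebook embeddings. It assumes PyTorch; the step size, iteration count, and sparsity weight are illustrative.

```python
import torch

def nnsc_reconstruction_error(S, W, lam=0.4, steps=50, lr=0.05):
    """S: (E, K) predicted codebook embeddings; W: (E, N) normalized co-occurring words."""
    M = torch.zeros(S.shape[1], W.shape[1], device=S.device, requires_grad=True)
    opt = torch.optim.RMSprop([M], lr=lr)
    S_fixed = S.detach()
    for _ in range(steps):                         # inner convex estimation of M
        opt.zero_grad()
        loss = ((W - S_fixed @ M) ** 2).sum() + lam * M.abs().sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            M.clamp_(0.0, 1.0)                     # keep coefficients in [0, 1]
    M = M.detach()                                 # constant w.r.t. the network parameters
    return ((W - S @ M) ** 2).sum() + lam * M.sum()
```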
In the phrase experiments, we only consider noun phrases, and their boundaries are extracted by applying simple regular expression rules to POS tags before training. The sentence boundaries and POS tags are detected using spaCy. 3 Our models do not need resources such as PPDB or other multi-lingual resources, so our models are compared with the baselines that only use the raw text and sentence/phrase boundaries. This setting is particularly practical for the domains with low resources such as scientific literature. To control the effect of embedding size, we set the number of dimensions in our transformers as the GloVe embedding size. Limited by computational resources, we train all the models using one modern GPU within a week. Because of the relatively small model size, we find that our models underfit the data after a week (i.e., the training loss is very close to the validation loss). It is hard to make a fair comparison with BERT . BERT is trained on a masked language modeling loss, which preserves more syntax information and can produce an effective pretrained embedding for many supervised downstream tasks. We report the performance of the BERT base model, which is still trained using more parameters, more output dimensions, larger corpus, and more computational resources during training compared with our models. Furthermore, BERT uses a word piece model to alleviate the out-of-vocabulary problem. Nevertheless, we still provide its unsupervised performances based on cosine similarity as a reference. Semeval 2013 task 5(a) English and Turney 2012 are two standard benchmarks for evaluating phrase similarity . BiRD and WikiSRS , which are recently collected, contain ground truth phrase similarities derived from human annotations. The task of Semeval 2013 is to distinguish similar phrase pairs from dissimilar phrase pairs. In Turney, given each query bigram, the goal is to identify the unigram that is most similar to the query bigram among 5 unigram candidates, 5 and Turney adds 5 more negative phrase pairs by pairing the reverse of bigram with the 5 unigrams. BiRD and WikiSRS-Rel measure the relatedness of phrases and WikiSRS-Sim measures the similarity of phrases. For our model, we evaluate two scoring functions that measure phrase similarity. The first way averages the contextualized word embeddings from our transformer encoder as the phrase embedding and computes the cosine similarity between two phrase embeddings. We label the method as Ours Emb. The similar phrases should have similar multi-facet embeddings, so we compute the reconstruction error from the set of normalized codebook embeddings of one phrase S 1 q to the embeddings of the other phrase S 2 q, vice versa, and add them together to become a symmetric distance SC: where When ranking for retrieving similar phrases, we use the negative distance to represent similarity. We compare our performance with 5 baselines. GloVe Avg and Word2Vec Avg compute the cosine similarity between two averaged word embeddings, which has been shown to be a strong baseline . BERT CLS and BERT Avg are the cosine similarities between CLS embeddings and between the averages of contextualized word embeddings from BERT , respectively. FCT LM Emb learns the weights of linearly combining word embeddings based on several linguistic features. The performances are presented in Table 2. SemEval 2013 and Turney have training and testing split and the performances in test sets are reported. Our models significantly outperform all baselines in the 4 datasets. 
Furthermore, our strong performances in Turney verify that our encoder incorporates the word order information when producing phrase embeddings. The indicate the effectiveness of non-linearly composing word embeddings (unlike GloVe, Word2Vec, and FCT baselines) in order to predict the set of co-occurring word embeddings (unlike the BERT baselines). The performance of Ours (K=1) is usually slightly better than Ours (K=10). This supports the finding of that multi-mode embeddings may not improve the performance in word similarity benchmarks even if they capture more senses or aspects of polysemies. Even though being slightly worse, the performances of Ours (K=10) remain strong compared with baselines. This indicates that the similarity performance is not sensitive to the number of clusters, which alleviates the problem of selecting K in practice. STS benchmark is a widely used sentence similarity task. Each model predicts a semantic similarity score between each sentence pair, and the scores are compared with average similarity annotations using the Pearson correlation coefficient. Intuitively, when two sentences are less similar to each other, humans tend to judge the similarity based on how they are similar in each aspect. Thus, we also compare the performances on lower half the datasets where their ground truth similarities are less than the median similarity score, and we call this benchmark STSB Low. In addition to BERT CLS, BERT Avg, and GloVe Avg, we compare our method with word mover's distance (WMD) and cosine similarity between skip-thought embeddings (ST Cos) . propose to weight the word w in each sentence according to α α+p(w), where α is a constant and p(w) is the probability of seeing the word w in the corpus. Following its recommendation, we set α to be 10 −4 in STS benchmark. After the weighting, we adopt the post-processing method from to remove the first principal component that is estimated using the training distribution and denote the method as GloVe SIF. The post-processing is not desired in some applications , so we also report the performance before removing principal components, which is called GloVe Prob_avg. The strong performance of (weighted) average embedding suggests that we should consider the embeddings of words in the sentence in addition to the sentence embedding(s) when measuring the sentence similarity. This is hard to be achieved in other sentence representation methods because their sentence embeddings and word embeddings are in different semantic spaces. Since our multi-facet embeddings are in the same space of word embeddings, we can use the multifacet embeddings to estimate the word importance (in terms of predicting possible co-occurring words beside the sentence). To compute the importance of a word in the sentence, we first compute the cosine similarity between the word and all predicted codebook embeddings, truncate the negative similarity to 0, and sum all similarity. Specifically, our simple importance/attention weighting for all the words in the query sentence S q is defined by where 1 is an all-one vector. The importance weighting is multiplied with the original weighting vectors on GloVe Avg (uniform weight), GloVe Prob_avg, and GloVe SIF to generate the of Our Avg, Our Prob_avg, and Our SIF, respectively. We compare all the in the development set and test set in Table 3. 
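The importance/attention weighting defined above has a direct implementation: a word's weight is the sum of its non-negative cosine similarities to the sentence's predicted codebook embeddings, and this weight rescales the per-word weights used by the averaging baselines. A minimal sketch assuming PyTorch and unit-normalized vectors.

```python
import torch

def word_importance(word_vecs: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """word_vecs: (n_words, E); codebook: (K, E); rows normalized to unit length."""
    sims = word_vecs @ codebook.t()          # cosine similarities, shape (n_words, K)
    return sims.clamp_min(0.0).sum(dim=1)    # truncate negatives at 0, sum over clusters

# e.g. importance-weighted average embedding of a sentence:
# weights = word_importance(V, C)
# sent_vec = (weights[:, None] * V).sum(dim=0) / weights.sum()
```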
Ours SC, which matches between two sets of topics outperforms WMD, which matches between two sets of words in the sentence, and also outperforms BERT Avg, especially in STSB Low. All the scores in Ours (K=10) are significantly better than Ours (K=1),which demonstrates the benefits of multi-mode representation. Multiplying the proposed attention weighting boosts the performance from (weighted) averagingbased methods especially in STSB Low and when we do not rely on the generalization assumption of our training distribution. We also test a variant of our method which uses a bi-LSTM as the encoder and a LSTM as the decoder. Its average validation loss (-0.1289) in equation 2 is significantly higher than that of the transformer alternative (-0.1439), and it performances on Table 3 are also worse. Notice the architecture of this variant is similar to skip-thoughts except that skip-thoughts decodes a sequence instead of a set. The variant significantly outperforms ST Cos, which further justifies our approach of ignoring the order of co-occurring words in our NNSC loss. We apply our model to HypeNet , an unsupervised hypernymy detection dataset, based on an assumption that the co-occurring words of a phrase are often less related to some of its hyponyms. For instance, fly is a co-occurring word of animal which is less related to brown dog. Thus, the predicted codebook embeddings of a hyponym S hypo q (e.g., brown dog), which cluster the embeddings of co-occurring words (e.g., eats), often reconstruct the embeddings of its hypernym S hyper q (e.g., animal) better than the other way around (i.e., Er( Based on the assumption, our asymmetric scoring function is defined as The AUC of detecting hypernym among other relations and accuracy of detecting the hypernym direction are compared in Table 4 . Our methods outperform baselines, which only provide symmetric similarity measurement, and Ours (K=1) performs similarly compared with Ours (K=10). A good summary should cover multiple aspects that well represent all topics/concepts in the whole document. The objective can be quantified as discovering a summary A with a set of normalized embeddings C(A) which best reconstructs the distribution of normalized word embedding w in the document D . That is, where γ w is the importance of the word w, which is set as α α+p(w) as we did in Section 3.2. We further assume that the summary A consists of T sentences A 1... A T and the embedding set of the summary is the union of the embedding sets of the sentences C(A) = ∪ T t=1 C(A t), and we greedily select sentences to optimize equation 8 as did in. This extractive summarization method provides us a way to evaluate the embedding(s) of sentence. Our model can generate multiple codebook embeddings, which capture its different aspects as we see in Table 1, to represent each sentence in the document, so we let C(A t) = {F u (A t)}, a set of column vectors in the matrix F u (A t). We compare our approach with other alternative ways of modeling the aspects of sentences. For example, we can compute average word embeddings as a single-aspect sentence embedding. This baseline is labeled as Sent Emb. We can also use the embedding of all the words in the sentences as different aspects of the sentences. Since longer sentences have more words, we normalize the gain of the reconstruction loss by the sentence length. The method is denoted as W Emb. In contrast, the fixed number of codebook embeddings in our method avoids the problem. 
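The greedy sentence selection used to optimize the summarization objective above can be sketched as follows, before the remaining baselines are listed below. This is only an illustration: it assumes NumPy and a caller-supplied recon_error function (any reconstruction loss of the weighted document word embeddings from a set of codebook vectors, for example an NNSC-style error), and it ignores the per-length normalization used by the W Emb baseline.

```python
import numpy as np

def greedy_summary(sent_codebooks, doc_vecs, gamma, recon_error, T=3):
    """sent_codebooks: list of (K, E) arrays, one per sentence; doc_vecs: (n_words, E)."""
    chosen = []
    current = np.zeros((0, doc_vecs.shape[1]))
    for _ in range(T):
        errors = [
            np.inf if i in chosen
            else recon_error(np.vstack([current, C]), doc_vecs, gamma)
            for i, C in enumerate(sent_codebooks)
        ]
        best = int(np.argmin(errors))          # sentence giving the largest error reduction
        chosen.append(best)
        current = np.vstack([current, sent_codebooks[best]])
    return chosen
```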
We also test the baselines of selecting random sentences (Rnd) and first n sentences (Lead) in the document. The on the testing set of CNN/Daily Mail are compared using F1 of ROUGE in Table 5. R-1, R-2, and Len mean ROUGE-1, ROUGE-2, and average summary length, respectively. All methods choose 3 sentences by following the setting in. Unsup, No Order means the methods do not use the sentence position information in the documents. In CNN/Daily Mail or other English news corpora, the sentence order information is a very strong signal. For example, the unsupervised methods such as Lead-3 are very strong baselines with performances similar to supervised methods such as RL , a state-of-the-art approach in this evaluation. To evaluate the quality of unsupervised sentence embeddings, we focus on comparing the unsupervised methods which do not assume the first few sentences form a good summary. In Table 5, predicting more aspects (i.e., using higher cluster numbers K) yields better , and setting K = 100 gives us the best performance after selecting 3 sentences. This demonstrates that larger cluster numbers K is desired in this application. Our method allows us to set K to be a relatively large number because of greatly alleviating the computational and sample efficiency challenges. Topic modeling has been extensively studied and widely applied due to its interpretability and flexibility of incorporating different forms of input features .; demonstrate that neural networks could be applied to discover semantically coherent topics. However, instead of optimizing a global topic model, our goal is to jointly and efficiently discovering different sets of topics/clusters on the small subsets of words that co-occur with target phrases or sentences. Sparse coding on word embedding space is used to model the multiple aspects of a word , and parameterizing word embeddings using neural networks is used to test hypothesis and save storage space . Besides, to capture asymmetric relations such as entailment, words are represented as single or multiple regions in Gaussian embeddings rather than a single point. However, the challenges of extending these methods to longer sequences are not addressed in these studies. One of our main challenges is to design a neural decoder for a set rather a sequence while modeling the dependency between the elements. This requires a matching step between two sets and compute the distance loss after the matching . One popular loss is called Chamfer distance, which is widely adopted in the auto-encoder models for point clouds , while more sophisticated matching loss options are also proposed . The goal of the studies focus on measuring symmetric distances between the ground truth set and predicted set (usually with the equal size), while our set decoder tries to reconstruct a set using a set of much fewer bases. Other ways to achieving the permutation invariants loss for neural networks includes removing the elements in the ground truth set which have been predicted , beam search , or predicting the permutation using a CNN , a transformer or reinforcement learning . In contrast, our goal is to efficiently predict a set of clustering centers that can well reconstruct the set of observed instances instead of predicting the set or sequence of the observed instances. In this work, we overcome the computational and sampling efficiency challenges of learning the multi-mode representation for long sequences like phrases or sentences. 
We use a neural encoder to model the compositional meaning of the target sequence and use a neural decoder to predict a set of codebook embeddings as the representation of the sentences or phrases. During training, we use a non-negative sparse coefficient matrix to dynamically match the predicted codebook embeddings to a set of observed co-occurring words and allow the neural decoder to predict the clustering centers with an arbitrary permutation. We demonstrate that the proposed models can learn to predict interpretable clustering centers conditioned on an (unseen) sequence, and the representation outperforms widely-used baselines such as BERT, skip-thoughts and various approaches based on GloVe in several unsupervised benchmarks. The experimental also suggest that multi-facet embeddings perform the best when the input sequence (e.g., a sentence) involves many aspects, while multi-facet and single-facet embeddings perform similarly good when the input sequence (e.g., a phrase) usually involves only one aspect. In the future, we would like to train a single model which could generate multi-facet embeddings for both phrases and sentences, and evaluate the method as a pre-trained embedding approach for supervised or semi-supervised settings. Furthermore, we plan to apply this method to other unsupervised learning tasks that heavily rely on co-occurrence statistics such as graph embedding or recommendation. Given the computational resource constraints, we keep our model simple enough to have a nearly converged training loss after 1 or 2 epoch(s). Since training takes a long time, we do not fine-tune the hyper-parameters in our models. We use a much smaller model compared with BERT but the architecture details in our transformer and most of its hyper-parameters are the same as the ones used in BERT. The sparsity penalty weights on coefficient matrix λ is set to be 0.4. The maximal sentence size is set to be 50 and we ignore the sentences longer than that. The maximal number of co-occurring words is set to be 30 (after removing the stop words), and we sub-sample the words if there are more words in the previous and next sentence. The number of dimensions in transformers is set to be 300. For sentence representation, the number of transformer layers on the decoder side is 5 and dropout on attention is 0.1 for K = 10 and the number of transformer layer on the decoder side is set to be 1 for K = 1 because we do not need to model the dependency of output basis. For phrase representation, the number of transformer layers on the decoder side is 2 and the dropout on attention is 0.5. The window size d t is set to be 5. We will release the code to reveal more hyper-parameter setting details. All the architecture and hyperparameters (except the number of codebook embeddings) in our models are determined by the validation loss of the self-supervised co-occurring word reconstruction task. The number of codebook embeddings K is chosen by the performance of training data in each task, but we observe that the performances are usually not sensitive to the numbers as long as K is large enough. We also suspect that the slight performance drops of models with too large K might just be caused by the fact that larger K needs longer training time and 1 week of training is insufficient to make the model converges. For skip-thoughts, the hidden embedding of is set to be 600. To make the comparison fair, we retrain the skip-thoughts in Wikipedia 2016 for 2 weeks. 
As mentioned in Section 3, our model has fewer parameters than the BERT base model and uses much less computational resources for training, so we only present the BERT base performance in the experiment sections. Nevertheless, we still wonder how well BERT large can perform in these unsupervised semantic tasks, so we compare our method with BERT Large in Table 6, Table 7, and Table 8. As we can see, BERT large is usually better than BERT base in the similarity tasks, but perform worse in the hypernym detection task. The performance gains of BERT in similarity tasks might imply that training a larger version of our model might be a promising future direction. Although increasing the model size boosts the performance of BERT, our method is still much better in most of the cases, especially in the phrase similarity tasks. One of the main reasons we hypothesize is that BERT is trained by predicting the masked words in the input sequence and the objective function might not be good if sequences are short (like phrases). C SUMMARIZATION COMPARISON GIVEN THE SAME SUMMARY LENGTH In Section 3.4, we compare our methods with other baselines when all the methods choose the same number of sentences. We suspect that the bad performances for W Emb (*) methods (i.e., representing each sentence using the embedding of words in the sentence) might come from the tendency of selecting shorter sentences. 6 To verify the hypothesis, we plot the R-1 performance of different unsupervised summarization methods that do not use sentence order information versus the sentence length in Figure 3. In the figure, we first observe that Ours (K=100) significantly outperforms W Emb (GloVe) and Sent Emb (GloVe) when summaries have similar length. In addition, we find that W Emb (*) actually usually outperform Sent Emb (*) when comparing the summaries with a similar length. Notice that this comparison might not be fair because W Emb (*) are allowed to select more sentences given the same length of summary and it might be easier to cover more topics in the document using more sentences. In practice, preventing choosing many short sentences might be preferable in an extractive summarization if fluency is an important factor. Nevertheless, if our goal is simply to maximize the ROUGE F1 score given a fixed length of summary without accessing the ground truth summary and sentence order information, the figure indicates that Ours (K=100) is the best choice when the summary length is less than around 50 words and W Emb (BERT) becomes the best method for a longer summary. The BERT in this figure is the BERT base model. The mixed suggest that combining our method with BERT in a way might be a promising direction to get the best performance in this task (e.g., use contextualized word embedding from BERT as our pre-trained word embedding space). We visualize predicted embeddings from 10 randomly selected sentences in our validation set. The format of the file is similar to Table 1. The first line of an example is always the prepossessed input sentence, where <unk> means an out-of-vocabulary placeholder. The embedding in each row is visualized by the nearest five neighbors in a GloVe embedding space and their cosine similarities to the vector. | We propose an unsupervised way to learn multiple embeddings for sentences and phrases | 1,086 | scitldr |
We present a meta-learning approach for adaptive text-to-speech (TTS) with few data. During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker. The aim of training is not to produce a neural network with fixed weights, which is then deployed as a TTS system. Instead, the aim is to produce a network that requires few data at deployment time to rapidly adapt to new speakers. We introduce and benchmark three strategies: (i) learning the speaker embedding while keeping the WaveNet core fixed, (ii) fine-tuning the entire architecture with stochastic gradient descent, and (iii) predicting the speaker embedding with a trained neural network encoder. The experiments show that these approaches are successful at adapting the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from new speakers. Training a large model with lots of data and subsequently deploying this model to carry out classification or regression is an important and common methodology in machine learning. It has been particularly successful in speech recognition, machine translation BID1 and image recognition BID2 BID3. In this text-to-speech (TTS) work, we are instead interested in few-shot meta-learning. Here the objective of training with many data is not to learn a fixed-parameter classifier, but rather to learn a "prior" neural network. This prior TTS network can be adapted rapidly, using few data, to produce TTS systems for new speakers at deployment time. That is, the intention is not to learn a fixed final model, but rather to learn a model prior that harnesses few data at deployment time to learn new behaviours rapidly. The output of training is no longer a fixed model, but rather a fast learner. Biology provides motivation for this line of research. It may be argued that evolution is a slow adaptation process that has resulted in biological machines with the ability to adapt rapidly to new data during their lifetimes. These machines are born with strong priors that facilitate rapid learning. We consider a meta-learning approach where the model has two types of parameters: task-dependent parameters and task-independent parameters. During training, we learn all of these parameters but discard the task-dependent parameters for deployment. The goal is to use few data to learn the task-dependent parameters for new tasks rapidly. Task-dependent parameters play a similar role to latent variables in classical probabilistic graphical models. Intuitively, these variables introduce flexibility, thus making it easier to learn the task-independent parameters. For example, in classical HMMs, knowing the latent variables results in a simple learning problem of estimating the parameters of an exponential-family distribution. In neural networks, this approach also facilitates learning when there is clear data diversity and categorization. We show this for adaptive TTS BID4 BID5. In this setting, speakers correspond to tasks. During training we have many speakers, and it is therefore helpful to have task-dependent parameters to capture speaker-specific voice styles. At the same time, it is useful to have a large model with shared parameters to capture the generic process of mapping text to speech. To this end, we employ the WaveNet model.
WaveNet BID6 is an autoregressive generative model for audio waveforms that has yielded state-of-art performance in speech synthesis. This model was later modified for real-time speech generation via probability density distillation into a feed-forward model BID7. A fundamental limitation of WaveNet is the need for hours of training data for each speaker. In this paper we describe a new WaveNet training procedure that facilitates adaptation to new speakers, allowing the synthesis of new voices from no more than 10 minutes of data with high sample quality. We propose several extensions of WaveNet for sample-efficient adaptive TTS. First, we present two non-parametric adaptation methods that involve fine-tuning either the speaker embeddings only or all the model parameters given few data from a new speaker. Second, we present a parametric textindependent approach whereby an auxiliary network is trained to predict new speaker embeddings. The experiments will show that all the proposed approaches, when provided with just a few seconds or minutes of recording, can generate high-fidelity utterances that closely resemble the vocal tract characteristics of a demonstration speaker, particularly when the entire model is fine-tuned end-to-end. When fine-tuning by first estimating the speaker embedding and subsequently fine-tuning the entire model, we achieve state-of-the-art in terms of sample naturalness and voice similarity to target speakers. These are robust across speech datasets recorded under different conditions and, moreover, we demonstrate that the generated samples are capable of confusing the state-of-the-art text-independent speaker verification system BID8.TTS techniques require hours of high-quality recordings, collected in controlled environments, for each new voice style. Given this high cost, reducing the length of the training dataset could be valuable. For example, it is likely to be very beneficial when attempting to restore the voices of patients who suffer from voice-impairing medical conditions. In these cases, long high quality recordings are scarce. WaveNet is an autoregressive model that factorizes the joint probability distribution of a waveform, x = {x 1, . . ., x T}, into a product of conditional distributions using the probabilistic chain rule: DISPLAYFORM0 p(x t |x 1:t−1, h; w), where x t is the t-th timestep sample, and h and w are respectively the conditioning inputs and parameters of the model. To train a multi-speaker WaveNet, the conditioning inputs h consist of the speaker identity s, the linguistic features l, and the logarithmic fundamental frequency f 0 values. l encodes the sequence of phonemes derived from the input text, and f 0 controls the dynamics of the pitch in the generated utterance. Given the speaker identity s for each utterance in the dataset, the Figure 2: Training (slow, lots of data), adaptation (fast, few data) and inference stages for the SEA-ALL architecture. The components with bold pink outlines are fine-tuned during the adaptation phase. The purpose of training is to produce a prior. This prior is combined with few data during adaptation to solve a new task. This adapted model is then deployed in the final inference stage. model is expressed as: DISPLAYFORM1 where a table of speaker embedding vectors e s (Embedding in FIG0) is learned alongside the standard WaveNet parameters. 
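The two display equations referenced above were lost in extraction (the DISPLAYFORM placeholders); a hedged reconstruction from the surrounding definitions is given below, where the exact notation is ours.

```latex
% Autoregressive factorisation of the waveform likelihood
% (reconstruction of the first display equation):
p(\mathbf{x} \mid \mathbf{h}; \mathbf{w})
  = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}, \mathbf{h}; \mathbf{w})

% Multi-speaker conditioning (reconstruction of the second display equation):
% the conditioning inputs are the linguistic features l, the fundamental
% frequency values f_0, and a learned per-speaker embedding e_s.
p(\mathbf{x} \mid \mathbf{l}, \mathbf{f}_0, s; \mathbf{w}, \{\mathbf{e}\})
  = \prod_{t=1}^{T} p(x_t \mid x_{1:t-1}, \mathbf{l}, \mathbf{f}_0, \mathbf{e}_s; \mathbf{w})
```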
These vectors capture salient voice characteristics across individual speakers, and provide a convenient mechanism for generalizing WaveNet to the few-shot adaptation setting in this paper. The linguistic features l and fundamental frequency values f 0 are both time-series with a lower sampling frequency than the waveform. Thus, to be used as local conditioning variables they are upsampled by a transposed convolutional network. During training, l and f 0 are extracted by signal processing methods from pairs of training utterance and transcript, and during testing, those values are predicted from text by existing models. In recent years, a large body of literature uses large datasets to train models to learn an input-output mapping that is then used for inference. In contrast, few-shot meta-learning introduces an additional step, adaptation. In this meta-learning setting, the purpose of training becomes to learn a prior. During adaptation, this prior is combined with few data to rapidly learn a new skill; in this case adapting to a new speakers' voice style. Finally, the new skill is deployed, which in this paper we are referring to as inference. These three stages -training, adaptation and inference -are illustrated in Figure 2.We present two multi-speaker WaveNet extensions for few-shot voice adaptation. First, we introduce a non-parametric model fine-tuning approach, which involves adapting either the speaker embeddings or all the model parameters using held-aside demonstration data. Second, and for comparison purposes, we use a parametric approach whereby an auxiliary network is trained to predict the embedding vector of a new speaker using the demonstration data. Inspired by few-shot learning we first pre-train a multi-speaker conditional WaveNet model on a large and diverse dataset, as described in Section 2. Subsequently, we fine-tune the model parameters by retraining with respect to held-aside adaptation data. Training this WaveNet model to maximize the conditional log-likelihood of the generated audio jointly optimizes both the set of speaker parameters {e s} and the shared WaveNet core parameters w. Next, we extend this method to a new speaker by extracting the l and f 0 features from their adaptation data and randomly initializing a new embedding vector e. We then optimize e such that the demonstration waveforms, {x DISPLAYFORM0 0,demo)}, are likely under the model with w fixed (SEA-EMB): DISPLAYFORM1 Alternatively, all of the model parameters may be additionally fine-tuned (SEA-ALL): DISPLAYFORM2 0,demo; e, w).Both methods are non-parametric approaches to few-shot voice adaptation as the number of embedding vectors scales with the number of speakers. However, the training processes are slightly different. Because the SEA-EMB method optimizes only a low-dimensional vector, it is far less prone to overfitting, and we are therefore able to retrain the model to convergence even with mere seconds of adaptation data. By contrast, the SEA-ALL has many more parameters that might overfit to the adaptation data. We therefore hold out 10% of our demonstration data for calculating a standard early termination criterion. We also initialize e with the optimal value from the SEA-EMB method, and we find this initialization significantly improves the generalization performance even with a few seconds of adaptation data. 
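The following is a minimal sketch of the two fine-tuning strategies just described: SEA-EMB optimises only the new speaker embedding with the WaveNet core frozen, while SEA-ALL subsequently fine-tunes all parameters with early stopping on a held-out split of the demonstration data. The wavenet object, its log_prob interface, and the optimiser settings are hypothetical stand-ins, not the authors' implementation.

```python
import torch

def adapt_sea_emb(wavenet, demo_batches, steps=5000, lr=1e-3):
    """SEA-EMB: fit a new speaker embedding e with the WaveNet core w frozen."""
    e = torch.zeros(wavenet.embedding_dim, requires_grad=True)  # embedding_dim is assumed
    opt = torch.optim.Adam([e], lr=lr)
    for _ in range(steps):
        for x, l, f0 in demo_batches:            # waveform, linguistic feats, normalised f0
            loss = -wavenet.log_prob(x, l, f0, speaker_embedding=e)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return e.detach()

def adapt_sea_all(wavenet, train_batches, valid_batches, e_init, lr=1e-4, patience=5):
    """SEA-ALL: fine-tune e and all WaveNet parameters, early-stopping on held-out data."""
    e = e_init.clone().requires_grad_(True)      # initialise from the SEA-EMB solution
    opt = torch.optim.Adam([e] + list(wavenet.parameters()), lr=lr)
    best, bad_epochs = float("inf"), 0
    while bad_epochs < patience:
        for x, l, f0 in train_batches:
            loss = -wavenet.log_prob(x, l, f0, speaker_embedding=e)
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                    # early-termination criterion on the 10% split
            val = sum(-wavenet.log_prob(x, l, f0, speaker_embedding=e)
                      for x, l, f0 in valid_batches)
        best, bad_epochs = (val, 0) if val < best else (best, bad_epochs + 1)
    return e.detach(), wavenet
```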
In contrast to the non-parametric approach, whereby a different embedding vector is fitted for each speaker, one can train an auxiliary encoder network to predict an embedding vector for a new speaker given their demonstration data. Specifically, we model: DISPLAYFORM0 where for each training example, we include a randomly selected demonstration utterance from that speaker in addition to the regular conditioning inputs. The full WaveNet model and the encoder network e(·) are trained together from scratch. We refer the reader to the Appendix for further architectural details. This approach (SEA-ENC) exhibits the advantage of being trained in a transcript-independent setting given only the input waveform, e(x_demo), and requires negligible computation at adaptation time. However, the learned encoder can also introduce bias when fitting an embedding due to its limited network capacity. As an example, prior work demonstrated a typical scenario whereby speaker identity information can be very quickly extracted with deep models from audio signals. Nonetheless, results show that the model is less capable of effectively leveraging additional training data than approaches based on statistical methods. The linguistic features and fundamental frequencies which are used as inputs contain information specific to an individual speaker. As an example, the average voice pitch in the fundamental frequency sequence is highly speaker-dependent. Instead, we would like these features to be as speaker-independent as possible such that identity is modeled via global conditioning on the speaker embedding. To achieve this, we normalize the fundamental frequency values to have zero mean and unit variance separately for each speaker during training, denoted as f̂_0: f̂_0 = (f_0 − μ_s)/σ_s, where μ_s and σ_s are the per-speaker mean and standard deviation of f_0. As mentioned earlier, at test time, we use an existing model to predict (l, f̂_0). Few-shot learning, in which one can rapidly build models using only a small amount of available data, is one of the most important open challenges in machine learning. Recent studies have attempted to address the problem of few-shot learning by using deep neural networks, and they have shown promising results on classification tasks in vision BID11 BID12 and language. Few-shot learning can also be leveraged in reinforcement learning, such as by imitating human Atari gameplay from a single recorded action sequence BID14 or online video BID15. Meta-learning offers a sound framework for addressing few-shot learning. Here, an expensive learning process results in machines with the ability to learn rapidly from few data. Meta-learning has a long history BID16 BID17, and recent studies include efforts to learn optimization processes BID18 that have been shown to extend naturally to the few-shot setting BID20. An alternative approach is model-agnostic meta learning (MAML) BID21, which differs by using a fixed optimizer and learning a set of base parameters that can be adapted to minimize any task loss by few steps of gradient descent. This method has shown promise in robotics BID22 BID23. In generative modeling, few-shot learning has been addressed from several perspectives, including matching networks BID24 and variational inference for memory addressing BID25. BID26 developed a sequential generative model that extended the Deep Recurrent Attentive Writer (DRAW) model BID27, and BID28 extended PixelCNN with neural attention for few-shot auto-regressive density modeling. BID30 presented a gated linear model able to model complex densities from a single pass of a limited dataset.
Early attempts at few-shot adaptation involved the attention models of BID28 and MAML BID21, but we found both of these strategies failed to learn informative speaker embeddings in our preliminary experiments. There is growing interest in developing neural TTS models that can be trained end-to-end without the need for hand-crafted representations. In this study we focus on extending the autoregressive WaveNet model BID6 to the few-shot learning setting to adapt to speakers that were not presented at training time. Other recent neural TTS models include Tacotron 2, which uses WaveNet as a vocoder to invert mel-spectrograms generated by an attentive sequence-to-sequence model. DeepVoice 2 (building on BID34) introduced a multi-speaker variation of Tacotron that learns a low-dimensional embedding for each speaker, which was further extended in DeepVoice 3 to a 2,400-speaker scenario. Unlike WaveNet and DeepVoice, the Char2Wav BID36 and VoiceLoop models produce WORLD vocoder features BID38 instead of generating raw audio signals. Although many of these systems have produced high-quality samples for speakers present in the training set, generalizing to new speakers given only a few seconds of audio remains a challenge. There have been several concurrent works to address this few-shot learning problem. The VoiceLoop model introduced a novel memory-based architecture that was later extended to few-shot voice style adaptation by introducing an auxiliary fitting network that predicts the embedding of a new speaker. BID40 extended the Tacotron model for one-shot speaker adaptation by conditioning on a speaker embedding vector extracted from a pretrained speaker identity model of BID8. The approach most similar to ours was proposed for the DeepVoice 3 model. They considered both predicting the embedding with an encoding network and fitting the embedding based on a small amount of adaptation data, but the adaptation was applied to a prediction model for mel-spectrograms with a fixed vocoder. In this section, we evaluate the quality of samples from SEA-ALL, SEA-EMB and SEA-ENC. We first measure the naturalness of the generated utterances using the standard Mean Opinion Score (MOS) procedure. Then, we evaluate the similarity of generated and real samples using the subjective MOS test and objectively using a speaker verification system BID8. Finally, we study these results while varying the size of the adaptation dataset.
There are about 4.2 utterances on average per speaker in the test set and the rest in the adaptation set. Second, we consider a subset of the CSTR VCTK corpus BID43 consisting of 21 American English speakers, with approximately 368 utterances and 12 minutes of audio per speaker. We also apply the adaptation/test split with 10 utterances per speaker for test. We emphasize that no data from VCTK was presented to the model at training time. Since our underlying WaveNet model was trained on data largely from LibriSpeech (which was recorded under noisier conditions than VCTK), one might expect that the generated samples on the VCTK dataset contain characteristic artifacts that make generated samples easier to distinguish from real utterances. However, our evaluation using VCTK indicates that our model generalizes effectively and that such artifacts are not detectable. Synthetic utterances are provided on our demo webpage 1.It is worth mentioning, that SEA-ENC requires no adaptation time. Where for SEA-EMB, it takes 5 ∼ 10k optimizing steps to fit the embedding vector, and an additional 100 ∼ 200 steps to fine-tune the entire model using early stopping for SEA-ALL. We measure the quality of the generated samples by conducting a MOS test, whereby subjects are asked to rate the naturalness of generated utterances on a five-point Likert Scale (1: Bad, 2: Poor, 3: Fair, 4: Good, 5: Excellent). Furthermore, we compare with other published few-shot TTS systems systems, that were developed in parallel to this work. However, the literature uses varying combinations of training data and evaluation splits making comparison difficult. The presented are from the closest experimental setups to ours. Table 1 presents MOS for the adaptation models compared to real utterances. Two different adaptation dataset sizes are considered; T = 10 seconds, and T ≤ 5 minutes for LibriSpeech (T ≤ 10 minutes for VCTK). For reference on 16 kHz data, WaveNet trained on a 24-hour production quality speech dataset (van den) achieves a score of 4.21, while for LibriSpeech our best few-shot model attains an MOS score of 4.13 using only 5 minutes of data given a pre-trained multi-speaker model. We note that both fine-tuning models produce overall "good" samples for both the LibriSpeech and VCTK test sets, with SEA-ALL outperforming SEA-EMB in all cases. SEA-ALL is on par 3.41 ± 0.10 3.75 ± 0.09 3.51 ± 0.10 3.97 ± 0.09 SEA-EMB (ours)3.42 ± 0.10 3.56 ± 0.10 3.07 ± 0.10 3.18 ± 0.10 SEA-ENC (ours)2.47 ± 0.09 2.59 ± 0.09 2.07 ± 0.08 2.19 ± 0.09 Table 2: Voice similarity of generated voices using a 5-scale MOS score (higher is better) with 95% confidence interval on the LibriSpeech and VCTK held-out adaptation datasets.with the state-of-the-art performance on both datasets. The addition of extra adaptation data beyond 10 seconds of audio helps performance on LibriSpeech but not VCTK, and the gap between our best model and the real utterance is also wider on VCTK, possibly due to the different recording conditions. Beside naturalness, we also measure the similarity of the generated and real voices. The quality of similarity is the main evaluation metric for the voice adaptation problem. We first follow the experiment setup of BID40 to run a MOS test for a subjective assessment and then use a speaker verification model for objective evaluation in the next section. 
In every trial of this test a subject is presented with a pair of utterances consisting of a real utterance and another real or generated utterance from the same speaker, and is asked to rate the similarity in voice identity using a five-scale score (1: Not at all similar, 2: Slightly similar, 3: Moderately similar, 4: Very similar, 5: Extremely similar). Table 2 shows the MOS for real utterances and all the adaptation models under two adaptation data time settings on both datasets. Again, the SEA-ALL model outperforms the other two models, and the improvement over SEA-EMB scales with the amount of adaptation data. Particularly, the learned voices on the VCTK dataset achieve an average score of 3.97, demonstrating the generalization performance on a different dataset. As a rough comparisson, because of varying training setups, the state of the art system of BID40 achieves scores of 3.03 for LibriSpeech and 2.77 for VCTK when trained on LibriSpeech. Their model computes the embedding based on the d-vector, similar to our SEA-ENC approach, and performs competitively for the one-shot learning setting, but its performance saturates with 5 seconds of adaptation data, as explained in Section 3.2. We note the gap of similarity scores between SEA-ALL and real utterances, which suggests that although the generated samples sound similar to the target speakers, humans can still tell the difference from real utterances. We also apply the state-of-the-art text independent speaker verification (TI-SV) model of BID8 to objectively assess whether the generated samples preserve the acoustic features of the speakers. We calculate the TI-SV d-vector embeddings for generated and real voices. In Figure 3, we visualize the 2-dimensional projection of the d-vectors for a SEA-ALL model trained on T ≤ 5 minutes of data on the LibriSpeech dataset, and T ≤ 10 minutes on VCTK. There are clear clusters on both datasets, with a strikingly large inter-cluster distance and low intra-cluster separation. This shows both an ease of correctly identifying the speaker associated with a given generated utterance, and the difficulty in differentiating real from synthetic samples. A similar figure is presented in BID40, but there the generated and real samples do not overlap. This indicates that the method presented in this paper generates voices that are more indistinguishable from real ones, when measured with the same verification system. In the following subsections, we further analyze these . BID8. The utterances were generated using T ≤ 5 and T ≤ 10 minute samples from LibriSpeech and VCTK respectively. EER is marked with a dot. We first quantify whether generated utterances are attributed to the correct speaker. Following common practice in speaker verification BID8, we select the hold-out test set of real utterances from test speakers as the enrollment set and compute the centroid of the d-vectors for each speaker c i. We then use the adaptation set of test speakers as the verification set. For every verification utterance, we compute the cosine similarity between its d-vector v and a randomly chosen centroid c i. The utterance is accepted as one from speaker i if the similarity is exceeds a given threshold. We repeat the experiments with the same enrollment set and replace the verification set with samples generated by each adaptation method under different data size settings. Figure 5: Cosine similarity of real and generated utterances to the real enrollment set. 
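A minimal sketch of the verification protocol just described is given below: enrollment centroids are formed from held-out real utterances, and a trial utterance is accepted when the cosine similarity between its d-vector and the claimed speaker's centroid exceeds a threshold. The embed function stands in for the pretrained TI-SV model of BID8 and is an assumption of this sketch; sweeping the threshold over real and generated verification sets produces the DET curves and equal error rates reported below.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two d-vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def enroll(embed, enrollment_utts_by_speaker):
    """Average the d-vectors of each speaker's real enrollment utterances into a centroid."""
    return {spk: np.mean([embed(u) for u in utts], axis=0)
            for spk, utts in enrollment_utts_by_speaker.items()}

def verify(embed, centroids, utterance, claimed_speaker, threshold=0.7):
    """Accept the utterance as coming from the claimed speaker if similarity is high enough."""
    score = cosine(embed(utterance), centroids[claimed_speaker])
    return score >= threshold, score
```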
In our setup we fix the enrollment set together with the speaker verification model from BID8, and study the performance of different verification sets that are either from real utterances or generated by a TTS system. TAB4 lists the equal error rate (EER) of the verification model with real and generated verification utterances, and Figure 4 shows the detection error trade-off (DET) curves for a more thorough inspection. Figure 4 only shows the adaptation models with the maximum data size setting (T ≤ 5 minutes for LibriSpeech and ≤ 10 minutes for VCTK). The for other data sizes are provided in Appendix B.We find that SEA-ALL outperforms the other two approaches, and the error rate decreases clearly with the size of demonstration data. Noticeably, the EER of SEA-ALL is even lower than the real utterance on the LibriSpeech dataset with sufficient adaptation data. A possible explanation is that the generated samples might be concentrated closer to the centroid of a speaker's embeddings than real speech with larger variance across utterances. Our SEA-EMB model performs better than SEA-ENC. Additionally, the benefit of more demonstration data is less significant than for SEA-ALL in both of these models. In this section, we compare the generated samples and the real utterances of the speaker being imitated. Figure 5 shows the box-plot of the cosine similarity between the embedding centroids of test speakers' enrollment set and real utterances from the same speaker, real utterances from a different speaker, and generated utterances adapted to the same speaker. Consistent with the observations from the previous subsection, SEA-ALL performs best. We further consider an adversarial scenario for speaker verification. In contrast to the previous standard speaker verification setup where we now select a verification utterance with either a real utterance from the same speaker or a synthetic sample from a model adapted to the same speaker. Under this setup, the speaker verification system is challenged by synthetic samples and acts as a classifier for real versus generated utterances. The ROC curve of this setup is shown in FIG3 and the models are using the maximum data size setting. Other data size settings can be found in Appendix C. If the generated samples are indistinguishable from real utterances, the ROC curve approaches the diagonal line (that is, the verification system fails to separate real and generated voices). Importantly, SEA-ALL manages to confuse the verification system especially for the VCTK dataset where the ROC curve is almost inline with the diagonal line with an AUC of 0.56. This paper studied three variants of meta-learning for sample efficient adaptive TTS. The adaptation method that fine-tunes the entire model, with the speaker embedding vector first optimized, shows impressive performance even with only 10 seconds of audio from new speakers. When adapted with a few minutes of data, our model matches the state-of-the-art performance in sample naturalness. Moreover, it outperforms other recent works in matching the new speaker's voice. We also demon-: ROC curve for real versus generated utterance detection. The utterances were generated using models with 5 and 10 minutes of training data per speaker from LibriSpeech and VCTK respectively. 
Lower curve indicate that the verification system is having a harder time distinguishing real from generated samples.strated that the generated samples achieved a similar level of voice similarity to real utterances from the same speaker, when measured by a text independent speaker verification model. Our paper considers the adaptation to new voices with clean, high-quality training data collected in a controlled environment. The few-shot learning of voices with noisy data is beyond the scope of this paper and remains a challenging open research problem. A requirement for less training data to adapt the model, however, increases the potential for both beneficial and harmful applications of text-to-speech technologies such as the creation of synthesized media. While the requirements for this particular model (including the high-quality training data collected in a controlled environment and equally high quality data from the speakers to which we adapt, as described in Section 5.1) present barriers to misuse, more research must be conducted to mitigate and detect instances of misuse of text-to-speech technologies in general. Our encoding network is illustrated as the summation of two sub-network outputs in FIG4. The first subnetwork is a pre-trained speaker verification model (TI-SV) BID8, comprising 3 LSTM layers and a single linear layer. This model maps a waveform sequence of arbitrary length to a fixed 256-dimensional d-vector with a sliding window, and is trained from approximately 36M utterances from 18K speakers extracted from anonymized voice search logs. On top of this we add a shallow MLP to project the output d-vector to the speaker embedding space. The second sub-network comprises 16 1-D convolutional layers. This network reduces the temporal resolution to 256 ms per frame (for 16 kHz audio), then averages across time and projects into the speaker embedding space. The purpose of this network is to extract residual speaker information present in the demonstration waveforms but not captured by the pre-trained TI-SV model. Here we provide the DET curves of speaker verification problem for models with different training data sizes in addition to those shown in Section 5.4.1. Figure 8: Detection error trade-off (DET) curve for speaker verification, using the TI-SV speaker verification model BID8. The utterances were generated using 1 minute or 10 seconds of utterance from LibriSpeech and VCTK. EER is marked with a dot. We provide the ROC curves of the speaker verification problem with adversarial examples from adaptation models with different training data sizes in addition to those shown in Section 5.4.2.: ROC curve for real vs. generated utterance detection. The utterances were generated using 1 minute or 10 seconds of utterance from LibriSpeech and VCTK. Lower curve suggests harder to distinguish real from generated samples. | Sample efficient algorithms to adapt a text-to-speech model to a new voice style with the state-of-the-art performance. | 1,087 | scitldr |
This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning. We show that even a simple probabilistic model achieves state-of-the-art results on a standard k-shot learning dataset by a large margin. Moreover, it is able to accurately model uncertainty, leading to well-calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning. A child encountering images of helicopters for the first time is able to generalize to instances with radically different appearance from only a handful of labelled examples. This remarkable feat is supported in part by a high-level feature representation of images acquired from past experience. However, it is likely that information about previously learned concepts, such as aeroplanes and vehicles, is also leveraged (e.g. that sets of features like tails and rotors or objects like pilots/drivers are likely to appear in new images). The goal of this paper is to build machine systems for performing k-shot learning, which leverage both existing feature representations of the inputs and existing class information that have both been honed by learning from large amounts of labelled data. K-shot learning has enjoyed a recent resurgence in the academic community BID6 BID5 BID15 BID12. Current state-of-the-art methods use complex deep learning architectures and claim that learning good features for k-shot learning entails training for k-shot classification specifically, via episodic training that simulates many k-shot tasks. In contrast, this paper proposes a general framework based upon the combination of a deep feature extractor, trained on batch classification, and traditional probabilistic modelling. It subsumes two existing approaches in this vein BID1, and is motivated by similar ideas from multi-task learning BID0. The intuition is that deep learning will learn powerful feature representations, whereas probabilistic inference will transfer top-down conceptual information from old classes. Representational learning is driven by the large number of training examples from the original classes, making it amenable to standard deep learning. In contrast, the transfer of conceptual information to the new classes relies on a relatively small number of existing classes and k-shot data points, which means probabilistic inference is appropriate. While generalisation accuracy is often the key objective when training a classifier, calibration is also a fundamental concern in many applications such as decision making for autonomous driving and medicine. Here, calibration refers to the agreement between a classifier's uncertainty and the frequency of its mistakes, which has recently received increased attention. For example, recent work shows that the calibration of deep architectures deteriorates as depth and complexity increase. Calibration is closely related to catastrophic forgetting in continual learning.
However, to our knowledge, uncertainty has so far been over-looked by the k-shot community even though it is high in this setting. Our basic setup mimics that of the motivating example above: a standard deep convolutional neural network (CNN) is trained on a large labelled training set. This learns a rich representation of images at the top hidden layer of the CNN. Accumulated knowledge about classes is embodied in the top layer softmax weights of the network. This information is extracted by training a probabilistic model on these weights. K-shot learning can then 1) use the representation of images provided by the CNN as input to a new softmax function, 2) learn the new softmax weights by combining prior information about their likely form derived from the original dataset with the k-shot likelihood. The main contributions of our paper are: 1) We propose a probabilistic framework for k-shot learning. It combines deep convolutional features with a probabilistic model that treats the top-level weights of a neural network as data, which can be used to regularize the weights at k-shot time in a principled Bayesian fashion. We show that the framework recovers L 2 -regularised logistic regression, with an automatically determined setting of the regularisation parameter, as a special case.2) We show that our approach achieves state-of-the-art on the miniImageNet dataset by a wide margin of roughly 6% for 1-and 5-shot learning. We further show that architectures with better batch classification accuracy also provide features which generalize better at k-shot time. This finding is contrary to the current belief that episodic training is necessary for good performance and puts the success of recent complex deep learning approaches to k-shot learning into context.3) We show on miniImageNet and CIFAR-100 that our framework achieves a good trade-off between classification accuracy and calibration, and it strikes a good balance between learning new classes and forgetting the old ones. K-shot learning task. We consider the following discriminative k-shot learning task: First, we receive a large dataset DISPLAYFORM0 of images u i and labels y i ∈ {1, . . ., C} that indicate which of the C classes each image belongs to. Second, we receive a small dataset D = {u i, y i} N i=1 of C new classes, y i ∈ {C + 1, C + C}, with k images from each new class. Our goal is to construct a model that can leverage the information in D and D to predict well on unseen images u * from the new classes; the performance is evaluated against ground truth labels y *. Summary. In contrast to several recent k-shot learning approaches that mimic the k-shot learning task by episodic training on simulated k-shot tasks, we propose to use the large dataset D to train a powerful feature extractor on batch classification, which can then be used in conjunction with a simple probabilistic model to perform k-shot learning. In 2003, Bakker & Heskes introduced a general probabilistic framework for multi-task learning with multi-head models, in which all parameters of a generic feature extractor are shared between a set of tasks, and only the weights of the top linear layer (the "heads") are task dependent. In the following, we frame k-shot learning in a similar setting and propose a probabilistic framework for k-shot learning in this vein. Our framework comprises four phases that we refer to as 1) representational learning, 2) concept learning, 3) k-shot learning, and 4) k-shot testing, cf. FIG0. 
We then show that, for certain modelling assumptions, the obtained method is equivalent/related to regularised logistic regression with a specific choice for the regularisation parameter. We provide a high-level description of the probabilistic framework and present a more detailed derivation in Appendix A. While it might appear overly formal, the ing scheme will be simple and practical, and the probabilistic phrasing will make it extensible and automatic (no free parameters).Feature extractor and representational learning. We first introduce a convolutional neural network (CNN) Φ ϕ as feature extractor whose last hidden layer activations are mapped to two sets of softmax output units corresponding to the C classes in the large dataset D and the C classes in the small dataset D, respectively. These separate mappings are parametrized by weight matrices W for the old classes and W for the new classes. Denoting the output of the final hidden layer as x = Φ ϕ (u), the first softmax units compute p(y n | x n, W) = softmax(W x n) and the second p(y n | x n, W) = softmax(Wx n), cf. FIG0.For representational learning (phase 1) the large dataset D is used to train the CNN Φ ϕ using standard deep learning optimisation approaches. This involves learning the parameters ϕ of the feature extractor up to the last hidden layer, as well as the softmax weights W. The network parameters ϕ are fixed from this point on and shared across later phases. Probabilistic modelling. The next goal is to build a probabilistic method for k-shot prediction that transfers structure from the trained softmax weights W to the new k-shot softmax weights W and combines it with the k-shot training examples. Thus, given a test image u * during k-shot testing (phase 4), we compute its feature representation x * = Φ(u *), and the prediction for the new label y * is found by averaging the softmax outputs over the posterior distribution of the softmax weights given the two datasets, DISPLAYFORM0 To this end, we consider a general class of probabilistic models in which the two sets of softmax weights are generated from shared hyperparameters θ, so that p(W, W, θ) = p(θ)p(W|θ)p(W|θ) as indicated in the graphical model in FIG0. In this way, the large dataset D contains information about θ that in turn constrains the new softmax weights W. We further assume that there is very little uncertainty in W once the large initial training set is observed and so a maximum a posteriori (MAP) estimate, as returned by standard deep learning, suffices. As a consequence of this approximation and the structure of the model, the original data D are not required for the k-shot learning phase. Instead, the weights learned from these data, W, can themselves be treated as observed data, which induce a predictive distribution over the k-shot weights p(W| W MAP) via Bayes' rule. This argument is fully explained in Appendix A. We refer to this step as concept learning (phase 2) and note that all probabilistic modelling happens in the definition of p(W, W, θ), (see Secs. 2.2 and 2.3).During k-shot learning (phase 3) we treat this predictive distribution as our new prior on the weights and again use Bayes' rule to combine it with the softmax likelihood of the k-shot training examples D to obtain a new posterior over the weights that now also incorporates D, DISPLAYFORM0 Finally, we approximate Eq. by its MAP estimate W, so that the integral in Eq. 
becomes DISPLAYFORM0 The probabilistic model over the weights is key: a good model will transfer useful knowledge that improves performance. However, the usual trade-off between model complexity and learnability is particularly egregious in our setting as the weights W are few and high-dimensional and the number of k-shot samples is small. With an eye on simplicity, we make two simplifying assumptions. First, treating the weights from the hidden layer to the softmax outputs as a vector, we assume independence. Second, we assume the distribution between the weights of old and new classes to be identical, DISPLAYFORM0 After extensive testing, we found that a Gaussian model for the weights strikes the best compromise in the trade-off between complexity and learnability, cf. Sec. 4.2 for a detailed model comparison. Our method. We use a simple Gaussian model p(w|θ) = N (w|µ, Σ) with its conjugate Normal- DISPLAYFORM0 DISPLAYFORM1, and the posterior at k-shot time becomes DISPLAYFORM2 For details see Appendix C.1. For k-shot testing we use the MAP estimates for the weights of the new classes. We found that restricting the covariance matrix to be isotropic, Σ = σ 2 I, performed best at k-shot learning, probably due to the small number of data points to learn from as mentioned above. Relation to logistic regression. Standard logistic regression corresponds to the maximum likelihood (MLE) solution of the softmax likelihood p(y n | x n, W) = softmax(Wx n). Often, L 2 -regularisation on the weights W with inverse regularisation strength 1/C reg is used; the solution to this regularised optimisation problem corresponds to the MAP solution of a model with isotropic Gaussian prior on the weights with zero mean: DISPLAYFORM3 This method is analogous to Eq.. However, the probabilistic framework has several advantages: i) modelling assumptions and approximations are made explicit, ii) it is strictly more general and can incorporate non-zero means µ MAP, whereas standard regularised logistic regression assumes zero mean, and iii) the probabilistic interpretation provides a principled way of choosing the regularisation constant using the trained weights W: C reg = 2σ Embedding methods map the k-shot training and test points into a non-linear space and perform classification by assessing which training points are closest, according to a metric, to the test points. Siamese Networks BID5 train the embedding using a same/different prediction task derived from the original dataset and use a weighted L 1 metric for classification. Matching Networks BID15 construct a set of k-shot learning tasks from the original dataset to train an embedding defined through an attention mechanism that linearly combines training labels weighted by their proximity to test points. More recently, Prototypical Networks BID12 are a streamlined version of Matching Networks in which embedded classes are summarised by their mean in the embedding space. These embedding methods learn representations for k-shot learning, but do not directly leverage concept transfer. Amortised optimisation methods ) also simulate related k-shot learning tasks from the initial dataset, but instead train a second network to initialise and optimise a CNN to perform accurate classification on these small datasets. This method can then be applied for new k-shot tasks. 
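The display equations for the Gaussian concept model and the k-shot posterior above were lost in extraction; the block below gives a hedged, simplified reconstruction (isotropic covariance, MAP point estimates) that is consistent with the surrounding text but uses our own notation.

```latex
% Hedged reconstruction of the missing display equations.
% Gaussian concept model fitted to the columns of the trained weights W~:
p(\mathbf{w} \mid \boldsymbol{\theta}) = \mathcal{N}(\mathbf{w} \mid \boldsymbol{\mu}, \sigma^2 \mathbf{I}),
\qquad
\boldsymbol{\mu}_{\mathrm{MAP}},\, \sigma^2_{\tilde{W}}
  \;\text{estimated from}\; \{\tilde{\mathbf{w}}_c\}_{c=1}^{\tilde{C}}.

% MAP objective for the new softmax weights W given the k-shot data D:
W_{\mathrm{MAP}} = \arg\max_{W}
  \sum_{n=1}^{N} \log \operatorname{softmax}(W \mathbf{x}_n)_{y_n}
  \;-\; \frac{1}{2\sigma^2_{\tilde{W}}} \,\lVert W - \boldsymbol{\mu}_{\mathrm{MAP}} \rVert_2^2.

% With mu_MAP = 0 this reduces to L2-regularised logistic regression with
% inverse regularisation strength 1/C_reg and C_reg = 2 * sigma^2_{W~}.
```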
Importantly, both embedding and amortised inference methods improve when the system is trained for a specific k-shot task: to perform well in 5-shot learning, training is carried out with episodes containing 5 examples in each class. The general statement appears to be that training specifically for k-shot learning is essential for building features which generalise well at k-shot testing time. The approach proposed in this paper is more flexible; it is not tailored for a specific k and, thus, does not require retraining when switching, e.g., from 5-shot to 10-shot learning. Moreover, BID12 find that using a larger number of k-shot classes for the training episodes (e.g., train with 20 k-shot classes per episode when testing on only 5 new k-shot classes) can be beneficial, and they choose this number by cross-validation on a validation-set. This is in alignment with our finding that training with more data and more classes improves performance at k-shot time. Deep probabilistic methods include the approach developed in this paper. The methods in this family are not unique to deep learning, and the idea of treating weights as data from which to transfer has been widely applied in multi-task learning BID0. The work most closely related to our own is not an approach to k-shot learning per se, but rather a method for training CNNs with highly imbalanced classes. It is similar in that it trains a form of Gaussian mixture model over the final layer weights using MAP inference that regularises learning. BID1 propose an elegant approach to k-shot learning that is an instance of the framework described here: a Gaussian model is fit to the weights with MAP inference. The evaluation is promising, but preliminary. One of the goals of this paper is to provide a comprehensive evaluation. While not using a probabilistic approach, develop a method for k-shot learning that trains a recognition model to amortise MAP inference for the softmax weights which can then be used at k-shot learning time. While this method trains the mapping from activation to weights jointly with the classifier, and thus does not learn from the weights per se, it does exploit the structure in the weights for k-shot learning. The code used to produce the following experiments will be made available after review. Dataset. miniImageNet has become a standard testbed for k-shot learning and is derived from the ImageNet ILSVRC12 dataset BID9 by extracting 100 out of the 1000 classes. Each class contains 600 images downscaled to 84 × 84 pixels. We use the 100 classes (64 train, 16 validation, 20 test) proposed by. As our approach does not require a validation set, we use both the training and validation data for the representational learning. Representational learning. We employ standard CNNs that are inspired by ResNet-34 and VGG BID11 for the representational learning on the C base classes, cf. Phase 1 in Sec. 2.1. These trained networks provide both W MAP and the fixed feature representation Φ ϕ for the k-shot learning and testing. We employed standard data augmentation from ImageNet for the representational learning but highlight that no data augmentation was used during the k-shot training and testing. For details on the architecture, training, and data augmentation see Appendix D.4. t-SNE embeddings BID14 of the learned last layer weights show sensible clusters, which highlights the structure exploited by the probabilistic model, see Appendix E.1.Baselines and competing methods. 
We compare against several baselines as well as recent stateof-the-art methods mentioned in Sec. 3. The baselines are computed on the features x = Φ φ (u) from the last hidden layer of the trained CNN: (i) Nearest Neighbours with cosine distance and (ii) regularized logistic regression with regularisation constant set either by cross-validation or (iii) using the variance of the weights, C = 2σ Testing protocol. We evaluate the methods on 600 random k-shot tasks by randomly sampling 5 classes from the 20 test classes and perform 5-way k-shot learning. Following BID12, we use 15 randomly selected images per class for k-shot testing to compute accuracies and calibration. Overall k-shot performance. We report performance on the miniImageNet dataset in Tab. 1 and Figs. 2 and 3. The best method uses as feature extractor a modified ResNet-34 with 256 features, DISPLAYFORM0 ResNet-34 + Isotropic Gaussian (ours) 56.3 ± 0.4% 73.9 ± 0.3% 78.5 ± 0.3% Matching Networks (reimplemented, 1-shot) 46.8 ± 0.5% --Matching Networks (reimplemented, 5-shot) -62.7 ± 0.5% -Meta-Learner LSTM 43.4 ± 0.8% 60.6 ± 0.7% -Prototypical Nets (1-shot) BID12 49.4 ± 0.8% 65.4 ± 0.7% -Prototypical Nets (5-shot) BID12 45.1 ± 0.8% 68.2 ± 0.7% - Table 1: Accuracy on 5-way classification on miniImageNet. Our best method, an isotropic Gaussian model using ResNet-34 features consistently outperforms all competing methods by a wide margin. trained with all 600 examples per training class, and a simple isotropic Gaussian model on the weights for concept learning. Despite its simplicity, our method achieves state-of-the-art and beats prototypical networks by a wide margin of about 6%. The baseline methods using the same feature extractor are also state-of-the-art compared to prototypical networks and both logistic regressions show comparable accuracy to our methods except for on 1-shot learning. In terms of log-likelihoods, Log Reg (C = 2σ 2 W) fares slightly better, whereas Log Reg (cv) is much worse. Deeper features lead to better k-shot learning. We investigate the influence of different feature extractors of increasing complexity on performance in Fig. 3: i) a VGG style network (500 train images per class), ii) a ResNet-34 (500 examples per class), and iii) a ResNet-34 (all 600 examples per class). We find that the complexity of the feature extractor as well as training set size consistently correlate with the accuracy at k-shot time. For instance, on 5-shot, Gauss (iso) achieves 65% accuracy with a VGG network and 74% with a ResNet trained with all available data, a significant increase of almost 10%. Moreover, Gauss (iso) outperforms Log Reg (C = 2σ 2 W) on 1-shot learning across models, and performs similarly on 5-and 10-shot. We attribute the difference to the former's ability to also model the mean of the Gaussian, whereas logistic regression assumes a zero mean. Importantly, this implies that training specifically for k-shot learning is not necessary for achieving high generalisation performance on this k-shot problem. On the contrary, training a powerful deep feature extractor on batch classification using all of the available training data, then building a simple probabilistic model using the learned features and weights achieves state-of-the-art. Recent models that use episodic training cannot leverage such deep feature extractors as for them the depth of the model is limited by the nature of training itself. 
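A minimal sketch of the testing protocol described above (600 random 5-way episodes with 15 query images per class) is given below; features, labels and the fit_and_predict classifier are placeholders for the pre-computed CNN features and for any of the k-shot methods compared here.

```python
import numpy as np

def evaluate_kshot(features, labels, fit_and_predict, k=5, n_way=5,
                   n_query=15, n_episodes=600, seed=0):
    """Average accuracy over random n_way k-shot episodes on the test classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    accs = []
    for _ in range(n_episodes):
        episode_classes = rng.choice(classes, size=n_way, replace=False)
        support_x, support_y, query_x, query_y = [], [], [], []
        for new_label, c in enumerate(episode_classes):
            idx = rng.permutation(np.where(labels == c)[0])
            support_x.append(features[idx[:k]]);          support_y += [new_label] * k
            query_x.append(features[idx[k:k + n_query]]); query_y += [new_label] * n_query
        preds = fit_and_predict(np.concatenate(support_x), np.array(support_y),
                                np.concatenate(query_x))
        accs.append(np.mean(preds == np.array(query_y)))
    # Mean accuracy with an approximate 95% confidence interval over episodes.
    return np.mean(accs), 1.96 * np.std(accs) / np.sqrt(n_episodes)
```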
The reference baseline in the k-shot learning literature is nearest neighbours, which performs on par with Gauss (iso) on 1-shot learning but is outperformed by all methods on 5- and 10-shot. This is evidence that building a simple classifier on top of the learned features works significantly better for k-shot learning than nearest neighbours. Calibration. A classifier is said to be calibrated when the probability it predicts for belonging to a given class is on par with the probability of it being the correct prediction. In other words, examples for which it predicts a probability p of belonging to a given class are correctly classified for a fraction p of the examples. A calibration curve visualises the proportion of examples correctly classified as a function of their predicted probability; a perfectly calibrated classifier should result in a diagonal line. Following prior work, we consider the log likelihood on the k-shot test examples as well as Expected Calibration Error (ECE) as summary measures of calibration. ECE can be interpreted as the weighted average of the distance of the calibration curve to the diagonal; ECE plots are provided in Appendix E.3. We find that Log Reg (C = 2σ²_W) and Gauss (iso) provide better accuracy and calibration than Log Reg (cross-validation), cf. FIG3. The difference in calibration quality for different regularisations of logistic regression highlights the importance of choosing the right constant, as we discuss now. Choice of the regularisation constant for logistic regression. The results so far suggest that training a simple linear model such as regularised logistic regression might be sufficient to perform well in k-shot learning. However, while the accuracy at k-shot time does not vary dramatically as the regularisation constant changes, the calibration does, and jointly maximizing both quantities is not possible, cf. the first two plots of Fig. 4. The standard (frequentist) method to tune this constant is cross validation, which is not applicable in the 1-shot setting, and suffers from lack of data in 5- and 10-shot. In contrast, our probabilistic framework provides a principled way of selecting this regularisation parameter by transfer from the training weights: Log Reg (C = 2σ²_W) strikes a good balance between accuracy and log-likelihood. The third plot in Fig. 4 reports log-likelihood as a function of accuracy and provides further visualisation of the achieved trade-off between accuracy and calibration for Log Reg (C = 2σ²_W), as well as the failure of Log Reg (cross-validation) to achieve a good compromise in 5- and 10-shot. Evaluation in an online setting. We also briefly consider the online setting, in which we jointly test on 80 old and 5 new classes, for which catastrophic forgetting BID2 is a well known problem. During k-shot learning and testing we employ a softmax which includes both the new and the old weights, resulting in a total of 85 weight vectors. We utilise the ResNet-34 trained on 500 images per class so as to retain 100 test images for the old classes. While the k-shot weights were modelled probabilistically, we use the MAP estimate for the old weights. Our regularised approaches only lose a couple of percent on the accuracy of the old classes, and perform well on the new classes, striking a good trade-off between forgetting and learning at k-shot time. For unregularised (MLE) logistic regression, the new weights completely dominate the old ones, highlighting that the right regularisation is important. Yet, cross-validation in this setting is often very challenging.
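As a concrete illustration of the Expected Calibration Error used above, the sketch below bins predictions by confidence and averages the accuracy-confidence gap with bin weights; the choice of 15 equal-width bins is an assumption of this sketch.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) predicted class probabilities; labels: (N,) true class indices."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    correct = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Gap between average accuracy and average confidence, weighted by bin size.
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```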
When training Logistic Regression without including the old weights ("only new"), the new weights are dominated by the old ones and fail to learn the new classes, making training in the presence of the old weights an essential component for online learning. We performed an extensive comparison between different probabilistic models of the weights using different inference procedures, which we present in Appendix E.4. We report on the CIFAR-100 dataset on (i) Gaussian, (ii) mixture of Gaussians, and (iii) Laplace, all with either MAP estimation or Hybrid Monte Carlo sampling. We found that the simple Gaussian model is on par with or outperforms other methods at k-shot time, which we attribute to it striking a good balance between choosing a complex model, which may better fit the weights, and statistical efficiency, as the number of weights C (80 in our case) is often smaller than the dimensionality of the feature representation (256 in our case), cf. Sec. 2. This finding is supported by computing the log-likelihood of held out training weights under such model, with the Gaussian model performing best. Experiments using Hybrid Monte Carlo sampling for k-shot learning returned very similar performance to MAP estimation and at a much higher computational cost, due to the difficulty of performing sampling in such a high dimensional parameter space. Our recommendation is that practitioners should use simple models and employ simple inference schemes to estimate all free parameters thereby avoiding expending valuable data on validation sets. We present a probabilistic framework for k-shot learning that exploits the powerful features and class information learned by a neural network on a large training dataset. Probabilistic models are then used to transfer information in the network weights to new classes. Experiments on miniImageNet using a simple Gaussian model within our framework achieve state-of-the-art for 1-shot and 5-shot learning by a wide margin, and at the same time return well calibrated predictions. This finding is contrary to the current belief that episodic training is necessary to learn good k-shot features and puts the success of recent complex deep learning approaches to k-shot learning into context. The new approach is flexible and extensible, being applicable to general discriminative models and kshot learning paradigms. For example, preliminary on online k-shot learning indicate that the probabilistic framework mitigates catastrophic forgetting by automatically balancing performance on the new and old classes. The Gaussian model is closely related to regularised logistic regression, but provides a principled and fully automatic way to regularise. This is particularly important in k-shot learning, as it is a low-data regime, in which cross-validation performs poorly and where it is important to train on all available data, rather than using validation sets. Appendix to "Discriminative k-shot learning using probabilistic models"A DETAILS ON THE DERIVATION AND APPROXIMATIONS FROM SEC. 2.1As stated in the main text, the probabilistic k-shot learning approach comprises four phases mirroring the dataflow:Phase 1: Representational learning. The large dataset D is used to train the CNN Φ ϕ using standard deep learning optimisation approaches. This involves learning both the parameters ϕ of the feature extractor up to the last hidden layer, as well as the softmax weights W. The network parameters ϕ are fixed from this point on and shared across phases. 
This is a standard setup for multitask learning and in the present case it ensures that the features derived from the representational learning can be leveraged for k-shot learning. Phase 2: Concept learning. The softmax weights W are effectively used as data for concept learning by training a probabilistic model that detects structure in these weights which can be transferred for k-shot learning. This approach will be justified in the next section. For the moment, we consider a general class of probabilistic models in which the two sets of weights are generated from shared hyperparameters θ, so that p(W, W, θ) = p(θ)p(W|θ)p(W|θ) (see FIG0).Phases 3 and 4: k-shot learning and testing. Probabilistic k-shot learning leverages the learned representation Φ ϕ from phase 1 and the probabilistic model p(W, W, θ) from phase 2 to build a (posterior) predictive model for unseen new examples using examples from the small dataset D. Given the dataflow and the assumed probabilistic model in FIG0, a completely probabilistic approach would involve the following steps. In the concept learning phase, the initial dataset would be used to form the posterior distribution over the concept hyperparameters p(θ | D). The k-shot learning phase combines the information about the new weights provided by D with the information in the k-shot dataset D to form the posterior distribution DISPLAYFORM0 To see this, notice that DISPLAYFORM1 The graphical model in FIG0 entails that D is conditionally independent from D given W, such that DISPLAYFORM2 We recover Eq. by adding p(D) to the constant of proportionality. Inference in this model is generally intractable and requires approximations. The main challenge is computing the posterior distribution over the hyper-parameters given the initial dataset. However, progress can be made if we assume that the posterior distribution over the weights can be well approximated by the MAP value p(W | D) ≈ δ(W − W MAP). This is an arguably justifiable assumption as the initial dataset is large and so the posterior will concentrate on narrow modes (with similar predictive performance). In this case p(θ | D) ≈ p(θ | W MAP) and, due to the structure of the probabilistic model, all instances of D in Eq. and Eq. can be replaced by the analogous expressions involving W. This greatly simplifies the learning pipeline as the probabilistic modelling only needs to have access to the weights returned by representational learning. Remaining intractabilities involve only a small number of data points D and can be handled using standard approximate inference tools. The following summarizes the approximations and computational steps for each phase of training. Phase 4: k-shot testing. Approximate inference is used to compute p(y DISPLAYFORM0 In this section we briefly discuss different inference methods for the probabilistic models. In the main text we only considered MAP inference as we found that other more complicated inference schemes do not yield a practical benefit. However, in Appendix E.4 we provide a detailed model comparison, in which we also consider other approximate inference methods. In all cases the gradients of the densities w.r.t. W can be computed, enabling MAP inference in the k-shot learning phase to be efficiently performed via gradient-based optimisation using L-BFGS BID24 . Alternatively, Markov Chain Monte Carlo (MCMC) sampling can be performed to approximate the associated integral, see Eq.. 
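A minimal sketch of the MAP k-shot learning step described here, assuming an isotropic Gaussian concept model fitted to the trained softmax weights and using SciPy's L-BFGS with numerical gradients (a real implementation would use analytic gradients and also handle biases); function and variable names are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def fit_kshot_weights_map(train_weights, support_x, support_y, n_way):
    """MAP estimate of new softmax weights under a Gaussian prior on weights.

    train_weights: (C_train, d) softmax weights W from representational learning.
    support_x: (n, d) frozen features of the k-shot support set.
    support_y: (n,) labels in {0, ..., n_way - 1}.
    """
    mu = train_weights.mean(axis=0)      # prior mean from the training weights
    sigma2 = train_weights.var()         # isotropic prior variance ("Gauss (iso)")
    d = support_x.shape[1]

    def neg_log_posterior(flat_w):
        w = flat_w.reshape(n_way, d)
        logits = support_x @ w.T
        logits -= logits.max(axis=1, keepdims=True)          # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        nll = -log_probs[np.arange(len(support_y)), support_y].sum()
        log_prior = -0.5 * ((w - mu) ** 2).sum() / sigma2
        return nll - log_prior

    w0 = np.tile(mu, (n_way, 1)).ravel()
    res = minimize(neg_log_posterior, w0, method="L-BFGS-B")
    return res.x.reshape(n_way, d)
```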
Due to the high dimensionality of the space and as gradients are available, we employ Hybrid Monte Carlo (HMC) BID27 sampling in the form of the recently proposed NUTS sampler that automatically tunes the HMC parameters (step size and number of leapfrog steps) BID20. For the GMMs we employed pymc3 BID29 to perform MAP inference. As discussed in Sec. 2.1, we specify our model through p(W, W, θ) thus defining p(W | W M AP) in Eq.. This section analyses different priors on the weights: (i) Gaussian models, (ii) Gaussian mixture models, and (iii) Laplace distribution. In the main paper, we only use a Gaussian model with MAP inference, as we saw no significant advantage in using other, more complex models. However, we provide an extensive comparison of the different models in Appendix E.4. Possibly the simplest approach consists of modelling p(W | W) as a Gaussian distribution: DISPLAYFORM0 Details for this section can be found in BID26. The normal-inverse-Wishart distribution for µ and Σ is a conjugate prior for the Gaussian, which allows for the posterior to be written in closed form. More precisely, DISPLAYFORM1 where Z is the normalising constant. The posterior p(µ, Σ | W) also follows a normal-inverse-Wishart distribution: DISPLAYFORM2 where DISPLAYFORM3 and S is the sample covariance of W.For this model, we can integrate in closed form, which in the following multivariate Student t-distribution: DISPLAYFORM4.As with other approaches, one can also compute the MAP solutions for the mean µ MAP and covariance DISPLAYFORM5 For both the analytic posterior and the MAP approximation, p(W | W) depends on the hyperparameters of the normal-inverse-Wishart distribution: µ 0, ν 0, κ 0 and Λ 0. There are different ways to choose these hyperparameters. One way would be by optimising the log probability of held out training weights, see Appendix E.4 for a brief discussion. In practise, it is common to choose uninformative or data dependent priors as discussed by Murphy (2012, Chapter 4). A Gaussian mixture model can potentially leverage cluster structure in the weights (animal classes might have similar weights, for example). This is related to the tree-based prior proposed in. MAP inference is performed because exact inference is intractable. Similarly to the Gaussian case, different structures for the covariance of each cluster were tested. In our experiments, we fit the parameters of the GMM via maximum likelihood using the EM algorithm. GMM consists on modelling p(W | W) as a mixture of Gaussians with S components: DISPLAYFORM0 where S s=1 π s = 1. In this work, we only compute the MAP mean and covariance for each of the clusters, as opposed to averaging over the parameters of the mixture. The ing posterior is DISPLAYFORM1 The components of the mixture are fit in two ways. For CIFAR-100, the classes are grouped into 20 superclasses, each containing 5 of the 100 classes. One option is therefore to initialize 20 components, each fit with the data points in the corresponding superclass. For each such individual Gaussian, the MAP inference method presented in the previous section can be used. In order to increase the number of weight examples in each superclass, we merge the original superclasses into 9 larger superclasses. 
The merging of the superclasses is the following:• Aquatic mammals + fish• flowers + fruit and vegetables + trees• insects + non-insect invertebrates + reptiles• medium-sized mammals + small mammals• large carnivores + large omnivores and herbivores• people• large man-made outdoor things + large natural outdoor things• food containers + household electrical devices + household furniture• Vehicles 1 + Vehicles 2.The parameters of the mixture can also be fit using maximum likelihood with EM. We use the implementation of EM in scikit-learn. Both 3 and 10 clusters are considered in CIFAR-100. Weight log-likelihoods under this model and k-shot performance can be found in Appendix E.4.Note that, similarly to the Gaussian model, we consider isotropic, diagonal or full covariance models for the covariance matrices. Sparsity is an attractive feature which could be helpful for modelling the weights. Indeed, it is reasonable to assume that each class uses a set of characteristic features which drive classification accuracy, while others are irrelevant. Sparse models would then provide sensible regularization. As such, we consider a product of independent Laplace distribution. Sec. 2.3 highlights the relation between a Gaussian prior on the weights and L 2 regularised logistic regression. One can similarly show that the Laplace prior is related to L 1 regularised logistic regression, which is well known for encouraging sparse weight vectors. We consider a prior which factors along the feature dimensions: DISPLAYFORM0 where the product over j is along the feature dimensions and the sum over i is across the classes. We fit the parameters µ and λ via maximum likelihood: DISPLAYFORM1 An isotropic Laplace model with mean µ and scale λ is also considered: DISPLAYFORM2 where DISPLAYFORM3 To construct miniImageNet we use the same classes as initially proposed by and used in BID12, which is split into 64 training classes (cf. Tab. 2), 16 validation classes (cf. Tab. 3), and 20 test classes (cf. Tab. 4). We will make a full list of image files available. As we do not require a validation set, we combine the training and validation set to form an extended training set. We extract 600 images per class from the ImageNet 2012 Challange dataset BID23, scale the shorter side to 84 pixels and then centrally crop to 84 × 84 pixels, that is, we preserve the original aspect ratio of the image content. We use these coloured 84 × 84 × 3 images as input for representational and k-shot learning and testing. In order to train very deep models, such as a ResNet, we need to perform data augmentation as is the case when training full ImageNet. We use the following standard data augmentation from ImageNet that we adapt to the size of the input images:• random horizontal flipping • randomly paste image into 100 × 100 frame and cut out central 84 × 84 pixels • randomly change brightness, contrast, saturation and lightingWe highlight that we do not perform any data augmentation for the k-shot learning and k-shot testing but use the original 84 × 84 colour images as input to the feature extractor. CIFAR-100 consists of 100 classes each with 500 training and 100 test images of size 32 × 32.The classes are grouped into 20 superclasses with 5 classes each. For example, the superclass "fish" contains the classes aquarium fish, flatfish, ray, shark, and trout. Unless otherwise stated, we used a random split into 80 base classes and 20 k-shot learning classes. 
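Returning to the Laplace prior described above, its maximum-likelihood fit is particularly simple: the per-dimension location is the median of the training weights and the scale is the mean absolute deviation around it. The sketch below is an illustrative NumPy version under those assumptions.

```python
import numpy as np

def fit_laplace_prior(train_weights):
    """Per-dimension Laplace prior fitted to the training softmax weights.

    train_weights: (C_train, d). Returns (mu, lam), each of shape (d,).
    For a Laplace distribution the maximum-likelihood location is the median
    and the scale is the mean absolute deviation around it.
    """
    mu = np.median(train_weights, axis=0)
    lam = np.abs(train_weights - mu).mean(axis=0)
    return mu, lam

def laplace_log_prior(w, mu, lam):
    """Log density of a weight vector w under the factorised Laplace prior."""
    return -np.sum(np.log(2.0 * lam) + np.abs(w - mu) / lam)
```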
For k-shot learning and testing, we split the 100 classes into 80 base classes used for network training and 20 k-shot learning classes., 2, 3, 4, 5, 6, 7, 9, 10, 13, 14, 15, 16, 17, 18, 19, 21, 22, 24, 25, 27, 28, 32, 34, 35, 36, 38, 40, 42, 43, 44, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 69, 70, 73, 74, 75, 76, 77, 78, 79, 80, 82, 83, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99 ] classes_heldout = We provide an exhaustive comparison of different probabilistic models for this k-shot learning task in Appendix E.4. DISPLAYFORM0 The network architecture is inspired by the ResNet-34 architecture for ImageNet ) that uses convolution blocks, with two convolutions each, that are bridged by skip connections. As a base, we utilise the example code 2 provided by tensorpack (https://github.com/ppwwyyxx/tensorpack), a neural network training library built on top of tensorflow (Martín BID25 . We adapt the number of features as well as the size of the last fully connected layer to account for the smaller number of training samples and training classes. The final architecture is detailed in Tab. 5. n03400231 frying pan, frypan, skillet n02108551 Tibetan mastiff n02687172 aircraft carrier, carrier, flattop, attack aircraft carrier n04296562 stage n13133613 ear, spike, capitulum n02165456 ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle n03337140 file, file cabinet, filing cabinet n02966193 carousel, carrousel, merry-go-round, roundabout, whirligig n02074367 dugong, Dugong dugon n02105505 komondor n04389033 tank, army tank, armored combat vehicle, armoured combat vehicle n09246464 cliff, drop, drop-off n03924679 photocopier n03527444 holster n04612504 yawl n01749939 green mamba n04251144 snorkel n03347037 fire screen, fireguard n04067472 reel n03998194 prayer rug, prayer mat n13054560 bolete n02747177 ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin n04435653 tile roof n02108089 boxer n03908618 pencil box, pencil case n01770081 harvestman, daddy longlegs, Phalangium opilio n03676483 lipstick, lip rouge n03220513 dome n04515003 upright, upright piano n04258138 solar dish, solar collector, solar furnace n04509417 unicycle, monocycle n01704323 triceratops n04443257 tobacco shop, tobacconist shop, tobacconist n02089867 Walker hound, Walker foxhound n01910747 jellyfish n02111277 Newfoundland, Newfoundland dog n04243546 slot, one-armed bandit n01558993 robin, American robin, Turdus migratorius n03047690 clog, geta, patten, sabot n03854065 organ, pipe organ n03476684 hair slide n02113712 miniature poodle n07747607 orange n03838899 oboe, hautboy, hautbois n07584110 consomme n02795169 barrel, cask n03017168 chime, bell, gong n04275548 spider web, spider's web n04604644 worm fence, snake fence, snake-rail fence, Virginia fence n02606052 rock beauty, Holocanthus tricolor n01843383 toucan n02457408 three-toed sloth, ai, Bradypus tridactylus n03062245 cocktail shaker n03207743 dishrag, dishcloth n02108915 French bulldog n06794110 street sign n02823428 beer bottle n03888605 parallel bars, bars n04596742 wok n02091831 Saluki, gazelle hound n02101006 Gordon setter n02120079 Arctic fox, white fox, Alopex lagopus n01532829 house finch, linnet, Carpodacus mexicanus n07697537 hotdog, hot dog, red hot Table 2 : Training classes for miniImageNet as proposed by n03075370 combination lock n02971356 carton n03980874 poncho n02114548 white wolf, Arctic wolf, Canis lupus tundrarum n03535780 horizontal bar, high 
bar n03584254 iPod n02981792 catamaran n03417042 garbage truck, dustcart n03770439 miniskirt, mini n02091244 Ibizan hound, Ibizan Podenco n02174001 rhinoceros beetle n09256479 coral reef n02950826 cannon n01855672 goose n02138441 meerkat, mierkat n03773504 missiles Table 3 : Validation classes for miniImageNet as proposed by n02116738 African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus n02110063 malamute, malemute, Alaskan malamute n02443484 black-footed ferret, ferret, Mustela nigripes n03146219 cuirass n03775546 mixing bowl n03544143 hourglass n04149813 scoreboard n03127925 crate n04418357 theater curtain, theatre curtain n02099601 golden retriever n02219486 ant, emmet, pismire n03272010 electric guitar n04146614 school bus n02129165 lion, king of beasts, Panthera leo n04522168 vase n07613480 trifle n02871525 bookshop, bookstore, bookstall n01981276 king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica n02110341 dalmatian, coach dog, carriage dog n01930112 nematode, nematode worm, roundworm Table 5 : Network architecture. All unnamed layers are 2D convolutions with stated kernel size and padding SAME; the output of the shaded layer corresponds to Φ ϕ (u), the feature space representation of the image u, which is used as input for probabilistic k-shot learning. The network is trained using a decaying learning rate schedule and momentum SGD and is implemented in tensorpack using tensorflow. VGG-style Network for CIFAR-100 Output size Layers DISPLAYFORM0 FullyConnected, ELU C FullyConnected, SoftMax VGG-style Network for miniImageNet Output size Layers DISPLAYFORM1 FullyConnected, ELU C FullyConnected, SoftMax Table 6: Network architectures. All 2D convolutions have kernel size 3 × 3 and padding SAME; max-pooling is performed with stride 2. The output of the shaded layer corresponds to Φ ϕ (u), the feature space representation of the image u, which is used as input for probabilistic k-shot learningThe network architecture was inspired by the VGG networks , but does not employ batch normalisation. To speed up training, we employ exponential linear units (ELUs), which have been reported to lead to faster convergence as compared to ordinary ReLUs BID17 To regularise the networks, we employ dropout BID32 and regularisation of the weights in the fully connected layers. The networks are trained with the ADAM optimiser BID22 with decaying learning rate. The network is implemented in tensorpack using tensorflow. Figure 6: t-SNE embedding of the CIFAR-100 weights W trained using a VGG style architecture. The points are coloured according to their respective superclass. The colouring by superclass makes the structure in the weights evident, as t-SNE overall recovers the structure in the dataset. For instance, oak tree, palm tree, willow tree and pine tree form a cluster on the bottom right. This structure motivates our approach, as the training weights contain information which may be useful at k-shot time, for instance given a few example from chestnut trees. Structure is still present and we observe meaningful patterns, even though the classes in miniImageNet are more unique than in CIFAR-100. For instance, goose, house finch, toucan, Arctic fox, green mamba and other animals are clustered on the top, with birds close to each other. Examples of other small clusters include poncho and miniskirt, or organ and oboe. For readability, not all class names are plotted. 
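The qualitative weight-space structure described in these figure captions can be reproduced with an off-the-shelf t-SNE applied to the rows of the trained softmax weight matrix; the snippet below is a hypothetical sketch using scikit-learn, with the perplexity value chosen arbitrarily.

```python
from sklearn.manifold import TSNE

def embed_weights_2d(softmax_weights, perplexity=10, seed=0):
    """2-D t-SNE embedding of the per-class softmax weight vectors.

    softmax_weights: (C_train, d) array, one row per training class.
    Returns (C_train, 2) coordinates; colouring them by superclass reproduces
    the qualitative clustering described in the figure captions.
    """
    tsne = TSNE(n_components=2, perplexity=perplexity, random_state=seed)
    return tsne.fit_transform(softmax_weights)
```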
We provide t-SNE embeddings for the weights of a VGG network trained in CIFAR-100 and a ResNet-34 trained on miniImageNet. A structure in the weights is apparent and provides motivation for our framework. The can be seen in FIG6 E.2 EXTENDED ON miniIMAGENET Fig. 8 provides extended on k-shot learning for the miniImageNet dataset for different network architectures. We investigate the influence of different feature extractors of increasing complexity and training data size on performance on: i) a VGG style network trained on 500 images per class, ii) a ResNet-34 trained on 500 examples per class, and iii) a ResNet-34 trained on all 600 examples per class. ResNet-34 trained with 500 images per class; bottom: a VGG style network trained with 500 images per class. We highlight that for all three architectures the order of the different methods as well as the main messages are the same. However, the general performance in terms of accuracy and calibration differ between the architectures. The more complex architecture trained on most images performs best in terms of accuracy, indicating that it learns better features for k-shot learning. Both ResNets behave very similarly on calibration whereas the VGG-style network performs better (lower ECE and higher log likelihood as well as more diagonal calibration curve). This is in line with observations by that calibration of deep architectures gets worse as depth and complexity increase. ) to bright (C = 10). In addition to Fig. 4 we also provide for calibration in terms of ECE (lower is better), which are consistent with log likelihoods (higher is better): The Bayesian inspired choice of the regularisation parameter strikes a good balance between accuracy and calibration and consistently outperforms cross-validated choice of the parameter. Optimised value of mean negative log probability Gauss (iso) −175.9 ± 0.3 Gauss (MAP prior) −196.1 ± 0.5 Gauss (integr. prior) −200.6 ± 0.4 GMM 3-means (iso) −179.0 ± 0.3 GMM 3-means (diag) −181.2 ± 0.3 GMM 10-means iso −181.6 ± 0.4 GMM 10-means (diag) −181.6 ± 0.4 Laplace (iso) −173.8 ± 0.4 Laplace (diag) 0 −100 −200 −176.6 ± 0.5 Table 9: Held-out log probabilities on random 70/10-splits of the training weights for the different models on CIFAR-100. Values are averaged over 50 splits. Figure 10: Results on CIFAR-100 for VGG style architecture. We report accuracy, log-likelihood and calibration for the methods and inference procedures presented in Tab. 8. With the exception of GMM (10, iso) and Laplace, all methods are similar terms of accuracy and log-likelihood. Gauss (integr. prior) HMC and Gauss (MAP) HMC are slightly better calibrated than our proposed Gauss (MAP) iso, but require significantly more computation for the sampling procedure.behave very similar but that multivariate Gaussian models generally outperform other models. We attribute the good performance of the simpler models to the small number of data points (C − 10 = 70 training weights) and the high dimensionality of the space, which entail that fitting even simple models is difficult. Thus, more complicated models cannot improve over them.k-shot performance in CIFAR-100. Accuracies are measured on a 5-way classification task on the k-shot classes for k ∈ {1, 5, 10}. Results were averaged two-fold: (i) 20 random splits of the 5 k-shot classes; (ii) 10 repetitions of each split with different k-shot training examples. 
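A held-out weight log-probability comparison in the spirit of Table 9 can be approximated with a short script: repeatedly split the training weight vectors, fit each candidate prior on one part, and score the rest. The sketch below does this for an isotropic Gaussian and an isotropic Laplace only; it is an assumption-laden simplification rather than the exact procedure behind Table 9.

```python
import numpy as np
from scipy.stats import norm, laplace

def heldout_weight_logprob(train_weights, n_splits=50, n_heldout=10, seed=0):
    """Mean held-out log probability of training weight vectors under two priors.

    train_weights: (C_train, d). Each split fits the prior on most classes and
    scores the remaining n_heldout weight vectors.
    """
    rng = np.random.RandomState(seed)
    scores = {"gauss_iso": [], "laplace_iso": []}
    for _ in range(n_splits):
        idx = rng.permutation(len(train_weights))
        held, fit = train_weights[idx[:n_heldout]], train_weights[idx[n_heldout:]]

        mu_g, sd_g = fit.mean(axis=0), (fit - fit.mean(axis=0)).std()
        scores["gauss_iso"].append(
            norm.logpdf(held, loc=mu_g, scale=sd_g).sum(axis=1).mean())

        mu_l = np.median(fit, axis=0)
        b_l = np.abs(fit - mu_l).mean()
        scores["laplace_iso"].append(
            laplace.logpdf(held, loc=mu_l, scale=b_l).sum(axis=1).mean())
    return {k: (np.mean(v), np.std(v)) for k, v in scores.items()}
```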
Among our models, no statistically significant difference in accuracy is observed, with the exception of Laplace MAP and GMM (iso), which consistently underperforms. These findings are consistent in terms of log-likelihoods, see the first and second plots in FIG0. Finally, our methods are generally well calibrated, with Gaussian models generally better than Laplace models. Moreover, all methods (with the exception of Laplace and GMM (10, iso) have low ECE and high accuracy, see the third and fourth plots of FIG0. While Gauss (integr. prior) HMC and Gauss (MAP) HMC are sightly better calibrated than our proposed method in the main paper, Gauss (MAP) iso, we believe the gain in calibration is not worth the significant increase in computational resources needed for the sampling procedure. Interestingly, both GMM approaches are not able to outperform the other, simpler models. This is in line with the previous observation that the simpler models are better able to explain the weights. Again, we attribute this inability of mixture models to use their larger expressivity/capacity to the small number of data points and the high-dimensionality of weight-space which means learning even simple models is difficult. These observations suggest that the use of mixture models in this type of k-shot learning framework is not beneficial and is in contrast to the approach of, who employ a tree-structured mixture model. The authors show compare a model in which the assignments to the superclasses in the tree are optimized over against a model with a naive initialisation of the superclass assignments, and show that the first outperforms the second. However, they do not compare against a simpler baseline, e.g., a single Gaussian model. Overall, we observe that there is no significant benefit of more complex methods over the simple isotropic Gaussian, either in terms of accuracy, log-likelihood or calibration. Thus, our recommendation is that practitioners should use simple models and employ simple inference schemes to estimate all free parameters thereby avoiding expending valuable data on validation sets | This paper introduces a probabilistic framework for k-shot image classification that achieves state-of-the-art results | 1,088 | scitldr |
Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness, and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games.

Reinforcement Learning (RL) has seen a rejuvenation of research interest recently due to repeated successes in solving challenging problems such as reaching human-level play on Atari 2600 games BID15, beating the world champion in the game of Go BID21, and playing competitive 5-player DOTA BID18. The earliest of these successes leveraged experience replay for data efficiency and stacked a fixed number of consecutive frames to overcome the partial observability in Atari 2600 games. However, with progress towards increasingly difficult, partially observable domains, the need for more advanced memory-based representations increases, necessitating more principled solutions such as recurrent neural networks (RNNs). The use of LSTMs BID8 within RL has been widely adopted to overcome partial observability BID5 BID16 BID3 BID4.

In this paper we investigate the training of RNNs with experience replay. We have three primary contributions. First, we demonstrate the effect of experience replay on parameter lag, leading to representational drift and recurrent state staleness. This is potentially exacerbated in the distributed training setting, and ultimately results in diminished training stability and performance. Second, we perform an empirical study into the effects of several approaches to RNN training with experience replay, mitigating the aforementioned effects. Third, we present an agent that integrates these findings to achieve significant advances in the state of the art on Atari-57 BID1 and matches the state of the art on DMLab-30 BID0. To the best of our knowledge, our agent, Recurrent Replay Distributed DQN (R2D2), is the first to achieve this using a single network architecture and fixed set of hyper-parameters.

Our work is set within the Reinforcement Learning (RL) framework BID23, in which an agent interacts with an environment to maximize the sum of discounted, γ ∈ [0, 1), rewards. We model the environment as a Partially Observable Markov Decision Process (POMDP) given by the tuple (S, A, T, R, Ω, O) BID17 BID10 BID12. The underlying Markov Decision Process (MDP) is defined by (S, A, T, R), where S is the set of states, A the set of actions, T a transition function mapping state-actions to probability distributions over next states, and R: S × A → R is the reward function. Finally, Ω gives the set of observations the agent may receive, and O the observation function mapping (unobserved) states to probability distributions over observations.

2.2 DISTRIBUTED REINFORCEMENT LEARNING
Recent advances in reinforcement learning have achieved significantly improved performance by leveraging distributed training architectures which separate learning from acting, collecting data from many actors running in parallel on separate environment instances BID3 BID4 BID18 BID11. Distributed replay allows the Ape-X agent to decouple learning from acting, with actors feeding experience into the distributed replay buffer and the learner receiving (randomized) training batches from it.
In addition to distributed replay with prioritized sampling, Ape-X uses n-step return targets BID22, the double Q-learning algorithm, the dueling DQN network architecture BID26 and 4-frame-stacking. Ape-X achieved state-of-the-art performance on Atari-57, significantly out-performing the best single-actor algorithms. It has also been used in continuous control domains and again showed state-of-the-art results, further demonstrating the performance benefits of distributed training in RL. IMPALA BID3 is a distributed reinforcement learning architecture which uses a first-in-first-out queue with a novel off-policy correction algorithm called V-trace, to learn sequentially from the stream of experience generated by a large number of independent actors. IMPALA stores sequences of transitions along with an initial recurrent state in the experience queue, and since experience is trained on exactly once, this data generally stays very close to the learner parameters. BID3 showed that IMPALA could achieve strong performance in the Atari-57 and DMLab-30 benchmark suites, and furthermore was able to use a single large network to learn all tasks in a benchmark simultaneously while maintaining human-level performance.

We propose a new agent, the Recurrent Replay Distributed DQN (R2D2), and use it to study the interplay between recurrent state, experience replay, and distributed training. R2D2 is most similar to Ape-X, built upon prioritized distributed replay and n-step double Q-learning (with n = 5), generating experience by a large number of actors (typically 256) and learning from batches of replayed experience by a single learner. Like Ape-X, we use the dueling network architecture of BID26, but provide an LSTM layer after the convolutional stack, similarly to BID4. Instead of regular (s, a, r, s') transition tuples, we store fixed-length (m = 80) sequences of (s, a, r) in replay, with adjacent sequences overlapping each other by 40 time steps, and never crossing episode boundaries. When training, we unroll both online and target networks BID15 on the same sequence of states to generate value estimates and targets. We leave details of our exact treatment of recurrent states in replay for the next sections. Like Ape-X, we use 4-frame-stacks and the full 18-action set when training on Atari. On DMLab, we use single RGB frames as observations, and the same action set discretization as BID7. Following the modified Ape-X version in BID19, we do not clip rewards, but instead use an invertible value function rescaling of the form h(x) = sign(x)(√(|x| + 1) − 1) + εx, which results in the following n-step targets for the Q-value function:

ŷ_t = h( r_t + γ r_{t+1} + ... + γ^{n−1} r_{t+n−1} + γ^n h^{−1}( Q(s_{t+n}, a*; θ−) ) ),   a* = argmax_a Q(s_{t+n}, a; θ).

Here, θ− denotes the target network parameters which are copied from the online network parameters θ every 2500 learner steps. Our replay prioritization differs from that of Ape-X in that we use a mixture of max and mean absolute n-step TD-errors δ_i over the sequence: p = η max_i δ_i + (1 − η) δ̄, where δ̄ denotes the mean absolute TD-error. We set η and the priority exponent to 0.9. This more aggressive scheme is motivated by our observation that averaging over long sequences tends to wash out large errors, thereby compressing the range of priorities and limiting the ability of prioritization to pick out useful experience. Finally, compared to Ape-X, we used the slightly higher discount of γ = 0.997, and disabled the loss-of-life-as-episode-end heuristic that has been used in Atari agents in some of the work since BID15. A full list of hyper-parameters is provided in the Appendix.
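The value-function rescaling, its inverse, the rescaled n-step double-Q target and the mixed max/mean sequence priority can be written down compactly. The sketch below is one reading of the description above (the closed-form inverse is derived by solving h(y) = x for y ≥ 0 and extending by sign symmetry), not the released R2D2 code.

```python
import numpy as np

EPS = 1e-3  # the small epsilon in the rescaling

def h(x):
    return np.sign(x) * (np.sqrt(np.abs(x) + 1.0) - 1.0) + EPS * x

def h_inv(x):
    # Inverse of h, obtained by solving h(y) = x for y >= 0 and using sign symmetry.
    return np.sign(x) * (
        ((np.sqrt(1.0 + 4.0 * EPS * (np.abs(x) + 1.0 + EPS)) - 1.0) / (2.0 * EPS)) ** 2
        - 1.0)

def rescaled_nstep_target(rewards, gamma, q_online_next, q_target_next):
    """n-step double-Q target with value rescaling.

    rewards: (n,) rewards r_t, ..., r_{t+n-1}.
    q_online_next / q_target_next: (n_actions,) Q-values at s_{t+n} from the
        online and target networks, respectively.
    """
    n = len(rewards)
    discounted = sum(gamma ** k * rewards[k] for k in range(n))
    a_star = int(np.argmax(q_online_next))               # double Q-learning
    return h(discounted + gamma ** n * h_inv(q_target_next[a_star]))

def sequence_priority(td_errors, eta=0.9):
    """Mixture of max and mean absolute TD-errors over a replay sequence."""
    abs_td = np.abs(td_errors)
    return eta * abs_td.max() + (1.0 - eta) * abs_td.mean()
```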
We train the R2D2 agent with a single GPU-based learner, performing approximately 5 network updates per second (each update on a mini-batch of 64 length-80 sequences), and each actor performing ∼ 260 environment steps per second on Atari (∼ 130 per second on DMLab). In order to achieve good performance in a partially observed environment, an RL agent requires a state representation that encodes information about its state-action trajectory in addition to its current observation. The most common way to achieve this is by using an RNN, typically an LSTM BID8, as part of the agent's state encoding. To train an RNN from replay and enable it to learn meaningful long-term dependencies, whole state-action trajectories need to be stored in replay and used for training the network. BID5 compared two strategies of training an LSTM from replayed experience:• Using a zero start state to initialize the network at the beginning of sampled sequences.• Replaying whole episode trajectories. The zero start state strategy's appeal lies in its simplicity, and it allows independent decorrelated sampling of relatively short sequences, which is important for robust optimization of a neural network. On the other hand, it forces the RNN to learn to recover meaningful predictions from an atypical initial recurrent state ('initial recurrent state mismatch'), which may limit its ability to fully rely on its recurrent state and learn to exploit long temporal correlations. The second strategy on the other hand avoids the problem of finding a suitable initial state, but creates a number of practical, computational, and algorithmic issues due to varying and potentially environment-dependent sequence length, and higher variance of network updates because of the highly correlated nature of states in a trajectory when compared to training on randomly sampled batches of experience tuples. BID5 observed little difference between the two strategies for empirical agent performance on a set of Atari games, and therefore opted for the simpler zero start state strategy. One possible explanation for this is that in some cases, an RNN tends to converge to a more'typical' state if allowed a certain number of'burn-in' steps, and so recovers from a bad initial recurrent state on a sufficiently long sequence. We also hypothesize that while the zero start state strategy may suffice in the mostly fully observable Atari domain, it prevents a recurrent network from learning actual long-term dependencies in more memory-critical domains (e.g. on DMLab).To fix these issues, we propose and evaluate two strategies for training a recurrent neural network from randomly sampled replay sequences, that can be used individually or in combination: DISPLAYFORM0 Figure 1: Top row shows Q-value discrepancy ∆Q as a measure for recurrent state staleness. (a) Diagram of how ∆Q is computed, with green box indicating a whole sequence sampled from replay. For simplicity, l = 0 (no burn-in). (b) ∆Q measured at first state and last state of replay sequences, for agents training on a selection of DMLab levels (indicated by initials) with different training strategies. Bars are averages over seeds and through time indicated by bold line on x-axis in bottom row. (c) Learning curves on the same levels, varying the training strategy, and averaged over 3 seeds.• Stored state: Storing the recurrent state in replay and using it to initialize the network at training time. 
This partially remedies the weakness of the zero start state strategy, however it may suffer from the effect of'representational drift' leading to'recurrent state staleness', as the stored recurrent state generated by a sufficiently old network could differ significantly from a typical state produced by a more recent version.• Burn-in: Allow the network a'burn-in period' by using a portion of the replay sequence only for unrolling the network and producing a start state, and update the network only on the remaining part of the sequence. We hypothesize that this allows the network to partially recover from a poor start state (zero, or stored but stale) and find itself in a better initial state before being required to produce accurate outputs. In all our experiments we will be using the proposed agent architecture from Section 2.3 with replay sequences of length m = 80, with an optional burn-in prefix of l = 40 or 20 steps. Our aim is to assess the negative effects of representational drift and recurrent state staleness on network training and how they are mitigated by the different training strategies. For that, we will compare the Qvalues produced by the network on sampled replay sequences when unrolled using one of these strategies and the Q-values produced when using the true stored recurrent states at each step (see Figure 1a, showing different sources for the hidden state).More formally, let o t,..., o t+m and h t,..., h t+m denote the replay sequence of observations and stored recurrent states, and denote by h t+1 = h(o t, h t ; θ) and q(h t ; θ) the recurrent state and Qvalue vector output by the recurrent neural network with parameter vector θ, respectively. We writê h t for the hidden state, used during training and initialized under one of the above strategies (either DISPLAYFORM1 is computed by unrolling the network with parametersθ on the sequence o t, . . ., o t+l+m−1 . We estimate the impact of representational drift and recurrent state staleness by their effect on the Q-value estimates, by measuring Q-value discrepancy DISPLAYFORM2 for the first (i = l) and last (i = l + m − 1) states of the non-burn-in part of the replay sequence (see Figure 1a for an illustration). The normalization by the maximal Q-value helps comparability between different environments and training stages, as the Q-value range of an agent can vary dras-tically between these. Note that we are not directly comparing the Q-values produced at acting and training time, q(h t ; θ) and q(ĥ t ;θ), as these can naturally be expected to be distinct as the agent is being trained. Instead we focus on the difference that from applying the same network (parameterized byθ) to the distinct recurrent states. In Figure 1b, we are comparing agents trained with the different strategies on several DMLab environments in terms of this proposed metric. It can be seen that the zero start state heuristic in a significantly more severe effect of recurrent state staleness on the outputs of the network. As hypothesized above, this effect is greatly reduced for the last sequence states compared to the first ones, after the RNN has had time to recover from the atypical start state, but the effect of staleness is still substantially worse here for the zero state than the stored state strategy. 
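A simplified version of the ∆Q diagnostic can be expressed as follows, with `unroll_step` standing in for the recurrent Q-network and a per-step normalisation that only approximates the exact definition above; treat it as an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def q_discrepancy(unroll_step, obs_seq, stored_states, burn_in, init="stored"):
    """Relative Q-value discrepancy between training-time and stored-state unrolls.

    unroll_step: callable(obs, hidden) -> (q_values, next_hidden); a placeholder
        for the recurrent Q-network, not the real agent code.
    obs_seq: replay sequence of observations o_t, ..., o_{t+l+m-1}.
    stored_states: recurrent states h_t, ... saved by the actor at generation time.
    burn_in: number of leading steps used only to warm up the hidden state.
    init: "stored" or "zero" initial recurrent state for the training-time unroll.
    """
    hidden = stored_states[0] if init == "stored" else np.zeros_like(stored_states[0])
    deltas = []
    for i, obs in enumerate(obs_seq):
        q_train, hidden = unroll_step(obs, hidden)          # training-time unroll
        q_ref, _ = unroll_step(obs, stored_states[i])       # unroll from stored state
        if i >= burn_in:
            # gradients would only be taken on this post-burn-in part of the sequence
            deltas.append(np.linalg.norm(q_train - q_ref) /
                          (np.abs(q_ref).max() + 1e-8))
    return deltas[0], deltas[-1]   # discrepancy at first and last trained step
```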
Another potential downside of the pure zero state heuristic is that it prevents the agent from strongly relying on its recurrent state and exploit long-term temporal dependencies, see Section 5.We observe that the burn-in strategy on its own partially mitigates the staleness problem on the initial part of replayed sequences, while not showing a significant effect on the Q-value discrepancy for later sequence states. Empirically, this translates into noticeable performance improvements, as can be seen in Figure 1c. This itself is noteworthy, as the only difference between the pure zero state and the burn-in strategy lies in the fact that the latter unrolls the network over a prefix of states on which the network does not receive updates. In informal experiments (not shown here) we observed that this is not due to the different unroll lengths themselves (i.e., the zero state strategy without burn-in, on sequences of length l + m, performed worse overall). We hypothesize that the beneficial effect of burn-in lies in the fact that it prevents'destructive updates' to the RNN parameters ing from highly inaccurate initial outputs on the first few time steps after a zero state initialization. The stored state strategy, on the other hand, proves to be overall much more effective at mitigating state staleness in terms of the Q-value discrepancy, which also leads to clearer and more consistent improvements in empirical performance. Finally, the combination of both methods consistently yields the smallest discrepancy on the last sequence states and the most robust performance gains. We conclude the section with the observation that both stored state and burn-in strategy provide substantial advantages over the naive zero state training strategy, in terms of (indirect) measures of the effect of representation drift and recurrent state staleness, and empirical performance. Since they combine beneficially, we use both of these strategies (with burn-in length of l = 40) in the empirical evaluation of our proposed agent in Section 4. Additional on the effects of distributed training on representation drift and Q-value discrepancy are given in the Appendix. In this section we evaluate the empirical performance of R2D2 on two challenging benchmark suites for deep reinforcement learning: Atari-57 BID1 and DMLab-30 BID0. One of the fundamental contributions of Deep Q-Networks (DQN) BID15 was to set as standard practice the use of a single network architecture and set of hyper-parameters across the entire suite of 57 Atari games. Unfortunately, expanding past Atari this standard has not been maintained and, to the best of our knowledge, at present there is no algorithm applied to both Atari-57 and DMLab-30 under this standard. In particular, we will compare performance with Ape-X and IMPALA for which hyper-parameters are tuned separately for each benchmark. For R2D2, we use a single neural network architecture and a single set of hyper-parameters across all experiments. This demonstrates greater robustness and generality than has been previously observed in deep RL. It is also in pursuit of this generality, that we decided to disable the (Atari-specific) heuristic of treating life losses as episode ends, and did not apply reward clipping. Despite this, we observe state-of-the-art performance in both Atari and DMLab, validating the intuitions derived from our empirical study. A more detailed ablation study of the effects of these modifications is presented in the Appendix. 
The Atari-57 benchmark is built upon the Arcade Learning Environment (ALE) BID1, and consists of 57 classic Atari 2600 video games. Initial human-level performance was achieved by DQN BID15, and since then RL agents have improved significantly through both algorithmic and architectural advances. Currently, the state of the art for a single actor is achieved by the recent distributional reinforcement learning algorithms IQN and Rainbow BID6, and for multi-actor results, by Ape-X. Figure 2 (left) shows the median human-normalized scores across all games for R2D2 and related methods (see Appendix for full Atari-57 scores and learning curves); Figure 2 (right) shows example individual learning curves of R2D2, averaged over 3 seeds, and Ape-X, single seed. R2D2 achieves an order of magnitude higher performance than all single-actor agents and quadruples the previous state-of-the-art performance of Ape-X using fewer actors (256 instead of 360), resulting in higher sample- and time-efficiency. Table 1 lists mean and median human-normalized scores for R2D2 and other algorithms, highlighting these improvements. In addition to achieving state-of-the-art results on the entire task suite, R2D2 also achieves the highest ever reported agent scores on a large fraction of the individual Atari games, in many cases 'solving' the respective games by achieving the highest attainable score. In FIG0 (right) we highlight some of these individual learning curves of R2D2. As an example, notice the performance on MS.PACMAN is even greater than that of the agent reported in BID25, which was engineered specifically for this game. Furthermore, we notice that Ape-X achieves super-human performance for the same number of games as Rainbow, and that its improvements came from improving already strong scores. R2D2 on the other hand is super-human on 52 out of 57 games. Of those remaining, we anecdotally observed that three (SKIING, SOLARIS, and PRIVATE EYE) can reach super-human performance with higher discount rates and faster target network updates. The other two (MONTEZUMA'S REVENGE and PITFALL) are known hard exploration problems, and solving these with a general-purpose algorithm will likely require new algorithmic insights.
R2D2 average final score over 3 seeds (1 seed for feed-forward variant), IMPALA final score over 1 seed, Ape-X best training score with 1 seed. Our re-run of IMPALA uses the same improved action set from BID7 as R2D2, and is trained for a comparable number of environment frames (10B frames; the original IMPALA experts in BID3 were only trained for approximately 333M frames). R2D2+ refers to the adapted R2D2 variant matching deep IMPALA's 15-layer ResNet architecture and asymmetric reward clipping, as well as using a shorter target update period of 400.environment frames) and since R2D2 uses the improved action set introduced in BID7, we decided to re-run the IMPALA agent with improved action set and for a comparable training time (10B environment frames) for a fairer comparison, ing in substantially improved scores for the IMPALA agent compared to the original in BID3, see Table 1.Figure 3 compares R2D2 with IMPALA. We note that R2D2 exceeds the performance of the (shallow) IMPALA version, despite using the exact same set of hyper-parameters and architecture as the variant trained on Atari, and in particular not using the'optimistic asymmetric reward clipping' used by all IMPALA agents 1.To demonstrate the potential of our agent, we also devise a somewhat adapted R2D2 version for DMLab only (R2D2+) by adding asymmetric reward clipping, using the 15-layer ResNet from IMPALA (deep), and reducing the target update frequency from 2500 to 400 for better sample efficiency. To fit the larger model in GPU memory, we reduced the batch size from 64 to 32 in these runs only. We observe that this modified version yields further substantial improvements over standard R2D2 and matches deep IMPALA in terms of sample efficiency as well as asymptotic performance. Both our re-run of deep IMPALA and R2D2+ are setting new state-of-the-art scores on the DMLab-30 benchmark. Atari-57 is a class of environments which are almost fully observable (given 4-frame-stack observations), and agents trained on it are not necessarily expected to strongly benefit from a memoryaugmented representation. The main algorithmic difference between R2D2 and its predecessor, Ape-X, is the use of a recurrent neural network, and it is therefore surprising by how large a margin R2D2 surpasses the previous state of the art on Atari. In this section we analyze the role of the LSTM network and other algorithmic choices for the high performance of the R2D2 agent. Since the performance of asynchronous or distributed RL agents can depend on subtle implementational details and even factors such as precise hardware setup, it is impractical to perform a direct comparison to the Ape-X agent as reported in. Instead, here we verify that the LSTM and its training strategy play a crucial role for the success of R2D2 by a comparison of the R2D2 agent with a purely feed-forward variant, all other parameters held fixed. Similarly, we consider the performance of R2D2 using reward clipping without value rescaling (Clipped) and using a smaller discount factor of γ = 0.99 (Discount). The ablation in Figure 4 show very clearly that the LSTM component is crucial for boosting the agent's peak performance as well as learning speed, explaining much of the performance difference to Ape-X. Other design choices have more mixed effects, improving in some games and hurting performance in others. 
Full ablation (in particular, an ablation over the full Atari-57 suite of the feed-forward agent variant, as well as an ablation of the use of the life-loss-as-episode-termination heuristic) are presented in the Appendix. In our next experiment we test to what extent the R2D2 agent relies on its memory, and how this is impacted by the different training strategies. For this we select the Atari game MS.PACMAN, on which R2D2 shows state-of-the-art performance despite the game being virtually fully observable, and the DMLab task EMSTM WATERMAZE, which strongly requires the use of memory. We train two agents on each game, using the zero and stored state strategies, respectively. We then evaluate these agents by restricting their policy to a fixed history length: at time step t, their policy uses an LSTM unrolled over time steps o t−k+1,..., o t, with the hidden state h t−k replaced by zero instead of the actual hidden state (note this is only done for evaluation, not at training time of the agents).In Figure 5 (left) we decrease the history length k from ∞ (full history) down to 0 and show the degradation of agent performance (measured as mean score over 10 episodes) as a function of k. We additionally show the difference of max-Q-values and the percentage of correct greedy actions (where the unconstrained variant is taken as ground truth).We observe that restricting the agent's memory gradually decreases its performance, indicating its nontrivial use of memory on both domains. Crucially, while the agent trained with stored state shows higher performance when using the full history, its performance decays much more rapidly than for the agent trained with zero start states. This is evidence that the zero start state strategy, used in past RNN-based agents with replay, limits the agent's ability to learn to make use of its memory. While this doesn't necessarily translate into a performance difference (like in MS.PACMAN), it does so whenever the task requires an effective use of memory (like EMSTM WATERMAZE). This advantage of the stored state compared to the zero state strategy may explain the large performance difference between R2D2 and its close cousin Reactor BID4, which trains its LSTM policy from replay with the zero state strategy. Finally, the right and middle columns of Figure 5 show a monotonic decrease of the quality of Qvalues and the ing greedy policy as the available history length k is decreased to 0, providing a simple causal link between the constraint and the empirical agent performance. For a qualitative comparison of different behaviours learned by R2D2 and its feed-forward variant, we provide several agent videos at https://bit.ly/r2d2600. Here we take a step back from evaluating performance and discuss our empirical findings in a broader context. There are two surprising findings in our . First, although zero state initialization was often used in previous works BID5 BID4, we have found that it leads to misestimated action-values, especially in the early states of replayed sequences. Moreover, without burn-in, updates through BPTT to these early time steps with poorly estimated outputs seem to give rise to destructive updates and hinder the network's ability to recover from sub-optimal initial recurrent states. 
This suggests that either the context-dependent recurrent state should be stored along with the trajectory in replay, or an initial part of replayed sequences should be reserved for burn-in, to allow the RNN to rely on its recurrent state and exploit long-term temporal dependencies, and the two techniques can also be combined beneficially. We have also observed that the underlying problems of representational drift and recurrent state staleness are potentially exacerbated in the distributed setting (see Appendix), highlighting the importance of robustness to these effects through an adequate training strategy of the RNN.Second, we found that the impact of RNN training goes beyond providing the agent with memory. Instead, RNN training also serves a role not previously studied in RL, potentially by enabling better representation learning, and thereby improves performance even on domains that are fully observable and do not obviously require memory (cf. BREAKOUT in the feed-forward ablation).Finally, taking a broader view on our empirical , we note that scaling up of RL agents through parallelization and distributed training allows them to benefit from huge experience throughput and achieve ever-increasing over broad simulated task suites such as Atari-57 and DMLab-30. Impressive as these are in terms of raw performance, they come at the price of high sample complexity, consuming billions of simulated time steps in hours or days of wall-clock time. One widely open avenue for future work lies in improving the sample efficiency of these agents, to allow applications to domains that do not easily allow fast simulation at similar scales. Another remaining challenge, very apparent in our on Atari-57, is exploration: Save for the hardest-exploration games from Atari-57, R2D2 surpasses human-level performance on this task suite significantly, essentially'solving' many of the games therein. Figure 6: Left: Parameter lag experienced with distributed prioritized replay with (top) 256 and (bottom) 64 actors on four DMLab levels: explore obstructed goals large (eogl), explore object rewards many (eorm), lasertag three opponents small (lots), rooms watermaze (rw). Center: initialstate and Right: final-state Q-value discrepancy for the same set of experiments. In this section, we investigate the effects of distributed training of an agent using a recurrent neural network, where a large number of actors feed their experience into a replay buffer for a single learner. On the one hand, the distributed setting typically presents a less severe problem of representational drift than the single-actor case, such as the one studied in BID5. This is because in relative terms, the large amount of generated experience is replayed less frequently (on average, an experience sample is replayed less than once in the Ape-X agent, compared to eight times in DQN), and so distributed agent training tends to give rise to a smaller degree of'parameter lag' (the mean age, in parameter updates, of the network parameters used to generate an experience, at the time it is being replayed).On the other hand, the distributed setting allows for easy scaling of computational resources according to hardware or time constraints. An ideal distributed agent should therefore be robust to changes in, e.g., the number of actors, without careful parameter re-tuning. As we have seen in the previous section, RNN training from replay is sensitive to the issue of representational drift, the severity of which can depend on exactly these parameters. 
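Parameter lag as defined here is straightforward to track: store the learner's update counter alongside each generated sequence and, at replay time, average the age of the sampled sequences. The snippet below is a hypothetical sketch of that bookkeeping, not the instrumentation used in the experiments.

```python
import numpy as np

def mean_parameter_lag(sampled_generation_steps, current_learner_step):
    """Parameter lag: mean age (in learner updates) of the parameters that
    generated the sampled experience, measured at replay time.

    sampled_generation_steps: array of the learner-update counters that were
        current when each sampled sequence was generated by an actor (this
        counter would have to be stored alongside the sequence in replay).
    current_learner_step: the learner's update counter at sampling time.
    """
    ages = current_learner_step - np.asarray(sampled_generation_steps)
    return float(ages.mean())
```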
To investigate these effects, we train the R2D2 agent with a substantially smaller number of actors. This has a direct (inversely proportional) effect on the parameter lag (see Figure 6 (left)). Specifically, in our experiments, as the number of actors is changed from 256 to 64, the mean parameter lag goes from 1500 to approximately 5500 parameter updates, which in turn impacts the magnitude of representation drift and recurrent state staleness, as measured by ∆Q in Section 3. The right two columns in Figure 6 show an overall increase of the average ∆Q for the smaller number of actors, both for first and last states of replayed sequences. This supports the above intuitions and highlights the increased importance of an improved training strategy (compared to the zero state strategy) in the distributed training setting, if a certain level of empirical agent performance is to be maintained across ranges of extrinsic and potentially hardware-dependent parameters. In this section we give additional experimental results supporting our empirical study in the main text. FIG2 gives a more in-depth view of the ablation from Figure 4. We see that, with the exception of the feed-forward ablation, there are always games in which the ablated choice performs better. Our choice of architecture and configuration optimizes for overall performance and general (cross-domain) applicability, but for individual games there are different configurations that would yield improved performance. Additionally, in FIG4 we compare R2D2 with variants using the life loss signal as episode termination. Both ablation variants interrupt value function bootstrapping past the life loss events, but differ in that one ('reset') also resets the LSTM state at these events, whereas the other ('roll') only resets the LSTM state at actual episode boundaries, like regular R2D2. Despite the fact that the life loss heuristic is generally helpful to speed up learning in Atari, we did not use it in our main R2D2 agent for the sake of generality of the algorithm. Figure 9: Comparing sample efficiency between state-of-the-art agents on Atari-57 (human-normalized median score; curves for Ape-X, R2D2, feed-forward R2D2, Rainbow and Reactor). We observe a general trend of increasing final performance being negatively correlated with sample efficiency, which holds for all four algorithms compared. In Figure 9 we compare the sample efficiency of R2D2 with recent state-of-the-art agents on Atari-57 in terms of human-normalized median score. As expected, the more distributed agents have worse sample efficiency early on, but also much improved long-term performance. This is an interesting correlation on its own, but we add that R2D2 appears to achieve a qualitatively different performance curve than any of the other algorithms. Note that, while Ape-X has a larger number of actors than R2D2 (360 compared to 256), its learner processes approximately 20 batches of size 512 per second, whereas R2D2 performs updates on batches of 64 × 80 observations (batch size × sequence length), at a rate of approximately 5 per second.
This results in a reduced 'replay ratio' (the effective number of times each experienced observation is being replayed): on average, Ape-X replays each observation approximately 1.3 times, whereas this number is only about 0.8 for R2D2, which explains the initial sample efficiency advantage of Ape-X. R2D2 uses the same 3-layer convolutional network as DQN BID15, followed by an LSTM with 512 hidden units, which feeds into the advantage and value heads of a dueling network BID26, each with a hidden layer of size 512. Additionally, the LSTM receives as input the reward and one-hot action vector from the previous time step. On the four language tasks in the DMLab suite, we are using the same additional language-LSTM with 64 hidden units as IMPALA BID3. Table 2 (hyper-parameter values used in R2D2; all missing parameters follow the ones in Ape-X) includes, among others, a target network update interval of 2500 updates and the value function rescaling h(x) = sign(x)(√(|x| + 1) − 1) + εx with ε = 10⁻³. As is usual for agent training on Atari since BID15, we cap all (training and evaluation) episodes at 30 minutes (108,000 environment frames). Table 3: Performance of R2D2 and R2D2+, averaged over 3 seeds, compared to our own single-seed re-run of IMPALA (shallow/deep) with improved action-set and trained on the same amount of data (10B environment frames). Compared to standard R2D2, the R2D2+ variant uses a shorter target network update frequency (400 compared to 2500), as well as the substantially larger 15-layer ResNet and the custom 'optimistic asymmetric reward clipping' from BID3 | Investigation on combining recurrent neural networks and experience replay leading to state-of-the-art agent on both Atari-57 and DMLab-30 using single set of hyper-parameters. | 1,089 | scitldr |
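The value-function rescaling h from Table 2 and its inverse (needed to map rescaled value estimates back to the reward scale) can be written down directly; the sketch below implements the formula as reconstructed above, and the closed-form inverse is our own derivation (obtained by solving the quadratic in √(|x| + 1)), not quoted from the source.

```python
import math

EPS = 1e-3  # the epsilon value listed in Table 2

def h(x, eps=EPS):
    """Value-function rescaling: sign(x) * (sqrt(|x| + 1) - 1) + eps * x."""
    return math.copysign(1.0, x) * (math.sqrt(abs(x) + 1.0) - 1.0) + eps * x

def h_inv(z, eps=EPS):
    """Inverse of h, from solving eps*s^2 + s - (1 + eps + |z|) = 0
    for s = sqrt(|x| + 1)."""
    s = (math.sqrt(1.0 + 4.0 * eps * (abs(z) + 1.0 + eps)) - 1.0) / (2.0 * eps)
    return math.copysign(1.0, z) * (s * s - 1.0)

# numerical sanity check of the round trip
for x in (-250.0, -1.0, 0.0, 0.5, 1000.0):
    assert abs(h_inv(h(x)) - x) < 1e-6 * max(1.0, abs(x))
```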
The current state-of-the-art end-to-end semantic role labeling (SRL) model is a deep neural network architecture with no explicit linguistic features. However, prior work has shown that gold syntax trees can dramatically improve SRL, suggesting that neural network models could see great improvements from explicit modeling of syntax. In this work, we present linguistically-informed self-attention (LISA): a new neural network model that combines multi-head self-attention with multi-task learning across dependency parsing, part-of-speech, predicate detection and SRL. For example, syntax is incorporated by training one of the attention heads to attend to syntactic parents for each token. Our model can predict all of the above tasks, but it is also trained such that if a high-quality syntactic parse is already available, it can be beneficially injected at test time without re-training our SRL model. In experiments on the CoNLL-2005 SRL dataset LISA achieves an increase of 2.5 F1 absolute over the previous state-of-the-art on newswire with predicted predicates and more than 2.0 F1 on out-of-domain data. On CoNLL-2012 English SRL we also show an improvement of more than 3.0 F1, a 13% reduction in error. Semantic role labeling (SRL) extracts a high-level representation of meaning from a sentence, labeling e.g. who did what to whom. Explicit representations of such semantic information have been shown to improve performance in challenging downstream tasks such as dialog systems BID63 BID14, machine reading BID8 BID65 and machine translation BID36 BID5. Though syntax was long considered an obvious prerequisite for SRL systems BID34 BID51, recently deep neural network architectures have surpassed syntactically-informed models BID69 BID25 BID60, achieving state-of-the-art SRL performance with no explicit modeling of syntax. Still, recent work BID53 BID25 indicates that neural network models could see even higher performance gains by leveraging syntactic information rather than ignoring it. BID25 indicate that many of the errors made by a strong syntax-free neural network on SRL are tied to certain syntactic confusions such as prepositional phrase attachment, and show that while constrained inference using a relatively low-accuracy predicted parse can provide small improvements in SRL accuracy, providing a gold-quality parse leads to very significant gains. Other recent models incorporate syntax from a high-quality parser BID31 using graph convolutional neural networks BID32, but like BID25 they attain only small increases over a model with no syntactic parse, and even perform worse than a syntax-free model on out-of-domain data. These works suggest that though syntax has the potential to improve neural network SRL models, we have not yet designed an architecture which maximizes the benefits of auxiliary syntactic information. In response, we propose linguistically-informed self-attention (LISA): a model which combines multi-task learning BID12 with stacked layers of multi-head self-attention BID64 trained to act as an oracle providing syntactic parses to downstream parameters tasked with predicting semantic role labels. Our model is end-to-end: earlier layers are trained to predict prerequisite parts-of-speech and predicates, which are supplied to later layers for scoring. The model is trained such that, as syntactic parsing models improve, providing high-quality parses at test time can only improve its performance, allowing the model to benefit maximally from improved parsing models without requiring re-training.
Unlike previous work, we encode each sentence only once, predict its predicates, part-of-speech tags and syntactic parse, then predict the semantic roles for all predicates in the sentence in parallel, leading to exceptionally fast training and decoding speeds: our model matches state-of-the art accuracy in less than one quarter the training time. In extensive experiments on the CoNLL-2005 and CoNLL-2012 datasets, we show that our linguistically-informed models consistently outperform the syntax-free state-of-the-art for SRL models with predicted predicates. On CoNLL-2005, our single model out-performs the previous state-of-the-art single model on the WSJ test set by nearly 1.5 F1 points absolute using its own predicted parses, and by 2.5 points using a stateof-the-art parse . On the challenging out-of-domain Brown test set, our model also improves over the previous state-ofthe-art by more than 2.0 F1. On CoNLL-2012, our model gains 1.4 points with its own parses and more than 3.0 points absolute over previous work: 13% reduction in error. Our single models also out-perform state-of-the-art ensembles across all datasets, up to more than 1.4 F1 over a strong fivemodel ensemble on CoNLL-2012. Our goal is to design an efficient neural network model which makes use of linguistic information as effectively as possible in order to perform endto-end SRL. LISA achieves this by combining: Multi-task learning across four related tasks; a new technique of supervising neural attention to predict syntactic dependencies; and careful conditioning of different parts of the model on gold versus predicted annotations during training. Figure 1 depicts the overall architecture of our model. To first encode rich token-level representations, our neural network model takes word embeddings as input, which are passed through stacked convolutional, feed-forward and multihead self-attention layers BID64 to efficiently produce contextually encoded token embeddings (Eqns. 1-4). We choose this combination of network components because we found it to perform better than LSTM, CNN or self-attention layers alone in terms of speed-accuracy Pareto efficiency in initial experiments. To predict semantic role labels, the contextually encoded tokens are projected to distinct predicate and role embeddings (§2.4), and each predicted predicate is scored with the sequence's role representations using a bilinear model (Eqn. 5), producing per-label scores for BIO-encoded semantic role labels for each token and each semantic frame in the sequence entirely in parallel. To incorporate syntax, one self-attention head is trained to attend to each token's syntactic parent, allowing the model to use this attention head as an oracle for syntactic dependencies. We encourage the model to use this syntactic information as much as possible by giving subsequent layers access to a gold parse oracle during training, allowing either the predicted parse attention or an externally predicted parse to be used at test time. We introduce this syntactically-informed self-attention in more detail in §2.2.We integrate part-of-speech and predicate information into earlier layers by re-purposing representations closer to the input to predict predicates and part-of-speech (POS) tags (§2.3). 
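As an illustration of the bilinear scoring step mentioned above (Eqn. 5), the sketch below computes per-label scores for every predicate-token pair of a sentence in parallel; the tensor shapes and the einsum formulation are our assumptions, not the paper's exact parameterization.

```python
import numpy as np

def bilinear_role_scores(s_pred, s_role, U):
    """Per-label SRL scores for every (predicate, token) pair, for all frames
    of a sentence at once.
    s_pred: (F, d) representations of the F predicted predicates
    s_role: (T, d) role representations of the T tokens
    U:      (d, L, d) bilinear tensor, L = number of BIO role labels
    returns: (F, T, L) array of scores."""
    return np.einsum('fd,dle,te->ftl', s_pred, U, s_role)

# toy shapes only; the real dimensions are model hyperparameters
F, T, d, L = 2, 7, 4, 5
rng = np.random.default_rng(0)
scores = bilinear_role_scores(rng.normal(size=(F, d)),
                              rng.normal(size=(T, d)),
                              rng.normal(size=(d, L, d)))
print(scores.shape)  # (2, 7, 5)
```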
We simplify optimization and benefit from shared statistical strength derived from highly correlated POS and predicates by treating tagging and predicate detection as a single task, performing multi-class classification into the joint Cartesian product space of POS and predicate labels. The model is trained end-to-end by maximum likelihood using stochastic gradient descent (§2.5). The input to the network is a sequence X of T token representations x_t. Each token representation is the sum of a fixed (pre-trained) and learned (randomly initialized) word embedding. In the case where we feed a predicate indicator embedding p_t as input to the network, we concatenate that representation with the word embedding to give the final token embedding. Figure 1: Word embeddings are input to k CNN layers. This output is passed to a joint POS/predicate classifier and j layers of multi-head self-attention. One attention head is trained to attend to parse parents. A bilinear operation scores distinct predicate and role representations to produce BIO-encoded SRL predictions. DISPLAYFORM0 These token representations are then the input to a series of width-3 stacked convolutional layers with residual connections BID24, producing contextually embedded token representations c_t^(k) at each layer k. We denote the kth convolutional layer as C^(k). Let r(·) denote the leaky ReLU activation function BID39, and let LN(·) denote layer normalization BID2; then starting with input x_t, the final CNN output is given by the recurrence: DISPLAYFORM1 We use leaky ReLU activations to avoid dead activations and vanishing gradients BID26, whereas layer normalization reduces covariate shift between layers BID28 without requiring distinct train- and test-time operations. We then feed this representation as input to a series of residual multi-head self-attention layers with feed-forward connections in the style of the encoder portion of the Transformer architecture of BID64. This architecture allows each token to observe long-distance context from the entire sentence like an LSTM, but unlike an LSTM, representations for all tokens can be computed in parallel at each layer. We first project the output of the convolutional layers to a representation c_t^(p) that is the same size as the output of the self-attention layers and add a positional encoding vector computed as a deterministic sinusoidal function of t, following BID64. (All of our linear projections include bias terms, which we omit in this exposition for the sake of clarity.) We then apply the self-attention layers to this projected representation, applying layer normalization after each residual connection. Denoting the jth self-attention layer as T^(j)(·), the output of that layer s_t^(j), and h as the number of attention heads at each layer, the following recurrence, applied to initial input c_t^(p), gives our final token representations s_t^(j): DISPLAYFORM2 DISPLAYFORM3 with which we perform a weighted sum of the value vectors a_vh^value for each other token v to compose a new token representation for each attention head. The representations for each attention head are concatenated into a single vector a_t. We feed this representation through a multi-layer perceptron, add it to the initial representation and apply layer normalization to give the final output of self-attention layer j: DISPLAYFORM4 2.2 Syntactically-informed self-attention. Typically, neural attention mechanisms are left on their own to learn to attend to relevant inputs.
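For reference, a minimal implementation of the deterministic sinusoidal positional encoding added to c_t^(p) might look as follows; the standard Transformer-style construction is assumed here.

```python
import numpy as np

def sinusoidal_positions(T, d):
    """Deterministic sinusoidal positional encodings: even dimensions use sine,
    odd dimensions use cosine, with geometrically increasing wavelengths."""
    pos = np.arange(T)[:, None].astype(float)
    idx = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (idx // 2)) / d)
    return np.where(idx % 2 == 0, np.sin(angles), np.cos(angles))

print(sinusoidal_positions(5, 8).shape)  # (5, 8)
```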
Instead, we propose training the self-attention to attend to specific tokens corresponding to the syntactic structure of the sentence as a mechanism for passing linguistic knowledge to later layers. Specifically, we train with an auxiliary objective on one attention head which encourages that head to attend to each token's parent in a syntactic dependency tree. We use the attention weights a_th between token t and each other token q in the sequence as the distribution over possible heads for token t: P(q = head(t) | X) = a_thq, where we define the root token as having a self-loop. This attention head thus emits a directed graph (in most but not all cases the head emits a tree, but we do not currently enforce it) where each token's head is the token to which the attention assigns the highest weight. This attention head now becomes an oracle for syntax, denoted P, providing a dependency parse to downstream layers. This model not only predicts its own dependency arcs, but allows for the injection of auxiliary parse information at test time by simply swapping out the oracle given by a_th to one produced by e.g. a state-of-the-art parser. In this way, our model can benefit from improved, external parsing models without re-training. Unlike typical multi-task models, ours maintains the ability to leverage external syntactic information. Unfortunately, this parsing objective does not maximize the model's ability to use the syntactic information for predicting semantic role labels in later layers. Though one would expect model accuracy to increase significantly if injecting e.g. gold dependency arcs into the learned attention head at test time, we find that without specialized training this is not the case: without the training described below, fixing P to gold parses at test time improves SRL F1 over predicted parses by 0.3 points, whereas the F1 increases by 7.0 when the model is trained with our technique (on CoNLL-2012; CoNLL-2005 yields similar results). Injecting high-accuracy predicted parses follows the same trend. We hypothesize that the model is limited by the poor representations to which it has access during early training. When training begins, the model observes randomly initialized attention rather than strong syntactic information, even in the head which will be trained to provide it with such information. Thus rather than learning to look to this head for syntax, the model learns to encode that information itself, like a model which was trained with no explicit syntax at all. Prior work BID68 has alleviated this problem by pre-training the parameters of earlier tasks before initiating the training of later tasks. However, optimization in this setting becomes computationally expensive and complicated, especially as the number of auxiliary tasks increases, and when using adaptive techniques for stochastic gradient descent such as Adam BID30. To alleviate this problem, during training we clamp P to the gold parse (P_G) when using its representation for later layers, while still training a_th to predict syntactic heads. We find that this vastly improves the model's ability to leverage the parse information encoded in P at test time. Our approach is essentially an extension of teacher forcing BID66 to MTL.
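A toy sketch of this syntactically-informed attention head is shown below: the soft weights are trained to predict each token's parent, while downstream layers receive the gold parse clamped as a one-hot attention matrix during training. The projection shapes and loss form are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def syntax_attention_head(Q, K, gold_heads=None):
    """One attention head used as a parse oracle.
    Q, K: (T, d) query/key projections for the T tokens.
    gold_heads: length-T integer array of gold parent indices (root = self-loop).
    Returns (attention passed downstream, auxiliary parse loss). With gold
    heads, the downstream attention is clamped to the gold parse while the
    soft weights are still trained to predict the parents; without gold heads
    (test time), the soft weights themselves are returned."""
    A = softmax(Q @ K.T / np.sqrt(Q.shape[1]))        # P(q = head(t) | X)
    if gold_heads is None:
        return A, None
    T = len(gold_heads)
    parse_loss = -np.log(A[np.arange(T), gold_heads] + 1e-12).mean()
    clamped = np.zeros_like(A)
    clamped[np.arange(T), gold_heads] = 1.0
    return clamped, parse_loss

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(5, 16)), rng.normal(size=(5, 16))
attn, loss = syntax_attention_head(Q, K, gold_heads=np.array([0, 0, 1, 1, 3]))
print(attn.shape, float(loss))
```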
Though a large body of work suggests that, by closing the gap between observed data distributions during train and test, training on predicted rather than gold labels leads to improved test-time accuracy BID21 BID52 BID15 BID22 BID13 BID6 BID4, our simple approach works surprisingly well; we leave more advanced scheduled sampling techniques to future work. We also share the parameters of lower layers in our model to predict POS tags and predicates. Following He et al. FORMULA1, we focus on the end-toend setting, where predicates must be predicted on-the-fly. Since we also train our model to predict syntactic dependencies, it is beneficial to give the model some knowledge of POS information. While much previous work employs a pipelined approach to both POS tagging for dependency parsing and predicate detection for SRL, we take a multi-task learning (MTL) approach BID12, sharing the parameters of earlier layers in our SRL model with a joint POS and predicate detection objective. Since POS is a strong predictor of predicates, 4 and the complexity of training a multi-task model increases with the number of tasks, we combine POS tagging and predicate detection into a joint label space: for each POS tag TAG in the training data which co-occurs with a predicate, we add a label of the form TAG:PREDICATE.Specifically, we experiment with feeding a lower-level representation, r t, which may be either c (k) t, the output of the convolutional layers, or s t, the output of the first self-attention layer, to a linear classifier. We compute locally-normalized probabilities using the softmax function: P (z t | X) ∝ exp(r t), where z t is a label in the joint space. We apply this supervision at earlier lay-ers following prior work BID54 BID23. Our final goal is to predict semantic roles for each predicate in the sequence. We score each predicate 5 against each token in the sequence using a bilinear operation, producing per-label scores for each token for each predicate, with predicates and syntax determined by oracles V and P.First, we project each token representation s (j) t to a predicate-specific representation s pred t and a role-specific representation s role t. We then provide these representations to a bilinear transformation U for scoring. So, the role label scores s f t for the token at index t with respect to the predicate at index f (i.e. token t and frame f) are given by: DISPLAYFORM0 which can be computed in parallel across all semantic frames in an entire minibatch. We calculate a locally normalized distribution over role labels for token t in frame f using the softmax function: DISPLAYFORM1 At test time, we perform constrained decoding using the Viterbi algorithm to emit valid sequences of BIO tags, using unary scores s f t and the transition probabilities given by the training data. We maximize the sum of the likelihoods of the individual tasks, entrusting the network to learn parameters which model the complex coupling between tasks, rather than explicitly modeling structure in the output labels: DISPLAYFORM0 where λ is a penalty on the syntactic attention loss. Note that as described in §2.2, the terms for the syntactically-informed attention and joint predicate/POS prediction are conditioned only on the input sequence X, whereas the SRL component is conditioned on gold predicates V G and gold parse structure P G during training. We train the model using Nadam SGD combined with the learning rate schedule in BID64. 
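The joint POS/predicate label space described above can be built with a few lines; the helper below is hypothetical but follows the TAG:PREDICATE construction directly.

```python
def joint_pos_predicate_labels(pos_tags, predicate_pos):
    """Joint label space for the shared POS/predicate classifier: every POS
    tag gets a plain label, and tags observed on predicates in the training
    data additionally get a TAG:PREDICATE label."""
    labels = list(pos_tags)
    labels += [f"{t}:PREDICATE" for t in pos_tags if t in predicate_pos]
    return {lab: i for i, lab in enumerate(labels)}

# e.g. only verbal tags co-occur with predicates in this toy inventory
print(joint_pos_predicate_labels(["NN", "VB", "VBD", "IN"], {"VB", "VBD"}))
```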
In addition to MTL, we regularize our model using element-wise and word dropout BID55 BID20 and parameter averaging. We use gradient clipping to avoid exploding gradients BID7 BID45. Our models are implemented in TensorFlow BID0 with source code and models to be released upon publication. Additional details on optimization and hyperparameters are included in Appendix A. Early approaches to SRL BID50 BID56 BID29 BID61 focused on developing rich sets of linguistic features as input to a linear model, often combined with complex constrained inference, e.g. with an ILP BID51. BID59 showed that constraints could be enforced more efficiently using a clever dynamic program for exact inference. BID57 modeled syntactic parsing and SRL jointly, and BID35 jointly modeled SRL and CCG parsing. BID19 were among the first to use a neural network model for SRL, a CNN over word embeddings which failed to out-perform non-neural models. Later work successfully employed neural networks by embedding lexicalized features and providing them as factors in the model of BID59. More recent neural models are syntax-free (BID69; BID25). Some work has incorporated syntax into neural models for SRL: BID53 incorporate syntax by embedding dependency paths, and related work similarly encodes syntax using a graph CNN over a predicted syntax tree, out-performing models without syntax on CoNLL-2009. However, both models are at risk of over-fitting to or otherwise inheriting the flaws of the predictions upon which they are trained. Indeed, the latter authors report that their model does not out-perform a similar syntax-free model on out-of-domain data. Syntactically-informed self-attention is similar to the concurrent work of BID37, who use edge marginals produced by the matrix-tree algorithm as attention weights for document classification and natural language inference. MTL BID12 is popular in NLP. Collobert et al. multi-task part-of-speech, chunking, NER and SRL. BID68 jointly train a dependency parser and POS tagger. Søgaard and Goldberg train a multitask model for POS, chunking and CCG tagging. BID23 built a single, jointly trained model for POS, chunking, parsing, semantic relatedness and entailment, using a special regularization scheme to facilitate training. BID9 and others investigate different combinations of NLP tagging tasks including POS, chunking and FrameNet semantics. BID38 enhance a machine translation model by multitasking with parsing. MTL has also been applied to semantic dependency parsing: BID58 multi-task with a syntax-based tagging objective while BID46 train on three semantic dependency frameworks. The question of training on gold versus predicted labels is closely related to learning to search BID21 BID52 BID13 and scheduled sampling BID6, with applications in NLP to sequence labeling and transition-based parsing BID15 BID22 BID4. We believe more sophisticated approaches extending these techniques to MTL could improve LISA in future work. We present results on the CoNLL-2005 shared task BID11 and the CoNLL-2012 English subset of OntoNotes 5.0 BID49, achieving state-of-the-art results for a single model with predicted predicates on both corpora. In all experiments, we initialize with pre-trained GloVe word embeddings BID47; hyperparameters that resulted in the best performance on the validation set were selected via a small grid search, and models were trained for a maximum of 7 days on one TitanX GPU using early stopping on the validation set.
For CoNLL-2005 we convert constituencies to dependencies using the Stanford head rules v3.5 (de) and for CoNLL-2012 we use ClearNLP (b), following previous work. A detailed description of hyperparameter settings and data pre-processing can be found in Appendix A. For both datasets, we compare our best models (LISA G) to three strong sets of baselines: the syntax-free deep LSTM model of BID25 which was the previous state-of-the-art model for SRL with predicted predicates, both as an ensemble of five models (PoE) and as a single model (single); an ablation of our own self-attention model where we don't incorporate any syntactic information (SA); and another ablation where we do train with syntactically-informed self-attention, but where downstream layers in the model are conditioned on the predicted attention weights (i.e. dynamic oracle, D) rather than the gold parse (G) during training (LISA D). We demonstrate that our models can benefit from injecting state-of-the-art predicted parses at test time (+D&M) by setting the attention oracle to parses predicted by the D&M parser, the state-of-the-art dependency parser for English PTB and winner of the 2017 CoNLL shared task BID67. In all cases, using these parses at test time improves performance. We also evaluate our model using the gold syntactic parse at test time (+Gold), to provide an upper bound for the benefit that syntax could have for SRL using LISA. These experiments show that despite LISA's strong performance, there remains substantial room for improvement. In §4.4 we perform detailed analysis comparing SRL models using gold and predicted parses to better understand where syntax provides the most benefit to SRL, and what remains to be improved. (Our best reported CoNLL-2012 model was trained for just under 6 days, though it matched BID25.) We first report the unlabeled attachment scores (UAS) of our parsing models on the CoNLL-2005 and 2012 SRL test sets (Table 1). The D&M parser achieves the best scores, obtaining state-of-the-art results on the CoNLL-2012 split of OntoNotes in terms of UAS (the previous best score we know of is 92.5, attained by Mate BID10, as reported in BID18), followed by LISA G then LISA D. We still see SRL accuracy improvements despite our relatively low parser UAS from LISA's predicted parses, but the difference in accuracy likely explains the large increase in SRL we see from decoding with D&M parses. TAB3 reports precision, recall and F1 on the CoNLL-2012 test set. Our SA model already performs strongly without access to syntax, out-performing the single model of BID25 but under-performing their ensemble. Adding syntactically-informed training to the self-attention increases performance over the model without syntax, achieving about the same score using dynamic versus gold parse oracles for downstream layers during training. When evaluating using an injected parse, we see a large increase of more than 1.5 F1 absolute for LISA G, and this increase is markedly larger than for LISA D. With the injected D&M parse, our single models impressively outperform the ensemble. We also report predicate detection precision, recall and F1 (TAB5). Our models obtain much higher scores than BID25 on this task, likely explaining improvements of our basic SA model over theirs. Like BID25, our model achieves much higher precision than recall, indicative of the model memorizing predicate words from the training data. Interestingly, our SA model out-performs syntax-infused models by a small margin.
We hypothesize that this could be due to asking the LISA models to learn to predict more tasks, taking some model capacity away from predicate detection. TAB7 lists precision, recall and F1 on the CoNLL-2005 test sets. Unlike on CoNLL-2012, our SA baseline does not out-perform BID25. This is likely due to their predicate detection scores being closer to ours on this data (Table 5). Interestingly, unlike on CoNLL-2012 we see a distinct improvement between LISA G and LISA D in models which use LISA parses: LISA G training leads to improved SRL scores by more than 1 F1 absolute using LISA-predicted parses. Similar to CoNLL-2012, we see very little improvement from adding D&M parses at test-time with the dynamic oracle, whereas we obtain the highest score of all when using D&M parses combined with LISA G, demonstrating that our training technique markedly improves LISA's ability to leverage improved parses at test time. Our best single models out-perform the ensemble of (Table 5): all our models out-perform the baseline in terms of F1. In the case of CoNLL-2005 BID25 attains higher recall, especially on the Brown test set, while our model achieves higher precision. We report only LISA G since there is little difference across *SA models. LISA in its current form does not perform as well when gold predicates are given at test time. Table 6 presents LISA G performance with predicate indicator embeddings provided on the input. On neither test set does our model using LISA parses out-perform the state-of-the-art. With D&M parses, our models out-perform BID25, but not BID60.We attribute this behavior to two factors. First, the models of BID25 and BID60 are larger than our models. 8. Our models were designed to predict predicates, and we found the current model size sufficient for good performance in this setting. Second, our model encodes each sequence only once, while the works to which we compare re-encode the sequence anew for each predicate. Since our model predicts its own predicates using a shared sentence encoding, it is impossible to encode sequences in this way. We also do not enforce that the model assign the correct predicate label during decoding, leading to incorrect predicate predictions despite gold predicate inputs. For example, in a challenging sentence which contains two distinct semantic frames with the identical predicate continued, our model incorrectly predicts both tokens as predicates in one of the frames. With more careful modeling toward gold predicates, our technique could be improved for this setting. Still, LISA shows impressive performance when gold predicates are not available, as when using SRL in the wild. In §4.2 and §4.3 we observed that while LISA performs well with state-of-the-art predicted syntax, it still sees a large gain across all datasets of 4-5 F1 points absolute when provided with gold syntax trees. In order to better understand the nature of these improvements, we perform a detailed model analysis based on that of BID25 First, we compare the impact of Viterbi decoding with LISA, D&M, and gold syntax trees TAB10, finding the same trends across both datasets. While Viterbi decoding makes a larger difference over greedy decoding with LISA parses than with D&M, we find that Viterbi has the exact same impact for D&M and gold parses: Gold parses provide no improvement over state-of-the-art predicted parses in terms of BIO label consistency. We also assess SRL F1 as a function of sentence length. 
In FIG3 we see that providing LISA with gold parses is particularly helpful for sentences longer than 10 tokens. This likely directly follows from the tendency of syntactic parsers to perform worse on longer sentences. Next, we compare SRL error types. Following BID25, we apply a series of corrections to model predictions in order to understand which error types the gold parse resolves: e.g. Fix Labels fixes labels on spans which match gold boundaries, whereas Merge Spans merges adjacent predicted spans into a gold span. In Figure 3 we see that much of the performance gap between the gold and predicted parses is due to span boundary errors (Merge Spans, Split Spans and Fix Span Boundary), which supports the hypothesis proposed by BID25 that incorporating syntax could be particularly helpful for resolving these errors. BID25 note that these errors are due mainly to prepositional phrase (PP) attachment mistakes. We also find this to be the case: Figure 4 shows a breakdown of split/merge corrections by phrase type. Though the number of corrections decreases substantially across phrase types, the proportion of corrections attributed to PPs remains the same (approx. 50%) even after providing the correct PP attachment to the model, indicating that PP span boundary mistakes are due not only to parse mistakes, but are a fundamental difficulty for SRL. We present linguistically-informed self-attention: a new multi-task neural network model that effectively incorporates rich linguistic information for semantic role labeling. LISA out-performs the state-of-the-art on two benchmark SRL datasets, including out-of-domain, while training more than 4× faster. Future work will explore improving LISA's parsing accuracy, developing better training techniques and adapting to more tasks. Following previous work BID25, we evaluate our models on the CoNLL-2012 data split BID49 of OntoNotes 5.0 BID27. This dataset is drawn from seven domains: newswire, web, broadcast news and conversation, magazines, telephone conversations, and text from the bible. The text is annotated with gold part-of-speech, syntactic constituencies, named entities, word sense, speaker, co-reference and semantic role labels based on the PropBank guidelines BID44. Propositions may be verbal or nominal, and there are 41 distinct semantic role labels, excluding continuation roles and including the predicate. We processed the data as follows: we convert the semantic proposition and role segmentations to BIO boundary-encoded tags, resulting in 129 distinct BIO-encoded tags (including continuation roles). We initialize word embeddings with 100d pre-trained GloVe embeddings trained on 6 billion tokens of Wikipedia and Gigaword BID47. Following the experimental setup for parsing from Choi et al., we convert constituency structure to dependencies using the ClearNLP dependency converter BID17, use automatic part-of-speech tags assigned by the ClearNLP tagger BID16, and exclude single-token sentences in our parsing evaluation. (We constructed the data split following the official instructions.) The CoNLL-2005 shared task (BID11) is based on the original PropBank corpus BID44, which labels the Wall Street Journal portion of the Penn TreeBank corpus (PTB) BID42 with predicate-argument structures, plus a challenging out-of-domain test set derived from the Brown corpus (Francis and Kučera, 1964). This dataset contains only verbal predicates, though some are multiword verbs, and 28 distinct role label types.
We obtain 105 SRL labels including continuations after encoding predicate argument segment boundaries with BIO tags. We evaluate the SRL performance of our models using the srl-eval.pl script provided by the CoNLL-2005 shared task, which computes segment-level precision, recall and F1 score. We also report the predicate detection scores output by this script. For CoNLL-2005 we train the same parser as for CoNLL-2012 except on the typical split of the WSJ portion of the PTB using Stanford dependencies (de) and POS tags from the Stanford CoreNLP left3words model BID62. We train on WSJ sections 02-21, use section 22 for development and section 23 for test. This corresponds to the same train/test split used for propositions in the CoNLL-2005 dataset, except that section 24 is used for development rather than section 22. We train the model using the Nadam algorithm for adaptive stochastic gradient descent (SGD), which combines Adam BID30 SGD with Nesterov momentum BID43. We additionally vary the learning rate lr as a function of an initial learning rate lr_0 and the current training step step, as described in Vaswani et al. BID64, which increases the learning rate linearly for the first warm training steps, then decays it proportionally to the inverse square root of the step number. We found this learning rate schedule essential for training the self-attention model. We only update optimization moving-average accumulators for parameters which receive gradient updates at a given step. In all of our experiments we used initial learning rate 0.04, β1 = 0.9, β2 = 0.98, ε = 1 × 10⁻¹² and dropout rates of 0.33 everywhere. We use four self-attention layers made up of 8 attention heads each with embedding dimension 64, and two CNN layers with filter size 1024. The size of all MLP projections (in the feed-forward portion of self-attention, the predicate and role representations, and the representation used for joint part-of-speech/predicate classification) is 256. We train with warm = 4000 warmup steps and clip gradient norms to 5. | Our combination of multi-task learning and self-attention, training the model to attend to parents in a syntactic parse tree, achieves state-of-the-art CoNLL-2005 and CoNLL-2012 SRL results for models using predicted predicates. | 1,090 | scitldr |
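The learning-rate schedule described in the preceding passage (linear warm-up followed by inverse-square-root decay, with lr_0 = 0.04 and warm = 4000) can be sketched as follows; the exact functional form is assumed to be the Vaswani-style schedule.

```python
def lisa_lr(step, lr0=0.04, warmup=4000):
    """Learning-rate schedule: linear warm-up for the first `warmup` steps,
    then decay proportional to the inverse square root of the step number
    (Vaswani-style functional form is an assumption here)."""
    step = max(step, 1)
    return lr0 * min(step * warmup ** -1.5, step ** -0.5)

print([round(lisa_lr(s), 6) for s in (1, 2000, 4000, 16000, 64000)])
```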
Bottleneck structures with identity (e.g., residual) connection are now emerging as popular paradigms for designing deep convolutional neural networks (CNN), for processing large-scale features efficiently. In this paper, we focus on the information-preserving nature of identity connection and utilize this to enable a convolutional layer to have a new functionality of channel-selectivity, i.e., re-distributing its computations to important channels. In particular, we propose Selective Convolutional Unit (SCU), a widely-applicable architectural unit that improves parameter efficiency of various modern CNNs with bottlenecks. During training, SCU gradually learns the channel-selectivity on-the-fly via the alternative usage of (a) pruning unimportant channels, and (b) rewiring the pruned parameters to important channels. The rewired parameters emphasize the target channel in a way that selectively enlarges the convolutional kernels corresponding to it. Our experimental results demonstrate that the SCU-based models, without any postprocessing, generally achieve both model compression and accuracy improvement compared to the baselines, consistently for all tested architectures. Nowadays, convolutional neural networks (CNNs) have become one of the most effective approaches in various fields of artificial intelligence. With growing interest in CNNs, there has been a lot of work on designing more advanced CNN architectures BID43 BID21. In particular, the simple idea of adding identity connection in ResNet BID11 has enabled breakthroughs in this direction, as it allows training substantially deeper/wider networks than before by alleviating existing optimization difficulties in previous CNNs. Recent CNNs can scale over a thousand layers BID12 or channels BID18 without much overfitting, and most of these "giant" models consider identity connections in various ways BID49 BID18. However, as CNN models grow rapidly, deploying them in the real world becomes increasingly difficult due to computing resource constraints. This has motivated the recent literature such as network pruning BID9 BID28 BID35, weight quantization BID36 BID3, adaptive networks BID47 BID5 BID0 BID19, and resource-efficient architectures BID17 BID40 BID32. For designing a resource-efficient CNN architecture, it is important to process succinct representations of large-scale channels. To this end, the identity connections are useful since they allow reducing the representation dimension to a large extent while "preserving" information from the previous layer. Such bottleneck architectures are now widely used in modern CNNs such as ResNet BID11 and DenseNet BID18 for parameter efficiency, and many state-of-the-art mobile-targeted architectures such as SqueezeNet BID20, ShuffleNet BID53 BID32, MobileNet BID16 BID40, and CondenseNet BID17 commonly address the importance of designing efficient bottlenecks. Contribution. In this paper, we propose Selective Convolutional Unit (SCU), a widely-applicable architectural unit for efficient utilization of parameters, in particular as a bottleneck upon identity connection. At a high level, SCU performs a convolutional operation to transform a given input. The main goal of SCU, however, is rather to re-distribute its computations only to selected channels (Figure 1: (a) an illustration of the channel de-allocation and re-allocation procedures, where the higher the saturation of the channel color, the higher the ECDS value;
(b) the overall structure of SCU) of importance, instead of processing the entire input naively. To this end, SCU has two special operations: (a) de-allocate unnecessary input channels (dealloc), and (b) re-allocate the obstructed channels to other channels of importance (realloc) (see Figure 1a). They are performed without damaging the network output (i.e., they are function-preserving operations), and therefore one can call them safely at any time during training. Consequently, training SCU is a process that increases the efficiency of CNN by iteratively pruning or rewiring its parameters on-the-fly along with learning them. In some sense, it is similar to how the hippocampus in the human brain learns, where new neurons are generated daily and rewired into the existing network, while the network is maintained via neuronal apoptosis or pruning BID38 BID49. We combine several new ideas to tackle technical challenges for such an on-demand, efficiently trainable SCU. First, we propose the expected channel damage score (ECDS), a novel metric of channel importance that is used as the criterion to select channels for dealloc or realloc. Compared to other popular magnitude-based metrics BID28 BID35, ECDS allows capturing not only low-magnitude channels but also channels of low contribution under the input distribution. Second, we impose a channel-wise spatial shifting bias when a channel is reallocated, providing much diversity in the input distribution. It also has an effect of enlarging the convolutional kernel of SCU. Finally, we place a channel-wise scaling layer inside SCU with sparsity-inducing regularization, which also promotes dealloc (and consequently realloc as well), without further overhead in inference and training. We evaluate the effectiveness of SCU by applying it to several modern CNN models including ResNet BID11, DenseNet BID18, and ResNeXt BID49, on various classification datasets. Our experimental results consistently show that SCU improves the efficiency of bottlenecks both in model size and classification accuracy. For example, SCU reduces the error rates of the DenseNet-40 model (without any post-processing) while using even fewer parameters: 6.57% → 5.95% and 29.97% → 28.64% on CIFAR-10/100 datasets, respectively. We also apply SCU to a mobile-targeted CondenseNet BID17 model, and further improve its efficiency: it even outperforms NASNet-C BID54, an architecture searched with 500 GPUs for 4 days, while our model is constructed with minimal effort automatically via SCU. There has been significant interest in the literature on discovering which parameters should be pruned during training of neural networks, e.g., see the literature of network sparsity learning BID48 BID25 BID41 BID35 BID30 BID4. On the other hand, the progress is, arguably, slower for how to rewire the pruned parameters of a given model to maximize its utility. Prior work proposed Dense-Sparse-Dense (DSD), a multi-step training flow applicable to a wide range of DNNs, showing that re-training with re-initializing the pruned parameters can improve the performance of the original network. Dynamic network surgery BID7, on the other hand, proposed a methodology of splicing the pruned connections so that mis-pruned ones can be recovered, yielding a better compression performance. In this paper, we propose a new way of rewiring for parameter efficiency, i.e., rewiring for channel-selectivity, and a new architectural framework that enables both pruning and rewiring in a single pass of training without any postprocessing or re-training (much like human brain learning).
Under our framework, one can easily set a targeted trade-off between model compression and accuracy improvement depending on her purpose, simply by adjusting the calling policy of dealloc and realloc. We believe that our work opens a new direction on the important problem of training neural networks efficiently. In this section, we describe Selective Convolutional Unit (SCU), a generic architectural unit for bottleneck CNN architectures. The overall structure of SCU is described in Sections 2.1 and 2.2. In Section 2.3, we introduce a metric for deciding channel-selectivity in SCU. We present in Section 2.4 how to handle a network including SCUs in training and inference. Bottleneck structures in modern CNNs. We first consider a residual function defined in ResNet BID11 which has an identity mapping: for a given input random variable X ∈ R^{I×H×W} (H and W are the height and width of each channel, respectively, and I is the number of channels or feature maps) and a non-linear function F, the output of a residual function is written as Y = X + F(X). This function has been commonly used as a building block for designing recent deep CNN models, in a form where F is modeled by a shallow CNN. However, depending on how F is designed, computing F(X) can be expensive when I is large. For tackling the issue, the bottleneck structure is a prominent approach, that is, to model F as a composition F′ ∘ R by placing a bottleneck R that first maps X into a lower dimension of I′ < I features. This approach, in essence, requires the identity connection, for avoiding information loss from X to Y. Namely, the identity connection enables a layer to save redundant computation (or parameters) for just "keeping" information from the input. Bottleneck structures can be used other than in ResNet as well, as long as the identity connection exists. Recent architectures including DenseNet BID18, PyramidNet BID8 and DPN develop this idea using a different aggregation function W instead of addition in ResNet, e.g., W(X, X′) = [X, X′] (channel-wise concatenation) for DenseNet. Designing R-F-W is now a common way of handling large features. Channel-selectivity for efficient bottlenecks. Although placing a bottleneck R reduces much of the computation of the main function F, we point out that the majority of modern CNNs currently use an inefficient design of R itself, so that even the computation of R often dominates the remaining computation. In ResNet and DenseNet models, for example, bottlenecks are designed using a pointwise convolution with a batch normalization layer (BN) BID21 and ReLU BID34: DISPLAYFORM0 where Conv_{I→I′} denotes a pointwise convolution that maps I features into I′ features, i.e., its parameters can be represented by an I × I′ matrix. This means that the parameters of R grow linearly in I, and it can be much larger than F if I ≫ I′. For example, in the case of DenseNet-BC-190 BID18, 70% of the total parameters are devoted to modeling R, which is inefficient as the expressivity of a pointwise convolution is somewhat limited. In this paper, we attempt to improve the efficiency of R in two ways: (a) reducing the parameters in Conv_{I→I′} by channel pruning, and (b) improving its expressivity by using the pruned parameters again. This motivates our goal to learn both channel-selectivity and parameters jointly. Overall architecture of SCU. SCU is designed to learn the channel-selectivity via dynamic pruning and rewiring of channels during training.
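For concreteness, the baseline bottleneck R discussed above can be sketched as below; the pre-activation ordering (BN, ReLU, then the 1×1 convolution) is an assumption, since the exact equation is abbreviated here, but the parameter count of the pointwise convolution is the relevant point.

```python
import torch.nn as nn

def bottleneck_R(in_channels, out_channels):
    """A standard bottleneck R in the pre-activation style used by
    DenseNet-BC-like models: BN and ReLU followed by a pointwise (1x1)
    convolution mapping I channels down to I' < I channels."""
    return nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
    )

# The 1x1 convolution alone holds in_channels * out_channels weights, so R
# grows linearly with the input width I:
R = bottleneck_R(1024, 128)
print(sum(p.numel() for p in R.parameters()))  # dominated by 1024*128 weights
```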
In this paper, we focus on putting SCU as a bottleneck R, and show that the channel-selectivity of SCU improves its parameter efficiency. Our intuition is that (a) the information-preserving nature of the identity connection brings optimization benefits if neurons in its structure are dynamically pruned during training, and (b) such pruning can be particularly effective on bottlenecks as their outputs are in a much lower dimension compared to the input. Nevertheless, we believe that our ideas on SCU are not limited to the bottleneck structures, as the concept of channel-selectivity can be generalized to other structures. At a high level, SCU follows the bottleneck structure described above, but adds two additional layers: Channel Distributor (CD) and Noise Controller (NC), whose details are presented in Section 2.2. We model a non-linear function SCU: R^{I×H×W} → R^{I′×H×W} as follows (see Figure 1b): DISPLAYFORM0 SCU has two special operations which control the input channels it processes: (a) channel de-allocation (dealloc), which obstructs unnecessary channels from being used in future computations, and (b) channel re-allocation (realloc), which allocates more parameters to important, non-obstructed channels by copying them into the obstructed areas. We design those operations to be function-preserving, i.e. they do not change the original function of the unit, so that they can be called at any time during training without damage. Repeating dealloc and realloc alternatively during training translates the original input into one that has only a few important channels, potentially duplicated multiple times. Namely, the parameters originally allocated to handle the entire input now operate on its important subset. On the way of designing the function-preserving operations, we propose the Expected Channel Damage Score (ECDS), which leads to an efficient, safe way to capture unimportant channels by measuring how much the output of SCU changes on average (w.r.t. the data distribution) after removing each channel. The details of ECDS are in Section 2.3. Channel Distributor (CD) is the principal mechanism of SCU and is placed at the beginning of the unit. The role of CD is to "rebuild" the input, so that unnecessary channels can be discarded, and important channels are copied to be emphasized. In essence, we implement this function by re-indexing and blocking the input channel-wise: CD(X)_i := g_i · X_{π_i} with an index pointer π_i ∈ {1, 2, ..., I} and a gate variable g_i ∈ {0, 1} for i = 1, 2, ..., I. Here, we notice that CD(X) may contain a channel copied multiple times, i.e., multiple π_i's can have the same value. Since SCU has different parameters for each channel, setting multiple π_i's has an effect of allocating more parameters to better process the channel pointed to by π_i. We found, however, that it is hard to take advantage of the newly allocated parameters by simply copying a channel, due to symmetry, i.e., the parameters for each channel usually degenerate. Due to this, we consider spatial shifting biases b_i = (b_i^h, b_i^w) ∈ R² for each channel, as illustrated in FIG0. This trick can provide the copied channels much diversity in input distributions (and hence relaxes degeneracy), in a way that is effective for the convolutional layer in SCU: it enlarges the convolutional kernel from 1 × 1 for the re-allocated channels only. To summarize, CD(X)_i takes (a) the channel to which π_i is pointing, in the spatially shifted form with bias b_i, or (b) 0 if the gate g_i is closed. Formally, CD can be represented by DISPLAYFORM1.
CD(X) has the same size as X, and is defined as follows: DISPLAYFORM2 Here, shift(X, b_h, b_w) denotes the "shifting" operation along the spatial dimensions of X. For each pixel location (i, j) in X, we define shift(X, b_h, b_w)_{i,j} as: DISPLAYFORM3 using a bilinear interpolation kernel. This formulation allows b_h and b_w to be continuous real values, and thereby to be learned via gradient-based methods jointly with the other parameters. Noise Controller (NC) is a component for more effective training of SCU. As SCU continuously performs channel pruning via dealloc during training, the efficiency of SCU depends on which regularization is used. The key role of NC is to induce the training of SCU toward more channel-wise sparsity, so that more channels can be de-allocated safely. Formally, NC is a channel-wise re-scaling layer: NC(X) := X ⊙ θ (here ⊙ denotes the element-wise product), where θ = (θ_i)_{i=1}^{I} are parameters to be learned. For the channel-wise sparsity, we impose sparsity-inducing regularization specifically on θ. Although any sparsity-inducing regularization can be used for θ BID28 BID48, in this paper we adopt the Bayesian pruning approach proposed by BID35 (for completeness, we present an overview of BID35 in Appendix B) for two reasons: (a) it is easy to incorporate into the training process, and (b) we found that the noise incurred from the Bayesian parameters helps to recover damage from channel pruning. In general, a Bayesian scheme regards each parameter θ as a random variable with prior p(θ). Updating the posterior p(θ|D) from data D often leads the model to have much sparsity, if p(θ) is set to induce sparsity, e.g., by a log-uniform prior BID23. Meanwhile, p(θ|D) is usually approximated with a simpler model q_φ(θ), where φ are parameters to be learned. In the case of NC, we regard each scaling parameter as a random variable, so that they become channel-wise multiplicative noises on the input. We follow BID35 for the design choices on q_φ(θ) and p(θ), using a fully-factorized log-normal DISPLAYFORM4. Consider an input random variable X = (X_i ∈ R^{H×W})_{i=1}^{I}, and the expected change in output E[SCU(X; S) − SCU(X; S_{−i})] ∈ R^{I′×H×W}, where S_{−i} denotes a SCU identical to S but with g_i = 0. In other words, it is the expected amount of change in the output when S_i is "damaged" or "pruned". The primary goal of this criterion is to make dealloc function-preserving. We define ECDS(S)_i as the (Euclidean) norm of the averaged values of E[SCU(X; S) − SCU(X; S_{−i})] over the spatial dimensions: DISPLAYFORM2 Notice that the above definition requires a marginalization over the random variable X. One can estimate it via Monte Carlo sampling using training data, but this is computationally too expensive compared to other popular magnitude-based metrics BID28 BID35. Instead, we utilize the BN layer inside SCU to infer the current input distribution of each channel at any time of training. This trick enables approximating ECDS(S)_i by a closed formula of S_i, avoiding expensive computations of SCU(X; ·), as in what follows. Consider a hidden neuron x followed by BN and ReLU, i.e., y = ReLU(BN(x)), and suppose one wants to estimate E[y] without sampling. To this end, we exploit the fact that BN already "accumulates" its input statistics continuously during training. Assuming that BN(x) ∼ N(β, γ²), where γ and β are the scaling and shifting parameters in BN, respectively, it is elementary to check: DISPLAYFORM3 where φ_N and Φ_N denote the p.d.f. and the c.d.f. of the standard normal distribution, respectively.
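A minimal mock-up of CD and NC is given below; the sign convention of the shift and the exact bilinear kernel are assumptions (the formal definitions are abbreviated above), but it shows how channels are re-indexed, gated, spatially shifted and re-scaled.

```python
import numpy as np

def shift(x, bh, bw):
    """Bilinearly interpolated spatial shift of one channel x (H, W) by a
    real-valued offset (bh, bw); samples falling outside the map are zero."""
    H, W = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            yi, xj = i + bh, j + bw
            y0, x0 = int(np.floor(yi)), int(np.floor(xj))
            for yy in (y0, y0 + 1):
                for xx in (x0, x0 + 1):
                    if 0 <= yy < H and 0 <= xx < W:
                        w = (1 - abs(yi - yy)) * (1 - abs(xj - xx))
                        out[i, j] += w * x[yy, xx]
    return out

def CD(X, pi, g, b):
    """Channel Distributor: output channel i is a gated, spatially shifted
    copy of input channel pi[i]."""
    return np.stack([g[i] * shift(X[pi[i]], *b[i]) for i in range(len(pi))])

def NC(X, theta):
    """Noise Controller: channel-wise rescaling; theta is sampled from a
    log-normal during training and replaced by its mean at test time."""
    return X * theta[:, None, None]

X = np.random.rand(4, 8, 8)                    # 4 input channels
pi, g = [0, 2, 2, 3], [1.0, 1.0, 1.0, 0.0]     # channel 1 de-allocated, 2 copied
b = [(0.0, 0.0), (0.0, 0.0), (0.5, -0.3), (0.0, 0.0)]
print(CD(X, pi, g, b).shape, NC(X, np.ones(4)).shape)
```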
The assumption is quite reasonable during the training of BN, as each mini-batch is exactly normalized before applying the scaling and shifting inside BN. The idea is directly extended to obtain a closed-form formula for ECDS(S)_i under some assumptions, as stated in the following proposition: DISPLAYFORM4, for all i. The proof of the above proposition is given in Appendix D. In essence, there are three main terms in the formula: (a) a term that measures how much the input channel is active, (b) how much the NC amplifies the input, and (c) the total magnitude of weights in the convolutional layer. Therefore, it allows a way to capture not only low-magnitude channels but also channels of low contribution under the input distribution (see Section 3.2 for comparisons with other metrics). Consider a CNN model p(Y|X, Θ) employing SCU, where Θ denotes the collection of model parameters. For easier explanation, we rewrite Θ as (V, W): V consists of (π, g) in CDs, and W is the remaining ones. Given a dataset DISPLAYFORM0, (V, W) is trained by alternating two phases: (a) training W via stochastic gradient descent (SGD), and (b) updating V via dealloc or realloc. The overall training process is mainly driven by (a), and the usage of (b) is optional. In (a), we use stochastic variational inference BID22 in order to incorporate the stochasticity incurred from NC, so that SCU can learn its Bayesian parameters in NC jointly with the others via SGD. On the other hand, in (b), dealloc and realloc are called on demand during training depending on the purpose. For example, one may decide to call dealloc only throughout the training to obtain a highly compressed model, or one could use realloc as well to utilize more model parameters. Once (b) is called, (a) is temporarily paused and V is updated. Training via stochastic variational inference. We can safely ignore the effect of V during training of W, since they remain fixed. Recall that each noise θ from an NC is assumed to follow q_φ(θ) = LogN(θ|μ, σ²). Then, θ can be "re-parametrized" with a noise ε from the standard normal distribution as follows: θ = exp(μ + σ · ε), with ε ∼ N(0, 1). Stochastic variational inference BID22 allows a minibatch-based stochastic gradient method for θ, in the case that θ can be re-parametrized with a non-parametric noise. The final loss we minimize for a minibatch {(x_b, y_b)}_{b=1}^{B} becomes (see Appendix F for more details): DISPLAYFORM3 where ε is a sampled vector from the fully-factorized standard normal distribution. Channel de-allocation and re-allocation. DISPLAYFORM5 The main role of dealloc and realloc is to update the W_CD in S that are not trained directly via SGD. They are performed as follows: select slices to operate on by thresholding ECDS(S), and update S from the selected channels. More formally, when dealloc is called, the S_i's where ECDS(S)_i < T_l for a fixed threshold T_l are selected, and the corresponding g_i's in W_CD are set to 0. If one chooses a small T_l, this operation does not hurt the original function. On the other hand, realloc selects channels by collecting the S_i where ECDS(S)_i > T_h, for another threshold T_h. Each of the selected channels can be re-allocated only if there is a closed channel in S. If there is not enough space, channels with higher ECDS have priority to be selected. A single re-allocation of a channel S_i to a closed channel S_j consists of several steps: DISPLAYFORM6 ← 0, (iv) re-initialize the shifting bias b_j, and (v) set π_j ← π_i.
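Before moving on, the closed-form expectation underlying the ECDS approximation above, together with the log-normal reparametrization of the NC noise, can be checked numerically as below; the expected-ReLU identity is the standard formula for a Gaussian pre-activation (assuming γ > 0) and is our own statement of it, not quoted from the source.

```python
import math
import numpy as np

def expected_relu(beta, gamma):
    """Closed form for E[ReLU(z)] with z ~ N(beta, gamma^2), gamma > 0:
    beta * Phi(beta/gamma) + gamma * phi(beta/gamma), reading the per-channel
    statistics off the BN shift/scale parameters."""
    t = beta / gamma
    phi = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return beta * Phi + gamma * phi

def sample_nc_noise(mu, sigma, rng):
    """Log-normal NC noise via the reparametrization theta = exp(mu + sigma*eps);
    at test time theta is replaced by its mean exp(mu + sigma^2 / 2)."""
    eps = rng.standard_normal(np.shape(mu))
    return np.exp(np.asarray(mu) + np.asarray(sigma) * eps)

# Monte-Carlo check of the expected-ReLU identity
rng = np.random.default_rng(0)
z = rng.normal(1.0, 2.0, size=1_000_000)
print(np.maximum(z, 0.0).mean(), expected_relu(1.0, 2.0))
```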
This procedure is function-preserving, due to (iii).After training a SCU S, one can safely remove S i's that are closed, to yield a compact unit. Then, CDs are now operated by "selecting" channels rather than by obstructing, thereby the subsequent layers play with smaller dimensions. Hence, at the end, SCU is trained to select only a subset of the input for performing the bottleneck operation. For NC, on the other hand, one can still use it for inference, but efficient inference can be performed by replacing each noise θ i by constant E[θ i], following the well-known approximation used in many dropout-like techniques BID15. In our experiments, we apply SCU to several well-known CNN architectures that uses bottlenecks, and perform experiments on CIFAR-10/100 BID24 and ImageNet BID37 classification datasets. The more details on our experimental setups, e.g., datasets, training details, and configurations of SCU, are given in Appendix G. Improving existing CNNs with SCU. We consider models using ResNet BID11, DenseNet BID18 and ResNeXt BID49 architectures. In general, every model we used in this paper forms a stack of multiple bottlenecks, where the definition of each bottleneck differs depending on its architecture except that it can be commonly expressed by R-F -W (the details are given in TAB6 in the appendix). We compare the existing models with the corresponding new ones in which the bottlenecks are replaced by SCU. For each SCU-based model, we consider three cases: (a) neither dealloc nor realloc is used during training, (b) only dealloc is used, and (c) both dealloc and realloc are used. We measure the total number of parameters in bottlenecks, and error rates. TAB1 compares the existing CNN models with the corresponding ones using SCU, on CIFAR-10/100. The consistently demonstrate that SCU improves the original models, showing their effectiveness in different ways. When only dealloc is used, the model tends to be trained with minimizing their parameter to use. Using realloc, SCU now can utilize the de-allocated parameters to improve their accuracy aggressively. Note that SCU can improve the accuracy of the original model even neither dealloc nor realloc is used. This gain is from the regularization effect of stochastic NC, acting a dropout-like layer. We also emphasize that one can set a targeted trade-off between compression of SCU and accuracy improvement depending on her purpose, simply by adjusting the calling policy of dealloc and realloc. For example, in case of DenseNet-100 model on CIFAR-10, one can easily trade-off between reductions in (compression, error) = (−51.4%, −1.78%) and (−18.2%, −8.24%). In overall, SCU-based models achieve both model compression and accuracy improvement under all tested architectures. TAB2 shows the on ImageNet, which are consistent to those on CIFAR-10/100. Notice that reducing parameters and error simultaneously is much more non-trivial in the case of ImageNet, e.g., reducing error 23.6% → 23.0% requires to add 51 more layers to ResNet-101 (i.e., ResNet-152), as reported in the official repository of ResNet BID13.Designing efficient CNNs with SCU. We also demonstrate that SCU can be used to design a totally new efficient architecture. Recall that, in this paper, SCU focus on the bottlenecks inside the overall structure. The other parts, F or W, are other orthogonal design choices. 
To improve the efficiency of the parts, we adopt some components from CondenseNet BID17, which is one of the state-of-the-art architectures in terms of computational efficiency, designed for mobile devices. Although we do not adopt their main component, i.e., learned group convolution (LGC) as it also targets for the bottleneck as like SCU, we can still utilize other components of CondenseNet: increasing growth rate (IGR) (doubles the growth rate of DenseNet for every N blocks starting from 8) and the use of group convolution for F. Namely, we construct a new model, coined CondenseNet-SCU by adopting IGR and GC upon a DenseNet-182 model with SCU. We replace each 3×3 convolution for F by a group convolution of 4 groups. We train this model using dealloc only to maximize the computational efficiency. In TAB3, we compare our model with state-of-the-art level CNNs, including ResNet-1001 BID12, WRN-28-10 , NASNet-C BID54, and the original CondenseNet-182. As one can observe, our model shows better efficiency compared to the corresponding CondenseNet, suggesting the effectiveness of SCU overLGC. Somewhat interestingly, ours even outperforms NASNet-C that is an architecture searched over thousands of candidates, in both model compression and accuracy improvement. We finally remark that CondenseNet-SCU-182 model presented in TAB3 originally has 6.29M parameters in total before training, devoting 5.89M for bottlenecks, i.e., it is about 93.7% of the total number of parameters. This is indeed an example in that reducing overhead from bottlenecks is important for better efficiency, which is addressed by SCU. We also perform numerous ablation studies on the proposed SCU, investigating the effect of the key components: CD, NC, and ECDS. For evaluation, we use the DenseNet-SCU-40 model (DenseNet-40 using SCU) trained for CIFAR-10. We also follow the training details described in Appendix G.Spatial shifting and re-allocation. We propose spatial shifting as a trick in realloc procedure to provide diversity in input distributions. To evaluate its effect, we compare three DenseNet-SCU-40 models with different configurations of SCU: (D) only dealloc during training, (+R) realloc together but without spatial shifting, and (+R+S) further with the shifting. FIG1 shows that +R does not improve the model performance much compared to D, despite +R+S outperforms both of them. This suggests that copying a channel naively is not enough to fully utilize the rewired parameters, and spatial shifting is an effective way to overcome the issue. DISPLAYFORM0 Sparsity-inducing effect of NC. We place NC in SCU to encourage more sparse channels. To verify such an effect, we consider DenseNet-SCU-40 model (say M1) and its variant removing NC from SCU (say M2). We first train M1 and M2 calling neither dealloc nor realloc, and compare them how the ECDS of each channel is distributed. FIG1 shows that M1 tends to have ECDS closer to zero, i.e., more channels will be de-allocated than M2. Next, we train these models using dealloc, to confirm that NC indeed leads to more deallocation. The left panel of FIG1 shows that the number of de-allocated channels of M1 is relatively larger than that of M2, which is the desired effect of NC. Note that M1 also outperforms M2 on error rates, which is an additional advantage of NC from its stochastic regularization effect. Nevertheless, remark that M2 in FIG1 already de-allocates many channels, which suggests that SBP (used in NC) is not crucial for efficient de-allocation. 
Rather, the efficiency mainly comes from ECDS. To prove this claim, we evaluate three variants of M1 which use different de-allocation policies than ECDS < T l: (a) SNR < 1 (thresholding the signal-to-noise ratio of NC in each channel by 1, proposed by the original SBP; M3), (b) SNR < 2.3 (M4) and (c) 2 < 0.25 (thresholding W Conv i 2 ; M5). We train them using only dealloc, and compare the performances with the proposed model (M1). The right panel of FIG1 shows the of the three variants. First, we found that the M3 could not de-allocate any channel in our setting (this is because we prune a network on-the-fly during training, while the original SBP only did it after training). When we de-allocate competitive numbers of channels against M1 by tuning thresholds of others (M4 and M5), the error rates are much worse than that of M1. These observations confirm that ECDS is a more effective de-allocation policy than other magnitude-based metrics. We demonstrate that CNNs of large-scale features can be trained effectively via channel-selectivity, primarily focusing on bottleneck architectures. The proposed ideas on channel-selectivity, however, would be applicable other than the bottlenecks, which we believe is an interesting future research direction. We also expect that channel-selectivity has a potential to be used for other tasks as well, e.g., interpretability BID42, robustness BID6, and memorization BID51. Consider a probabilistic model p(Y|X, θ) between two random variables X and Y, and suppose one wants to infer θ from a dataset D = {(x n, y n)} N n=1 consisting N i.i.d. samples from the distribution of (X, Y). In Bayesian inference, θ is regarded as a random variable, under assuming some prior knowledge in terms of a prior distribution p(θ). The dataset D is then used to update the posterior belief on θ, namely p(θ|D) = p(D|θ)p(θ)/p(D) from the Bayes rule. In many cases, however, computing p(θ|D) through Bayes rule is intractable since it requires to compute intractable integrals. To address the issue, variational inference approximates p(θ|D) by another parametric distribution q φ (θ), and tries to minimize the KL-divergence D KL (q φ (θ) p(θ|D)) between q φ (θ) and p(θ|D). Instead of directly minimizing it, one typically maximizes the variational lower bound L(φ), due to the following: DISPLAYFORM0 where DISPLAYFORM1 In case of complex models, however, expectations in are still intractable. BID22 proposed an unbiased minibatch-based Monte Carlo estimator for them, which can be used when q φ (θ) is representable by θ = f (φ, ε) with a non-parametric noise ε ∼ p(ε). For a minibatch DISPLAYFORM2 Now we can solve optimize L(φ) by stochastic gradient ascent methods, if f is differentiable. For a model having non-Bayesian parameters, say W, we can still apply the above approach by maximizing DISPLAYFORM3 where φ and W can be jointly optimized under DISPLAYFORM4 Structured Bayesian pruning (SBP) BID35 ) is a good example to show how stochastic variational inference can be incorporated into deep neural networks. The SBP framework assumes X to be an object of I features, that is, X = (X i) DISPLAYFORM0. For example, X ∈ R I×H×W can be a convolutional input consisting I channels, of the form X = (DISPLAYFORM1 where W and H denote the width and the height of each channel, respectively. 
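To make the minibatch estimator concrete, the toy sketch below applies it to a single log-normal multiplicative noise θ ∼ LogN(µ, σ²) on a linear regressor with a non-Bayesian weight vector W. The data, sizes, learning rate and the simple −log σ stand-in for the KL term (an approximation also adopted later in the text) are illustrative assumptions, not the paper's implementation.

```python
# Toy stochastic variational inference: one Bayesian multiplicative noise
# theta ~ q_phi = LogN(mu, sigma^2), re-parametrized as theta = exp(mu + sigma * eps)
# with eps ~ N(0, 1), trained jointly with a non-Bayesian weight W by SGD.
import torch

N = 1000                                            # dataset size
X = torch.randn(N, 5)
Y = X @ torch.ones(5, 1) + 0.1 * torch.randn(N, 1)

W = torch.zeros(5, 1, requires_grad=True)           # non-Bayesian parameters
mu = torch.zeros(1, requires_grad=True)             # variational parameters phi = (mu, sigma)
log_sigma = torch.full((1,), -3.0, requires_grad=True)
opt = torch.optim.SGD([W, mu, log_sigma], lr=0.05)

for step in range(200):
    idx = torch.randint(0, N, (64,))                # minibatch of size M = 64
    eps = torch.randn(1)                            # non-parametric noise
    theta = torch.exp(mu + torch.exp(log_sigma) * eps)   # re-parametrized sample of theta
    pred = theta * (X[idx] @ W)
    data_term = ((pred - Y[idx]) ** 2).mean()       # per-example loss; bound divided by N for convenience
    kl_term = -log_sigma.sum() / N                  # illustrative stand-in for D_KL(q_phi || p) / N
    loss = data_term + kl_term                      # minimize the (rescaled) negative lower bound
    opt.zero_grad(); loss.backward(); opt.step()
```

Structured Bayesian pruning specializes this generic recipe with particular choices of q φ (θ) and p(θ), as described next.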
It considers a dropout-like layer with a noise vector θ = (θ i) I i=1 ∼ p noise (θ), which outputs X θ of the same size as X.4 Here, θ is treated as a random vector, and the posterior p(θ|D) is approximated by a fully-factorized truncated log-normal distribution q φ (θ): DISPLAYFORM2 DISPLAYFORM3 where 1 [a,b] denotes the indicator function for the inveral [a, b]. Meanwhile, the prior p(θ) is often chosen by a fully-factorized log-uniform distribution, e.g., Sparse Variational Dropout, and SBP use the truncated version: DISPLAYFORM4 The reason why they use truncations for q φ (θ) and p(θ) is to prevent D KL (q φ (θ) p(θ)) to be improper. Previous works BID23 ignore this issue by implicitly regarding them as truncated distributions on a broad interval, but SBP treats this issue explicitly. Note that, each θ i ∼ q φ (θ i) = LogN(θ i |µ i, σ 2 i) in the noise vector θ can be re-parametrized with a non-parametric uniform noise ε i ∼ U(ε|0, 1) by: DISPLAYFORM5 where DISPLAYFORM6 σi, and Φ denotes the cumulative distribution function of the standard normal distribution. Now one can optimize φ = (µ, σ) jointly with the weights W of a given neural network via stochastic variational inference described in Section A. Unlike, SBP regards W as a non-Bayesian parameter, and the final loss L SBP to optimize becomes DISPLAYFORM7 Here, the KL-divergence term is scaled by α to compensate the trade-off between sparsity and accuracy. In practice, SBP starts from a pre-trained model, and re-trains it using the above loss. Due to the sparsity-inducing behavior of log-uniform prior, θ is forced to become more noisy troughout the re-training. Neurons with θ of signal-to-noise ratio (SNR) below 1 are selected, and removed after the re-training: DISPLAYFORM8 C BAYESIAN PRUNING AND IDENTITY CONNECTIONS SCU requires "training-time removal" of input channels for the channel de-allocation and reallocation to work. But usually, this process should be done carefully since it can make the optimization much difficult and put the network into a bad local minima. In particular, it occurs if we select channels to remove too aggressively. It is known that this issue becomes more pronounced in Bayesian neural networks BID44 BID35 BID30, such as SBP we use in this paper. Recall the variational lower bound objective in, for Bayesian parameters φ and non-Bayesian W. If the gradient of the first term DISPLAYFORM9, that is, to follow the prior p(θ). Unfortunately, in practice, we usually observe this phenomena at the early stage of training, when W are randomly initialized. In that case then, q φ (θ) will become p(θ) too fast because of the "uncertain" W, thereby many channels will be pruned forever, in SBP for example. This problem is usually dealt with in one of two ways: (a) using a pre-trained network as a starting point of W BID35, and (b) a "warm-up" strategy, where the KL-divergence term is rescaled by β that increases linearly from 0 to 1 during training BID44 BID30. In this paper, however, neither methods are used, but instead we have found that the problem can be much eased with identity connections, as it can eliminate a possible cause of the optimization difficulty from removing channels: optimization difficulty from losing information as an input passes through a deep network. The presence of identity connection implies that the information of an input will be fully preserved even in the case when all the parameters in a layer are pruned. 
This may not be true in models without identity, for example, in VGGNet BID43, one can see that the information of an input will be completely lost if any of the layers removes its entire channels. This suggests us that identity connections can be advantageous not only for scaling up the network architectures, but also for reducing the size of them. ). Then, we have: DISPLAYFORM10 Now, check that ECDS(S i) becomes: DISPLAYFORM11 By the assumption that DISPLAYFORM12 ) for all h, w, we get: DISPLAYFORM13 DISPLAYFORM14 where φ N and Φ N denote the probability distribution function and the cumulative distribution function of the standard normal distribution. Therefore, the desired formula for ECDS(S i) can be obtained by using the linearity of expectation: DISPLAYFORM15 To validate whether the assumption BN(CD(X; W CD); W BN ) i,h,w ∼ N (β i, γ 2 i) holds in modern CNNs, we first observe that, once we ignore the effects from spatial shifting, 5 a necessary condition of the assumption is that (X i,:,:) are identically distributed normal for a given channel X i. This is because BN and CD do not change the "shape" of pixel-wise distributions of X i. From this observation, we conduct a set of experiments focusing on a randomly chosen hidden layer in a DenseNet-40 model. We analyze the empirical distribution of the hidden activation incoming to the layer calculated from CIFAR-10 test dataset. Since the data consists of 10,000 samples, we get an hidden activation X test ∈ R 10000×C×32×32, 6 where C denotes the number of channels of the input. These observations suggest us that the assumption of Proposition 1 can be reasonable except for the boundaries. We also emphasize that these trends we found are also appeared even when the model is not trained at all FIG4, that is, all the weights are randomly initialized, which implies that these properties are not "learned", but come from a structural property of CNN, e.g. equivariance on translation, or the central limit theorem. This observation provides us another support why the ECDS formula stated in Proposition 1 is valid at any time during training. From Θ = (V, W): V consists of (π, g) in CDs, we further rewrite W by (W NC, W C): W NC the parameters in NCs, and W C is the remaining ones. One can safely ignore the effect of V during training of (W NC, W C), since they remain fixed. Recall that each noise θ from a NC is assumed to follow LogN(θ|µ, σ 2). They can be re-written with a noise ε from the standard normal distribution, i.e., θ = f ((µ, σ), ε) = exp (µ + σ · ε), where ε ∼ N (0, 1 2). In such case that each noise θ from NC can be "re-parametrized" with an non-parametric noise and the corresponding parameters φ = (µ, σ), we can then use stochastic variational inference BID22 for the optimization of (W NC, W C) with a minibatch-based stochastic gradient method (see Appendix A for more details). Then, the final loss we minimize for a minibatch {( DISPLAYFORM0 where DISPLAYFORM1 is a sampled vector from the fully-factorized standard normal distribution, and D KL (· ·) denotes the KL-divergence. Although not shown in, an extra regularization term R(W C) can be added to the loss for the non-Bayesian parameters W C, e.g., weight decays. In fact, in our case, i.e. q φ (θ) = LogN(θ|µ, σ 2) and DISPLAYFORM2 As we explain in Appendix B, SBP bypasses this issue by using truncated distributions on a compact interval [a, b] for q φ (θ) and p(θ). 
We found that, however, this treatment also imposes extra computational overheads on several parts of training process, such as on sampling noises and computing D KL (q φ (θ) p(θ)). These overheads are non-negligible on large models like ResNet or DenseNet, which we are mainly focusing on. Therefore, unlike SBP, here we do not take truncations on q φ (θ) and p(θ) due to practical consideration, assuming an approximated form between the truncated distributions of q φ (θ) and p(θ) on a large interval. Then we can replace each D KL (q φ (θ) p(θ)) in by − log σ for optimization. In other words, each noise θ in NC is regularized to a larger variance, i.e., the more "noisy". We observed that this approximation does not harm much on the performance of SCU. Nevertheless, one should be careful that q φ (θ) and p(θ) should not be assumed as the un-truncated forms itself, but instead as approximated forms of truncated distributions on a large interval, not to make the problem ill-posed. As used in SBP, if they are truncated, the KL-divergence becomes: DISPLAYFORM3 Datasets. We perform our experiments extensively on CIFAR-10 and CIFAR-100 BID24 ) classification datasets. CIFAR-10/100 contains 60,000 RGB images of size 32 × 32 pixels, 50,000 for training and 10,000 for test. Each image in the two datasets is corresponded to one of 10 and 100 classes, respectively, and the number of data is set evenly for each class. We use a common scheme for data-augmentation BID45 BID27 BID11 BID18. ImageNet classification dataset, on the other hand, consists of 1.2 million training images and 50,000 validation images, which are labeled with 1,000 classes. We follow BID17 BID11 for preprocessing of data in training and inference time. Training details. All models in our experiments is trained by stochastic gradient descent (SGD) method, with Nesterov momentum of weight 0.9 without dampening. We use a cosine shape learning rate schedule BID29, i.e., decreasing the learning rate gradually from 0.1 to 0 throughout the training. We set the weight decay 10 −4 by for non-Bayesian parameters of each model. We train each CIFAR model for 300 epochs with mini-batch size 64 following BID18, except for the "DenseNet-BC-190+mixup" models as they are trained for 200 epochs following the original setting. For ImageNet models, on the other hand, we train for 120 epochs with mini-batch size 256. is employed in a model, we initialize W NC = (µ, σ) by (0, e −3), and DISPLAYFORM0. Initializations of W BN and W Conv may differ depending on models, and we follow the initialization scheme of the given model. In our experiments, we follow a pre-defined calling policy when dealloc and realloc will be called throughout training. If dealloc is used, it is called at the end of each epoch of training. On the other hand, if realloc is used, it start to be called after 10% of the training is done, called for every 3 epochs, and stopped in 50% of training is done. The thresholds for dealloc and realloc, i.e. T l and T h, is set by 0.0025 and 0.05, respectively, except for CondenseNet-SCU-182 TAB3, in which T l is adjusted by 0.001 for an effective comparison with the baseline. For all the CIFAR-10/100 models, we re-initialize b i by a random sample from [−1.5, 1.5] × [−1.5, 1.5] pixels uniformly whenever a channel slice S i is re-open via realloc process. We set the weight decay on each b i to 10 −5 separately from the other parameters. For the ImageNet TAB2, however, we did not jointly train b for faster training. 
Instead, each b i is set fixed unless it is re-initialized via realloc. In this case, we sampled a point from [−2.5, 2.5] × [−2.5, 2.5] pixels uniformly for the re-initialization. We found that this simple reallocation scheme can also improve the efficiency of SCU. In general, every model we used here forms a stack of multiple bottlenecks, where the definition of each bottleneck differs depending on its architecture (see TAB6). Each stack is separated into three (CIFAR-10/100) or four (ImageNet) stages by average pooling layers of kernel 2 × 2 to perform down-sampling. Each of the stages consists N bottleneck blocks, and we report which N is used for all the tested models in TAB7. The whole stack of each model follows a global average pooling layer BID27 ) and a fully connected layer, and followed by single convolutional layer (See TAB8). There exist some minor differences between the ing models and the original papers BID11 BID18 BID49. In ResNet and ResNeXt models, we place an explicit 2 × 2 average pooling layer for down-sampling, instead of using convolutional layer of stride 2. Also, we use a simple zero-padding scheme for doubling the number of channels between stages. In case of DenseNet, on the other hand, our DenseNet models are different from DenseNet-BC proposed by BID18, in a sense that we do not place a 1 × 1 convolutional layer between stages (which is referred as the "compression" layer in the original DenseNet). Nevertheless, we observed that the models we used are trained as well as the originals. | We propose a new module that improves any ResNet-like architectures by enforcing "channel selective" behavior to convolutional layers | 1,091 | scitldr |
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing in this space. Finally, we propose an algorithmic framework inspired by our bound and verify is effectiveness in causal effect estimation. A long-term goal in artificial intelligence is for agents to learn how to act. This endeavor relies on accurately predicting and optimizing for the outcomes of actions, and fundamentally involves estimating counterfactuals-what would have happened if the agent acted differently? In many applications, such as the treatment of patients in hospitals, experimentation is infeasible or impractical, and we are forced to learn from biased, observational data. Doing so requires adjusting for the distributional shift between groups of patients that received different treatments. A related kind of distributional shift arises in unsupervised domain adaptation, the goal of which is to learn predictive models for a target domain, observing ground truth only in a source domain. In this work, we pose both domain adaptation and treatment effect estimation as special cases of prediction across shifting designs, referring to changes in both action policy and feature domain. We separate policy from domain as we wish to make causal statements about the policy, but not about the domain. Learning from observational data to predict the counterfactual outcome under treatment B for a patient who received treatment A, one must adjust for the fact that treatment A was systematically given to patients of different characteristics from those who received treatment B. We call this predicting under a shift in policy. Furthermore, if all of our observational data comes from hospital P, but we wish to predict counterfactuals for patients in hospital Q, with a population that differs from P, an additional source of distributional shift is at play. We call this a shift in domain. Together, we refer to the combination of domain and policy as the design. The design for which we observe ground truth is called the source, and the design of interest the target. The two most common approaches for addressing distributional shift are to learn shift-invariant representations of the data BID0 or to perform sample re-weighting or matching (; BID13 . 
Representation learning approaches attempt to extract only information from the input that is invariant to a change in design and predictive of the variable of interest. Such representations are typically learned by fitting deep neural networks in which activations of deeper layers are regularized to be distributionally similar across designs BID0 BID15 . Although representation learning can be shown to reduce the error associated to distributional shift BID15 in some cases, standard approaches are biased, even in the limit of infinite data, as they penalize the use also of predictive information. In contrast, re-weighting methods correct for distributional shift by assigning higher weight to samples from the source design that are representative of the target design, often using importance sampling. This idea has been well studied in, for example, the causal inference BID20, domain adaptation and reinforcement learning BID19 literature. For example, in causal effect estimation, importance sampling is equivalent to re-weighting units by the inverse probability of observed treatments (treatment propensity). Re-weighting with knowledge of importance sampling weights often leads to asymptotically unbiased estimators of the target outcome, but may suffer from high variance in finite samples .A significant hurdle in applying re-weighting methods is that optimal weights are rarely known in practice. There are a variety of methods to learn these weights. Weights can be estimated as the inverse of estimated feature or treatment densities BID20 BID7 but this plug-in approach can lead to highly unstable estimates. More stable methods learn weights by minimizing distributional distance metrics BID8 BID13 BID4 ). Closely related, matching produces weights by finding units in the source design that are close in some metric to units in the target design. Specifying a distributional or unit-wise metric is challenging, especially if the input space is high-dimensional where no metric incorporating all features can ever be made small. This has inspired heuristics such as first performing variable selection and then finding a matching in the selected covariates. Our key algorithmic contribution is to show how to combine the intuition behind shift-invariant representation learning and re-weighting methods by jointly learning a representation Φ of the input space and a weighting function w(Φ) to minimize a) the re-weighted empirical risk and b) a re-weighted measure of distributional shift between designs. This is useful also for the identity representation Φ(x) = x, as it allows for principled control of the variance of estimators through regularization of the re-weighting function w(x), mitigating the issues of exact importance sampling methods. Further, this allows us to evaluate w on hold-out samples to select hyperparameters or do early stopping. Finally, letting w depend on Φ alleviates the problem of choosing a metric by which to optimize sample weights, as Φ is trained to extract information predictive of the outcome. We capture these ideas in an upper bound on the generalization error under a shift in design and specialize it to the case of treatment effect estimation. We bring together two techniques used to overcome distributional shift between designs-re-weighting and representation learning, with complementary robustness properties, generalizing existing methods based on either technique. 
We give finite-sample generalization bounds for prediction under design shift, without assuming access to importance sampling weights or to a well-specified model, and develop an algorithmic framework to minimize these bounds. We propose a neural network architecture that jointly learns a representation of the input and a weighting function to improve balance across changing settings. Finally, we apply our proposed algorithm to the task of predicting causal effects from observational data, achieving state-of-the art on a widely used benchmark. The goal of this work is to accurately predict outcomes of interventions T ∈ T in contexts X ∈ X drawn from a target design p π (X, T). The outcome of intervening with t ∈ T is the potential outcome Y (t) ∈ Y (, Ch. 1-2), which has a stationary distribution p t (Y | X) given context X. Assuming a stationary outcome is akin to the covariate shift assumption , often used in domain adaptation.1 For example, in the classical binary setting, Y represents the outcome under treatment and Y the outcome under control. The target design consists of two components: the target policy p π (T | X), which describes how one intends to map observations of contexts (such as patient prognostics) to interventions (such as pharmacological treatments) and the target domain p π (X), which describes the population of contexts to which the policy will be applied. The target design is known to us only through m unlabeled sam-ples (x 1, t 1),..., (x m, t m) from p π (X, T). Outcomes are only available to us in labeled samples from a source domain: (x 1, t 1, y 1),..., (x n, t n, y n), where (x i, t i) are draws from a source design p µ (X, T) and y i = y i (t i) is a draw from p T (Y | X), corresponding only to the factual outcome Y (T) of the treatment administered. Like the target design, the source design consists of a domain of contexts for which we have data and a policy, which describes the (unknown) historical administration of treatment in the data. Only the factual outcomes of the treatments administered are observed, while the counterfactual outcomes y i (t) for t = t i are, naturally, unobserved. Our focus is the observational or off-policy setting, in which interventions in the source design are performed non-randomly as a function of X, p µ (T | X) = p µ (T). This encapsulates both the covariate shift often observed between treated and control populations in observational studies and the covariate shift between the domain of the study and the domain of an eventual wider intervention. Examples of this problem are plentiful: in addition to the example given in the introduction, consider predicting the return of an advertising policy based on the historical of a different policy, applied to a different population of customers. We stress that we are interested in the causal effect of an intervention T on Y, conditioned on X. As such, we cannot think of X and T as a single variable. Without additional assumptions, it is impossible to deduce the effect of an intervention based on observational data alone BID18, as it amounts disentangling correlation and causation. Crucially, for any unit i, we can observe the potential outcome y i (t) of at most one intervention t. In our analysis, we make the following standard assumptions. Assumption 1 (Consistency, ignorability and overlap). For any unit i, assigned to intervention t i, we observe Y i = Y (t i). 
Further, {Y (t)} t∈T and the data-generating process p µ (X, T, Y) satisfy strong ignorability: {Y (t)} t∈T ⊥ ⊥ T | X and overlap: DISPLAYFORM0 Assumption 1 is a sufficient condition for causal identifiability BID20. Ignorability is also known as the no hidden confounders assumption, indicating that all variables that cause both T and Y are assumed to be measured. Under ignorability therefore, any domain shift in p(X) cannot be due to variables that causally influence T and Y, other than through X. Under Assumption 1, potential outcomes equal conditional expectations: DISPLAYFORM1, and we may predict Y (t) by regression. We further assume common domain support, ∀x ∈ X: p π (X = x) > 0 ⇒ p µ (X = x) > 0. Finally, we adopt the notation p(x):= p(X = x). We attempt to learn predictors f: DISPLAYFORM0 Recall that under Assumption 1, this conditional expectation is equal to the (possibly counterfactual) potential outcome Y (t), conditioned on X. Our goal is to ensure that hypotheses f are accurate under a design p π that deviates from the data-generating process, p µ. This is unlike standard supervised learning for which p π = p µ. We measure the (in)ability of f to predict outcomes under π, using the expected risk, DISPLAYFORM1 is an appropriate loss function, such as the squared loss, L(y, y):= (y − y) 2 or the log-loss, depending on application. As outcomes under the target design p π are not observed, even through a finite sample, we cannot directly estimate using the empirical risk under p π. A common way to resolve this is to use importance sampling -the observation that if p µ and p π have common support, with w DISPLAYFORM2 Hence, with access to w *, an unbiased estimator of R π (f) may be obtained by re-weighting the (factual) empirical risk under µ, DISPLAYFORM3 Unfortunately, importance sampling weights can be very large when p π is large and p µ small, ing in large variance inR & ). More importantly, p µ (x, t) is rarely known in practice, and neither is w *. In principle, however, any re-weighting function w with the following property yields a valid risk under the re-weighted distribution p w µ. DISPLAYFORM4 DISPLAYFORM5 We denote the re-weighted density p w µ (x, t):= w(x, t)p µ (x, t).A natural candidate in place of w * is an estimateŵ * based on estimating densities p π (x, t) and p µ (x, t). In this work, we adopt a different strategy, learning parameteric re-weighting functions w from observational data, that minimize an upper bound on the risk under p π. An important special case of our setting is when treatments are binary, T ∈ {0, 1}, often interpreted as treating (T = 1) or not treating (T = 0) a unit, and the domain is fixed across designs, p µ (X) = p π (X). This is the classical setting for estimating treatment effects-the effect of choosing one intervention over another BID17. 2 The effect of an intervention T = 1 in context X, is measured by the conditional average treatment effect (CATE), DISPLAYFORM0 Predicting τ for unobserved units typically involves prediction of both potential outcomes 3. In a clinical setting, knowledge of τ is necessary to assess which medication should be administered to a certain individual. Historically, the (population) average treatment effect, ATE = E x∼p [τ (x)], has received comparatively much more attention BID20, but is inadequate for personalized decision making. 
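To make the CATE concrete before the formal treatment that follows, the synthetic sketch below fits one outcome regressor per treatment arm and takes the difference of their predictions as the CATE estimate. The data-generating process, the model class (ridge regression) and the sample sizes are all illustrative assumptions, not part of this paper.

```python
# Fit one regressor per arm on factual data, then estimate tau(x) = f(x,1) - f(x,0).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # confounded treatment assignment
tau = 1.0 + X[:, 1]                                   # true CATE in this toy example
Y = X @ rng.normal(size=d) + T * tau + 0.1 * rng.normal(size=n)

f1 = Ridge().fit(X[T == 1], Y[T == 1])                # model of Y(1)
f0 = Ridge().fit(X[T == 0], Y[T == 0])                # model of Y(0)
tau_hat = f1.predict(X) - f0.predict(X)
print("RMSE(tau):", np.sqrt(np.mean((tau_hat - tau) ** 2)))
```

Because treatment assignment depends on X in this toy example, each per-arm regression faces exactly the kind of covariate shift between treated and control groups that the methods developed below are designed to address.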
Using predictors f (x, t) of potential outcomes Y (t) in contexts X = x, we can estimate the CATE byτ (x) = f (x, 1) − f (x, 0) and measure the quality using the mean squared error (MSE), DISPLAYFORM1 In Section 4, we argue that estimating CATE from observational data requires overcoming distributional shift with respect to the treat-all and treat-none policies, in predicting each respective potential outcome, and show how this can be used to derive generalization bounds for CATE. A large body of work has shown that under assumptions of ignorability and having a wellspecified model, various regression methods for counterfactual estimation are asymptotically consistent BID4 BID1 BID3. However, consistency like these provide little insight into the case of model misspecification. Under model misspecification, regression methods may suffer from additional bias when generalizing across designs due to distributional shift. A common way to alleviate this is importance sampling, see Section 2. This idea is used in propensity-score methods BID2, that use treatment assignment probabilities (propensities) to re-weight samples for causal effect estimation, and more generally in re-weighted regression, see e.g. . A major drawback of these methods is the assumption that the design density is known. To address this, others BID8 BID13, have proposed learning sample weights w to minimize the distributional distance between samples under p π and p w µ, but rely on specifying the data representation a priori, without regard for which aspects of the data actually matter for outcome prediction and policy estimation. On the other hand, BID12 proposed learning representations for counterfactual inference, inspired by work in unsupervised domain adaptation BID16. The drawback of this line of work is that the generalization bounds of Shalit et al. FORMULA4 and BID15 are loose-even if infinite samples are available, they are not guaranteed to converge to the lowest possible error. Moreover, these approaches do not make use of important information that can be estimated from data: the treatment/domain assignment probabilities. We give a bound on the risk in predicting outcomes under a target design p π (T, X) based on unlabeled samples from p π and labeled samples from a source design p µ (T, X). Our combines representation learning, distribution matching and re-weighting, ing in a tighter bound than the closest related work,. The predictors we consider are compositions f (x, t) = h(Φ(x), t) where Φ is a representation of x and h an hypothesis. We first give an upper bound on the risk in the general design shift setting, then show how this can be used to bound the error in prediction of treatment effects. In Section 5 we give a about the asymptotic properties of the minimizers of this upper bound. Risk under distributional shift Our bounds on the risk under a target design capture the intuition that if either a) the target design π and source design µ are close, or b) the true outcome is a simple function of x and t, the gap between the target risk and the re-weighted source risk is small. These notions can be formalized using integral probability metrics (IPM) that measure distance between distributions w.r.t. a normed vector space of functions H. Definition 2. 
The integral probability metric (IPM) distance, associated with a normed vector space of functions H, between distributions p and q is, DISPLAYFORM0 Important examples of IPMs include the Wasserstein distance, for which H is the family of functions with Lipschitz constant at most 1, and the Maximum Mean Discrepancy for which H are functions in the norm-1 ball in a reproducing kernel Hilbert space. Using definitions 1-2, and the definition of re-weighted risk, see FORMULA4, we can state the following (see Appendix A.2 for a proof). Lemma 1. For hypotheses f with loss f such that f / f H ∈ H, and p µ, p π with common support, there exists a valid re-weighting w of p µ, see Definition 1, such that, DISPLAYFORM1 The first inequality is tight for importance sampling weights, w(DISPLAYFORM2 The bound of Lemma 1 is tighter if p µ and p π are close (the IPM is smaller), and if the loss lives in a small family of functions H (the supremum is taken over a smaller set). Lemma 1 also implies that there exist weighting functions w(x, t) that achieve a tighter bound than the uniform weighting w(x, t) = 1, implicitly used by. While importance sampling weights in a tight bound in expectation, neither the design densities nor their ratio are known in general. Moreover, exact importance weights often in large variance in finite samples BID6. Here, we will search for a weighting function w, that minimizes a finite-sample version of, trading off bias and variance. We examine the empirical value of this idea alone in Section 6.1. We now introduce the notion of representation learning to combat distributional shift. Representation learning The idea of learning representations that reduce distributional shift in the induced space, and thus the source-target error gap, has been applied in domain adaptation BID0, algorithmic fairness and counterfactual prediction . The hope of these approaches is to learn predictors that predominantly exploit information that is common to both source and target distributions. For example, a face detector should be able to recognize the structure of human features even under highly variable environment conditions, by ignoring , lighting etc. We argue that re-weighting (e.g. importance sampling) should also only be done with respect to features that are predictive of the outcome. Hence, in Section 5, we propose using re-weightings that are functions of learned representations. We follow the setup of , and consider learning twice-differentiable, invertible representations Φ: X → Z, where Z is the representation space, and Ψ: Z → X is the inverse representation, such that Ψ(Φ(x)) = x for all x. Let E denote space of such representation functions. For a design π, we let p π,Φ (z, t) be the distribution induced by Φ over Z × T, with p w π,Φ (z, t):= p π,Φ (z, t)w(Ψ(z), t) its re-weighted form andp w π,Φ its re-weighted empirical form, following our previous notation. Finally, we let G ⊆ {h : Z × T → Y} denote a set of hypotheses h(Φ, t) operating on the representation Φ and let F the space of all compositions, F = {f = h(Φ(x), t): h ∈ G, Φ ∈ E}. We can now relate the expected target risk R π (f) to the re-weighted empirical source riskR w µ (f). Theorem 1. Given is a labeled sample (x 1, t 1, y 1),..., (x n, t n, y n) from p µ, and an unlabeled sample (x 1, t 1),..., (x m, t m) from p π, with corresponding empirical measuresp µ andp π. 
Suppose that Φ is a twice-differentiable, invertible representation, that h(Φ, t) is an hypothesis, and DISPLAYFORM3 where L is the squared loss, L(y, y) = (y − y) 2, and assume that there exists a constant B Φ > 0 such that h,Φ /B Φ ∈ H ⊆ {h : Z × T → Y}, where H is a reproducing kernel Hilbert space of a kernel, k such that k((z, t), (z, t)) < ∞. Finally, let w be a valid re-weighting of p µ,Φ. Then with probability at least 1 − 2δ, DISPLAYFORM4 where C DISPLAYFORM5. A similar bound exists where H is the family of functions Lipschitz constant at most 1, and IPM H the Wasserstein distance, but with worse sample complexity. See Appendix A.2 for a proof of Theorem 1 that involves applying finite-sample generalization bounds to Lemma 1, as well as moving to the space induced by the representation Φ.Theorem 1 has several implications: non-identity feature representations, non-uniform sample weights, and variance control of these weights can all contribute to a lower bound. Using uniform weights w(x, t) = 1 in, in a bound similar to that of Shalit et al. FORMULA4 and BID15. When π = µ, minimizing uniform-weight bounds in biased hypotheses, even in the asymptotical limit, as the IPM term does not vanish when the sample size increases. This is an undesirable property, as even k-nearest-neighbor classifiers are consistent in the limit of infinite samples. We consider minimizing with respect to w, improving the tightness of the bound. Theorem 1 indicates that even though importance sampling weights w * yield estimators with small bias, they can suffer from high variance, as captured by the factor V µ (w, f). The factor B Φ is not known in general as it depends on the true outcome, and is determined by f H as well as the determinant of the Jacobian of Ψ, see Appendix A.2. Qualitatively, B Φ measures the joint complexity of Φ and f and is sensitive to the scale of Φ-as the scale of Φ vanishes, B Φ blows up. To prevent this in practice, we normalize Φ. As B Φ is unknown, substituted a hyperparameter α for B Φ, but discussed the difficulties of selecting its value without access to counterfactual labels. In our experiments, we explore a heuristic for adaptively choosing α, based on measures of complexity of the observed held-out loss as a function of the input. Finally, the term C Theorem 1 is immediately applicable to the case of unsupervised domain adaptation in which there is only a single potential outcome of interest, T = {0}. In this case, DISPLAYFORM6 Conditional average treatment effects A simple argument shows that the error in predicting the conditional average treatment effect, MSE(τ) can be bounded by the sum of risks under the constant treat-all and treat-none policies. As in Section 2.2, we consider the case of a fixed domain p π (X) = p µ (X) and binary treatment T = {0, 1}. Let R πt (f) denote the risk under the constant policy π t such that ∀x ∈ X: p πt (T = t | X = x) = 1. Proposition 1. We have with MSE(τ) as in and R πt (f) the risk under the constant policy π t, DISPLAYFORM7 The proof involves the relaxed triangle inequality and the law of total probability. By Proposition 1, we can apply Theorem 1 to R π1 and R π0 separately, to obtain a bound on MSE(τ). For brevity, we refrain from stating the full , but emphasize that it follows from Theorem 1. In Section 6.2, we evaluate our framework in treatment effect estimation, minimizing this bound. 
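The bound above is stated in terms of two empirical quantities: a re-weighted factual risk and a re-weighted IPM between the induced source and target distributions. The sketch below gives minimal estimators for both, instantiating the IPM as an RBF-kernel maximum mean discrepancy (the choice used in the experiments later) in its simple biased (V-statistic) form. Array names and the toy data are placeholders; this is a generic illustration, not the authors' code.

```python
# w: per-unit weights over the source sample (mean 1 gives a valid re-weighting);
# Zs, Zt: representations Phi(x) of the source and target samples.
import numpy as np

def reweighted_risk(y_pred, y_true, w):
    """(1/n) sum_i w_i (y_pred_i - y_true_i)^2, the squared-loss weighted risk."""
    return np.mean(np.asarray(w) * (np.asarray(y_pred) - np.asarray(y_true)) ** 2)

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_mmd2(Zs, Zt, w, sigma=1.0):
    """Biased estimate of MMD^2 between the w-re-weighted source and the target sample."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return (w @ rbf_kernel(Zs, Zs, sigma) @ w
            + rbf_kernel(Zt, Zt, sigma).mean()
            - 2.0 * (w @ rbf_kernel(Zs, Zt, sigma)).mean())

rng = np.random.default_rng(0)
Zs, Zt = rng.normal(0, 1, (300, 2)), rng.normal(0.5, 1, (200, 2))
print(weighted_mmd2(Zs, Zt, np.ones(300)))        # uniform weights: plain MMD^2
```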
Motivated by the theoretical insights of Section 4, we propose to jointly learn a representation Φ(x), a re-weighting w(x, t) and an hypothesis h(Φ, t) by minimizing a bound on the risk under the target design, see. This approach improves on previous work in two ways: it alleviates the bias of when sample sizes are large, see Section 4, and it increases the flexibility of the balancing method of BID8 by learning the representation to balance. For notational brevity, we let w i = w(x i, t i). Recall thatp w π,Φ is the re-weighted empirical distribution of representations Φ under p π. The training objective of our algorithm is the RHS of, with hyperparameters β = (α, λ h, λ w) substituted for model (and representation) complexity terms, DISPLAYFORM0 where R(h) is a regularizer of h, such as 2 -regularization. We can show the following . Theorem 2. Suppose H is a reproducing kernel Hilbert space given by a bounded kernel. Suppose weak overlap holds in that DISPLAYFORM1 Consequently, under the assumptions of Thm. 1, for sufficiently large α and λ w, DISPLAYFORM2 In words, the minimizers of converge to the representation and hypothesis that minimize the counterfactual risk, in the limit of infinite samples. Implementation Minimization of L π (h, Φ, w; β) over h, Φ and w is, while motivated by Theorem 2, a difficult optimization problem to solve in practice. For example, adjusting w to minimize the empirical risk term may in overemphasizing "easy" training examples, ing in a poor local minimum. Perhaps more importantly, ensuring invertibility of Φ is challenging for many representation learning frameworks, such as deep neural networks. In our implementation, we deviate from theory on these points, by fitting the re-weighting w based only on imbalance and variance terms, and don't explicitly enforce invertibility. As a heuristic, we split the objective, see, in two and use only the IPM term and regularizer to learn w. In short, we adopt the following alternating procedure. DISPLAYFORM3 The re-weighting function w(x, t) could be represented by one free parameter per training point, as it is only used to learn the model, not for prediction. However, we propose to let w be a parametric function of Φ(x). Doing so ensures that information predictive of the outcome is used for balancing, and lets us compute weights on a hold-out set, to perform early stopping or select hyperparameters. This is not possible with existing re-weighting methods such as BID8 BID13. An example architecture for the treatment effect estimation setting is presented in FIG2. By Proposition 1, estimating treatment effects involves predicting under the two constant policiestreat-everyone and treat-no-one. In Section 6, we evaluate our method in this task. As noted by , choosing hyperparameters for counterfactual prediction is fundamentally difficult, as we cannot observe ground truth for counterfactuals. In this work, we explore setting the balance parameter α adaptively. α is used in in place of B Φ, a factor measuring the complexity of the loss and representation function as functions of the input, a quantity that changes during training. As a heuristic, we use an approximation of the Lipschitz constant of f, with f = h(Φ(x), t), based on observed examples: DISPLAYFORM4 We use a moving average over batches to improve stability. 
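As a rough, self-contained illustration of the alternating procedure just described (for the binary-treatment case), the sketch below parameterizes Φ, the per-treatment hypotheses h and the weighting function w(Φ, t) as small networks, uses an RBF-kernel MMD as the IPM, and alternates between updating w against the imbalance-plus-penalty objective and updating Φ, h against the re-weighted factual risk plus imbalance. All architectural sizes, the toy data, learning rates and the simple (w − 1)² penalty are illustrative assumptions; the adaptive-α heuristic and the regularizer R(h) are omitted. This is a sketch of the idea, not the authors' RCFR implementation.

```python
import torch
import torch.nn as nn

def mmd2(A, B, wa, sigma=1.0):
    """Biased MMD^2 between the wa-weighted sample A and the unweighted sample B."""
    k = lambda U, V: torch.exp(-torch.cdist(U, V) ** 2 / (2 * sigma ** 2))
    wa = wa / wa.sum()
    return wa @ k(A, A) @ wa + k(B, B).mean() - 2 * (wa @ k(A, B)).mean()

torch.manual_seed(0)
d, n = 5, 512                                            # toy observational sample
X = torch.randn(n, d)
T = (torch.rand(n) < torch.sigmoid(X[:, 0])).float()     # confounded assignment
Y = X.sum(1) + T * (1.0 + X[:, 1]) + 0.1 * torch.randn(n)

phi = nn.Sequential(nn.Linear(d, 32), nn.ELU(), nn.Linear(32, 16))          # representation Phi
heads = nn.ModuleList([nn.Linear(16, 1), nn.Linear(16, 1)])                 # h(Phi, t), t in {0, 1}
w_net = nn.Sequential(nn.Linear(17, 16), nn.ELU(), nn.Linear(16, 1), nn.Softplus())  # w(Phi, t)

opt_f = torch.optim.Adam(list(phi.parameters()) + list(heads.parameters()), lr=1e-3)
opt_w = torch.optim.Adam(w_net.parameters(), lr=1e-3)
alpha, lam_w = 1.0, 1.0

def weights(Z):
    w = w_net(torch.cat([Z, T[:, None]], dim=1)).squeeze(1) + 1e-3
    return w / w.mean()                                  # keep the re-weighting valid (mean 1)

for step in range(300):
    # (1) update w: re-weighted imbalance between treated and control + penalty toward uniform
    Z = phi(X).detach()
    w = weights(Z)
    loss_w = alpha * mmd2(Z[T == 1], Z[T == 0], w[T == 1]) + lam_w * ((w - 1.0) ** 2).mean()
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()

    # (2) update Phi and h: re-weighted factual risk + imbalance, with w held fixed
    Z = phi(X)
    with torch.no_grad():
        w = weights(Z)
    y_hat = torch.where(T.bool(), heads[1](Z).squeeze(1), heads[0](Z).squeeze(1))
    loss_f = (w * (y_hat - Y) ** 2).mean() + alpha * mmd2(Z[T == 1], Z[T == 0], w[T == 1])
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

tau_hat = (heads[1](phi(X)) - heads[0](phi(X))).squeeze(1)   # estimated CATE
```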
We create a synthetic domain adaptation experiment to highlight the benefit of using a learned reweighting function to minimize weighted risk over using importance sampling weights w * (x) = DISPLAYFORM0 for small sample sizes. We observe n labeled source samples, distributed according to p µ (x) = N (x; m µ, I d) and predict for n unlabeled target samples drawn according to DISPLAYFORM1 and c ∼ N and let y = σ(β x + c) where σ(z) = 1/(1 + e −z). Importance sampling weights, w * (x) = p π (x)/p µ (x), are known. In experiments, we vary n from 10 to 600. We fit (misspecified) linear models 4 f (x) = β x + γ to the logistic outcome, and compare minimizing a weighted source risk by a) parameterizing sample weights as a small feed-forward neural network to minimize (ours) b) using importance sampling weights (baseline), both using gradient descent. For our method, we add a small variance penalty, λ w = 10 −3, to the learned weights, use MMD with an RBF-kernel of σ = 1.0 as IPM, and let α = 10. We compare to exact importance sampling weights (IS) as well as clipped IS weights (ISC), w M (x) = min(w(x), M ) for M ∈ {5, 10}, a common way of reducing variance of re-weighting methods .In FIG4, we see that our proposed method behaves well at small sample sizes compared to importance sampling methods. The poor performance of exact IS weights is expected at smaller samples, as single samples are given very large weight, ing in hypotheses that are highly sensitive to the training set. While clipped weights alleviates this issue, they do not preserve relevance ordering of high-weight samples, as many are given the truncation value M, in contrast to the reweighting learned by our method. True domain densities are known only to IS methods. We evaluate our framework in the CATE estimation setting, see Section 2.2. Our task is to predict the expected difference between potential outcomes conditioned on pre-treatment variables, for a held-out sample of the population. We compare our to ordinary least squares (OLS) (with one regressor per outcome), OLS-IPW (re-weighted OLS according to a logistic regression estimate of propensities), Random Forests, Causal Forests , BID5, and CFRW (with Wasserstein penalty). Finally, we use as baseline (IPM-WNN): first weights are found by IPM minimization in the input space BID8 BID13, then used in a re-weighted neural net regression, with the same architecture as our method. Our implementation, dubbed RCFR for Re-weighted CounterFactual Regression, parameterizes representations Φ(x), weighting functions w(Φ, t) and hypotheses h(Φ, t) using neural networks, trained by minimizing. We use the RBF-kernel maximum mean discrepancy as the IPM BID9. For a description of the architecture, training procedure and hyperparameters, see Appendix B. We compare using uniform w = 1 and learned weights, setting the balance parameter α either fixed, by an oracle (test-set error), or adaptively using the heuristic described in Section 5. To pick other hyperparameters, we split training sets into one part used for function fitting and one used for early stopping and hyperparameter selection. Hyperparameters for regularization are chosen based on the empirical loss on a held-out source (factual) sample. CATE Error, RMSE(τ) DISPLAYFORM0 (b) For small imbalance penalties α, re-weighting (low λw) has no effect. For moderate α, less uniform re-weighting (smaller λw) improves the error, c) for large α, weighting helps, but overall error increases. Best viewed in color. 
FORMULA4, and a synthesized continuous outcome that can be used to compute the ground-truth CATE error. Average over 100 different realizations/settings of the outcome are presented in TAB0. We see that our proposed method achieves state-of-the-art , and that adaptively choosing α does not hurt performance much. Furthermore, we see a substantial improvement from using nonuniform sample weights. In FIG4 we take a closer look at the behavior of our model as we vary its hyperparameters on the IHDP dataset. Between the two plots we can draw the following : a) For moderate to large α ∈, we observe a marginal gain from using the IPM penalty. This is consistent with the observations of. b) For large α ∈, we see a large gain from using a non-uniform re-weighting (small λ w). c) While large α makes the factual error more representative of the counterfactual error, using it without re-weighting in higher absolute error. We believe that the moderate sample size of this dataset is one of the reasons for the usefulness of our method. See Appendix C.2 for a complementary view of these . We have proposed a theory and an algorithmic framework for learning to predict outcomes of interventions under shifts in design-changes in both intervention policy and feature domain. The framework combines representation learning and sample re-weighting to balance source and target designs, emphasizing information from the source sample relevant for the target. Existing reweighting methods either use pre-defined weights or learn weights based on a measure of distributional distance in the input space. These approaches are highly sensitive to the choice of metric used to measure balance, as the input may be high-dimensional and contain information that is not predictive of the outcome. In contrast, by learning weights to achieve balance in representation space, we base our re-weighting only on information that is predictive of the outcome. In this work, we apply this framework to causal effect estimation, but emphasize that joint representation learning and re-weighting is a general idea that could be applied in many applications with design shift. Our work suggests that distributional shift should be measured and adjusted for in a representation space relevant to the task at hand. Joint learning of this space and the associated re-weighting is attractive, but several challenges remain, including optimization of the full objective and relaxing the invertibility constraint on representations. For example, variable selection methods are not covered by our current theory, as they induce a non-ivertible representation, but a similar intuition holds there-only predictive attributes should be used when measuring imbalance. We believe that addressing these limitations is a fruitful path forward for future work. We denote the re-weighted density p w µ (x, t):= w(x, t)p µ (x, t).Expected & empirical risk We let the (expected) risk of f measured by h under p µ be denoted DISPLAYFORM0 where l h is an appropriate loss function, and the empirical risk over a sample DISPLAYFORM1 We use the superscript w to denote the re-weighted risks DISPLAYFORM2 Definition A1 (Importance sampling). For two distributions p, q on Z, of common support, ∀z ∈ Z: p(z) > 0 ⇐⇒ q(z) > 0, we call DISPLAYFORM3 the importance sampling weights of p and q. Definition 2 (Restated). 
The integral probability metric (IPM) distance, associated with the function family H, between distributions p and q is defined by DISPLAYFORM4 We begin by bounding the expected risk under a distribution p π in terms of the expected risk under p µ and a measure of the discrepancy between p π and p µ. Using definition 2 we can show the following . Lemma 1 (Restated). For hypotheses f with loss f such that f / f H ∈ H, and p µ, p π with common support, there exists a valid re-weighting w of p µ, see Definition 1, such that, DISPLAYFORM5 The first inequality is tight for importance sampling weights, w(x, t) = p π (x, t)/p µ (x, t). The second inequality is not tight for general f, even if f ∈ H, unless p π = p µ.Proof. The follows immediately from the definition of IPM. DISPLAYFORM6 Further, for importance sampling weights w IS (x, t) = π(t;x) µ(t;x), for any h ∈ H, DISPLAYFORM7 and the LHS is tight. We could apply Lemma 1 to bound the loss under a distribution q based on the weighted loss under p. Unfortunately, bounding the expected risk in terms of another expectation is not enough to reason about generalization from an empirical sample. To do that we use Corollary 2 of BID6, restated as a Theorem below. Theorem A1 (Generalization error of re-weighted loss BID6). For a loss function h of any hypothesis h ∈ H ⊆ {h : X → R}, such that d = Pdim({ h : h ∈ H}) where Pdim is the pseudo-dimension, and a weighting function w(x) such that E p [w] = 1, with probability 1 − δ over a sample (x 1, ..., x n), with empirical distributionp, DISPLAYFORM8 we get the simpler form DISPLAYFORM9 We will also need the following about estimating IPMs from finite samples from.Theorem A2 (Estimation of IPMs from empirical samples ). Let M be a measurable space. Suppose k is measurable kernel such that sup x∈M k(x, x) ≤ C ≤ ∞ and H the reproducing kernel Hilbert space induced by k, with ν:= sup x∈M,f ∈H f (x) < ∞. Then, witĥ p,q the empirical distributions of p, q from m and n samples respectively, and with probability at least 1 − δ, DISPLAYFORM10 We consider learning twice-differentiable, invertible representations Φ: X → Z, where Z is the representation space, and Ψ: Z → X is the inverse representation, such that Ψ(Φ(x)) = x for all x. Let E denote space of such representation functions. For a design π, we let p π,Φ (z, t) be the distribution induced by Φ over Z × T, with p w π,Φ (z, t):= p π,Φ (z, t)w(Ψ(z), t) its re-weighted form andp w π,Φ its re-weighted empirical form, following our previous notation. Note that we do not include t in the representation itself, although this could be done in principle. Let G ⊆ {h : Z × T → Y} denote a set of hypotheses h(Φ, t) operating on the representation Φ and let F denote the space of all compositions, F = {f = h(Φ(x), t): h ∈ G, Φ ∈ E}. We now restate and prove Theorem 1. Given is a labeled sample D µ = {(x 1, t 1, y 1),..., (x n, t n, y n)} from p µ, and an unlabeled sample D π = {(x 1, t 1),..., (x m, t m)} from p π, with corresponding empirical measuresp µ andp π. Suppose that Φ is a twice-differentiable, invertible representation, that h(Φ, t) is an hypothesis, and DISPLAYFORM0 where L is the squared loss, L(y, y) = (y − y) 2, and assume that there exists a constant B Φ > 0 such that h,Φ /B Φ ∈ H ⊆ {h : Z × T → Y}, where H is a reproducing kernel Hilbert space of a kernel, k such that k((z, t), (z, t)) < ∞. Finally, let w be a valid re-weighting of p µ,Φ. 
Then with probability at least 1 − 2δ, DISPLAYFORM1 where C F n,δ measures the capacity of F and has only logarithmic dependence on n, D H m,n,δ measures the capacity of H, σ 2 Y is the expected variance in potential outcomes, and DISPLAYFORM2 A similar bound exists where H is the family of functions Lipschitz constant at most 1, but with worse sample complexity. Proof. We have by definition DISPLAYFORM3 f (x, t, y)p(y | t, x)(p π (x, t) − p Proof. Let f * = Φ * • h * ∈ arg min f ∈F R π (f) and let w * (x, t) = p π,Φ (Φ * (x), t)/p µ,Φ (Φ * (x), t). Since min h,Φ,w L π (h, Φ, w; β) ≤ L π (h *, Φ *, w * ; β), it suffices to show that L π (h *, Φ *, w * ; β) = R π (f *) + O(1/ √ n + 1/ √ m). We will work term by term: DISPLAYFORM4 For term D, letting w * i = w * (x i, t i), we have that by weak overlap DISPLAYFORM5 For term A, under ignorability, each term in the sum in the first term has expectation equal to R π (f *) and so, so by weak overlap and bounded second moments of loss, we have A = R π (f *) + O p (1/ √ n). For term B, since h * is fixed we have deterministically that DISPLAYFORM6 Finally, we address term C, which when expanded can be written as DISPLAYFORM7 w * i h(Φ * (x i), t i )).10 −4 10 −2 10 −0 10 2 ∞ Re-weighting regularization λw (uniformity) Figure 3: Error in CATE estimation on IHDP as a function of re-weighting regularization strength λ w (left) and source prediction error (right). We see in the left-hand plot that a) for small imbalance penalties α, re-weighting (low λ w) has no effect, b) for moderate α, less uniform re-weighting (smaller λ w) improves the error, c) for large α, weighting helps, but overall error increases. In the right-hand plot, we compare the ratio of CATE error to source error. Color represents α (see left) and size λ w. We see that for large α, the source error is more representative of CATE error, but does not improve in absolute value without weighting. Here, α was fixed. Best viewed in color. In Figure 3, we see two different views of the IHDP . | A theory and algorithmic framework for prediction under distributional shift, including causal effect estimation and domain adaptation | 1,092 | scitldr |
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversity-accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. Consider a typical classification problem, where x_n ∈ R^D denotes the D-dimensional features and y_n ∈ [1, . . ., K] denotes the class label. Assume we have a parametric model p(y|x, θ) for the conditional distribution, where θ denotes the weights and biases of a neural network, and p(θ) is a prior distribution over parameters. The Bayesian posterior over parameters is then p(θ | {(x_n, y_n)}) ∝ p(θ) ∏_n p(y_n | x_n, θ). Computing the exact posterior distribution over θ is computationally expensive (if not impossible) when p(y_n | x_n, θ) is a deep neural network. A variety of approximations have been developed for Bayesian neural networks, including the Laplace approximation, Markov chain Monte Carlo methods, variational Bayesian methods and Monte-Carlo dropout. While computing the posterior is challenging, it is usually easy to perform maximum-a-posteriori (MAP) estimation, which corresponds to a mode of the posterior. The MAP solution can be written as the minimizer of the following loss (negative log likelihood plus negative log prior): L(θ) = −∑_n log p(y_n | x_n, θ) − log p(θ). The MAP solution is computationally efficient, but only gives a point estimate and not a distribution over parameters. Deep ensembles, as originally proposed, train an ensemble of neural networks by initializing at M different values and repeating the minimization multiple times, which could lead to M different solutions if the loss is non-convex. (The original proposal found that adversarial training provides additional benefits in some of their experiments, but we will ignore adversarial training and focus only on ensembles with random initialization in this paper.) Given finite training data, many parameter values could equally well explain the observations, and capturing these diverse solutions is crucial for quantifying epistemic uncertainty.
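As a rough illustration of the recipe just described (MAP training repeated from M random initializations, followed by averaging the predictive distributions), the sketch below trains an ensemble in PyTorch. The model constructor, data loader and hyperparameters are placeholders rather than the setup used in this paper, and weight decay stands in for the negative log prior term of the MAP loss.

```python
import torch
import torch.nn.functional as F

def train_deep_ensemble(make_model, loader, M=5, epochs=10, weight_decay=1e-4):
    """Train M networks that differ only in their random initialization.

    Each member minimizes the MAP loss: cross-entropy (negative log
    likelihood) plus a Gaussian prior term realised as L2 weight decay.
    """
    ensemble = []
    for _ in range(M):
        model = make_model()  # fresh random initialization for each member
        opt = torch.optim.Adam(model.parameters(), lr=1e-3,
                               weight_decay=weight_decay)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = F.cross_entropy(model(x), y)  # NLL term of the MAP loss
                loss.backward()
                opt.step()
        ensemble.append(model)
    return ensemble

def ensemble_predict(ensemble, x):
    """Average the predictive distributions of all ensemble members."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in ensemble])
    return probs.mean(dim=0)
```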
Bayesian neural networks learn a distribution over weights, and a good posterior approximation should be able to learn multi-modal posterior distributions in theory. Deep ensembles were inspired by the bootstrap , which has nice theoretical properties. However, it has been empirically observed by; that training individual networks with just random initialization is sufficient in practice and using the bootstrap even hurts performance in some cases (e.g. for small ensemble sizes). and independently benchmarked existing methods for uncertainty quantification on a variety of datasets and architectures, and observed that ensembles tend to outperform approximate Bayesian neural networks in terms of both accuracy and uncertainty, particularly under dataset shift. ) on train and validation data. These empirical observations raise an important question: Why do ensembles trained with just random initialization work so well in practice? One possible hypothesis is that ensembles tend to sample from different modes 1 in function space, whereas variational Bayesian methods (which minimize)) might fail to explore multiple modes even though they are effective at capturing uncertainty within a single mode. See Figure 1 for a cartoon illustration. Note that while the MAP solution is a local minima for the training loss by definition, it may not necessarily be a local minima for the validation loss. Recent work on understanding loss landscapes (; ; allows us to investigate this hypothesis. Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are, beyond an initial exploration in . Our findings show that: • The functions sampled along a single training trajectory or subspace thereof (e.g. diagonal Gaussian, low-rank Gaussian and Dropout subspaces) tend to be very similar in predictions (while potential far away in the weight space), whereas functions sampled from different randomly initialized trajectories tend to be very diverse. • Solution modes are connected in the loss landscape but they are distinct in the space of predictions. Low-loss tunnels create functions with near-identical low values of loss along the path, however these functions tend to be very different in function space, changing significantly in the middle of the tunnel. The loss landscape of neural networks (also called the objective landscape) -the space of weights and biases that the network navigates -is typically a very high dimensional function and therefore could potentially be very complicated. However, many empirical show interesting properties of the loss surface. observed that the loss along a linear path from an initialization to the corresponding optimum is monotonically decreasing, encountering no significant obstacles along the way. demonstrated that constraining optimization to a random, low-dimensional hyperplane in the weight space leads to comparable to full-space optimization, provided that the dimension exceeds a modest threshold. This was geometrically understood and extended in .; demonstrate that while a linear path between two independent optima hits a high loss area in the middle, there in fact exist continuous, low-loss paths connecting any pair of optima. These observations are unified into a single phenomenological model in . 
While independent, low-loss optima in the loss landscape are connected, provide an early indication that in fact they represent very different functions in terms of their predictions. Therefore the connectivity cannot be due to trivial symmetries of the network which would keep the input-output mapping intact. We train convolutional neural networks on the CIFAR-10 dataset: • SmallCNN: channels for 10 epochs which achieves 64% test accuracy. • MediumCNN: channels for 20 epochs which achieves 70% test accuracy. • ResNet20v1: for 200 epochs which achieves 90% test accuracy. We use the Adam optimizer for training and to make sure the effects we observe are general, we validate that our hold for vanilla stochastic gradient descent (SGD) as well, which we do not show in this paper. We use batch size 128 and dropout 0.03 for training SmallCNN and MediumCNN. To generate weight space and prediction space similarity , we use a constant learning rate of 1.6 × 10 −3, unless specified otherwise. We do not use any data augmentation with those two architectures. For ResNet20v1, we use the data augmentation and learning rate schedule used in Keras examples 2. The overall trends are consistent across all architectures, datasets, and other hyperparameter and non-linearity choices we explored. First, we compute the similarity between different checkpoints along a single trajectory. We plot the cosine similarity in weight space in Figure 2 (a) and the disagreement in function space, defined as the fraction of points the checkpoints disagree on, in Figure 2 (b). We observe that the checkpoints along a trajectory are largely similar both in the weight space and the function space. Next, we evaluate how diverse the final solutions from different random initializations are. The functions from different initialization are different, as demonstrated by the similarity plots in Figure 3. Comparing this with Figures 2(a) and 2(b), we see that functions within a single trajectory exhibit higher similarity and functions across different trajectories exhibit much lower similarity. Next, we take the predictions from different checkpoints along the individual training trajectories from multiple initializations and compute a t-SNE plot to visualize their similarity in function space. More precisely, we take the softmax output for a set of points, flatten the vector and use it as the input to the t-SNE plot. Figure 2 (c) shows that the functions explored by different trajectories (denoted by circles with different colors) are far away, while functions explored within a single trajectory (circles with the same color) tend to be much more similar. In addition to the checkpoints along a trajectory, we also construct subspaces based on each individual trajectory. Scalable Bayesian methods typically compute statistics based on the weights along a trajectory, hence visualizing the diversity of functions between the subspace helps understand the difference between Bayesian neural networks and ensembles. We use a representative set of four subspace sampling methods: a random subspace, a Monte Carlo dropout, a diagonal Gaussian approximation, and a low-rank covariance matrix Gaussian approximation. In the descriptions of the methods, let w 0 be the current weight-space position (the weights and biases of our trained neural net) around which we will construct the subspace. Results on CIFAR-10 using two different architectures. 
For each of these architectures, the left subplot shows the cosine similarity between different solutions in weight space, and the right subplot shows the fraction of labels on which the predictions from different solutions disagree. • Random subspace sampling: We start at an optimized solution w 0 and choose a random directionv in the weight space. We step in that direction by choosing different values of t and looking at predictions at configurations w 0 + tv. We do this for many random directionsv. • Monte Carlo dropout subspace: We start at an optimized solution w 0 and apply dropout with a randomly chosen p keep to it. We do this many times, each time choosing a random p keep, and look at predictions at dropout p keep (w 0). • Diagonal Gaussian subspace: We start at an optimized solution w 0 and look at the most recent iterations of training proceeding it. For each trainable parameter w i, we calculate its mean mean i and standard deviation std(w i). To sample solutions from the subspace, we draw each parameter independently as w i ∼ N (mean i, std i). We repeat this many times and obtain predictions in each. This corresponds to sampling from a normal distribution with a diagonal covariance matrix. • Low-rank Gaussian subspace: We start at an optimized solution w 0 and look at the most recent iterations of training proceeding it. For each trainable parameter w i, we calculate its mean mean i. For a rank-k approximation, we calculate top k principal components of the weight vectors in the most recent iterations of training {p i ∈ R params} k. We sample from a k-dimensional normal distribution and obtain the weight configurations as w ∼ mean Figure 4 shows that functions sampled from a subspace (denoted by colored squares) corresponding to a particular initialization, are much more similar to each other. While some subspaces are more diverse, they still do not overlap with functions from another randomly initialized trajectory. Diversity versus Accuracy plots To illustrate the difference in another fashion, we sample functions from a single subspace and plot diversity (as measured by disagreement between predictions) versus accuracy in Figure 5. Comparing these subspace points (colored dots) to the baseline optima (green star) and the optima from different random initializations (denoted by red stars), we observe that random initializations are much more effective at sampling diverse and accurate solutions, than subspace based methods constructed out of a single trajectory. Figure 4: Results using SimpleCNN on CIFAR-10: t-SNE plots of validation set predictions for each trajectory along with four different subspace generation methods (showed by squares), in addition to 3 independently initialized and trained runs (different colors). As visible in the plot, the subspacesampled functions stay in the prediction-space neighborhood of the run around which they were constructed, demonstrating that truly different functions are not sampled. The diversity score used above quantifies the difference of two functions, by measuring fraction of points on which their predictions differ. We chose this approach due to its simplicity; one could also compute the KL-divergence or other distances between the output probability distributions Let d diff denote the fraction of predictions on which the two functions differ. It is 0 when the two functions make identical class predictions, and 1 when they differ on every single example. 
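A minimal sketch of the disagreement score d_diff just defined; the two input arrays are assumed to hold the class labels predicted by the two functions on the same evaluation set.

```python
import numpy as np

def fraction_of_disagreement(preds_a, preds_b):
    """d_diff: fraction of examples on which two classifiers predict
    different labels. 0 means identical predictions, 1 means they
    disagree on every single example."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Example with dummy label vectors.
a = np.array([0, 1, 2, 2, 1])
b = np.array([0, 1, 1, 2, 0])
print(fraction_of_disagreement(a, b))  # 0.4
```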
To account for the fact that the lower the accuracy of a function, the higher its potential d diff due to the possibility of the wrong answers being random and uncorrelated between the two functions, we normalize this by (1 − a), where a is the accuracy. For a reference function f * of accuracy a * and a function f of accuracy a whose predictions are obtained by randomly perturbing the predictions of f *, the expected fractional difference is, where C is the number of classes. If the function f of accuracy a were entirely independent of f *, then the expected fractional difference would be. Those two limiting behavioursthe function f being derived from f * by a perturbation, and f and f * being completely independent -form the two dashed lines in Figure 5. We refer to Appendix D for further details on the limiting curves. The diversity reached is not as high as the theoretical optimum even for the independently initialized and optimized solutions, which provides scope for future work. Figure 6 shows the radial loss landscape (train as well as the validation set) along the directions of two different optima. The left subplot shows that different trajectories achieve similar values of the loss, and the right subplot shows the similarity of these functions to their respective optima (in particular the fraction of labels predicted on which they differ divided by their error rate). While the loss values from different optima are similar, the functions are different, which confirms that random initialization leads to different modes in function space. Figure 6: Results using MediumCNN on CIFAR-10: Radial loss landscape cut between the origin and two independent optima and the predictions of models on the same plane. We construct a low-loss tunnel between different optima using the procedure proposed by , which is a simplification of the procedures proposed in and. As shown in Figure 7 (a), we start at the linear interpolation point (denoted by the black line) and reach the closest point on the manifold by minimizing the training loss. The minima of the training loss are denoted by the yellow line in the manifolds. Figure 7 (b) confirms that the tunnel is indeed low-loss. In order to visualize the 2-dimensional cut through the loss landscape and the the associated predictions on along a curved low-loss path, we divide the path into linear segments, and compute the loss and prediction similarities on a triangle given by this segment on one side and the origin of the weight space on the other. We perform this operation on each of the linear segments from which the low-loss path is constructed, and place them next to each other for visualization. Figure 8 visualizes the loss along the manifold, as well as the similarity to the original optima. Note that the regions between radial yellow lines consist of segments, and we stitch these segments together in Figure 8. The accuracy plots show that as we traverse along the low-loss tunnel, the accuracy remains fairly constant as expected. However, the prediction similarity plot shows that the low-loss tunnel does not correspond to similar solutions in function space. What it shows is that while the modes are connected in terms of accuracy/loss, their functional forms remain distinct and they do not collapse into a single mode. Our observations in the previous section suggest that subspace-based methods and ensembling should provide complementary benefits in terms of uncertainty and accuracy. 
To test this, we evaluate the performance of the following four variants using SmallCNN on CIFAR-10: • Baseline: optimum at the end of a single training trajectory. • Subspace sampling: average predictions over the solutions sampled from a subspace. • Ensemble: train baseline multiple times with random initialization and average the predictions. • Ensemble + Subspace sampling: train multiple times with random initialization, use subspace sampling within each trajectory. Figure 8: Results using MediumCNN on CIFAR-10: Radial loss landscape cut between the origin and two independent optima along an optimize low-loss connector and predictions similarity along the same planes. Figures 9(a) and 9(b) show the for low rank Gaussian subspace and diagonal Gaussian subspace respectively. The validate our hypothesis as (i) subspace sampling and ensembling provide complementary benefits, and (ii) the relative benefits of ensembling are higher as it averages predictions over more diverse solutions. Weight averaging within a subspace One could use the mean and diagonal/low-rank variance to approximate each mode of the posterior, however that increases the number of parameters required for each mode. Using just the mean weight for each mode would not increase the number of parameters. proposed stochastic weight averaging (SWA) for better generalization. One could also compute an (exponential moving) average of the weights along the trajectory, inspired by Polyak-Ruppert averaging in convex optimization, (see also for a Bayesian view on iterate averaging). As weight averaging has been already studied by, we do not discuss it in detail. Figure S1 provides an illustration of why these strategies might help with generalization. We use weight averaging (WA) on the last few epochs which corresponds to using the mean of the subspace within each mode. Figure 10 (a) shows that weight averaging achieves better performance within each mode, and ensemble + WA performs as well as ensemble + subspace combination methods, without any additional parameter overhead. Figure 10(b) shows accuracy and Brier score on CIFAR-10, both on the usual test set (corresponding to the intensity = 0 column) as well as on the CIFAR-10-C benchmark proposed which contains corrupted versions of CIFAR-10 with varying intensity values, making it useful to verify calibration under dataset shift . We see that ensembling and weight-averaging provide complementary benefits. WA improves over the vanilla baseline, but combining WA with ensembling over multiple random initializations improves performance further. Figure 9 reports accuracy and Brier score on the usual CIFAR-10 test set as a function of ensemble size. Under dataset shift, it is particular important to have diverse functions to avoid overconfident predictions (as averaging over similar functions would not reduce overconfidence). To illustrate the effect on another challenging dataset, we repeat these experiments on ImageNet using the same ResNet20V1 architecture. Due to computational constraints, we focus mainly on the experiment decoupling the effect of weight averaging vs ensembling. Figure 11 Our show that trajectories of randomly initialized neural networks explore different modes in function space, which explains why deep ensembles with random initializations help. They are essentially orthogonal to each other in the space of weights and very diverse in terms of their predictions. 
While these modes can be connected via optimized low-loss paths between them, we demonstrate that they correspond to distinct functions in terms of their predictions. Therefore the connectivity in the loss landscape does not imply connectivity in the space of functions. Subspace sampling methods such as weight averaging, Monte Carlo dropout, and various versions of local Gaussian approximations, sample functions that might lie relatively far from the starting point in the weight space, however, they remain in the vicinity of their starting point in terms of predictions, giving rise to an insufficiently diverse set of functions. Using the concept of the diversityaccuracy plane, we demonstrate empirically that these subspace sampling methods never reach the combination of diversity and accuracy that independently trained models do, limiting their usefulness for ensembling. A VISUALIZING THE LOSS LANDSCAPE ALONG ORIGINAL DIRECTIONS AND WA DIRECTIONS Figure S1 shows the loss landscape (train as well as the validation set) and the effect of WA. Figure S1: Loss landscape versus generalization: weights are typically initialized close to 0 and increase radially through the course of training. Top row: we pick two optima from different trajectories as the axes, and plot loss surface. Looking at x and y axes, we observe that while a wide range of radii achieve low loss on training set, the range of optimal radius values is narrower on validation set. Bottom row: we average weights within each trajectory using WA and use them as axes. A wider range of radius values generalize better along the WA directions, which confirms the findings of. Random seed affects both initial parameter values as well the order of shuffling of data points. We run experiments to decouple the effect of random initialization and shuffling; Figure S2 shows shows the . We observe that both of them provide complementary sources of randomness, with random initialization being the dominant of the two. As expected, random mini-batch shuffling adds more randomness at higher learning rates due to gradient noise. We run additional experiments comparing the diversity of solutions found vs their test accuracy on CIFAR-100. CIFAR-100 is an intermediate step between CIFAR-10 and ImageNet, and is overall Figure S2: The effect of random initializations and random training batches on the diversity of predictions. much more challenging to learn than CIFAR-10. Our additional are presented in Figure S3. Solutions obtained by subspace sampling methods described in Section 4 have a worse trade off between prediction diversity (needed for ensembling) and accuracy, compared to independently initialized and trained optima. This is consistent with our on CIFAR-10 in Figure 5. In Figures 5 and S3 we bound our empirical by two theoretically derived curves, limiting the expected trade off between diversity and accuracy in the best and worst case scenarios. The ing functions are presented in the main text in Section 3.2. We will show the detailed derivations here. Given a C-class classification problem and a reference solution with accuracy a *, we would like to obtain a function d diff (a) which gives us the fraction of labels on which another solution disagrees with the reference solution as a function of its accuracy. The best case scenario is when the predicted labels are uncorrelated with the reference solution's labels. 
On a particular example, the probability that the reference solution got it correctly is a *, and the probability that our solution got it correctly is a. On those examples, the predictions do not differ since they both have to be equal to the ground truth label. The probability that the reference solution is correct on an example while our solution is wrong is a * (1 − a). The probability that the reference solution is wrong on an example while our solution is correct is (1 − a *)a. On the examples where both solutions are wrong (probability (1 − a *)(1 − a)) there are two cases: a) the two solutions agree (an additional factor of 1/(C − 1)) or b) disagree (an additional factor of (C − 2)/(C − 1)). Only the case b) contributes to the fraction of labels on which they disagree. Hence we end up with The other extreme case is when the predictions of our new solution are just the predictions of the reference solution perturbed by perturbations of different strength. Then, the solutions retain a great amount of correlation. Let the probability of a label changing be p. We will consider 4 cases: a) the label of the correctly classified image does not flip (probability a * (1 − p)), b) it flips (probability a * p), c) an incorrectly labelled image does not flip (probability (1 − a *)(1 − p)), and d) it flips (probability (1 − a *)p). The ing accuracy a(p) obtains a contribution a * (1 − p) from case a) and with probability 1/(C − 1) contribution (1 − a *)p from d). Therefore a(p) = a * (1 − p) + p(1 − a *)/(C − 1). Inverting this relationship, we get p(a) = (C − 1)(a * − a)/(Ca * − 1). The fraction of labels on which the solutions disagree is simply p by our definition of p, and therefore d diff (a; a *, C) = (C − 1)(a * − 1a) | We study deep ensembles through the lens of loss landscape and the space of predictions, demonstrating that the decorrelation power of random initializations is unmatched by subspace sampling that only explores a single mode. | 1,093 | scitldr |
Existing deep learning approaches for learning visual features tend to extract more information than what is required for the task at hand. From a privacy preservation perspective, the input visual information is not protected from the model; enabling the model to become more intelligent than it is trained to be. Existing approaches for suppressing additional task learning assume the presence of ground truth labels for the tasks to be suppressed during training time. In this research, we propose a three-fold novel contribution: (i) a novel metric to measure the trust score of a trained deep learning model, (ii) a model-agnostic solution framework for trust score improvement by suppressing all the unwanted tasks, and (iii) a simulated benchmark dataset, PreserveTask, having five different fundamental image classification tasks to study the generalization nature of models. In the first set of experiments, we measure and improve the trust scores of five popular deep learning models: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet and demonstrate that Inception-v1 is having the lowest trust score. Additionally, we show of our framework on color-MNIST dataset and practical applications of face attribute preservation in Diversity in Faces (DiF) and IMDB-Wiki dataset. The primary objective of artificial intelligence is to imitate human intelligence tabular rasa. Especially, with the advent of deep learning (DL), the models are striving to perform composite tasks by learning complex relationships and patterns available in noisy, unstructured data . With this sudden growth in the consumption of data by models, there has been a lot of study on the privacy and security of the learnt model . Data governance and model governance frameworks, control and protect sharing of data and model meta information between two entities and also their social implications . The premise of model privacy has majorly revolved around preserving the model content from human (man-in-the-middle) adversarial attacks . However, the model itself could learn all the private information from the data and become much more intelligent than the original intent it was trained for. With the strive for model generalization, including techniques for transfer learning and multi-task learning, the model is encouraged to learn more and more generic features from the data that could be used for more than one task (Søgaard &). Consider the example described in Figure 1, where a classifier is trained to detect the shape of an object from images. However, using the features extracted by the above classifier, the size and location of the object in the image can also be predicted. Thus, a shape classifier is more intelligent than its objective of only predicting the shape of the object. While in certain applications, this is a required property of classification models (such as in, transfer learning and domain adaptation), in most of the privacy preserving applications, the data and its other visual attributes have to be kept private from the model itself. As an additional real-world example, we train a DL model to predict the gender from a face image. However, the DL model learns most generic features from the face image, enabling it to predict the age and the identity of the person. The input face image could be saved securely from a human attacker, however, there is not much focus on securing from the model itself. 
Additionally as shown in Figure 1 (a), the task of debiasing is to remove the the bias (color) in learning a specific task (shape). This happens due to the high correlation between the color and shapes in the input images. However, as shown in Figure 1 (b), our task in model trust is to forcefully The fundamental research motivation in this work is to study if a learning model could be restricted to perform only one or a specific group of tasks. ensure that the model learns to perform only one or few selected tasks (shape) from the input images and unlearn all other tasks (color, size, location). If multi-class classification tasks could be done from the same image, the research question is, "How can we ensure that the model is learnt only for one or a few tasks (called as, preserved tasks), and is strictly not learnt for the other tasks (called as, suppressed tasks)?". To pursue research on this problem, there are few evident challenges: (i) there is a lack of a balanced and properly curated image dataset where multiple classification tasks could be performed on the same image, (ii) the complete knowledge of both the preserved tasks and the suppressed tasks should be known apriori, that is, we cannot suppress those tasks that we don't have information about, and (iii) presence of very few model agnostic studies to preserve and suppress different task groups. In this research, we propose a novel framework to measure the trust score of a trained DL model and a solution approach to improve the trust score during training. The major research contributions are summarized as follows: 1. A simulated, class-balanced, multi-task dataset, PreserveTask with five tasks that could be performed on each image: shape, size, color, location, and color classification. 2. A novel metric to measure the trustworthiness score of a trained DL model. The trust scores of five popular DL models are measured and compared: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet. A generic model-agnostic solution framework to improve the trust scores of DL models during training by preserving a few tasks and suppressing other tasks on the same image. 3. Experimental analysis are performed for the proposed framework in comparison with other existing approaches under different settings. Experimentally, we considered the model with the least trust score, Inception-v1, and showed that the proposed framework aids in improving the overall trust score 1. 4. To demonstrate the practical applications and generalizability of the metric and the solution framework, we show additionally in colored MNIST dataset and face attribute preservation using two datasets: (i) Diversity in Faces (DiF) (Merler et al.) (ii) IMDBWiki . There are broadly two different groups of work related to the research problem at hand: (i) kanonymity preservation and (ii) attribute suppression. k-anonymity Preservation: The objective here is to preserve the anonymity of certain attributes from being predicted by the model. To quote some earlier works, , studied to mask out potentially sensitive information from video feeds. In the last decade, face recognition has become an important commercial applications and also an application that demanded discussion regarding privacy preservation. Studies focused on extracting only the required meta information from face images while not extracting the identity. This was a required step to make face as a usable biometric. 
Studies such as , , and focused on preserving the identity of the face image from the model by performing face de-identification. Studies such as and focused on anonymizing the face gender information while models could extract the identity. Attribute Suppression: The aim of this group of techniques is to explicitly suppress a few attributes by perturbing the input data to the model. Studies such as and test if the learnt models are robust and protected against adversarial attacks. suggested using a constrained generative adversarial network (GAN) to perturb the input face image and suppress the required attribute. The GANs will generate the attribute free face image of the original face image. The closest related work to our approach, is the study by where the visual attributes are decorrelated using a negative gradient in the model. The demonstrate that the classification task could be performed by preserving specific attributes in the image while suppressing the influence of the remaining. Additionally, there is a good amount of research in bias mitigation while learning models . The primary aim is to debias the model learning from any kind of correlated attributes , which is different from our aim of improving the model's trust. The major gaps in the existing research works are: (i) most of the techniques focus on data perturbation, that is, changing the input data from x to x such that the suppressed task information is not available in the data. There is not much focus on model perturbation without altering the input data, (ii) most of the existing datasets have only binary attributes and hence suppressing and preserving a few tasks does not actually translate to the classification complexity of multi-class tasks, and (iii) there is a lack of a well curated benchmark dataset to evaluate the privacy preserving capacity of DL models. Shared tasks performed on the same image carry some common attributes which are often extracted by complex deep learning models. The objective of this is to untangle the shared tasks and enable deep learning models to perform only one (or few) of those tasks. In order to evaluate the performance of such a framework, the dataset should have the following properties: • Should perform multiple tasks on the same image and each task should have varying number of classes, in order to study the relationship of complexity of classification tasks. Figure 3: (a) A deep learning model learning features suited for multiple tasks, more than the intended shape classification task, (b) Existing approaches suppress other known tasks, such as size classification by backpropagation of negative loss or gradient, (c) Proposed approach of suppressing all possible n-class classification task by using random class labels. • As this research area is nascent, the dataset should be noise-free and class balanced, to avoid other complexities that could influence classification performance. • Tasks should be designed in such a way that certain tasks, share common attributes and features, while certain tasks should be independent of each other. There are some similar publicly available datasets in the literature. LFW , CelebA , IMDB-Wiki , AwA 2 , and CUB datasets have multiple binary classification tasks, while only one nonbinary classification task. It is challenging to study the influence of complexity of classification tasks using these datasets and hence is not extendable to practical applications. CLEVR dataset provides with four different tasks with variable number of classes. 
However, each image contains multiple objects with different shape, color, and textures, allowing multiple labels for each task. Task suppression in multi-label, multi-task classification setting provides a very challenging experimental setting. Inspired from the CLEVR dataset, we create a new PreserveTask dataset, which is a multi-task dataset exclusively designed for the purpose of bench-marking models against preserving task privacy. The primary objective is to create easy-to-perform multi-task dataset, where the performance of the individual tasks is high. As shown in Figure 2, PreserveTask dataset has five different classification tasks, as follows: (i) Shape Classification: circle, triangle, diamond, pentagon, hexagon, (ii) Color Classification: violent, indigo, blue, green, yellow, orange, red, (iii) Size Classification: small, medium, large, (iv) Location Classification: quadrant 1, quadrant 2, quadrant 3, quadrant 4, (v) Background Color Classification: white, black, or colored. These five tasks are chosen such that few tasks are highly correlated (size, shape), while few tasks are ideally independent of each other (size, color). All the images are generated as 256 × 256 colored images. There are 5 (shapes) * 7 (color) * 3 (size) * 4 (location) * 3 ( color) = 1260 variations, with 50 images for training and 10 images for testing for each variation, generating a total of 63, 000 training and 12, 600 images. This ensures that there is a perfect class balance across all tasks. It is to be noted that the task of suppression of unknown shared task is a fairly open research problem. Hence, in order to set the benchmark of different frameworks, an easy, straightforward PreserveTask dataset is created as a conscious decision without having much noise, such as in DeepFashion dataset. As the problem area matures, further extensions of this dataset could be generated and more real world natural objects could be added. To understand the current scenario of shared task learning, consider any deep learning model as shown in Figure 3 (a). Assume a deep learning model, say VGG19, is trained for predicting the shape of objects in images. Ideally, the features f 1 obtained from the model should be good for object shape prediction. However, it is observed that different size, color, location prediction classifiers could be trained on top of f 1 demonstrating that f 1 contains more information about the object than just its shape. While this is a required property in multi-task learning and in applications of domain adaptation, from a task privacy preservation perspective this should be controlled. In literature, few technique variants exist to suppress the model from learning a few attributes or tasks . As shown in Figure 3 (b), if the model has to be suppressed from learning the size of the objects, a negative loss or negative gradient is applied to enable features f 2 to not carry any information about the size of the object while retaining all the information about the shape of the object. This comes with an assumption that the information about the tasks to be suppressed are available during training time along with its ground truth class labels for the entire training data. In our proposed framework, we overcome this assumption and do not expect the suppression task information to be available during model training time. Additionally, we provide a model agnostic approach of suppressing task learning so that the framework could be directly applied to any deep learning model. 
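As a concrete illustration of the probing setup described above (training separate classifiers on top of the frozen features f1 of a shape classifier to see whether other attributes remain predictable), the following sketch fits a small probe with scikit-learn. The probe architecture and the helper names here are illustrative placeholders, not the exact two-hidden-layer network with dropout used in the experiments.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def probe_frozen_features(feat_train, y_train, feat_test, y_test):
    """Fit a small MLP on frozen penultimate-layer features and report its
    test accuracy on an attribute the backbone was never trained to
    predict (size, color, location, background color, ...)."""
    probe = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=200)
    probe.fit(feat_train, y_train)
    return probe.score(feat_test, y_test)

# feat_* are (num_images, feature_dim) arrays taken from the flatten layer
# of the trained shape classifier; y_* are the labels of a different task.
# A high probe accuracy indicates the suppressed attribute is still encoded.
```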
Let x ∈ X be the input data and y x ∈ Y (n) be the n different tasks that could be performed on the image. We learn a model,, be the feature representation for the given task. Ideally, while only g: provides high classification accuracy in most cases. To overcome this challenge, we generate random n-class labels in the gradient reversal (GR) branch in order to suppress any other n-class classification, as shown in Figure 3 (c),. Multiple gradient reversal branches could be built for varying values of n to suppress all possible other classification tasks. The DL model is trained by a custom loss function as follows, where L m is the loss of the model branch trained for the task to be preserved. L is the sum of individual losses (L i) which are to be maximized (task suppressed). λ and (1 − λ) are the weights given for the minimization and maximization losses which can be chosen based on the amount of sharing between the tasks. Additionally, each of the individual L i could be distinct loss functions in the model, depending on the task performed. Thus, it can be observed that the proposed framework is both DL model agnostic and loss function agnostic. PreserveTask will be used as the benchmark dataset against which the trust score of any trained DL model could be extracted. The trained DL model is evaluated against different tasks in the PreserveTask and the entire confusion matrix of performance accuracy is obtained (5 × 5 corresponding to the five tasks). The behavior of an ideal DL model, would provide 100% accuracy on the leading diagonal i.e., the tasks it was trained for, while providing, random classification accuracies for other tasks. The confusion matrix for such an ideal DL model is shown in Figure 4. For example in the first row, the DL model was trained to learn and predict the color of the object. Hence, color prediction performance should be 1 (denoting, 100% accuracy), while other tasks should provide random 1/n accuracy, where n is the number of classes. Let the ideal performance matrix be denoted as M and the obtained performance matrix for a given trained DL model be T. By intuition, the matrix T that does not deviate much from the ideal matrix M should have a higher trust score. The trust score is mathematically computed as follows, where, W = 4 × I 5 · 1 5 provides the weight corresponding to each task pair, I is an identity matrix and 1 5 is a ones matrix, each of dimensionality 5 × 5. Since for each preserving task, there are four suppressing tasks, the deviation of the preserving task from the ideal matrix is scaled by a factor of four to normalize the computation. Note that if the diagonal elements perform poorly, the concern is on the performance of the model. On the contrary, if the non-diagonal elements has a higher performance, the concern is on the trust of a model from a privacy preservation perspective. The proposed metric implements this notion to compute the trustworthiness of a trained DL model. The trust score is bounded between. By Trust scores obtained after various suppression techniques for Inception-v1. It can be observed that using random labels for unknown tasks, we could improve the trustworthiness. empirical analysis, we observe that a trust score above 0.9 is highly desirable, a trust score between 0.8 and 0.9 is practically acceptable, and any score below 0.8 is considered poor. The trust score of the ideal matrix is 1, while the trust score of a 1 5 (all task classification performance is 100%) is 0.6259. 
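Since the exact expression for the trust score is not written out above, the sketch below implements one plausible reading of the verbal description: deviations of the obtained matrix T from the ideal matrix M are weighted four times more heavily on the diagonal (preserved tasks) than off it, averaged, and subtracted from 1. The weighting matrix and normalization should be treated as an assumed reconstruction rather than the exact published formula, although it reproduces the reference values quoted in the text.

```python
import numpy as np

def trust_score(T, class_counts):
    """T: k x k matrix where T[i, j] is the accuracy on task j of a probe
    trained on features of a model that was itself trained for task i.
    class_counts: number of classes per task, used to build the ideal
    matrix (1 on the diagonal, chance accuracy 1/n_j off the diagonal).

    NOTE: the 4x diagonal weight and the normalization below are an
    assumed reconstruction of the verbal description above."""
    T = np.asarray(T, dtype=float)
    k = T.shape[0]
    chance = 1.0 / np.asarray(class_counts, dtype=float)
    M = np.tile(chance, (k, 1))          # chance accuracy for every task pair
    np.fill_diagonal(M, 1.0)             # perfect accuracy on the preserved task
    W = np.ones((k, k)) + 3.0 * np.eye(k)  # diagonal deviations count 4x
    deviation = W * np.abs(T - M)
    return 1.0 - deviation.sum() / W.sum()

# Example: class_counts = [5, 7, 3, 4, 3] for the shape, color, size,
# location and background-color tasks of PreserveTask.
```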
To understand the sensitivity of the proposed metric, let us assume that in the ideal matrix, any one non-diagonal element is changed to 1, which results in a trust score of 0.98125. Thus, any reduction of (1 - 0.98125) = 0.01875 in the trust score corresponds to one additional unwanted task being learnt by the classifier. In this section, we show the experimental results and perform analysis of the proposed framework. Initially, we measure the trustworthiness of the existing models. We then experimentally demonstrate suppression of different tasks in various experimental settings. All the experiments are performed using the PreserveTask dataset. For additional and detailed comparison with other techniques, please refer to the appendix. Consider a popular deep learning model, Inception-v1, consisting of 22 computational layers. The model was trained from scratch on PreserveTask for the task of shape classification, providing 99.98% accuracy. In order to study whether this deep learning model learnt additional visual attributes as well, the last flatten layer's output (4096 × 1) was extracted. Four different two-hidden-layer neural network classifiers were trained using the extracted features to predict the size, color, location, and background color of the objects. The prediction accuracies were 97.29%, 51.25%, 99.98%, and 92.05%, respectively, for the four tasks. It can be observed that the performance of size, location, and background color prediction is really high, proving that the features obtained from the Inception-v1 model carry information corresponding to these tasks as well. Also, it can be observed that the color prediction performance is very low, as shape and color prediction are inherently independent tasks. A similar experiment is repeated by training the Inception-v1 model on each task in turn and using the learnt features to predict the performance of the other tasks, and the results are shown in Figure 4. Ideally, only the diagonal elements of this confusion matrix should have higher accuracies (red in color) while the rest of the predictions should have lower accuracies (green in color). Accordingly, the trust score of the trained Inception-v1 model (proposed in section 4.1) was found to be 0.7530, which is very poor. In order to further demonstrate that this additional intelligence is not a property of just the Inception-v1 model, similar experiments are performed using four other popular deep learning models: VGG16, VGG19, MobileNet, and DenseNet. The trust scores of all the DL models are shown in Figure 5. It can be observed that out of these five models, Inception-v1 and DenseNet have the lowest trust scores while MobileNet has the highest trust score. While one could argue that the Inception-v1 model learns highly generic features supporting multi-task and transfer learning, from a privacy preservation perspective, the model is found to have a poor trust score. This leads to the open question, "Do models always need to be additionally intelligent, and if not, how to suppress them?" In this section, we perform experiments to suppress the tasks that are known a priori during training, that is, the ground truth labels of the suppression task are available. For simplicity, in demonstrating the experimental results, we assume that one task is to be preserved and one task is to be suppressed, using the Inception-v1 model. This experimental setting is similar to the approach explained in Figure 3 (b). The gradient reversal (GR) layer unlearns the suppressed task, while learning the preserved task.
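A sketch of the gradient-reversal construction referred to above, written in PyTorch: the layer behaves as the identity on the forward pass and flips the sign of the gradient on the backward pass, so minimizing the suppression branch's cross-entropy pushes the shared features to maximize it. The backbone, branch sizes and the λ weighting are placeholders for illustration, not the exact Inception-v1 configuration used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)      # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output      # reversed gradient on the backward pass

def grad_reverse(x):
    return GradReverse.apply(x)

class PreserveSuppressHead(nn.Module):
    """One preserved-task head plus one suppression head behind a
    gradient-reversal layer, both fed by a shared feature extractor."""
    def __init__(self, backbone, feat_dim, n_preserve, n_suppress):
        super().__init__()
        self.backbone = backbone
        self.preserve = nn.Linear(feat_dim, n_preserve)
        self.suppress = nn.Linear(feat_dim, n_suppress)

    def forward(self, x):
        f = self.backbone(x)
        return self.preserve(f), self.suppress(grad_reverse(f))

def combined_loss(logits_p, y_p, logits_s, y_s, lam=0.7):
    # lambda-weighted preserved-task loss plus (1 - lambda)-weighted
    # suppression loss; the GR layer turns the second term into a
    # maximization with respect to the shared features.
    return lam * F.cross_entropy(logits_p, y_p) + \
           (1.0 - lam) * F.cross_entropy(logits_s, y_s)
```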
In order to compare the performance of GR, we also use a customized negative loss function which minimizes the loss obtained for the preserved task while maximizing the loss obtained for the suppressed task, weighted by a constant factor. The features eventually extracted from the flatten layer has to show similar performance on the preserved task while reduced performance on the suppressed task. Colour accuracy across experiments Figure 7: Comparison of color prediction performance with and without using the different task suppression mechanisms. It can be observed that using random labels reduces the performance of color prediction irrespective of whether the preserved task was shape or size prediction. corresponding trust scores are shown in Figure 5. It can be observed that suppressing known tasks using GR layer improves the trust of the baseline model from 0.7530 to 0.8563. The obtained in the previous section made the assumption that the ground truth labels of the suppression task have to be available while training the Inception-v1 model. In an attempt to break that assumption, the experimental setting discussed in Figure 3 (c) is performed. Instead of the actual ground truth labels of a particular task, randomly generated n-class labels are used during every mini-batch. Thus, for the same mini-batch training in the next epoch, a different set of random class labels are generated to be maximized. This ensures that the model does not memorize a single suppression task, but, learns to suppress all possible n-class classification tasks. Figure 6 (c) and (d) demonstrates the obtained by using random class labels. In comparison with Figure 4, it can be observed that using random class performs well in certain settings. For example, while trying to preserve the shape features and suppressing the prediction capacity of color, the original model's prediction performance of 92.05% reduced to 87.06% by using the actual labels of color, while further reduced to 33.37% while using random 3-class labels. It is further highlighted in Figure 7 where color prediction is chosen as the task to be suppressed, while shape and size are independently being preserved. It can be observed that the proposed framework of using random labels, reduces the performance of color prediction from 51.25% to 26.83% when using actual labels and 17.94% when using random labels, when shape prediction was the preserved task. A similar performance reduction from 35.59% to 14.29% is observed when size prediction was the preserved task. We conclude that using random labels for task suppression produces a comparable trust score to using known labels while producing better than the baseline trust score of a DL model. Colored MNIST Dataset: We introduced two additional tasks of foreground and color prediction tasks into the MNIST dataset. As shown in Figure 8, colored MNIST images are created by randomly assigning one of the 10 possible foreground colors and one of the different 10 possible colors. Similar assignment is performed in both training and test dataset, to maintain the standard experimental protocol. MobileNet model was trained from scratch to obtained a baseline trust score of 0.756. After using our framework for task suppression with random labels and gradient reversal based training on the suppression branch, we observed that the MobileNet models trust scores increased to 0.824. 
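The random-label variant described above only changes the targets fed to the suppression branch: a fresh set of n-class labels is drawn for every mini-batch, so there is no fixed suppressed task for the network to memorize. A minimal sketch, assuming the PyTorch setup from the previous snippet:

```python
import torch

def random_suppression_labels(batch_size, n_classes):
    """Draw a fresh set of n-class targets for the suppression branch.
    Called once per mini-batch, so the same images receive different
    random targets in different epochs."""
    return torch.randint(low=0, high=n_classes, size=(batch_size,))

# Inside the training loop, y_s = random_suppression_labels(x.size(0), n)
# replaces the ground-truth labels of the task to be suppressed.
```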
In Figure 8 (middle), the TSNE plot shows that when the model is learnt only for shapes, the features for'red' and'cyan' colored images are still separable. However, after suppressing the color prediction task using the proposed framework, the features'red' and'cyan' colored images are scattered and no longer separable, as shown in Figure 8 (right). Diversity in Faces (DiF) Dataset: In DiF dataset (Merler et al.), we considered the tasks of gender (two class) and pose (three class) classification. The aim is learn (preserve) only one of these while suppressing the other. Since, the dataset was highly skewed for different classes, we considered a subset of 39296 images with equal class balance 3. We trained Inception-v1 model on this dataset from scratch and obtained a trust score of 0.7497. Using our framework for task suppression with GR layer and known class labels, the trust score of the model increased to 0.8606. Additionally, with random unknown class labels, we observed that the model's trust scores increased to 0.9069. In IMDB-Wiki dataset , we considered the tasks of gender (two class) and age (ten class) classification. The cropped face images of the Wiki dataset are used to train the DenseNet model (the second least trusted model according to our trust scores). The trained model provided a baseline trust score of 0.7846. After using our framework for task suppression and known class labels, the trust score of DenseNet model increased to 0.7883. Also, with random unknown class labels, we observed that the model's trust scores increased to 0.7860. Thus, our framework for measuring and improving a DL model's trust has lots of practical applications. A face recognition system or a face image based gender recognition system can now be deployed with an additional trust on the model's intelligence level. In this research, we showcased a model-agnostic framework for measuring and improving the trustworthiness of a model from a privacy preservation perspective. The proposed framework did not assume the need for the suppression task labels during train time, while, similar performance could be obtained by training using random classification boundaries. A novel simulated benchmark dataset called PreserveTask was created to methodically evaluate and analyze a DL model's capability in suppressing shared task learning. This dataset opens up further research opportunities in this important and practically necessary research domain. Experimentally, it was shown that popular DL models such as VGG16, VGG19, Inception-v1, DenseNet, and MobileNet show poor trust scores and tend to be more intelligent than they were trained for. Also, we show a practical case study of our proposed approach in face attribute classification using: (i) Diversity in Faces (DiF) and (ii) IMDB-Wiki datasets. We would like to extend this work by studying the effect of multi-label classification tasks during suppression. This supplementary material contains all the detailed hyper-parameters used by different models that we trained, to aid in reproducing the that we showed in the research paper. Additionally, we provide more detailed analysis and visualizations of the , that could not be included in the paper due to space constraints. Five different baseline deep learning models were used in the experiments: Inception-v1, VGG16, VGG19, DenseNet, and MobileNet. 
The different parameters and the training process used in these experiments are shown below: • The data is z-normalized to have a zero mean and unit standard deviation, before being provided to the models for training. • The standard architectures of Inception-v1, VGG16, VGG19, DenseNet, and MobileNet are borrowed from the default implementations in the Keras library. • The deep learning models were trained with categorical cross-entropy and Adam optimizer with parameters as learning rate = 0.0001 and amsgrad set as F alse. For all the experiments, a two hidden layer neural network is used as a classifier. This is to maintain consistency of the same classifier across all the experiments. • The architecture is Dense → Dropout (0.5) → Dense → Dropout (0.3) → Dense (num of classes) • Each of the Dense layer has a ReLU activation function. • categorical cross-entropy is used as the loss function with Adam as the optimizer, having parameter values as learning rate = 0.0001 and amsgrad set as F alse. • 20% of the data is used as validation data and the model is trained for 100 epochs with early stopping. • Batch size of 32 was used to make the computation faster and the experiments were run using 1 × K80 GPU. In this section, we are including additional analysis, visualizations, and charts of the presented in the main paper. In order to aid better comparison, we include the charts and presented in the main paper also here, so that the supplementary could be read in an independent manner. Figure 9: Trust scores obtained for various DL models. It can be observed that, of the five models, the Inception-v1 and DenseNet has the least trust score while MobileNet has the highest. Figure 15: Trust scores obtained after various suppression techniques. It can be observed that even using random labels for unknown tasks, we could improve the trustworthiness of the Inception-v1 model on the PreserveTask dataset. Figure 16: The performance matrix heat-map, after suppressing a known task using negative loss, detailing the shared task performance of Inception-v1 model on the PreserveTask dataset. Figure 17: The performance matrix heat-map, after suppressing a known task using GR layer, detailing the shared task performance of Inception-v1 model on the PreserveTask dataset. Figure 18: The performance matrix heat-map, after suppressing a unknown task using negative loss, detailing the shared task performance of Inception-v1 model on the PreserveTask dataset. Figure 19: The performance matrix heat-map, after suppressing a unknown task using GR layer, detailing the shared task performance of Inception-v1 model on the PreserveTask dataset. Figure 20: Trust scores obtained in the Diversity in Faces (DiF) dataset after various suppression techniques. It can be observed that even using random labels for unknown tasks, we could improve the trustworthiness of the Inception-v1 model. | Can we trust our deep learning models? A framework to measure and improve a deep learning model's trust during training. | 1,094 | scitldr |
Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time horizon. We do this by using a learned model of the environment dynamics which is unrolled for multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory. This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We propose to measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction. In recent years, model-free reinforcement learning methods using deep neural network controllers have proven effective on a wide range of tasks, from playing video or text-based games BID26 BID29 to learning algorithms and complex locomotion tasks ). However, these methods often require a large number of interactions with the environment in order to learn. While this is not a problem if the environment is simulated, it can limit the application of these methods in realistic environments where interactions with the environment are slow, expensive or potentially dangerous. Building a simulator where the agent can safely try out policies without facing real consequences can mitigate this problem, but requires human engineering effort which increases with the complexity of the environment being modeled. Model-based reinforcement learning approaches try to learn a model of the environment dynamics, and then use this model to plan actions or train a parameterized policy. A common setting is where an agent alternates between collecting experience by executing actions using its current policy or dynamics model, and then using these experiences to improve its dynamics model. This approach has been shown empirically to significantly reduce the required number of environment interactions needed to obtain an effective policy or planner BID1 BID7 BID28 BID6.Despite these improvements in sample complexity, there exist settings where even a single poor action executed by an agent in a real environment can have consequences which are not acceptable. At the same time, with data collection becoming increasingly inexpensive, there are many settings where observational data of an environment is abundant. This suggests a need for algorithms which can learn policies primarily from observational data, which can then perform well in a real environment. Autonomous driving is an example of such a setting: on one hand, trajectories of human drivers can be easily collected using traffic cameras BID14, ing in an abundance of observational data; on the other hand, learning through interaction with the real environment is not a viable solution. However, learning policies from purely observational data is challenging because the data may only cover a small region of the space over which it is defined. 
If the observational data consists of stateaction pairs produced by an expert, one option is to use imitation learning BID36. However, this is well-known to suffer from a mismatch between the states seen at training and execution time BID37. Another option is to learn a dynamics model from observational data, and then use it to train a policy BID31. However, the dynamics model may make arbitrary predictions outside the domain it was trained on, which may wrongly be associated with low cost (or high reward) as shown in FIG0. The policy network may then exploit these errors in the dynamics model and produce actions which lead to wrongly optimistic states. In the interactive setting, this problem is naturally self-correcting, since states where the model predictions are wrongly optimistic will be more likely to be experienced, and thus will correct the dynamics model. However, the problem persists if the dataset of environment interactions which the model is trained on is fixed. In this work, we propose to train a policy while explicitly penalizing the mismatch between the distribution of trajectories it induces and the one reflected in the training data. We use a learned dynamics model which is unrolled for multiple time steps, and train a policy network to minimize a differentiable cost over this rolled-out trajectory. This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We measure this second cost by using the uncertainty of the dynamics model about its own predictions, calculated using dropout. We apply our approach in the context of learning policies to drive an autonomous car in dense traffic, using a large-scale dataset of real-world driving trajectories which we also adapt into an environment for testing learned policies 1. We show that model-based control using this additional uncertainty regularizer substantially outperforms unregularized control, and enables learning good driving policies using only observational data with no environment interaction or additional labeling by an expert. We also show how to effectively leverage an action-conditional stochastic forward model using a modified posterior distribution, which encourages the model to maintain sensitivity to input actions. We assume we are given a dataset of observational data which consists of state-action pairs D = {(s t, a t)} t. We first describe our general approach, which consists of two steps: learning an actionconditional dynamics model using the collected observational data, and then using this model to train a fast, feedforward policy network which minimizes both a policy cost and an uncertainty cost. In this work, we consider recent approaches for stochastic prediction based on Variational Autoencoders BID21 BID2 BID8 ). The stochastic model f θ (s 1:t, a t, z t) takes as input a sequence of observed or previously predicted states s 1:t, an action a t, and a latent variable z t which represents the information about the next state s t+1 which is not a deterministic function of the input. During training, latent variables are sampled from a distribution whose parameters are output by a posterior network q φ (s 1:t, s t+1) conditioned on the past inputs and true targets. 
This network is trained jointly with the rest of the model using the reparameterization trick, and a term is included in the loss to minimize the KL divergence between the posterior distribution and a fixed prior p(z), which in our case is an isotropic Gaussian. The per-sample loss used for training the stochastic model is given by: DISPLAYFORM0 After training, different future predictions for a given sequence of frames can be generated by sampling different latent variables from the prior distribution. Recent models for stochastic video prediction BID2 BID8 do not use their model for planning or training a policy network, and parameterize the posterior distribution over latent variables using a diagonal Gaussian. In our case, we are training an actionconditional video prediction model which we will later use to train a policy. This leads to an additional requirement: it is important for the prediction model to accurately respond to input actions, and not use the latent variables to encode factors of variation in the outputs which are due to the actions. To this end we propose to use a mixture of two Gaussians, with one component fixed to the prior, as our posterior distribution: DISPLAYFORM1 This can be seen as applying a form of global dropout to the latent variables at training time 2, and forces the prediction model to extract as much information as possible from the input states and actions by making the latent variable independent of the output with some probability. In our experiments we will refer to this parameterization as z-dropout. Once the forward model is trained, we use it to train a parameterized policy network π ψ, which we assume to be stochastic. We first sample an initial state sequence s 1:t from the training set, unroll the forward model over T time steps, and backpropagate gradients of a differentiable objective function with respect to the parameters of the policy network (shown in FIG1). During this process the weights of the forward model are fixed, and only the weights of the policy network are optimized. This objective function contains two terms: a policy cost C, which reflects the underlying objective the policy is trying to learn, and an uncertainty cost U, which reflects how close the predicted state induced by the policy network is to the manifold which the data D is drawn from. Training the policy using a stochastic forward model involves solving the following problem, where latent variables are sampled from the prior and input into the forward model at every time step: DISPLAYFORM0 The uncertainty cost U is applied to states predicted by the forward model, and could reflect any measure of their likelihood under the distribution the training data is drawn from. We propose here a general form based on the uncertainty of the dynamics model, which is calculated using dropout. Intuitively, if the dynamics model is given a state-action pair from the same distribution as D (which it was trained on), it will have low uncertainty about its prediction. If it is given a state-action pair which is outside this distribution, it will have high uncertainty. Dropout BID16 BID43 ) is a regularization technique which consists of randomly setting hidden units in a neural network to zero with some probability. The work of BID13 showed that estimates of the neural network's uncertainty for a given input can be obtained by calculating the covariance of its outputs taken over multiple dropout masks. 
We note that this uncertainty estimate is the composition of differentiable functions: each of the models induced by applying a different dropout mask is differentiable, as is the covariance operator. Furthermore, we can summarize the covariance matrix by taking its trace (which is equal to the sum of its eigenvalues, or equivalently the sum of the variances of the outputs across each dimension), which is also a differentiable operation. This provides a scalar estimate of uncertainty which is differentiable with respect to the input. More precisely, let f θ1,..., f θ K denote our prediction model with K different dropout masks applied to its hidden units (this can also be viewed as changing its weights). We define our scalar measure of uncertainty U as follows: DISPLAYFORM1 where d is the dimensionality of the output. Minimizing this quantity with respect to actions encourages the policy network to produce actions which, when plugged into the forward model, will produce predictions which the forward model is confident about 3.A simple way to define U given an initial sequence of states s 1:t from D would be to set U (ŝ t+k) = ŝ t+k − s t+k 2, which would encourage the policy network to output actions which lead to a similar trajectory as the one observed in the dataset. This leads to a set of states which the model is presumably confident about, but may not be a trajectory which also satisfies the policy cost C unless the dataset D consists of expert trajectories. If this is the case, setting Figure 3: Training the policy network using the differentiable uncertainty cost, calculated using dropout. DISPLAYFORM2 DISPLAYFORM3 We call the first approach MPUR, for Model-Predictive Policy with Uncertainty Regularization, and the second MPER, for Model-Predictive Policy with Expert Regularization. A key feature of both approaches is that we optimize the objective over T time steps, which is made possible by our learned dynamics model. This means that the actions will receive gradients from multiple time steps ahead, which will penalize actions which lead to large divergences from the training manifold further into the future, even if they only cause a small divergence at the next time step. Our MPUR approach can be viewed as training a Bayesian neural network (BNN) BID30 with latent variables using variational inference BID18 BID21. The distribution over model predictions for s t+1 is given by: DISPLAYFORM0 The distribution p(θ, z|D) reflects the posterior over model weights and latent variables given the data, and is intractable to evaluate. We instead approximate it with the variational distribution q parameterized by η = {φ, θ *}: DISPLAYFORM1 Here q φ represents a distribution over latent variables represented using a posterior network with parameters φ, which could be a diagonal Gaussian or the mixture distribution described in Section 2.1. The distribution q θ * is the dropout approximating distribution over forward model parameters described in BID13, a mixture of two Gaussians with one mean fixed at zero. We show in Appendix B that training the stochastic forward model with dropout by minimizing the loss function in Equation 1 is approximately minimizing the Kullback-Leibler divergence between this approximate posterior and the true posterior. Once the forward model is trained, for a given input we can obtain an approximate distribution over outputs p(ŝ t+1 |s 1:t, a) induced by the approximate posterior by sampling different latent variables and dropout masks. 
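As a concrete reference for the scalar uncertainty measure U defined above, the sketch below runs the same forward model under K independent dropout masks and sums the per-dimension variances of the predictions, i.e. the trace of their covariance. The function and argument names are placeholders, and the normalization by the output dimensionality d follows our reading of the (garbled) equation rather than the paper's exact form.

```python
import torch

def uncertainty_cost(forward_model, states, action, z, K=10):
    """Differentiable epistemic-uncertainty estimate U(s_hat_{t+1}).

    The model is kept in train() mode so dropout stays active; every one of the
    K passes is differentiable, so gradients of U flow back to the input action.
    """
    forward_model.train()                          # keep dropout on at "test" time
    preds = torch.stack([forward_model(states, action, z).flatten()
                         for _ in range(K)])       # shape (K, d)
    var_per_dim = preds.var(dim=0, unbiased=False) # variance across dropout masks
    return var_per_dim.sum() / preds.shape[1]      # (1/d) * trace of covariance
```

Minimizing this quantity with respect to the policy's action is exactly the mechanism used to keep predicted trajectories on the training manifold.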
We now show that the covariance of the outputsŝ t+1 can be decomposed into a sum of two covariance matrices which represent the aleatoric and epistemic uncertainty, using a similar approach as BID10. Using the conditional covariance formula we can write: DISPLAYFORM2 The first term is the covariance of the random vector E z [ŝ t+1 |s 1:t, a, θ] when θ ∼ q θ * (θ). This term ignores any contribution to the variance from z and only considers the effect of θ. As such it represents the epistemic uncertainty. The second term represents the covariance of the predictions obtained by sampling different latent variables z ∼ p(z) averaged over different dropout masks, and ignores any contribution to the variance from θ. As such it represents the aleatoric uncertainty. Our uncertainty penalty explicitly penalizes the trace of the first matrix where the expectation over z is approximated by a single sample from the prior. Note also that the covariance matrix corresponding to the aleatoric uncertainty will change depending on the inputs. This allows our approach to handle heteroscedastic environments, where the aleatoric uncertainty will vary for different inputs. We apply our approach to learn driving policies using a large-scale dataset of driving videos taken from traffic cameras. The Next Generation Simulation program's Interstate 80 (NGSIM I-80) dataset BID14 consists of 45 minutes of recordings from traffic cameras mounted over a stretch of highway. The driver behavior is complex and includes sudden accelerations, lane changes and merges which are difficult to predict; as such the dataset has high environment (or aleatoric) uncertainty. After recording, a viewpoint transformation is applied to rectify the perspective, and vehicles are identified and tracked throughout the video. This yields a total 5596 car trajectories, which we split into training (80%), validation (10%) and testing sets (10%). In all, the dataset contains approximately 2 million transitions. We then applied additional preprocessing to obtain a state and action representation (s t, a t) for each car at each time step, suitable for learning an action-conditional predictive model. Our state representation s t consists of two components: an image i t representing the neighborhood of the car, and a vector u t representing its current position and velocity. The images i t are centered around the ego car and encode both the lane emplacements and the locations of other cars. Each image has 3 channels: the first (red) encodes the lane markings, the second (green) encodes the locations of neighboring cars, which are represented as rectangles reflecting the dimensions of each car, and the third channel (blue) represents the ego car, also scaled to the correct dimensions. This is summarized in FIG2. The action a t at a given time step consists of a 2-dimensional vector representing the acceleration/braking of the car and its change in steering angle. We also define two cost functions which together make up the policy cost: a proximity cost which reflects how close the ego car is to neighboring cars, and a lane cost which reflects how much the ego car overlaps with lane markings. These are represented as a cost vector at each timestep, c t = (C proximity (s t), C lane (s t)). Full details can be found in Appendix A.We also adapted this dataset to be used as an environment to evaluate learned policies, with the same interface as OpenAI Gym BID5. 
Choosing a policy for neighboring cars is challenging due to a cold-start problem: to accurately evaluate a learned policy, the other cars would need to follow human-like policies which would realistically react to the controlled car, which are not available. We take the approach of letting all the other cars in the environment follow their trajectories from the dataset, while a single car is controlled by the policy we seek to evaluate. This approach avoids hand-designing a policy for the neighboring cars which would likely not reflect the diverse nature of human driving. The limitation is that the neighboring cars do not react to the controlled car, which likely makes the problem more difficult as they do not try to avoid collisions. A number of authors have explored the use of learned, action-conditional forward models which are then used for planning, starting with classic works in the 90's BID32 BID40 BID17, and more recently in the context of video games BID33 BID35 BID35, robotics and continous control BID12 BID0 BID28 BID42. Our approach to learning policies by backpropagating through a learned forward model is related to the early work of BID31 in the deterministic case, and the SVG framework of in the stochastic case. However, neither of these approaches incorporates a term penalizing the uncertainty of the forward model when training the policy network. The works of BID25 BID6 also used model uncertainty estimates calculated using dropout in the context of model-based reinforcement learning, but used them for sampling trajectories during the forward prediction step. Namely, they applied different dropout masks to simulate different state trajectories which reflect the distribution over plausible models, which were then averaged to produce a cost estimate used to select an action. Our model uncertainty penalty is related to the cost used in, who used dropout and model ensembling to compute uncertainty estimates for a binary action-conditional collision detector for a flying drone. These estimates were then used to select actions out of a predefined set which yielded a good tradeoff between speed, predicted chance of collision and uncertainty about the prediction. In our work, we apply uncertainty estimates to the predicted high-dimensional states of a forward model at every time step, summarize them into a scalar, and backpropagate gradients through the unrolled forward model to then train a policy network by gradient descent. The work of BID10 ) also proposed adding an uncertainty penalty when training paramaterized policies, but did so in the context of BNNs trained using α-divergences applied in low-dimensional settings, whereas we use variational autoencoders combined with dropout for high-dimensional video prediction. α-BNNs can yield better uncertainty estimates than variational inference-based methods, which can underestimate model uncertainty by fitting to a local mode of the exact posterior BID9 BID23. However, they also require computing multiple samples from the distribution over model weights when training the forward model, which increases memory requirements and limits scalability to high-dimensional settings such as the ones we consider here. The problem of covariate shift when executing a policy learned from observational data has been well-recognized in imitation learning BID36 BID37. The work of BID38 proposed a method to efficiently use expert feedback (if available) to correct this shift, which has also been applied in the context of autonomous driving . 
Our approach also addresses covariate shift, but does so without querying an expert. Our MPER approach is related to the work of BID11, who also performed imitation learning at the level of trajectories rather than individual actions. They did so in low-dimensional settings using Gaussian Processes, whereas our method uses an unrolled neural network representing the environment dynamics which can be applied to high-dimensional state representations. The work of BID3 also used a neural network dynamics model in the context of imitation learning, but did so in the interactive setting to minimize a loss produced by a discriminator network. Several works have used deep learning models for autonomous driving, either to learn policies through imitation learning BID36 BID22 BID4 BID34 or for modeling vehicle dynamics. These works focused on lane following or avoiding static obstacles in visually rich environments and did not consider settings with dense moving traffic. The work of BID39 developed a model of the interactions between two drivers which was then used to plan actions in simple settings, using symbolic state representations. In our work, we consider the problem of learning driving policies in dense traffic, using high-dimensional state representations which reflect the neighborhood of the ego car. We now report experimental results. We designed a deterministic and a stochastic forward model for the state and action representations described in Section 3, using convolutional layers to process the images i t and fully-connected layers to process the vectors u t and actions a t. All model details can be found in Appendix C and training details can be found in Appendix D. Code and additional video for the model predictions and learned policies can be found at the following URL: https://sites.google.com/view/model-predictive-driving/home. We first generated predictions using both the deterministic and stochastic forward models, shown in FIG3. The deterministic model produces predictions which become increasingly blurry, while the stochastic model produces predictions which stay sharp far into the future. By sampling different sequences of latent variables, different future scenarios are generated. Note that the two sequences generated by the stochastic model are different from the ground truth future which occurs in the dataset. This is normal as the future observed in the dataset is only one of many possible ones. Additional video generations can be viewed at the URL. The compared methods are:
Human: the human trajectories observed in the testing set, which are all collision-free.
No action: a policy which outputs an action of zero, maintaining constant speed and direction.
1-step IL: a policy network trained with single-step imitation learning.
SVG: a policy network trained with stochastic value gradients. This is the same setup as, with the difference that the agent does not interact with the environment and learns from a fixed observational dataset.
VG: a policy trained with value gradients, using the deterministic forward model. This is similar to SVG, but does not involve latent variables.
MPUR: a policy trained with MPUR, using a deterministic or stochastic model. A cost term is included to penalize the uncertainty of the dynamics model.
MPER: a policy trained with MPER, using a deterministic or stochastic model. The policy is trained to match expert trajectories from the training set.
Figure 6: a) Performance of different methods, measured in success rate and distance travelled.
Including a cost term penalizing the dynamics model's uncertainty is essential for good performance. Using the modified posterior distribution (z-dropout) improves performance when using the stochastic forward model. b) Training policies by performing longer rollouts through the environment dynamics model also significantly improves performance. c) Summary of compared methods. We evaluated policies using two measures: whether the controlled car reaches the end of the road segment without colliding into another car or driving off the road, and the distance travelled before the episode ends. Policies which collide quickly will travel shorter distances. We compared our approach against several baselines which can also learn from observational data, which are described in Figure 6c. Table 1: Policy and uncertainty costs with and without uncertainty regularization. The policy trained with unregularized VG exploits errors in the forward model to produce actions which yield low predicted cost but high uncertainty. Including the uncertainty cost yields higher predicted cost, but better performance when the policy is executed in the environment.the environment can be found at the URL. The policies learn effective behaviors such as braking, accelerating and turning to avoid other cars. Figure 7 shows trajectories on the map for different methods. We see that the single-step imitation learner produces divergent trajectories which turn into other lanes, whereas the MPUR and MPER methods show trajectories which primarily stay within their lanes. MPUR becomes equivalent to VG in the deterministic setting if we remove the uncertainty penalty, and the large difference in performance shows that including this penalty is essential. Table 1 shows the average predicted policy cost and uncertainty cost of the two methods. VG produces much lower predicted policy cost, yet very high uncertainty cost. This indicates that the actions the policy produces induce a distribution over states which the forward model is highly uncertain about. The policy trained with MPUR produces higher policy cost estimates, but lower uncertainty cost, and performs much better when executed in the environment. The stochastic model trained with a standard Gaussian posterior yields limited improvement over the deterministic model. However, the stochastic model trained with the z-dropout parameterization yields a significant improvement. Comparisons of action-conditional predictions using both models can be seen at the URL. The standard model is less responsive to input actions than the model trained with z-dropout, which likely accounts for their difference in performance. We hypothesize that the standard model uses its latent variables to encode factors of variation in the output which are in fact due to the actions. Using z-dropout discourages this since information in the output cannot be encoded in the latent variables the times they are sampled from the prior, and the loss can better be lowered by predicting the outputs from the actions instead. Please see Appendix E for additional experiments and discussion of this phenomenon. Figure 6b shows the performance of MPUR and MPER for different rollout lengths. All methods see their performance improve dramatically as we increase the rollout length, which encourages the distribution of states the policy induces and the training distribution to match over longer time horizons. 
We also see that the stochastic model with z-dropout outperforms the standard stochastic model as well as the deterministic model over most rollout lengths. In this work, we proposed a general approach for learning policies from purely observational data. The key elements are: i) a learned stochastic dynamics model, which is used to optimize a policy cost over multiple time steps, ii) an uncertainty term which penalizes the divergence of the trajectories induced by the policy from the manifold it was trained on, and iii) a modified posterior distribution which keeps the stochastic model responsive to input actions. We have applied this approach to a large observational dataset of real-world traffic recordings, and shown it can effectively learn policies for navigating in dense traffic, which outperform other approaches which learn from observational data. However, there is still a sizeable gap between the performance of our learned policies and human performance. We release both our dataset and environment, and encourage further research in this area to help narrow this gap. We also believe this provides a useful setting for evaluating generative models in terms of their ability to produce good policies. Finally, our approach is general and could potentially be applied to many other settings where interactions with the environment are expensive or unfeasible, but observational data is plentiful. To begin with, we describe the details and preparation of the dataset and planning environment which we used, which are summarized in There are three time segments, each of 15 minutes, taken at different times of day which capture the transition between uncongested and congested peak period conditions. After recording, a viewpoint transformation is applied to rectify the perspective, and vehicles are identified and tracked throughout the video; additionally, their size is inferred. This yields a total 5596 car trajectories, represented as sequences of coordinates {x t, y t}. We split these trajectories into training (80%), validation (10%) and testing sets (10%).We then applied additional preprocessing to obtain suitable representations for learning a predictive model. Specifically, we extracted the following: i) a state representation for each car at each time step s t, which encodes the necessary information to choose an action to take, ii) an action a t which represents the action of the driver, and iii) a cost c t, which associates a quality measure to each state. We describe each of these below. State representation: Our state representation consists of two components: an image representing the neighborhood of the car, and a vector representing its current position and velocity. For the images, we rendered images centered around each car which encoded both the lane emplacements and the locations of other cars. Each image has 3 channels: the first (red) encodes the lane markings, the second (green) encodes the locations of neighboring cars, which are represented as rectangles reflecting the dimensions of each car, and the third channel (blue) represents the ego car, also scaled to the correct dimensions. All images have dimensions 3 × 117 × 24, and are denoted by i t. 4 Two examples are shown in FIG6. We also computed vectors u t = (p t, ∆p t), where p t = (x t, y t) is the position at time t and ∆p t = (x t+1 − x t, y t+1 − y t) is the velocity. The higher the speed, the longer the safety distance required to maintain low cost. 
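To make the state representation above concrete, a toy sketch of rasterizing i_t is given below. The real preprocessing pipeline (viewpoint rectification, smoothing, exact scaling of car rectangles to pixel coordinates) is more involved; all helper names and the assumption that rectangles arrive already in pixel coordinates are illustrative only.

```python
import numpy as np

H, W = 117, 24  # image size used in the paper: 3 x 117 x 24

def render_state_image(lane_rows, car_boxes, ego_box):
    """Toy rasterizer for the ego-centred state image i_t.

    lane_rows: iterable of row indices of lane markings (pixel coordinates).
    car_boxes / ego_box: (top, bottom, left, right) pixel rectangles.
    """
    img = np.zeros((3, H, W), dtype=np.float32)
    for r in lane_rows:                    # channel 0 (red): lane markings
        img[0, r, :] = 1.0
    for t, b, l, r in car_boxes:           # channel 1 (green): neighbouring cars
        img[1, t:b, l:r] = 1.0
    t, b, l, r = ego_box                   # channel 2 (blue): ego car
    img[2, t:b, l:r] = 1.0
    return img
```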
Action representation: Each action vector a t consists of two components: an acceleration (which can be positive or negative) which reflects the change in speed, and a change in angle. The acceleration at a given time step is computed by taking the difference between two consecutive speeds, while the change in angle is computed by projecting the change in speed along its orthogonal direction: DISPLAYFORM0 Cost: Our cost function has two terms: a proximity cost and a lane cost. The proximity cost reflects how close the ego car is to neighboring cars, and is computed using a mask in pixel space whose width is equal to the width of a lane and whose height depends on the speed of the car. Two examples are shown in FIG6. This mask is pointwise multiplied with the green channel, and the maximum value is taken to produce a scalar cost. The lane cost uses a similar mask fixed to the size of the car, and is similarly multiplied with the red channel, thus measuring the car's overlap with the lane. Both of these operations are differentiable so that we can backpropagate gradients with respect to these costs through images predicted by a forward model. This preprocessing yields a set of state-action pairs (s t, a t) (with s t = (i t, u t)) for each car, which constitute the dataset we used for training our prediction model. We then use the cost function to optimize action sequences at planning time, using different methods which we describe in Section 2.2.We now describe how we adapted this dataset to be used as an environment to evaluate planning methods. Building an environment for evaluating policies for autonomous driving is not obvious as it suffers from a cold-start problem. Precisely measuring the performance of a given driving policy would require it to be evaluated in an environment where all other cars follow policies which accurately reflect human behavior. This involves reacting appropriately both to other cars in the environment as well as the car being controlled by the policy being evaluated. However, constructing such an environment is not possible as it would require us to already have access to a policy which drives as humans do, which in some sense is our goal in the first place. One could hand-code a driving policy to control the other cars in the environment, however is it not clear how to do so in a way which accurately reflects the diverse and often unpredictable nature of human driving. observation = env.reset while not done: action = policy(observation) observation, reward, done, info = env.step(action) env.render Figure 9: NGSIM planning environment. We adopt a different approach where we let all other cars in the environment follow their trajectories in the dataset, while controlling one car with the policy we seek to evaluate. The trajectory of the controlled car is updated as a function of the actions output by the policy, while the trajectories of the other cars remain fixed. If the controlled car collides with another car, this is recorded and the episode ends. This approach has the advantage that all other cars in the environment maintain behavior which is close to human-like. The one difference with true human behavior is that the other cars do not react to the car being controlled or try to avoid it, which may cause crashes which would not occur in real life. The driving task is thus possibly made more challenging than in a true environment, which we believe is preferable to using a hand-coded policy. 
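For reference, the differentiable proximity and lane costs described in the Cost paragraph above can be sketched as follows. The image layout matches the State representation paragraph (e.g. an image such as the toy rasterizer above would produce); the masks are taken as given, since their exact construction (speed-dependent height for the proximity mask, car-sized footprint for the lane mask) is only summarized in the text, and all names are illustrative.

```python
import torch

def proximity_and_lane_cost(image, prox_mask, lane_mask):
    """image: (3, H, W) float tensor -- ch0 lanes, ch1 other cars, ch2 ego car.
    prox_mask: (H, W) mask centred on the ego car, taller at higher speed.
    lane_mask: (H, W) mask matching the ego car's own footprint.

    Each cost is a max over a pointwise product, so it is (sub)differentiable
    and gradients can be pushed back into images predicted by the forward model.
    """
    c_proximity = (image[1] * prox_mask).max()  # overlap with neighbouring cars
    c_lane = (image[0] * lane_mask).max()       # overlap with lane markings
    return torch.stack([c_proximity, c_lane])
```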
The interface is set up the same way as environments in OpenAI Gym BID5, and can be accessed with a few lines of Python code, as shown in Figure 9. Our MPUR approach can be viewed as training a Bayesian neural network (BNN) BID30 with latent variables using variational inference BID18 BID21. The distribution over model predictions for s t+1 is given by: DISPLAYFORM0 The distribution p(θ, z|D) reflects the posterior over model weights and latent variables given the data, and is intractable to evaluate. We instead approximate it with the variational distribution q parameterized by η: DISPLAYFORM1 Here q φ represents a distribution over latent variables parameterized by φ, which could be a diagonal Gaussian or the mixture distribution described in Section 2.1. The distribution q θ * is the dropout approximating distribution over model parameters described in BID13 (Section 3.2 of Supplement). This defines a mixture of two Gaussians with small variances over each row of each weight matrix in the forward model, with the mean of one Gaussian fixed at zero. The parameters of this distribution are the model weights, and samples can be drawn by applying different dropout masks. The parameters of the variational distribution are thus η = {θ *, φ}, and can be optimized by maximizing the evidence lower bound, which is equivalent to minimizing the Kullback-Leibler divergence between the approximate posterior and true posterior: DISPLAYFORM2 Here p 0 (z, θ) = p 0 (z) · p 0 (θ) represents a prior over latent variables and model parameters. By applying the chain rule for KL divergences together with the fact that z and θ are independent, we obtain: DISPLAYFORM3 Setting both Gaussians in p 0 (θ) to have zero mean, the second KL term becomes equivalent to scaled 2 regularization on the model parameters, and can be set arbitrarily small (Section 4.2 in Supplement of BID13). Ignoring this term and approximating the integral in Equation 3 using a single sample, we obtain: DISPLAYFORM4 whereθ ∼ q θ * (θ) and z ∼ q φ (s 1:t, s t+1)). Assuming a diagonal Gaussian likelihood on the outputs with constant variance 1/β, we can rewrite this as: DISPLAYFORM5 Multiplying by β does not change the maximum. We now see that maximizing this quantity is equivalent to minimizing our loss term in Equation 1, i.e. training a variational autoencoder with dropout. The architecture of our forward model consists of four neural networks: a state encoder f enc, an action encoder f act, a decoder f dec, and the posterior network f φ. At every time step, the state encoder takes as input the concatenation of 20 previous states, each of which consists of a context image i t and a 4-dimensional vector u t encoding the car's position and velocity. The images i t−20,..., i t are run through a 3-layer convolutional network with 64-128-256 feature maps, and the vectors u t−20,..., u t are run through a 2-layer fully connected network with 256 hidden units, whose final layers contain the same number of hidden units as the number of elements in the output of the convolutional network (we will call this number n H). The posterior network takes the same input as the encoder network, as well as the the ground truth state s t+1, and maps them to a distribution over latent variables, from which one sample z t is drawn. This is then passed through an expansion layer which maps it to a representation of size n H. 
The action encoder, which is a 2-layer fully-connected network, takes as input a 2-dimensional action a t encoding the car's acceleration and change in steering angle, and also maps it to a representation of size n H. The representations of the input states, latent variable, and action, which are all now the same size, are combined via addition. The is then run through a deconvolutional network with 256-128-64 feature maps, which produces a prediction for the next image i t+1, and a 2-layer fully-connected network (with 256 hidden units) which produces a prediction for the next state vector u t+1. These are illustrated in Figure C.The specific updates of the stochastic forward model are given by: DISPLAYFORM0 DISPLAYFORM1 The per-sample loss is given by: We also train a cost predictor which takes as input the states predicted by the forward model and produces a two-dimensional output (one output for the proximity cost, and one output for the lane cost). This consists of a 3-layer encoder followed by a two-layer fully connected network with sigmoid non-linearities as the end to constrain the values between 0 and 1. DISPLAYFORM2 We trained our prediction model in deterministic mode (p = 0) for 200,000 updates, followed by another 200,000 updates in stochastic mode. We save the model after training in deterministic mode and treat it as a deterministic baseline. Our model was trained using Adam BID20 with learning rate 0.0001 and minibatches of size 64, unrolled for 20 time steps, and with dropout (p dropout = 0.1) at every layer, which was necessary for computing the epistemic uncertainty cost when training the policy network. All cars are initialized at the beginning of the road segment with the initial speed they were driving at in the dataset, and then are controlled by the policy being measured. We only report performance for cars in the testing trajectories, which were not used when training the forward model or policy network. All policy networks have the same architecture: a 3-layer ConvNet with feature maps of size 64-128-256 (which takes 20 consecutive frames as input), followed by 3 fully-connected layers with 256 hidden units each, with the last layer outputting the parameters of a 2D Gaussian distribution from which the action is sampled. All policy networks are trained with Adam with learning rate 0.0001. The MPER and MPUR policies are trained by backpropagation through the unrolled forward model using the reparamaterization trick BID21. The single-step imitation learner is trained to directly minimize the negative log-likelihood of the ground truth action in the dataset under the parameters output by the policy network. All MPUR policies use a weighting of λ = 0.5 for the uncertainty cost. Additionally, we detach gradients of predicted costs coming into the states, to prevent the policy from lowering the predicted cost (which is speed-dependent) by slowing down. We found that not doing this can in the policy slowing excessively, and then attempting to speed up only when another car gets close. We repeat the policy training with 3 random seeds for each method. The policy cost which we minimize for VG, SVG and MPUR is given by: DISPLAYFORM0 where C proximity and C lane are the proximity and lane costs described in Section 3. This puts a higher priority on avoiding other cars while still encouraging the policy to stay within the lanes. 
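Putting the training procedure together, the sketch below shows one MPUR update as described above: the frozen forward model is unrolled for T steps under actions from the policy, and only the policy's weights are updated to reduce the accumulated policy cost plus λ times the uncertainty penalty U (discussed in detail next). All function names are placeholders, the relative lane-cost weight is an assumption (the exact weighting equation is not recoverable here), and details such as detaching the speed-dependent part of the cost gradient are noted but simplified.

```python
import torch

def mpur_update(policy, forward_model, cost_fn, uncert_fn, prior, seed_states,
                optimizer, T=20, lam=0.5, lane_weight=0.5):
    """One MPUR gradient step.

    Only the policy's parameters are registered in `optimizer`, so the forward
    model is never updated (it keeps dropout active for the uncertainty term).
    `lane_weight` is a hypothetical relative weight on the lane cost.
    The paper additionally detaches the cost gradient flowing into the states
    to stop the policy from gaming the speed-dependent cost; omitted here.
    """
    states = seed_states                       # initial observed sequence s_{1:t}
    total = 0.0
    for _ in range(T):
        action = policy(states)                # a ~ pi_psi(.|s_{1:t}), reparameterized
        z = prior.sample()                     # latent sampled from the prior each step
        next_state = forward_model(states, action, z)
        c_prox, c_lane = cost_fn(next_state)   # differentiable proximity / lane costs
        total = total + c_prox + lane_weight * c_lane \
                + lam * uncert_fn(states, action, z)
        states = torch.cat([states[1:], next_state.unsqueeze(0)], dim=0)
    optimizer.zero_grad()
    total.backward()                           # gradients from all T steps reach the policy
    optimizer.step()
    return float(total.detach())
```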
MPUR additionally minimizes U, the model uncertainty cost described in Section 2.2.When computing the uncertainty cost, to compensate for differences in baseline uncertainty across different rollout lengths, we normalize by the empirical mean and variance for every rollout length t of the forward model over the training set, to obtain µ t U and σ t U. We then define our uncertainty cost as follows: DISPLAYFORM1 If the uncertainty estimate is lower than the mean uncertainty estimate on the training set for this rollout length, this loss will be zero. These are cases where the model prediction is within normal uncertainty ranges. If the uncertainty estimate is higher, this loss exerts a pull to change the action so that the future state will be predicted with higher confidence by the forward model. We found the lack of responsiveness of the stochastic forward model to be especially pronounced when it is given a sequence of latent variables inferred from the current training sequence by the posterior network, instead of a sequence of latent variables sampled from the prior (intuitively, using the inferred sequence corresponds to using the future from the dataset, while using a sampled sequence corresponds to a different future). One reason for this difference in responsiveness may be that in the first case, the latent variables are highly dependent whereas in the second they are independent. If action information is encoded in the latent variables, the effects on the output may partially cancel each other out when the latents are independent. However, when they are highly dependent, together they may explain away the effects of the actions input to the forward model. The table below shows the performance of MPUR policies learned using inferred and sampled latent variables. We see a large drop in performance when using the inferred latent variables. This is consistent with the videos at the URL, which show that the forward model is less sensitive to actions when the latent variables are sampled from the posterior instead of the prior. Note that the z-dropout parameterization reduces this problem somewhat. Although the goal of this work is to learn policies with no environment interaction, for completeness we also report of running Proximal Policy Optimization (PPO) BID41, a state-of-the-art model-free algorithm which learns through interacting with its environment. We used the OpenAI Baselines implementation BID5 with the same policy network architecture as for the other methods, and set the reward to be the negative policy cost defined in equation 9. We measure both final performance and cumulative regret, using the success rate as a reward. Cumulative regret is a measure often used in online learning which represents the difference between the agent's accumulated reward and the accumulated reward which would have been obtained by following the optimal policy, which we take to be human performance here. Specifically, the regret at epoch M is given by: DISPLAYFORM0 where R * represents the reward obtained by following the optimal policy and R m is the reward obtained by following the policy at epoch m. Unlike final performance, regret also reflects poor decisions made during the learning process. Results are shown in FIG0. PPO obtains slightly higher final performance than MPUR, but also incurs higher regret as it executes poor policies in the environment during the early stages of learning. 
In contrast, MPUR learns through observational data and already has a good policy in place when it begins interacting with the environment. | A model-based RL approach which uses a differentiable uncertainty penalty to learn driving policies from purely observational data. | 1,095 | scitldr |
Dynamical system models (including RNNs) often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application. In this paper we show that hierarchical multi-task dynamical systems (MTDSs) provide direct user control over sequence generation, via use of a latent code z that specifies the customization to the individual data sequence. This enables style transfer, interpolation and morphing within generated sequences. We show the MTDS can improve predictions via latent code interpolation, and avoid the long-term performance degradation of standard RNN approaches. Time series data often arise as a related'family' of sequences, where certain characteristic differences exist between the sequences in a dataset. Examples include the style of handwritten text , the response of a patient to an anaesthetic , or the style of locomotion in motion capture (mocap) data . In this paper, we will consider how such variation may be modelled, and effectively controlled by an end user. Such related data is often pooled to train a single dynamical system, despite the internal variation. For a simple model, such as a linear dynamical system (LDS), this will in learning only an average effect. In contrast, a recurrent neural network (RNN) may model this variation, but in an implicit and opaque manner. Such a'black-box' approach prohibits end-user control, and may suffer from mode drift, such as in , where a generated mocap sequence performs an unprompted transition from walking to drinking. Some of these problems may be alleviated by appending'context labels' to the inputs (see e.g. , §10.2.4) which describe the required customization. However, such labels are often unavailable, and the approach may fail to model the variation adequately even when they are. To move beyond these approaches, we consider latent variable models, where a latent variable z characterizes each sequence. This may be seen as a form of multi-task learning (MTL, see), from which we derive the name multi-task dynamical system (MTDS), with each sequence treated as a task. A straightforward approach is to append the latent z to the inputs of the model, similarly to the'context label' approach, thereby providing customization of the various bias (or offset) parameters of the model. A number of examples of this have been proposed recently, e.g. in and Miladinović et al.. Nevertheless, this'bias customization' has limited expressiveness and is often unsuitable for customizing simple models. In this paper we investigate a more powerful form of customization which modulates all the system and emission parameters. In this approach, the parameters of each task are constrained to lie on a learned low dimensional manifold, indexed by the latent z. Our experiments show that this approach in improved performance and/or greater data efficiency than existing approaches, as well as greater robustness to unfamiliar test inputs. Further, varying z can generate a continuum of models, allowing interpolation between sequence predictions (see Figure 1b for an example), and potentially morphing of sequence characteristics over time. Contributions In this paper we propose the MTDS, which goes beyond existing work by allowing full adaptation of all parameters of general dynamical systems via use of a learned nonlinear manifold. We show how the approach may be applied to various popular models, and provide general purpose...... learning and inference algorithms. 
Our experimental studies use synthetic data (sum of two damped harmonic oscillators) and real-world human locomotion mocap data. We illuminate various properties of the MTDS formulation in our experiments, such as data efficiency, user control, and robustness to dataset shift, and show how these go beyond existing approaches to time series modelling. We finally utilize the increased user control in the context of mocap data to demonstrate style morphing. To this end, we introduce the model in Section 2, giving examples and discussing the particular challenges in learning and inference. We discuss the relation to existing work in Section 3. Experimental setup and are given in Section 4 with a in Section 5. Consider a collection of input-output sequences D = {Y Ti}, i = 1,..., N, where T i denotes the length of sequence i. Each sequence i is described by a different dynamical system, whose parameter θ (i) depends on the hierarchical latent variable z (i) ∈ Z: for t = 1,..., T i. The state variables X (i) = {x Ti}, x t ∈ X follow the latent dynamics starting from x 0:= 0 (other choices of initial state are possible). See Figure 1a for a graphical model. In this paper we assume Z = R k which the vector-valued function h φ (·) transforms to conformable model parameters θ ∈ R d, d k. Note that h φ may keep some dimensions of θ constant with respect to z. We call this a Multi-Task Dynamical System, going beyond the usage in. In order to make the framework more concrete we will describe two general choices of the base model. In what follows we will write each parameter with a subscript z to denote dependence on z (e.g. A z := A(z)) to reduce notational clutter. The choice of p(z) and h φ will depend on the application, but a fairly general choice is a deep latent Gaussian model . See section A.1.1 in the supplementary material for further discussion. For a given z, a multi-task linear dynamical system (MTLDS) can be described by: w t ∼ N (0, R z), t ∼ N (0, S z), with θ z = {A z, B z, b z, C z, D z, d z, R z, S z} = h φ (z). The parameterization of θ z must satisfy the constraints of positive definite R z and S z and stable A z (i.e. A z 2 ≤ 1) for all z, hence projection methods such as in are not applicable. We choose an alternative formulation of the LDS, replacing the latent dynamics in eq. by: where Σ z is a diagonal matrix and Q z orthogonal with no loss of generality (proof in supp. mat.). Since Σ z Q z ≤ Σ z Q z = Σ z, stability can be enforced e.g. by Σ = diag{tanh (υ)} for some vector υ. For more details see section A.1.2 in the supplementary material. Due to the nonlinearity of an RNN, enforcing stability of A z is not strictly required (see e.g. , §4.4), although bounding the spectral radius may be useful for learning (e.g.). The dynamics of a multi-task RNN (MT-RNN) are described by: Combined with the emission model of eq., we have If long-term dependencies are important we may consider an orthogonal transition matrix (parameterized as for the MTLDS) to create a multi-task version of the Orthogonal RNN . The parameters φ of an MTDS can be learned from a dataset The first term in the integrand,. For clarity of exposition we only consider models with deterministic state, which extends to the MTRNN and MTLDS above (in the case w t = 0 for all t). This also avoids the interaction effect with the choice of approximate marginalization over X. Equation also cannot be computed in closed form in general, and so we resort to approximate learning. 
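Before turning to approximate learning, the generative structure just described can be made concrete with a short rollout sketch for one task: a latent z is drawn from the prior, mapped through h_φ to the full task-specific parameter vector θ_z, and a deterministic state recurrence then produces the outputs from x_0 = 0. The module interfaces and the way θ_z is consumed are illustrative only, not the paper's implementation.

```python
import torch

def mtds_rollout(h_phi, prior, dynamics, emission, inputs):
    """Generate y_{1:T} for one task of an MTDS with deterministic state.

    h_phi: maps z in R^k to the task parameters theta_z on the learned manifold.
    dynamics / emission: callables taking (state, input, theta_z); `state_dim`
    is a hypothetical attribute giving the dimension of x_t.
    """
    z = prior.sample()                     # z ~ p(z); one draw per sequence/task
    theta_z = h_phi(z)                     # all system and emission parameters depend on z
    x = torch.zeros(dynamics.state_dim)    # x_0 := 0, as in the text
    outputs = []
    for u_t in inputs:                     # u_t: control input at step t
        x = dynamics(x, u_t, theta_z)      # x_t = f(x_{t-1}, u_t; theta_z)
        outputs.append(emission(x, u_t, theta_z))  # emission mean (noise omitted)
    return torch.stack(outputs)
```

Holding z fixed over the rollout gives one member of the continuum of models; interpolating or varying z along the way is what enables the style interpolation and morphing discussed later.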
A natural choice is via the ELBO (see an alternative MCO approach for unsupervised tasks in Section A.1.3, supp. mat.). We write the ELBO of eq. as: where D KL is the Kullback-Leibler divergence and q λ (z|Y, U) an approximate posterior for z. We can now optimize a lower bound of the marginal likelihood via arg max φ,λ where low variance unbiased gradients of eq. are available via reparameterization , where µ λ, s λ are inference networks (e.g. Fabius & van). It can be difficult to learn a sensible latent representation if the base model is a powerful RNN. When each output can be identified unambiguously via the inputs preceding it, a larger ELBO can be obtained by the RNN learning the relationship without using the latent variable (see e.g.). A useful heuristic for avoiding such optima is KL annealing (e.g.). In our experiments we perform an initial optimization without the KL penalty (second term in eq. 9), initializing s λ (Y, U) to a small constant value. For an unseen test sequence {Y, U}, the posterior predictive distribution is p(y t+1:T | y 1:t, u 1:T) = Z p(y t+1:T | u 1:T, z) p(z | y 1:t, u 1:t) dz, usually estimated via Monte Carlo. The key quantity is the posterior over z, which may be approximated by the inference networks µ λ, s λ. However, for novel test sequences, the inference networks may perform poorly and standard approximate inference techniques may be preferred. For further discussion and a description of our inference approach, see sections A.1.4, A.1.5 in the supp. mat. We note that z may not require inference, for instance by using the posterior of a sequence in the the training set. This may be useful for artistic control, style transfer, embedding domain knowledge or overriding misleading observations. We also note that the latent code can be varied during the state rollout to simulate a task which varies over time. A number of dynamical models following the'bias customization' approach have been proposed recently. Miladinović et al. and propose models where the biases of an LSTM cell depend on a (hierarchical) latent variable. propose a dynamical system where the latent dynamics are concatenated with a time-constant latent variable. In contrast, our MTDS model performs full parameter customization, and this on both the dynamics and emission distributions. A number of other proposals may be considered specialized applications of the MTDS. use a small deterministic nonlinear dynamical system for the base model whose parameters depend on z using a nonlinear factor analysis structure. use a small RNN as the base model, where the transition matrix depends on z via multilinear decomposition. use a small stochastic nonlinear dynamical system with h φ a set of parameter vectors chosen discretely (or in convex combination) via z. Controlling and customizing sequence prediction has received much attention in the case of video data. As in the MTDS, these approaches learn features that are constant (or slowly varying) within a subsequence. and propose methods for disentangling time-varying and static features, but do not provide a useful density over the latter, nor an obvious way to customize the underlying dynamics. use a GAN architecture where z factorizes into content and motion components. force a parts-based decomposition of the scene before inferring the latent content z. However, as before, the dynamic evolution cannot be easily customized with these methods. 
Hierarchical approaches for dynamical systems with time-varying parameters are proposed in (corresponding to non-stationary assumptions) and in (for the purposes of local LDS approximation). These models, like the MTDS can adapt all the parameters, but are linear and correspond to single task problems. predict the parameters of simple time-varying LDS models directly via an RNN. While this is a multi-task problem, it is assumed that all necessary variation can be inferred from the inputs U. Multi-task GPs are commonly used for sequence prediction. Examples include those in; Titsias & Lázaro-; Álvarez et al.;. MTGPs however can only be linear combinations of (a small number of) latent functions, further, predictions depend critically upon often unknown mean functions, and inputs are not easily integrated. Note that an MTDS with no inputs, an LDS base model, a linear-Gaussian prior over the emission parameters We investigate the performance of the MTDS on two datasets. The first experiment investigates the performance of the MTDS on synthetic data generated by linear superposition of damped harmonic oscillation (DHO). The second experiment considers real-world mocap data for human locomotion. Data The generative model for J oscillators with constant amplitudes γ and variable frequency and decay factors,.., 80 for tasks i = 1, 2,..., N. The emission noise is distributed iid as Model We model the DHO data using an MTLDS with deterministic state X = R 4 and a k = 4 latent variable z. All LDS parameters were adapted via the latent z except D:= 0 and the emission variance s 2, which was learned. For optimization, we use the MCO algorithm of section A.1.3. This can obtain a tighter bound than the ELBO, and is useful to investigate convergence to the true model over increasing N. We contrast this with a bias customization approach (e.g. Miladinović et al., 2019), implemented similarly, but such that only the parameter b (eq. 6) depends on z. We also train a Pooled LDS, which is the standard approach, using the same parameters for all tasks, and a single-task (STL) LDS which is learned from scratch using Bayesian inference over all parameters for each task. The Pooled-LDS was initialized using spectral methods (see Van Evaluation We assess how quickly and effectively the models can adapt to novel test sequences with a training set size of N = 2 1, 2 2, . . ., 2 7 . (The STL approach effectively uses N = 0.) The test set comprises 20 additional sequences drawn from the generating distribution. For an initial subsequence y 1:t, we estimate the predictive posterior p(y t+1:T |y 1:t) for various t and assess the predictions via root mean squared error (RMSE) and negative log likelihood (NLL). For MTL we use the Monte Carlo inference method described in supp. mat. A.1.5 and for STL we use Hamiltonian Monte Carlo . Each experiment is repeated 10 times to estimate sampling variance. The , shown in Table 1 and supp. mat. section A.2.2, show substantial advantage of using the MTLDS ('MT Full') over single-task or pooled approaches. The MTLDS consistently outperforms the Pooled-LDS for all training sizes N ≥ 4. Merely performing bias customization ('MT Bias') is insufficient to perform much better than a pooled approach. An example of MTLDS test time prediction is shown in Figure 2, with Figures 2c and 2d demonstrating effective generalization from the N = 4 training examples (Figure 2a). 
Even after 40 observations, the STL approach (which is capable of fitting each sequence exactly) does not significantly outperform the N = 4 MTLDS. Furthermore, the runtime was approx. 1000 times longer since STL inference is higher dimensional and poorly conditioned, and requires a more expensive algorithm. Note that with a larger training set size of N = 128, the MLTDS approaches the likelihood of the true model (Figure 7, supp. mat.). Data The dataset consists of 31 sequences from (ca. 2000 frames average at 30fps) in 8 styles: angry, childlike, depressed, neutral, old, proud, sexy, strutting. In this case the family of possible sequences corresponds to differing walking styles. Each observation represents a 21-joint skeleton in a Lagrangian frame, y t ∈ R 64 where the root movement is represented by a smoothed component and its remainder. we represent joints by their spatial position rather than their rotation. We also provide inputs that an animator may wish to control: the root trajectory over the next second, the gait cycle and a boolean value determining whether the skeleton turns around the inside or outside of a corner. See section A.3.1 in the supplementary materials for more details. Model We use a recurrent 2-layer base model where the first hidden layer is a 1024 unit GRU and the second hidden layer is a 128 unit standard RNN, follwed by a linear decoding layer. The first-layer GRU does not vary with z, i.e. it learns a shared representation of the input sequence across all i. Explicitly, omitting index i, the model for a given z is: for t = 1,..., T. The parameters are θ = {ψ 1, ψ 2, H, C, d} where ψ 1 and H are constant wrt. z. The matrix H ∈ R ×1024 (< 1024) induces a bottleneck between layers, forcing z to explain more of the variance. For our experiments, a small can be used (we use = 24). The first layer GRU uses 1024 units since it was observed experimentally to produce smoother animations than smaller networks. The second layer does not use a gated architecture, as gates appear to learn style inference more easily, and in less use of z. For learning, each sequence was broken into overlapping segments of length 64 (approx. two second intervals), which allows z to vary across a sequence. We learn the model using an open-loop objective, i.e. the y t are not appended to the inputs. This forces the model to recover from its mistakes as in , although unlike these approaches, we do not append predictions to the inputs either. Our rationale is that the state captures the same information as the predictions, and while previous approaches required observations y 1:τ as inputs to seed the state, we can use the latent z. The model was optimized using the variational procedure in section 2.2, where a slower learning rate (by a factor of 10-50) for the first layer parameters (i.e. ψ 1, H) usually ed in a more descriptive z. We also found that standard variational inference for each z (i) worked better in general than using amortized inference. For comparison, we implement a bias customization model ('MTBias') via a deterministic state version of Miladinović et al., 2019, which follows eqs.- but only the RNN bias in eq. is a function of z. We also implement a 1-layer and 2-layer GRU without the multi-task apparatus, which serves both as an ablation test and a competitor model on the new dataset. Style inference is performed with the same network given an initial seed sequence y 1:τ. We train these in closed-loop (i.e. 
traditional next step 'teacher forcing' criterion) and open-loop settings. For baselines, we use constant predictions of (i) the training set mean and (ii) the last observed frame of the seed sequence ('zero-velocity' prediction). We test the data efficiency of the MTDS by training the models on subsets of the original dataset. Besides the models described above, 8'single-task' versions of the GRU models are trained which only see data for a single style. We use six training sets of approximate size 2 8, 2 9, 2 10, 2 11, 2 12, 2 13 frames per style, where sampling is stratified carefully across all styles, and major variations thereof. For all experiments, the model fit (MSE) is calculated from the same 32 held out sequences (each of length 64). The are shown in Figure 3a. As expected, the MTDS, MTBias and Pooled models obtain'multi-task' gains over STL approaches for small datasets. However, the MTDS demonstrates much greater data efficiency, achieving close to the minimum error with only 7% of the dataset. The MTBias model requires more than twice this amount to obtain the same performance, and the Pooled model requires more than four times this amount. More details, as with all mocap experiments, can be found in supp. mat. section A.2.3. We investigate how well the MTDS can generalize to novel sequence styles via use of a leave-one-out (LOO) setup, similar to transfer learning. For each test style, a model is trained on the other 7 styles in the training set, and hence encounters novel sequence characteristics at test time. We average the test error over the LOO folds as well as 32 different starting locations on each test sequence. The are given in Figure 3b. We see that while the competitor (pooled) models perform well initially, they usually degrade quickly (worse for closed-loop models). In contrast, the multi-task models finds a better customization which evidences no obvious worsening over the predictive interval. Unlike pooled-RNNs, the MTDS and MTBias models can firstly perform correct inference of their customization, and secondly can'remember' it over long intervals. We note that all models struggle to customize the arms effectively, since their test motions are often entirely novel. Customization to the legs and trunk is easier since less extrapolation is required (see animation videos linked in section A.4.1). We investigate the control available in the latent z by performing style transfer. For various inputs U (s1) from each source style s 1, we generate predictions from the model using target style s 2, encoded by z (s2). We use a classifier with multinomial outputs, trained on the 8 styles of the training set, to test whether the target style s 2 can be recognized from the data generated by the MTDS. Figure 3c gives the classifier'probability' for each target style s 2, averaged over all the inputs {U (s1): s 1 = s 2 }. Successful style transfer should in a the classifier assigning a high probability to the target style. These suggest that the prediction style can be well controlled by z (s2) in the case of the full MTDS, but the MTBias demonstrates reduced control for some (source, target) pairs. See the videos linked in section A.4.1 for examples, and sec. A.2.3 for more details. Qualitative investigation Qualitatively, the MTDS appears to learn a sensible manifold of walking styles, which we assess through visualization of the latent space. A k = 2 latent embedding can be seen in Figure 4 where the z (i) for each training segment i is coloured by the true style label. 
Some example motions are plotted in the figure. The MTDS embedding broadly respects the style label, but learns a more nuanced representation, splitting some labels into multiple clusters and coalescing others. These appear broadly valid, e.g. the'proud' style contains both marching and arm-waving, with the latter similar to an arm-waving motion in the'childlike' style. This highlights the limitation of relying on task labels. Visualizations such as Fig. 1b indicate that smooth style interpolation is available via interpolation in latent space. We take advantage of this in the animations (linked from sec. A.4.1) by morphing styles dynamically. In this work we have shown how to extend dynamical systems with a general-purpose hierarchical structure for multi-task learning. Our MTDS framework performs customization at the level of all parameters, not just the biases, and adapts all parameters for general classes of dynamical systems. We have seen that the latent code can learn a fine-grained embedding of sequence variation and can be used to modulate predictions. Clearly good predictive performance for sequences requires task inference, whether implicit or explicit. There are three advantages of making this inference explicit. Firstly, it enhances control over predictions. This might be used by animators to control the style of predictions for mocap models, or to express domain knowledge, such as ensuring certain sequences evolve similarly. Secondly, it can improve generalization from small datasets since task interpolation is available out-of-the-box. Thirdly, it can be more robust against changes in distribution at test time than a pooled model: is a unit Gaussian p(z) = N (0, I). This choice allows simple sampling schemes, and straight-forward posterior approximations. It is also a useful choice for interpolation, since it allows continuous deformation of its outputs. An alternative choice might be a uniform distribution over a compact set, however posterior approximation is more challenging, see Svénsen for one approach. Sensible default choices for h φ include affine operators and multilayer perceptrons (MLPs). However, when the parameter space R d is large, it may be infeasible to predict d outputs from an MLP. Consider an RNN with 100k parameters. If an MLP has m L−1 = 300 units in the final hidden layer, the expansion to the RNN parameters in the final layer will require 30×10 6 parameters alone. A practical approach is to use a low rank matrix for this transformation, equivalent to adding an extra linear layer of size m L where we must have m L m L−1 to reduce the parameterization sufficiently. Since we will typically need m L to be O, we are restricting the parameter manifold of θ to lie in a low dimensional subspace. Since MLP approaches with a large base model will then usually have a restricted final layer, are there any advantages over a simple linear-Gaussian model for the prior p(z) and h φ? There may indeed be many situations where this simpler model is reasonable. However, we note some advantages of the MLP approach: 1. The MLP parameterization can shift the density in parameter space to more appropriate regions via nonlinear transformation. 2. A linear space of recurrent model parameters can yield highly non-linear changes even to simple dynamical systems (see e.g. the bifurcations in §8 of). We speculate it might be advantageous to curve the manifold to avoid such phenomena. 3. More expressive choices may help utilization of the latent space (e.g.). 
This may in fact motivate moving beyond a simple MLP for the h φ. The matrices A, B, R, S of the MTLDS can benefit from specific parameterizations, which we will discuss in turn. Degeneracy of LDS. It will be useful to begin with the well-known over-parameterization of linear dynamical systems. The hidden dynamics of a LDS can be transformed by any invertible matrix G while retaining the same distribution over the emissions Y. This follows essentially because the basis used to represent X is arbitrary. The distribution over Y is unchanged under the following parameter transformations: Parameterization of A. The stability constraint, is equivalent to ensuring that the singular values of A lie within the unit hypercube (since singular values are non-negative). Let A = U ΣV T be the singular value decomposition (SVD) of A. Now we have from the previous that if an LDS has latent dynamics with transition parameter A, we may replace the dynamics under the similarity transform G −1 AG. Choose G = U, i.e. the left singular values of A, and hence A = ΣV T U =: ΣQ for some orthogonal matrix Q. This follows from the closure of the orthogonal group under multiplication, which is easily verified. Note that in choosing this transformation, no additional constraints are placed on the other parameters in the LDS. Orthogonal matrices can be parameterized in a number of ways (see e.g.). A straight-forward choice is the Cayley transform.: "if Q is an orthogonal matrix that does not have the eigenvalue -1, then it may be written in Cayley's form: where S is skew-symmetric". In order to permit negative eigenvalues, we can pre-multiply by a diagonal matrix E with elements in {+1, −1}. Since we then have A = ΣEQ, E can be absorbed into Σ, and so the stability constraint can be satisfied with the parameterization A = ΣQ where Σ is a diagonal matrix with elements in [−1, +1] and Q is a Cayley-transform of a skew-symmetric matrix. This follows from the overparameterization of the LDS, and we emphasise that the system equations and are not equivalent, but any LDS distribution over Y can be written with latent dynamics of the form. Choose G = κ −1 I in eq.. It may be observed that the scale κ of the latent system can be chosen arbitrarily without affecting A. We wish to avoid such degeneracies in a hierachical model, since we may otherwise waste statistical strength and computation on learning equivalent representations. We can remove this by fixing the scale of B. An indirect but straightforward approach is to upper bound the magnitude of each element of B. ForB predicted by h φ (z) we might choose the transformation B = tanh(B) where tanh acts element-wise. If a sparse B is desired, one can use an over-parameterization of two matricesB 1,B 2, and choose B = σ(B 1) • tanh(B 2), where • is element-wise multiplication, and σ a logistic sigmoid. The former parameterization is unlikely to find a sparse representation since the gradient of tanh is greatest at 0. Parameterization of R, S. The covariance matrices R, S must be in the positive definite cone. Where a diagonal covariance will suffice, any parameterization for enforcing positivity can be used, such as exponentiation, squaring or softplus. A number of parameterizations are available for full covariance matrices (see). A simple choice is to decompose the matrix, say R = LL T, where L is a lower triangular Cholseky factor. As before, it is useful to enforce uniqueness, which can be done by ensuring the diagonal is positive. 
We provide an alternative learning algorithm to the VB approach in section 2.2 which obtains a tighter lower bound. This was important for the DHO experiments in order to monitor convergence to the true model. The below is perhaps a novel approach for learning in unsupervised cases (i.e. where U = ∅), but cannot be performed efficiently for supervised problems without modification. Monte Carlo Objectives construct a lower bound for marginal likelihoods via a transformation of an appropriate Monte Carlo estimator. Specifically we consider the logarithmic transformation of: m = 1,..., M; an importance sampling estimator for p(Y). Using Jensen's inequality, we show that the following is a lower bound on the log marginal likelihood: where p(z 1:M):= p(z 1)...p(z M). The tightness of the bound can be increased by increasing the number of samples M . Assuming p(z) has been re-parameterized to be parameter-free, we can easily calculate the gradient (if not, see). By exchanging integration and differentiation, we can calculate the gradient as: Note that eq. is an importance sampled version of the Fisher identity. We might expect this estimator to suffer from high variance, since the prior is a poor proposal for the posterior. However, the prior should not be a poor proposal for the aggregate posterior, i.e. ). In fact, importance sampling from the prior may serve as a useful bias in this case, attracting the posterior distributions which have a large D KL (p(z | Y (i) )||p(z)) towards the prior. Our observation is that sampling from the prior can be amortized over each sequence Y (i), i = 1,..., N. Specifically, for each particle z m, the dynamics, can be run forward once to calculatê Y m, from which the likelihood Y (i), for all tasks i = 1,..., N can be calculated inexpensively. The amortized cost of taking M samples (e.g. M ∈ O(10 3)) now becomes M/N, which may be relatively small. We can also take advantage of low-discrepancy random variates such as Sobol sequences to reduce variance. We propose that each sequence i resamples a small number M rsmp ≤ 5 of particles from the importance weights for each i to reduce the cost of backpropagation (a similar resampling scheme is suggested in). See Algorithm 1. In the supervised case (i.e. where each observation Y (i) has a different input U (i) ), running the dynamics forward from a particle z m can no longer be amortized over all {Y (i) } since the prediction We can therefore only amortize the parameter generation θ = h φ (z), which is often less expensive than running the dynamics forward. For this reason Algorithm 1 is primarily restricted to unsupervised problems. A hybrid approach would essentially in the importance weighted autoencoder (IWAE) of. Inference at test time can be performed by any number of variational or Monte Carlo approaches. As in the main text, our focus here is on deterministic state dynamical systems. For stochastic state models, additional reasoning similar to Miladinović et al. will be required. A gold standard of inference over z may be the No U-Turn Sampler (NUTS) of (a form of Hamiltonian Monte Carlo), provided k is not too large and efficiency is not a concern. However, given the sequential nature of the model, it is natural to consider exploiting the posterior at time t for calculating the posterior at time t + 1. Bayes' rule suggests an update of the end end Optimize(optimizer, φ, g); end end following form: following the conditional independence assumptions of the MTDS. 
This update (in principle) incorporates the information learned at time t in an optimal way, and further suggests a constant time update wrt t. However, evaluation of p(y t+1 | u 1:t+1, h φ (z)) usually scales linearly with t, since the state x t+1 must be calculated recursively from x 0 given z and u 1:t+1. Nevertheless, sequential incorporation of previous information will perform a kind of annealing which reduces the difficulty, and hopefully the runtime of inference at each stage. We first provide some of the difficulties of such an approach, looking first at Monte Carlo (MC) methods. Naïve application of Sequential Monte Carlo (SMC) will in severe particle depletion over time. To see this, let the posterior after time t be p(z | y 1:t, u 1:t) = 1 M M m=1 w m δ(z− z m). Then the updated posterior at time t + 1 will be:, simply a re-weighting of existing particles. Over time, the number of particles with significant weights w m will substantially reduce. But since the model is static with respect to z (see), there is no dynamic process to'jitter' the {z m} as in a typical particle filter, and hence a resampling step cannot improve diversity. discusses two related solutions: firstly using'rejuvenation steps' (cf.) which applies a Markov transition kernel to each particle. The downside to this approach is the requirement to run until convergence; and the diagnosis thereof, which can in substantial extra computation. One might instead sample from a fixed proposal distribution (accepting a move with the usual Metropolis-Hastings probability) for which convergence is more easily monitored. A Sequential Monte Carlo sampler approach may be preferred, which permits local moves, and can reduce sample impoverishment via resampling (similar to SMC). However, the approach requires careful choices of both forward and backward Markov kernels which substantially reduces its ease of use. A well-known variational approach to problems with the structure of eq. is assumed density filtering (ADF, see e.g.). For each t, ADF performs the Bayesian update and the projects the posterior into a parametric family Q. The projection is done with respect to the reverse KL Divergence, i.e. q t+1 = arg min q∈Q D KL p(z | y 1:t+1, u 1:t+1) || q. Intuitively, the projection finds an'outer approximation' of the true posterior, avoiding the'mode seeking' behaviour of the forward KL, which is particularly problematic if it attaches to the wrong mode. Clearly the performance of ADF depends crucially on the choice of Q. Unfortunately, where Q is expressive enough to capture a good approximation, the optimization problem will usually be challenging, and must resort to stochastic gradient approaches, ing in an expensive inner loop. Furthermore, when the changes from q t to q t+1 are relatively small, the gradient signal will be weak, ing perhaps in misdiagnosed convergence and hence accumulation of error over increasing t. A recent suggestion of is to improve efficiency via re-use of previous (stale) gradient evaluations. Standard variance reduction techniques may also be considered to improve convergence in the inner loop. In our experiments, we found sampling approaches faster and more reliable for each update, as well as providing diagnostic information, and so we eschew variational approaches. (Our experiments used a fairly small k (≤ 10); variational approaches may be preferred in higher dimensional problems.) Specifically we use iterated importance sampling (IS) to update the posterior at each t. 
The key quantity for IS is the proposal distribution q prop: we need a proposal that is well-matched to the target distribution. Our observation is that the natural annealing properties of the filtering distributions (eq. 23) allow a slow and reliable adaptation of q prop. In order to capture complex multimodal posteriors, we parameterize q prop by a mixture of Gaussians (MoG). For each t, the proposal distribution is improved over N AIS iterations using adaptive importance sampling (AdaIS), described for mixture models in Cappé et al.. We briefly review the methodology for a target distribution p *. Let the AdaIS procedure at the nth iteration use the proposal: α j ∈ R + s.t. For our experiments, this approach worked robustly and efficiently, and appears superior to the alternatives discussed. Unlike SMC, we obtain a q prop which is a good parameteric approximation of the true posterior. We therefore avoid the sample impoverishment problem discussed above (eq. 25). Due to the small number of iterations of AdaIS required (usually ≤ 5 for our problems), it is substantially faster than MCMC moves, and since stochastic gradients are avoided, convergence is much faster than variational approaches. The scheme benefits from the observed fast initial convergence rates of the EM algorithm (see e.g.), particularly since early stopping can be used for the initial iterates. In practice, one may not wish to calculate a posterior at every t, but instead intervals of length τ. In our DHO experiments (k = 4) we use τ = 5, and usually have ESS > 0.6M after n = 4 inner iterations, with total computation per q t requiring 250-300ms on a laptop. We observe in our experiments that posteriors are often multimodal for t ≤ 20 and sometimes beyond, motivating the MoG parameterization. In these experiments, the MoG appears to capture the salient characteristics of the target distribution well. Note as in section A.1.3, Sobol or other low-discrepancy sequences may be used to reduce sampling variance from q prop. Under review as a conference paper at ICLR 2020 for t = 1,..., T, z ∈ R 4, x t ∈ R 4, suppressing task index i for clarity. Define x 0:= 0, and for all tasks u = [1, 0, 0, 0, . . .]. A is parameterized as discussed in section A.1.2 using a product of a diagonal and orthogonal matrix ΣQ. The diagonal of Σ is constrained to lie in [−1, +1] using the tanh function, and Q is parameterized by the Cayley transform of a skew symmetric matrix S. Using an upper triangular matrix Γ, we have S = Γ − Γ T, and Q = (I − S)(I + S) −1. We parameterize B via the product of logistic sigmoid and tanh functions as in section A.1.2 in order to learn a sparse parameterization. C is unconstrained, and the parameter s is optimized as a constant wrt. z. The STLDS is parameterized in the same way. The prior p(z) is a unit Gaussian distribution, and h φ is a 2 hidden-layer neural network. We use a fixed feature extractor T in the first layer in order to help encode a rectangular support within a spherically symmetric distribution. The second layer is a fully-connected 300 unit layer with sigmoid activations. Learning. The output of an MTLDS is very sensitive to the parameter A, and care must be taken to avoid divergence during optimization. The diagonal-orthogonal parameterization greatly helped to stabilize the optimization over a more naïve orthogonal-diagonal-orthogonal SVD parameterization. We also reduced the learning rate by a factor of 10 for A. 
It proved useful to artificially elevate the estimate of s during training using a prior log s ∼ N (−1.5, 0.05) (derived from preliminary experiments) since the MTDS can otherwise overfit small datasets (see also discussion in §3, Svénsen, 1998), with associated instability in optimization. The learning rate schedule is given in Table 2, for which the prior over log s ∼ N (m, 0.05) was annealed from m = −1.0 to m = −1.5. The "momentum" parameter β 1 (c.f.) is also reduced at the end of optimization. The latter was motivated by oscillation and moderate deviations observed near optima, apparently caused (upon investigation) by strong curvature of the loss surface. Inference. The latent z are inferred online using the adaptive IS scheme of section A.1.5. We also perform inference over log s since it is held artificially high for optimization, and its true optimal value is not known. An informative prior close to the learned value log s ∼ N −2.0, 0.1 2, was Epoch η β 1 log s mean M 1 8e-4 0.9 -1.0 1 000 200 8e-4 0.9 -1.3 1 000 600 4e-4 0.9 -1.5 2 000 1000 2e-4 0.8 -1.5 4 000 nevertheless used since the posterior was sometimes approximately singular, causing high condition numbers in the estimated covariance matrix of the proposal. The hyperparameters are given in Table 3. These parameters did not require tuning as for optimization, but were sensible defaults. These also seem to work well without tuning for other experiments such as the Mocap data. Each posterior for a given time t took on average approx. 0.3 seconds. We used the No U-Turn Sampler for the STL experiments due to poor conditioning and the higher complexity and dimensionality of the posterior (19 dimensions). Tuning is performed using ideas from Hoffman & Gelman (2014, Algorithm 4, 5), and the mass matrix is estimated from the warmup phase. 2 The warmup stage lasted 1000 samples and the subsequent 600 samples were used for inference. Each sampler was initialized from a MAP value, obtained via optimization with 10 random restarts. For both MAP optimization and sampling, we found it essential to enforce a low standard deviation (we used log s = −2 and log s ∼ N −2, 0.2 2 respectively) similarly to the MTL experiments. The autocorrelation-based effective sample size (, ch. 11.5) typically exceeds 100 for each parameter. Each posterior for a given time t took on average approx. 300 seconds. Note that as discussed in section A.1.4, unlike our AdaIS procedure, we cannot make much re-use of previous computation here. The average (over the 10 repetitions) are given in Table 4, which extends Table 1 in the main text with the NLL . The distribution of these can be seen in the violin plots of Figure 8. The RMSE of the MTLDS are all significantly better than both the pooled and single-task models according to a Welch's t-test and Mann-Whitney U-test, except for MTLDS-4 at t = 40. The latter is significantly better than the pooled model, but is indistinguishable from the STLDS at the level α = 0.05. We also consider the convergence of the MTLDS to the true model with increasing N. For each experiment, we average the log marginal likelihood of the test sequences estimated via 10 000 (Sobol) samples from the prior. As before, the prior should be a good proposal for the aggregate posterior, and we amortize the same samples over all test sequences. In order to interpret the difference to the true distribution log p * (Y test) − log p(Y test | φ), we use the Bayes Factor interpretations given by. 
For instance a difference of 1.0 is'barely worth mentioning', but a difference of 4.0 is'strong evidence' that the distributions are different. We average over 10 000 test examples to avoid sampling variation of the test set. Figure 7 show boxplots of the log marginal likelihood for each model over increasing N, where the boxes show the interquartile range (IQR) over the 10 repetitions. We see convergence towards the true value with increasing N, with the difference of the MTLDS-128'barely worth mentioning'. Figure 9a, which is a subset of the CMU skeleton. Representation in observation space. We choose a Lagrangian representation (Figure 9c) where the coordinate frame is centered at the root joint of the skeleton (joint 1 in Fig. 9a, the pelvis), projected onto the ground. The frame is rotated such that the z-axis points in the "forward" direction, roughly normal to the body. This is in contrast to the Eulerian frame (Figure 9b) which has an absolute fixed position for all t. In the Lagrangian frame, the joint positions are always relative to the root joint, which avoids confusing the overall trajectory of the skeleton (typified by the root joint), and the overall rotation of the skeleton, with the local motions of the joints. The relative joint positions can be represented by spatial position or by joint angle. For the latter, the spatial positions of all joints can be recovered from the angle made with their parent joint via use of forward kinematics (FK). This construction ensures the constant bone length of the skeleton over time, which is a desirable property. However, it also substantially increases the sensitivity of internal joints. For instance, the rotation of the trunk will disproportionately affect the error of the joints in both arms. For this reason, we have chosen to model the spatial position of joints, which may in violations of bone length, but avoids these sensitivity issues. See also §2.1. One can further encode the joint positions via velocity (i.e. differencing) which may in smoother predictions. We avoid this encoding for the local joint motion (joints 2 to 21) since it can suffer from accumulated errors, but we do use it to predict the co-ordinate frame as is standard in mocap models. Hence our per-frame representation consists of the velocityẋ,ż,ω of the co-ordinate frame, the relative vertical position of the root joint, and 3-d position of the remaining 20 joints, which gives y t ∈ R 64. Choice of inputs. Our choice of inputs will reflect controls that an animator may wish to manipulate. The first input will be the trajectory that the skeleton is to follow. As in Holden et al. (2017b), we provide the trajectory over the next second (30 frames), sampled uniformly every 5 frames. Unlike previous work, there is no trajectory history in the inputs since this can be kept in the recurrent state. The (2-d) trajectory co-ordinates are given wrt. the current co-ordinate frame, and hence can rotate rapidly during a tight corner. In order to provide some continuity in the inputs, we also provide a first difference of the trajectory in Eulerian co-ordinates. Table 5: Hyper-parameters of mocap models. η denotes the learning rate. The velocity implied by the differenced trajectory does not disambiguate the gait frequency vs. stride length. The same motion might be achieved with fast short steps, or slower long strides. We therefore provide the gait frequency via a phasor (as in b), whose frequency may be externally controlled. 
This is provided by sine and cosine components to avoid the discontinuity at 2π. A final ambiguity exists from the trajectory at tight corners: the skeleton can rotate either towards the focus of the corner, or towards the outside. Figure 9d demonstrates the latter, which appears not infrequently in the data. We provide a boolean indicator alongside the trajectory which identifies corners for which this happens. Altogether we have u t ∈ R 32: 12 inputs for the Lagrangian trajectory, 12 inputs for the differenced Eulerian trajectory, 2 inputs for the gait phase and 6 inputs for the turning indicators. Extracting the root trajectory. The root trajectory is computed by projecting the root joint onto the ground. However, this projection may still contain information about the style of locomotion, for instance via swaying. We wish to remove all such information, since a model can otherwise learn the style without reference to a latent z. Our goal is to find an appropriately smoothed version of the extracted trajectory T. We use a cubic B-spline fit to control points fitted to the'corners' of the trajectory. These control points are selected using a polygonal approximation to T using the Ramer-Douglas-Peucker algorithm (RDP, e.g.). Briefly, the RDP algorithm uses a divide-and-conquer approach which greedily chooses points that minimize the Hausdorff distance of T to the polygonal approximation. Some per-style tuning of the RDP parameter, and a small number of manually added control points rendered this a semi-automatic process. Extracting the gait phase. Foot contacts are calculated via the code used by Holden et al. (2017b), which is based on thresholding of vertical position and velocity of each foot. As in we check visually for outliers and correct misclassified foot contacts manually. The leading edge of each foot contact is taken to represent 0 (left) and π (right), and the gait phase is calculated by interpolation. In this section we discuss elements of the experimental setup, learning and inference common to all experiments. Details particular to each experiment can be found in the following section. Further Model Details The MTDS architecture is described in section A.3, aside from the choice of prior. We tested both linear and nonlinear h φ in preliminary experiments and the performance was often similar. The nonlinear version used a one hidden layer MLP with 300 hidden units with tanh activations. For the final affine layer, we used a rank 30 matrix which, chosen pragmatically as a trade-off between flexibility and parameter count (see discussion in section A.1.1). Both choices often performed similarly, however the linear approach was chosen, since optimization of the latent z on new data was faster, and apparently more robust to choice of initialization. A nonlinear h φ may be more important when the base model is simpler. The benchmark models use an encoding length of τ = 64 frames. The encoder shares parameters with the decoder, i.e. the RNN is simply'warm started' for 64 frames before prediction. The benchmark models, unlike the MTDS, predict the difference from the previous frame (or 'velocity') via a residual architecture, as this performs better in. Further Learning Details Our primary goal was qualitative: to obtain good style-content separation, high quality animations and smooth interpolation between sequences. Therefore hyperparameter selection for the MTDS proceeded via quantitative means (via the ELBO) and visual inspection of the qualitative criteria. 
The qualitative desiderata motivated split learning rates between shared and multi-task networks (cf. section 4.2), and the amount of L2 regularization. See Table 5 for the chosen values. The main learning rate η applies to the fixed parameters wrt. z (i.e. ψ 1, H), and the multi-task learning rate applies to the parameter generation parameters φ and inference parameters λ. Standard variational inference proved more reliable than amortized inference: we used a Gaussian with diagonal covariance (parameterized using softplus) for the variational posterior over each z. L2 regularization was applied to φ, ψ 1, H. Unless otherwise specified, we optimized each model using a batch size N batch = 16 for 20 000 iterations. The ELBO had often reached a plateau by this time, and training even longer ed in a worse latent representation at times (as evidenced through poor style transfer). As noted in the main text, we remove the KL penalty of eq. for the initial 2 000 iterations, and enforce a small posterior standard deviation (s λ = 10 −3) for the same duration. This is similar to finding a MAP estimate for the {z}. For the remaining iterations, the original ELBO criterion is used, and the constraint on s λ is removed. The model is implemented in PyTorch and trained on GPUs. Since we use a fairly small max. sequence length L = 64, truncated backpropagation through time was not necessary. The hyper-parameters for the benchmark models were found (Table 5) using a grid search over learning rate and regularization, as well as the optimizers {Adam, (vanilla) SGD}. We performed the search over the pooled data for all 8 styles, with a stratified sample of 12.5% held out for a validation set. Once the hyperparameters were chosen, benchmark models were also trained for 20 000 iterations, recording the validation error every 1 000 iterations on a stratified 12.5% held out sample. The model with the lowest validation error during optimization is chosen. We standardize the data so that when pooled, each dimension has zero mean and unit variance. Finally, note that as discussed in section A.3.1, the data are represented in Lagrangian form, therefore drifts in the predicted trajectory from the true one are not necessarily heavily penalized. This can be altered by changing the weights on the root velocities, but we did not do this. Inference. At test time, especially for experiment 2, we cannot expect amortized inference to perform optimally, and we consider standard inference techniques. We want to understand the nature of the posterior distributions, and so we again used the AdaIS approach of section A.1.5. In practice, each posterior was unimodal and approximately Gaussian. Furthermore, the variation in sequence space for different z in the posterior was usually fairly small, and the posterior predictive mean performed similarly to using a point estimate. Each observation from which z is inferred is of size 64 × 64 and hence the posterior is fairly concentrated. Unlike the DHO model, this is a more expensive procedure. Our k = 3 experiments took approx. 24 seconds per observation for inference. An optimization approach using standard techniques may be expected to perform similarly at a reduced computational cost. Hence unless otherwise specified, inference was done via optimization. Experiment 1 -MTL The training data for each style uses 4 subsequences chosen carefully to represent the inter-style variation. Obviously it is important that frames are consecutive rather than randomly sampled. 
Over the increasing size training sets, each of these subsequences is a superset of the previous one. The 6 training set sizes (2 8, 2 9, 2 10, 2 11, 2 12, 2 13 frames per style) are not exact since short subsequences are discarded (e.g. at file boundaries), and the largest set contains all the training data except the test set 3, where data are not evenly distributed over styles. The test set comprises 4 sequences from each style, each of length 64, and is the same for all experiments. A length-64 seed sequence immediately preceding each test sequence was used for inference for all models. The models are trained as described above, except for the single task (STL) models. The STL models use an identical architecture to the pooled 1-layer GRU models, except they are trained only on the data for their style. Since there is less data for these models, we train them for a Table 6: Mocap Experiment 1 (MTL): predictive MSE for length-64 predictions where training sets are a given fraction of the original dataset. maximum of 5 000 iterations. We do not train 2-layer GRUs, since the amount of data is small for most experiments. The full are given in Table 6. We use fractions of the dataset instead of absolute training set sizes to aid understanding. The performance of the MTDS appears to increase with larger k, and suggests that we need k > 3 to achieve optimal performance on unseen training data. The demonstrate substantial benefit of the MTDS over a pooled RNN model in terms of sample efficiency, but not in asymptotic performance, as might be expected. According to a paired t-test, the improvements of the k = 7 MTDS over the (1-layer, open loop) pooled GRU are significant for training set sizes 3%, 7%, 13% and 27%. 4 At a style level, the k = 7 MTDS performs at least as well as the pooled GRUs for the first four training set sizes. See Figure 10. Note that the'angry' and'childlike' styles appear to be harder than the others, most likely due to their relatively high speed. For example animations of the MTL experiments, see the linked video in section A.4.1. Table 7 provides the aggregate of experiment 2 for each of the mocap models. A visualization is given in Figure 11. The 2-layer competitors are shown here for completeness, but they achieve similar performance to the 1-layer models on aggregate. Figure 12 provides a breakdown of these on a per-style basis. Styles 5-8 appear to be easier from the point of view of the benchmarks, but the MTDS shows equal or better performance on all styles except style 5. The competitor achieve better short-term performance than the MTDS. However, note that the zero-velocity baseline performs similarly to the open-loop GRUs for the first 5 predictions. This suggests that the MTDS may be improved for these early predictions simply by interpolating from the zero-velocity baseline for small values of t. We are unable to conclude from these experiments that the benchmark models can represent the style better initially, but simply that they can smooth the transition from the seed sequence better. The classifier is learned on the original observations to distinguish between the 8 styles. We use a 512-unit GRU to encode an observation sequence (usually of 64 frames), and transform the final state via a 300-unit hidden layer MLP with sigmoid activations into multinomial emissions. The model is trained via cross-entropy with 20% of the training data held out as a validation set; training was stopped as the validation error approached 0. 
We perform a standardization of the gait frequency across all styles, since some styles can be identified purely by calculating the frequency. The mean frequency across all styles (1 cycle per 33 frames) is applied to all sequences via linear interpolation. In this we make use of the instantaneous phase given in the inputs. We use a k = 8 latent code for the MTDS as the model is trained on all styles. for each source style s 1, with examples j = 1,..., 4. We next seek the'archetypal' latent code z associated with each target style s 2. For each s 2, we optimize z over 20 candidate values, obtained from the posterior mean of the style s 2 in the training set. Data are generated from all {U (s1) j } and the z which provides the greatest success in style transfer is chosen. The 32 highly varied input sequences guard against overfitting -the'archetypal' codes for each style must perform well across much of the variety of the original dataset. We provide a scalar measurement of the'success' of style transfer for each pair (s 1, s 2) by using the ing'probability' that the classifier assigns the target style s 2, averaged across the four input sequences for the source s 1. The of these experiments are shown in Figure 13a for the model with multi-task bias, and Figure 13b shows the for the full MTDS. Table 3c in the main text gives the marginal of these wrt. the target style. The cells in Figure 13 give the classifier probability for the target style for each (source, target) combination, averaged over the four source inputs. Successful style transfer should in a the classifier assigning a high score in every cell of the table. For most (source, target) pairs, the full MTDS model substantially outperforms the MTBias model: it appears that MTDS can control the prediction well in the majority of cases, and the MTBias model offers reduced control in general. However, we observe for both models that it is more difficult when styles are associated with extremes of the input distribution. Specifically, both the'childlike' and'angry' styles have unusually high speed inputs, and the'old' style has unusually low speeds. Note that in order to provide style transfer, the models are mostly ignoring these correlations, even though they are very useful for prediction. Further improvements may be available, perhaps by using an adversarial loss, or applying domain knowledge to the model. This is orthogonal to our contribution, and we leave this to future work. Providing style transfer from all varieties of source style is a challenging task. For instance, some styles include sources with widely varying speeds and actions, which may be mismatched to the target style. To understand what may be more typical use of the model, we provide an easier variant of this experiment where only one example of each source style is provided, rather than four. Note nevertheless that the same z (s2) is still used across all sources s 1. The of this secondary experiment are provided in Figure 14. In this case, style transfer is successful for almost all (source, target) pairs in the case of the MTDS, except for the angry style. The MTBias model still has many notable failures. 1. In-sample predictions https://vimeo.com/362069486. The goal is to showcase the best possible performance of the models by predicting from inputs in the training set. 2. MTL examples https://vimeo.com/362122944. Examples from Experiment 1. 
We compare the quality of animations and fit to the ground truth for two limited training set sizes (6.7% and 13.3% of the full data). For both models, MSE to the ground truth is given, averaged over the entire predictive window (length 256). This is different to the experimental setup which uses only the first 64 frames. 3. Novel test examples https://vimeo.com/362068342. Examples from Experiment 2. We show the adaptions obtained by each model to novel sequences, in particular showcasing examples of the pooled GRU models inferring suboptimal styles wrt. MSE. Again, MSE to the ground truth is given averaged over the predictive window (length 256). 4. Style morphing https://vimeo.com/361910646. This animation demonstrates the effect of changing the latent code over time. This also demonstrates style transfer and style interpolation from experiment 3. For style morphing, we found it useful to fix the dynamical bias of the second layer (parameter b in eq. 7) wrt. z since it otherwise ed in'jumps' while interpolating between sequences. We speculate that shifting the bias induces bifurcations in the state space, whereas adapting the transition matrix allows for smooth interpolation. | Tailoring predictions from sequence models (such as LDSs and RNNs) via an explicit latent code. | 1,096 | scitldr |
By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models. However, most existing adversarial training approaches are based on a specific type of adversarial attack. It may not provide sufficiently representative samples from the adversarial domain, leading to a weak generalization ability on adversarial examples from other attacks. Moreover, during adversarial training, adversarial perturbations on inputs are usually crafted by fast single-step adversaries so as to scale to large datasets. This work is mainly focused on adversarial training with the fast yet efficient FGSM adversary. In this scenario, it is difficult to train a model with good generalization due to the lack of representative adversarial samples, i.e., the samples are unable to accurately reflect the adversarial domain. To alleviate this problem, we propose a novel Adversarial Training with Domain Adaptation (ATDA) method. Our intuition is to regard adversarial training on the FGSM adversary as a domain adaptation task with a limited number of target domain samples. The main idea is to learn a representation that is semantically meaningful and domain invariant on the clean domain as well as the adversarial domain. Empirical evaluations on Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100 demonstrate that ATDA can greatly improve the generalization of adversarial training and the smoothness of the learned models, and outperforms state-of-the-art methods on standard benchmark datasets. To show the transfer ability of our method, we also extend ATDA to adversarial training on iterative attacks such as PGD-Adversarial Training (PAT), and the defense performance is improved considerably.

Deep learning techniques have shown impressive performance on image classification and many other computer vision tasks. However, recent works have revealed that deep learning models are often vulnerable to adversarial examples BID14, which are maliciously designed to deceive the target model by generating carefully crafted adversarial perturbations on original clean inputs. Moreover, adversarial examples can transfer across models to mislead other models with a high probability BID9. How to effectively defend against adversarial attacks is crucial for security-critical computer vision systems, such as autonomous driving. As a promising approach, adversarial training defends against adversarial perturbations by training a target classifier with adversarial examples. Researchers have found BID7 BID11 that adversarial training could increase the robustness of neural networks. However, adversarial training often obtains adversarial examples by taking a specific attack technique (e.g., FGSM) into consideration, so the defense is targeted at that specific attack and the trained model exhibits weak generalization ability on adversarial examples from other adversaries BID7. BID22 showed that the robustness of adversarial training can be easily circumvented by attacks that combine random perturbations with transfer from other models. Accordingly, for most existing adversarial training methods, there is a risk of overfitting to adversarial examples crafted on the original model with the specific attack. In this paper, we propose a novel adversarial training method that is able to improve the generalization of adversarial training.
From the perspective of domain adaptation (DA) BID20, there is a big domain gap between the distribution of clean examples and the distribution of adversarial examples in the high-level representation space, even though adversarial perturbations are imperceptible to humans. It has been shown that adversarial perturbations are progressively amplified along the layer hierarchy of neural networks, which maximizes the distance between the original and adversarial subspace representations. In addition, adversarial training simply injects adversarial examples from a specific attack into the training set, but there is still a large sample space for adversarial examples. Accordingly, training with the classification loss on such a training set will probably lead to overfitting on the adversarial examples from the specific attack. Even though BID24 showed that adversarial training with iterative noisy attacks has stronger robustness than adversarial training with single-step attacks, iterative attacks have a large computational cost and there is no theoretical analysis to justify that the adversarial examples sampled in such a way could be sufficiently representative for the adversarial domain.

Our contributions are focused on how to improve the generalization of adversarial training on simple yet scalable attacks, such as FGSM (Goodfellow et al.). The key idea of our approach is to formulate the learning procedure as a domain adaptation problem with a limited number of target domain samples, where the target domain denotes the adversarial domain. Specifically, we introduce unsupervised as well as supervised domain adaptation into adversarial training to minimize the gap and increase the similarity between the distributions of clean examples and adversarial examples. In this way, the learned models generalize well on adversarial examples from different ℓ∞-bounded attacks. We evaluate our ATDA method on standard benchmark datasets. Empirical results show that despite a small decay of accuracy on clean data, ATDA significantly improves the generalization ability of adversarial training and has the transfer ability to extend to adversarial training on PGD BID11.

In this section, we introduce some notations and provide a brief overview of current advanced attack methods, as well as the defense methods based on adversarial training. Denote the clean data domain and the adversarial data domain by D and A respectively. We consider a classifier based on a neural network f(x): ℝ^d → ℝ^k that outputs the probability distribution for an input x ∈ ℝ^d, where k denotes the number of classes in the classification task. Let ϕ be the mapping at the logits layer (the last neural layer before the final softmax function), so that f(x) = softmax(ϕ(x)). Let ε be the magnitude of the perturbation. Let x^adv be the adversarial image computed by perturbing the original image x. The cost function of image classification is denoted as J(x, y). We define the logits as the logits layer representation, and define the logit space as the semantic space of the logits layer representation. We divide attacks into two types: white-box attacks have complete knowledge of the target model and can fully access the model; black-box attacks have limited knowledge of the target classifier (e.g., its architecture) but cannot access the model weights. We consider four attack methods to generate adversarial examples. For all attacks, the components of adversarial examples are clipped into [0, 1].

Fast Gradient Sign Method (FGSM). Goodfellow et al.
introduced FGSM to generate adversarial examples by applying perturbations in the direction of the gradient:

x^adv = x + ε · sign(∇_x J(x, y_true)).

As compared with other attack methods, FGSM is a simple, yet fast and efficient adversary. Accordingly, FGSM is particularly amenable to adversarial training.

Projected Gradient Descent (PGD). The Projected Gradient Descent (PGD) adversary was introduced by BID11 without random start, which is a stronger iterative variant of FGSM. This method applies FGSM iteratively for k times with a budget α instead of a single step:

x^adv_0 = x,    x^adv_{t+1} = clip(x^adv_t + α · sign(∇_x J(x^adv_t, y_true)), x − ε, x + ε).

Here the clip(·, a, b) function forces its input to reside in the range of [a, b]. PGD usually yields a higher success rate than FGSM does in the white-box setting but shows weaker capability in the black-box setting.

RAND+FGSM (R+FGSM). BID22 proposed R+FGSM against adversarially trained models by applying a small random perturbation of step size α before applying FGSM:

x' = x + α · sign(N(0^d, I^d)),    x^adv = x' + (ε − α) · sign(∇_{x'} J(x', y_true)).

Momentum Iterative Method (MIM). MIM is a modification of the iterative FGSM and it won first place in the NIPS 2017 Adversarial Attacks Competition. Its basic idea is to utilize the gradients of the previous t steps with a decay factor µ to update the gradient at step t + 1 before applying FGSM with a budget α:

g_{t+1} = µ · g_t + ∇_x J(x^adv_t, y_true) / ‖∇_x J(x^adv_t, y_true)‖_1,    x^adv_{t+1} = clip(x^adv_t + α · sign(g_{t+1}), x − ε, x + ε).
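To make the single-step and iterative adversaries above concrete, the following is a minimal PyTorch-style sketch of FGSM and PGD. The function names, the assumption that the model returns logits, and the hyper-parameter names are our own illustrative choices rather than code from the paper.

```python
# Minimal sketch of the FGSM and PGD adversaries described above.
# Assumptions (ours): `model` maps an image batch to logits, inputs live in [0, 1],
# and eps / alpha / steps are the attack hyper-parameters.
import torch
import torch.nn.functional as F


def fgsm(model, x, y, eps):
    """FGSM: a single signed-gradient step of size eps, clipped back to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()


def pgd(model, x, y, eps, alpha, steps):
    """PGD (no random start): repeat the signed-gradient step with budget alpha and
    project back onto the l_inf ball of radius eps around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

R+FGSM and MIM follow the same pattern, respectively adding a small random step before the gradient step or accumulating a momentum term over the gradients before taking the signed step.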
Furthermore, generating a representative set of adversarial examples for large-scale datasets is computationally expensive. In this work, instead of focusing on a better sampling strategy to obtain representative adversarial data from the adversarial domain, we are especially concerned with the problem of how to train with clean data and adversarial examples from the efficient FGSM, so that the adversarially trained model is strong in generalization for different adversaries and has a low computational cost during the training. We propose an Adversarial Training with Domain Adaptation (ATDA) method to defense adversarial attacks and expect the learned models generalize well for various adversarial examples. Our motivation is to treat the adversarial training on FGSM as a domain adaptation task with limited number of target domain samples, where the target domain denotes adversarial domain. We combine standard adversarial training with the domain adaptor, which minimizes the domain gap between clean examples and adversarial examples. In this way, our adversarially trained model is effective on adversarial examples crafted by FGSM but also shows great generalization on other adversaries. It's known that there is a huge shift in the distributions of clean data and adversarial data in the high-level representation space. Assume that in the logit space, data from either the clean domain or the adversarial domain follow a multivariate normal distribution, i.e., DISPLAYFORM0 Our goal is to learn the logits representation that minimizes the shift by aligning the covariance matrices and the mean vectors of the clean distribution and the adversarial distribution. To implement the CORrelation ALignment (CORAL), we define a covariance distance between the clean data and the adversarial data as follows. DISPLAYFORM1 where C ϕ(D) and C ϕ(A) are the covariance matrices of the clean data and the adversarial data in the logit space respectively, and · 1 denotes the L 1 norm of a matrix. Note that L CORAL (D, A) is slightly different from the CORAL loss proposed by BID17.Similarly, we use the standard distribution distance metric, Maximum Mean Discrepancy (MMD) BID1, to minimize the distance of the mean vectors of the clean data and the adversarial data. DISPLAYFORM2 The loss function for Unsupervised Domain Adaptation (UDA) can be calculated as follows. DISPLAYFORM3 Even though the unsupervised domain adaptation achieves perfect confusion alignment, there is no guarantee that samples of the same label from clean domain and adversarial domain would map nearby in the logit space. To effectively utilize the labeled data in the adversarial domain, we introduce a supervised domain adaptation (SDA) by proposing a new loss function, denoted as margin loss, to minimize the intra-class variations and maximize the inter-class variations on samples of different domains. The SDA loss is shown in Eq. FORMULA10. DISPLAYFORM0 Here sof tplus denotes a function ln(1 + exp(·)); c ytrue ∈ R k denotes the center of y true class in the logit space; C = {c j | j = 1, 2, ..., k} is a set consisting of the logits center for each class, which will be updated as the logits changed. Similar to the center loss BID23, we update center c j for each class j: DISPLAYFORM1 where 1 condition = 1 if the condition is true, otherwise 1 condition = 0; α denotes the learning rate of the centers. 
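Since the CORAL and MMD equations above were likewise lost in extraction, the sketch below illustrates one way to compute the covariance-alignment and mean-alignment terms on the logits of a clean batch and its adversarial counterpart; the L1 norms follow the description above, while the 1/k and 1/k^2 normalizations are assumptions, and the supervised center-based margin loss is omitted.

```python
import torch

def coral_distance(logits_clean, logits_adv):
    """L1 distance between the covariance matrices of clean and adversarial logits."""
    def covariance(z):
        z = z - z.mean(dim=0, keepdim=True)
        return z.t() @ z / (z.size(0) - 1)
    k = logits_clean.size(1)  # number of classes
    return (covariance(logits_clean) - covariance(logits_adv)).abs().sum() / (k * k)

def mmd_distance(logits_clean, logits_adv):
    """L1 distance between the mean logit vectors of the clean and adversarial domains."""
    k = logits_clean.size(1)
    return (logits_clean.mean(dim=0) - logits_adv.mean(dim=0)).abs().sum() / k

def uda_loss(logits_clean, logits_adv):
    """Unsupervised domain adaptation term: covariance alignment plus mean alignment."""
    return coral_distance(logits_clean, logits_adv) + mmd_distance(logits_clean, logits_adv)
```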
During the training process, the logits center for each class can integrate the logits representation from both the clean domain and the adversarial domain. For adversarial training, iterative attacks are fairly expensive to compute and single-step attacks are fast to compute. Accordingly, we use a variant of FGSM attack BID7 ) that avoids the label leaking effect to generate a new adversarial example x adv i for each clean example x i. DISPLAYFORM0 where y target denotes the predicted class arg max{ϕ(x i)} of the model. However, in this case, the sampled adversarial examples are aggressive but not sufficiently representative due to the fact that the sampled adversarial examples always lie at the boundary of the ∞ ball of radius (see FIG1) and the adversarial examples within the boundary are ignored. For adversarial training, if we train a deep neural network only on the clean data and the adversarial data from the FGSM attack, the adversarially trained model will overfit on these two kinds of data and exhibits weak generalization ability on the adversarial examples sampled from other attacks. From a different perspective, such problem can be viewed as a domain adaptation problem with limited number of labeled target domain samples, as only some special data point can be sampled in the adversarial domain by FGSM adversary. Consequently, it is natural to combine the adversarial training with domain adaptation to improve the generalization ability on adversarial data. We generate new adversarial examples by the variant of FGSM attack shown in Eq. FORMULA1, then we use the following loss function to meet the criteria of domain adaptation while training a strong classifier. Compute the loss by Eq. and update parameters of network f by back propagation; 10: until the training converges. DISPLAYFORM1 In this section, we evaluate our ATDA method on various benchmark datasets to demonstrate the robustness and contrast its performance against other competing methods under different white-box and black-box attacks with bounded ∞ norm. Code for these experiments is available at https: //github.com/JHL-HUST/ATDA. Datasets. We consider four popular datasets, namely Fashion-MNIST BID26, SVHN BID12, CIFAR-10 and CIFAR-100 BID5 Baselines. To evaluate the generalization power on adversarial examples in both the white-box and black-box settings, we report the clean test accuracy, the defense accuracy on FGSM, PGD, R+FGSM and MIM in the non-targeted way. The common settings for these attacks are shown in Table 5 of the Appendix. We compare our ATDA method with normal training as well as several state-of-the-art adversarial training methods:• Normal Training (NT). Training with cross-entropy loss on the clean training data.• Standard Adversarial Training (SAT) (Goodfellow et al.). Training with the cross-entropy on the clean training data and the adversarial examples from the FGSM variant with perturbation to avoid label leaking.• Ensemble Adversarial Training (EAT) BID22. Training with cross-entropy on the clean training data and the adversarial examples crafted from the currently trained model and the static pre-trained models by the FGSM variant with the perturbation to avoid label leaking.• Provably Robust Training (PRT) BID24. Training with cross-entropy loss on the worst case in the ∞ ball of radius around each clean training data point. It could be seen as training with a complicated method of sampling in the ∞ ball of radius. Evaluation Setup. 
For each benchmark dataset, we train a normal model and various adversarial models with perturbation on a main model with ConvNet architecture, and evaluate them on various attacks bounded by. Moreover, for Ensemble Adversarial Training (EAT), we use two different models as the static pre-trained models. For black-box attacks, we test trained models on the adversarial examples transferred from a model held out during the training. All experiments are implemented on a single Titan X GPU. For all experiments, we set the hyper-parameter λ in Eq. to 1/3 and the hyper-parameter α in Eq. to 0.1. For more details about neural network architectures and training hyper-parameters, see Appendix A. We tune the networks to make sure they work, not to post concentrates on optimizing these settings. We evaluate the defense performance of our ATDA method from the perspective of classification accuracy on various datasets, and compare with the baselines. Evaluation on Fashion-MNIST. The accuracy on Fashion-MNIST are reported in TAB1. NT yields the best performance on the clean data, but generalizes poorly on adversarial examples. SAT and EAT overfit on the clean data and the adversarial data from FGSM. PRT achieves lower error against various adversaries, but higher error on the clean data. ATDA achieves stronger robustness against different ∞ bounded adversaries as compared to SAT (adversarial training on FGSM).Evaluation on SVHN. The classification accuracy on SVHN are summarized in TAB1. PRT seems to degrade the performance on the clean testing data and exhibits weak robustness on various attacks. As compared to SAT, ATDA achieves stronger generalization ability on adversarial examples from various attacks and higher accuracy on the white-box adversaries, at the same time it only loses a negligible performance on clean data. Evaluation on CIFAR-10. Compared with Fashion-MNIST and SVHN, CIFAR-10 is a more difficult dataset for classification. As PRT is challenging and expensive to scale to large neural networks due to its complexity, the of PRT are not reported. The accuracy on CIFAR-10 are summarized in TAB1. ATDA outperforms all the competing methods on most adversaries, despite a slightly lower performance on clean data. Evaluation on CIFAR-100. The CIFAR-100 dataset contains 100 image classes, with 600 images per class. Our goal here is not to achieve state-of-the-art performance on CIFAR-100, but to compare the generalization ability of different training methods on a comparatively large dataset. The on CIFAR-100 are summarized in TAB1. Compared to SAT, ATDA achieves better generalization on various adversarial examples and it does not degrade the performance on clean data. In , the accuracy provide empirical evidence that ATDA has great generalization ability on different adversaries as compared to SAT and outperforms other competing methods. To further investigate the defence performance of the proposed method, we compute two other metrics: the local loss sensitivity to perturbations and the shift of adversarial data distribution with respect to the clean data distribution. Local Loss Sensitivity. One method to quantify smoothness and generalization to perturbations for models is the local loss sensitivity BID0. It is calculated in the clean testing data as follows. The lower the value is, the smoother the loss function is. DISPLAYFORM0 The of the local loss sensitivity for the aforementioned learned models are summarized in TAB2. 
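The sensitivity formula itself appears only as a DISPLAYFORM placeholder; a common definition is the average L2 norm of the input gradient of the loss over the clean test set, which the sketch below assumes.

```python
import torch
import torch.nn.functional as F

def local_loss_sensitivity(model, test_loader, device="cuda"):
    """Average L2 norm of the per-sample input gradient of the classification loss
    over the clean test set (lower means a smoother loss surface)."""
    total, count = 0.0, 0
    for x, y in test_loader:
        x = x.to(device).requires_grad_(True)
        y = y.to(device)
        loss = F.cross_entropy(model(x), y, reduction="sum")  # sum so each sample keeps its own gradient scale
        grad = torch.autograd.grad(loss, x)[0]
        total += grad.flatten(1).norm(dim=1).sum().item()
        count += x.size(0)
    return total / count
```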
The suggest that adversarial training methods do increase the smoothness of the model as compared with the normal training and ATDA performs the best. Distribution Discrepancy. To quantify the dissimilarity of the distributions between the clean data and the adversarial data, we compare our learned logits embeddings with the logits embeddings of the competing methods on Fashion-MNIST. We use t-SNE BID10 for the comparison on the training data, testing data and adversarial testing data from the white-box FGSM or PGD. The comparisons are illustrated in FIG3 and we report the detailed MMD distances across domains in TAB3. Compared with NT, SAT and EAT actually increase the MMD distance across domains of the clean data and the adversarial data. In contrast, PRT and ATDA can learn domain invariance between the clean domain and the adversarial domain. Furthermore, our learned logits representation achieves the best performance on domain invariance. FIG4. For each model, we report the average accuracy rates over all white-box attacks and all black-box attacks, respectively. The illustrate that, by aligning the covariance matrix and mean vector of the clean and adversarial examples, UDA plays a key role in improving the generalization of SAT on various attacks. In general, the aware of margin loss on SDA can also improve the defense quality on standard adversarial training, but the effectiveness is not very stable over all datasets. By combining UDA and SDA together with SAT, our final algorithm ATDA can exhibits stable improvements on the standard adversarial training. In general, the performance of ATDA is slightly better than SAT+UDA. We report the average accuracy rates over all white-box attacks and all black-box attacks, respectively. ATDA can simply be extended to adversarial training on other adversaries. We now consider to extend the ATDA method to PGD-Adversarial Training (PAT) BID11: adversarial training on the noisy PGD with perturbation. By combining adversarial training on the noisy PGD with domain adaptation, we implement an extension of ATDA for PAT, called PATDA. For the noisy PGD, we set the iterated step k as 10 and the budget α as /4 according to BID11.As shown in TAB4, we evaluate the defense performance of PAT and PATDA on various datasets. On Fashion-MNIST, we observe that PATDA fails to increase robustness to most adversaries as compared to PAT. On SVHN, PAT and PATDA fail to converge properly. The are not surprising, as training with the hard and sufficient adversarial examples (from the noisy PGD) requires the neural networks with more parameters. On CIFAR-10 and CIFAR-100, PATDA achieves stronger robustness to various attacks than PAT. In general, PATDA exhibits stronger robustness to various adversaries as compared to PAT. The indicate that domain adaptation can be applied flexibly to adversarial training on other adversaries to improve the defense performance. In this study, we regard the adversarial training as a domain adaptation task with limited number of target labeled data. By combining adversarial training on FGSM adversary with unsupervised and supervised domain adaptation, the generalization ability on adversarial examples from various attacks and the smoothness on the learned models can be highly improved for robust defense. In addition, ATDA can easily be extended to adversarial training on iterative attacks (e.g., PGD) to improve the defense performance. 
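As a reference for the PAT/PATDA setting just described, the sketch below generates the noisy PGD adversary with a random start, using the stated k = 10 iterations and step budget eps/4; the uniform initialization inside the eps-ball is an assumption.

```python
import torch
import torch.nn.functional as F

def noisy_pgd(model, x, y, eps, k=10):
    """PGD with a random start inside the eps-ball, then k iterative steps of size eps/4,
    projected back to the eps-ball around the clean input and clipped to [0, 1]."""
    alpha = eps / 4
    x = x.clone().detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(k):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```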
The experimental results on several benchmark datasets suggest that the proposed ATDA and its extension PATDA achieve significantly better generalization as compared with current competing adversarial training methods. This work is supported by the National Natural Science Foundation. In the appendix, we show all details of the common settings, neural network architectures and training hyper-parameters for the experiments. A.1 HYPER-PARAMETERS FOR ADVERSARIES. For each dataset, the details about the hyper-parameters of the various adversaries are shown in Table 5, where ε denotes the magnitude of the adversarial perturbations. Fashion-MNIST. In the training phase, we use the Adam optimizer with a learning rate of 0.001 and set the batch size to 64. For Fashion-MNIST, the neural network architectures for the main model, the static pre-trained models and the model held out during training are depicted in TAB6. For all adversarial training methods, the magnitude of perturbations is 0.1 in the ∞ norm. SVHN. In the training phase, we use the Adam optimizer with a learning rate of 0.001, set the batch size to 32 and use the same architectures as in Fashion-MNIST. For all adversarial training methods, the magnitude of perturbations is 0.02 in the ∞ norm. CIFAR-10. In the training phase, we use the same training settings as in SVHN: the Adam optimizer with a learning rate of 0.001 and a batch size of 32. In order to enhance the expressive power of the deep neural networks, we use the Exponential Linear Unit (ELU) BID2 as the activation function and introduce Group Normalization BID25 into the architectures. The neural network architectures for CIFAR-10 are shown in Table 7. For all adversarial training methods, the magnitude of perturbations is 4/255 in the ∞ norm. CIFAR-100. We use the same training settings as in CIFAR-10. For CIFAR-100, the neural network architectures for the main model, the static pre-trained models and the model held out during training are shown in Table 8. For all adversarial training methods, the magnitude of perturbations is 4/255 in the ∞ norm. | We propose a novel adversarial training with domain adaptation method that significantly improves the generalization ability on adversarial examples from different attacks. | 1,097 | scitldr
Character-level language modeling is an essential but challenging task in Natural Language Processing. Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance. However, their models require substantial computational resources, which hinders the usability of character-level language models in applications with limited resources. In this paper, we propose a lightweight model, called Group-Transformer, that reduces the resource requirements for a Transformer, a promising method for modeling sequence with long-term dependencies. Specifically, the proposed method partitions linear operations to reduce the number of parameters and computational cost. As a , Group-Transformer only uses 18.2\% of parameters compared to the best performing LSTM-based model, while providing better performance on two benchmark tasks, enwik8 and text8. When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance. The implementation code will be available. Character-level language modeling has become a core task in the field of natural language processing (NLP) such as classification , sequence tagging (a), question answering , and recognition , with its simplicity on generating text and its adaptability to other languages. Along with the development of deep learning in NLP, using recurrent neural networks (RNNs) have been a standard way to solve the problem for many years. Recently, however, a new architecture, Transformer , have shown promise in addressing this problem and have achieved breakthroughs in general language modeling . Though this technique has achieved incredible successes, it has led to the huge size of Transformerbased models due to building deeper and wider networks. Transformer-XL and GPT-2, for instance, contain 277M and 1542M parameters, respectively. This trend toward a large size model for performance is not suitable for edge device applications, which require small memory sizes, such as optical character reader (OCR) and speech to text (STT), and for auto-correction and auto-completion applications that need fast real-time responsiveness. To tackle this issue, choosing an appropriately efficient strategy becomes more crucial, especially in the real-world application which requires not only good performance but a lightweight model. In this paper, we introduce a lightweight transformer for character-level language modeling. Our method is one of the factorization methods in that it separates the standard linear layer in transformer architecture using group-wise linear operation and makes sparse connectivity between linear transformations. The proposed model is referred to as Group-Transformer since it is inspired by the group convolution approaches that have effectively compressed huge image processing models for usability on mobile devices. While the group strategy reduces parameters and calculations in the proposed modules, its mutually exclusive calculation for the multiple groups compromises performance, caused by the information loss of inter-group correlations. To compensate for this problem, we added two inter-group operations that share a common feature over groups for the group attention layer and linking features in different groups for the group feed-forward layer. By modeling the inter-group information flows, Group-Transformer becomes performant as well as lightweight. 
We conducted extensive experiments on two benchmark datasets, enwik8 and text8, and found that Group-Transformer with 6M parameters outperformed all LSTM-based models with under 35M parameters. Furthermore, Group-Transformer shows better performance when compared against Transformers with a comparable number of parameters. We provide further analysis to identify the contributions of our proposed modules in detail. To the best of our knowledge, Group-Transformer is the first attempt to build a lightweight Transformer with the group strategy. Since Transformer has become a promising model for diverse NLP tasks, there have been attempts to improve its efficiency with two majority approaches. The first is to restrict dependencies between input tokens to reduce superfluous pair-wise calculations b; a). The approach provides time efficiency during inference, but it does not address the heavy parameterization of Transformer. The second approach is to develop a lightweight network architecture while maintaining the properties of Transformer. For example, utilize quaternion algebra to build lightweight modules for Transformer. They also use the factorize the components of the embedding layer, but the expression power can be limited by the connection of factorized components based on the quaternion principle. Another such approach (b) combined the multi-head attention and point-wise feed-forward layer to devise a unified module with fewer parameters. Despite these attempts on architectural changes, their models still struggle to provide a lightweight language model with nearly still 30M parameters. In this work, we describe a lightweight transformer with less than 10M parameters, which is extremely small when compared against previous character-level language models. A group strategy has attracted much attention recently to compress many large and deep state-of-theart convolutional neural networks (CNNs) (; ;). For example, when the group strategy is applied to a standard linear layer with a weight W ∈ R I×O, the feature map is partitioned into G groups. As a , the layer is replaced by G small linear layers where each holds a weight W ∈ R (I/G)×(O/G), leading to a significant parameter reduction. Although intuitively appealing, it has been reported that applying the group strategy to the model often leads to huge performance degradation, since the features in different groups cannot interact with each other. To overcome this problem, ShuffleNet proposed channel shuffle operation to make interactions between different groups. This kind of consideration has also been applied to recurrent neural networks (RNNs). proposed group-wise RNN as a special form of ensembled RNNs. But, they did not consider the interactions between different groups. Inspired by the combined the shuffling idea into the group-wise RNN and achieved promising . In this work, we adopt the group strategy and build the group-wise operations suitable for Transformer architecture. 3 GROUP-TRANSFORMER Figure 1a shows the overall architecture of Group-Transformer. It consists of a group embedding (bottom grey box), which embeds a character into grouped features, a group attention (yellow box), which contains attention modules to identify dependencies in the time domain, and a group feedforward layer (green box), which re-configures the grouped features. 
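To make the group strategy discussed above concrete before walking through the architecture, here is a minimal sketch of a grouped linear layer; the class name and interface are illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class GroupLinear(nn.Module):
    """G independent linear maps: the I input features are split into G chunks of size I/G,
    each mapped to O/G outputs, reducing the parameter count from I*O to I*O/G."""
    def __init__(self, in_features, out_features, groups):
        super().__init__()
        self.groups = groups
        self.layers = nn.ModuleList(
            nn.Linear(in_features // groups, out_features // groups) for _ in range(groups)
        )

    def forward(self, x):                      # x: (..., in_features)
        chunks = x.chunk(self.groups, dim=-1)  # one chunk per group
        return torch.cat([layer(c) for layer, c in zip(self.layers, chunks)], dim=-1)
```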
As can be seen, when an input character is given, Group-Transformer converts the input into multiple group representation (blue dots and red dots), processes and merges them to predict the next character. Figure 1b and 1c show group-wise information flow (blue and red arrows), and inter-group information flow (grey arrow) in the sub-modules. Without the inter-group information flows, the grouped features are processed independently. We observed that inter-group modeling ensures that the groups become aware of the others and prevents different groups hold the same information. The following subsections describe architectural details of the sub-modules and their relations. Group embedding layer identifies a set of embeddings to represent a token. The idea of representing a sentence, word or even character using a set of vectors can widely be found in many NLP models that embed input tokens by concatenating (or summing) its embedding and its sub-units' embeddings (; ;). Similarly, we assume a single character c to be represented with G vector representations of groups, that is, When a character is given, the group embedding layer retrieves a corresponding set of vectors and passes it to the following group attention layer. Through this paper, we describe a process at a single time step. The attention mechanism identifies dependencies between features in the time domain and combines the information of them. It contains three steps; identifying queries, keys, and values, retrieving relative features at different times, and transforming the attended feature into the input domain . The main focus of this paper is to apply a group strategy to the feature space of Transformer. Thus, we let the second step be identical to those of the original Transformer and focused on the first and the third steps. Figure 1b explains the architecture of group attention. The multi-head attention module represents the second step, the under operations identify the queries for the first step, and the upper operations transform the attention output for the third step. We note that we do not represent the key and value for the multi-head attention block in the figure because they are possible to come from another source domain. The group attention processes the grouped features with intra-group operations (white boxes) and inter-group operations (grey boxes). Let x = [x 1, ..., x G] be a set of input vectors where x g ∈ R Dgroup for the group g. Since the multi-head attention contains H attention modules for a single group, group attention first calculates query q gh for a group g and its head h as the below, where W q-intra gh ∈ R Dgroup×(Dgroup/H) and W q-inter gh ∈ R Dgroup×(Dgroup/H) are linear weights to describe an intra-group (white boxes) and an inter-group (grey box) combinations, respectively. In the formula, the first term on the right-hand side identifies a specific feature for the head h in the group g and the second term determines head-wise features that allow the grouped features to share a common expression retrieving other features in a different time. When comparing with the fully connected linear layer over the groups, the approach restricts the connection between the groups, so requires a fewer number of parameters and calculations. For the key k gh and the value v gh, we use fully connected layers by using all group pairs g and g; Dgroup×(Dgroup/H) are linear weights. 
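The query equation itself did not survive extraction, so the sketch below shows one plausible form of the intra-group plus shared inter-group query projection; treating the inter-group input as the mean over all groups is our assumption, since only the weight shapes are specified above.

```python
import torch.nn as nn

class GroupQuery(nn.Module):
    """Per-group query projection with an intra-group term and a shared inter-group term.
    All H heads of a group are produced at once as a d_group-sized vector (H chunks of d_group/H)."""
    def __init__(self, d_group, groups):
        super().__init__()
        self.groups = groups
        self.intra = nn.ModuleList(nn.Linear(d_group, d_group, bias=False) for _ in range(groups))
        self.inter = nn.ModuleList(nn.Linear(d_group, d_group, bias=False) for _ in range(groups))

    def forward(self, xs):               # xs: list of G tensors of shape (batch, time, d_group)
        shared = sum(xs) / self.groups   # assumption: the shared inter-group input is the group mean
        return [self.intra[g](xs[g]) + self.inter[g](shared) for g in range(self.groups)]
```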
As we mentioned, since the keys and the values can be defined from the other source domain, we use the same formula of the original Transformer, pursuing the universality of the proposed module. The identified headed elements are used for connecting features in the time domain. In this step, position encoding has an important role for the features to be aware of their position in an input sequence. In this paper, we apply the relative positional encoding, which describes a long-length character sequence effectively. By following , we define the attention score map with the relative positional information and the attention mechanism determines the attended feature a gh of the head h in the group g. The multiple heads [a g1, ..., a gH] in the group g are combined as the below; where W o-intra gh ∈ R (Dgroup/H)×Dgroup and W o-inter gh ∈ R (Dgroup/H)×Dgroup are linear weights for combining intra-group and inter-group information, respectively. As can be seen, the final output is determined with a specific feature from its own group and a shared feature from whole groups. This step utilizes the same mechanism used to identify the queries with the same objective spreading group-wise information from multi-headed attention to all groups. These intra-group and inter-group modelings mainly contribute to reducing the number of parameters and calculations. Finally, the inputs x g are added into the output o g asx g = x g + o g for a residual connection. Group feed-forward layer re-configures the outputs of the attention module,x g, by applying groupwise operation at each position. Figure 1c shows the architecture of the proposed module. As can be seen, the groups are shuffled (grey box) and support each other. The group-wise features are processed with two linear transformations and one non-linear activation. As the original module does, the linear layers in our module transpose the input feature into a high dimensional space with non-linear activation and transform the output back into the input space. The group feed-forward layer can be formally explained as follows. Given G input features [x 1, ...,x G], group feed-forward layer transposes the grouped features into a high dimensional space as follows;ȳ where Dgroup×D G are linear weights for mapping intra-group and inter-group information into theD G -dimensional space, relatively bigger than D group dimension. Here, we introduce a low-rank matrix approximation on the inter-group transformation matrix W f1-inter g g. Modeling interactions between the groups requires the G × G numbers of weights, as well as the multiple weights for the group g, transpose the group into the high dimensional space for the target group g. If designing a fully connected weight for all groups like the original Transformer, the feed-forward layer still holds heavyweights and expensive calculations. To reduce the overburden, we factorize the matrix W and. The newly introduced dimension M is smaller than D group, and thus the number of parameters and calculation is reduced proportionally with the ratio between M and D group. In this paper, we set M as D group /G to control the dimension relatively with the number of the groups. Interestingly, such matrix factorization can be modeled efficiently with a group-wise linear transformation and a shuffle trick as shown in Figure 1c. Please refer to the appendix for the detail of the shuffle trick. 
Finally, a group-wise linear transformation is applied upon the high-dimensional feature as follow; where W f2 g ∈ RD G ×Dgroup is a linear weight. For a residual connection, each grouped input feature is added into the output of the group feed-forward layer;ŷ g =x g + y g. Here, we describe the efficiency of Group-Transformer in view of the number of parameters and required computational costs. When considering the original transformer, its required numbers of parameters are 4 * O(D 2 model) for its attention module (query, key, value, and output linear) and 2 * O(D modelDmodel) for its feed-forward module, whereD model is a bottleneck dimension. Group-Transformer pursues to reduce the number of parameters by splitting the hidden state into multiple groups and processing them group-wisely. When we set the total dimension over groups as for group attention and 3 G * O(D modelDmodel) for group feed-forward module. The number of groups is increasing, the resources is decreasing. Appendix B provides the detailed required resources of all sub-modules and comparisons with those of the original Transformer. We demonstrate the efficiency of the proposed Group-Transformer with two popular benchmark datasets, enwik8 and text8. The enwik8 dataset contains 100M of English Wikipedia texts with 204 unique characters including alphabets, non-Latin and special characters. In comparison, the text8 dataset provides 100MB of pre-processed texts only with 27 unique characters by filtering superfluous content, such as tables, citations, and punctuation, and by replacing the non-Latin characters with spelled-out equivalents (i.e., "15" to "one five"). For a fair comparison with previous works, we used the training/dev/test splits defined by for both enwik8 and text8. Most of the experimental settings follow those of , where the difference lies in the hyperparameters that influence the size of the model. We set the number of layers L as 9, and we fixed the total size of feature D model for a single character as 256 and the total numbers of heads as 8 while the number of groups are explored in {2,4}. For the regularization of the model, we applied layer normalization independently over groups and dropout layers upon the outputs of the group attention and the group feed-forward layer with the probability p = 0.1. The length of the feed sequence was 512 with the cached 512-length for the previous sequence . We use the Adam optimizer with a learning rate of 2.5e-4, β 1 of 0.9, β 2 of 0.999, a batch size of 22, the number of iterations of 400,000, and the best model on the validation set is chosen. The implementation code will be available for the other details. We compare the Group-Transformer against existing character-level language models using under 50M parameters in Table 1. The prior models are grouped according to their methodologies, including "LSTM," and "Transformer". We observe that the Group-Transformer outperforms the LSTM models with under 30M parameters and that the 2 Group-Transformer attains the best performance against all prior LSTM models on the enwik8 dataset. When compared to Transformers, we observe The number of groups can be interpreted as a hyper-parameter affecting the model size. Figure 2 shows the effectiveness of three hyper-parameters such as the number of layers, the size of hidden dimension, and the number of groups. The default model used Transformer-XL with L = 9, H model = 8, D model = 256, andD model = 4 * D model, and then we reduced the three hyper-parameters. 
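Returning to the group feed-forward layer described above, the following is a module-level sketch of its two transformations, including the shuffle-based low-rank inter-group path; the class interface and the exact placement of the non-linearity are assumptions.

```python
import torch
import torch.nn as nn

class GroupFeedForward(nn.Module):
    """Group-wise expansion with an intra-group path and a shuffle-based low-rank inter-group path,
    followed by a group-wise projection back and a residual connection (sketch)."""
    def __init__(self, d_group, d_ff_group, groups):
        super().__init__()
        self.groups = groups
        m = d_group // groups  # rank M of the inter-group factorization, as in the paper
        self.intra = nn.ModuleList(nn.Linear(d_group, d_ff_group) for _ in range(groups))
        self.down = nn.ModuleList(nn.Linear(d_group, groups * m) for _ in range(groups))
        self.up = nn.ModuleList(nn.Linear(groups * m, d_ff_group) for _ in range(groups))
        self.out = nn.ModuleList(nn.Linear(d_ff_group, d_group) for _ in range(groups))

    def forward(self, xs):                      # xs: list of G tensors (batch, time, d_group)
        G = self.groups
        # each group emits G slices of size m; stack as (..., source_group, target_group, m)
        slices = torch.stack(
            [d(x).view(*x.shape[:-1], G, -1) for d, x in zip(self.down, xs)], dim=-3
        )
        shuffled = slices.transpose(-3, -2)     # shuffle: regroup slices by target group
        hidden = [
            torch.relu(self.intra[g](xs[g]) + self.up[g](shuffled[..., g, :, :].flatten(-2)))
            for g in range(G)
        ]
        return [xs[g] + self.out[g](hidden[g]) for g in range(G)]
```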
When making the model thinner or shallower, the performance of the model degrades, but the required resources also decrease. When comparing ours with these two reduction methods, the group strategy shows better performance than models requiring comparable resources. This experiment shows that the feature grouping method, the main idea of this paper, is more efficient at reducing the model size and the time complexity than tuning the other model parameters. Group-Transformer includes two modules utilizing group-wise operations and inter-group modeling. We conduct ablation studies to identify the contributions of the proposed modules and of the inter-group operations. Table 3: Ablation study on the proposed modules, group attention and group feed-forward layer. Table 3 shows the module-wise impact on the number of parameters and performance. For a fair comparison, we set the baseline model to a reduced Transformer-XL of less than 8M parameters, and gradually reduce the model size by selectively replacing the attention and the feed-forward layer with the corresponding Group-Transformer modules. When replacing the feed-forward layer with the Group-Transformer module, we observe that the number of parameters decreases more efficiently in all cases than when replacing the attention module. Interestingly, when replacing both modules, the performance degradation is lower than the sum of the individual performance losses, while the resource reduction is close to the sum of the individual reductions. This demonstrates the efficiency of using both group-wise modules concurrently. Figure 3: Similarity patterns between the multi-head attention maps (heads H1-H8) across groups (G1-G4) for our models and for the ablated models without inter-group operations. In addition to this, we also investigate the influence of the inter-group operations in our model. When the inter-group operations are removed (grey boxes in Figure 1b and 1c), we observe a performance degradation of 0.028 bpc on 2-Group-Transformer and of 0.051 bpc on 4-Group-Transformer. These gaps are relatively large when compared to the performance gap between Transformer-XL and Group-Transformer in Table 3. These results re-emphasize the importance of inter-group modeling in Group-Transformer. Figure 3 shows the similarity patterns between the multi-head attention of our models ((a) and (c)) and the ablation models without the inter-group operations ((b) and (d)). As can be seen, the multi-head attention maps from the models without inter-group operations show high similarities among different groups, while the proposed model shows the opposite. These similarity patterns imply that, without the proposed inter-group operations, the model cannot fully take advantage of multi-head attention, which is designed to attend to multiple positions of the content. Recently, remarkable progress has been made in character-level language modeling by Transformer. The advantage of Transformer lies in its effectiveness in modeling long-term dependencies between characters. However, such models have been developed with a huge number of parameters, and their inference requires an expensive computational cost.
We argue that big models cannot be used in a limited computational environment. Group-Transformer has been developed to prove the effectiveness of Transformer in a lightweight setting. We have grouped features and proposed group-wise operations to reduce the number of parameters and time complexity of Transformer. In addition, to fully realize the advantage of the original Transformer, we have connected the groups to interact with each other. When applying Group-Transformer on enwik8 and text8, we found that Group-Transformer only with 6M parameters achieves better performances than LSTM-based models holding over 30M parameters. Further analysis has proved the effectiveness of the group strategy to reduce computational resources. Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. In Proceedings of the 34th International Conference on Machine LearningVolume 70, pp. 4189-4198, 2017. Here, we describe the shuffle trick used for the inter-group interaction in the group feed-forward layer;ȳ where M ×Dgroup are linear weights used in the low rank matrix factorization. To explain the relationship from the shuffle trick, we describes the operations in the group feed-forward layer in a bottom-up way. When we applying group-wise linear operations on the input features [x 1, ...,x G], the outputs are formed as [By splitting each element into G groups and the shuffle operation perturbs the outputs as follows; . . . . . . . . . . . . . . . . . . where the Shuffle operation transposes the first and second dimensions of G × G × M matrix and W Dgroup×M is a linear weight describing information flow from the group g to the group g. Finally, a linear transformation with a weight W . . . where the Flatten operation vectorizes G×M matrix to G * M vector. Therefore, the outputs (ȳ In this section, we compare Group-Transformer from the original transformer in views of the numbers of parameters. For a common expression, we denote D model,D model, H as the feature size, the filter size in the feed-forward layer and the number of heads for the original transformer, and we set the feature size of a group as D group = D model /G, the filter sizeD group =D model /G, the number of heads in a group as H group = H/G for Group-Transformer. In this calculation, we set the filter size as four times bigger than D model . The multi-head attention of the original transformer uses 4D 2 model of parameters for the query, the key, the value, and the output. The feature size for the multiple head is usually set as D model /H where H is the number of the heads. Therefore, all transformations in the module is conducted for a D model -dimensional input feature to identify a D model -dimensional feature. model . When the number of groups is 2, the number of parameters of group attention is the same with those of the original transformer. However, when the number of groups increases to 4 or 8, the number of the parameters decreases to 75% or 62.5% of the original module. A point-wise feed-forward layer of the original transformer requires 8D To see the effectiveness of our models on real-world applications, we have performed two generative tasks; word completion and sentence completion. The former is to generate a word and the latter is to conclude a sentence when a given character sequence is in-complete. Table 4 shows the generated top 20 words to conclude the in-complete character sequence, "pr". 
Although our 6M and 4M Group-Transformers showed relatively lower scores (bpc) on the quantitative evaluations, as can be seen, the model still produces all real or plausible English words without a significant quality gap from the Transformer-XL with 41M parameters. Seed: mary was not permitted to see them or to speak in her ···(abbreviate) ···proof of guilt if authentic the inquiry reached the that nothing was proven from the start this could have been pr ··· (Truth) predicted Transformer-XL (41M, 1.17 bpc on text8) proven, proved, proof, presented, proposed, probably, prevented, preceded, predicted, presumed, praised, preserved, problematic, preferred, present, previously, precisely, printed, produced, profound 2 Group-Transformer (6M, 1.26 bpc on text8) proven, proof, proved, proposed, present, previously, presented, preserved, printed, probably, practically, produced, prepared, prohibited, predicted, progressively, profound, primarily, problematic, practical 4 Group-Transformer (4M, 1.30 bpc on text8) proven, present, proposed, preserved, presented, previously, proved, practiced, produced, prepared, printed, probably, practically, provided, properly, presumed, praised, presently, prevented, primarily The proposed method is focused on developing character-level language models, but the model can be applied to other NLP tasks. When it comes to the word-level language modeling, compressing the word embedding layer becomes the most important part for designing a lightweight language model. Therefore, we set a embedding dimension as 500 and adjusted the number of layers and the hidden dimension for the models to have the same number of parameters (4.5M). Specifically, we set the bottleneck dimension as 4 times larger than the hidden dimension and follows other experimental settings of. Table 7: Performance comparison between the numbers of groups under the similar number of parameters. We denote "L" and "D" as the number of layers and the hidden dimension, respectively. We investigated the effects of all grouping methods in group attention. Table 8 shows that the benefit on the parameter size is marginal compared to the performance drop when the number of grouping operations is increased. On the other hand, there is a relatively small performance gap between grouping targets under the same number of grouping operations. | This paper proposes a novel lightweight Transformer for character-level language modeling, utilizing group-wise operations. | 1,098 | scitldr |
Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain. Recently domain-adversarial training (DAT) has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier. However, DAT is still vulnerable in several aspects including training instability due to the overwhelming discriminative ability of the domain classifier in adversarial training, restrictive feature-level alignment, and lack of interpretability or systematic explanation of the learned feature space. In this paper, we propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN). The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures. Furthermore, ARN demonstrates strong robustness to a wide range of hyper-parameters settings, greatly alleviating the task of model selection. Extensive empirical validate that our approach outperforms other state-of-the-art domain alignment methods. Additionally, the reconstructed target samples are visualized to interpret the domain-invariant feature space which conforms with our intuition. Deep neural networks have gained great success on a wide range of tasks such as visual recognition and machine translation . They usually require a large number of labeled data that can be prohibitively expensive to collect, and even with sufficient supervision their performance can still be poor when being generalized to a new environment. The problem of discrepancy between the training and testing data distribution is commonly referred to as domain shift . To alleviate the effect of such shift, domain adaptation sets out to obtain a model trained in a label-rich source domain to generalize well in an unlabeled target domain. Domain adaptation has benefited various applications in many practical scenarios, including but not limited to object detection under challenging conditions , cost-effective learning using only synthetic data to generalize to real-world imagery , etc. Prevailing methods for unsupervised domain adaptation (UDA) are mostly based on domain alignment which aims to learn domain-invariant features by reducing the distribution discrepancy between the source and target domain using some pre-defined metrics such as maximum mean discrepancy . proposed to achieve domain alignment by domainadversarial training (DAT) that reverses the gradients of a domain classifier to maximize domain confusion. Having yielded remarkable performance gain, DAT was employed in many subsequent UDA methods . Even so, there still exist three critical issues of DAT that hinder its performance: as the domain classifier has high-capacity to discriminate two domains, the unbalanced adversarial training cannot continuously provide effective gradients, which is usually overcome by manually adjusting the weights of adversarial training according to specific tasks; DAT-based methods cannot deal with pixel-level domain shift ; the domain-invariant features learned by DAT are only based on intuition but difficult to interpret, which impedes the investigation of the underlying mechanism of adversarial domain adaptation. 
To overcome the aforementioned difficulties, we propose an innovative DAT approach, namely Max-margin Domain-Adversarial Training (MDAT), to realize stable and comprehensive domain alignment. To demonstrate its effectiveness, we develop an Adversarial Reconstruction Network (ARN) that only utilizes MDAT for UDA. Specifically, ARN consists of a shared feature extractor, a label predictor, and a reconstruction network (i.e. decoder) that serves as a domain classifier. Supervised learning is conducted on source domain, and MDAT helps learn domain-invariant features. In MDAT, the decoder only focuses on reconstructing samples on source domain and pushing the target domain away from a margin, while the feature extractor aims to fool the decoder by learning to reconstruct samples on target domain. In this way, three critical issues can be solved by MDAT: the max-margin loss reduces the discriminative capacity of domain classifier, leading to balanced and thus stable adversarial training; without involving new network structures, MDAT achieves both pixel-level and feature-level domain alignment; visualizing the reconstructed samples reveals how the source and target domains are aligned. We evaluate ARN with MDAT on five visual and non-visual UDA benchmarks. It achieves significant improvement to DAT on all tasks with pixel-level or higher-level domain shift. We also observe that it is insensitive to the choices of hyperparameters and as such is favorable for replication in practice. In principle, our approach is generic and can be used to enhance any UDA methods that leverage domain alignment as an ingredient. Domain adaptation aims to transfer knowledge from one domain to another. provide an upper bound of the test error on the target domain in terms of the source error and the H H-distance. As the source error is stationary for a fixed model, the goal of most UDA methods is to minimize the H H-distance by reducing some metrics such as Maximum Mean Discrepancy (MMD) and CORAL . Inspired by Generative Adversarial Networks (GAN) , proposed to learn domain-invariant features by adversarial training, which has inspired many UDA methods thereafter. Adversarial Discriminative Domain Adaptation (ADDA) tried to fool the label classifier by adversarial training but not in an end-to-end manner. CyCADA and PixelDA leveraged GAN to conduct both feature-level and pixel-level domain adaptation, which yields significant improvement yet the network complexity is high. Another line of approaches that are relevant to our method is the reconstruction network (i.e. the decoder network). The success of image-to-image translation corroborates that it helps learn pixellevel features in an unsupervised manner. employed a decoder network for pixel-level adaptation, and Domain Separate Network (DSN) further leveraged multiple reconstruction networks to learn domain-specific features. These approaches treat the decoder network as an independent component that is irrelevant to domain alignment . In this paper, our approach proposes to utilize the decoder network as domain classifier in MDAT which enables both feature-level and pixel-level domain alignment in a stable and straightforward fashion. In unsupervised domain adaptation, we assume that the model works with a labeled dataset X S and an unlabeled dataset X T. Let X S = {(x s i, y s i)} i∈ [Ns] denote the labeled dataset of N s samples from the source domain, and the certain label y s i belongs to the label space Y that is a finite set (Y = 1, 2, ..., K). 
The other dataset X T = {x t i} i∈ [Nt] has N t samples from the target domain but has no labels. We further assume that the two domains have different distributions, i.e., the source and target samples are drawn from different marginal distributions. Figure 1: The proposed architecture is composed of a shared feature extractor G e for the two domains, a label predictor G y and a reconstruction network G r. In addition to the basic supervised learning in the source domain, our adversarial reconstruction training enables the extractor G e to learn domain-invariant features. Specifically, the network G r aims to reconstruct the source samples x s and to impede the reconstruction of the target samples x t, while the extractor G e tries to fool the reconstruction network in order to reconstruct the target samples x t. In DAT, a binary domain classifier D is trained to determine whether the input sample belongs to the source or the target domain, while the feature extractor learns to deceive the domain classifier, which is formulated as a minimax game: min F max D E x∼D S [log D(F (x))] + E x∼D T [log(1 − D(F (x)))]. In DAT, we usually utilize a CNN as the feature extractor and fully connected layers (FC) as the domain classifier. DAT reduces the cross-domain discrepancy, achieving significant performance improvement for UDA. Nevertheless, the training of DAT is rather unstable. Without sophisticated tuning of the hyper-parameters, DAT cannot reach convergence. Through empirical experiments, we observe that such instability is due to the imbalanced minimax game. The binary domain classifier D can easily achieve convergence with very high accuracy at an early training epoch, while it is much harder for the feature extractor F to fool the domain classifier and to simultaneously perform well on the source domain. In this sense, the domain classifier dominates DAT, and the only remedy is to palliate the training of D by tuning the hyper-parameters according to the specific task. In our method, we restrict the capacity of the domain classifier so as to form a minimax game in a harmonious manner. Inspired by the max-margin loss in the Support Vector Machine (SVM), i.e., the hinge loss, if we push the source domain and the target domain away from a margin rather than as far as possible, then the task of F to fool D becomes easier. For a binary domain classifier, we define the margin loss as L(y, t) = [m − t · y] +, where y is the predicted domain label, [·] + := max(0, ·), m is a positive margin and t is the ground-truth label for the two domains (t = −1 for the source domain and t = 1 for the target domain). Then we introduce our MDAT scheme based on an innovative network architecture. Besides the training instability issue, DAT also suffers from restrictive feature-level alignment, i.e., the lack of pixel-level alignment. To realize stable and comprehensive domain alignment, we first propose an Adversarial Reconstruction Network (ARN) and then elaborate MDAT. As depicted in Figure 1, our model consists of three parts, including a shared feature extractor G e for both domains, a label predictor G y and a reconstruction network G r. Let the feature extractor G e (x; θ e) be a function parameterized by θ e which maps an input sample x to a deep embedding z. Let the label predictor G y (z; θ y) be a task-specific function parameterized by θ y which maps an embedding z to a task-specific prediction ŷ. The reconstruction network G r (z; θ r) is a decoding function parameterized by θ r that maps an embedding z to its corresponding reconstruction x̂. The first learning objective for the feature extractor G e and label predictor G y is to perform well in the source domain; a module-level sketch of this architecture is given below.
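A module-level sketch of the three-part ARN architecture follows; the concrete convolutional layer sizes (for 32×32 RGB inputs) are illustrative and not the exact networks used in the paper.

```python
import torch.nn as nn

class ARN(nn.Module):
    """Skeleton of the Adversarial Reconstruction Network: a shared feature extractor G_e,
    a label predictor G_y, and a decoder G_r that doubles as the domain classifier.
    Layer sizes below are illustrative (assuming 3x32x32 inputs)."""
    def __init__(self, num_classes):
        super().__init__()
        self.G_e = nn.Sequential(                       # feature extractor
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.G_y = nn.Sequential(nn.Flatten(), nn.Linear(128 * 8 * 8, num_classes))
        self.G_r = nn.Sequential(                       # decoder / domain classifier
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 5, padding=2), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 5, padding=2),
        )

    def forward(self, x):
        z = self.G_e(x)
        return self.G_y(z), self.G_r(z), z
```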
For a supervised K-way classification problem, it is simply achieved by minimizing the negative log-likelihood of the ground truth class for each sample: where y s i is the one-hot encoding of the class label y s i and the logarithm operation is conducted on the softmax predictions of the model. The second objective is to render the feature learning to be domain-invariant. This is motivated by the covariate shift assumption that indicates if the feature distributions S(z) = {G e (x; θ e)|x ∼ D S } and T (z) = {G e (x; θ e)|x ∼ D T } are similar, the source label predictor G y can achieve a similar high accuracy in the target domain. To this end, we design a decoder network G r that serves as a domain classifier, and then MDAT could be applied for stable training. Different from the normal binary domain classifier, MDAT lets the decoder network G r only reconstruct the features in the source domain and push the features in the target domain away from a margin m. In this way, the decoder has the functionality of distinguishing the source domain from the target domain. The objective of training G r is formulated as where m is a positive margin and L r (·) is the mean squared error (MSE) term for the reconstruction loss that is defined as where || · || 2 2 denotes the squared L 2 -norm. Oppositely, to form a minimax game, the feature extractor G e learns to deceive G r such that the learned target features are indistinguishable to the source ones, which is formulated by: Then the whole learning procedure of ARN with MDAT can be formulated by: where L y denotes the negative log-likelihood of the ground truth class for labeled sample (x s i, y s i) and α controls the interaction of the loss terms. In the following section, we provide theoretical justifications on how MDAT reduces the distribution discrepancy, and discuss why it is superior to the classic DAT. In this section, we provide the theoretical justifications on how the proposed method reduces the distribution discrepancy for UDA. The rationale behind domain alignment is motivated from the learning theory of non-conservative domain adaptation problem by Ben-David et al. : Theorem 3.1 Let H be the hypothesis space where h ∈ H. Let (D S, s) and (D T, t) be the two domains and their corresponding generalization error functions. The expected error for the target domain is upper bounded by where Theoretically, when we minimize the H H-distance, the upper bound of the expected error for the target domain is reduced accordingly. As derived in DAT , assuming a family of domain classifiers H d to be rich enough to contain the symmetric difference hypothesis set of H p, such that H p H p = {h|h = h 1 ⊕ h 2, h 1, h 2 ∈ H p} where ⊕ is XOR-function, the empirical H p H p -distance has an upper bound with regard to the optimal domain classifier h: whereD S andD T denote the distributions of the source and target feature space Z S and Z T, respectively. Note that the MSE of G r plus a ceiling function is a form of domain classifier h(z), i.e. [m − L r (·)] + − 0.5 for m = 1. It maps source samples to 0 and target samples to 1 which is exactly the upper bound in Eq.10. Therefore, our reconstruction network G r maximizes the domain discrepancy with a margin and the feature extractor learns to minimize it oppositely. 
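The sketch below puts the two adversarial objectives together as one alternating update, reusing the ARN skeleton above and the hyper-parameters α = 0.02 and m = 5 reported in Section 4; whether the paper alternates two optimizers in this way, and the per-sample application of the margin, are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def per_sample_mse(rec, x):
    """Mean squared reconstruction error per sample."""
    return ((rec - x) ** 2).flatten(1).mean(dim=1)

def mdat_step(model, opt_main, opt_dec, x_s, y_s, x_t, alpha=0.02, m=5.0):
    """One alternating MDAT update (sketch): opt_dec is assumed to hold only G_r's parameters,
    opt_main the parameters of G_e and G_y."""
    # 1) decoder G_r: reconstruct source samples, push target reconstructions beyond the margin
    _, rec_s, _ = model(x_s)
    _, rec_t, _ = model(x_t)
    dec_loss = per_sample_mse(rec_s, x_s).mean() \
        + torch.clamp(m - per_sample_mse(rec_t, x_t), min=0).mean()
    opt_dec.zero_grad(); dec_loss.backward(); opt_dec.step()

    # 2) encoder G_e + classifier G_y: source classification plus target reconstruction (fooling G_r)
    logits_s, _, _ = model(x_s)
    _, rec_t, _ = model(x_t)
    enc_loss = F.cross_entropy(logits_s, y_s) + alpha * per_sample_mse(rec_t, x_t).mean()
    opt_main.zero_grad(); enc_loss.backward(); opt_main.step()
    return dec_loss.item(), enc_loss.item()
```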
Compared with the conventional DAT-based methods that are usually based on a binary logistic network , the proposed ARN with MDAT is more attractive and incorporates new merits conceptually and theoretically: Stable training and insensitivity to hyper-parameters. Using the decoder as domain classifier with a margin loss to restrain its overwhelming capacity in adversarial training, the minimax game can continuously provide effective gradients for training the feature extractor. Moreover, through the experiments in Section 4, we discover that our method shows strong robustness to the hyperparameters, i.e. α and m, greatly alleviating the parameters tuning for model selection. Richer information for comprehensive domain alignment. Rather than DAT that uses a bit of domain information, MDAT utilizes the reconstruction network as the domain classifier that could capture more domain-specific and pixel-level features during the unsupervised reconstruction . Therefore, MDAT further helps address pixel-level domain shift apart from the feature-level shift, leading to comprehensive domain alignment in a straightforward manner. Feature visualization for method validation. Another key merit of MDAT is that MDAT allows us to visualize the features directly by the reconstruction network. It is crucial to understand to what extent the features are aligned since this helps to reveal the underlying mechanism of adversarial domain adaptation. We will detail the interpretability of these adapted features in Section 4.3. In this section, we evaluate the proposed ARN with MDAT on a number of visual and non-visual UDA tasks with varying degrees of domain shift. We conduct ablation study to corroborate the effectiveness of MDAT and unsupervised reconstruction for UDA. Then the sensitivity of the hyperparameters is investigated, and the adapted features are interpreted via the reconstruction network in ARN. Setup. We evaluate our method on four classic visual UDA datasets and a WiFi-based Gesture Recognition (WGR) dataset . The classic datasets have middle level of domain shift including MNIST , USPS , Street View House Numbers (SVHN) and Synthetic Digits (SYN). For a fair comparison, we follow the same CNN architecture as DANN while using the inverse of G e as G r with pooling operation replaced by upsampling. For the penalty term α, we choose 0.02 by searching over the grid {10 −2, 1}. We also obtain the optimal margin m = 5 by a search over {10 −1, 10}. Then we use the same hyperparameter settings for all tasks to show the robustness. For the optimization, we simply use Adam Optimizer (lr = 2 × 10 −4, β 1 = 0.5, β 2 = 0.999) and train all experiments for 50 epochs with batch size 128. We implemented our model and conducted all the experiments using the PyTorch framework. More implementation details are illustrated in the appendix. Baselines. We evaluate the efficacy of our approach by comparing it with existing UDA methods that perform three ways of domain alignment. Specifically, MMD regularization and Correlation Alignment employ the statistical distribution matching. DRCN and DSN Table 1: We compare with general, statistics-based (S), reconstruction-based (R) and adversarialbased (A) state-of-the-art approaches. We repeated each experiment for 3 times and report the average and standard deviation (std) of the test accuracy in the target domain. while many prevailing UDA methods adopt domain-adversarial training including DANN , ADDA , MECA , CyCADA and CADA . 
For all transfer tasks, we follow the same protocol as DANN, which uses the official training split in both domains for training and evaluates on the test split of the target domain.

MNIST↔USPS. Both datasets are composed of grey-scale handwritten images with diverse stroke weights, leading to low-level domain shift. Since USPS has only 7291 training images, USPS→MNIST is more difficult. As shown in Table 1, our method achieves state-of-the-art accuracy of 98.6% on MNIST→USPS and 98.4% on USPS→MNIST, which demonstrates that ARN can tackle low-level domain shift by only using ART (rather than many adversarial UDA methods that adopt other loss terms to adjust classifier boundaries or conduct style transfer).

SVHN→MNIST and SYN→SVHN. The SVHN dataset contains RGB digit images that introduce significant variations such as scale, background, embossing, rotation, slanting and even multiple digits. The SYN data consists of 50k RGB images of varying color, background, blur and orientation. These two tasks involve tremendous pixel-level domain shift. The proposed method achieves a state-of-the-art performance of 97.4% for SVHN→MNIST, far ahead of other DAT-based methods, significantly improving the classic DANN by 22.7%. Similarly, for SYN→SVHN, ARN with MDAT also achieves a noticeable improvement of 5.3% over the source-only model, even outperforming the supervised SVHN accuracy of 91.3%.

WiFi Gesture Recognition with Distant Domains. To evaluate the proposed method on a non-visual UDA task, we applied our method to the WiFi gesture recognition dataset. The WiFi data of six gestures was collected in two rooms, regarded as two domains. The results in Table 2 demonstrate that our approach significantly improves classification accuracy over Source-Only and DANN by 32.9% and 23.1%, respectively.

The contribution of MDAT and image reconstruction in ARN. We design an ablation study to verify the contributions of MDAT and unsupervised reconstruction in ARN. To this end, we discard the term L_r(x^t) from the feature extractor's objective and evaluate the resulting method, denoted as ARN w.o. MDAT in Table 1. Comparing ARN w.o. MDAT with the source-only model, we can infer the effect of unsupervised reconstruction for UDA. It is observed that ARN w.o. MDAT improves tasks with low-level domain shift such as MNIST↔USPS, which conforms with our discussion that unsupervised reconstruction is instrumental in learning low-level features. Comparing ARN w.o. MDAT with the original ARN, we can infer the contribution of MDAT. Table 1 shows that MDAT achieves an impressive margin of improvement. For USPS→MNIST and SVHN→MNIST, MDAT improves ARN w.o. MDAT by around 30%. This demonstrates that MDAT, which helps learn domain-invariant representations, is the main reason for the tremendous improvement.

Table 3: The accuracy (%) with different hyperparameters on SVHN→MNIST.

Parameter sensitivity. We investigate the effect of α and m on SVHN→MNIST. The results in Table 3 show that ARN achieves good performance for α ∈ [0.01, 0.1], and even with larger α ARN is able to converge. In comparison, with α denoting the weight of the adversarial loss, DANN cannot converge when α > 0.2. Regarding the sensitivity of m, the accuracy of ARN exceeds 96.0% for m ≥ 1. These analyses validate that the training of ARN is not sensitive to the parameters and that even in the worst cases ARN can achieve convergence.
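The parameter-sensitivity study summarized in Table 3 could be organized as a simple grid sweep like the sketch below. This is purely illustrative: build_models, evaluate, the loader names and the grid values are hypothetical helpers and choices, not the authors' experiment script.

```python
# Hypothetical sweep over alpha and m mirroring the sensitivity study (Table 3).
# build_models(), evaluate() and the data loaders are assumed helpers.
results = {}
for alpha in [0.01, 0.02, 0.05, 0.1, 0.2]:
    for margin in [0.1, 1.0, 5.0, 10.0]:
        Ge, Gy, Gr = build_models()  # fresh extractor, classifier and inverse decoder
        train_arn(Ge, Gy, Gr, svhn_train_loader, mnist_train_loader,
                  epochs=50, margin=margin, alpha=alpha)
        acc = evaluate(Ge, Gy, mnist_test_loader)
        results[(alpha, margin)] = acc
        print(f"alpha={alpha}, m={margin}: target accuracy {acc:.3f}")
```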
Gradients and training procedure. We plot the training curves with regard to loss and target accuracy in Figure 2(b) and Figure 2(a), respectively. In Figure 2(b), ARN has smoother and more effective gradients (L_r) for all α, while the loss of the DAT domain classifier (L_d) becomes extremely small at the very beginning. This observation conforms with our intuition and demonstrates that, by restricting the capacity of the domain classifier, MDAT provides more effective gradients for training the feature extractor, leading to a more stable training procedure. This is further validated in Figure 2(a), where the ARN accuracy is more stable than that of DAT across training epochs.

Table 4: Visualizing the source images, target images and reconstructed target images (R-Target Images) for four digit adaptation tasks.

Interpreting MDAT features via reconstructed images. One of the key advantages of ARN is that by visualizing the reconstructed target images we can infer how the features become domain-invariant. We reconstruct the MDAT features of the test data and visualize them in Table 4. It is observed that the target features are reconstructed to source-like images by the decoder G_r. As discussed before, MDAT intuitively forces the target features to mimic the source features, which conforms with our visualization. Similar to image-to-image translation, this indicates that our method conducts an implicit feature-to-feature translation that transfers the target features to source-like features, and hence the features become domain-invariant.

We analyze the performance of domain alignment for DANN (DAT) and ARN (MDAT) by plotting t-SNE embeddings of the features z on the task SVHN→MNIST. In Figure 3(a), the source-only model obtains diverse embeddings for each category but the domains are not aligned. In Figure 3(b), DANN aligns the two domains but the decision boundaries of the classifier are vague. In Figure 3(c), the proposed ARN effectively aligns the two domains for all categories and the classifier boundaries are much clearer.

We proposed a new domain alignment approach, namely max-margin domain-adversarial training (MDAT), and an MDAT-based network for unsupervised domain adaptation. The proposed method offers effective and stable gradients for feature learning via an adversarial game between the feature extractor and the reconstruction network. The theoretical analysis provides justifications on how it minimizes the distribution discrepancy. Extensive experiments demonstrate the effectiveness of our method, and we further interpret the learned features by visualization, which conforms with our insight. Potential evaluation on semi-supervised learning constitutes our future work.

Hyperparameters. For all tasks, we simply use the same hyperparameters chosen from the sensitivity analysis: α = 0.02 and m = 5.0. We reckon that better results can be obtained by tuning the hyperparameters for specific tasks.

Network architecture. For a fair comparison, we follow the network in DANN for digit adaptation and simply build the reconstruction network as the inverse of the extractor. We list the network architectures in Table 5. For WiFi gesture recognition, we adopt the same architecture as CADA, which is a modified version of LeNet-5.

We have presented all the results of the sensitivity study in Section 4.2, and now we show the detailed training procedures in Figure 4(a) and 4(b). It is observed that the accuracy increases when α drops or when the margin m increases. The reason is simple: when α is too large, it interferes with the supervised training on the source domain; when the margin m is small, the divergence between the source and target domains (i.e. the H∆H-distance) cannot be measured well.
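The t-SNE analysis of the adapted features discussed above (Figure 3) can be reproduced with standard tooling. The sketch below is illustrative: it assumes Ge is the trained feature extractor and that the loaders yield (image, label) pairs; it colors points by domain rather than by category and is not the authors' plotting code.

```python
import numpy as np
import torch
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

@torch.no_grad()
def plot_feature_tsne(Ge, source_loader, target_loader, device="cuda", max_batches=20):
    """2-D t-SNE of features z = Ge(x) from both domains (0 = source, 1 = target)."""
    Ge.eval()
    feats, domains = [], []
    for domain_id, loader in enumerate([source_loader, target_loader]):
        for i, (x, _) in enumerate(loader):
            if i >= max_batches:
                break
            z = Ge(x.to(device)).flatten(1).cpu().numpy()
            feats.append(z)
            domains.append(np.full(len(z), domain_id))
    feats, domains = np.concatenate(feats), np.concatenate(domains)

    emb = TSNE(n_components=2, init="pca").fit_transform(feats)
    plt.scatter(emb[:, 0], emb[:, 1], c=domains, s=2, cmap="coolwarm")
    plt.title("t-SNE of extracted features (blue = source, red = target)")
    plt.show()
```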
Here we provide more visualizations of the reconstructed images of target samples. In Figure 5, the target samples are shown in the left column and their corresponding reconstructed samples on the right. We can see that for low-level domain shift such as MNIST↔USPS, the reconstructed target samples are very source-like while preserving their original shapes and skeletons. However, for the larger domain shift in Figure 5(c) and 5(d), the samples are reconstructed to source-like images of the same digits while some noise is simultaneously removed. Specifically, in Figure 5(d), a target sample (SVHN) may contain more than one digit, which acts as noise for recognition; after reconstruction, only the correct digit is retained. Some target samples suffer from poor illumination conditions, yet their reconstructed digits are very clear, which is remarkable. | A stable domain-adversarial training approach for robust and comprehensive domain adaptation | 1,099 | scitldr